Abstract: The present disclosure relates to a voice-enabled modular projection system that may be utilised for imparting educational content to one or more users. A user may project content through voice-enabled and text-based search, wherein the user may give one or more voice commands to the projection system. The projection system is further enabled to take a plurality of commands from the user through voice, including but not limited to operational commands such as search and open applications, and action commands such as left, right, move, up, down and toggle, in order to traverse the contents displayed onto the projection screen. The modular construction of such a projection system enables easy replacement of its different components, thereby eliminating the need for replacing the complete system in case of failure of the system.
[1] The present disclosure relates to projection systems and more particularly, to
a system and an associated method thereof for a voice-enabled modular projection
system which allows a user to give one or more voice commands for displaying
search contents through the projection system for use in the domain of education and
shared learning.
BACKGROUND
[2] Currently, smart and digital classrooms are heavily relied upon in the
educational sector for making classroom study interactive and user-friendly.
Such smart classrooms involve advanced technologies that empower educational
systems to cope with the dynamic needs of an evolving educational scenario. In
particular, sophisticated learning tools with age-appropriate instructional designs,
effectively disseminating knowledge in a manner that is easier and far more
interesting for students to understand, are increasingly being deployed at various
educational institutions. To this end, smart classrooms have extensively utilised
projection systems as a mode of content delivery to students. However, existing
projection systems require operation through a remote control or another connected
system. Such remote-control operations may include one or more manual operations
such as use of switch controls, adjustment of projection colour and brightness,
adjustment of the projected picture, volume increase and/or decrease, and the like.
Further, dependence on a remote control often poses problems such as misplacement
of the remote control, non-functional remote keys, and roof-top placement of the
projecting apparatus, which makes the system difficult to operate through its fixed
buttons when the remote control is misplaced.
[3] To overcome this, some computing systems (e.g., mobile phones, tablet
computers, personal digital assistants, projection apparatuses, etc.) were introduced
that were voice-enabled. The voice-enabled computing systems were controlled by
means of audio data, such as a human voice. Such computing systems provide
functionality to detect speech, determine an action indicated by the detected speech,
and execute the indicated action. For example, a computing system may receive
audio input corresponding to a voice command, such as “search,” “navigate,” “play,”
“pause,” “call,” or the like. In such instances, the computing system may analyse the
audio input using speech-recognition techniques to determine a command and then
execute an action associated with the command (e.g., provide a search option,
execute a map application, begin playing a media file, stop playing a media file,
place a phone call, etc.). In this way, a voice-enabled computing system may provide
the users with the ability to operate some features of the computing system without
physical involvement.
[4] However, such voice-enabled projection computing systems had several
interconnected components with a huge number of wires running across the entire
computing system. Such voice-enabled computing systems required several speakers,
voice-recording apparatuses, processing capabilities, projection apparatuses, etc.
These systems were not easy to carry, thereby making it difficult for the users to use
them efficiently for providing learning in several classrooms. Moreover, all the
components of such systems being placed in the vicinity of each other caused heating
problems, thereby reducing the life of the entire projection system. Although the
operations of the system are controlled by voice commands, the content displayed to
the students is still monitored and/or controlled using a remote control, which in turn
creates the problems discussed previously for operation through a remote control.
Therefore, these computing systems only enable minimal operational functions
through voice commands and fail to enable the user to scroll and/or search the
contents displayed to the students through the voice commands.
[5] Therefore, to overcome the existing problems described above, there arises a
need for a voice-enabled modular projection system for delivering educational
content to the students which provides convenient voice-enabled navigation and
searching of the contents displayed onto the projected screen. Such a system should
have the inherent characteristics of being portable and free from heating issues, and
should allow easy replacement of any faulty component within the system without
the need for replacing the entire system, thereby significantly improving the
scalability and life of the projection system.
SUMMARY
[6] One or more shortcomings of the prior art are overcome, and additional
advantages are provided through the present disclosure. Additional features and
advantages are realized through the techniques of the present disclosure. Other
embodiments and aspects of the disclosure are described in detail herein and are
considered a part of the claimed disclosure.
[7] In one aspect of the disclosure, a system for projecting digital data is
disclosed. The system comprises a retrieval engine that may be configured to
search a predefined voice-initiated action repository based on at least one voice
command provided by a user, wherein the predefined voice-initiated action
repository may store a plurality of text commands associated with a plurality of
voice-initiated actions. Such at least one voice-initiated action associated with the
plurality of text commands may be executed by a processor. The system further
includes a data acquisition unit connected to the processor for recording the at least
one voice command provided by the user, wherein the processor is further configured
to convert the plurality of recorded voice commands into the plurality of text
commands.
[8] In another aspect of the disclosure, a method for projecting digital data is
disclosed. The method includes recording a plurality of voice commands provided by
a user through a data acquisition unit and determining, based on at least one text
command, a voice-initiated action indicated by the voice command through a
retrieval engine, wherein the retrieval engine may be configured to search a
predefined voice-initiated action repository based on at least one translated text
command; further wherein the voice-initiated action is a particular voice-initiated
action from a plurality of voice-initiated actions associated with a plurality of text
commands executed by a processor.
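By way of illustration only, and not as part of the claimed subject matter, the following sketch models the predefined voice-initiated action repository as a simple mapping from text commands to actions. The identifiers used here (ActionRepository, register, lookup) are hypothetical names chosen for this example.

```python
# Illustrative sketch only: a voice-initiated action repository modelled as a
# mapping from text commands to callable actions. All names here are hypothetical.
from typing import Callable, Dict, Optional


class ActionRepository:
    """Stores text commands associated with voice-initiated actions."""

    def __init__(self) -> None:
        self._actions: Dict[str, Callable[[], str]] = {}

    def register(self, text_command: str, action: Callable[[], str]) -> None:
        # Commands are normalised to lower case so the spoken form
        # "Photosynthesis" matches the stored key "photosynthesis".
        self._actions[text_command.lower()] = action

    def lookup(self, text_command: str) -> Optional[Callable[[], str]]:
        return self._actions.get(text_command.lower())


if __name__ == "__main__":
    repo = ActionRepository()
    repo.register("photosynthesis", lambda: "Display definition of photosynthesis")
    action = repo.lookup("Photosynthesis")
    if action is not None:
        print(action())
```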
[9] The foregoing summary is illustrative only and is not intended to be in any
way limiting. In addition to the illustrative aspects, embodiments, and features
described above, further aspects, embodiments, and features will become apparent by
reference to the drawings and the following detailed description.
BRIEF DESCRIPTION OF THE DRAWINGS
[10] The accompanying drawings, which are incorporated in and constitute a part
of this disclosure, illustrate exemplary embodiments and, together with the
description, serve to explain the disclosed embodiments. In the figures, the left-most
digit(s) of a reference number identifies the figure in which the reference number
first appears. The same numbers are used throughout the figures to reference like
features and components. Some embodiments of systems and/or methods in
accordance with embodiments of the present subject matter are now described, by
way of example only, and with reference to the accompanying figures, in which:
[11] Fig. 1 is an overall external view of the portable modular voice-enabled
projection system in accordance with some embodiments of the present disclosure;
[12] Fig. 2 is a block diagram illustrating an exemplary modular voice-enabled
projection system;
[13] Fig. 3 illustrates a block diagram describing a graphical user interface to
provide visual indication of a recognised voice-initiated action;
[14] Fig. 4 describes the various types of voice-initiated actions executed by the
system;
[15] Fig. 5 illustrates a flowchart for the voice-enabled content search method
and/or process thereof; and
[16] Fig. 6 is a simplified schematic diagram illustrating an exemplary block
diagram of a system for implementing embodiments consistent with the present
disclosure.
[17] It should be appreciated by those skilled in the art that any block diagrams
herein represent conceptual views of illustrative systems embodying the principles of
the present subject matter. Similarly, it will be appreciated that any flow charts, flow
diagrams, and the like represent various processes which may be substantially
represented in a computer-readable medium and so executed by a computer or
processor, whether or not such computer or processor is explicitly shown.
DETAILED DESCRIPTION
[18] In the following detailed description of the embodiments of the present
disclosure, numerous specific details are set forth in order to provide a thorough
understanding of the embodiments of the disclosure. However, it will be obvious to
one skilled in the art that the embodiments of the disclosure may be practiced
without these specific details. In other instances, well known methods, procedures,
components, and circuits have not been described in detail so as not to unnecessarily
obscure aspects of the embodiments of the disclosure.
[19] References in the present disclosure to “one embodiment” or “an
embodiment” mean that a particular feature, structure, characteristic, or function
described in connection with the embodiment is included in at least one embodiment
of the disclosure. The appearances of the phrase “in one embodiment” in various
places in the present disclosure are not necessarily all referring to the same
embodiment.
[20] In the present disclosure, the word "exemplary" is used herein to mean
"serving as an example, instance, or illustration." Any embodiment or
implementation of the present subject matter described herein as "exemplary" is not
necessarily to be construed as preferred or advantageous over other embodiments.
[21] While the disclosure is susceptible to various modifications and alternative
forms, specific embodiments thereof have been shown by way of example in the
drawings and will be described in detail below. It should be understood, however,
that it is not intended to limit the disclosure to the particular forms disclosed; on the
contrary, the disclosure is to cover all modifications, equivalents, and alternatives
falling within the scope of the disclosure.
[22] The terms “comprises”, “comprising”, or any other variations thereof, are
intended to cover a non-exclusive inclusion, such that a setup, system or method that
comprises a list of components or steps does not include only those components or
steps but may include other components or steps not expressly listed or inherent to
such setup or system or method. In other words, one or more elements in a system or
apparatus preceded by “comprises… a” does not, without more constraints, preclude
the existence of other elements or additional elements in the system or apparatus.
[23] In the following detailed description of the embodiments of the disclosure,
reference is made to the accompanying drawings that form a part hereof, and in
which are shown by way of illustration specific embodiments in which the disclosure
may be practiced. These embodiments are described in sufficient detail to enable
those skilled in the art to practice the disclosure, and it is to be understood that other
embodiments may be utilized and that changes may be made without departing from
the scope of the present disclosure. The following description is, therefore, not to be
taken in a limiting sense.
[24] Referring to Fig. 1, in the embodiment shown, an overall external view of the
portable voice-enabled projection system 100 according to the present disclosure is
described. The system 100 comprises a projector 103 for projecting digital data onto
an external surface; a processor 105 for receiving and processing the data input; and
an inbuilt microphone 107 provided to the user for recording and/or capturing data
such as audio, video and/or digital data, wherein said components are interconnected
to display content over an external surface. The term "interconnected" is used herein
to state that the system 100 has all the components internally connected and
configured such that the user has to press only one button to switch ON all the
functions of the system, thereby eliminating the complicated process of connecting
various components by the user and enabling extreme ease of use and operation.
However, such components are capable of being segregated from the system 100 as
standalone components. Herein, in case of failure of any of the components, such
component may be easily taken out and repaired separately, while the system 100
may continue to work as desired, thereby providing the unique functionality of being
a modular system. The processor 105 may include a central processing unit (“CPU”
or “processor”) and may comprise at least one data processor for executing program
components for executing user- or system-generated business processes. The
processor 105 may include specialised processing units such as integrated system
(bus) controllers, memory management control units, floating point units, graphics
processing units, digital signal processing units, etc.
[25] Such components may be grouped into one or more modules within the
housing or may be individually housed without such modular arrangement.
Additionally, the housing further includes one or more speakers 109, one or more
amplifier circuits 111, a keyboard and mouse with tray 112, a power controller and
jack 113, a VGA power cable 114, one or more audio cables and casings 115, and a
single on/off switch 116. The system 100 may be envisioned as a freestanding
assembly housing a digital data image screen as well as audio output speaker
assemblies. The system 100 may produce a projected visual image upon the display
screen. Located upon the front of the voice-enabled video projector 103 is a control
panel allowing manual input in a manner which will be described in greater detail
herein below. Further, a touch screen control mechanism may be provided for
interaction with the display screen. Finally, the voice-enabled video projector 103 is
powered by the power controller and jack 113, allowing the system 100 to be
powered by any readily available electrical power source.
[26] The processor 105 may be disposed in communication with one or more
input/output (I/O) systems via the user interface 201. The user interface 201 may
employ communication protocols and/or methods such as, without limitation, audio,
analog, digital, stereo, IEEE-1394, serial bus, Universal Serial Bus (USB), infrared,
PS/2, BNC, coaxial, component, composite, Digital Visual Interface (DVI),
High-Definition Multimedia Interface (HDMI), Radio Frequency (RF) antennas,
S-Video, Video Graphics Array (VGA), IEEE 802.11a/b/g/n/x, Bluetooth, cellular
(e.g., Code-Division Multiple Access (CDMA), High-Speed Packet Access (HSPA+),
Global System for Mobile Communications (GSM), Long-Term Evolution (LTE), or
the like), etc. Using the user interface 201, the system 100 may communicate with
one or more I/O systems.
[27] The system 100 may be made of a material such as metal, alloy or composite
material which is preferably used as a housing material for conventional electronic
systems. Preferably, the system 100 has a hollow and/or solid cuboid shape
comprising a front panel and a rear panel. Alternatively, the housing may be
configured to have any size, shape and ergonomics to be portable and enable
carrying by hand from one location to another.
[28] The system 100 shown in Fig. 1 may also include an external microphone
118. The external microphone 118 may be one of one or more input systems of
system 100. The system acts upon the voice whose frequency is the maximum in the
vicinity of the external microphone 118.
[29] Referring to Fig. 2, which is a block diagram illustrating the exemplary
modular voice-enabled projection system 100, the system may include the user
interface 201, the external microphone 118, a retrieval engine 210, a data acquisition
unit 215 and a voice-initiated action repository 220. The system 100 may further
include the user interface 201 for providing a graphical user interface that includes a
visual indication of a recognised voice-initiated action, in accordance with one or
more aspects of the present disclosure. The inbuilt microphone 107 or the external
microphone 118 may be used to capture and send the user's voice commands to the
system 100. The type of microphone is variable; it may be any wired or wireless
microphone on the market that will connect to a computer sound card and/or audio
system. Multiple microphones, including but not limited to cardioid, super-cardioid,
omnidirectional and figure-8 microphones, may also be used in the system 100. The
data acquisition unit 215 is connected to the external microphone 118 and the
processor 105 for recording the at least one voice command provided by the user,
wherein the processor 105 may be further configured to convert the plurality of
recorded voice commands into the plurality of text commands. The retrieval engine
210 is configured to search the predefined voice-initiated action repository 220 based
on at least one voice command provided by the user, wherein the predefined
voice-initiated action repository 220 may store a plurality of text commands
associated with a plurality of voice-initiated actions. The processor 105 further
executes the at least one voice-initiated action associated with the plurality of text
commands.
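Purely as an illustration of this data flow, and not as part of the claimed subject matter, the Python sketch below mirrors the chain of Fig. 2 under the assumption that the processor delegates speech-to-text conversion to some external recognition engine. All identifiers (DataAcquisitionUnit, RetrievalEngine, speech_to_text, handle_voice_command) are hypothetical names chosen only to echo the components 215, 210 and 105.

```python
# Illustrative sketch of the Fig. 2 data flow; names are hypothetical.
from typing import Callable, Dict, Optional


def speech_to_text(audio: bytes) -> str:
    """Placeholder for the processor's voice-to-text conversion."""
    # A real system would invoke a speech-recognition engine here; for this
    # sketch the audio payload is assumed to already carry the transcript.
    return audio.decode("utf-8")


class DataAcquisitionUnit:
    """Records the voice command captured by the microphone (107/118)."""

    def record(self, audio: bytes) -> bytes:
        return audio


class RetrievalEngine:
    """Searches the predefined voice-initiated action repository (220)."""

    def __init__(self, repository: Dict[str, Callable[[], str]]) -> None:
        self.repository = repository

    def search(self, text_command: str) -> Optional[Callable[[], str]]:
        return self.repository.get(text_command.lower())


def handle_voice_command(audio: bytes,
                         dau: DataAcquisitionUnit,
                         engine: RetrievalEngine) -> str:
    recorded = dau.record(audio)              # data acquisition unit 215
    text_command = speech_to_text(recorded)   # processor 105: voice -> text
    action = engine.search(text_command)      # retrieval engine 210
    if action is None:
        return "No matching voice-initiated action"
    return action()                           # processor 105 executes the action


if __name__ == "__main__":
    repo = {"photosynthesis": lambda: "Projecting: definition of photosynthesis"}
    print(handle_voice_command(b"Photosynthesis",
                               DataAcquisitionUnit(),
                               RetrievalEngine(repo)))
```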
[30] Referring to Fig. 3, which describes an exemplary embodiment, the user,
speaking into the microphone 118, gives a voice command through the user interface
201 to search for the definition of “Photosynthesis”. The retrieval engine 210 records
a plurality of voice commands provided by the user through the data acquisition unit
215. The data acquisition unit 215 may forward the recorded commands to the
processor 105, wherein the processor 105 may be configured to convert the recorded
voice command into the text command “Photosynthesis”. The text command may be
subsequently searched by the retrieval engine 210 in the predefined voice-initiated
action repository 220 that may store a plurality of text commands associated with a
plurality of voice-initiated actions. On fetching the voice-initiated action
corresponding to the converted text command “Photosynthesis”, the processor 105
may execute the at least one voice-initiated action “Display definition” to be
displayed on the user interface 201. Referring to Fig. 4, the voice commands may
also include a plurality of action commands 410 such as left 410a, right 410b, move
410c, up 410d, down 410e, and toggle 410f. The voice commands provided by the
user may include, but are not limited to, a plurality of operational commands 420
such as search 420a, open browser 420b, and open text editor 420c. Further, the user
interface 201 may be enabled to allow at least one input through one of a voice
command or a text command from the user. The data that may be displayed through
the user interface 201 may include video, audio and text forms. The predefined
voice-initiated action repository 220 is capable of saving previously input voice
commands and their corresponding voice-initiated actions so as to present a similar
output in case a previously input voice command is presented as an input in future by
other users. Herein, along with the voice-enabled search, the user has the option to
search using text commands which he/she could type using the keyboard.
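As an illustrative sketch only, the two command families of Fig. 4 may be modelled as a simple dispatch table: action commands 410 drive navigation of the displayed content, operational commands 420 trigger system operations, and any other phrase falls through to a repository search. The handler behaviour and dispatch-table structure below are assumptions made for illustration and are not mandated by the disclosure.

```python
# Illustrative dispatch of the two command families of Fig. 4; the handler
# strings are placeholders, not behaviour specified by the disclosure.
ACTION_COMMANDS = {
    "left":   lambda: "scroll content left",
    "right":  lambda: "scroll content right",
    "move":   lambda: "move selection",
    "up":     lambda: "scroll content up",
    "down":   lambda: "scroll content down",
    "toggle": lambda: "toggle between open views",
}

OPERATIONAL_COMMANDS = {
    "search":           lambda: "search the repository for the spoken phrase",
    "open browser":     lambda: "open the web browser",
    "open text editor": lambda: "open the text editor",
}


def dispatch(text_command: str) -> str:
    """Route a converted text command to its voice-initiated action."""
    command = text_command.strip().lower()
    if command in ACTION_COMMANDS:        # navigation commands (410)
        return ACTION_COMMANDS[command]()
    if command in OPERATIONAL_COMMANDS:   # operational commands (420)
        return OPERATIONAL_COMMANDS[command]()
    # Anything else is treated as a content query, e.g. "Photosynthesis".
    return f"look up '{command}' in the voice-initiated action repository"


if __name__ == "__main__":
    for spoken in ("left", "open browser", "Photosynthesis"):
        print(spoken, "->", dispatch(spoken))
```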
[31] Referring to Fig. 5, which describes the method of the voice-enabled content
search process 500, the user gives the voice command through the microphone 118 at
501, wherein the data acquisition unit 215 may capture the voice and may send it to
the retrieval engine 210. The input voice is then converted to a text command at 502.
Such converted text form is matched with a set of predefined voice-initiated actions
304 through searching the voice-initiated action repository 220 at 503. If a match is
found for the user’s voice command, which has now been translated into text form,
the matching content tagged to the text form is displayed onto the projected screen
through the user interface 201 at 504 and 505. It is also possible that a matching
phrase or verbal word corresponding to the text form is played through the speaker
109 along with the displayed content on the screen. The voice-initiated action
repository 220 is a predefined custom database of stored commands corresponding to
text forms that are accepted as verbal commands from the user. The system provides
a means, in the form of a ‘start search’ voice command and a ‘stop search’ voice
command in the user interface 201, for the user to start and stop the voice recognition
action of the system. The projection system 100 may be further enabled to take a
plurality of commands from the user through voice, including but not limited to
operational commands 420 such as search 420a, open browser 420b and open text
editor 420c, and action commands 410 such as left 410a, right 410b, move 410c, up
410d, down 410e and toggle 410f, in order to traverse the contents displayed onto the
projection screen. Also, any phrase or combination of phrases could be given as a
voice command by the user, and the system will display the corresponding matching
content from the database. The custom database keeps learning and updating itself
based on the new keywords and phrases spoken by the users.
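The following sketch is offered only as an illustration of the process 500 control flow, under the assumption that the ‘start search’ and ‘stop search’ commands simply gate whether spoken phrases are matched, and that phrases not yet in the custom database are remembered so the database "learns" them. The function name run_voice_search and the learning strategy are hypothetical.

```python
# Illustrative sketch of process 500 (Fig. 5); identifiers are hypothetical.
from typing import Dict, Iterable, List


def run_voice_search(spoken_phrases: Iterable[str],
                     repository: Dict[str, str]) -> List[str]:
    """Convert, match, display and learn phrases as in Fig. 5."""
    displayed: List[str] = []
    listening = False
    for phrase in spoken_phrases:            # 501: voice captured
        text = phrase.strip().lower()        # 502: voice -> text command
        if text == "start search":
            listening = True
            continue
        if text == "stop search":
            listening = False
            continue
        if not listening:
            continue
        if text in repository:               # 503: match against repository 220
            displayed.append(repository[text])   # 504/505: project the content
        else:
            # The custom database keeps learning: remember the new phrase so a
            # later user asking the same thing gets the same (placeholder) result.
            repository[text] = f"content fetched for '{text}'"
            displayed.append(repository[text])
    return displayed


if __name__ == "__main__":
    repo = {"photosynthesis": "Definition of photosynthesis"}
    out = run_voice_search(
        ["start search", "Photosynthesis", "water cycle", "stop search"], repo)
    print(out)
    print("learned keys:", sorted(repo))
```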
[32] Such a system 100 provides several advantages over the existing state of the
art. It eliminates the need for replacing the complete system in case of failure of the
projection system since it is modular in nature, thereby enabling easy replacement of
only the failed component/part of the projection system. Further, it enables easy
navigation and searching of contents through voice-enabled commands, thereby
providing the users with the ability to fully operate the system 100 without use of the
user's hands and input systems such as a keyboard. The system 100 does not involve
external connection of various input and output systems and hence provides a
portable integrated system, thereby making it convenient to carry. Moreover, the
system 100 eliminates the heating issue posed by existing similar products since the
projector and the computing system are intelligently separated by a heat-absorbing
tray and proper ventilation spaces have been provided in the system, thereby
significantly increasing the life of the components in the projection system due to
reduction in heating problems. Also, the system 100 advantageously integrates the
functions of an Internet-enabled multimedia computer, audio player, VCD player,
DVD player, game station and data projector in a single portable housing.
[33] In one of the exemplary embodiments, the system 100 includes a compact
housing of dimensions 29.2 x 32.3 x 28.3 cm. The projector 103 further has
dimensions of 29.6 x 12.0 x 23.9 cm. Further, the processor 105 has dimensions of
18.3 x 18 x 3.5 cm. However, it should be noted that the disclosed system 100 may
be made in any size and shape and should not be construed as limited to the
mentioned dimensions only.
[34] Referring to Fig. 6, in one of the exemplary embodiments, the processor 105
may be disposed in communication with internal memory 603, e.g., RAM 605 and
ROM 610, which may connect to memory including, without limitation, memory
drives, removable disc drives, etc., employing connection protocols such as Serial
Advanced Technology Attachment (SATA), Integrated Drive Electronics (IDE),
IEEE-1394, Universal Serial Bus (USB), fiber channel, Small Computer Systems
Interface (SCSI), etc. The memory drives may further include a drum, magnetic disc
drive, magneto-optical drive, optical drive, Redundant Array of Independent Discs
(RAID), solid-state memory systems, solid-state drives, etc.
[35] The memory may store a collection of program or database components,
including, without limitation, user/application data, an operating system and the like.
In some embodiments, the system 100 may store user/application data, such as the
data, variables, records, etc. described in this disclosure. Such databases may be
implemented as fault-tolerant, relational, scalable, secure databases such as Oracle or
Sybase.
[36] The operating system may facilitate resource management and operation of
the system 100. Examples of operating systems 650 include, without limitation,
Apple Macintosh™ OS X™, UNIX™, Unix-like system distributions (e.g., Berkeley
Software Distribution (BSD), FreeBSD™, NetBSD™, OpenBSD™, etc.), Linux
distributions (e.g., Red Hat™, Ubuntu™, Kubuntu™, etc.), International Business
Machines (IBM™) OS/2™, Microsoft Windows™ (XP™, Vista/7/8, etc.), Apple
iOS™, Google Android™, Blackberry™ Operating System (OS), or the like. The
user interface 201 may facilitate display, execution, interaction, manipulation, or
operation of program components through textual or graphical facilities. For
example, user interfaces may provide computer interaction interface elements on a
display system operatively connected to the system 100, such as cursors, icons, check
boxes, menus, windows, widgets, etc. Graphical User Interfaces (GUIs) may be
employed, including, without limitation, Apple™ Macintosh™ operating systems’
Aqua™, IBM™ OS/2™, Microsoft™ Windows™ (e.g., Aero, Metro, etc.), Unix
X-Windows™, web interface libraries (e.g., ActiveX, Java, JavaScript, AJAX,
HTML, Adobe Flash, etc.), or the like.
[37] A network 601 interconnects all the components and the predefined
voice-initiated action repository 220 with the system 100. The network includes
wired and wireless networks. Examples of the wired networks include a Wide Area
Network (WAN) or a Local Area Network (LAN), a client-server network, a
peer-to-peer network, and so forth. Examples of the wireless networks include Wi-Fi,
a Global System for Mobile Communications (GSM) network, a General Packet
Radio Service (GPRS) network, an Enhanced Data GSM Environment (EDGE)
network, 802.5 communication networks, Code Division Multiple Access (CDMA)
networks, or Bluetooth networks.
[38] In the present implementation, the voice-initiated action repository 220 may
be implemented as an enterprise database, a remote database, a local database, and
the like. The voice-initiated action repository 220 may be located within the vicinity
of the system 100 or may be located at a different geographic location from that of
the system 100. Further, where multiple voice-initiated action repositories 220 are
provided, they may be located either within the vicinity of each other or at different
geographic locations. Furthermore, the voice-initiated action repository 220 may be
implemented inside the system 100, and may be implemented as a single database or
as multiple databases.
[39] In the present implementation, the system 100 includes one or more
processors 105. The processor 105 may be implemented as one or more
microprocessors, microcomputers, microcontrollers, digital signal processors, central
processing units, state machines, logic circuitries, and/or any systems that manipulate
signals based on operational instructions. Among other capabilities, the at least one
processor 105 is configured to fetch and execute computer-readable instructions
stored in the memory.
[40] The user interface 201 may include a variety of software and hardware
interfaces, for example, a web interface, a graphical user interface, and the like. The
user interface 201 may allow the system 100 to interact with a user directly or
through the user systems. Further, the user interface 201 may enable the system 100
to communicate with other user systems or computing systems, such as web servers.
The user interface 201 can facilitate multiple communications within a wide variety
of networks and protocol types, including wired networks, for example, LAN, cable,
etc., and wireless networks, such as WLAN, cellular, or satellite. The user interface
201 may include one or more ports for connecting a number of systems to one
another or to another server.
[41] The memory may be coupled to the processor 105. The memory can include
any computer-readable medium known in the art including, for example, volatile
memory, such as static random access memory (SRAM) and dynamic random access
memory (DRAM), and/or non-volatile memory, such as read only memory (ROM),
erasable programmable ROM, flash memories, hard disks, optical disks, and
magnetic tapes.
[42] As described above, the modules, amongst other things, include routines,
programs, objects, components, and data structures, which perform particular tasks
or implement particular abstract data types. The modules may also be implemented
as signal processor(s), state machine(s), logic circuitries, and/or any other systems or
components that manipulate signals based on operational instructions. Further, the
modules can be implemented by one or more hardware components, by
computer-readable instructions executed by a processing unit, or by a combination
thereof.
[43] Furthermore, one or more computer-readable storage media may be utilised
in implementing some of the embodiments consistent with the present disclosure. A
computer-readable storage medium refers to any type of physical memory on which
information or data readable by a processor may be stored. Thus, a computer-readable
storage medium may store instructions for execution by one or more processors,
including instructions for causing the processor(s) to perform steps or stages
consistent with the embodiments described herein. The term “computer-readable
medium” should be understood to include tangible items and exclude carrier waves
and transient signals, i.e., to be non-transitory. Examples include Random Access
Memory (RAM), Read-Only Memory (ROM), volatile memory, non-volatile
memory, hard drives, Compact Disc (CD) ROMs, Digital Video Discs (DVDs), flash
drives, disks, and any other known physical storage media.
[44] The illustrated steps are set out to explain the exemplary embodiments
shown, and it should be anticipated that ongoing technological development will
change the manner in which particular functions are performed. These examples are
presented herein for purposes of illustration, and not limitation. Further, the
boundaries of the functional building blocks have been arbitrarily defined herein for
the convenience of the description. Alternative boundaries can be defined so long as
the specified functions and relationships thereof are appropriately performed.
Alternatives (including equivalents, extensions, variations, deviations, etc., of those
described herein) will be apparent to persons skilled in the relevant art(s) based on
the teachings contained herein. Such alternatives fall within the scope and spirit of
the disclosed embodiments. Also, the words "comprising," "having," "containing,"
and "including," and other similar forms are intended to be equivalent in meaning
and be open ended in that an item or items following any one of these words is not
meant to be an exhaustive listing of such item or items, or meant to be limited to only
the listed item or items. It must also be noted that as used herein and in the appended
claims, the singular forms “a,” “an,” and “the” include plural references unless the
context clearly dictates otherwise.
[45] It is intended that the disclosure and examples be considered as exemplary
only, with a true scope of disclosed embodiments being indicated by the following
claims.
WE CLAIM:
1. A system for projecting digital data comprising:
a retrieval engine configured to search a predefined voice-initiated action
repository based on at least one voice command provided by a user;
wherein the predefined voice-initiated action repository stores a plurality
of text commands associated to a plurality of voice-initiated actions,
further wherein a processor executes the at least one voice-initiated action
associated to the plurality of text commands;
a data acquisition unit connected to the processor for recording the at least
one voice command provided by the user wherein the processor is further
configured to convert the plurality of recorded voice commands into the
plurality of text commands.
2. The system as claimed in claim 1, wherein the at least one voice command
comprises a plurality of operational commands including but not limited to
search, open browser, open text editor.
3. The system as claimed in claim 1, wherein the at least one voice command
comprises a plurality of action commands including but not limited to left,
right, move, up, down, toggle.
4. The system as claimed in claim 1, further comprises a modular display
system for projecting a plurality of digital data onto an external surface based
on the plurality of voice-initiated actions associated with the at least one
converted text command.
5. The system as claimed in claim 1, further comprises a user interface
configured to allow at least one input through one of a voice command or a
text command from the user.
6. The system as claimed in claim 1, wherein the retrieval engine is configured
to search the voice-initiated action repository through the text command
inputted by the user through the user interface.
7. The system as claimed in claim 1, wherein the predefined voice-initiated
action repository further comprises the capability for saving and repeating the
plurality of converted text commands.
8. The system as claimed in claim 1, wherein the digital data comprises video,
audio, and text.
9. A method for projecting digital data comprising:
recording a plurality of voice commands provided by a user through a data
acquisition unit;
determining, based on an at least one text command, a voice-initiated
action indicated by the at least one voice command through a retrieval
engine, wherein the retrieval engine is configured to search a predefined
voice-initiated action repository based on an at least one translated text
command; further wherein the voice-initiated action is a particular
voice-initiated action from a plurality of voice-initiated actions associated with a
plurality of text commands executed by a processor.
10. The method as claimed in claim 9, further comprises converting the plurality
of recorded input voice commands to the plurality of text commands through
the processor.
11. The method as claimed in claim 9, wherein the at least one voice command
comprises a plurality of operational commands including but not limited to
search, open browser, open text editor.
12. The method as claimed in claim 9, wherein the at least one voice command
comprises a plurality of action commands including but not limited to left,
right, move, up, down, toggle.
13. The method as claimed in claim 9, further comprises projecting a plurality of
digital data onto an external surface based on the plurality of voice-initiated
actions associated with the at least one converted text command.
14. The method as claimed in claim 9, further comprises allowing at least one
input through one of a voice command or a text command from the user
received through a user interface.
15. The method as claimed in claim 9, further comprises searching the
voice-initiated action repository based on the text command inputted by the user
through the user interface.
16. The method as claimed in claim 9, wherein the predefined voice-initiated
action repository further comprises the capability for saving and repeating the
plurality of converted text commands.
17. The method as claimed in claim 9, wherein the digital data comprises video,
audio, and text.