
An Innovative Communication And Control Device For People With Speech And Motor Disabilities

Abstract: The present invention is a communication and control device for non-verbal persons comprising: a computing unit (4) having a predictive and adaptive application configured to construct sentences using a single click, a switch (5) integrated with the said computing unit (4) to generate the click to select language elements to complete a sentence, a user interaction device interfaced with the said computing unit (4) to display and/or read out the language elements, a text-to-speech synthesizer (18) to convert the sentence to a series of audio signals, an audio codec (14) to convert the audio signals into analog form, and a communication unit configured to control the environment around the person, and a method of operating the device thereof. Figure 2


Patent Information

Application #: 2592/CHE/2008
Filing Date: 23 October 2008
Publication Number: 18/2010
Publication Type: INA
Invention Field: ELECTRONICS
Status:
Parent Application:

Applicants

INVENTION LABS ENGINEERING PRODUCTS PVT. LTD.
CGE-2A, KUPPAM BEACH ROAD, THIRUVANMIYUR, CHENNAI-600041

Inventors

1. AJIT NARAYANAN
CGE-2A, KUPPAM BEACH ROAD, THIRUVANMIYUR, CHENNAI-600041
2. MOHAMMED ADIB IBRAHIM
A-21 , SAKTHI TOWERS, GANDHI ROAD, VELACHERY, CHENNAI-600042

Specification


FIELD OF THE INVENTION
The present invention relates to a communication and control device for disabled persons.
BACKGROUND OF THE INVENTION
Several conditions, such as cerebral palsy, strokes or motor degenerative illnesses, can render persons unable to speak, and at the same time unable to exercise control over their muscular motions. This inhibits them from speaking or using alternative communication mechanisms such as typing, writing or sign language. In the absence of any of these modes of communication, it is a challenge for such people to get an education, find a job, co-ordinate with their environment, or communicate with other people.
The primary mechanism used by such persons today is eye-pointing, where the person uses his or her eye movements to point to a certain point in a flip-chart. Usually the flip chart contains a number of communication elements (words or pictures) that the person will commonly use. For instance, the alphabets of the English language may have been divided into triplets, to form 9 sets which may be arranged in the form of a 3x3 grid on a chart. The person looks at one of the nine triplets. The interlocutor to whom the person is communicating interprets the person's intent by following their gaze to the chart. There may be a sequence of inter-connected charts which allow the person to communicate in a rudimentary fashion.
The obvious disadvantages of eye-pointing are that it is slow and limits the person to a very rudimentary vocabulary. Another disadvantage is that the person and his or her interlocutor are both focused on the charts and not on each other, which impedes communication. The lack of speech also hinders the disabled person from initiating a conversation.
A class of devices called Voice Output Communication Aids (VOCA) has recently been introduced to the market. These electronic devices can output a set of pre-recorded voice messages upon actuation. The message to be spoken out is determined either by selecting a different button for each message, or by highlighting icons on the device one by one and interrupting the scan when the appropriate message is highlighted. These VOCA devices are called static message generation VOCA units since they have a static set of pre-recorded messages that they can play back. They have the limitation that they permit the expression of only a very rudimentary vocabulary.
Most recently, a set of devices have arisen that are able to dynamically generate the message that is constructed, by employing a personal computer with text-to-speech software. The user may create a sentence through eye-pointing or other similar means, which is read out by software running on the computer.
While this is the most technologically advanced device available in the market, it still falls short of the expectations of the disabled. It is difficult to actuate since it relies primarily on eye-pointing. Automated eye-pointing recognition requires sophisticated equipment which is expensive. The software that constitutes the device runs on a personal computer, which is non-portable. The computer also needs to be switched on, logged in and shut down through the use of devices like a mouse or keyboard, which cannot be operated by the disabled person. Therefore, these solutions do not entirely allow the disabled person to independently and fluently communicate.
It is these shortcomings that the present invention addresses.
OBJECTS OF THE INVENTION
One of the principal objects of the invention is to develop a communication and control device for non-verbal persons.
Another object of the present invention is to provide a method of operating a communication and control device for disabled persons.
STATEMENT OF INVENTION
Accordingly the present invention provides a communication and control device for non-verbal persons comprising: a computing unit (4) having a predictive and adaptive application configured to construct sentences using a single click, a switch (5) integrated with the said computing unit (4) to generate the click to select language elements to complete a sentence, a user interaction device interfaced with the said computing unit (4) to display and/or read out the language elements, a text-to-speech synthesizer (18) to convert the sentence to a series of audio signals, an audio codec (14) to convert the audio signals into analog form, and a communication unit configured to control the environment around the person. The present invention also provides a method of operating a communication and control device for disabled persons comprising the steps of: pressing a switch (5) for activating the communication device and for displaying and/or reading out language elements on the user interaction device, scanning through the elements to reach a particular element that the user prefers to communicate, clicking the switch (5) to select the element(s) for framing a sentence, and converting the framed sentence into a series of audio signals, and thereafter into amplified analog signals to read out the sentence.
BRIEF DESCRIPTION OF THE DRAWINGS
Figure 1 shows a traditional method of using eye-pointing to select an alphabet.
Figure 2 shows a block diagram of the system in one embodiment of input and output devices.
Figure 3 shows a user interface representation of the software component of the system when it is used to begin a sentence.
Figure 4 shows how the device reacts when a substring has already been selected.
Figure 5 shows how the device may be used with the option of scanning multiple items at a time.
Figure 6 shows environmental elements to be controlled by the device.
Figure 7 shows a representation of the switch (5) functionality in one embodiment of its mode of operation.
Figure 8 shows the method of mounting the switch (5) on the wheelchair.
Figure 9 shows a block diagram of the auditory output components of the system.
Figure 10 shows how the system may be used with two audio channels for visually impaired persons.
Figure 11 shows the block diagram of the microprocessor-based embedded system comprising the device.

Figure 12 shows an electrical circuit to switch on the system with a single click.
DETAILED DESCRIPTION OF THE INVENTION
The primary embodiment of the present invention is a communication and control device for non-verbal persons comprising: a computing unit (4) having a predictive and adaptive application configured to construct sentences using a single click, a switch (5) integrated with the said computing unit (4) to generate the click to select language elements to complete a sentence, a user interaction device interfaced with the said computing unit (4) to display and/or read out the language elements, a text-to-speech synthesizer (18) to convert the sentence to a series of audio signals, an audio codec (14) to convert the audio signals into analog form, and a communication unit configured to control the environment around the person.
In yet another embodiment the communication and control device comprises an audio amplifier (17) to amplify the audio signals, a speaker (2) mounted on an enclosure of the communication device to read out the amplified audio signals to the interlocutor, and a mount (3) provided on the enclosure for mounting the device.
In still another embodiment, the computing unit (4) is selected from a group comprising microprocessors and microcontrollers, selected from a group comprising Intel x86, ARM, MIPS and Digital Signal Processors (DSPs).
In still another embodiment of the communication and control device, the user interactive device (1) is selected from a group comprising an LCD display, a CRT display and an electronic paper display to display the language elements.
In still another embodiment the user interactive device for the visually impaired comprises a secondary audio channel via headphones (16).
In still another embodiment the switch (5) is connected to the computing unit (4) through a communication interface, selected from a group comprising USB interface (13), general purpose input pins, serial ports, wireless connection (13) and combinations thereof.
In still another embodiment the switch (5) is mounted in a location that is convenient to the user including on a wheelchair or on a bed or on a desk.

In still another embodiment the computing unit (4) employs a prediction-based scanning system for scanning the language elements.
In still another embodiment the language elements are selected from a group comprising pictures, words, sentences, alphabets and combinations thereof.
In still another embodiment the switch (5) acts as an input to the device and is actuated in a plurality of ways selected from a group comprising contact switches, non-contact switches, wireless switches, modulation of breathing, eye-pointing/eye-tracking, face tracking, tilt sensors, twist sensors and combinations thereof.
In still another embodiment, a plurality of states of a switch (5), or at least one switch (5) of the aforesaid types, is used as an input to the device.
In still another embodiment the device includes power supply (11) circuitry that does not turn OFF the device due to multiple or spurious clicks caused by inadvertent motion.
In still another embodiment the device optionally provides other output mechanisms selected from a group comprising printing on paper, redirecting to a computer, sending signals over a network (12), displaying on a computer monitor or television, storing in a memory and combinations thereof.
In still another embodiment the device is portable, language independent and provides the ability for a person with disabilities to use it without any external assistance.
In still another embodiment the environment around the person comprises electric switches and wheelchair motion.
Another embodiment of the present invention is a method of operating a communication and control device for disabled persons comprising the steps of: pressing a switch (5) for activating the communication device and for displaying and/or reading out language elements on the user interaction device, scanning through the elements to reach a particular element that the user prefers to communicate, clicking the switch (5) to select the element(s) for framing a sentence, and converting the framed sentence into a series of audio signals, and thereafter into amplified analog signals to read out the sentence.

In yet another embodiment of the method, the language elements are selected from a group comprising pictures, words, phrases, sentences, alphabets and combinations thereof.
In still another embodiment of the method, the scanning highlights one of the language elements on the display (1) in a visual way and the highlight moves from element to element, pausing at each element for a predetermined time interval.
In still another embodiment of the method, the framing of the sentence is carried out by clicking the switch (5) a plurality of times to select the particular element(s) that the user prefers to communicate.
In still another embodiment the method provides options to signify completion of sentences, erasure, entering spell mode and special commands.
In still another embodiment of the method, the special commands are selected from a group comprising increasing and decreasing the volume, increasing and decreasing the speed, shutting down the device, and controlling the characteristics of the input and output devices.
In still another embodiment of the method, the language elements are commands that control elements of the user's environment selected from a group comprising lighting switches, fan switches, air conditioner settings, wheelchair settings, bed orientation, buzzers, computers and a plurality of electrical and mechanical equipment.
The device described in the present invention is a communication and control system designed for use by people with speech impairment and lack of muscle control. These conditions are most frequently found to be present amongst people with cerebral palsy, stroke and muscular dystrophy. In the following detailed description, references are made to the accompanying drawings that form a part hereof, and are shown by way of illustrating specific embodiments or examples. Throughout the description, the term 'interlocutor' is used to refer to the person who is conversing or communicating with the person with disabilities. It must be noted that while the invention is described in an embodiment that is appropriate for use by a disabled person, it is equally useful for persons who are not disabled but who require assistance in communicating due to temporary impairments, environmental conditions or work-related issues.
Figures 1a to 1d show the traditional method in which eye-pointing is used by a disabled person to point to one of several choices on a chart. This has been the most common method for non-verbal disabled people with muscle disabilities to communicate. In this method, a chart is prepared with several options that the disabled person may choose to communicate. Such a chart is shown in Figure 1a. The interlocutor is positioned with a view of the chart as well as the person.
Figure 1b shows the eye position of the person when he selects object 1 on the chart. Figure 1c shows the eye position of the person when he selects object 6 on the chart.
Selection of a particular object on the chart may lead to the interlocutor switching to a different chart, such as the one shown in Figure 1d, to make further choices. By employing a series of such charts, along with substantial questioning by the interlocutor, the disabled person can make an attempt at communication.
This method of communication has several obvious disadvantages, and is presented here solely to provide a context in which the present invention may be considered.
The present invention is schematically represented in Figure 2. It consists of the following components: a micro-processor based computing device (4); a switch (5); an output device (shown here as a speaker (2)); a user interaction device (shown here as an LCD display (1)); an enclosure; and a mount (3).
The computing device shown in Figure 2 is sufficiently powerful to run special purpose software, described hereinafter, that is used for the purpose of specifying sentences and converting the specified sentence into speech. It may also be used for a variety of other applications, including word processing, internet browsing, sending e-mail and experiencing a variety of media such as documents, movies and music.
The user interface on the computing device is in the form of a Liquid Crystal Display (LCD) (1). In the case of a person with visual impairment in addition to motor disabilities, the user interface could also be through a secondary audio channel, which could either replace or supplement the LCD (1). Such a secondary audio channel may be provided through headphones (16), so that the audio prompts to the user do not interfere with the audio communication being provided by the device as a whole.
The input mechanism of the computing device is a switch (5) which can be actuated by the disabled person in a variety of ways consistent with the faculties available to the person. In the embodiment shown in Figure 2, it takes the form of a simple press-button switch (5). In other embodiments, it may be replaced by contact switches, non-contact switches, modulation of breathing, eye-pointing or eye-tracking, face tracking, tilt sensors, twist sensors or other devices with at least one degree-of-freedom. The switch (5) is connected to the computing device through a USB interface (13), though it may also be connected via general-purpose input pins, serial ports, wireless (13) connection or other communication medium. The software component of the device is responsible for communicating with the switch (5) and registering click signals given out by it.
The output mechanism of the computing device is most commonly speech, in the form of audio emanating from a speaker (2) in-built into the system as shown in Figure 2. The speaker (2) outputs the audio resulting from the text-to-speech conversion of sentences created by the user through the software. The speaker (2) may be supplemented or replaced with a number of other output mechanisms. Such mechanisms include (but are not limited to) redirection to a computer, printing out on paper, sending of electronic signals over a network (12), display on a computer monitor or television, or storage on an electronic data storage medium.
The entire system, excluding the switch (5), is enclosed within a robust enclosure. A flexible mount (3) is provided on the enclosure for the primary purpose of mounting on a wheelchair, bed or desk that the disabled person may be working in. The mount (3) is described hereinafter in more detail.
Figure 3 is an embodiment of the software present on the device, which communicates with the switch (5) and controls the construction of sentences. The software displays a set of language elements on the screen in order of their relative frequency of occurrence. In Figure 3, no sentence has been begun yet, so the language elements displayed include alphabets of the English language, commonly used words, and commonly used phrases that are likely to occur at the beginning of a sentence. Though the figure shows only alphabetical language elements, the invention also encompasses elements that may be in the form of images and audio.
After the software displays the language elements on the screen, it enters a mode known hereafter as scanning. When the software is in scanning mode, one of the language elements on the screen is highlighted in a visual way. This highlight moves from element to element, pausing at each element for a few seconds or a time interval previously configured by the user. The user may choose an element by actuating the switch (5) when the scan has highlighted the element of choice. The scan may cycle back to the first element after it has scanned the last element.
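Purely as an illustration of the scanning behaviour just described, the loop below sketches it in Python. The element list, the dwell time and the check_switch() poller are hypothetical stand-ins for the switch (5) interface and the user-configured scan interval, not the actual implementation of the device.

    import time

    def scan_and_select(elements, dwell_s=1.5, check_switch=lambda: False):
        """Cycle a highlight over `elements`; return the element that is
        highlighted when the switch is clicked, wrapping around until then."""
        i = 0
        while True:
            print(f"[highlight] {elements[i]}")   # stands in for the LCD highlight
            deadline = time.time() + dwell_s      # pause at each element
            while time.time() < deadline:
                if check_switch():                # True when the user clicks
                    return elements[i]
                time.sleep(0.01)                  # poll the switch at ~100 Hz
            i = (i + 1) % len(elements)           # cycle back after the last element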
Once a part of the sentence has been constructed, the software displays a new set of language elements on the screen which are context-specific to the sentence portion constructed so far. An example is shown in Figure 4, where the user has constructed the fragment 'Go t' and is in the process of selecting the next element. From statistical analysis on a corpus of training data, the software is able to rank suffixes for the fragment created so far. In the example shown, statistical analysis shows that the fragment 'Go t-' is most likely 'Go to'. Other options are 'Go tr-', 'Go ti-', 'Go tel-', 'Go to room' and 'Go the'.
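A minimal sketch of the statistical ranking described above, assuming a simple word-frequency model built from a toy corpus; the specification does not fix a particular statistical model, and the corpus and function names here are illustrative only.

    from collections import Counter

    def build_counts(corpus_sentences):
        """Word-frequency table built from a training corpus."""
        return Counter(w for s in corpus_sentences for w in s.lower().split())

    def rank_continuations(fragment, counts, top_n=6):
        """Rank corpus words completing the partially typed last word of
        `fragment`, most frequent first (e.g. 'Go t' -> 'to', 'the', ...)."""
        prefix = fragment.lower().split()[-1]
        matches = [(w, c) for w, c in counts.items() if w.startswith(prefix)]
        return [w for w, _ in sorted(matches, key=lambda wc: -wc[1])[:top_n]]

    counts = build_counts(["go to the room", "go to bed", "go there now"])
    print(rank_continuations("Go t", counts))     # e.g. ['to', 'the', 'there']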
In addition to the options shown in Figure 4, every stage also has options to signify completion of sentence, erasure, entering spell mode, and special commands. These are not shown in the figure for the sake of simplicity.
When the user has completed a sentence, the sentence is fed as an input to a text-to-speech software (18) module, which synthesizes speech corresponding to the sentence. In addition, the sentence and its constituent words are fed back to update the statistical probabilities of the words, so that over time, the probabilities stored for each language element represent the personal vocabulary of the user instead of the generic language of the system.
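The adaptive step can likewise be sketched as a frequency update folded back into the same table after every spoken sentence; the JSON persistence file below is an assumption, made only so the sketch survives a power cycle.

    import json

    def update_model(counts, completed_sentence, path="user_vocab.json"):
        """Reinforce each word of a spoken sentence in the frequency table."""
        for w in completed_sentence.lower().split():
            counts[w] = counts.get(w, 0) + 1      # personal vocabulary gains weight
        with open(path, "w") as f:                # persist across sessions
            json.dump(dict(counts), f)
        return counts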

It must be noted that the system described above is not specific to any language, and is not specific to any language element. It can be easily extended to languages other than English. It can also be used for specialized languages, such as mathematical formulae, computer languages etc. In conjunction with the flexibility to redirect output to a plurality of devices, this makes the device a very useful tool for vocational use by persons with disabilities.
Figure 4 shows a method of scanning which involves only one choice at a time, but it is possible to have a more sophisticated scanning technique as shown in Figure 5. In this figure, the scan moves two elements at a time, reducing the time taken for a complete scan. It is possible to use models such as Hidden Markov Models to predict accurately even when there is an ambiguity in each step of the prediction. This is analogous to the predictive system in common use in cell-phones, though its specific use in this context may be considered novel. This is used to good effect to reduce the time taken for sentence construction, as well as to decrease the strain due to repeated clicks of the switch (5).
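As a hedged sketch of such two-at-a-time scanning, reusing the frequency table from the earlier sketch: a click selects the highlighted pair, and the likelier member is taken as the continuation. A full system might run an HMM or Viterbi decode over the whole sentence; this greedy resolution is illustrative only.

    def pairwise_groups(elements):
        """Split the scan list into pairs, halving the number of scan steps."""
        return [elements[i:i + 2] for i in range(0, len(elements), 2)]

    def disambiguate(pair, counts):
        """Resolve a selected pair to its more probable member, using the
        same word-frequency table sketched above."""
        return max(pair, key=lambda w: counts.get(w.lower(), 0))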
The software described above allows a user to create sentences which are read out by the system, thereby aiding communication. In addition, the present invention can also be used for controlling the environment around a disabled person, as shown in Figure 6. Here, the software is modified to display environmental elements instead of, or in addition to, language elements. Environmental elements shown in Figure 6 include the ability to switch on and off several lights, increase or decrease the speed of a fan, open and close a door, switch on or off a computer, and ring a buzzer. This application of the present invention allows a disabled person to be substantially independent of caregivers in limited surroundings.
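For illustration, the mapping from selected environmental elements to control actions might look like the sketch below; the action table and the send_command() transport are hypothetical, since the invention only requires that selections drive electrical and mechanical equipment through the communication unit.

    ENV_ACTIONS = {
        "light 1 on":  ("relay", 1, True),
        "light 1 off": ("relay", 1, False),
        "fan faster":  ("fan", 1, +1),
        "ring buzzer": ("buzzer", 1, True),
    }

    def send_command(kind, target, value):
        """Stand-in for the communication unit (e.g. a serial or wireless
        link to relays); here the intended action is only logged."""
        print(f"cmd -> {kind}#{target} = {value}")

    def on_environment_select(label):
        """Invoked when the scan-and-click selection is an environmental
        element rather than a language element."""
        send_command(*ENV_ACTIONS[label])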
Figure 7 shows the typical flow of operation of the switch (5) used in the system. It captures the abstract functionality, as the actual method of actuation (pressing) may be different for different disabilities. In its simplest form, shown in Figure 7, the switch (5) sends a click signal when it is pressed and a release signal when it is released, to the computing system. The definition of 'press' and 'release' in this context may be different for each type of switch (5). In a non-contact switch (5), the press action may be indicated by proximity of a body appendage to the switch (5). In a breath-controlled switch, the press action may be indicated by a pre-defined blowing pattern. It is evident to one skilled in the art that the claim made with respect to the use of this switch (5) for disability assistance is independent of the specific 'press' and 'release' actions that actuate it.
A number of variations of the basic algorithm shown in Figure 7 are possible, and are claimed by this invention. For instance, the click and release actions may be reversed in the switch (5) depending on which state is more convenient for the user to hold for a longer period of time. Also, the click signal and the release signal may be used independently to signify clicks, in order to reduce strain to the user and possibly speed up text entry.
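These variations can be captured in a small adapter that turns raw press and release edges into logical clicks. The class below is an illustrative sketch: the polarity inversion and release-as-click options mirror the variations described above, while the edge source and any debouncing are assumptions.

    class SwitchAdapter:
        """Map raw switch edges to logical clicks, independent of the
        physical 'press' action (contact, proximity, breath, etc.)."""

        def __init__(self, invert=False, release_clicks=False):
            self.invert = invert                  # swap press/release roles
            self.release_clicks = release_clicks  # count releases as clicks too

        def on_edge(self, pressed):
            """Translate one raw edge into a 'click' event or None."""
            if self.invert:
                pressed = not pressed
            if pressed:
                return "click"
            return "click" if self.release_clicks else None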
A key innovation claimed in the present invention is the ability to mount the switch (5) in a location that is convenient to the user. Typically, switches have been activated by a particular muscle or appendage. In this case, the switch (5) may be mounted in a location where the user has maximum control of a body part. In Figure 8, two such possibilities are shown. In Figure 8a, the switch (5) is mounted behind the head. This facilitates actuation in the case of persons who are quadriplegic, i.e. paralyzed below the neck. In Figure 8b, the switch (5) is placed near the knee. This facilitates actuation with less strain if the person has been trained through physiotherapy to have a slight degree of freedom in the knee area. These two options have been shown in Figure 8 for illustration purposes only, as the range of locations may be unlimited.
Also claimed in the present invention is the use of a wireless switch (5), in order to further increase the flexibility of mounting the switch (5) in a variety of environments.
Figure 9 depicts the processes that occur once the entire sentence has been framed. The sentence is converted to a series of audio signals by a text-to-speech synthesizer (18). This is fed to an audio codec (14) as a set of electrical signals, and further to an audio amplifier (17). The amplified audio is fed to the speaker (2) mounted on the enclosure, and is thereby read out to the interlocutor.
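As an illustrative sketch of this output chain, the synthesis step could be delegated to an off-the-shelf engine such as the espeak command-line tool; this is an assumption for demonstration, not the synthesizer (18) of the specification, and the host's sound path stands in for the codec (14), amplifier (17) and speaker (2).

    import subprocess

    def speak(sentence, volume=100, wpm=150):
        """Read a framed sentence aloud; -a (amplitude) and -s (speed)
        loosely mirror the device's volume and speed special commands."""
        subprocess.run(
            ["espeak", "-a", str(volume), "-s", str(wpm), sentence],
            check=True,
        )

    speak("Go to the room")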

Figure 10 depicts the manner in which auditory prompts may be used by persons with combined muscle and visual impairment in a satisfactory manner. The software constructs sentences similar to the mode described previously. During scanning, whenever a particular choice is highlighted, the text-to-speech module (18) is invoked to convert the choice to an intermediate auditory signal. This auditory signal is not played through the speaker (2) since that would be confusing as well as disturbing to the interlocutor. Instead, it is routed through a separate audio channel to the user. The user wears headphones (16) or earphones that allow him or her to understand the current scan location, and press the switch if it is the right one.
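A sketch of this two-channel routing, assuming hypothetical ALSA device names for the speaker and headphone outputs and the same espeak stand-in as above: scan prompts are synthesized to a file and played only on the headphone device, while finished sentences go to the main speaker.

    import subprocess

    def play_on(device, text, wav="/tmp/prompt.wav"):
        """Synthesize `text` to a file, then play it on one ALSA device."""
        subprocess.run(["espeak", "-w", wav, text], check=True)
        subprocess.run(["aplay", "-D", device, wav], check=True)

    def prompt_user(choice):
        play_on("plughw:0,1", choice)     # headphones (16): private scan prompt

    def speak_sentence(sentence):
        play_on("plughw:0,0", sentence)   # speaker (2): heard by the interlocutor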
Figure 11 shows the overall organization of the computing system in which the software and the switch (5) are interfaced. It must be noted that the organization shown in Figure 11 is only illustrative of one embodiment based on available technology. It is evident to one skilled in the art that the computer system's components may be substituted by blocks with equivalent functionality without affecting the novelty of the present system.
As shown in Figure 11, the computing system has a micro-processor (4) which is the central processing unit of the system. Accompanying the microprocessor (4), but not shown in the diagram, may be fixed amounts of Random Access Memory and non-volatile storage, such as NAND Flash memory. The micro-processor (4) is integrated through a display interface (1) to the LCD screen. A USB or wireless block (13) is present in order to interface the switch (5) to the system. An audio codec (14) is connected to both a speaker (2) and a pair of headphones (16) to provide audio output and audio prompts respectively.
Power to the system is provided either through batteries or through AC mains which is rectified and regulated appropriately. There is a provision for charging of the battery from the AC mains when the device is on AC power.
A key innovation in the device is the ability for a person with disabilities to use the device without any external assistance. In a traditional system based on a personal computer, this is very difficult or impossible, due to the size and nature of the power button, the necessity to log in, start a program and other practical difficulties that cannot be overcome with a single click interface. In the present invention, there is a single switch (21) which turns on the device, and the device is shut down by selecting a special option in the software. Once the ON switch is pressed, further presses of it cannot turn off the device. This is achieved through a mechanism shown in Figure 12 whereby the ON switch (21) (represented as a push-button type switch) provides an input which is latched. The latch (22) output controls an electronic switch, represented as 1-2 in the figure. This electronic switch may be a Field Effect Transistor, a Bipolar Junction Transistor, or a combination of several electronic components. The first terminal of this electronic switch is connected to the power supply (11) and the second terminal is connected to the system load (23).
By utilizing a switch mechanism as shown in Figure 12, it is convenient for a user with a disability to operate the system without fear of inadvertent motions causing the device to be turned off, and without the assistance of another person to turn on and off the device. The push button switch (21) may be further integrated with the switch (5) that constitutes the input mechanism of the device, thereby simplifying the system and minimizing the number of independent muscle movements to be made by the person.
The entire system is enclosed in a casing that takes into account the portability as well as robustness of the system. An innovative element of the enclosure is that it is specifically provided with a universal joint with a stalk attached to it. This allows the device to be mounted on a wheel-chair, bed, desk or any other convenient location, or to be folded out to carry around the device in the manner of a hand-held computer.
In summary, the key innovations claimed by the present invention allow for the design of a greatly enhanced communication and control device for people with verbal and motor disabilities. The device can be used independently and flexibly, with minimum strain, and in a manner which allows the user to conduct conversations in a natural and portable setting.

WE CLAIM
1. A communication and control device for non-verbal persons comprising:
a. a computing unit (4) having a predictive and adaptive application configured to construct sentences using a single click,
b. a switch (5) integrated with the said computing unit (4) to generate the click to select language elements to complete a sentence,
c. a user interaction device interfaced with the said computing unit (4) to display and/or read out the language elements,
d. a text-to-speech synthesizer (18) to convert the sentence to a series of audio signals, and an audio codec (14) to convert the audio signals into analog form, and
e. a communication unit configured to control the environment around the person.
2. The communication and control device as claimed in claim 1, wherein the communication and control device comprises an audio amplifier (17) to amplify the audio signals, a speaker (2) mounted on an enclosure of the communication device to read out the amplified audio signals to the interlocutor, and a mount (3) provided on the enclosure for mounting the device.
3. The communication and control device as claimed in claim 1, wherein the computing unit (4) is selected from a group comprising microprocessors or microcontrollers selected from a group comprising Intel x86, ARM, MIPS and Digital Signal Processors (DSPs).
4. The communication and control device as claimed in claim 1, wherein the user interactive device (1) is selected from a group comprising an LCD display, a CRT display and an electronic paper display to display the language elements.
5. The communication and control device as claimed in claim 1, wherein the user interactive device for the visually impaired comprises a secondary audio channel via headphones (16).
6. The communication and control device as claimed in claim 1, wherein the switch (5) is connected to the computing unit (4) through a communication interface selected from a group comprising USB interface (13), general purpose input pins, serial ports, wireless connection (13) and combinations thereof.
7. The communication and control device as claimed in claim 1, wherein the switch (5) is mounted in a location that is convenient to the user including on a wheelchair or on a bed or on a desk.
8. The communication and control device as claimed in claim 1, wherein the computing unit (4) employs a prediction-based scanning system for scanning the language elements.
9. The communication and control device as claimed in claim 8, wherein the language elements are selected from a group comprising pictures, words, sentences, alphabets and combinations thereof.
10. The communication and control device as claimed in claim 1, wherein the switch (5) acts as an input to the device and is actuated in a plurality of ways selected from a group comprising contact switches, non-contact switches, wireless switches, modulation of breathing, eye-pointing/eye-tracking, face tracking, tilt sensors, twist sensors and combinations thereof.
11. The communication and control device as claimed in claims 1 and 10, wherein a plurality of states of a switch, or at least one switch of the aforesaid types, is used as an input to the device.
12. The communication and control device as claimed in claim 1, wherein the device includes power supply (11) circuitry that does not turn OFF the device due to multiple or spurious clicks caused by inadvertent motion.
13. The communication and control device as claimed in claim 1, wherein the device optionally provides other output mechanisms selected from a group comprising printing on paper, redirecting to a computer, sending signals over a network (12), displaying on a computer monitor or television, storing in a memory and combinations thereof.
14. The communication and control device as claimed in claim 1, wherein the device is portable, language independent and provides the ability for a person with disabilities to use it without any external assistance.

15. The communication and control device as claimed in claim 1, wherein the environment around the person comprises electric switches and wheelchair motion.
16. A method of operating a communication and control device for disabled persons comprising the steps of:
a. pressing a switch (5) for activating the communication device and for displaying and/or reading out language elements on the user interaction device,
b. scanning through the elements to reach a particular element that the user prefers to communicate,
c. clicking the switch (5) to select the element(s) for framing a sentence, and
d. converting the framed sentence into a series of audio signals, and thereafter into amplified analog signals, to read out the sentence.
17. The method as claimed in claim 16, wherein the language elements are selected from a group comprising pictures, words, phrases, sentences, alphabets and combinations thereof.
18. The method as claimed in claim 16, wherein the scanning highlights one of the language elements on the display (1) in a visual way and the highlight moves from element to element, pausing at each element for a predetermined time interval.
19. The method as claimed in claim 16, wherein the framing of the sentence is carried out by clicking the switch (5) a plurality of times to select the particular element(s) that the user prefers to communicate.
20. The method as claimed in claim 16, wherein the method provides options to signify completion of sentences, erasure, entering spell mode and special commands.
21. The method as claimed in claim 20, wherein the special commands are selected from a group comprising increasing and decreasing the volume, increasing and decreasing the speed, shutting down the device, and controlling the characteristics of the input and output devices.
22. The method as claimed in claim 16, wherein the language elements are commands that control elements of the user's environment selected from a group comprising lighting switches, fan switches, air conditioner settings, wheelchair settings, bed orientation, buzzers, computers and a plurality of electrical and mechanical equipment.

23. A communication device for disabled persons and a method of operating it, substantially as herein described with reference to the accompanying drawings.

Documents

Application Documents

# Name Date
1 2592-che-2008 form-5.pdf 2011-09-04
2 2592-che-2008 form-3.pdf 2011-09-04
3 2592-che-2008 form-18.pdf 2011-09-04
4 2592-che-2008 form-1.pdf 2011-09-04
5 2592-che-2008 drawings.pdf 2011-09-04
6 2592-che-2008 description (complete).pdf 2011-09-04
7 2592-che-2008 correspondence others.pdf 2011-09-04
8 2592-che-2008 claims.pdf 2011-09-04
9 2592-che-2008 abstract.pdf 2011-09-04
10 2592-CHE-2008 FORM-13 17-07-2012.pdf 2012-07-17
11 2592-CHE-2008 FORM-1 17-07-2012.pdf 2012-07-17
12 2592-CHE-2008 CORRESPONDENCE OTHERS 17-07-2012.pdf 2012-07-17
13 2592-CHE-2008_EXAMREPORT.pdf 2016-07-02