
“A Glove System For Two Way Communication And An Apparatus Thereof”

Abstract: The present invention discloses a glove system (S) for two-way communication and an apparatus thereof. The invention utilizes flex sensors (SM1), a 3-axis accelerometer (SM2), and a gyroscope sensor (SM3) to accurately detect hand and finger movements. The said system (S) and apparatus comprise a micro processing unit (MPU), a Machine Learning - Random Forest Algorithm, and a speech synthesizer (SC3) to generate speech output for communicating with the general public. The said system (S) also employs an array of microphones (HC2) and a Deep Learning Algorithm to produce readable text output on an LCD display (HC1). The glove features a Deep Learning Model that identifies the direction of a sound, based on wake word detection, and displays it visually to the user using LEDs (HC3). The present glove system and apparatus enable effective communication between the speech and hearing impaired community and the general public, leveraging technologies such as data analysis and Machine Learning Algorithms. Figure 4


Patent Information

Application #
202341080603
Filing Date
28 November 2023
Publication Number
51/2023
Publication Type
INA
Invention Field
COMPUTER SCIENCE
Status
Parent Application

Applicants

AMRITA VISHWA VIDYAPEETHAM
Amrita Vishwa Vidyapeetham, Bengaluru Campus, Kasavanahalli, Carmelaram P.O., Bangalore – 560035, India

Inventors

1. KULANGARAVALIPPIL, H Akhil
No. 60, Anukhil, 2nd Cross, BSV Park, Bileshivale, Doddagubbi PO, Bangalore, Karnataka 560077
2. SREEJESH, Rhethika
No. 60, 6th Main, 2nd Cross, 2nd Phase, Amrita Nagar, Kasavanahalli, Bangalore 560035
3. THARAYIL, H Hrithik
No. 25, Shree Hari Tharayil, Behind Apoorva Apartment, D.B.S, Vidyaranyapura, Bangalore-97 Karnataka 560097
4. KALIYAPERUMAL, Deepa
424, 4th floor , Sri Sai Acropolis, Hosa Road, Naganathapura Bangalore, Karnataka 560100

Specification

Description:
FIELD OF THE INVENTION
The present invention relates to a glove system for two-way communication and an apparatus thereof. More particularly, the present invention relates to a glove system, and an apparatus thereof, which is able to detect hand and finger movements in real time. The said system measures the hand signals to recognize sign language using sensors and converts them into audio, and converts an external person’s speech into text, using a Machine Learning - Random Forest Algorithm and a Deep Learning Algorithm.

BACKGROUND OF THE INVENTION
Deaf and non-vocal individuals encounter significant obstacles when communicating with hearing people who are unfamiliar with sign language or finger spelling. This challenge is especially difficult for those who cannot speak clearly or read lips, as it hinders their interaction with hearing individuals lacking sign language knowledge. Deaf-blind individuals face even more difficulties as their communication relies on physical touch. These communication barriers negatively impact personal relationships and professional opportunities, often leading to unemployment and dependency.

The common methods used by non-vocal, deaf, or deaf-blind individuals to communicate with hearing individuals unfamiliar with sign language are interpreters or written notes. However, both methods have drawbacks: access to interpreters is limited, and writing notes can be tedious. An alternative communication method uses signs and gestures, which are characterized by hand shape, location, movement, and palm orientation. Hand shape and palm orientation together form what is called a "posture". Recognizing this posture and converting the corresponding sign language into readable text or audio is called sign language recognition.

Existing systems for recognizing sign language primarily use two approaches: vision-based techniques with image processing, or gloves with sensors and microcontrollers. The image processing approach captures gestures with a camera and analyzes them using algorithms. However, this technique requires complex algorithms for gesture detection, depends on suitable lighting and backgrounds, and is constrained by field-of-view limitations.

A glove-based system employs accelerometers, gyroscopes, and flex sensors to detect hand movements. A number of publications, including patent and non-patent documents, exist in this domain, as discussed below.

In one of the published patents, US20100023314A1, titled “ASL Glove with 3-Axis Accelerometers”, Hernandez-Rebollar Jose describes a glove with 3-axis accelerometers on the fingers and the back of the palm, an analog multiplexer, and a programmable microcontroller to detect hand postures of American Sign Language and send them to a host via serial communication. However, the apparatus designed is bulky and covers the arm and shoulder for better reading of the entire hand gesture, which makes the glove hard to use and uncomfortable for the user. Also, the invention does not address the sound source localization problem to alert the user about the direction of incoming sound.

In another, non-patent document, titled “Techno-Talk: An American Sign Language (ASL) Translator”, Arslan Arif, Syed Tahir Hussain Rizvi, Iqra Jawaid, and Muhammad Adam Waleed describe a portable electronic hand glove designed for deaf/mute individuals to communicate effectively, using five flex sensors to detect sign variations, an accelerometer to distinguish between static and dynamic signs, a contact sensor, an Arduino Mega 2560 for data processing, and a voice-box shield, LCD, and speaker for visual and audible outputs. However, the signs are identified based on a predefined range of values, and the apparatus lacks a safety indicator to alert the user in critical situations.

In another prior art, IN201841026260A, titled “A Sign Language Translator Glove” by B. Krishna Moorthy, Dr. K. Palanikumar, and K. C. Suresh, the invention aims to empower the mute community by converting gestures into speech-based communication. It utilizes flex and gyroscopic sensors, data analytics, Machine Learning, and a computing device to achieve real-time output. However, the system lacks two-way communication using a microphone with wake-word detection, and a user alert system that gives visual cues to the user in critical situations.

Existing inventions in this field have not fully utilized data analytics and Machine Learning techniques to achieve precise outputs based on sensor readings. Further, existing glove systems lack notification of predefined danger signals.

In order to obviate the drawbacks in the existing state of the art, the present invention provides an accurate glove system that enables two-way communication for the deaf and dumb community and provides superior results by integrating Machine Learning methodologies with sensor outputs.

OBJECT OF THE INVENTION
In order to overcome the shortcomings in the existing state of the art, the present invention provides a real time, safety driven, two-way communication-based glove system and apparatus thereof.

Yet another objective of the invention is to provide a system and apparatus capable of converting hand sign gestures into the desired corresponding audible format.

Yet another objective of the invention is to provide a system that measures the captured gestures of the wrist, palm, and fingers through the utilization of flex, gyroscopic, and 3-axis accelerometer sensors positioned at appropriate locations on the glove.

Yet another objective of the invention is to provide a customizable system and an apparatus, adaptable to different sign languages by using Machine Learning Model - Random Forest Algorithm, making it accessible to a wider range of users.

Yet another objective of the invention is to provide synthesized voice output for the corresponding identified hand gestures using an integrated speaker.

Yet another objective of the invention is to provide a system and apparatus to convert an external person’s speech into text format, allowing users to easily read the recorded content on the LCD equipped in the glove system.

Yet another objective of the present invention is to provide a system and apparatus that utilizes arrays of microphones to receive, record, and identify sound waves or speech emitted by an external person.

Yet another objective of the present invention is to provide a system and apparatus to identify predefined wake word(s), to alert users about dangerous situations.

Yet another objective of the present invention is to provide a glove system equipped with LEDs that indicate the direction from which the voice/danger signal is detected using a Deep Learning Model.

Yet another objective of the present invention is to provide a system that includes at least one glove designed for capturing at least one hand’s gestures performed by individuals with speech and hearing-impairments.

Yet another objective of the present invention is to provide a system comprising at least one micro processing unit for analyzing and executing the computer programs and converting the received analog signals from connected electronic components into digital form.

Yet another objective of the present invention is to provide a compact and portable glove system and apparatus with integrated power source for two-way communication.

SUMMARY OF THE INVENTION
The present invention relates to a glove system for two-way communication and an apparatus thereof.

The present invention discloses a portable glove system (S) for two-way communication designed to address the needs of the deaf and dumb community. It incorporates advanced technologies such as flex sensors (SM1), a 3-axis accelerometer (SM2), and a gyroscope sensor (SM3) to detect hand and finger movements accurately. These sensor readings are processed by an Arduino microcontroller (MPU1) and transmitted to a Raspberry Pi (MPU2), where a Machine Learning Algorithm and a speech synthesizer (SC3) generate speech output.

Additionally, the glove features an array of microphones (HC2) for speech input, which is converted into text using a Machine Learning Model on the Raspberry Pi (MPU2). The converted text is displayed on an LCD screen (HC1). The present glove system also includes wake word detection, enabling it to identify the direction of a danger-alert sound using a Deep Learning Model, which is indicated by LEDs (HC3) placed around the glove.

The present invention uses various software to achieve the object of the invention. The software is selected from, but not limited to: Thonny, a Python IDE used for implementing Machine Learning with the Random Forest Algorithm; a speech synthesizer (SC3); and the Arduino IDE, which is employed to program the Arduino Mega 2560 microcontroller board (MPU1).

BRIEF DESCRIPTION OF DRAWINGS
Figure 1 depicts a flow chart of the system (S) architecture.
Figure 2 depicts a flow chart of the speech recognition.
Figure 3 depicts a flow chart for wake word detection and speech-to-text conversion.
Figure 4 depicts the glove system (S) prototype.

DETAILED DESCRIPTION OF THE INVENTION WITH ILLUSTRATIONS AND EXAMPLES

While the invention has been disclosed with reference to certain embodiments, it will be understood by those skilled in the art that various changes may be made, and equivalents may be substituted, without departing from the scope of the invention. In addition, many modifications may be made to adapt a particular situation or material to the teachings of the invention without departing from its scope.

Throughout the specification and claims, the following terms take the meanings explicitly associated herein unless the context clearly dictates otherwise. The meaning of “a”, “an”, and “the” include plural references. Additionally, a reference to the singular includes a reference to the plural unless otherwise stated or inconsistent with the disclosure herein.

Table 1: Legend of reference numerals

Ser. no. | Item description               | Reference numeral
1        | Glove system                   | S
2        | Sensor module                  | SM
         | Flex sensor                    | SM1
         | 3-axis accelerometer           | SM2
         | Gyroscope                      | SM3
3        | Micro processing unit          | MPU
         | Arduino                        | MPU1
         | Raspberry Pi                   | MPU2
4        | Electronic hardware component  | HC
         | LCD display                    | HC1
         | Microphone arrays              | HC2
         | LEDs                           | HC3
         | Speaker                        | HC4
         | Battery                        | HC5
         | Memory                         | HC6
5        | Software component             | SC
         | Thonny                         | SC1
         | Arduino IDE                    | SC2
         | Speech synthesizer             | SC3
6        | External person                | ES
7        | Preset word                    | PSW

Some of the technical terms used in the specification are elaborated as below:

The term Machine Learning - Random Forest Algorithm used in the description is a specific type of Machine Learning Algorithm that falls under the category of ensemble learning methods. It is used for both classification and regression tasks. In the context of the speech synthesizer implementation, the Random Forest Algorithm is utilized here for tasks such as hand gesture recognition or predicting phonemes from the received hand signals.
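
By way of illustration only, a minimal training sketch for such a classifier is given below, in Python with scikit-learn; the dataset file, its layout (five flex readings plus three accelerometer and three gyroscope values per frame), and the label set are assumptions for illustration, not taken from the specification.

    # Illustrative sketch: training a Random Forest gesture classifier.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    # Hypothetical CSV: 11 sensor values per row, followed by a gesture label.
    data = np.loadtxt("gesture_dataset.csv", delimiter=",", dtype=str)
    X = data[:, :11].astype(float)   # 5 flex + 3 accelerometer + 3 gyroscope values
    y = data[:, 11]                  # gesture label, e.g. "A", "B", ...

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
    model = RandomForestClassifier(n_estimators=100)
    model.fit(X_train, y_train)
    print("held-out accuracy:", model.score(X_test, y_test))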

The Deep Learning Model in the Raspberry Pi (MPU2) board refers to the usage of Deep Learning techniques for wake word detection, based on the preset word (PSW), also referred to as a “trigger word”, “special word”, or “alert word”. The said preset word (PSW) refers to a specific word or phrase that is pre-established or predefined and which the model is trained to recognize in real-time conversation. The term wake word detection used in this specification refers to the detection of the preset word (PSW) in real-time conversation. The said wake word detection triggers a command to initiate a specific action or function. The Deep Learning Model on the Raspberry Pi board is configured to recognize wake words and convert the spoken words of the external person (ES) into visible text for the user.

A glove system (S) is a wearable apparatus for wearing on at least one hand, with a palm section, dorsal surface covering, finger compartments, and an optional wrist cuff. The said apparatus is constructed using materials including, but not limited to, leather, fabric, synthetics, or any other suitable materials. The said system includes accessibility features such as adjustable sizing and ergonomic design to ensure that the glove fits comfortably on different hand sizes and shapes.

The said system (S) and apparatus incorporate a combination of electronic components (HC) and software components (SC) to enable their functionality, as outlined below:

The said system (S) comprises electronic components such as a sensor module (SM), and the said sensor module (SM) further comprises flex sensors, a 3-axis accelerometer, and a gyroscope. The said flex sensors (SM1) detect finger bending and provide an analog output proportional to the degree of bending. These sensors are integrated into the glove design, enabling the measurement of flexion for each finger. The system (S) employs a 3-axis accelerometer (SM2) and gyroscope sensor (SM3) to capture hand orientation, direction, and speed of hand movements.

The term microphone arrays (HC2) used in this specification denotes a combination of microphones integrated with the glove system (S) to receive sound waves spoken by an external person (ES) communicating with the speech and hearing-impaired user. The said microphone arrays (HC2) serve as the primary input provider for wake word detection and speech-to-text conversion. They are serially connected with the micro processing unit (MPU1), facilitating subsequent processing.

The term micro processing unit (MPU) used in this specification comprises an Arduino Mega 2560 microcontroller (MPU1) and a Raspberry Pi (MPU2). The said system utilizes the Arduino (MPU1) microcontroller board for further processing and analyzing of the received analog signals from the sensor module (SM) and microphone arrays (HC2). It establishes communication with the Raspberry Pi (MPU2) microprocessor board via serial communication through USB. Additionally, it functions as an analog-to-digital converter for the Raspberry Pi.
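
A minimal sketch of the Raspberry Pi side of this serial link is shown below, assuming the pyserial package and that the Arduino emits one comma-separated line of digitized sensor values per frame; the port name, baud rate, and frame format are assumptions for illustration.

    # Illustrative sketch: reading digitized sensor frames sent by the Arduino (MPU1).
    import serial

    ser = serial.Serial("/dev/ttyUSB0", 9600, timeout=1)   # assumed port and baud rate
    while True:
        line = ser.readline().decode("ascii", errors="ignore").strip()
        if not line:
            continue
        # e.g. "512,498,530,505,520,0.10,-0.20,9.80,0.01,0.02,0.00"
        frame = [float(v) for v in line.split(",")]
        print(frame)   # one 11-value sensor frame, ready for classification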

The said Raspberry Pi (MPU2) used in the present invention processes the data transmitted from the Arduino board (MPU1). It utilizes a speech synthesizer module (SC3) and the Random Forest Machine Learning Algorithm to generate speech output. Moreover, the Raspberry Pi (MPU2) board deploys a Deep Learning Model for wake word detection and speech-to-text conversion.

The term speech synthesizer module (SC3) used in this specification is a computer program which provides speech or audio output based on generated text. It takes text as input and converts it into audible speech for the external person (ES).

The speaker (HC4) integrated with the present invention provides audible output corresponding to the identified sign language. The speaker (HC4) effectively delivers the synthesized voice output based on the hand signal identification using the Random Forest Algorithm.

LCD display (HC1) is employed in the system (S) to display processed speech as text for the user. To reduce the number of pins required for connection to the Raspberry Pi, an I2C interface adapter is included.

The term I2C interface adapter used in the specification is a specialized device designed to facilitate communication between multiple devices by utilizing the Inter-Integrated Circuit (I2C) protocol. The present invention uses the said adapter to conveniently manipulate and manage the LCD panel's (HC1) functionalities, such as displaying text or graphics, without the need for complex wiring or numerous input/output pins.
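
As an illustrative sketch only, the RPLCD Python library (one common choice; the specification does not name a library) can drive such an I2C-connected LCD; the expander chip and I2C address below are typical values assumed for illustration.

    # Illustrative sketch: writing converted text to the I2C LCD (HC1).
    from RPLCD.i2c import CharLCD

    lcd = CharLCD(i2c_expander="PCF8574", address=0x27, cols=16, rows=2)
    lcd.clear()
    lcd.write_string("HELLO")   # text produced by the speech-to-text model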

LEDs (HC3) are strategically placed around the glove to indicate the direction from which the wake word was detected. The LED corresponding to the detected direction illuminates, providing a visual cue to the user.

Thonny (SC1), a widely used Python IDE, is employed on the Raspberry Pi (MPU2) for implementing Machine Learning using the Random Forest Algorithm within the glove. This Python IDE comes pre-installed on the Raspberry Pi (MPU2) operating system, providing a convenient environment for development.

The Arduino Integrated Development Environment (IDE) (SC2), an open-source software platform, is utilized for programming and software development for Arduino boards (MPU1). With its user-friendly interface, the Arduino IDE simplifies writing, compiling, and uploading code to the Arduino Mega 2560 microcontroller board (MPU1). In the glove system (S), it is employed to program the Arduino board (MPU1) to receive inputs from the flex sensors (SM1) and gyroscope sensor (SM3), and to transmit this data serially to the Raspberry Pi (MPU2) board.

The following detailed description provides insights into each component and the working of the system:

The working of said system and apparatus can be divided into four phases:
• Hand gesture recognition using sensors and audio output phase.
• Speech recognition and speech to text conversion phase.
• Wake-word detection phase.
• Sound-source identification and user alert phase.
The hand gesture recognition using sensors and audio output phase mentioned in the description of the glove system (S) defines the process of detecting hand gestures and translating or converting them into speech.

The system (S) includes a plurality of sensors, collectively referred to as the sensor module (SM), to detect the dynamic movement and position of the at least one said hand. The flex sensors (SM1) detect the movement of the fingers, while the 3-axis accelerometer (SM2) and gyroscope sensor (SM3) detect hand motion, speed, and orientation; these are connected to the micro processing unit (MPU), which converts the received sensor signals into speech. The prototype of the invention is shown in Figure (4).

The flex sensors (SM1) used in the present invention are mounted over the thumb, index, middle, ring, and little fingers as shown in Figure (4). They convert the finger bend into electrical resistance: the greater the bend, the higher the resistance value. Using flex sensors (SM1) makes the apparatus lightweight and comfortable to use. Inside the flex sensor (SM1) there is a carbon resistive element; the sensor produces a resistance output related to the bend radius.
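
Although the specification does not describe the readout circuit, flex sensors of this kind are commonly sampled through a simple voltage divider: a fixed resistor R_fixed is placed in series with the flex sensor R_flex across the supply voltage Vcc, and the analog input measures

    Vout = Vcc × R_fixed / (R_flex + R_fixed)

so that increased bending (higher R_flex) produces a lower measured voltage.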

The system (S) employs the 3-axis accelerometer (SM2) and gyroscope (SM3) to determine the three directional orientations for detecting the motion, elevation, and position of the at least one human hand encompassed by the said glove including the said system (S), for the comprehensive determination of the said hand gesture.

The sensor module (SM) is connected serially with the micro processing unit (MPU), which includes the Arduino (MPU1) and Raspberry Pi (MPU2) connected via USB, as shown in Figure (1), for hand gesture recognition.

In the said micro processing unit (MPU), the Arduino microcontroller board (MPU1) also acts as an analog-to-digital converter for the Raspberry Pi (MPU2), converting the received analog values of the sensor module (SM) into digital form. This data is then fed serially to the Raspberry Pi board (MPU2), as shown in Figure (1), as an input, using the Arduino IDE (SC2).
The said Raspberry Pi (MPU2) is equipped with a Random Forest Machine Learning Algorithm and speech synthesizer (SC3), which run on its processing unit. The said algorithm and synthesizer (SC3) analyze the sensor data received from the Arduino (MPU1) to generate speech output. The said Machine Learning Model of the present invention is trained using the relevant sign language dataset for which translation is required. In one of the embodiments of the present invention, the said model supports various other sign languages, for which the model can be trained using the relevant datasets.

This training involves feeding the model with a large dataset of hand gesture samples and their corresponding ASCII characters or sign language meanings. After detecting a gesture, the system converts it into computer-readable characters. For instance, if the recognized gesture represents the letter "A", it is converted to the ASCII representation of "A" (65 in decimal), as shown in Figure (2). The model learns to recognize and associate specific hand gestures with their corresponding characters or meanings. In Figure (2), the term “free state” refers to the rest or default position of the said hand, for which no translation is performed.
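
A minimal sketch of this recognition-and-encoding step is given below; the saved model file and the free-state label name are hypothetical.

    # Illustrative sketch: classifying one sensor frame and encoding the result.
    import joblib

    model = joblib.load("gesture_rf.joblib")   # hypothetical saved Random Forest model
    frame = [512, 498, 530, 505, 520, 0.10, -0.20, 9.80, 0.01, 0.02, 0.00]
    label = model.predict([frame])[0]          # e.g. "A", or "FREE" in the free state
    if label != "FREE":                        # "FREE" is an assumed free-state label
        print(label, "->", ord(label))         # ASCII representation, e.g. "A" -> 65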

After detecting and converting sign language into text, the system (S) utilizes the speech synthesizer module (SC3) to generate speech output for the external person (ES). The converted characters are processed by the speech synthesizer module (SC3) to produce synthesized voice output. The present system (S) provides audio output in the English language with an Indian accent. In another embodiment of the invention, the system (S) may also support non-English languages, including regional languages, by training the model with the corresponding language datasets. To enable others to hear the spoken representation of the sign language, a speaker (HC4) is connected to the system (S).
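
As a hedged sketch, a text-to-speech step of this kind can be implemented with the pyttsx3 package (one common offline choice; the specification does not name the synthesizer library), with the accent depending on the voices the underlying engine exposes.

    # Illustrative sketch: speaking the recognized text through the speaker (HC4).
    import pyttsx3

    engine = pyttsx3.init()
    # An Indian English voice, if installed, would be selected here from
    # engine.getProperty("voices"); availability depends on the platform.
    engine.say("HELLO")   # text assembled from the recognized gestures
    engine.runAndWait()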

In the present invention, for speech recognition and speech-to-text conversion, the glove system (S) features an array of microphones (HC2) that captures the external person’s (ES) speech as input. The live speech of the external person (ES) is recorded and fed to the Arduino microcontroller (MPU1) for analog-to-digital conversion. A memory card (HC6) associated with the micro processing unit (MPU) stores the recorded live audio of the external person (ES) for further processing.

The said recorded live audio is utilized as input and passed to a Deep Learning Model implemented on the Raspberry Pi (MPU2). This model incorporates natural language processing capabilities to recognize speech and subsequently convert it into readable text. The converted text is displayed on an LCD screen (HC1) integrated into the glove. This allows the user to visually perceive the generated text, enabling effective two-way communication. The present invention incorporates a lithium-ion battery (HC5) within the system (S) to serve as its power source, enabling the overall functionality of the system; this power source can be improved in the future.
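
The specification does not identify the speech-recognition model; as one plausible offline sketch for a Raspberry Pi, the Vosk library can transcribe the stored recording before the text is shown on the LCD (the recording file and model directory names below are assumptions).

    # Illustrative sketch: offline speech-to-text on the stored recording.
    import json
    import wave
    from vosk import Model, KaldiRecognizer

    wav = wave.open("recorded_speech.wav", "rb")   # hypothetical recording from HC2
    rec = KaldiRecognizer(Model("vosk-model-small-en-in"), wav.getframerate())
    while True:
        chunk = wav.readframes(4000)
        if not chunk:
            break
        rec.AcceptWaveform(chunk)
    text = json.loads(rec.FinalResult())["text"]
    print(text)   # shown to the user on the LCD (HC1)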

In addition to its previously mentioned components and functionalities, the system (S) also includes an advanced wake word detection feature using the microphones (HC2) attached to the glove system (S). The array of microphones (HC2) records the external sound, which helps to recognize a preset word (PSW); this could be the user’s name or any other “special word”, “alert word”, or “trigger word” that can be set to alert the user about danger when spoken aloud.

The wake word detection is implemented using a Deep Learning Model, as shown in Figure (3), which first converts the received external audio into text format. The Deep Learning Model is trained to identify the preset word (PSW), and the detection of the said preset word (PSW) is referred to in this description as wake word detection. The Deep Learning Model matches the textual representation of the processed live recorded sound against the preset word (PSW), enabling the detection of a specific wake word. In one of the embodiments of the present invention, multiple preset words can be set for different alerts, such as for environmental awareness of the user. The system (S) can detect and indicate the origin of specific sounds, such as sirens, alarms, or important announcements, helping users stay informed in public spaces.
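
A minimal sketch of this matching step, assuming the transcript produced by the speech-to-text stage and a hypothetical set of preset words (PSW):

    # Illustrative sketch: matching a transcript against preset words (PSW).
    PRESET_WORDS = {"help", "fire", "danger"}   # hypothetical PSW set

    def detect_wake_word(transcript):
        # Return the first preset word found in the transcribed speech, else None.
        for word in transcript.lower().split():
            if word in PRESET_WORDS:
                return word
        return None

    print(detect_wake_word("there is a fire outside"))   # -> "fire"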

Once the wake word is detected, the glove system (S) determines the direction from which the wake word originated, as shown in Figure (3). This is achieved through analysis of the audio input using the Deep Learning Model, where the amplitudes of the recorded audio received via the array of microphones (HC2) are compared, and the highest resulting amplitude denotes the direction of the incoming sound. The direction information is then displayed to the user through strategically placed LEDs (HC3) around the glove.
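
As a hedged sketch of this amplitude comparison, assuming one audio channel per microphone and one LED per microphone direction (the channel layout is an assumption):

    # Illustrative sketch: choosing the loudest microphone channel to pick an LED (HC3).
    import numpy as np

    def loudest_channel(channels):
        # channels: list of 1-D sample arrays, one per microphone in the array (HC2).
        rms = [np.sqrt(np.mean(np.square(c.astype(float)))) for c in channels]
        return int(np.argmax(rms))   # index of the direction with the highest amplitude

    # The returned index selects the LED to illuminate, e.g. via the Pi's GPIO pins.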

In one of the embodiments of the present invention, it is understood by a person skilled in the art that the integrated LEDs can be replaced with an integrated display or similar technology. This would provide more detailed information about the direction and distance of the sound source, improving the user’s ability to locate the origin of the signal.

In one of the embodiments of the present invention, a multi-sensory alert can be associated with the system (S) to enhance the user’s awareness; the system (S) could incorporate other sensory alerts such as vibration alerts. The system (S) may provide options such as different vibration patterns for different preset words (PSW), to provide a more comprehensive and intuitive experience for the users, allowing them to better understand the nature and urgency of the danger. Along with this, the user may set specific hand gestures or movements of the hand to trigger different actions or commands. For example, in one embodiment of the present invention, the system (S) may be integrated with different communication technologies such as Bluetooth, Wi-Fi, Infrared, or any other similar wireless communication technology.

The said communication technology may enable the said glove system (S) to interact with other smart devices. The audio output from the speaker (HC4) integrated with the system (S) may also help to use voice-enabled features of smart devices such as a smartphone, where the output audio may be in the form of instructions for the smartphone to perform certain tasks, such as calling, searching, or any other similar tasks.

In one of the embodiments of the present invention, customizable alerts may also be provided to the users; options such as adjusting LED brightness, vibration intensity, or choosing specific alert patterns would cater to individual needs and preferences.
Claims:
WE CLAIM:
1) A glove system (S) for two-way communication comprising:
- Sensor module (SM) integrated into the glove;
- Micro-processing unit (MPU) for analyzing, processing, and converting the analog signals received from the sensor module (SM) and microphone (HC2) arrays into digital signals;
- Speech synthesizer (SC3) to render the processed hand gestures in the desired audio format for an external person (ES) using the speaker (HC4) attached to the system;
- Microphone (HC2) arrays for live sound recording and a Deep Learning Algorithm for real time speech recognition using the said micro processing unit (MPU) to generate readable text output;
- LCD (HC1) panel to display generated text to the user;
- LEDs (HC3)/display for sound direction indication to visually alert the user about dangerous situations, based on the wake-word detection feature;
for converting hand sign gestures into the desired corresponding audible format.
2) The glove system (S) for two-way communication as claimed in claim 1, wherein the said system comprises at least one glove for capturing the hand gestures performed by the speech and hearing-impaired user.
3) The glove system (S) for two-way communication as claimed in claim 1, wherein the said sensor module (SM) comprises at least five flex sensors (SM1) mounted on the thumb, index, middle, ring, and little fingers, a 3-axis accelerometer (SM2), and a gyroscope sensor (SM3) integrated into the glove.
4) The glove system (S) for two-way communication as claimed in claim 1, wherein the said sensor module (SM) provides analog hand signal output to micro processing unit (MPU).
5) The glove system (S) for two-way communication as claimed in claim 1, wherein the said micro processing unit (MPU) is selected from Arduino microcontroller (MPU1) and Raspberry-Pi (MPU2).
6) The glove system (S) for two-way communication as claimed in claim 5, wherein the said Arduino microcontroller (MPU1) is configured to perform analog-to-digital conversion of the received hand signals for the Raspberry-Pi (MPU2).
7) The glove system (S) for two-way communication as claimed in claim 1, wherein said micro-processing unit (MPU) comprises the Machine Learning - Random Forest Algorithm to recognize digital hand signals and convert them into a corresponding stream of readable text.
8) The glove system (S) for two-way communication as claimed in claim 1, wherein said speech synthesizer (SC3) provides the audio output for corresponding stream of readable text using speaker (HC4).
9) The glove system (S) for two-way communication as claimed in claim 1, wherein said microphone arrays (HC2) are configured to receive live sound and record it in the memory (HC6) associated with the micro processing unit (MPU).
10) The glove system (S) for two-way communication as claimed in claim 1, wherein said Deep Learning Model is employed for speech recognition and speech-to-text conversion of the said recorded live sound.
11) The glove system (S) for two-way communication as claimed in claim 1, wherein the said Deep Learning Model recognizes the said preset word(s) (PSW) in real time for wake word detection.
12) The method for speech-to-text conversion in the working of the glove system as claimed in claim 1, comprising:
- receiving the live sound wave as spoken by an external person via the microphone (HC2) arrays integrated into the glove system (S);
- storing the received sound wave in a memory (HC6) associated with the microcontroller;
- recognizing the speech by the Deep Learning Model using natural language processing capabilities and converting the recognized speech into corresponding readable text for the user;
- displaying the processed text to the user on the LCD panel (HC1)/display incorporated with the glove system (S).
13) A method as claimed in claim 14 for wake word detection, comprising the following steps:
- receiving audio inputs from the microphone arrays (HC2) and feeding them to the microcontroller (MPU1) for analog-to-digital conversion;
- converting the received processed digital signals into corresponding text format by the Deep Learning Model in the Raspberry-Pi (MPU2) using language processing capabilities;
- identifying and processing the text as received from the Deep Learning Model in the Raspberry-Pi (MPU2) and comparing the same with the preset words;
- alerting the user visually with the help of the LEDs, in case the text matches the preset words;
- lighting the LED panel (HC3)/display as a directional indicator indicating the source of the spoken preset word(s).
14) An apparatus for converting hand sign gestures into the desired corresponding audible format as claimed in claim 1, wherein said apparatus comprises:
- a glove system wearable on at least one hand, with a palm section, dorsal surface covering, finger compartments, and an optional wrist cuff;
- electronic hardware components (HC) and software components (SC);
wherein said apparatus includes accessibility features such as adjustable sizing and ergonomic design to ensure that the glove fits comfortably on different hand sizes and shapes.

Documents

Application Documents

# Name Date
1 202341080603-STATEMENT OF UNDERTAKING (FORM 3) [28-11-2023(online)].pdf 2023-11-28
2 202341080603-FORM FOR SMALL ENTITY(FORM-28) [28-11-2023(online)].pdf 2023-11-28
3 202341080603-FORM 1 [28-11-2023(online)].pdf 2023-11-28
4 202341080603-FIGURE OF ABSTRACT [28-11-2023(online)].pdf 2023-11-28
5 202341080603-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [28-11-2023(online)].pdf 2023-11-28
6 202341080603-EDUCATIONAL INSTITUTION(S) [28-11-2023(online)].pdf 2023-11-28
7 202341080603-DRAWINGS [28-11-2023(online)].pdf 2023-11-28
8 202341080603-DECLARATION OF INVENTORSHIP (FORM 5) [28-11-2023(online)].pdf 2023-11-28
9 202341080603-COMPLETE SPECIFICATION [28-11-2023(online)].pdf 2023-11-28
10 202341080603-FORM-9 [04-12-2023(online)].pdf 2023-12-04
11 202341080603-FORM-8 [04-12-2023(online)].pdf 2023-12-04
12 202341080603-FORM 18 [04-12-2023(online)].pdf 2023-12-04
13 202341080603-Proof of Right [19-12-2023(online)].pdf 2023-12-19
14 202341080603-FORM-26 [19-12-2023(online)].pdf 2023-12-19
15 202341080603-ENDORSEMENT BY INVENTORS [19-12-2023(online)].pdf 2023-12-19