Abstract: Sign language is the primary means of communication for speech-impaired people around the world, allowing them to express themselves through sign gestures. Most sign gestures have an associated word or feeling. Unfortunately, very few people can interpret these sign languages, which makes it difficult for the speech-impaired to express even their basic needs to others. This invention aims to produce an economical, wearable, electronic sensor-based system that can detect and recognize human hand gestures and convert the recognized gestures into the corresponding spoken words.
Description:
The entire system is developed as a series of processes to achieve the required results. As given in Figure 1, the entire system works as a nine-step process. In the first step, the user makes a gesture. In the second step, this gesture is recorded by the Inertial Measurement Units (IMUs) and captured by the Microcontroller Unit (MCU). In the third step, this data is sent by the MCU to the App via a Bluetooth Transmitter (BT). In the fourth step, the transmitted data is received by the smartphone App. In the fifth step, the data is sent by the App to a cloud server, where it is processed and the gesture is predicted in the sixth step. In the seventh step, the server sends the predicted word as a reply, which is converted to speech in the eighth step. In the ninth and final step, this speech is given out as output via the smartphone speaker.
The entire system can be broadly divided into three parts: the Data Acquisition System (DAS) (Figure 3), the Communications Interface System App (CISA) (Figure 4), and the Gesture Recognition Module (GRM) (Figure 5), each of which plays an important role in the nine-step process.
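The smartphone-side flow (steps five through nine) reduces to a small request-and-speak loop. The sketch below is a desktop Python analogue of it; the real system is an Android App using the platform Text-to-Speech engine, so the `requests` and `pyttsx3` libraries and the server URL here are illustrative assumptions, not part of the invention.

```python
# A minimal sketch of steps 5-9 of the pipeline (App -> cloud -> speech).
# The server URL and payload format are hypothetical.
import requests
import pyttsx3

SERVER_URL = 'http://example-cloud-server/predict'   # hypothetical endpoint

def speak_gesture(features):
    # Steps 5-6: send the recorded gesture readings to the cloud server,
    # which processes them and predicts the corresponding word.
    reply = requests.post(SERVER_URL, json={'features': features}, timeout=10)
    word = reply.json()['word']                      # step 7: predicted word
    # Steps 8-9: convert the word to speech and play it on the speaker.
    engine = pyttsx3.init()
    engine.say(word)
    engine.runAndWait()
```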
Claims:
1. A nine-step process for hand gesture detection, recognition, and translation into audible speech, comprising:
a user making a hand gesture to communicate a message;
recording of data by the IMUs and acquisition by the MCU;
transmitting the data by the MCU to the App via a Bluetooth Transmitter;
reception of the transmitted data by the App;
transmitting the data to a cloud server by the App over the Internet;
data processing and gesture prediction by the cloud server;
transmitting the predicted word as a response from the Server to the App;
conversion of the response word into speech in the App via a Text-to-Speech Engine; and
generating audible speech output from the smartphone speaker.
2. A system for hand gesture detection comprising an input assembly of Gesture Acquisition Elements, said input assembly comprising:
an Interpreter glove to be worn by the user, said glove comprising hex-axial sensors for detecting the dynamic hand movements of each finger and thumb, and a touch sensor to delimit the period of each gesture recording between two consecutive touches made by the user; and
a data preprocessor and transmitter unit, said unit comprising a microcontroller unit, a Bluetooth module for real-time gathering and streaming of said sensor readings to the interface application, a low-power positive voltage regulator, and batteries for power supply.
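A minimal MicroPython sketch of the glove-side firmware of claim 2 follows. The MPU-6050 IMU, HC-05 Bluetooth module, pin assignments, and 50 Hz sampling rate are assumptions for illustration (a single IMU stands in for the per-finger sensors); the patent does not fix these parts.

```python
# Hypothetical firmware for the Data Acquisition System of claim 2.
from machine import I2C, Pin, UART
import struct, time

MPU_ADDR = 0x68                             # assumed I2C address of the IMU
i2c = I2C(0, scl=Pin(22), sda=Pin(21))
i2c.writeto_mem(MPU_ADDR, 0x6B, b'\x00')    # wake the IMU from sleep
bt = UART(1, baudrate=9600, tx=Pin(17), rx=Pin(16))  # HC-05 on UART1
touch = Pin(4, Pin.IN, Pin.PULL_UP)         # touch sensor delimits a recording

def read_six_axes():
    # Registers 0x3B..0x48 hold accel X/Y/Z, temperature, gyro X/Y/Z.
    raw = i2c.readfrom_mem(MPU_ADDR, 0x3B, 14)
    ax, ay, az, _, gx, gy, gz = struct.unpack('>7h', raw)
    return ax, ay, az, gx, gy, gz

while True:
    if touch.value() == 0:                  # first touch starts the recording
        bt.write(b'START\n')
        time.sleep_ms(300)                  # crude debounce
        while touch.value() == 1:           # stream until the second touch
            bt.write(('%d,%d,%d,%d,%d,%d\n' % read_six_axes()).encode())
            time.sleep_ms(20)               # ~50 Hz sampling (assumed rate)
        bt.write(b'END\n')
        time.sleep_ms(300)
```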
3. A method of translating a hand gesture into the appropriate English word comprising:
storing of formatted gesture readings in specially identifiable text files;
preprocessing and extracting a dataset from the said text files for implementing data analysis algorithms;
training of a Machine Learning model based on Support Vector Machines with the said dataset;
parameter tuning and selecting the best-fit model using an exhaustive parameter grid search algorithm, and saving the trained model for gesture prediction;
feeding of readings from the apparatus of claim 2 into the pretrained model to recognize the corresponding gesture;
sending the output of the said model to the interface application as an HTTP response to generate audible speech; and
feeding the said response to a Text-to-Speech engine by the interface application to convey the user's message.
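A minimal sketch of the training and model-selection steps of claim 3, assuming scikit-learn; the parameter grid, input file names, and model path are illustrative assumptions, not the patented implementation.

```python
import joblib
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# X: one row per recorded gesture (300 features = 50 points x 6 axes, per
# claim 4); y: integer class labels from the gesture dictionary (assumed files).
X = np.load('features.npy')
y = np.load('labels.npy')

# Exhaustive grid search over SVM hyperparameters, as recited in claim 3.
grid = GridSearchCV(
    SVC(),
    param_grid={'C': [0.1, 1, 10, 100],
                'gamma': ['scale', 0.01, 0.001],
                'kernel': ['rbf', 'linear']},
    cv=5,
)
grid.fit(X, y)

# Persist the best-fit model for later gesture prediction.
joblib.dump(grid.best_estimator_, 'gesture_svm.joblib')
```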
4. The data extraction and preprocessing method of claim 3, to transform the raw sensor readings into a useful and efficient format free of noise and outliers, wherein said method comprises:
removal of unnecessary characters and white spaces to eliminate noise in the data;
converting the data readings into a common float data type and appending them to a data array for further operations;
scaling of the said data arrays to normalize the data values into a particular range;
resampling of the resultant data into 50 uniformly spaced data points, for each of the six axes of the sensors, by interpolation, to create a structured dataset suitable for model fitting;
concatenating all of the said axes' data to form the feature set of the model; and
creating a data dictionary corresponding to the recorded gestures, wherein feature sets of gestures with the same name are assigned unique labels, to assign classes to the collection of recorded gestures, in order to fit them into the learning model.
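A minimal sketch of the preprocessing method of claim 4 using NumPy; the comma-separated text-file format and the example file names are assumptions for illustration.

```python
import numpy as np

N_POINTS = 50  # resampling target recited in claim 4

def preprocess(path):
    """Turn one raw recording file into a fixed-length feature vector."""
    rows = []
    with open(path) as f:
        for line in f:
            line = line.strip()                 # drop whitespace noise
            if not line:
                continue
            rows.append([float(v) for v in line.split(',')])  # common float type
    data = np.asarray(rows)                     # shape: (samples, 6 axes)

    # Min-max scale each axis into the [0, 1] range.
    lo, hi = data.min(axis=0), data.max(axis=0)
    data = (data - lo) / np.where(hi > lo, hi - lo, 1.0)

    # Resample every axis to 50 uniformly spaced points by interpolation,
    # then concatenate the axes into a single 300-element feature vector.
    t_old = np.linspace(0.0, 1.0, len(data))
    t_new = np.linspace(0.0, 1.0, N_POINTS)
    return np.concatenate([np.interp(t_new, t_old, data[:, k]) for k in range(6)])

# Gesture dictionary: recordings sharing a gesture name share a class label
# (hypothetical file names).
files = {'hello_1.txt': 'hello', 'hello_2.txt': 'hello', 'water_1.txt': 'water'}
labels = {name: i for i, name in enumerate(sorted(set(files.values())))}
X = np.stack([preprocess(p) for p in files])
y = np.array([labels[files[p]] for p in files])
```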
5. A system of gesture training, recognition, and storage, wherein the system is hosted on a remote and scalable cloud server that accepts requests from the said Interface Application over the Internet to record, train, and predict gestures and, after performing the necessary processing method as stated in claim 3, returns the appropriate result back to the application, thereby eliminating the need for a local processor in the glove as said in claim 2.
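A minimal sketch of the cloud prediction endpoint of claim 5, assuming a Flask server; the route name, payload format, and label-to-word mapping are illustrative assumptions.

```python
import joblib
import numpy as np
from flask import Flask, jsonify, request

app = Flask(__name__)
model = joblib.load('gesture_svm.joblib')     # model saved per claim 3
words = {0: 'hello', 1: 'water'}              # label -> word mapping (example)

@app.route('/predict', methods=['POST'])
def predict():
    # The App posts the 300-element feature vector built per claim 4.
    features = np.asarray(request.get_json()['features']).reshape(1, -1)
    label = int(model.predict(features)[0])
    # The predicted word is returned as the HTTP response (claim 3).
    return jsonify({'word': words.get(label, 'unknown')})

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)
```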
6. A system of user interaction with the system of claim 2, based on Android OS, comprising:
a training mode to record and train new gestures in real time by the user as required; and
a normal mode to test or predict pre-trained gestures for translation into speech.
7. A method of compiling a new gesture dictionary for individual users comprising:
storing of formatted gesture readings in specially identifiable text files;
the dataset preprocessing and extraction method of claim 4, applied to the said text files for implementing data analysis algorithms; and
training a Machine Learning Classifier dedicated to each user based on the user's dataset, allowing the user to train and use custom sign gestures.
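A minimal sketch of the per-user gesture dictionaries of claim 7, assuming each user's features were already built by the claim-4 preprocessing sketch and saved per user; the directory layout and file names are illustrative assumptions.

```python
import os
import joblib
import numpy as np
from sklearn.svm import SVC

def train_user_model(user_id, data_dir='gestures'):
    # Each user keeps their own recordings, so each user gets their own
    # classifier and can define custom sign gestures (assumed .npy layout).
    X = np.load(os.path.join(data_dir, user_id, 'features.npy'))
    y = np.load(os.path.join(data_dir, user_id, 'labels.npy'))
    model = SVC(kernel='rbf').fit(X, y)          # dedicated per-user classifier
    joblib.dump(model, os.path.join(data_dir, user_id, 'model.joblib'))
    return model

# Hypothetical usage: each user's model is trained and stored independently.
# train_user_model('alice'); train_user_model('bob')
```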
| # | Name | Date |
|---|---|---|
| 1 | 202131054347-FORM 1 [24-11-2021(online)].pdf | 2021-11-24 |
| 2 | 202131054347-DRAWINGS [24-11-2021(online)].pdf | 2021-11-24 |
| 3 | 202131054347-COMPLETE SPECIFICATION [24-11-2021(online)].pdf | 2021-11-24 |
| 4 | 202131054347-FORM 18 [24-08-2023(online)].pdf | 2023-08-24 |
| 5 | 202131054347-FER.pdf | 2025-03-04 |
| 6 | SearchHistory202131054347E_07-03-2024.pdf | 2024-03-07 |