Abstract: Among the many people with disabilities in our world, those who are deaf and dumb cannot convey their messages to normal people. Conversation becomes very difficult for them. Deaf people cannot hear what normal people say; similarly, dumb people need to convey their messages using sign language, which normal people cannot understand unless they know the sign language. This brings the need for an application that enables conversation between deaf, dumb and normal people. Here we use hand gestures of Indian Sign Language (ISL), which cover all the alphabets and the 0-9 digit gestures. The dataset of alphabets and digits was created by us. After building the dataset, we extracted features using bag-of-words and image preprocessing. From the extracted features, histograms are generated which map alphabets to images. Finally, these features are fed to a supervised machine learning model to predict the gesture/sign.
Claims: We claim the following from our invention:
1. A system/method for designing a conversation engine for deaf and dumb people, said system/method comprising the steps of:
a) The user creates a dataset (1) of sign images for deaf and dumb people.
b) An image processing technique (2) is used to identify the symbol of the sign and convey the message to the deaf person.
c) The sign image features are extracted (3) and the converted message is sent to the deaf person.
d) Classification algorithms (4) are applied at the end to convert the sign signals into messages.
2. As mentioned in claim 1, the user creates the dataset with sign signals and their respective images.
3. As mentioned in claim 2, an image processing technique is applied to identify the sign signals and images using machine learning algorithms.
4. As per claim 1, the features of the images are then extracted and given to the classification algorithms to convert the sign symbols into their corresponding messages.
Description
Field of Invention
The present invention relates to bridging the gap between deaf, dumb and normal people by using Indian Sign Language. We use only alphabets and digits in our invention, but it can easily be extended to words and sentences to further help the deaf-dumb community.
Background of the invention
We define some real-time examples and relate them to the current invention. Let us take a simple example like an automated dustbin. The dustbin opens automatically whenever a person comes close to it and closes automatically when that person moves away from the bin. Each time, the proximity sensors attached to the bin send data to the application in the form of '0' and '1'. With the help of that input, the bin opens and closes [US8682034B2].
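As an illustration of this sensor-driven open/close behaviour, here is a minimal Python sketch; the sensor readings are simulated and the threshold is an assumption, since the prior art's actual driver and hardware interface are not described here.

```python
import time

THRESHOLD_CM = 30                          # assumed trigger distance
READINGS = [120, 80, 25, 10, 15, 60, 150]  # simulated sensor distances in cm

def control_lid(readings):
    lid_open = False
    for distance in readings:
        near = distance < THRESHOLD_CM  # sensor output: '1' when close, '0' when far
        if near and not lid_open:
            print("open lid")           # would drive the actuator on real hardware
            lid_open = True
        elif not near and lid_open:
            print("close lid")
            lid_open = False
        time.sleep(0.1)

control_lid(READINGS)
```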
The device employs a portable main processor, for example one of the portable computers now in common use. For its input the appliance uses a data glove, and for its output a speaker. Dynamic and static gestures are classified by a Continuous Hidden Markov Model (CHMM), which is capable of robust and rapid real-time classification of both static and dynamic gestures. A natural language processor transforms the gesture classes into grammatically correct sequences of words, and a speech synthesizer converts the word sequences into audible speech [WO2001059741A1].
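To illustrate HMM-based gesture classification of the kind described in that prior art, here is a minimal sketch using the `hmmlearn` library (a substitution; the source does not name a library). One Gaussian HMM is fit per gesture class, and an unknown sequence is assigned to the class whose model scores it highest; the feature dimensionality, state count, and synthetic data are all assumptions.

```python
import numpy as np
from hmmlearn import hmm

rng = np.random.default_rng(0)

def make_sequences(offset, n_seq=20, length=30, dim=6):
    """Toy glove-sensor sequences for one gesture class (synthetic data)."""
    return [offset + rng.normal(size=(length, dim)) for _ in range(n_seq)]

# one HMM per gesture class, trained only on that class's sequences
train = {"hello": make_sequences(0.0), "thanks": make_sequences(3.0)}
models = {}
for label, seqs in train.items():
    X = np.vstack(seqs)                   # concatenated observations
    lengths = [len(s) for s in seqs]      # per-sequence lengths for fit()
    m = hmm.GaussianHMM(n_components=4, covariance_type="diag", n_iter=50)
    m.fit(X, lengths)
    models[label] = m

# classify a new sequence by maximum log-likelihood over the class models
test_seq = make_sequences(3.0, n_seq=1)[0]
pred = max(models, key=lambda lbl: models[lbl].score(test_seq))
print("predicted gesture:", pred)
```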
A technique mentioned in an earlier invention captured images using MATLAB and used Indian Sign Language for classification. The images were compared and analyzed using SIFT features: the SIFT algorithm performs key point localization and computes a key point descriptor. After running the SIFT algorithm, we obtain key points which represent a sign.
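As an illustration of this key-point extraction step, the sketch below uses OpenCV's SIFT implementation rather than MATLAB (a substitution; the file name is hypothetical):

```python
import cv2

# load one gesture image in grayscale (hypothetical file name)
img = cv2.imread("sign_A.jpg", cv2.IMREAD_GRAYSCALE)

# detect SIFT key points and compute their 128-dimensional descriptors
sift = cv2.SIFT_create()
keypoints, descriptors = sift.detectAndCompute(img, None)

print(f"{len(keypoints)} key points, descriptor shape: {descriptors.shape}")

# visualize the localized key points on the image
vis = cv2.drawKeypoints(img, keypoints, None,
                        flags=cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
cv2.imwrite("sign_A_keypoints.jpg", vis)
```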
Many other methods have been used before. Many machine learning algorithms have been applied to the key points produced by SIFT: the features of the sign language are extracted and used to train a machine learning model with various classification algorithms. Other techniques introduced in previous inventions used gloves; to capture the hand signal they relied on many calculations and techniques, and the movements or hand gestures were tracked and recorded well. This is, basically, the glove-based approach.
Another technique that has been used is dataset collection by capturing and saving the images in a folder, then training the model with a basic CNN architecture and using OpenCV to provide a live feed so that gestures can be captured in real time. This makes communication easy between normal and disabled persons, and education can be made easier using such an application or conversation engine. The engine takes the gestures (live feed) as input and outputs the result on the screen along with speech, which makes it much more understandable. This is a much cheaper and easier solution to serve the community; a sketch of the CNN-plus-live-feed idea follows.
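Here is a minimal sketch of such a pipeline, using TensorFlow/Keras for the CNN and OpenCV for the live feed (the library choices, the 64x64 input size, and the 36-class output for A-Z plus 0-9 are assumptions; the model must be trained on the collected dataset before its predictions mean anything):

```python
import cv2
import numpy as np
from tensorflow.keras import layers, models

N_CLASSES = 36  # 26 letters + 10 digits (assumed label layout)

# basic CNN: two conv/pool stages, then a small dense classifier
model = models.Sequential([
    layers.Input(shape=(64, 64, 1)),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(N_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_images, train_labels, epochs=10)  # after loading the dataset

# live feed: grab frames, preprocess, and overlay the predicted class index
cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    inp = cv2.resize(gray, (64, 64)).astype("float32") / 255.0
    probs = model.predict(inp[None, :, :, None], verbose=0)
    cv2.putText(frame, f"class {int(np.argmax(probs))}", (10, 30),
                cv2.FONT_HERSHEY_SIMPLEX, 1.0, (0, 255, 0), 2)
    cv2.imshow("ISL gestures", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```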
Summary of the invention
In this invention, we used a bag-of-words model with a support vector machine as the classification technique, and also a basic CNN model. This can be used in real time by disabled persons to communicate effectively with normal people and vice versa.
Brief description of Drawing
The figures illustrate the invention.
Figure 1: Architecture of the Proposed Invention
Detailed description of the invention
There are many people with disabilities in our world, among whom people who are deaf and dumb cannot convey their messages to normal people. Conversation becomes very difficult for them. Deaf people cannot hear what normal people say; similarly, dumb people need to convey their message using sign language, which normal people cannot understand unless they know the sign language. This brings the need for an application that can be used for conversation between deaf, dumb and normal people. Here we use hand gestures of Indian Sign Language (ISL), which contain all the alphabets and the 0-9 digit gestures. The dataset of alphabets and digits was created by us. After building the dataset, we extracted the features using bag-of-words and image preprocessing. From the extracted features, histograms are generated which map alphabets to images; a bag-of-visual-words sketch is given below. Finally, these features are fed to the supervised machine learning model to predict the gesture/sign. We also used a CNN model for training.
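To make the histogram step concrete, here is a minimal bag-of-visual-words sketch using scikit-learn's KMeans over the SIFT descriptors from the earlier sketch (the vocabulary size and function names are assumptions):

```python
import numpy as np
from sklearn.cluster import KMeans

K = 200  # assumed visual-vocabulary size

def build_vocabulary(descriptor_list):
    """Cluster all training descriptors into K visual words.
    descriptor_list: list of (n_i, 128) SIFT descriptor arrays, one per image."""
    all_desc = np.vstack(descriptor_list)
    return KMeans(n_clusters=K, n_init=10, random_state=0).fit(all_desc)

def bow_histogram(descriptors, vocabulary):
    """Map one image's descriptors to an L1-normalized visual-word histogram."""
    words = vocabulary.predict(descriptors)
    hist, _ = np.histogram(words, bins=np.arange(K + 1))
    return hist / max(hist.sum(), 1)
```

Each image thus becomes a fixed-length K-dimensional histogram, which is what lets images of varying key point counts be fed to a standard supervised classifier.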
Gestures are used for conversation between deaf, dumb and normal persons. Communication is very important in order to convey any kind of information or message with good understanding. It becomes very difficult for disabled people, such as the deaf and dumb, to convey their messages; they take the help of sign languages to do so. But normal people cannot understand sign language. There are many sign languages in the world; every country has its own.
The main goal of our project is to bridge the gap between deaf, dumb and normal people using Indian Sign Language. We use only alphabets and digits in our project, but it can easily be extended to words and sentences to further help the deaf-dumb community and to make communication easy between normal and disabled persons. Education can be made easier using this application or conversation engine. The engine takes the gestures (live feed) as input and outputs the result on the screen along with speech, which makes it much more understandable. There are many papers with the same idea, implemented in many different ways with different techniques.
One technique mentioned in previous inventions captured images using MATLAB and used Indian Sign Language for classification. The images were compared and analyzed using SIFT features: the SIFT algorithm performs key point localization and computes a key point descriptor, and after running it we obtain key points which represent a sign. Many other methods have been used before; many machine learning algorithms have been applied to the key points mapped by SIFT. The features of the sign language are extracted and used to train a machine learning model with many classification algorithms. Other techniques used gloves: to capture the hand signal they relied on many calculations and techniques, and the movements or hand gestures were tracked and recorded well. Basically, this is the glove-based approach.
Another technique that has been used is dataset collection by capturing and saving the images in a folder, then training a basic CNN architecture and using OpenCV to provide a live feed so gestures can be captured in real time. We used classification machine learning and also a CNN model for gesture recognition. Our proposed system involves several steps: dataset building / image collection, preprocessing of the images (which includes segmentation), feature extraction, and classification using support vector machines (SVM) and a CNN model, as shown in the figure. The invention is a much cheaper and easier solution to serve the community. A minimal sketch of the SVM classification step over the bag-of-words histograms follows.
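As a final illustration of the classification step, here is a minimal sketch using scikit-learn's SVM on the bag-of-words histograms produced by the earlier sketch (the kernel, hyperparameters, and split ratio are assumptions):

```python
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# X: (n_images, K) bag-of-words histograms; y: gesture labels ('A'..'Z', '0'..'9')
def train_sign_classifier(X, y):
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.2, stratify=y, random_state=0)
    clf = SVC(kernel="rbf", C=10.0, gamma="scale")  # assumed hyperparameters
    clf.fit(X_tr, y_tr)
    print("held-out accuracy:", accuracy_score(y_te, clf.predict(X_te)))
    return clf

# prediction for a new image's descriptors:
# predicted = clf.predict([bow_histogram(descriptors, vocabulary)])
```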
4 Claims & 1 Figure
| # | Name | Date |
|---|---|---|
| 1 | 202141057687-REQUEST FOR EARLY PUBLICATION(FORM-9) [11-12-2021(online)].pdf | 2021-12-11 |
| 2 | 202141057687-FORM-9 [11-12-2021(online)].pdf | 2021-12-11 |
| 3 | 202141057687-FORM FOR SMALL ENTITY(FORM-28) [11-12-2021(online)].pdf | 2021-12-11 |
| 4 | 202141057687-FORM FOR SMALL ENTITY [11-12-2021(online)].pdf | 2021-12-11 |
| 5 | 202141057687-FORM 1 [11-12-2021(online)].pdf | 2021-12-11 |
| 6 | 202141057687-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [11-12-2021(online)].pdf | 2021-12-11 |
| 7 | 202141057687-EVIDENCE FOR REGISTRATION UNDER SSI [11-12-2021(online)].pdf | 2021-12-11 |
| 8 | 202141057687-EDUCATIONAL INSTITUTION(S) [11-12-2021(online)].pdf | 2021-12-11 |
| 9 | 202141057687-DRAWINGS [11-12-2021(online)].pdf | 2021-12-11 |
| 10 | 202141057687-COMPLETE SPECIFICATION [11-12-2021(online)].pdf | 2021-12-11 |