Abstract: Sign Companion is a technology that aims to close the communication gap between users of spoken language and users of sign language. It offers real-time speech-to-sign translation so that people who are deaf or hard of hearing can communicate effectively. The system converts spoken language into text using automatic speech recognition (ASR) and machine learning algorithms; the text is then processed and translated into sign language rendered by a virtual avatar or animation model. This approach focuses on capturing the subtleties of sign language, such as hand gestures, facial expressions, and body movements. Through this real-time translation, Sign Companion improves communication accessibility, fosters inclusion, and enables people who are deaf or hard of hearing to participate in discussions, presentations, and many types of audio material. For people with hearing impairments, the technology has the potential to greatly improve quality of life and social connection.
DESC: The description that follows provides examples of various devices and methods for each claimed embodiment. It should be noted that the terminology used here pertains only to particular embodiments and is not intended to be limiting; the appended claims alone define the scope of the present disclosure.
Figures 1 and 2 illustrate different embodiments of the Sign Companion system. In Figure 1, at block 101, a microphone captures the user's voice through the software. At block 102, an automatic speech recognition (ASR) algorithm transcribes the captured audio into text. At block 103, natural language processing (NLP), a class of algorithms for manipulating and comprehending human language, enables real-time conversation and interaction. At block 104, a vectorization approach maps the spoken-language model to reduce latency. At block 105, the spoken words are transformed into an animated representation of the signed language.
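The blocks of Figure 1 can be sketched as a simple pipeline. This is only a minimal illustration under stated assumptions: the ASR step (block 102) is assumed to have already produced the transcript, and the vocabulary and gloss table below are hypothetical placeholders, not the system's actual models.

```python
def normalize_text(text):
    # Block 103 (NLP, simplified): lowercase and tokenize the transcript.
    return text.lower().split()

def vectorize(tokens, vocab):
    # Block 104: map each token to an integer index for low-latency lookup;
    # unknown words map to index 0.
    return [vocab.get(t, 0) for t in tokens]

def to_sign_animation(indices, gloss_table):
    # Block 105: look up the sign-language animation gloss for each index.
    return [gloss_table[i] for i in indices]

def speech_to_sign(transcript, vocab, gloss_table):
    # Blocks 103-105 chained; block 102 (ASR) is assumed to have
    # produced `transcript` upstream.
    return to_sign_animation(vectorize(normalize_text(transcript), vocab),
                             gloss_table)
```

For example, with a toy vocabulary `{"hello": 1, "world": 2}` and gloss table `{0: "<UNK>", 1: "SIGN_HELLO", 2: "SIGN_WORLD"}`, the transcript "Hello world" would map to the gloss sequence `["SIGN_HELLO", "SIGN_WORLD"]`, which an animation engine would then render.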
The embodiment of Figure 2 depicts a person (201) who communicates via spoken language and a person (202) who is conversant only in signed language. A tablet (203) has the application installed. Before speaking, the user presses a software microphone button (204). An animated figure (205) then presents the spoken language converted into signed form.
CLAIMS: We Claim:
1. The Sign Companion: a real-time speech-to-signed-language conversion system designed to convert speech into signed language in real time, implemented as a website or Android application. For those who are deaf or hard of hearing, the speech-to-signed-language conversion technology promotes inclusion, accessibility, and effective communication. By bridging the communication gap between spoken and signed language, it fosters comprehension and equitable participation in social, academic, and professional settings.
2. The Sign Companion: a real-time speech-to-signed-language conversion system as claimed in claim 1, wherein the system uses a recurrent neural network (RNN) and natural language processing (NLP) for low latency and better communication.
3. The Sign Companion as claimed in claim 1, wherein the method converts spoken English into precise signed language.
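Claim 2 names an RNN but specifies no architecture, so the following is only an illustrative sketch: a single-layer Elman-style RNN forward pass that maps a sequence of word vectors to per-step sign-gloss scores. All dimensions and weights are arbitrary assumptions for demonstration, not the claimed system's parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_hid, n_glosses = 8, 16, 5          # assumed toy dimensions
W_xh = rng.normal(size=(d_in, d_hid)) * 0.1   # input-to-hidden weights
W_hh = rng.normal(size=(d_hid, d_hid)) * 0.1  # recurrent weights
W_hy = rng.normal(size=(d_hid, n_glosses)) * 0.1  # hidden-to-output weights

def rnn_forward(word_vectors):
    # One forward pass: the hidden state carries context across the
    # utterance, and each step emits a score per candidate sign gloss.
    h = np.zeros(d_hid)
    scores = []
    for x in word_vectors:
        h = np.tanh(x @ W_xh + h @ W_hh)   # recurrent state update
        scores.append(h @ W_hy)            # per-step gloss scores
    return np.stack(scores)

seq = rng.normal(size=(3, d_in))           # a 3-word input sequence
out = rnn_forward(seq)
print(out.shape)                           # one gloss-score vector per word
```

In a trained system, each step's highest-scoring gloss would select the corresponding animation clip; processing words as they arrive, rather than waiting for the full utterance, is what keeps latency low.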
| # | Document Name | Date Filed |
|---|---|---|
| 1 | 202331066474-REQUEST FOR EXAMINATION (FORM-18) [04-10-2023(online)].pdf | 2023-10-04 |
| 2 | 202331066474-FORM 18 [04-10-2023(online)].pdf | 2023-10-04 |
| 3 | 202331066474-FORM 1 [04-10-2023(online)].pdf | 2023-10-04 |
| 4 | 202331066474-DRAWINGS [04-10-2023(online)].pdf | 2023-10-04 |
| 5 | 202331066474-COMPLETE SPECIFICATION [04-10-2023(online)].pdf | 2023-10-04 |
| 6 | 202331066474-FORM 3 [11-10-2023(online)].pdf | 2023-10-11 |
| 7 | 202331066474-DRAWING [11-10-2023(online)].pdf | 2023-10-11 |
| 8 | 202331066474-COMPLETE SPECIFICATION [11-10-2023(online)].pdf | 2023-10-11 |
| 9 | 202331066474-Proof of Right [13-10-2023(online)].pdf | 2023-10-13 |
| 10 | 202331066474-ENDORSEMENT BY INVENTORS [13-10-2023(online)].pdf | 2023-10-13 |
| 11 | 202331066474-EDUCATIONAL INSTITUTION(S) [13-10-2023(online)].pdf | 2023-10-13 |