
Learning Assistive System For Visually Impaired

Abstract: A learning assistive system for the visually impaired, comprising a transparent frame 101 worn by a visually impaired user; a pair of lens-holding rims 102 to retain optical lenses, each with a groove for lens retention; a pair of temples 103 connected via hinges 104 so that the temples 103 fold for easy storage; a microphone 105 that allows the user to issue voice commands requesting assistance in reading textual content; an artificial intelligence-based imaging unit 106 to capture multiple high-resolution images of the surroundings from various angles; an OCR (Optical Character Recognition) module to detect and convert text from images or physical documents into speech; a thin cuboidal member 201 with multiple pneumatic pins 202 that extend and retract in response to the user's finger movements to form Braille characters; and a stylus pen 203 used by the user to write Braille characters onto the upper surface of the member 201.


Patent Information

Application #
Filing Date
03 December 2024
Publication Number
1/2025
Publication Type
INA
Invention Field
PHYSICS
Status
Email
Parent Application

Applicants

Marwadi University
Rajkot - Morbi Road, Rajkot 360003 Gujarat, India.

Inventors

1. Dilip Moyal
Department of Electrical Engineering, Marwadi University, Rajkot - Morbi Road, Rajkot 360003 Gujarat, India.
2. Dr. Tapankumar Anirudhdh Trivedi
Department of Electrical Engineering, Marwadi University, Rajkot - Morbi Road, Rajkot 360003 Gujarat, India.
3. Dr. Krishna Ishvarbhai Patel
Department of Electrical Engineering, Marwadi University, Rajkot - Morbi Road, Rajkot 360003 Gujarat, India.

Specification

Description:FIELD OF THE INVENTION

[0001] The present invention relates to a learning assistive system for the visually impaired that provides a convenient solution for reading and writing Braille characters, allows the user to record notes that can later be accessed, retrieved, or modified, and helps the user correct errors or irregularities in writing.

BACKGROUND OF THE INVENTION

[0002] Visually impaired individuals face significant challenges in their daily lives, particularly when it comes to accessing written information and communicating through writing. Traditional methods for assisting the visually impaired, such as Braille texts and audiobooks, have limitations in terms of accessibility, portability, and real-time feedback.

[0003] Braille, in particular, is a vital tool for visually impaired individuals, allowing them to read and write through tactile means. However, traditional Braille methods rely heavily on manual transcription and lack real-time feedback, making it difficult for users to correct errors or irregularities in their writing. Furthermore, Braille texts can be bulky and cumbersome to carry, limiting their accessibility and convenience.

[0004] WO2008015375A1 discloses a wireless system to assist visually impaired people comprising voice-activated portable devices comprising communicating means for transmitting and receiving data to and/or from a network; scanning means for scanning an object selected by a user; memory means for storing information regarding a scanned object; sensor and identification means for locating an object previously scanned; journey means for planning a route to a destination selected by the user and for identifying the correct transport for travel to a selected destination, wherein the sensor identification and journey means are able to communicate with the wireless system through the communicating means.

[0005] US20190026939A1 discloses a method, performed by a mobile device, for assisting blind or visually impaired users in navigating a room or a new and unfamiliar environment. The method includes a blind user acquiring one or more images using a mobile device and invoking processing algorithms. Processing algorithms include one of Multi View Stereo and Structure from Motion, whereby the algorithms construct a 3D representation of the environment being imaged. Further algorithms are applied to identify and assign attributes to objects in the imaged environment. The 3D representation is responsive to mobile device orientation. The environment is presented to the user via a touch screen, enabling the user to virtually explore the environment using touch, whereby objects being touched are identified and associated with dimensional and other attributes.

[0006] Conventionally, many devices exist that are capable of helping a visually impaired user in reading; however, these existing devices fail to provide a means of forming a digital record of notes that can later be accessed, retrieved, or modified by the user. In addition, these existing devices are incapable of converting text from images or physical documents into speech, which prevents them from providing real-time information to the user.

[0007] In order to overcome the aforementioned drawbacks, there exists a need in the art to develop a device capable of empowering visually impaired individuals with an all-encompassing solution, enabling them to read, write, and manage Braille characters with enhanced autonomy and simplicity. Furthermore, the developed device is required to offer a range of functionalities, including note-taking, recording, editing, error detection, and correction, for enhancing communication and promoting accuracy and efficiency.

OBJECTS OF THE INVENTION

[0008] The principal object of the present invention is to overcome the disadvantages of the prior art.

[0009] An object of the present invention is to develop a system that is capable of assisting a visually impaired user in reading a textual content by providing audio speech, thereby providing real-time information to the user.

[0010] Another object of the present invention is to develop a system that is capable of forming Braille characters in response to the user’s finger movement for providing feedback as the user reads Braille characters.

[0011] Yet another object of the present invention is to develop a system that is capable of creating a digital record of notes that can later be accessed, retrieved, or modified by the user, by creating tactile Braille impressions as the user writes.

[0012] The foregoing and other objects, features, and advantages of the present invention will become readily apparent upon further review of the following detailed description of the preferred embodiment as illustrated in the accompanying drawings.

SUMMARY OF THE INVENTION

[0013] The present invention relates to a learning assistive system for the visually impaired that is capable of empowering visually impaired individuals to read and write Braille characters with ease, enabling them to record, store, and manage notes with simplicity. Additionally, the proposed device provides a valuable solution for error correction, allowing users to review and modify their written work, thereby promoting independence, accuracy, and confidence in their daily communication.

[0014] According to an embodiment of the present invention, a learning assistive system for the visually impaired comprises a transparent frame worn by a visually impaired user; a pair of lens-holding rims to retain optical lenses, each featuring an integrated groove for lens retention, with each rim connected to a temple via a hinge so that the temples fold for easy storage; a microphone installed on the frame that allows the user to issue voice commands requesting assistance in reading textual content; and an artificial intelligence-based imaging unit mounted on the frame to capture multiple high-resolution images of the surroundings from various angles.

[0015] According to another embodiment of the present invention, the proposed device further comprises an OCR (Optical Character Recognition) module built within the microcontroller to detect and convert text from images or physical documents into speech; a speaker installed on one of the temples to deliver real-time audio information to the user; a thin cuboidal member with multiple pneumatic pins, accessed by the user, whose pins extend and retract in response to the user's finger movements to form Braille characters; and a stylus pen provided with the member, which the user uses to write Braille characters onto the upper surface of the member.

[0016] While the invention has been described and shown with particular reference to the preferred embodiment, it will be apparent that variations might be possible that would fall within the scope of the present invention.

BRIEF DESCRIPTION OF THE DRAWINGS

[0017] These and other features, aspects, and advantages of the present invention will become better understood with regard to the following description, appended claims, and accompanying drawings where:
Figure 1 illustrates an isometric view of a learning assistive system for visually impaired.

DETAILED DESCRIPTION OF THE INVENTION

[0018] The following description includes the preferred best mode of one embodiment of the present invention. It will be clear from this description of the invention that the invention is not limited to these illustrated embodiments but that the invention also includes a variety of modifications and embodiments thereto. Therefore, the present description should be seen as illustrative and not limiting. While the invention is susceptible to various modifications and alternative constructions, it should be understood, that there is no intention to limit the invention to the specific form disclosed, but, on the contrary, the invention is to cover all modifications, alternative constructions, and equivalents falling within the spirit and scope of the invention as defined in the claims.

[0019] In any embodiment described herein, the open-ended terms "comprising," "comprises," and the like (which are synonymous with "including," "having," and "characterized by") may be replaced by the respective partially closed phrases "consisting essentially of," "consists essentially of," and the like, or the respective closed phrases "consisting of," "consists of," and the like.

[0020] As used herein, the singular forms “a,” “an,” and “the” designate both the singular and the plural, unless expressly stated to designate the singular only.

[0021] The present invention relates to a learning assistive system for the visually impaired that provides a comprehensive solution enabling visually impaired individuals to read, write, and manage Braille characters with greater ease and independence, supporting note-taking, recording, and editing, as well as error detection and correction, thereby ultimately facilitating more efficient and accurate communication.

[0022] Referring to Figure 1, an isometric view of a learning assistive system for visually impaired is illustrated, comprising a transparent frame 101 constructed with a pair of lens-holding rims 102, a pair of temples 103 are pivotally connected with the rims 102 via hinges 104, a microphone 105 embedded with the frame 101, an artificial intelligence-based imaging unit 106 installed on the frame 101, a speaker 107 mounted on one of the temples 103, a thin cuboidal member 201 associated with the system, the member 201 is equipped with pneumatic pins 202, the member 201 provided with a stylus pen 203.

[0023] The device disclosed herein comprises a transparent frame 101, which serves as the main structure of the system and is worn by a visually impaired user, wherein the frame 101 is constructed with a pair of lens-holding rims 102 to retain optical lenses and features an integrated groove for lens retention. Each rim is connected to a temple 103 via a hinge 104, allowing the user to fold the temples 103 for easy storage.

[0024] The process begins when the visually impaired user issues voice commands through a microphone 105 installed on the frame 101, requesting assistance in reading textual content. The microphone 105 plays a crucial role by converting spoken words or commands into electrical signals, which are then processed and analyzed to trigger specific actions. When the user speaks a command requesting reading assistance, their vocal cords vibrate, creating sound waves. These sound waves travel through the air as variations in air pressure. The microphone 105 is a transducer that converts these pressure variations into electrical signals. The analog electrical signal is converted into digital form by an analog-to-digital converter (ADC). The digital signal is then subjected to various signal processing techniques to enhance voice quality and eliminate noise.
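The analog-to-digital step described above can be sketched as follows. This is a minimal illustration only; the sample rate, bit depth, and reference voltage are assumptions for the sketch and are not taken from the specification:

```python
import math

def adc_sample(signal, sample_rate_hz, duration_s, bits=12, v_ref=3.3):
    """Quantize a continuous voltage signal (a function of time) into
    n-bit ADC codes, mimicking the microphone's digitization step."""
    levels = 2 ** bits
    n = int(sample_rate_hz * duration_s)
    codes = []
    for i in range(n):
        t = i / sample_rate_hz
        v = min(max(signal(t), 0.0), v_ref)            # clamp to ADC input range
        codes.append(min(int(v / v_ref * levels), levels - 1))
    return codes

# A 440 Hz tone centred at half the reference voltage (illustrative input).
tone = lambda t: 1.65 + 1.0 * math.sin(2 * math.pi * 440 * t)
codes = adc_sample(tone, sample_rate_hz=8000, duration_s=0.01, bits=12)
```

After quantization, the resulting code stream is what downstream noise-reduction and voice-quality filters would operate on.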

[0025] The microphone 105 is linked with a microcontroller, which processes these commands and actuates an artificial intelligence-based imaging unit 106 mounted on the frame 101 to capture multiple images of the surroundings. The artificial intelligence-based imaging unit 106 is constructed with a camera lens and a processor, wherein the camera lens is adapted to capture a series of images of the surroundings in proximity to the frame 101. The processor carries out a sequence of image processing operations including pre-processing, feature extraction, and classification. The images captured by the imaging unit 106 are real-time images of the textual content in proximity to the user.
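A pre-processing step of the kind mentioned, binarizing a grayscale capture into an ink mask before feature extraction and classification, might look like this sketch; the threshold value is an assumption:

```python
def binarize(gray, threshold=128):
    """Pre-processing: convert a grayscale image (pixel values 0-255) into
    a binary ink mask (1 = ink, 0 = background) ahead of feature extraction."""
    return [[1 if px < threshold else 0 for px in row] for row in gray]

# Dark pixels (below the threshold) are treated as ink.
print(binarize([[200, 40], [130, 90]]))  # [[0, 1], [0, 1]]
```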

[0026] Synchronously, the microcontroller actuates an OCR (Optical Character Recognition) module built within the microcontroller to detect and convert text from images or physical documents into speech. The recorded title of the document is then stored in a database linked with the microcontroller. The OCR module works in synchronization with the artificial intelligence-based imaging unit 106 and scans the text. The OCR module identifies and locates text regions within the image using techniques such as edge detection and contour analysis, and then processes those regions to recognize individual characters and words.
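The text-region localization stage can be illustrated with a simple horizontal-projection sketch, a deliberately minimal stand-in for the edge-detection and contour-analysis techniques the specification names:

```python
def find_text_rows(image):
    """Locate candidate text bands in a binary image (1 = ink, 0 = background)
    by scanning for contiguous runs of rows that contain ink. Returns a list
    of (start_row, end_row) bands that character recognition would process."""
    bands = []
    start = None
    for y, row in enumerate(image):
        has_ink = any(row)
        if has_ink and start is None:
            start = y                      # band begins
        elif not has_ink and start is not None:
            bands.append((start, y - 1))   # band ends
            start = None
    if start is not None:
        bands.append((start, len(image) - 1))
    return bands

img = [
    [0, 0, 0, 0],
    [0, 1, 1, 0],
    [0, 1, 0, 0],
    [0, 0, 0, 0],
    [0, 1, 1, 1],
]
print(find_text_rows(img))  # [(1, 2), (4, 4)]
```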

[0027] The microcontroller receives data from the imaging unit 106 and the OCR module and accordingly actuates a speaker 107 installed on one of the temples 103 to deliver real-time audio information to the user. The speaker 107 is capable of producing clear and natural sound and of adjusting its volume based on ambient noise levels. The audio information, which may be recorded voice, synthesized voice, or other sounds, is generated or stored as digital data, often in the form of an audio file. This digital audio data is sent to a digital-to-analog converter (DAC).

[0028] The DAC converts the digital data into analog electrical signals. The analog signal is often weak and needs to be amplified; an amplifier boosts its strength to a level at which it can drive the speaker 107 effectively. The amplified audio signal is then sent to the speaker 107, whose core is an electromagnet attached to a flexible cone. The resulting sound waves travel through the air as pressure waves and are picked up by the user's ear. Synchronously, the OCR module recognizes raised Braille dots, converts them into corresponding digital characters or words, and audibly reads the written characters and words aloud to provide real-time audio feedback to the user.
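The DAC-plus-amplifier path can be sketched numerically as the inverse of the earlier ADC step; the bit depth, reference voltage, and gain figure here are illustrative assumptions:

```python
def dac_and_amplify(codes, bits=12, v_ref=3.3, gain=4.0):
    """Reconstruct analog voltages from n-bit DAC codes, then apply the
    amplifier gain that lets the signal drive the speaker cone."""
    full_scale = 2 ** bits - 1
    return [gain * (c / full_scale) * v_ref for c in codes]

# Code 0 maps to 0 V; the full-scale code maps to v_ref * gain volts.
volts = dac_and_amplify([0, 2047, 4095])
```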

[0029] The system features a thin cuboidal member 201, which is accessed by the user. The member's upper surface is segregated into two portions. The member has an inbuilt processor and memory in wireless connection with the microcontroller via a Bluetooth module. Simultaneously, the microcontroller actuates multiple pneumatic pins 202 installed in the first portion of the member 201's upper surface, which extend and retract in response to the user's finger movements to form Braille characters, assisting the user in reading Braille, while the microphone 105 records the user's voice as the user reads.
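Mapping a character onto the extend/retract commands for one six-pin Braille cell can be sketched as follows, using the standard six-dot numbering (dots 1–3 in the left column, 4–6 in the right); the function and table names are illustrative:

```python
# Standard Braille patterns for letters a-j: set of raised dots (1-6).
BRAILLE = {
    "a": {1}, "b": {1, 2}, "c": {1, 4}, "d": {1, 4, 5}, "e": {1, 5},
    "f": {1, 2, 4}, "g": {1, 2, 4, 5}, "h": {1, 2, 5}, "i": {2, 4}, "j": {2, 4, 5},
}

def pin_states(letter):
    """Return commands for the six pneumatic pins of one cell:
    True = extend (raised dot), False = retract (flat)."""
    raised = BRAILLE[letter.lower()]
    return [dot in raised for dot in range(1, 7)]

print(pin_states("b"))  # [True, True, False, False, False, False]
```

In the device, each boolean would drive one pneumatic actuator in the first portion of the member's surface.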

[0030] Herein, a stylus pen 203 is provided with the member 201, which the user uses to write Braille characters onto the second portion of the member 201's upper surface. The pen 203 creates tactile Braille impressions as the user writes, and the microcontroller directs the database to store all Braille content written by the user, thereby creating a digital record of notes that can later be accessed, retrieved, or modified by the user.

[0031] The microcontroller detects deviations in the shapes or curves of Braille characters or words as the user writes, and provides real-time tactile feedback through the stylus to aid the user in correcting errors or irregularities in writing. In case the user deviates from correct Braille formation, the microcontroller provides gentle vibrations in the form of tactile signals, alerting the user to correct the formation in real time.
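The deviation check can be sketched as a comparison between the dots the user actually impressed and the correct pattern for the intended character; the function name and the decision to vibrate on any mismatch are assumptions for illustration:

```python
def check_written_cell(written, reference):
    """Compare the dots the user wrote against the correct Braille pattern.
    Returns (missing, extra): dots that should be raised but are absent,
    and dots that were written but do not belong. A non-empty result would
    trigger the stylus vibration alert."""
    missing = reference - written
    extra = written - reference
    return missing, extra

# User wrote dots {1, 3} while attempting 'b' (dots {1, 2}).
missing, extra = check_written_cell({1, 3}, {1, 2})
needs_alert = bool(missing or extra)   # True -> gentle vibration
```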

[0032] The device is integrated with IoT (Internet of Things), through which it is capable of providing real-time feedback to the user. The IoT component allows for communication between the member 201 and the microcontroller (such as Arduino), which processes the data generated by the user’s actions (like pressing the pins or writing). This data is then used to trigger audio responses, which are delivered through the speaker 107. The speaker 107 provides auditory information, helping the user understand the context, such as reading progress, correct or incorrect entries, or additional information about the content.
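The member-to-microcontroller messaging described above could be framed as small structured events; the JSON shape and field names below are illustrative assumptions, not part of the specification:

```python
import json

def pin_event(pin_id, action):
    """Package a user interaction (e.g. a pin press or stylus stroke) as a
    JSON message the member 201 might send to the microcontroller over the
    Bluetooth link, to be answered with an audio response via the speaker."""
    return json.dumps({"source": "member-201", "pin": pin_id, "action": action})

msg = pin_event(3, "press")
decoded = json.loads(msg)   # the microcontroller side parses the event
```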

[0033] As the user writes, the glasses analyze the motion and identify the specific letter or word being formed. For example, if the user writes a "B," the microcontroller detects the stroke pattern and the speaker announces: "B, raised two dots, descend with a curve."

[0034] The microcontroller identifies incorrect letters (e.g., a reversed "E" or an incomplete "A") based on stroke patterns. The system immediately announces: "The letter you wrote is incorrect. Try forming it again with a downward stroke and right-angle curve," helping the user adjust.
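Stroke-pattern matching of this kind can be sketched as nearest-template scoring; the stroke vocabulary and templates below are invented for illustration and do not come from the specification:

```python
# Hypothetical stroke templates; real templates would come from calibration.
STROKE_TEMPLATES = {
    "A": ["up-right", "down-right", "cross"],
    "B": ["down", "curve-right", "curve-right"],
    "E": ["down", "right", "right", "right"],
}

def identify_letter(strokes):
    """Score the recorded stroke sequence against each template (matching
    positions minus a length-mismatch penalty) and return the best letter
    plus whether the match was exact; an inexact match would prompt the
    spoken retry instruction."""
    best, best_score = None, float("-inf")
    for letter, tmpl in STROKE_TEMPLATES.items():
        score = sum(a == b for a, b in zip(strokes, tmpl)) - abs(len(strokes) - len(tmpl))
        if score > best_score:
            best, best_score = letter, score
    return best, STROKE_TEMPLATES[best] == strokes

print(identify_letter(["down", "curve-right", "curve-right"]))  # ('B', True)
```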

[0035] Once the user completes a word, the glasses read it aloud, for example: "Hello, correct." This provides the user a chance to confirm the word and make any necessary corrections.

[0036] The present invention works best in the following manner: the transparent frame 101 is worn by the visually impaired user; the pair of lens-holding rims 102 retains the optical lenses within grooves for lens retention; and the pair of temples 103 fold via the hinges 104 for easy storage. Further, the microphone 105 allows the user to issue voice commands requesting assistance in reading textual content; the artificial intelligence-based imaging unit 106 captures multiple high-resolution images of the surroundings from various angles; the OCR (Optical Character Recognition) module detects and converts text from images or physical documents into speech; the speaker 107 delivers real-time audio information to the user; the pneumatic pins 202 of the thin cuboidal member 201 extend and retract in response to the user's finger movements to form Braille characters; and the stylus pen 203 is used by the user to write Braille characters onto the upper surface of the member 201.

[0037] Although the field of the invention has been described herein with limited reference to specific embodiments, this description is not meant to be construed in a limiting sense. Various modifications of the disclosed embodiments, as well as alternate embodiments of the invention, will become apparent to persons skilled in the art upon reference to the description of the invention.

Claims:1) A learning assistive system for visually impaired, comprising a transparent frame 101 developed to be worn by a visually impaired user, comprising a pair of lens-holding rims 102 configured to securely retain optical lenses, each rim featuring an integrated groove for lens retention, wherein a pair of temples 103 are pivotally connected with said rims 102 via hinges 104, enabling said temples 103 to fold for easy storage, characterized in that
i) a microphone 105 embedded with said frame 101 for receiving voice commands of said user regarding requirement of assistance in reading a textual content, wherein a microcontroller is linked with said microphone 105 that upon receiving said user’s commands activates an artificial intelligence-based imaging unit 106 installed on said frame 101 and paired with a processor for capturing and processing multiple images of surroundings;
ii) an OCR (Optical Character Recognition) module integrated with said microcontroller that works in collaboration with said imaging unit 106 to detect and convert text from images or physical documents into speech, providing real-time information to said user, via a speaker 107 mounted on one of said temples 103;
iii) a thin cuboidal member 201 having an inbuilt processor, memory in wireless connection with said microcontroller via a Bluetooth module, wherein an upper surface of said member 201 is equipped with pneumatic pins 202 capable of raising or lowering by said processor in response to user’s finger movement, providing tactile feedback as said user reads Braille characters, wherein characters read out by said user are received by said microphone 105; and
iv) said member 201 provided with a stylus pen 203, designed for the user to write Braille characters onto upper surface of said member 201, wherein said pen 203 interacts with said surface to create tactile Braille impressions as said user writes, wherein said microcontroller comprises a database for automatically storing all written Braille content as said user writes, creating a digital record of notes that can later be accessed, retrieved, or modified by said user.

2) The device as claimed in claim 1, wherein said OCR module is capable of recognizing raised Braille dots, converting them into corresponding digital characters or words, and audibly reading these characters or words aloud to provide real-time audio feedback to said user.

3) The device as claimed in claim 1, wherein said microcontroller detects deviations in shapes or curves of Braille characters or words as said user writes, and provides real-time tactile feedback through said stylus, thereby helping the user correct errors or irregularities in writing.

4) The device as claimed in claim 1, wherein said feedback provides gentle vibrations in the form of tactile signals to alert said user when writing deviates from correct Braille formation, enabling real-time error correction.

Documents

Application Documents

# Name Date
1 202421095236-STATEMENT OF UNDERTAKING (FORM 3) [03-12-2024(online)].pdf 2024-12-03
2 202421095236-REQUEST FOR EXAMINATION (FORM-18) [03-12-2024(online)].pdf 2024-12-03
3 202421095236-REQUEST FOR EARLY PUBLICATION(FORM-9) [03-12-2024(online)].pdf 2024-12-03
4 202421095236-PROOF OF RIGHT [03-12-2024(online)].pdf 2024-12-03
5 202421095236-POWER OF AUTHORITY [03-12-2024(online)].pdf 2024-12-03
6 202421095236-FORM-9 [03-12-2024(online)].pdf 2024-12-03
7 202421095236-FORM FOR SMALL ENTITY(FORM-28) [03-12-2024(online)].pdf 2024-12-03
8 202421095236-FORM 18 [03-12-2024(online)].pdf 2024-12-03
9 202421095236-FORM 1 [03-12-2024(online)].pdf 2024-12-03
10 202421095236-FIGURE OF ABSTRACT [03-12-2024(online)].pdf 2024-12-03
11 202421095236-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [03-12-2024(online)].pdf 2024-12-03
12 202421095236-EVIDENCE FOR REGISTRATION UNDER SSI [03-12-2024(online)].pdf 2024-12-03
13 202421095236-EDUCATIONAL INSTITUTION(S) [03-12-2024(online)].pdf 2024-12-03
14 202421095236-DRAWINGS [03-12-2024(online)].pdf 2024-12-03
15 202421095236-DECLARATION OF INVENTORSHIP (FORM 5) [03-12-2024(online)].pdf 2024-12-03
16 202421095236-COMPLETE SPECIFICATION [03-12-2024(online)].pdf 2024-12-03
17 Abstract.jpg 2024-12-28
18 202421095236-FORM-26 [03-06-2025(online)].pdf 2025-06-03