Braille Infused Reading Assistive Device For Visually Impaired Users

Abstract: A braille-infused reading assistive device for visually impaired users, comprising a cuboidal housing 101 having first, second and third sections 102, 103 and 104, positioned on a ground surface; a laser sensor with an artificial intelligence-based imaging unit 108 that determines the height of a user; a fingerprint scanner 109 through which the user imprints their fingerprint; a microphone 110 that allows the user to provide commands about their preferred reading material; a first OCR module that analyses cover texts of reading materials; multiple pop-out balls 119 that translate the specified reading material outwards; a pair of motorized clippers 111 that grab the specified reading material; a second OCR module 202 that analyses texts of the placed reading material within the second section 103; an extendable link 203 with a suction tip 204 that opens pages of the reading material via a semi-circular slider 201; and multiple pneumatic pins 114 that render braille scripts corresponding to the analysed texts.

Patent Information

Application #: 202421105113
Filing Date: 31 December 2024
Publication Number: 05/2025
Publication Type: INA
Invention Field: PHYSICS

Applicants

Marwadi University
Rajkot – Morbi Road, Rajkot 360003 Gujarat, India.

Inventors

1. Prof. Santushti Betgeri
Marwadi University, Rajkot – Morbi Road, Rajkot 360003 Gujarat, India.
2. Jay Agravat
Marwadi University, Rajkot – Morbi Road, Rajkot 360003 Gujarat, India.
3. Kishan Makadiya
Marwadi University, Rajkot – Morbi Road, Rajkot 360003 Gujarat, India.
4. Madhu Shukla
Marwadi University, Rajkot – Morbi Road, Rajkot 360003 Gujarat, India.

Specification

Description:

FIELD OF THE INVENTION

[0001] The present invention relates to a braille-infused reading assistive device for visually impaired users that is capable of enabling independent reading, facilitating easy access to reading materials, and providing multilingual support and audio assistance, while adapting to user preferences for a personalized learning experience, thereby enhancing reading comprehension, breaking language barriers, and promoting literacy, independence, and confidence for visually impaired users.

BACKGROUND OF THE INVENTION

[0002] Reading is a fundamental aspect of human life, enabling individuals to acquire knowledge, explore new ideas, and connect with others. However, for the visually impaired, accessing written materials has long been a significant challenge. Despite advancements in technology, many visually impaired individuals continue to face barriers in accessing information, hindering their educational, social, and economic development.

[0003] Traditionally, visually impaired individuals have relied on Braille books, audiobooks, and screen readers. While these methods have been helpful, they have several limitations. Braille books are often bulky, expensive, and limited in availability, making it difficult for users to access a wide range of materials. Audiobooks, on the other hand, require users to rely on others for narration, which is time-consuming and not always available. Screen readers, although helpful, are cumbersome to use, especially for those with limited technical expertise. The traditional methods of accessing written materials have several drawbacks. Firstly, they often require visually impaired individuals to rely on others for assistance, which is frustrating and undermines their independence. Secondly, these methods are time-consuming, limiting the user's ability to access information quickly and efficiently. Finally, traditional methods are not always accessible, particularly in situations where technology is not available or is unreliable.

[0004] EP2065871A1 discloses a reading device for blind or visually impaired persons for recognizing and reading text passages, comprising an image capturing unit configured to capture an image of the environment of a blind or visually impaired person and to output corresponding image data; an image processing unit configured to process the image data such that text is recognized and extracted from it, and to output corresponding text data; a text outputting unit configured to output data corresponding to the text data in a form noticeable, or convertible to be noticeable, by the blind or visually impaired person; and a housing comprising a first part and a second part, the first part having the image capturing unit attached thereto and the second part accommodating the image processing unit and the text outputting unit.

[0005] US8606316B2 discloses a blind aid device including enabling a blind person to activate the blind aid device; capturing one or more images related to a blind person's surrounding environment; detecting moving objects from the one or more images captured; identifying a finite number of spatial relationships related to the moving objects; analysing the one or more images within the blind aid device to classify the finite number of spatial relationships related to the moving objects corresponding to predefined moving object data; converting select spatial relationship information related to the one or more analysed images into audible information; relaying select audible information to the blind person; and notifying the blind person of one or more occurrences predetermined by the blind person as actionable occurrences.

[0006] Conventionally, there exist many devices that aim to facilitate reading for the visually impaired; however, these existing solutions fail to provide independent and efficient access to written materials. In addition, these existing solutions are also incapable of offering real-time language translation, multilingual support, and personalized learning experiences, thereby limiting their effectiveness in promoting literacy and independence.

[0007] In order to overcome the aforementioned drawbacks, there exists a need in the art to develop a device that facilitates independent reading for the visually impaired by adapting to individual user preferences and providing real-time language translation and multilingual support. Additionally, the developed device needs to be potent enough to enable users to access a wide range of written materials efficiently, while also providing a personalized and interactive learning experience that promotes literacy and independence.

OBJECTS OF THE INVENTION

[0008] The principal object of the present invention is to overcome the disadvantages of the prior art.

[0009] An object of the present invention is to develop a device that is capable of enabling a visually impaired user to read on their own without assistance to promote self-reliance, confidence and flexibility, thereby enhancing reading comprehension.

[0010] Another object of the present invention is to develop a device that is capable of facilitating simple and intuitive navigation to desired reading content, providing quick access and reducing frustration and eye strain, thereby enabling exploration of new genres and topics.

[0011] Another object of the present invention is to develop a device that is capable of providing a real-time language translation and audio support, enhancing understanding, engagement, and accessibility, while breaking language barriers and promoting literacy.

[0012] Another object of the present invention is to develop a device that is capable of adapting to user preferences by providing customized reading recommendations, adjustable settings, and real-time progress tracking, for a suitable and effective learning experience.

[0013] Yet another object of the present invention is to develop a device that is capable of providing a tactile experience, allowing users to learn and explore through touch, promoting improved retention, creativity, and fine motor skills development.

[0014] The foregoing and other objects, features, and advantages of the present invention will become readily apparent upon further review of the following detailed description of the preferred embodiment as illustrated in the accompanying drawings.

SUMMARY OF THE INVENTION

[0015] The present invention relates to a braille-infused reading assistive device for visually impaired users that is capable of enabling visually impaired users to read independently by allowing access to reading materials, offering multilingual support and audio assistance, ultimately fostering improved comprehension, language accessibility, literacy, self-reliance, and confidence.

[0016] According to an embodiment of the present invention, a braille-infused reading assistive device for visually impaired users comprises a cuboidal housing, which consists of first, second, and third sections and is designed to be positioned on a ground surface. An extendable bar is arranged on the underside of the housing to position a platform on the surface, providing support to the housing. A laser sensor, embedded in the housing and synchronized with an artificial intelligence-based imaging unit, determines the height of a user positioned in proximity to the housing, and a microcontroller linked with the laser sensor processes the determined height to actuate the extendable bar, aligning the housing with the torso portion of the user. The housing's first section features a fingerprint scanner, allowing the user to imprint their fingerprint and enabling the microcontroller to retrieve the user's profile from a linked database; the database stores the user's facial recognition data for authentication purposes, facilitating personalized interaction with the device. A microphone arranged on the first section enables the user to provide input voice commands regarding their preference of reading material; based on these commands, the microcontroller actuates a first OCR (Optical Character Recognition) module embedded in the third section, analyzing cover texts of multiple reading materials stored in different slots provided in the third section. The third section also features a plurality of pop-out balls positioned along the inner peripheries of the slots, which rotate to translate the user-specified reading material outwards.

[0017] According to another embodiment of the present invention, the proposed device further comprises a pair of motorized clippers arranged on the front lateral sides of the housing via an extendable L-shaped rod, which extends/retracts to position the clippers in proximity to the specified reading material. The motorized clippers grab the specified reading material, and a motorized ball and socket joint, integrated between the clippers and the rod, positions the grabbed reading material to place it inside the second section. A second OCR module embedded in the second section detects and analyzes texts of the placed reading material. An extendable link installed at the inner back portion of the second section extends/retracts to position a suction tip, arranged on the end of the link, behind the currently opened page of the reading material, and a semi-circular slider arranged between the link and the second section translates the link and tip. A plurality of pneumatic pins, installed on the uppermost portion of the first section, are actuated by the microcontroller to form a tactile outline of evaluated braille patterns onto the third section, which allows the user to access and decode the text of the reading material. A translator module integrated with the microcontroller automatically detects the language of the text and translates it into the user's preferred language in real-time through a speaker installed on the housing. A plurality of motorized wheels and suction units arranged beneath the platform enable mobility and stability for the housing, and a battery is configured with the device to provide a continuous power supply to its electronically powered components.

[0018] While the invention has been described and shown with particular reference to the preferred embodiment, it will be apparent that variations might be possible that would fall within the scope of the present invention.

BRIEF DESCRIPTION OF THE DRAWINGS

[0019] These and other features, aspects, and advantages of the present invention will become better understood with regard to the following description, appended claims, and accompanying drawings where:
Figure 1 illustrates an isometric view of a braille-infused reading assistive device for visually impaired users; and
Figure 2 illustrates a perspective view of a second section associated with the proposed device.

DETAILED DESCRIPTION OF THE INVENTION

[0020] The following description includes the preferred best mode of one embodiment of the present invention. It will be clear from this description of the invention that the invention is not limited to these illustrated embodiments but that the invention also includes a variety of modifications and embodiments thereto. Therefore, the present description should be seen as illustrative and not limiting. While the invention is susceptible to various modifications and alternative constructions, it should be understood that there is no intention to limit the invention to the specific form disclosed; on the contrary, the invention is to cover all modifications, alternative constructions, and equivalents falling within the spirit and scope of the invention as defined in the claims.

[0021] In any embodiment described herein, the open-ended terms "comprising," "comprises," and the like (which are synonymous with "including," "having," and "characterized by") may be replaced by the respective partially closed phrases "consisting essentially of," "consists essentially of," and the like, or the respective closed phrases "consisting of," "consists of," and the like.

[0022] As used herein, the singular forms “a,” “an,” and “the” designate both the singular and the plural, unless expressly stated to designate the singular only.

[0023] The present invention relates to a braille-infused reading assistive device for visually impaired users that is capable of promoting literacy, breaking language barriers, and fostering confidence among visually impaired users by providing independent reading capabilities, streamlined access to materials, and personalized support.

[0024] Referring to Figures 1 and 2, an isometric view of a braille-infused reading assistive device for visually impaired users and a perspective view of a second section 103 associated with the proposed device are illustrated, respectively, comprising a cuboidal housing 101 having first, second and third sections 102, 103 and 104; an extendable bar 105 arranged on the underside of the housing 101 via a pivot joint 106; a platform 107 installed with the bar 105; an artificial intelligence-based imaging unit 108 installed on the housing 101; a fingerprint scanner 109 installed on the housing 101; a microphone 110 arranged on the housing 101; and different slots 118 provided in the third section 104.

[0025] Figures 1 and 2 further illustrate a plurality of pop-out balls 119 positioned along the inner peripheries of the slots 118; a pair of motorized clippers 111 arranged on the front lateral sides of the housing 101 via an extendable L-shaped rod 112; a motorized ball and socket joint 113 integrated between the clippers 111 and the rod 112; a second OCR module 202 embedded in the second section 103; an extendable link 203 installed at the inner back portion of the second section 103; a suction tip 204 arranged on the end of the link 203; a semi-circular slider 201 arranged between the link 203 and the second section 103; a plurality of pneumatic pins 114 installed on the uppermost portion of the first section 102; a speaker 115 installed on the housing 101; and a plurality of motorized wheels 116 and suction units 117 arranged beneath the platform 107.

[0026] The developed device disclosed herein comprises a cuboidal housing 101, a box-like structure with six flat faces. The housing 101 is divided into three distinct sections, namely the first, second, and third sections, each serving a specific purpose. The housing 101 is designed to be positioned on a ground surface, such as a floor or a table, and is intended to remain stationary during operation.

[0027] To ensure stability and support on the ground surface, an extendable bar 105 is arranged on the underside of the housing 101, designed to extend or retract as needed, allowing the housing 101 to be adjusted to different heights or angles. When extended, the bar 105 positions a platform 107, configured with the bar 105, on the surface. The platform provides a stable base for the housing 101 and helps to distribute its weight evenly, preventing it from tipping or wobbling during use.

[0028] The bar 105 is attached to the housing 101 by means of a pivot joint 106, which provides the necessary movement to the housing 101 as required. The pivot joint 106 connects the bar 105 to the housing 101, enabling rotational movement around a single axis and providing the necessary flexibility and adjustability. At the centre of the pivot joint 106 is a pivot pin or shaft, a cylindrical rod that serves as the axis of rotation. The pivot pin connects the bar 105 to the housing 101, facilitating smooth movement.

[0029] The pivot pin is surrounded by bearings, small metal or plastic components that reduce friction and enable rotational movement. These bearings ensure that the pivot joint 106 operates smoothly, even under load or stress. The bearings are typically housed within a bracket attached to the housing 101, which holds the pivot pin and bearings in place. This housing bracket provides a secure foundation for the pivot joint 106, maintaining its axis of rotation. As the bar 105 moves, the pivot pin rotates within the bearings, allowing smooth and consistent movement. The housing bracket and bar bracket work together to maintain the pivot pin's axis of rotation, ensuring that the movement is precise and controlled. The pivot joint 106 is designed to absorb stress or load, preventing damage to the housing 101.

[0030] When the housing 101 is placed on the surface via the platform 107, multiple wheels 116 and suction units 117 installed underneath the platform 107 allow mobility to the housing 101. When locomotion of the housing 101 is required, the wheels 116 are actuated. These wheels 116 are powered by electric motors that provide the necessary torque and speed to propel the housing 101 across various surfaces. The motorized wheels 116 are designed to be durable and long-lasting, with a robust construction that withstands regular use and potential obstacles.

[0031] On the other hand, when the user needs to keep the housing 101 in place, the multiple suction units 117 are actuated to create a vacuum seal between the device and the surface it is placed on, preventing movement or slipping when the housing 101 is in use. The suction units 117 are typically powered by electric motors that generate the necessary vacuum pressure, thereby ensuring that the housing 101 remains stable and secure.
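
The mutually exclusive wheel/suction behaviour described in paragraphs [0030]-[0031] amounts to a simple mode switch. The following is a minimal illustrative sketch only, not part of the disclosed specification, assuming hypothetical wheel_motors and suction_pumps driver objects with enable()/disable() methods:

```python
# Illustrative mode switch mirroring [0030]-[0031]: the wheels and the
# suction units are never active at the same time. The driver objects
# are hypothetical stand-ins, not components named in the patent.
class MobilityController:
    def __init__(self, wheel_motors, suction_pumps):
        self.wheel_motors = wheel_motors    # assumed driver: .enable()/.disable()
        self.suction_pumps = suction_pumps  # assumed driver: .enable()/.disable()

    def locomote(self):
        """Release the vacuum seal, then power the wheels."""
        self.suction_pumps.disable()
        self.wheel_motors.enable()

    def anchor(self):
        """Stop the wheels, then create the vacuum seal."""
        self.wheel_motors.disable()
        self.suction_pumps.enable()
```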

[0032] After the housing 101 is placed successfully on the ground surface, a laser sensor installed in the housing 101 is actuated by an inbuilt microcontroller to determine the height of a user positioned in proximity to the housing 101. The laser sensor emits a laser beam directed towards the user, and the reflections from the user's body are detected by the sensor. The laser sensor is synchronized with an artificial intelligence-based imaging unit 108, which enables accurate and precise measurement of the user's height.

[0033] The artificial intelligence-based imaging unit 108 is constructed with a camera lens and a processor, wherein the camera lens is adapted to capture a series of images of the surroundings in proximity to the housing 101. The processor carries out a sequence of image processing operations including pre-processing, feature extraction, and classification. The images captured by the imaging unit 108 are real-time images of the housing's 101 surroundings. The imaging unit 108 transmits the captured image signal in the form of digital bits to the microcontroller.

[0034] The imaging unit 108 uses artificial intelligence and machine learning protocols to analyze the laser reflections and determine the user's height. The microcontroller linked with the laser sensor and imaging unit 108 processes the determined height and uses this information to actuate the bar 105 to extend and align the housing 101 with the torso portion of the user, ensuring correct alignment and thereby allowing the user to interact with the housing 101 comfortably.
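
A plausible sketch of how the microcontroller could convert the sensed height into a bar extension follows; the torso ratio and actuator limits are illustrative assumptions, as the specification gives no numerical values:

```python
def bar_extension_mm(user_height_mm: float,
                     housing_height_mm: float = 600.0,  # assumed resting height
                     torso_ratio: float = 0.55,         # assumed torso fraction
                     max_ext_mm: float = 400.0) -> float:
    """Extension that brings the housing's first section level with the
    user's torso; all constants are illustrative, not from the patent."""
    target_mm = user_height_mm * torso_ratio      # estimated torso height
    extension = target_mm - housing_height_mm     # raise housing to the target
    return max(0.0, min(max_ext_mm, extension))   # clamp to the actuator range

print(bar_extension_mm(1700.0))  # 335.0 mm for a 1.7 m user, under these assumptions
```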

[0035] The extension of the bar 105 is powered by a pneumatic unit that utilizes compressed air to extend and retract the bar. The process begins with an air compressor, which compresses atmospheric air to a higher pressure. The air cylinder of the pneumatic unit contains a piston that moves back and forth within the cylinder; the cylinder is connected to one end of the bar. The piston is attached to the bar 105, and its movement is controlled by the flow of compressed air. To extend the bar 105, an air valve is activated to allow compressed air to flow into the chamber behind the piston. As the pressure in the chamber increases, the piston pushes the bar 105 to the desired length, positioning the housing 101 near the torso portion of the user.

[0036] For example, when the user approaches the housing 101, the laser sensor emits a laser beam, and the reflections are detected by the sensor. The artificial intelligence-based imaging unit 108 processes the data and determines the user's height. The microcontroller receives this information and adjusts the bar 105 to align the housing 101 with the user's torso. This process happens rapidly, ensuring a comfortable and user-friendly experience.

[0037] The imaging unit 108 also stores facial parameters in a database linked with the microcontroller, to create unique user profiles and track reading history, allowing the user personalized interaction with the device.

[0038] Once the housing 101 is positioned near the torso portion of the user, the user is free to interact with it via a fingerprint scanner 109 installed on the first section 102, which is easily accessible to the user. When the user places their finger on the fingerprint scanner 109, the scanner 109 captures the fingerprint image (the unique patterns and ridges of the fingerprint) and sends it to the microcontroller for processing. The microcontroller uses machine learning protocols to analyze the fingerprint image, comparing it to stored templates in the database. If a match is found, the microcontroller retrieves the user's profile from the database stored by the imaging unit 108, enabling personalized interaction with the device.
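
The specification does not name a matching algorithm. As a toy stand-in for the template comparison step, the sketch below scores fingerprint feature vectors with cosine similarity and accepts the best profile above an assumed threshold:

```python
import math

MATCH_THRESHOLD = 0.92  # assumed acceptance threshold, not from the patent

def similarity(a, b):
    """Cosine similarity between two feature vectors; a toy stand-in for a
    real minutiae-based fingerprint matcher."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def authenticate(captured, profiles):
    """Return the stored profile that best matches the captured template,
    or None if no score clears the threshold."""
    best = max(profiles, key=lambda p: similarity(captured, p["template"]),
               default=None)
    if best and similarity(captured, best["template"]) >= MATCH_THRESHOLD:
        return best
    return None

users = [{"name": "A", "template": [0.9, 0.1, 0.3]},
         {"name": "B", "template": [0.2, 0.8, 0.5]}]
print(authenticate([0.88, 0.12, 0.31], users))  # matches profile "A"
```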

[0039] The database linked with the microcontroller stores a vast array of user data, including facial recognition information. This facial recognition data is used for authentication purposes, providing an additional layer of security and verification. When the user's fingerprint is recognized, the microcontroller accesses the corresponding facial recognition data, ensuring that the user's identity is verified and authenticated.

[0040] After authentication, the user can share their preferences with the device with the help of a microphone 110 installed on the first section 102. The user provides voice commands regarding their preferred reading material. The microphone 110 is designed to capture the user's voice with high fidelity, filtering out background noise and ambient sounds. When the user speaks into the microphone 110, their voice is converted into an electrical signal, which is then transmitted to the microcontroller for processing.

[0041] The microcontroller is responsible for interpreting the user's voice commands and taking appropriate action. When the microcontroller receives the audio signal from the microphone 110, it uses speech recognition protocols to analyze the user's voice and identify their preferences, in this case the user's preferred reading material. Based on this information, the microcontroller actuates the first OCR (Optical Character Recognition) module embedded in the third section of the housing 101. The OCR module is a camera-based unit that captures images of the cover texts of multiple reading materials stored in different slots 118 provided in the third section 104, and is designed to recognize and read text from images, allowing the device to identify the user-specified reading material.
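
The patent does not specify the speech-recognition protocol. The following is a deliberately simple keyword-based intent parser standing in for that step, with the genre list chosen purely for illustration:

```python
# Toy intent parser for the voice-command step of [0041]; a real device
# would run a full ASR/NLU pipeline, which the patent does not name.
GENRES = {"poetry", "fiction", "history", "science"}  # illustrative genres

def parse_reading_request(transcript: str) -> dict:
    """Pull a rough reading-material preference out of a voice transcript."""
    words = transcript.lower().split()
    return {
        "genre": next((w for w in words if w in GENRES), None),
        "query": transcript.strip(),
    }

print(parse_reading_request("Please read the science book about planets"))
# {'genre': 'science', 'query': 'Please read the science book about planets'}
```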

[0042] When the OCR module is actuated, it begins to scan the cover texts of the reading materials, capturing images of the text and transmitting them to the microcontroller for analysis. The microcontroller uses OCR protocols to recognize and read the text from the images, comparing it to the user's voice commands to identify the desired reading material.

[0043] Once the microcontroller has identified the correct reading material, it actuates a plurality of pop-out balls positioned along the inner peripheries of the slots 118 in the third section of the housing 101. These balls are powered by an electric motor, which the microcontroller actuates by sending it an electrical signal to rotate the balls. As the balls rotate, they translate the user-specified reading material outwards. The pop-out balls are designed to provide a gentle and precise motion, ensuring that the reading material is not damaged or dislodged during the translation process.
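
One way the comparison between the OCR'd cover texts and the user's request could work is fuzzy string matching. The sketch below uses difflib from the Python standard library as an illustrative stand-in, with a hypothetical slot layout and an assumed similarity cutoff:

```python
import difflib

def find_slot(requested_title: str, slot_covers: dict) -> int | None:
    """Return the slot whose OCR'd cover text best matches the requested
    title, or None if nothing is close enough (cutoff is an assumption)."""
    covers = {slot: text.lower() for slot, text in slot_covers.items()}
    best = difflib.get_close_matches(requested_title.lower(),
                                     list(covers.values()), n=1, cutoff=0.6)
    if not best:
        return None
    return next(slot for slot, text in covers.items() if text == best[0])

print(find_slot("Braille Basics", {1: "Braille Basics Vol. 1", 2: "World Atlas"}))
# 1
```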

[0044] A pair of motorized clippers 111 is arranged on the front lateral sides of the housing 101, attached to an extendable L-shaped rod. After translating the user-specified reading material outwards, the microcontroller actuates the rod 112 to extend or retract, positioning the clippers 111 in proximity to the specified reading material. The extension of the rod 112 is powered by a pneumatic unit that utilizes compressed air to extend and retract the rod. The clippers 111 mimic a human hand and are designed to grab the reading material securely, without causing damage or creasing.

[0045] Once the clippers 111 have grasped the reading material, the microcontroller actuates a motorized ball and socket joint 113 to position the material for placement inside the second section 103. The motorized ball and socket joint 113 is an articulated component integrated between the clippers 111 and the extendable L-shaped rod. This joint provides a wide range of motion, allowing the clippers 111 to position the reading material at various angles and orientations. When the microcontroller actuates the ball and socket joint 113, it rotates and moves the clippers 111 to place the reading material inside the second section 103.

[0046] After the reading material is successfully placed inside the second section 103, a second OCR module 202 installed in that section is actuated by the microcontroller to detect and analyze the texts of the placed reading material. The OCR module is configured to work progressively, scanning and recognizing the text on each page of the reading material as it is flipped. The second OCR module 202 is a high-resolution camera that captures images of the text, which are then transmitted to the microcontroller for processing and analysis. An extendable link 203 installed at the inner back portion of the second section 103 is actuated by the microcontroller, which extends or retracts the link 203 to position a suction tip 204 behind the currently opened page of the reading material.

[0047] The suction tip 204 is a small, rubberized component that creates a gentle vacuum seal on the page, allowing the device to flip the page without damaging the reading material. The extendable link 203 is designed to provide precise and controlled movement, ensuring that the suction tip 204 is accurately positioned behind the page. Synchronously, the microcontroller actuates a semi-circular slider 201 installed in between the link 203 and second section 103 to translate the link 203 and suction tip 204 for flipping the page and enabling sequential reading of the entire reading material.

[0048] The slider 201 consists of a motor and a rail unit integrated with ball bearings to allow smooth semi-circular movement. When the microcontroller sends a signal to the motor, the motor produces a half-rotational motion through a pair of belts and linkages. This semi-circular movement provides a stable track and allows the link 203 and suction tip 204 to flip the page for sequential reading. As the slider 201 moves, it rotates the extendable link 203 and suction tip 204, creating a gentle flipping motion that preserves the integrity of the reading material.
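
The page-turn described in [0046]-[0048] is a fixed sequence of actuator steps. A minimal sketch of that sequence follows, with hypothetical link, suction_tip, and slider driver objects, since the patent names the parts but not a control interface:

```python
def flip_page(link, suction_tip, slider):
    """One page turn, mirroring [0046]-[0048]; all drivers are assumed."""
    link.extend()               # position the suction tip behind the open page
    suction_tip.engage()        # create the gentle vacuum seal on the page
    slider.sweep(degrees=180)   # semi-circular travel carries the page across
    suction_tip.release()       # drop the page once it has been turned
    link.retract()              # clear the page for the next OCR scan
```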

[0049] When the second OCR module 202 analyses the text of the reading material, the microcontroller accesses the database to retrieve the corresponding braille scripts. The database is typically a comprehensive repository of braille patterns and scripts, which are linked to specific texts and languages. The microcontroller evaluates the correct braille patterns based on the analysed text, taking into account factors such as language, font, and formatting.

[0050] Once the microcontroller has evaluated the correct braille patterns, it actuates multiple pneumatic pins 114 installed on the first section 102, in precise synchronization with the second OCR module 202. Specific pins 114 are extended or retracted, raising or lowering them to form a tactile outline of the evaluated braille patterns onto the third section 104. This process allows the housing 101 to create a dynamic and interactive braille display, enabling users to access and decode the text of the reading material.
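
The mapping from analysed text to pin commands can be pictured as a lookup from characters to the raised dots of a six-dot braille cell. The sketch below shows Grade 1 (uncontracted) braille for a handful of letters; a full table, such as the device's database would hold, covers the whole alphabet, digits, and punctuation:

```python
# Grade 1 braille: each letter maps to the raised dots (1-6) of one cell.
# Only five letters are shown for brevity.
BRAILLE_DOTS = {
    "a": (1,), "b": (1, 2), "c": (1, 4), "d": (1, 4, 5), "e": (1, 5),
}

def to_pin_pattern(text: str) -> list:
    """One dot-tuple per character: which of a cell's six pneumatic pins
    the microcontroller should raise."""
    return [BRAILLE_DOTS.get(ch, ()) for ch in text.lower()]

print(to_pin_pattern("bead"))  # [(1, 2), (1, 5), (1,), (1, 4, 5)]
```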

[0051] As the user accesses the third section 104, they feel the raised and lowered pins 114, which form the tactile outline of the braille patterns. The user then decodes the text by reading the braille patterns, allowing them to access and engage with the content in a convenient and enjoyable way, thereby enabling access to a wide range of reading materials and promoting literacy and independence. The pins 114 are specifically arranged to form a tactile outline of braille corresponding to the text detected by the second OCR module, and are raised and lowered in a precise pattern to create a dynamic braille display, allowing the user to interpret the text by moving their fingers over the raised braille pattern.

[0052] The device features a translator module integrated with the microcontroller, enabling automatic language detection and real-time translation of text. This module is designed to recognize the language of the text detected by the second OCR module and translate it into the user's preferred language. The translator module uses machine learning protocols and language databases to provide accurate and efficient translations, ensuring that the user can access and understand the text with ease.

[0053] When the second OCR module detects text, the translator module is triggered to analyze the language of the text. This analysis is performed in real-time, using sophisticated language recognition protocols to identify the language and dialect of the text. Once the language is detected, the translator module retrieves the corresponding translation from its database and begins the translation process.

[0054] The translation process is performed rapidly and efficiently, with the translator module using machine learning protocols to ensure accurate and context-specific translations. The translated text is then sent to the speaker 115 installed on the housing 101, which outputs the translated text in real-time. This allows the user to listen to the translated text as it is being translated, providing a seamless and intuitive user experience. The translator module is designed to be flexible and adaptable, supporting a wide range of languages and dialects. Whether the user is reading a book, article, or document in a foreign language, the translator module can automatically detect the language and translate the text into the user's preferred language.
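
The detect-then-translate flow of [0052]-[0054] can be summarised in a few lines. The language detector and dictionary below are deliberately trivial stand-ins; the patent does not name the actual models:

```python
# Toy detect-then-translate pipeline; real language-ID and machine
# translation models would replace these stand-ins.
TOY_DICTIONARY = {("hola", "es", "en"): "hello"}  # illustrative single entry

def detect_language(text: str) -> str:
    """Trivial stand-in for language identification."""
    return "es" if "hola" in text.lower() else "en"

def translate(text: str, src: str, dst: str) -> str:
    """Word-by-word lookup; unknown words pass through unchanged."""
    return " ".join(TOY_DICTIONARY.get((w, src, dst), w)
                    for w in text.lower().split())

def speak_translated(text: str, preferred: str, say=print):
    """Detect the language, translate if needed, and hand off to the
    speaker (print stands in for speaker 115 here)."""
    src = detect_language(text)
    say(translate(text, src, preferred) if src != preferred else text)

speak_translated("Hola mundo", "en")  # -> "hello mundo"
```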

[0055] The OCR modules are further connected to a text-to-speech module, which enables the speaker 115 to audibly read the analysed text aloud to the user. This text-to-speech module uses machine learning protocols and language databases to convert the text into a natural-sounding audio output. The text-to-speech module is designed to work seamlessly with the OCR modules, receiving the analysed text and converting it into an audio format that is played through the speaker 115.

[0056] When the OCR modules detect and analyze the text, the text-to-speech module receives the text data and begins the conversion process. The text-to-speech module uses a combination of natural language processing (NLP) and machine learning protocols to analyze the text and generate a natural-sounding audio output. Once the text-to-speech module has converted the text into an audio format, the speaker 115 plays the audio output aloud to the user. The speaker 115 is designed to provide clear and crisp audio, with adjustable volume and tone controls to suit the user's preferences.
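
The specification does not identify a text-to-speech engine. As one concrete illustration only, the off-the-shelf pyttsx3 library can provide the spoken output, with the adjustable rate and volume mentioned in [0056]:

```python
# Illustrative TTS hand-off using pyttsx3 (pip install pyttsx3); the
# patent does not prescribe this or any particular engine.
import pyttsx3

def read_aloud(text: str, rate: int = 150, volume: float = 0.8) -> None:
    engine = pyttsx3.init()
    engine.setProperty("rate", rate)      # speaking rate, words per minute
    engine.setProperty("volume", volume)  # 0.0 to 1.0
    engine.say(text)
    engine.runAndWait()

read_aloud("Chapter one. The journey begins.")
```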

[0057] The database contains a collection of descriptions of common objects, which are used to aid in the learning and development of the user. These descriptions include detailed information about the object's shape, size, texture, and other relevant characteristics. The database is carefully curated to ensure that the information is accurate, up-to-date, and relevant to the user's needs. The pneumatic pins 114 are configured to create a tactile representation of the objects described in the database. When the user selects an object to learn about, the pneumatic pins 114 are raised and lowered in a specific pattern to create a three-dimensional tactile representation of the object. This tactile representation allows the user to explore and understand the object's shape, size, and texture through touch.
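
A stored object description can be rendered on the pin array as a small bitmap, one bit per pin. The grid size and shape below are illustrative only; the patent does not specify the array's resolution:

```python
# Tactile rendering sketch for [0057]: 1 = raised pin, 0 = lowered pin.
OBJECT_SHAPES = {
    "ball": ["01110",
             "11111",
             "11111",
             "01110"],
}

def pin_frame(name: str) -> list:
    """Convert a stored shape description into raise/lower commands for
    the pin array; the speaker would pronounce `name` in sync."""
    return [[ch == "1" for ch in row] for row in OBJECT_SHAPES[name]]

print(pin_frame("ball")[0])  # [False, True, True, True, False]
```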

[0058] As the user explores the tactile representation of the object, the speaker 115 pronounces the name of the object, providing an auditory reinforcement of the user's learning experience. The speaker’s 115 pronunciation is clear and accurate, and is synchronized with the tactile representation created by the pneumatic pins 114. This multisensory approach to learning helps to reinforce the user's understanding and retention of the object's characteristics.

[0059] A battery is associated with the device to supply power to the electrically powered components employed herein. The battery comprises a pair of electrodes, a cathode and an anode, and uses an oxidation/reduction chemical reaction to do work on charge, producing a voltage between the anode and cathode and thus the electrical energy used to do work in the device.

[0060] The present invention works best in the following manner. The process begins when the user positions the housing 101 on the surface, and the extendable bar 105 adjusts to ensure the housing 101 is aligned with the user's torso. The laser sensor embedded in the housing 101 determines the user's height and actuates the bar 105 to maintain correct alignment. Next, the user accesses the device by imprinting their fingerprint on the scanner 109 installed on the first section 102. The microcontroller retrieves the user's profile from the database, which includes facial recognition data for authentication purposes. This personalized interaction enables the microcontroller to cater to the user's specific needs. The user then provides voice commands to the microphone 110, specifying their preferred reading material. The microcontroller actuates the first OCR module to analyze the cover texts of multiple materials stored in the device, identifying the user-specified material. The pop-out balls positioned along the slots 118 rotate to translate the specified material outwards, and the motorized clippers 111 grab the material, positioning it inside the second section 103. The second OCR module 202 detects and analyzes the texts of the placed reading material, and the extendable link 203 positions the suction tip 204 behind the currently opened page.

[0061] In continuation, the semi-circular slider 201 translates the link 203 and tip, flipping the page and enabling sequential reading. Meanwhile, the microcontroller accesses the database to retrieve braille scripts corresponding to the analyzed texts, evaluating the correct braille patterns. The plurality of pneumatic pins 114 raises or lowers specific pins 114 to form the tactile outline of the evaluated braille patterns. The user then accesses the third section to decode the text of the reading material by moving their fingers over the raised braille pattern. The microcontroller also translates the text into the user's preferred language in real-time through the speaker 115. Additionally, the speaker 115 audibly reads the analyzed text aloud to the user, and the pneumatic pins 114 create tactile representations of common objects while the speaker pronounces their names to aid in learning. The database stores facial parameters to create unique user profiles and track reading history, enabling personalized interaction. The motorized wheels 116 and suction units 117 provide mobility and stability.

[0062] Although the field of the invention has been described herein with limited reference to specific embodiments, this description is not meant to be construed in a limiting sense. Various modifications of the disclosed embodiments, as well as alternate embodiments of the invention, will become apparent to persons skilled in the art upon reference to the description of the invention.

Claims:

1) A braille-infused reading assistive device for visually impaired users, comprising:

i) a cuboidal housing 101 having a first, second and third section 102, 103 and 104, developed to be positioned on a ground surface, wherein an extendable bar 105 is arranged beneath said housing 101 for positioning a platform 107 on said surface, to provide support to said housing 101 on said surface;

ii) a laser sensor embedded in said housing 101 and synchronized with an artificial intelligence-based imaging unit 108, for determining height of a user positioned in proximity to said housing 101, wherein a microcontroller is linked with said laser sensor for processing said determined height to actuate said bar 105 for aligning said housing 101 with torso portion of said user, ensuring correct alignment of said housing 101’s first portion, to allow easy access to said user;

iii) a fingerprint scanner 109 installed on said first section 102 that is to be accessed by said user for imprinting said user’s fingerprint to allow said microcontroller for retrieving profile of said user from a database linked with said microcontroller, wherein said database is stored with user’s facial recognition data for authentication purposes, thereby enabling personalized interaction with said device;

iv) a microphone 110 arranged on said first section 102 for enabling said user to input voice commands regarding said user’s preference of a reading material to be read, based on which said microcontroller actuates a first OCR (Optical Character Recognition) module embedded in said third section for analysing cover texts of multiple reading materials stored in different slots 118 provided in said third section 104, in view of identifying said user-specified reading material;

v) plurality of pop-out balls 119 positioned along inner peripheries of said slots 118, that are actuated by said microcontroller to rotate for translating said user-specified reading material outwards, wherein a pair of motorized clippers 111 are arranged on front lateral sides of said housing 101 via an extendable L-shaped rod 112 that are actuated by said microcontroller to extend/retract for positioning said clippers 111 in proximity to said specified reading material, to enable said clippers 111 to grab said specified reading material, followed by actuation of a motorized ball and socket joint 113 integrated in between said clippers 111 and rod 112 for positioning said grabbed reading material, to place said reading material inside said second section 103;

vi) a second OCR module 202 embedded in said second section 103, configured to detect and analyze texts of said placed reading material within said second section 103, progressively, wherein an extendable link 203 installed at inner back portion of said second section 103, that is actuated by said microcontroller to extend/retract for positioning a suction tip 204 arranged on ends of said link 203 behind currently opened page of said reading material, followed by actuation of a semi-circular slider 201 arranged in between said link 203 and second section 103, for translating said link 203 and tip, thereby flipping said page and enabling sequential reading of entire reading material; and

vii) plurality of pneumatic pins 114 installed on an uppermost portion of said first section 102, wherein said microcontroller accesses said database for retrieving braille scripts corresponding to said analyzed texts, in view of evaluating correct braille patterns, based on which said microcontroller actuates said pins 114 in precise synchronization with said second OCR module 202, for extending/retracting to raise/lower specific pins 114, in view of forming a tactile outline of said evaluated braille patterns onto said third section 104, that is accessed by said user for decoding text of said reading material.

2) The device as claimed in claim 1, wherein said pins 114 are arranged to form said tactile outline of braille, corresponding to said text detected by said second OCR module 202, allowing said user to interpret said text by moving said user’s fingers over said raised braille pattern.

3) The device as claimed in claim 1, wherein a translator module is integrated with said microcontroller to automatically detect language of said text and translate said texts into a user’s preferred language in real-time through a speaker 115 installed on said housing 101, as per requirement.

4) The device as claimed in claim 1, wherein said first and second 202 OCR modules are further connected to a text-to-speech module, allowing said speaker 115 to audibly read said analysed text aloud to said user, in said user-preferred language.

5) The device as claimed in claim 1, wherein said database contains descriptions of common objects, and said pneumatic pins 114 are configured to create a tactile representation of said objects while said speaker 115 pronounces name of said object to aid in learning of said user.

6) The device as claimed in claim 1, wherein said imaging unit 108 is linked with said database for storing facial parameters, to create unique user profiles and track reading history, enabling personalized interaction with said device.

7) The device of claim 1, wherein plurality of motorized wheels 116 and suction units 117 are arranged beneath said support for enabling mobility to said housing 101 to any desired location, and said suction units 117, secure said device in place to prevent movement when in use.

8) The device as claimed in claim 1, wherein a battery is configured with said device for providing a continuous power supply to electronically powered components associated with said device.

Documents

Application Documents

# Name Date
1 202421105113-STATEMENT OF UNDERTAKING (FORM 3) [31-12-2024(online)].pdf 2024-12-31
2 202421105113-REQUEST FOR EXAMINATION (FORM-18) [31-12-2024(online)].pdf 2024-12-31
3 202421105113-REQUEST FOR EARLY PUBLICATION(FORM-9) [31-12-2024(online)].pdf 2024-12-31
4 202421105113-PROOF OF RIGHT [31-12-2024(online)].pdf 2024-12-31
5 202421105113-POWER OF AUTHORITY [31-12-2024(online)].pdf 2024-12-31
6 202421105113-FORM-9 [31-12-2024(online)].pdf 2024-12-31
7 202421105113-FORM FOR SMALL ENTITY(FORM-28) [31-12-2024(online)].pdf 2024-12-31
8 202421105113-FORM 18 [31-12-2024(online)].pdf 2024-12-31
9 202421105113-FORM 1 [31-12-2024(online)].pdf 2024-12-31
10 202421105113-FIGURE OF ABSTRACT [31-12-2024(online)].pdf 2024-12-31
11 202421105113-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [31-12-2024(online)].pdf 2024-12-31
12 202421105113-EVIDENCE FOR REGISTRATION UNDER SSI [31-12-2024(online)].pdf 2024-12-31
13 202421105113-EDUCATIONAL INSTITUTION(S) [31-12-2024(online)].pdf 2024-12-31
14 202421105113-DRAWINGS [31-12-2024(online)].pdf 2024-12-31
15 202421105113-DECLARATION OF INVENTORSHIP (FORM 5) [31-12-2024(online)].pdf 2024-12-31
16 202421105113-COMPLETE SPECIFICATION [31-12-2024(online)].pdf 2024-12-31
17 Abstract.jpg 2025-01-24
18 202421105113-FORM-26 [03-06-2025(online)].pdf 2025-06-03