
Device For Assisting Visually Challenged Individuals

Abstract: The present invention relates to a device for aiding visually challenged users and to a method of operating such a device. Accordingly, a wearable device for guiding a visually challenged user comprises an eye wear and a plurality of modules mounted on the front side of the eye wear, each module having a plurality of sensors and an image capturing means, wherein the image capturing means is configured to capture image data and the sensors calculate the distance to the nearest obstacles. A processor is coupled with the modules and configured to receive data from them; the processor integrates all the data received from the sensors and the image capturing means, along with the data received from an accelerometer/gyroscope unit (IMU), and identifies whether the user can move forward or not. A first module positioned at an angle of 70-85 degrees identifies obstacles in the front direction and provides data on the position of the obstacle, and a second module positioned at an angle of 50-70 degrees, inclined to the body and pointing downwards, identifies pits. Reference figure 6


Patent Information

Application #
202041045010
Filing Date
15 October 2020
Publication Number
16/2022
Publication Type
INA
Invention Field
COMMUNICATION
Status
Email
info@krishnaandsaurastri.com
Parent Application

Applicants

AKHILESH R
Krisnavilasm, Venchampu PO, Punaloor, Kollam, Kerala - 961333, India
Usha S
Krisnavilasm, Venchampu PO, Punaloor, Kollam, Kerala - 961333, India

Inventors

1. Roopesh Jenu
Flat No 1B, Jawahar Nagar, Kowdiar, Trivandrum, Kerala, India
2. Balu John
4G, SFS Pattom Square, Marappalam, Pattom Palace P O, Trivandrum, Kerala - 695004, India
3. AKHILESH R
Krisnavilasm, Venchampu PO, Punaloor, Kollam, Kerala - 961333, India
4. Usha S
Krisnavilasm, Venchampu PO, Punaloor, Kollam, Kerala - 961333, India

Specification

FIELD OF THE INVENTION
The present invention relates generally to devices for assisting visually challenged individuals. More particularly, the present invention relates to a device for aiding visually challenged users and to a method of operating such a device.
BACKGROUND OF THE INVENTION
There are millions of individuals who are blind or partially sighted, and there are many more who suffer from some form of visual impairment or sight impediment that hinders their mobility or otherwise lessens their quality of life. The loss of sight obviously impacts greatly on an individual's ability to navigate and negotiate their environment, and thus many individuals suffer reduced mobility as a result of their visual impairment. It is believed that around 48 percent of blind or partially sighted individuals feel 'moderately' or 'completely' cut off from society.
Typically, the only mobility aids available to visually-impaired individuals (notwithstanding guide dogs) are manual probes, namely the cane (i.e. white stick) or auditory devices (similar to echo locating equipment). However, our sense of sight is the most natural sense by which an individual becomes aware of their spatial environment and therefore even with the conventionally available aids, an individual is still likely to suffer from a reduced awareness of their environment, which diminishes their ability to safely navigate and negotiate obstacles in their immediate vicinity.
Individuals who are visually impaired often have a difficult time navigating outside on their own, even in well-known environments. Travelling or simply walking down a crowded street can pose great difficulty. Because of this, many people who are visually impaired bring a sighted friend or family member to help navigate unknown environments. The most important challenge they face is moving safely. Even when using a walking stick, visually impaired people encounter many situations in which they require help, such as obstacles appearing in front of them, deep pits along the path, steps to be climbed or descended, and identifying other people.
Apart from these, visually challenged individuals face many other difficulties while travelling from one place to another. They usually seek assistance from strangers on such occasions; some strangers may help them, others may not. In either case, visually challenged individuals feel dependent. Information and assistance that is necessary but difficult for these individuals to obtain includes, but is not limited to: the time, date and significance of the day; their place, location and distance to the destination; the closest important places such as hospitals, recreation centres, bus stops and railway stations; help in emergency situations such as being attacked by thieves or robbers; help when they are completely lost in an unfamiliar place; finding a person genuinely willing to invest time to help them; using public transport; and saving contact details on a mobile phone.
Visually challenged individuals find it very hard to overcome the aforementioned situations. These situations lead them to depression and make them feel that they are not part of active society. The present invention is focused on replacing these helping hands with a device, so that visually challenged individuals can act as free and independent persons.
SUMMARY OF THE INVENTION
According to an aspect of the invention, a wearable device for guiding a visually challenged user comprises an eye wear and a plurality of modules mounted on the front side of the eye wear, each module having a plurality of sensors and an image capturing means, wherein the image capturing means is configured to capture image data and the sensors calculate the distance to the nearest obstacles. A processor is coupled with the modules and configured to receive data from them; the processor integrates all the data received from the sensors and the image capturing means, along with the data received from an accelerometer/gyroscope unit (IMU), and identifies whether the user can move forward or not. A first module positioned at an angle of 70-85 degrees identifies obstacles in the front direction and provides data on the position of the obstacle. A second module positioned at an angle of 50-70 degrees, inclined to the body and pointing downwards, identifies pits.
According to another aspect of the invention, the first module collects distance information about surfaces within its field of view; the picture produced by the module defines the distance to a surface at each point in the picture, providing the three-dimensional position of each point in the picture to obtain a three-dimensional model of the object.
According to yet another aspect of the invention, the wearable device further comprises a buzzer configured with the processor, wherein a beep sound is produced if the distance is less than a pre-defined value; the frequency of the beep is inversely related to the distance to the obstacle, so the user can assess the distance to the nearest obstacle from the frequency of the beep sound.
In yet another aspect of the invention, the processor compiles all the data from the sensors into an ordered sequence and compares this sequence against pre-defined patterns; the level of matching with each pattern is calculated and the best match is identified to determine the exact obstacle.
In another aspect of the invention, the wearable device further comprises a plurality of audio output means configured with the processor to provide information about the type of the obstacle and guidance about steps and pits through voice commands.
In yet another aspect of the invention, said plurality of sensors includes, but is not limited to, Light Detection and Ranging (LIDAR) and Sound Navigation and Ranging (SONAR) sensors.
In another aspect of the invention, the device is coupled to a blind aid center through a cloud server; the cloud server receives instant data from the module or processor, which is shared directly with a virtual guide in the blind aid center to guide the user in real time.
In yet another aspect of the invention, the virtual guide in the blind aid centre assesses the movement of the user and guides the user based on pre-stored data containing geospatial and location information about the place.
In another aspect of the invention, the device is coupled to and operated through a mobile application.
According to another aspect of the invention, a method for guiding a visually challenged user comprises the steps of obtaining data from a plurality of modules mounted on the front side of an eye wear, wherein an image capturing means is configured to capture image data and a plurality of sensors calculates the distance to the nearest obstacles; and receiving data from the modules, integrating all the data received from the sensors and the image capturing means along with the data received from an accelerometer/gyroscope unit (IMU), and identifying whether the user can move forward or not. A first module positioned at an angle of 70-85 degrees identifies obstacles in the front direction and provides data on the position of the obstacle. A second module positioned at an angle of 50-70 degrees, inclined to the body and pointing downwards, identifies pits.
BRIEF DESCRIPTION OF THE DRAWINGS
The invention will now be described in greater detail with reference to an embodiment which is illustrated in the drawing figures:
Figure 1 shows a device for aiding visually impaired individuals, according to an embodiment of the present invention;
Figure 2 shows a sensor of the device, according to an embodiment of the present invention;
Figure 3 shows the view from an image capturing module, according to an embodiment of the present invention;
Figure 4 shows the angles at which the sensor components are positioned in a device, according to another embodiment of the present invention;
Figure 5 shows the motherboard of the device, according to an embodiment of the present invention;
Figure 6 is a block diagram showing the working of the device, according to an embodiment of the present invention;
Figure 7 is a schematic representation showing the view from the device, according to another embodiment of the present invention;
Figure 8 is a photographic view showing some of the use case scenarios and the digital output of the device in different situations, according to another embodiment of the present invention; and
Figure 9 is a block diagram showing the working of the device with a blind aid center, according to an embodiment of the present invention.
DETAILED DESCRIPTION OF THE INVENTION
The present invention relates to a device for aiding visually impaired individuals and to a method of operating such a device.
Figure 1 shows a device for aiding visually impaired individuals, according to an embodiment of the present invention. Accordingly, the device comprises a plurality of modules including, but not limited to, a Light Detection and Ranging (LIDAR) / Sound Navigation and Ranging (SONAR) sensor module, an image capture module, an accelerometer/gyroscope unit (IMU), a Bluetooth Low Energy (BLE)/WiFi based connectivity unit, a speaker and mic unit, a power supply and management unit, a power unit, a motherboard, and a gateway unit.
The LIDAR/SONAR sensor module facilitates ranging, distance calculation and obstacle identification. Based on the inputs provided, these sensors identify obstacles and pits and inform the visually challenged user through voice output. According to an embodiment of the invention, the device is fitted on the central top of an eye wear, with a first sensor at the centre and a second sensor mounted at an angle towards the bottom. The first sensor identifies obstacles in the front direction, providing data on whether the obstacle is directly ahead, to the left of centre or to the right of centre (as shown in Figure 2). The second sensor, placed at a downward angle, is used to fetch information regarding pits and steps.
The image capturing module identifies human faces, recognizes persons and detects objects. As with the sensors, the device is provided with a pair of image capturing modules to take images of pits, steps and also obstacles or humans in front of the user. The type of obstacle detected can be a human, table, chair, wall, car, board, etc. The image capturing module and the sensor module in the 3DI-PC unit function as a 3D scanner which collects distance information about surfaces within its field of view. The "picture" produced by these modules describes the distance to a surface at each point in the picture, allowing the three-dimensional position of each point in the picture to be determined. Depending on the situation, a single scan or multiple scans from different directions are performed to obtain information about all sides of the object; the scans are then merged and processed to obtain the 3D model of the object. Figure 3 shows the view of an image capturing module, according to an embodiment of the present invention.
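As an illustration only of how such a per-pixel distance picture can be converted into three-dimensional points, the sketch below uses standard pinhole-camera back-projection; the camera intrinsics (fx, fy, cx, cy) and the function name are assumptions made for this example and are not taken from the specification.

    import numpy as np

    def depth_to_point_cloud(depth, fx, fy, cx, cy):
        """Back-project a per-pixel distance image into 3D points (pinhole model).
        depth: (H, W) array of distances; fx, fy, cx, cy: assumed camera intrinsics."""
        h, w = depth.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))
        x = (u - cx) * depth / fx
        y = (v - cy) * depth / fy
        return np.stack([x, y, depth], axis=-1).reshape(-1, 3)

    # Each scan yields one such point cloud; clouds taken from different
    # directions can then be merged into a single 3D model of the object.
    cloud = depth_to_point_cloud(np.full((4, 4), 2.0), fx=500.0, fy=500.0, cx=2.0, cy=2.0)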
Figure 4 shows the angles at which the sensors are placed, according to an embodiment of the present invention. In the figure, A2 represents the angle at which the infrared/LIDAR sensor for obstacle detection is placed; this angle is between 70 and 85 degrees inclined to the body. A3 represents the angle at which the ultrasound/LIDAR sensor for step/pit identification is placed; the preferred angle is between 50 and 70 degrees. A1 and A4 represent the image capturing angles, within which a 3-D image of the object can be captured. A5 and A6 represent angles reserved in the device for future use, where more applications can be added for the safe journey of blind users.
The accelerometer/gyroscope unit (IMU) is employed to identify the movement of the visually challenged user relative to the various axes. This data is used to normalize the variations in the data generated by the sensors and the image capturing module while the user is walking. It is also used for calibration and to identify the position and direction of the user's head.
In the device, the BLE/WiFi based connectivity unit is used to connect with the mobile phone or the gateway unit to get internet access. The speaker and mic unit is the input and output unit for the user: commands from the user are received as voice inputs and instructions to the user are given back as voice.
The power unit is provided in the device to meet its power requirements. The power unit can be a battery-based rechargeable power unit with a low form factor. The power supply and management unit assists in keeping the charging requirement of the device to a minimum. This unit manages the power of the device and keeps power consumption low so as to obtain maximum operational time from minimum battery power.
Figure 5 shows the motherboard of the device, according to an embodiment of the present invention. The motherboard manages the entire device. It receives data from all the modules and processes it. Data from the sensors, the image capturing module and the IMU are used for guiding the user. The connectivity module is used to send data to the internet through a mobile app or the gateway unit.
The device has an optional gateway unit; if the user does not have a smartphone, this unit can be used to connect to the internet. The gateway unit has a GPS module to identify the exact location of the person. It takes data from the main unit of the device and sends it to the cloud for processing and response. The gateway unit can comprise a GSM/GPRS/4G module, a GPS module, a power module, and an interface module with the motherboard.
The firmware for the device is employed in two units: the main motherboard and the gateway unit. The main functions of the firmware in the motherboard include sensor management, image capturing management, IMU management, calibration management, connectivity management, power management, command execution management (i.e. the voice response system) and alert generation. The functions of the firmware in the gateway unit include GSM/GPRS/4G connectivity, internet connectivity, GPS handling, power management, and interfacing with the motherboard.
Figure 6 is a block diagram showing the working of the device, according to an embodiment of the present invention. In operation, the device is fitted at the front side of the eye wear. The device, having different modules viz. LIDAR, ultrasonic sensors and an image capturing means, is connected to the motherboard. One set of modules is positioned at angles facing the front walking direction. Another set of modules is positioned pointing downwards at a certain angle to identify pits. The sensors in the device calculate the distance to the nearest obstacles and send it to the motherboard. The processing unit in the motherboard combines the various sensor readings with the data from the accelerometer/gyroscope unit (IMU) and identifies whether the person can move forward or not. The decision made by the motherboard is passed to the blind person as voice commands through the speaker and mic unit.
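A minimal sketch of such a go/no-go decision is given below, assuming a forward-facing distance reading, a downward reading compared against an expected ground distance, and a head-pitch correction from the IMU; the threshold values and the function interface are assumptions for illustration, not values from the specification.

    import math

    SAFE_FRONT_M = 1.5     # assumed minimum clear distance straight ahead
    MAX_PIT_DROP_M = 0.15  # assumed maximum tolerated drop in the downward reading

    def can_move_forward(front_m, down_m, expected_down_m, head_pitch_deg):
        """Combine front and downward readings (pitch-corrected via the IMU)
        into a simple 'may move forward' decision."""
        corrected = expected_down_m / max(math.cos(math.radians(head_pitch_deg)), 0.1)
        obstacle_ahead = front_m < SAFE_FRONT_M
        pit_ahead = (down_m - corrected) > MAX_PIT_DROP_M
        return not (obstacle_ahead or pit_ahead)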
The sensor unit is connected to the motherboard through a wired or wireless medium. The sensors measure the distance to the nearest obstacles and send this data to the motherboard. The controller in the motherboard collects all this data and calculates the distance to the nearest obstacle. If the distance is less than a pre-defined value, then, according to an embodiment of the invention, a beep sound is produced. The frequency of the beep is inversely related to the distance to the obstacle. The beep sound is produced by a buzzer built into the motherboard. The visually challenged user can assess the distance to the nearest obstacle from the frequency of the beep sound.
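The inverse relationship between distance and beep rate could be realised, for example, as in the sketch below; the threshold and timing constants are assumptions made only to illustrate the behaviour described above.

    ALERT_DISTANCE_M = 2.0  # assumed pre-defined value below which beeping starts
    MIN_PERIOD_S = 0.1      # fastest beeping when the obstacle is very close
    MAX_PERIOD_S = 1.0      # slowest beeping just inside the threshold

    def beep_period(distance_m):
        """Return the interval between beeps: the nearer the obstacle, the
        shorter the interval (i.e. the higher the beep frequency)."""
        if distance_m >= ALERT_DISTANCE_M:
            return None  # no beep beyond the pre-defined value
        fraction = max(distance_m, 0.0) / ALERT_DISTANCE_M
        return MIN_PERIOD_S + fraction * (MAX_PERIOD_S - MIN_PERIOD_S)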
The basic principle of pit and step identification is pattern matching. The controller receives all data from the sensors and arranges it into an ordered sequence. It then compares this sequence against a set of pre-defined patterns. The level of matching with each pattern is calculated and the best match is identified. Once the pattern is identified, the exact obstacle mapped to that pattern is determined. The audio information about each obstacle is stored as voice data at a predefined address in flash memory. The audio file is played through the speakers on the motherboard. The user thus gets information about the type of obstacle and proper guidance about steps and pits while travelling.
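The pattern-matching principle can be sketched as follows; the example patterns, the similarity measure and the labels are assumptions chosen only to illustrate comparing an ordered set of readings against pre-defined patterns and picking the best match.

    import numpy as np

    # Hypothetical reference patterns of normalised downward distances
    PATTERNS = {
        "flat ground": np.array([1.0, 1.0, 1.0, 1.0]),
        "step up":     np.array([1.0, 1.0, 0.7, 0.7]),
        "pit":         np.array([1.0, 1.0, 1.6, 1.6]),
    }

    def classify_reading(ordered_distances):
        """Compare the ordered sensor readings against each pre-defined pattern
        and return the label of the best match."""
        reading = np.asarray(ordered_distances, dtype=float)
        scores = {name: -np.linalg.norm(reading - p) for name, p in PATTERNS.items()}
        return max(scores, key=scores.get)

    print(classify_reading([1.0, 1.0, 1.55, 1.62]))  # -> "pit"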
The pattern matching can also be done with an AI neural network trained with the inputs from the sensors. The neural network model is trained with actual data for steps, pits and other patterns. Once the network is trained and tested, it is deployed on the motherboard; the data from the sensors is fed to the input layer of the network and the output layer gives the probability of every object. The object with the maximum probability is identified.
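A small neural-network classifier of the kind described could look like the sketch below (layer sizes, class labels and the number of sensor inputs are assumptions); in practice the network would first be trained on recorded sensor data for steps, pits and other patterns and only then deployed.

    import torch
    import torch.nn as nn

    CLASSES = ["flat ground", "step up", "step down", "pit", "obstacle"]  # assumed labels

    model = nn.Sequential(
        nn.Linear(8, 32),   # 8 assumed sensor inputs
        nn.ReLU(),
        nn.Linear(32, len(CLASSES)),
    )

    def identify(sensor_values):
        """Feed one ordered set of readings to the network and return the label
        with the maximum probability, as the specification describes."""
        with torch.no_grad():
            logits = model(torch.tensor(sensor_values, dtype=torch.float32))
            probs = torch.softmax(logits, dim=-1)
        return CLASSES[int(probs.argmax())], float(probs.max())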
For example, referring to Figure 7, the space in front of the user is divided into three zones: 'A' is the zone in which the sensors search for obstacles, 'B' is the zone in which the sensors search for pits/steps, and 'C' is the zone in which the user is safe. The sensor modules first search zone 'A' and, if any obstacle is found, alert the user. If not, zone 'B' is searched and the user is alerted accordingly.
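The zone-priority search of Figure 7 can be summarised in a few lines; the check functions and the speak callback below are placeholders standing in for the actual sensor queries and voice output.

    def scan_and_alert(obstacle_in_zone_a, pit_or_step_in_zone_b, speak):
        """Search zone 'A' first, then zone 'B'; zone 'C' needs no alert."""
        if obstacle_in_zone_a():
            speak("Obstacle ahead")
        elif pit_or_step_in_zone_b():
            speak("Pit or step ahead")
        # otherwise the user is within the safe zone 'C' and may continue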
Figure 8 shows some of the use case scenarios and the digital output of the device in different situations. Figure 8a shows a user of height 172 cm, with the device detecting a depth of 245 cm. Similarly, Figures 8b-8e show obstacles such as steps, a table or a road divider, detected as distance outputs from the device that are less than the user's height. All these use cases are frequently part of a visually challenged user's life; the heights of pits and steps vary with the situation and circumstances.
The device also offers a 3D scanning feature for more precise object and obstacle detection, using an array of sensors placed in the sensor module of the device. The sensors in the array are placed at different angles and phases. A 3D profile of the scene is created from the data received from these sensors, and this profile is used for object detection along with AI-based pattern matching.
The device can be connected to the internet and the mobile network. The motherboard of this module is connected to the internet through a mobile phone or the device gateway. The device connects to cloud services for various features such as voice assistance. The motherboard has a high-quality microphone built in. The AI-based voice identification and response services running in the cloud perform the speech-to-text-to-command and text-to-speech services.
When a user requests a service through voice commands, the microphone in the module captures the voice, applies various filters to remove noise, and streams the voice data to the cloud service through the gateway module or mobile phone. The AI cloud service for speech-to-text-to-command conversion has a pre-trained model which is further enhanced by personalization. The personalization for each user is done at the time of calibration: the AI model is initially trained with standard voices for the set of allowed commands, and the pre-trained AI model is then trained again with the user's voice data during calibration.
During normal operation, when the blind user gives a voice request or command, the microphone in the motherboard captures the voice and applies a filter to remove noise. The filtered voice data is then streamed to the cloud AI service. The voice data is given to the input layer of the AI neural network, and the output layer gives the command code with the maximum probability. The command code can be an integer; this code is forwarded to the operation selector. Based on this command code, the operation selector executes the specific operation corresponding to it. The output of the operation can be location data, time, date, etc. The output of the operation is forwarded to a text-to-speech converter and converted to standard voice information. This audio data is streamed back to the device through the gateway module. The motherboard receives the voice response from the AI cloud service and plays the voice data through the inbuilt speakers of the module.
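The operation-selector step could be realised as a simple dispatch table keyed by the integer command code, as in the sketch below; the specific codes, operations and the text-to-speech stub are assumptions made only for illustration.

    from datetime import datetime

    def tell_time():
        return datetime.now().strftime("The time is %H:%M")

    def tell_date():
        return datetime.now().strftime("Today is %A, %d %B %Y")

    OPERATIONS = {
        1: tell_time,  # assumed command code for "what is the time"
        2: tell_date,  # assumed command code for "what is the date"
    }

    def operation_selector(command_code, text_to_speech):
        """Execute the operation mapped to the command code and pass the
        textual result to the text-to-speech converter."""
        operation = OPERATIONS.get(command_code)
        if operation is None:
            text_to_speech("Sorry, the command was not recognised")
        else:
            text_to_speech(operation())

    operation_selector(1, print)  # 'print' stands in for the speech output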
Further, the image from the image capturing module can be sent to the cloud service through the internet gateway module or the mobile phone network. The image processing and face recognition service in the cloud is based on a CNN (AlexNet) and is trained with a large amount of image data. The output is a set of labels of the identified objects with probabilities. The most probable label can be sent to the text-to-voice converter, and the voice message is sent back to the device and played through the speaker on the motherboard for the user.
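As a stand-in for the cloud recognition service, the sketch below labels a captured frame with a pretrained AlexNet from torchvision (a publicly available CNN); the preprocessing, label set and function name follow that library and are not taken from the specification.

    import torch
    from torchvision import models
    from PIL import Image

    weights = models.AlexNet_Weights.DEFAULT
    alexnet = models.alexnet(weights=weights).eval()
    preprocess = weights.transforms()

    def label_image(path):
        """Return the most probable label and its probability for one frame."""
        image = Image.open(path).convert("RGB")
        batch = preprocess(image).unsqueeze(0)
        with torch.no_grad():
            probs = torch.softmax(alexnet(batch), dim=1)[0]
        top = int(probs.argmax())
        return weights.meta["categories"][top], float(probs[top])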
Thus, the motherboard unit in the device receives commands from the user through the speaker and mic unit, converts them into various actions, and gives back the result as voice output. The motherboard effectively navigates the user through voice commands.
According to an embodiment of the present invention, when the blind user needs remote assistance, the service can be availed through a voice command; the image capturing means mounted on the front sensor unit then streams video from the device to a cloud-based aid center through the mobile gateway unit. At the blind aid center, any person can see the live stream transmitted from the device and can assist the blind person in real time by providing voice guidance.
Figure 9 is a block diagram showing the working of the device with a blind aid center, according to an embodiment of the present invention. The visually challenged user can request support from a blind aid centre by issuing a voice command. The voice command is processed in the cloud and a connection to a blind aid centre is initiated. The cloud server maintains up-to-date data on the virtual guides and their online status. The system identifies the most suitable available virtual guide and connects them to the device; guides are selected based on their location, languages known and rating. Any person can voluntarily register as a virtual guide by installing a mobile app and setting their status to available. The virtual aid can be a person or an artificially intelligent computer program (AI agent) empowered with various machine learning models for video, audio, sensor data and image analysis. The AI model is trained with large data sets to identify scenes, context and situations. Parameters such as location, time and language can also be used to train the AI model to generate the appropriate output, so that the blind user, or any other person who requires remote assistance, can be properly assisted.
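The selection of the most suitable available guide could be sketched as a simple scoring over the cloud server's records, as below; the record fields and weights are assumptions used only to illustrate selection by location, language and rating.

    def guide_score(guide, user_location, user_language):
        """Score one registered guide; return None if the guide is offline."""
        if not guide["online"]:
            return None
        score = guide["rating"]                    # base score from rating
        if user_language in guide["languages"]:
            score += 2.0                           # prefer a shared language
        if guide["location"] == user_location:
            score += 1.0                           # prefer a nearby guide
        return score

    def select_guide(guides, user_location, user_language):
        """Return the highest-scoring online guide, or None if none is available."""
        scored = [(guide_score(g, user_location, user_language), g) for g in guides]
        scored = [(s, g) for s, g in scored if s is not None]
        return max(scored, key=lambda pair: pair[0])[1] if scored else None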
Once the connection is established between the device and the aid centre, the motherboard streams the video from the sensor module to the virtual guide's mobile or PC, so that the guide can see what is happening in front of the visually challenged user. The user's location is also shown on a map to the virtual guide. The aid centre can give audio instructions to the blind user for navigation or other decision support. The remote guide can even read out boards and other signage for the user.
The user can also avail of a virtual aid and navigation system for guidance in unknown areas. The virtual guide can have various geospatial and other location-based information about the place in its database, which is provided to the user as voice responses. The location and direction of movement of the user are provided by the inertial module and the GPS module in the motherboard unit. The device and the associated system can even detect user activities such as walking, standing and falling. The system can be trained to identify emergency situations and can automatically raise alerts and make emergency service requests such as ERSS or medical aid.
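Fall detection from the inertial data could, for instance, follow the classic free-fall-then-impact heuristic sketched below; the thresholds and the accelerometer interface are assumptions, not part of the disclosure.

    import math

    FREE_FALL_G = 0.4  # total acceleration well below 1 g suggests free fall
    IMPACT_G = 2.5     # a sharp spike shortly afterwards suggests an impact

    def detect_fall(accel_samples_g):
        """Return True if a free-fall phase is followed by an impact spike.
        accel_samples_g: sequence of (ax, ay, az) readings in units of g."""
        saw_free_fall = False
        for ax, ay, az in accel_samples_g:
            magnitude = math.sqrt(ax * ax + ay * ay + az * az)
            if magnitude < FREE_FALL_G:
                saw_free_fall = True
            elif saw_free_fall and magnitude > IMPACT_G:
                return True
        return False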
The device has two modes: manual and automatic. In manual mode, the input is provided by the user as voice, processed, and the output is provided accordingly. The input can be predetermined commands which are processed as codes. In automatic mode, the device automatically provides output based on the input received from the surrounding environment. This mode can be kept active at all times; it works whenever the user is moving and helps the user avoid sudden accidents.
The present device can identify an obstacle while the visually challenged user is moving. In the device, the LIDAR module placed at the centre identifies obstructions in the front, right and left directions. The device responds according to the position and size of the obstacle. Unless the device says anything to the user, the user can move forward. If, even after the response from the device, the user prefers to walk towards the obstacle, the device calculates the distance between the user and the obstacle. Further, if the user wants to know what the obstacle is and moves in its direction, the device, based on the distance between the user and the obstacle, assists the user with a count of the distance or number of steps to reach the obstacle.
The present device can identify a pit or hump while the user is moving. The sensor positioned at the bottom can identify pits or humps in the front, right or left directions. The device responds according to the position and size of the pit. Unless the device responds, the user can move forward. If the blind user walks towards the pit/hump, the device calculates the distance between the user and the pit/hump. Further, if the user continues in that direction, the device, based on this distance, assists the user with a count of the distance or number of steps to reach it.
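Converting the measured distance into the counts of distance or steps spoken to the user could be as simple as the sketch below; the assumed average stride length is not given in the document.

    import math

    ASSUMED_STRIDE_M = 0.7  # hypothetical average stride length

    def steps_to_target(distance_m, stride_m=ASSUMED_STRIDE_M):
        """Round the remaining distance up to whole steps for the voice prompt."""
        return max(0, math.ceil(distance_m / stride_m))

    print(steps_to_target(2.45))  # -> 4 steps to the obstacle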
The present device can also assist in identifying steps for moving upwards and downwards. The device gives the direction of and distance to the steps and, when the blind user starts to move up or down, provides as output the number of steps to reach the top or bottom, thus assisting visually challenged users in moving up and down stairways.
The present device can identify humans and the number of humans. The device identifies humans and predicts how many people are in front of, and on both sides of, the visually challenged user. There are various ways in which people may approach the user; the response of the device depends on the direction, the distance and the number of humans.
Although the preferred embodiment of the invention has been illustrated, it will be obvious to those skilled in this art that other embodiments may be readily designed within the scope and teachings thereof.


CLAIMS:
1. A wearable device for guiding a visually challenged user, the device comprising:
an eye wear;
a plurality of modules mounted on the front side of the eye wear, each module having a plurality of sensors and an image capturing means, wherein the image capturing means is configured to capture image data, and the sensors in the device calculate the distance to the nearest obstacles;
a processor coupled with the modules and configured to receive data from the modules, wherein the processor integrates all the data received from the sensors and the image capturing means along with the data received from an accelerometer/gyroscope unit (IMU) and identifies whether the user can move forward or not;
wherein,
a first module positioned at an angle of 70-85 degrees identifies the obstacles in the front direction and provides data on the position of the obstacle; and
a second module positioned at an angle of 50-70 degrees, inclined to the body and pointing downwards, identifies pits.
2. The wearable device as claimed in claim 1, wherein the first module collects distance information about surfaces within its field of view, the picture produced by the module defining the distance to a surface at each point in the picture and providing the three-dimensional position of each point in the picture to obtain a three-dimensional model of the object.
3. The wearable device as claimed in claim 1, further comprising a buzzer configured with the processor, wherein a beep sound is produced if the distance is less than a pre-defined value, the frequency of the beep being inversely related to the distance to the obstacle, whereby the user assesses the distance to the nearest obstacle through the frequency of the beep sound.
4. The wearable device as claimed in claim 1, wherein the processor compiles all the data from the sensors into an ordered sequence and compares the sequence against pre-defined patterns, the level of matching with each pattern being calculated and the best match identified to determine the exact obstacle.
5. The wearable device as claimed in claim 1, further comprising a plurality of audio output means configured with the processor to provide information about the type of the obstacle and guidance about steps and pits through voice commands.
6. The wearable device as claimed in claim 1, wherein said plurality of sensors includes, but is not limited to, a Light Detection and Ranging (LIDAR) sensor and a Sound Navigation and Ranging (SONAR) sensor.
7. The wearable device as claimed in any one of claims 1-4, wherein said device is coupled to a blind aid center through a cloud server, the cloud server receiving instant data from the module or processor, which is shared directly with a virtual guide in the blind aid center to guide the user in real time.
8. The wearable device as claimed in claim 7, wherein the virtual guide in the blind aid centre assesses the movement of the user and guides the user based on pre-stored data having geospatial and location information about the place.
9. The wearable device as claimed in any one of claims 1-7, wherein said device is coupled to and operated through a mobile application.
10. A method for guiding a visually challenged user, the method comprising the steps of:
obtaining data from a plurality of modules mounted on the front side of an eye wear, wherein an image capturing means is configured to capture image data and a plurality of sensors calculates the distance to the nearest obstacles;
receiving data from the modules, integrating all the data received from the sensors and the image capturing means along with the data received from an accelerometer/gyroscope unit (IMU), and identifying whether the user can move forward or not;
wherein,
a first module positioned at an angle of 70-85 degrees identifies the obstacles in the front direction and provides data on the position of the obstacle; and
a second module positioned at an angle of 50-70 degrees, inclined to the body and pointing downwards, identifies pits.

Documents

Application Documents

# Name Date
1 202041045010-PROVISIONAL SPECIFICATION [15-10-2020(online)].pdf 2020-10-15
2 202041045010-FORM 1 [15-10-2020(online)].pdf 2020-10-15
3 202041045010-DRAWINGS [15-10-2020(online)].pdf 2020-10-15
4 202041045010-Proof of Right [08-12-2020(online)].pdf 2020-12-08
5 202041045010-Correspondence_Form1_Proof of Right_14-12-2020.pdf 2020-12-14
6 202041045010-FORM 3 [18-10-2021(online)].pdf 2021-10-18
7 202041045010-ENDORSEMENT BY INVENTORS [18-10-2021(online)].pdf 2021-10-18
8 202041045010-DRAWING [18-10-2021(online)].pdf 2021-10-18
9 202041045010-CORRESPONDENCE-OTHERS [18-10-2021(online)].pdf 2021-10-18
10 202041045010-COMPLETE SPECIFICATION [18-10-2021(online)].pdf 2021-10-18
11 202041045010-FORM 18 [08-10-2024(online)].pdf 2024-10-08