Abstract: [0049] A user-worn device for assisting a visually impaired user wearing the device is provided. The device comprises an image capturing module for capturing one or more images of an environment of the user, a distance sensing module for sensing a distance between the device and an object in the user's environment, and a processor for processing the image and the distance to extract information related to the object. The processor further converts the extracted information into a message in a natural language. The natural language message is converted into an audible signal, and an audio output means is provided for conveying the audible message to the user.
FIELD OF THE INVENTION
[001] The present invention generally relates to a user-worn device and, more particularly, to a user-worn device for assisting a visually impaired user wearing the device.
BACKGROUND OF THE INVENTION
[002] During the last few decades, the number of visually impaired persons has increased tremendously. By visually impaired, it is meant a person who is visually challenged and whose eyesight cannot be corrected to a normal level (for example, owing to a functional limitation of the eye or eyes or of the vision system). Several tools are in use at present for assisting a visually impaired user, the common ones being the Braille system, a white cane, a smart cane, an OrCam device, an electronic headset, ODG (Osterhout Design Group) smart glasses, an eSight device, and the like.
[003] In practice, the Braille system is restricted in use because it is not learned by all visually impaired persons. The existing white cane is limited to detecting obstacles at or just above ground level and cannot detect hanging obstacles. With the existing smart cane, the water sensor does not detect water unless it is 0.5 cm or deeper, and the buzzer of the water detector does not stop until the sensor is dried or wiped; as a result, the smart cane is usable mainly during the summer and winter seasons and its use is restricted during the monsoon season.
[004] The existing OrCam tool is restricted when used for navigational purposes by the visually impaired person. Similarly, existing basic tools that work by beaming light into the parts of the eye that are still functional are restricted to persons whose eyes are partially functional. Hence, such basic tools may not be used by a completely visually impaired person. In addition, the existing tools may also be bulky, heavy, expensive, or inefficient in utilizing data.
SUMMARY OF THE INVENTION
[005] This summary is provided to introduce a selection of concepts in a simple manner that are further described in the detailed description of the disclosure. This summary is not intended to identify key or essential inventive concepts of the subject matter nor is it intended to determine the scope of the disclosure.
[006] To overcome at least one of the above-mentioned problems, there exists a need for a user-worn device that continuously interacts with a visually impaired person. Moreover, a user-worn device is needed that is light in weight, is not limited to use in any particular season, assists the visually impaired person in daily activities, and provides navigational aid in unfamiliar indoor as well as outdoor environments involving unpredictable risks such as potholes on the road, obstacles, and the like.
[007] Briefly, according to an embodiment of the present disclosure, a user-worn device for assisting a visually impaired user wearing the device is provided. The device comprises an image capturing module for capturing one or more images of an environment of the user, a distance sensing module for sensing a distance between the device and an object in the user's environment, and a processor for receiving the image captured by the image capturing module and the distance sensed by the distance sensing module and for extracting information related to the object. The processor further converts the extracted information into a message in a natural language. The natural language message is provided to an audio converter for converting the message into an audio signal, and an audio output means is provided for conveying the audible message to the user.
[008] Briefly, according to an embodiment of the present disclosure, a method for assisting a visually impaired user is provided. The method comprises capturing an image of an environment of the user, sensing a distance between the user and an object in the user's environment, processing the captured image and the sensed distance, extracting information related to the object, converting the extracted information into a message in a natural language, converting the message into an audio signal, and conveying the message audibly to the visually impaired user.
[009] The above summary is illustrative only and is not intended to be in any way limiting. In addition to the illustrative aspects, example embodiments, and features described above,
further aspects, example embodiments, and features will become apparent by reference to the drawings and the following detailed description.
BRIEF DESCRIPTION OF THE DRAWINGS:
[0010] These and other features, aspects, and advantages of the example embodiments will become better understood when the following detailed description is read with reference to the accompanying drawings in which like characters represent like parts throughout the drawings, wherein:
[0011] FIG. 1 is a block diagram of an exemplary wearable device 105 in which various embodiments of the present disclosure may be seen, according to an embodiment of the present invention;
[0012] FIG. 2 is a block diagram illustrating an indoor-outdoor navigation system and a recognition system, according to an embodiment of the present invention;
[0013] FIG. 3 is a block diagram illustrating use conditions of the wearable device, according to an embodiment of the present invention; and
[0014] FIG. 4 is a flow chart illustrating a method for a wearable device, according to an embodiment of the present invention.
[0015] Further, skilled artisans will appreciate that elements in the figures are illustrated for simplicity and may not have necessarily been drawn to scale. Furthermore, in terms of the construction of the device, one or more components of the device may have been represented in the figures by conventional symbols, and the figures may show only those specific details that are pertinent to understanding the embodiments of the present invention so as not to obscure the figures with details that will be readily apparent to those of ordinary skill in the art having the benefit of the description herein.
DETAILED DESCRIPTION OF THE INVENTION
[0016] For the purpose of promoting an understanding of the principles of the invention, reference will now be made to the embodiments illustrated in the figures and specific language will be used to describe the same. It will nevertheless be understood that no limitation of the scope of the invention is thereby intended, such alterations and further modifications in the
illustrated system, and such further applications of the principles of the invention as illustrated therein being contemplated as would normally occur to one skilled in the art to which the invention relates.
[0017] It will be understood by those skilled in the art that the foregoing general description and the following detailed description are exemplary and explanatory of the invention and are not intended to be restrictive thereof.
[0018] The terms "comprises", "comprising", or any other variations thereof, are intended to cover a non-exclusive inclusion, such that a process or method that comprises a list of steps does not include only those steps but may include other steps not expressly listed or inherent to such process or method. Similarly, one or more devices or sub-systems or elements or structures or components preceded by "comprises... a" does not, without more constraints, preclude the existence of other devices or other sub-systems or other elements or other structures or other components or additional devices or additional sub-systems or additional elements or additional structures or additional components. Appearances of the phrase "in an embodiment", "in another embodiment" and similar language throughout this specification may, but do not necessarily, all refer to the same embodiment.
[0019] Unless otherwise defined, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The system, methods, and examples provided herein are illustrative only and not intended to be limiting.
[0020] The words "user" and "person" mean the same and are used interchangeably in this description.
[0021] In addition to the illustrative aspects, exemplary embodiments, and features described above, further aspects, exemplary embodiments of the present disclosure will become apparent by reference to the drawings and the following detailed description.
[0022] FIG. 1 is a block diagram of an exemplary wearable device 105 in which various embodiments of the present disclosure may be seen. As shown, the wearable device 105 comprises an image capturing module 110, a distance sensing module 115 and a processor 120. The image capturing module 110, the distance sensing module 115 and the processor 120 are mounted on the wearable device 105, for example a pair of eyeglasses. Further, the image
capturing module 110 comprises a camera 125. The camera 125 may be a 2-D camera, a 3-D camera, and the like. The distance sensing module 115 comprises a pair of distance sensors for sensing a distance between the wearable device and one or more objects within the user's environment. The wearable device 105 further comprises an audio converter 130 and an audio output means 135. The audio output means 135 is a pair of bone conduction headphones for conveying the audible message to the user. The wearable device 105 may be powered by a battery, as is well known in the art. Each component is described in detail further below.
[0023] The wearable device 105 is configured for assisting a visually impaired user. However, the wearable device 105 is not limited to the visually impaired user; it may also be used by a senior citizen, an Alzheimer's patient, warehouse users, industrial workmen, adventure enthusiasts, and the like.
[0024] In one embodiment of the present disclosure, the image capturing module 110 is configured for capturing one or more images of an environment of the user. The image capturing module 110 comprises a camera 125. The camera 125 is typically mounted on the front face of the wearable device 105, for example a pair of eyeglasses, for capturing the one or more images of the user's environment.
[0025] The camera 125 may be a 2-D camera, a 3-D camera, and the like. The camera 125 captures one or more images of the environment of the user. In one implementation, depth-sensing cameras may be used for estimating the depth and orientation of the captured images of the user's environment. The camera 125 is also used for estimating 3-D features of the captured images in the environment of the user. The user's environment, as described herein, may refer to an indoor environment or an outdoor environment that the user is typically looking at. The indoor environment refers to the elements encountered in a closed space, for example, the dimensions of a room in which the user is located, a description of the room, an area and a description of the floor of the room; a number of objects lying on a table, the color of an object lying on a table, a shape of an object lying on a table, a name of an object lying on a table; a count of a number of people in the room, a facial expression of a person in the room; a name of a person in the room; an amount of a bill; contact card details; a description of a currency note, and the like. Similarly, the outdoor environment refers to the elements encountered in an open space, for example, a distance between the user and a pothole, an area of a pothole, a depth of a pothole, a quantity of a slope of a road, obstacles on a road, and the like. As described, the camera 125 captures the one or more images of the user's environment, and the captured one or more images are communicated to the processor 120 for further processing.
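For illustration only, the following is a minimal sketch, in Python, of how the image capturing module 110 might acquire frames on the single board computer and pass them on for processing. The use of OpenCV, the camera index, and the resolution are assumptions and are not specified in the present disclosure.

```python
# Minimal sketch of the image capturing module (110): grab frames from a
# camera (125) attached to the SBC and hand them to the processing stage.
# Camera index, resolution and library choice are illustrative assumptions.
import cv2

def capture_frames(camera_index=0, width=640, height=480):
    """Yield frames from the wearable device's camera one at a time."""
    cap = cv2.VideoCapture(camera_index)
    cap.set(cv2.CAP_PROP_FRAME_WIDTH, width)
    cap.set(cv2.CAP_PROP_FRAME_HEIGHT, height)
    try:
        while True:
            ok, frame = cap.read()      # frame is a BGR numpy array
            if not ok:
                break                   # camera unplugged or read error
            yield frame
    finally:
        cap.release()

if __name__ == "__main__":
    for frame in capture_frames():
        print("captured frame of shape", frame.shape)
        break
```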
[0026] In the same embodiment, the distance sensing module 115 is configured for sensing a distance between the device and one or more objects in the user's environment. Herein, the device is a user-worn device which is configured to be worn as a pair of eyeglasses. The object in the user's environment refers to an object present in either the indoor environment or the outdoor environment of the user.
[0027] The distance between the device (for example, the pair of eyeglasses) and the object in the user's environment may be sensed by a Time-of-Flight (ToF) laser-ranging module. The ToF laser-ranging module provides an accurate distance measurement. Two ToF sensors are mounted on either side of the wearable device 105, along with the camera 125, to estimate the depth and orientation of the captured images of the environment of the user and the distance of the object in the user's environment.
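As an illustrative sketch only: the disclosure does not name a particular ToF part or driver, so the read_tof_mm() helper below is a hypothetical placeholder standing in for whichever laser-ranging module and driver are fitted; the fusion of the two side-mounted readings into one distance is likewise an assumption.

```python
# Hypothetical sketch of the distance sensing module (115): two ToF
# laser-ranging sensors mounted on either side of the eyeglasses.
# read_tof_mm() is a placeholder, not a real driver; replace it with the
# actual ToF sensor library call for the chosen hardware.
import random

def read_tof_mm(side: str) -> int:
    """Placeholder driver: return a distance in millimetres for one sensor."""
    return random.randint(300, 4000)   # stand-in for a real I2C/UART read

def object_distance_mm() -> int:
    """Fuse the left and right ToF readings into one distance estimate."""
    left = read_tof_mm("left")
    right = read_tof_mm("right")
    return min(left, right)            # report the nearer reading as the obstacle distance

if __name__ == "__main__":
    print(f"Nearest object approximately {object_distance_mm() / 10:.0f} cm away")
```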
[0028] In the same embodiment, the processor 120 of the wearable device 105 (for example, the pair of eyeglasses) processes the raw data captured by the image capturing module 110, for example, the camera 125. The processor 120 is configured for receiving the image captured by the image capturing module 110 and the distance sensed by the distance sensing module 115 and for extracting information related to the objects within the environment. The extracted information related to the objects may include the dimensions of a room in which the user is located, a description of the room, an area and a description of the floor of the room, a number of objects lying on a table, the color of an object lying on a table, a shape of an object lying on a table, a name of an object lying on a table, a count of a number of people in the room, facial expressions of a person in the room, a name of a person in the room, an amount of a bill, contact card details, a description of a currency note, a distance between the user and a pothole, an area of a pothole, a depth of a pothole, a quantity of a slope of a road, obstacles on a road, and the like. In one embodiment of the present disclosure, such information is extracted by processing the one or more images and the distance data captured by the distance sensing module 115.
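One way (an assumption, not the disclosed implementation) of organising the information the processor 120 extracts from an image together with a distance reading is sketched below; the field names are illustrative only.

```python
# Illustrative data structure for the information extracted by the processor (120).
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ExtractedInfo:
    object_name: Optional[str] = None          # e.g. "cup", "currency note"
    object_color: Optional[str] = None
    object_shape: Optional[str] = None
    distance_cm: Optional[float] = None        # from the ToF sensors
    people_count: int = 0
    people_names: List[str] = field(default_factory=list)
    room_description: Optional[str] = None     # e.g. "small room with a table"

info = ExtractedInfo(object_name="chair", object_color="red",
                     distance_cm=120.0, room_description="office with one table")
```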
[0029] Further, in the same embodiment, the processor 120 is configured for converting the extracted information related to the object into a message in a natural language. The generated message is then fed to the audio converter 130 for converting the message into an audible signal. In one embodiment, the audio converter 130 is a text-to-speech converter which converts the input message into an audio signal. The converted message, that is, the audio signal, is then provided to the audio output means 135 for conveying the audible message to the user. The audio output means 135, here, is a pair of bone conduction headphones for conveying the audible message to the user through the user-worn device.
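The disclosure only states that the audio converter 130 is a text-to-speech converter; as one possible sketch, the offline pyttsx3 engine could fill that role on a Linux SBC, which is an assumption rather than part of the disclosure.

```python
# Sketch of the audio converter (130) as a text-to-speech stage.
# pyttsx3 is one possible offline TTS engine; its use here is an assumption.
import pyttsx3

def speak(message: str) -> None:
    """Convert a natural-language message into audio and play it."""
    engine = pyttsx3.init()
    engine.setProperty("rate", 160)    # words per minute; tune for the user
    engine.say(message)
    engine.runAndWait()

if __name__ == "__main__":
    speak("A red chair is about one metre in front of you.")
```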
[0030] The pair of bone conduction headphones, in combination with a directional sound cue generator, is used for conveying direction information to the user. The directional sound cues are provided via an 800 Hertz (Hz) monotone alarm-call generator built around a 555-timer-based circuit. The audio cues are represented by a combination of pan and pitch, wherein pan refers to the channel in which the sound is played and pitch refers to the frequency of the sound.
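To make the pan-and-pitch encoding concrete, the sketch below writes a short stereo tone to a WAV file using only the Python standard library. Only the 800 Hz base tone comes from the disclosure; the gains, duration, and file-based output are assumptions.

```python
# Sketch of a directional audio cue encoded as pan (left/right balance) and
# pitch (tone frequency), written out as a stereo WAV file.
import math
import struct
import wave

def write_cue(path, pitch_hz=800.0, pan=-1.0, duration_s=0.3, rate=44100):
    """pan = -1.0 plays fully left, +1.0 fully right, 0.0 centred."""
    left_gain = (1.0 - pan) / 2.0
    right_gain = (1.0 + pan) / 2.0
    frames = bytearray()
    for n in range(int(duration_s * rate)):
        sample = math.sin(2 * math.pi * pitch_hz * n / rate)
        frames += struct.pack("<hh", int(32767 * left_gain * sample),
                              int(32767 * right_gain * sample))
    with wave.open(path, "wb") as wav:
        wav.setnchannels(2)          # stereo: pan selects the ear that hears the cue
        wav.setsampwidth(2)          # 16-bit samples
        wav.setframerate(rate)
        wav.writeframes(bytes(frames))

write_cue("turn_left.wav", pan=-1.0)  # an 800 Hz cue heard mostly in the left ear
```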
[0031] In one implementation, the processor 120 used for processing the captured images is an Intel Edison/Joule or similar compatible Single Board Computer (SBC) running Linux/Yocto and executing an algorithm. The algorithm works with images of known objects. The objects may be of different shape, size, texture, pose, color, and the like. The algorithm parses a given indoor or outdoor environment image into constituent entities, that is, the individual objects present in the environment, using a rough segmentation algorithm. The retrieved entities in the environment, with their rough boundaries, are used to obtain the respective matched codebook from the learned exemplars of individual objects. The azimuth and elevation angle obtained from the matched image is then used as an initial estimate of the pose. Further, to refine the estimate, an ensemble of shape features is used to establish matches between the objects and model silhouettes. Thus, the spatial map of the walkable region in an environment may be made known to the user. The computer-vision-based device may be able to capture the environment and convey a semantic understanding of the environment to the user. Further, the information is given to the user through haptic and audio cue modalities. The haptic and audio cue modalities are used in combination with the bone conduction headphones for conveying the audible message (for example, direction information) to the user.
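A rough sketch of the segmentation and exemplar-matching steps described above is given below using OpenCV; the codebook of learned exemplars is represented by a hypothetical dictionary of silhouette contours, and the thresholds are assumptions not taken from the disclosure.

```python
# Rough sketch: segment an environment image into candidate object regions,
# then compare each region's silhouette against a codebook of learned
# exemplar silhouettes. Thresholds and codebook contents are assumptions.
import cv2

def rough_segment(image):
    """Return contours of candidate objects from a simple threshold segmentation."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    _, mask = cv2.threshold(blurred, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                cv2.CHAIN_APPROX_SIMPLE)[-2]  # works on OpenCV 3 and 4
    return [c for c in contours if cv2.contourArea(c) > 500]   # drop tiny regions

def match_exemplar(contour, codebook):
    """codebook: {object_name: exemplar_contour}; return the best-matching name."""
    best_name, best_score = None, float("inf")
    for name, exemplar in codebook.items():
        score = cv2.matchShapes(contour, exemplar, cv2.CONTOURS_MATCH_I1, 0.0)
        if score < best_score:
            best_name, best_score = name, score
    return best_name, best_score
```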
[0032] In the same embodiment, a power system (for example, a battery) is a rechargeable Li-Polymer device rated at 3.3 V with a 450 milliampere-hour (mAh) capacity, having a form factor suitable for use in the eyeglasses, and is integrated in the wearable device 105.
[0033] FIG. 2 is a block diagram 200 illustrating an indoor-outdoor navigation system and a recognition system, according to an embodiment of the present disclosure. The indoor environment 205 includes space estimation; object detection and recognition, including shape, size, color, name and location; layout detection; and the like. Similarly, the outdoor environment 210 includes detection of potholes, objects and obstacles, traffic sign identification, description of a scene, and the like. The recognition system 215 includes recognition of faces, facial expressions of people and the number of people, currency identification, daily-use object recognition, and the like.
[0034] In the same embodiment, the indoor navigation 205 system provides a description of the indoor environment. The description of the indoor environment provides the user with an estimation of the dimensions of a room in which the user is located, a description of the room, an area and a description of the floor of the room; a number of objects lying on a table, the color of an object lying on a table, a shape of an object lying on a table, a name of an object lying on a table; a count of a number of people in the room, a facial expression of a person in the room; a name of a person in the room; an amount of a bill; contact card details; a description of a currency note, and the like. As an example, consider that a user enters an environment, say an indoor environment. The camera 125 of the wearable device 105 (for example, the pair of eyeglasses) captures an image of the room in which the user is located, and the processor 120 receives the image captured by the camera 125 and extracts the information related to the room of the user, such as the items listed above. The processor 120 then converts the extracted information into a message in a natural language, which is provided to the audio converter for conversion into an audible signal. The audible signal is provided to the bone conduction headphones for conveying the audible message to the user. Similarly, when the camera 125 captures an image of an object (for example, a bill, a currency note, a contact card, a pothole, and the like) in the environment of the user, the processor 120 receives the image captured by the camera 125 and extracts information related to the object, for example, an amount of a bill, contact card details, a description of a currency note, an area of a pothole, a depth of a pothole, a quantity of a slope of a road, obstacles on a road, and the like. The processor 120 converts the extracted information into a message in a natural language, which is provided to the audio converter (a text-to-speech converter) for conversion into an audible signal. The audible signal is provided to the bone conduction headphones for conveying the audible message to the user.
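As a sketch of the natural-language message generation step only: the disclosure requires a natural-language message but does not fix its wording, so the phrasing template and helper name below are assumptions.

```python
# Sketch: turn extracted fields into a short sentence for the
# text-to-speech converter. The phrasing template is an assumption.
def describe_object(name, color=None, distance_cm=None):
    parts = []
    if color:
        parts.append(color)
    parts.append(name)
    message = "There is a " + " ".join(parts)
    if distance_cm is not None:
        message += f" about {distance_cm / 100:.1f} metres in front of you"
    return message + "."

print(describe_object("chair", color="red", distance_cm=120))
# -> "There is a red chair about 1.2 metres in front of you."
```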
[0035] In the same embodiment, the wearable device 105 also comprises a pair of distance sensors for sensing a distance between the device and an object in the user's environment. The processor 120 receives the distance sensed by the distance sensing module and conveys to the user, as an audible message via the pair of bone conduction headphones, the approximate distance of an object (for example, a table, a bed, a person, a traffic light, a pothole, and the like) in front of the user.
[0036] FIG. 3 is a block diagram 300 illustrating use conditions of the wearable device, according to an embodiment of the present invention. The use of the wearable device 105 is not limited to visually impaired 305 people. The wearable device 105 can also be used by a senior citizen or an Alzheimer's patient 310, an adventure enthusiast 315, warehouse or industrial users 320, and the like. The visually impaired 305 people use the wearable device 105 while navigating in an environment, manipulating day-to-day objects, searching for objects, recognizing people, and the like 325. Similarly, senior citizens and Alzheimer's patients 310 use the wearable device 105 while navigating in an environment, taking a walk, and the like 330. Further, the adventure enthusiast 315 uses the wearable device 105 while tracking an activity, listening to music alongside natural hearing, and the like 335. The wearable device 105 is also used by the warehouse or industrial users 320, who use it while training newly employed workers, providing training to scan barcodes, and the like 340.
[0037] FIG. 4 is a flow chart 400 illustrating a method for a wearable device, according to an embodiment of the present invention. At step 405, capturing an image of an environment of the user is provided. The capturing of the image of the environment of the user is performed by an image capturing module. The image capturing module 110 comprises a camera 125. The camera 125 is typically mounted on the front face of the wearable device 105, for example a pair of eyeglasses, for capturing the one or more images of the user's environment.
[0038] The camera 125 may be a 2-D camera, a 3-D camera, and the like. The camera 125 captures one or more images of the environment of the user. In one implementation, depth-sensing cameras may be used for estimating the depth and orientation of the captured images of the user's environment. The camera 125 is also used for estimating 3-D features of the captured images in the environment of the user. The user's environment, as described herein, may refer to an indoor environment or an outdoor environment that the user is typically looking at. The indoor environment refers to the elements encountered in a closed space, for example, the dimensions of a room in which the user is located, a description of the room, an area and a description of the floor of the room; a number of objects lying on a table, the color of an object lying on a table, a shape of an object lying on a table, a name of an object lying on a table; a count of a number of people in the room, a facial expression of a person in the room; a name of a person in the room; an amount of a bill; contact card details; a description of a currency note, and the like. Similarly, the outdoor environment refers to the elements encountered in an open space, for example, a distance between the user and a pothole, an area of a pothole, a depth of a pothole, a quantity of a slope of a road, obstacles on a road, and the like. As described, the camera 125 captures the one or more images of the user's environment, and the captured one or more images are communicated to the processor 120 for further processing.
[0039] At step 410, sensing a distance between the user and an object in the user's
environment is provided. The distance sensing module 115 is configured for sensing a distance between the device and one or more objects in the user's environment. Herein, the device is a user-worn device which is configured to be worn as a pair of eyeglasses. The object in the user's environment refers to an object present in either the indoor environment or the outdoor environment of the user.
[0040] The distance between the device (for example, the pair of eyeglasses) and the object in the user's environment may be sensed by a Time-of-Flight (ToF) laser-ranging module. The ToF laser-ranging module provides an accurate distance measurement. Two ToF sensors are mounted on either side of the wearable device 105, along with the camera 125, to estimate the depth and orientation of the captured images of the environment of the user and the distance of the object in the user's environment.
[0041] At step 415, processing the captured images and the distance between the user and
an object in the user's environment is provided. The processor 120 is configured for receiving the image captured by the image capturing module 110 and the distance sensed by the distance sensing module 115 and for extracting information related to the objects within the environment. The processor 120 used for processing the captured images is an Intel Edison/Joule or similar compatible Single Board Computer (SBC) running Linux/Yocto and executing an algorithm. The algorithm works with images of known objects. The objects may be of different shape, size, texture, pose, color, and the like. The algorithm parses a given environment image into constituent entities, that is, the individual objects present in the environment, using a rough segmentation algorithm. The retrieved entities in the environment, with their rough boundaries, are used to obtain the respective matched codebook from the learned exemplars of individual objects. The azimuth and elevation angle obtained from the matched image is then used as an initial estimate of the pose. Further, to refine the estimate, an ensemble of shape features is used to establish matches between the objects and model silhouettes. Thus, the spatial map of the walkable region in an environment may be made known to the user. The computer-vision-based device may be able to capture the environment and convey a semantic understanding of the environment to the user. Further, the information is given to the user through haptic and audio cue modalities. The haptic and audio cue modalities are used in combination with the bone conduction headphones for conveying the audible message (for example, direction information) to the user.
[0042] At step 420, extracting information related to the object is provided. The extracted information related to the objects may include the dimensions of a room in which the user is located, a description of the room, an area and a description of the floor of the room, a number of objects lying on a table, the color of an object lying on a table, a shape of an object lying on a table, a name of an object lying on a table, a count of a number of people in the room, facial expressions of a person in the room, a name of a person in the room, an amount of a bill, contact card details, a description of a currency note, a distance between the user and a pothole, an area of a pothole, a depth of a pothole, a quantity of a slope of a road, obstacles on a road, and the like. In one embodiment of the present disclosure, such information is extracted by processing the one or more images and the distance data captured by the distance sensing module 115.
[0043] At step 425, converting the extracted information into a message in a natural
language is provided. The processor 120 is configured for converting the extracted information related to the object into a message in a natural language.
[0044] At step 430, converting the message into an audible signal is provided. The generated message is then fed to the audio converter 130 for converting the message into an audible signal. In one embodiment, the audio converter 130 is a text to speech converter which converts the input messages into an audio signal.
[0045] At step 435, conveying the audible message to the visually impaired user is
provided. The converted message, that is, the audio signal, is then provided to the audio output means 135 for conveying the audible message to the user. The audio output means 135, here, is a pair of bone conduction headphones for conveying the audible message to the user through the user-worn device.
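For illustration, the steps of FIG. 4 can be tied together as a compact loop as sketched below; the helper functions are the hypothetical sketches given with the earlier paragraphs, and extract_info() is a placeholder for the vision pipeline rather than the disclosed implementation.

```python
# End-to-end sketch of the FIG. 4 method: capture (405), sense distance (410),
# process and extract (415/420), build a message (425), convert to audio (430)
# and convey it (435). All helpers are hypothetical sketches, not the
# disclosed implementation.
def extract_info(frame, distance_mm):
    """Placeholder for the vision pipeline; returns a name and distance in cm."""
    return {"name": "obstacle", "distance_cm": distance_mm / 10.0}

def assist_loop():
    for frame in capture_frames():                       # step 405
        distance_mm = object_distance_mm()               # step 410
        info = extract_info(frame, distance_mm)          # steps 415 and 420
        message = describe_object(info["name"],          # step 425
                                  distance_cm=info["distance_cm"])
        speak(message)                                    # steps 430 and 435
```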
[0046] The pair of bone conduction headphones, in combination with a directional sound cue generator, is used for conveying direction information to the user. The directional sound cues are provided via an 800 Hertz (Hz) monotone alarm-call generator built around a 555-timer-based circuit. The audio cues are represented by a combination of pan and pitch, wherein pan refers to the channel in which the sound is played and pitch refers to the frequency of the sound.
[0047] While specific language has been used to describe the invention, any limitations arising on account of the same are not intended. As would be apparent to a person skilled in
the art, various working modifications may be made to the method in order to implement the inventive concept as taught herein.
[0048] The figures and the foregoing description give examples of embodiments. Those skilled in the art will appreciate that one or more of the described elements may well be combined into a single functional element. Alternatively, certain elements may be split into multiple functional elements. Elements from one embodiment may be added to another embodiment. For example, orders of processes described herein may be changed and are not limited to the manner described herein. Moreover, the actions of any flow diagram need not be implemented in the order shown; nor do all of the acts necessarily need to be performed. Also, those acts that are not dependent on other acts may be performed in parallel with the other acts. The scope of embodiments is by no means limited by these specific examples. Numerous variations, whether explicitly given in the specification or not, such as differences in structure, dimension, and use of material, are possible.
CLAIMS:
1. A user-worn device for assisting a visually impaired user wearing the device, the device
comprising:
an image capturing module for capturing one or more images of an environment of the
user;
a distance sensing module for sensing a distance between the device and an object in
the user's environment;
a processor for
receiving the one or more images captured by the image capturing module and
the distance sensed by the distance sensing module for extracting information
related to the object;
converting the extracted information into a message in a natural language;
a natural language to audio converter for converting the message into an audio signal; and
an audio output means for conveying the audible message to the user.
2. The device of claim 1, wherein the user-worn device is configured to be worn as a pair of eyeglasses.
3. The device of claim 1, wherein the image capturing module comprises a camera.
4. The device of claim 3, wherein the camera is a 3-D camera.
5. The device of claim 1, wherein the distance sensing module is a pair of distance sensors.
6. The device of claim 1, wherein the information extracted by the processor, related to the object includes:
dimensions of a room in which the user is located, a description of a room, an area and
a description of the floor of the room;
a number of objects lying on a table, color of an object lying on a table, a shape of an
object lying on a table, a name of object lying;
approximate distance of an object in front of the user;
a count of a number of people in the room, a facial expression of a person in the room,
a name of a person in the room;
an amount of a bill;
a contact card details;
a description of a currency note; and
a distance between the user and a pothole, an area of a pothole, a depth of a pothole, a
quantity of a slope of a road, obstacles on a road.
7. The device of claim 1, wherein the audio output means is a pair of bone conduction headphones for conveying the audible message to the user.
8. The device of claim 1 wherein the natural language to audio convertor is a text to speech convertor.
9. A method for assisting a visually impaired user, the method comprising:
capturing one or more images of an environment of the user;
sensing a distance between the user and an object in the user's environment;
processing the one or more captured images and the distance between the user and the object in the user's environment;
extracting information related to the object;
converting the extracted information into a message in a natural language;
converting the message into an audio signal; and
conveying the audible message to the visually impaired user.