Abstract: The present invention relates to a system and method for navigation of the visually impaired. The system disclosed by the present invention aids the visually impaired by providing a wearable device capable of providing tactile feedback. An image capturing device, for example, a smartphone's camera, is utilized to capture real-time images, which are further analyzed using a digital image analysis module installed on the smartphone. The processed image is then translated into a digital visual matrix, which is further transmitted to the wearable device. Tactile feedback providing means fitted on the wearable device are activated based on the interception and interpretation of the visual matrix. The present invention also discloses a method of identifying the obstacles in the path by processing the digital image.
FORM 2
THE PATENTS ACT, 1970
&
The Patents Rules, 2003
COMPLETE SPECIFICATION
(See section 10 and rule 13)
1. TITLE OF THE INVENTION:
SYSTEM AND METHOD FOR CREATING MENTAL PERCEPTION OF
SURROUNDING BY GIVING TACTILE FEEDBACK AROUND EYE
SOCKETS DRIVEN BY SMARTPHONE IMAGE CAPTURE
2. APPLICANT
(a) NAME: Tech Mahindra Limited
(b) NATIONALITY: An Indian Company
(c) ADDRESS: 3rd Floor, Corporate Block, Plot No. 1,
Phase III, Rajiv Gandhi Infotech Park, Hinjewadi, Pune 411 057, Maharashtra, India
3. PREAMBLE TO THE DESCRIPTION
COMPLETE
The following specification particularly describes the invention and the manner in which it is to be performed.
FIELD OF THE INVENTION
The present invention relates to the field of assistive solutions for the visually impaired, enabling them to navigate with the help of an image capturing device, for example, a Smartphone.
DEFINITIONS OF TERMS USED IN THE SPECIFICATION
The expression 'user' used hereinafter in the specification refers to but is not limited to a visually impaired person who requires assistance to navigate his surroundings.
The expression 'Wearable System' used hereinafter in the specification refers to but is not limited to a custom built device which will be worn by the "user" as part of the full system.
The expression 'SmartPhone' used hereinafter in the specification refers to but is not limited to a hand-held mobile device with an operating system, processing capabilities, a Camera and Bluetooth as minimal specifications.
The expression 'tactile feedback' used hereinafter refers to feedback technology which takes advantage of the sense of touch by applying forces, vibrations, or motions to the user.
The expression 'Bluetooth Communication' used hereinafter refers to the Bluetooth® wireless technology for wire-free communication between the smartphone and the wearable system.
The above definitions are in addition to those expressed in the art.
BACKGROUND OF THE INVENTION
There are a large number of people affected with visual impairment worldwide. One of the key challenges faced by the visually challenged in their day to day working is to navigate through obstacles which surround them of which they have no knowledge.
Although available solutions in the market try to address this problem, they fall short in one way or another, as stated hereinafter, and no specific solution addresses this concern effectively.
Giving a blind person sight is a daunting task but an alternate perspective of giving them a feel of their surroundings is feasible.
One of the classical aids takes the form of a stick or a trained dog. Other modern navigation systems for the visually impaired rely upon expensive physical augmentation of the environment or expensive sensing equipment; consequently, few systems have been implemented. Real-time systems under research pose the typical problems of bulkiness, engaging the body parts, using specially designed expensive sensors and providing limited inputs to the user for proper navigational purposes. Other systems rely on augmenting various sensors, such as ultrasonic sensors, which are prone to errors due to the varying sensing ranges of these sensors and interference with other sound waves. Further, the tactile feedback is provided in areas which do not relate to the "natural" human region of the sense of sight.
Accordingly, there exists a need for a system and method of creating mental perception of the surroundings by giving tactile feedback, which overcomes the drawbacks of the prior art.
OBJECTS
Some of the objects of the present invention, aimed at ameliorating one or more problems of the prior art or at least providing a useful alternative, are described herein below:
An object of the present invention is to provide a system and method of enabling a navigational aid for the visually impaired by providing tactile feedback corresponding to the object's position and movement in the surroundings, thereby creating a mental perception of the objects in the vicinity.
Another object of the present invention is to capture object images through commonly available smartphones and to create a computing method to calculate position and direction from image data in real time.
Yet another object of the present invention is to enable image processing and resultant signal generation over the cloud by leveraging high-throughput wireless communication channels like 3G, 4G, and LTE.
As would be evident from the proposed embodiments of the invention, the claimed method provides an end-to-end experience of mental perception of captured images through a sequence of events involving multiple elements.
Other objects and advantages of the present invention will be more apparent from the following description when read in conjunction with the accompanying figures, which are not intended to limit the scope of the present invention.
SUMMARY
In one aspect, the present invention provides a system for creating mental perception of the surroundings by giving tactile feedback around the eye sockets of a visually impaired user. The system includes an image capturing device for capturing real-time images. The image capturing device is configured with a visual matrix corresponding to obstacle position, direction, or a combination thereof. The system further includes at least one image processing module, installed on any one of the image capturing device and a cloud server, for processing the captured images and mapping the captured images to a visual matrix. The system furthermore includes at least one wearable glass to be worn by the visually impaired user. The wearable glass is adaptive to receive the visual matrix from the image capturing device wirelessly. The system moreover includes a plurality of micro-motors positioned on the wearable glass, wherein the plurality of micro-motors are activated based on interception and interpretation of the visual matrix and provide tactile feedback to the visually impaired user.
In another aspect, the present invention provides a method of providing tactile feedback to a visually impaired user. The method includes capturing an image using an image capturing device. The method further includes processing the captured image in the image capturing device using a collision detection module to detect points of collision and classify them into regions of high probability, medium probability and low probability of collision. The method furthermore includes generating a visual matrix based on the classification by the collision detection module. Moreover, the method includes sending the visual matrix to a wearable glass worn by the visually impaired user and activating a plurality of micro-motors positioned on the wearable glass, based on interception and interpretation of the visual matrix, to provide tactile feedback to the visually impaired user.
Typically, the image capturing device is a Smartphone.
BRIEF DESCRIPTION OF THE ACCOMPANYING DRAWINGS
The present invention provides a system and a method of creating a mental perception of surrounding objects by way of image capture through an image capturing device, such as a smartphone camera, coupled with tactile feedback for the visually impaired, which will now be described with the help of the accompanying drawings, in which:
Figures 1 and 2 illustrate the architecture of the system for creating mental perception of the surroundings by giving tactile feedback to the visually impaired, in accordance with one aspect of the present invention;
Figure 3 illustrates the architecture of a wearable system, in accordance with the present invention;
Figure 4a illustrates a flowchart for a method for creating mental perception of surrounding by giving tactile feedback to visually impaired, in accordance with another aspect of the present invention;
Figure 4b illustrates a pictorial representation of the system of figure 1; and
Figure 5 illustrates a cloud-based architecture of the system of figure 1.
DETAILED DESCRIPTION OF THE ACCOMPANYING DRAWINGS
A preferred embodiment will now be described in detail with reference to the accompanying drawings. The preferred embodiment does not limit the scope and ambit of the invention. The description provided is purely by way of example and illustration.
The embodiments herein and the various features and advantageous details thereof are explained with reference to the non-limiting embodiments in the following description. Descriptions of well-known components and processing techniques are omitted so as to not unnecessarily obscure the embodiments herein. The examples used herein are intended merely to facilitate an understanding of ways in which the embodiments herein may be practiced and to further enable those of skill in the art to practice the embodiments herein. Accordingly, the examples should not be construed as limiting the scope of the embodiments herein.
The system and method for creating mental perception of the surroundings by giving tactile feedback to visually impaired individuals uses a wireless technology to render the user hands-free. The system and method provide tactile feedback around the eye sockets of the visually impaired individuals, thereby giving a more comprehensive and natural alternative to vision in a multi-directional way. The wearable component of the system is compact and can be put on like normal glasses, making it light-weight, portable and handy. The system uses an image capturing device, such as a Smartphone camera, as input, which completely eliminates the need to have an expensive, add-on real-time vision aid. Further, the Smartphone also serves as a means of communication and of additional navigational inputs using GPS, Wi-Fi, auditory and voice feedback, thus giving the user an enhanced experience. As of date, there are no products in the market which make use of the smartphone's camera as a "virtual eye" to process real-time imaging and wirelessly communicate with a wearable system which, in turn, provides tactile feedback around the eyes conveying the detailed direction and position of obstacles.
The present invention envisages a system and method for navigation for the visually impaired. The system as disclosed by the present invention includes but is not limited to identification of obstacles in the path and providing intuitive and guided feedback to assist the visually impaired to navigate around them. The system aids the visually impaired by giving them intuitive tactile feedback through the use of digital image analysis algorithms which work on the real-time image captured through the Smart Phone's camera.
Referring now to figure 1, there is shown a system (100) for creating mental perception of the surroundings by giving tactile feedback to visually impaired individuals. The system (100) includes an image capturing device (101), at least one image processing module (not shown), at least one wearable glass (103) to be worn by the visually impaired user, and a plurality of micro-motors (104).
In a preferred embodiment, the image capturing device (101) is a Smartphone. The image capturing device (101) captures real-time images. The image capturing device (101) is configured with a visual matrix corresponding to obstacle position, direction, or a combination thereof.
The image processing module is installed on any one of the image capturing device (101) and a cloud server (105) for processing the captured images and mapping the captured images to the visual matrix.
The at least one wearable glass (103), to be worn by the visually impaired user, is adaptive to receive the visual matrix from the image capturing device wirelessly. In an embodiment, the wireless means is Bluetooth communication (102).
The plurality of micro-motors (104) are positioned on the wearable glass (103). The plurality of micro-motors (104) are activated based on interception and interpretation of the visual matrix and provide tactile feedback to the visually impaired user. The wearable glass (103) comprises strategically positioned micro-motors (104) which are excited based on the contents of the signal from the image capturing device (101), for example the SmartPhone.
As shown in figure 2, the image capturing device (101), for example the Smartphone, is placed at chest level of the user. The smartphone captures the real-time image (106) in front of the user, processes it and translates it into signals. These signals are then transmitted wirelessly, for example by Bluetooth communication (102), to the wearable system/wearable glass (103), which is worn as a "Goggle" by the user. Figure 2 also depicts additional functionalities such as GPS inputs (107) and voice feedback for additional navigational information using a satellite (108). It also depicts a Bluetooth headset to allow the user to carry out normal phone operations such as receiving and making calls.
Referring to figure 3, there is shown an architecture of a wearable system, in accordance with the present invention. The wearable system (103) comprises a Bluetooth module (102), a micro-controller (109) and a series of micro-motors (104) mounted on a "Goggle-like" casing which is to be worn around the eyes like glasses.
Referring now to Figure 4a, there is illustrated a flowchart of a method (200) for creating mental perception of the surroundings by giving tactile feedback to the visually impaired, in accordance with another aspect of the present invention.
The method (200) includes capturing an image using an image capturing device, for example a smartphone. Here, the Smartphone camera is central to the concept, wherein it captures the real-time video image.
The method further includes processing the captured image in the image capturing device using a collision detection module to detect points of collision and classify them into regions of high probability, medium probability and low probability of collision. The method identifies potential points of collision using the collision detection module, which estimates the motion and determines the risk of collision for each frame; this involves computation of local expansion and lateral motion. The collision detection module then classifies these regions as High-Probability, Medium-Probability and Low-Probability of collision. Based on this analysis, a signal matrix is generated and sent across to the wearable system wirelessly via Bluetooth.
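By way of illustration only, the per-frame classification described above may be sketched as follows. The specification does not disclose the exact motion-estimation algorithm, so the 3x3 grid, the mean-absolute-difference motion score and the numeric thresholds below are hypothetical placeholders standing in for the computation of local expansion and lateral motion, not the claimed method itself.

```python
# Illustrative sketch only: grid size, motion metric and thresholds are
# assumptions, not the algorithm disclosed in this specification.

HIGH, MEDIUM, LOW = 2, 1, 0  # hypothetical cell values for the signal matrix


def motion_score(prev_cell, curr_cell):
    """Mean absolute pixel difference between two frames for one grid cell
    (a crude stand-in for local expansion / lateral-motion estimation)."""
    diffs = [abs(a - b) for a, b in zip(prev_cell, curr_cell)]
    return sum(diffs) / len(diffs)


def build_visual_matrix(prev_frame, curr_frame, hi=40, lo=10):
    """Classify each region of a frame (modelled as a 3x3 grid of pixel
    lists) into high/medium/low collision probability and return the
    resulting signal matrix to be sent to the wearable system."""
    matrix = []
    for row_prev, row_curr in zip(prev_frame, curr_frame):
        row = []
        for cell_prev, cell_curr in zip(row_prev, row_curr):
            score = motion_score(cell_prev, cell_curr)
            row.append(HIGH if score >= hi else MEDIUM if score >= lo else LOW)
        matrix.append(row)
    return matrix
```

A region whose pixels change strongly between consecutive frames is marked High-Probability; a static region is marked Low-Probability and, per claim 5, carries a value of zero in the matrix.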
The wearable system (103) intercepts and interprets this signal matrix; based on its values, the appropriate micro-motors positioned around the user's eyes are activated, thus giving the user tactile feedback in the exact region corresponding to the position and direction of the object.
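The wearable-side interpretation can likewise be sketched, again purely for illustration: the micro-controller firmware is not disclosed, `drive_motor` is a hypothetical stand-in for the actual motor-driving output, and the row-major mapping of matrix cells to micro-motors is an assumption.

```python
# Hypothetical sketch of the wearable-side logic; not disclosed firmware.

def drive_motor(motor_id, intensity):
    """Placeholder for the micro-controller's actual motor-driving output."""
    print(f"motor {motor_id} -> intensity {intensity}")


def actuate(visual_matrix):
    """Map each non-zero matrix cell to the micro-motor at the matching
    position around the eye sockets; zero-valued cells (no region of
    interest, per claim 5) leave their motors idle."""
    activated = []
    cols = len(visual_matrix[0])
    for r, row in enumerate(visual_matrix):
        for c, value in enumerate(row):
            if value:
                motor_id = r * cols + c  # row-major motor layout (assumed)
                drive_motor(motor_id, value)
                activated.append((motor_id, value))
    return activated
```

For example, a matrix with a high-probability cell in the upper-right region would excite only the motor mounted at the upper-right of the goggle casing, giving the user a spatially intuitive cue.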
Figure 5 illustrates another mechanism of the system (100), which leverages 4G/LTE channels and a cloud-based system. In this mechanism, the image processing is offloaded to a cloud server (105) for faster processing through the internet (110), i.e., 3G/4G/LTE communication. The cloud server, in turn, sends back the concluded signals to the SmartPhone (101), which, in turn, passes them on to the wearable system (103).
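This offload path may be sketched as follows, strictly as an illustration: the specification only states that processing is offloaded over 3G/4G/LTE, so the JSON payload format, the placeholder endpoint URL and the on-device fallback policy below are all assumptions.

```python
import json

# Illustrative sketch of the cloud-offload decision; endpoint, payload
# format and fallback policy are assumptions, not disclosed details.

CLOUD_URL = "https://example.invalid/process"  # hypothetical endpoint


def encode_frame(frame_bytes):
    """Wrap a captured frame as a JSON payload for transmission to the
    cloud server (assumed wire format)."""
    return json.dumps({"frame": list(frame_bytes)})


def choose_processor(has_fast_link, local_fn, cloud_fn):
    """Offload processing to the cloud when a 3G/4G/LTE link is available,
    otherwise fall back to on-device processing (assumed policy)."""
    return cloud_fn if has_fast_link else local_fn
```

The returned function is then applied to each captured frame, so the same capture-classify-actuate pipeline runs whether the signal matrix is computed on the Smartphone or on the cloud server.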
TECHNICAL ADVANCEMENTS
The technical advancements of the system envisaged by the present invention include the realization of:
1. A Navigation System for the Visually Impaired which uses the SmartPhone Camera as the "Virtual Eye" and interfaces wirelessly with a Wearable System
2. A Wearable System which provides tactile feedback to the user around his eyes, thus making it intuitive and adaptable.
Throughout this specification the word "comprise", or variations such as "comprises" or "comprising", will be understood to imply the inclusion of a stated element, integer or step, or group of elements, integers or steps, but not the exclusion of any other element, integer or step, or group of elements, integers or steps.
The use of the expression "at least" or "at least one" suggests the use of one or more elements or ingredients or quantities, as the use may be in the embodiment of the invention to achieve one or more of the desired objects or results.
The foregoing description of the specific embodiments will so fully reveal the general nature of the embodiments herein that others can, by applying current knowledge, readily modify and/or adapt for various applications such specific embodiments without departing from the generic concept, and, therefore, such adaptations and modifications should and are intended to be comprehended within the meaning and range of equivalents of the disclosed embodiments. It is to be understood that the phraseology or terminology employed herein is for the purpose of description and not of limitation. Therefore, while the embodiments herein have been described in terms of preferred embodiments, those skilled in
the art will recognize that the embodiments herein can be practiced with modification within the spirit and scope of the embodiments as described herein.
CLAIMS:
1. A system for creating mental perception of surrounding by giving tactile
feedback around eye sockets for visually impaired user, the system
comprising:
an image capturing device for capturing real-time images, the image capturing device having visual matrix corresponding to obstacle position, direction and combination thereof;
at least one image processing module installed on any one of the image capturing device and a cloud server for processing the captured images and mapping the captured images to a visual matrix;
at least one wearable glass to be worn by the visually impaired user, the wearable glass adaptive to receive the visual matrix from the image capturing device wirelessly; and
a plurality of micro-motors positioned on the wearable glass, wherein the plurality of micro-motors gets activated based on interception and interpretation of the visual matrix and provides tactile feedback to the visually impaired user.
2. The system as claimed in claim 1, wherein the image capturing device is a Smartphone.
3. The system as claimed in claim 1, wherein the image capturing device communicates with the cloud server over 4G/LTE channels.
4. The system as claimed in claim 1, wherein the image processing module is a collision detection module for detecting points of collision and classifying them into regions of high probability, medium probability and low probability of collision.
5. The system as claimed in claim 1, wherein the visual matrix is a matrix comprising cells holding a specific value corresponding to the regions of interest, while the other cells hold a value of zero.
6. The system as claimed in claim 1, wherein the image capturing device serves at least one of the functions of providing additional navigational inputs to the user using GPS, Wi-Fi, auditory and voice feedback, either individually or in combination.
7. The system as claimed in claim 1, wherein the wearable glass is adaptive to receive the visual matrix from the image capturing device through Bluetooth communication mode.
8. A method of providing tactile feedback to a visually impaired user comprising the steps of:
capturing at least one image using an image capturing device;
processing the captured image in the image capturing device using a collision detection module to detect points of collision and classify them into regions of high probability, medium probability and low probability of collision;
generating a visual matrix based on the classification by the collision detection module;
sending the visual matrix to a wearable glass worn by the visually impaired user; and
activating a plurality of micro-motors positioned on the wearable glass based on interception and interpretation of the visual matrix to provide a tactile feedback to the visually impaired user.
9. The method as claimed in claim 8, wherein the image capturing device is a Smartphone.
10. The method as claimed in claim 8, wherein the collision detection module is installed on any one of the image capturing device and a cloud server.
11. The method as claimed in claim 8, wherein the wearable glass is adaptive to receive the visual matrix from the image capturing device through Bluetooth communication mode.