Abstract: The present disclosure envisages a helmet (1) having an arrangement for assisting a visually impaired user. The helmet (1) is defined by a hollow body (100) having a forehead region (2), a crown region (4), an occipital region (6), and a temple region (8) on either side of the forehead region (2) for enclosing a head of the user. The arrangement comprises an image capturing unit (102) mounted at the forehead region (2), a sensing unit (104) mounted at the crown region (4), a compartment (12) configured at the occipital region (6) having a mini-computer (106) and a battery (114) disposed therein, and a feedback unit (108) mounted at the temple region (8) and at the occipital region (6). The image capturing unit (102), the sensing unit (104), the mini-computer (106), and the feedback unit (108) assist the user to sense and navigate the user's surroundings.
The present disclosure relates to the field of devices for assisting visually impaired people.
DEFINITION
As used in the present disclosure, the following term(s) are generally intended to have the meanings set forth below, except to the extent that the context in which they are used indicates otherwise.
The term 'Forehead region' used hereinafter in the specification refers to an area of the head bounded by three features, two of the skull and one of the scalp, enclosed by a helmet.
The term "Crown region" used hereinafter in the specification refers to the topmost part of the skull or head enclosed by a helmet.
The term "Occipital region" used hereinafter in the specification refers to the back part of the head enclosed by a helmet.
The term "Temple region" used hereinafter in the specification refers to the flat area on the side of the head, in front of each ear.
BACKGROUND
The background information herein below relates to the present disclosure but is not necessarily prior art.
Visually impaired people live in a difficult world. The simple act of walking from one place to another becomes difficult and often dangerous. Walking with a cane helps the user avoid some obstacles, but it does not solve the larger problem of navigation and situational awareness (e.g., there is a window on the left, a table on the right, there is a bus approaching, etc.). Reading signs and printed flex boards presents additional problems. People who are visually impaired therefore rely heavily on their auditory senses to make sense of the world's on-goings, especially in an urban environment, which is not a desirable situation.
Therefore, there is felt a need for an arrangement that assists visually impaired persons in sensing and navigating their surroundings and that alleviates the aforementioned drawbacks.
OBJECTS
Some of the objects of the present disclosure, which at least one embodiment herein satisfies, are as follows:
It is an object of the present disclosure to ameliorate one or more problems of the prior art or to at least provide a useful alternative.
An object of the present disclosure is to provide an arrangement for assisting a visually impaired user to independently navigate their surroundings.
Yet another object of the present disclosure is to provide an arrangement for assisting a visually impaired user that easily fits into existing head-worn devices.
Still another object of the present disclosure is to provide an arrangement for assisting a visually impaired user that is portable.
Still yet another object of the present disclosure is to provide an arrangement for assisting a visually impaired user that is user friendly.
Other objects and advantages of the present disclosure will be more apparent from the following description when read in conjunction with the accompanying figures, which are not intended to limit the scope of the present disclosure.
SUMMARY
The present disclosure envisages a helmet having an arrangement for assisting a visually impaired user. The helmet is defined by a hollow body having a forehead region, a crown region, an occipital region, and a temple region on either side of the forehead region. The helmet encloses a head of the user. The arrangement comprises an image capturing unit mounted at the forehead region, a sensing unit mounted at the crown region, a compartment configured at the occipital region having a mini-computer and a battery disposed therein, and a feedback unit mounted at the temple region and at the occipital region. The image capturing unit, the sensing unit, the mini-computer, and the feedback unit assist the visually impaired user to sense and navigate the user's surroundings.
In an embodiment, the image capturing unit is configured to capture at least one media of an environment surrounding a user.
In an embodiment, the sensing unit is configured to sense the surrounding environment and calculate a distance between the user and an object in the environment. The sensing unit is further configured to generate a map of the surrounding environment based on the calculated distances.
In an embodiment, the mini-computer is communicatively coupled to the image capturing unit and the sensing unit to receive the captured media and the map of the surrounding environment. The mini-computer is further configured to process the received map to generate a first set of output signals and analyze the captured media to generate a second set of output signals.
In an embodiment, the feedback unit is configured to cooperate with the mini-computer to receive the first and second set of output signals. The feedback unit is further configured to provide the user at least one of an audio feedback or a haptic feedback:
to assist the user to navigate the surrounding environment based on the first set of output signals; and
to enable the user to understand at least one of objects, scenes, faces, and text in the surrounding environment based on the second set of output signals.
In an embodiment, the battery is configured to supply power to the image capturing unit, the sensing unit, the mini-computer, and the feedback unit.
In an embodiment, the sensing unit is a light detection and ranging (LiDAR) sensor configured to transmit light onto near objects and further configured to receive modulated light from the scanned objects to determine the distance between the user and the stationary or moving objects.
In an embodiment, the captured media is an image or a video.
In an embodiment, the sensing unit is operatively disposed over the mini-computer.
In an embodiment, the mini-computer comprises a memory card, a navigation assistance module, and an environment detection module. The memory card is configured to store coordinates of predetermined locations and pathways, algorithms, a list of names and images of known people and friends, a list of abstract words used to describe scenes, and a list of names and images of pre-stored objects. The navigation assistance module is configured to plot a user-defined goal on the map, determine a pathway to the plotted goal, and detect objects in the pathway using one or more pre-trained models. The environment detection module is configured to recognize scenes in the captured media, detect and identify faces in the captured media, and recognize text in the captured media using one or more pre-trained models.
The navigation assistance module and the environment detection module are implemented using one or more processor(s).
In an embodiment, the navigation assistance module includes a mapping module, an object detection module, and a lane detection module. The mapping module is configured to receive the data points from the sensing unit. The mapping module is further configured to cooperate with the memory card to extract the stored coordinates of predetermined target locations to help navigate the visually impaired person to the set destination. The object detection module is configured to detect objects in the pathway. The object detection module is further configured to cooperate with the memory card to extract the pre-stored list of names and images of objects, and to classify and determine the object using one or more pre-trained object detection models. The lane detection module is configured to receive the determined pathway of the plotted goal on the map. The lane detection module is further configured to cooperate with the object detection module to align the visually impaired user to the left or right waypoint of the determined pathway using pre-trained models to determine the lane.
In an embodiment, the environment detection module includes a face recognition module, a scene recognition module, and an optical character reader (OCR) module. The face recognition module is configured to receive the captured media from the image capturing unit and is configured to cooperate with the memory card to extract the list of images and names of known persons and detect the face of the approaching person. The face recognition module is further configured to match the face of the approaching person with the stored name and image, and to determine whether the approaching person is a friend, a family member, or an unknown person via an audio feedback generated by the feedback unit using pre-trained face recognition models. The scene recognition module is configured to receive the captured media from the image capturing unit and the list of abstract names from the memory card. The scene recognition module is further configured to detect the objects present in the surrounding environment and determine the scene via an audio feedback generated by the feedback unit using pre-trained scene recognition models. The optical character recognition (OCR) module is configured to receive captured media having text from the image capturing unit and the list of words from the memory card. The OCR module is further configured to detect the text, determine words from the text, and convey the determined words via an audio feedback generated by the feedback unit using pre-trained OCR models.
In an embodiment, the feedback unit comprises a vibration motor and headphones. The vibration motor is coupled at the temple region. The vibration motor is configured to activate and vibrate upon receiving the first set of output signals to notify the user of approaching detected objects and of instances when the user tries to approach a wrong lane. The headphones are coupled to the mini-computer. The headphones are configured to receive the second set of output signals and produce a corresponding sound output.
In an embodiment, the image capturing unit, the sensing unit, the mini-computer, the battery, and the feedback unit are removably coupled to the body.
In an embodiment, the image capturing unit is a miniature camera which is communicatively coupled with the mini-computer using a wired communication media.
In an embodiment, the crown region of the helmet includes a plurality of air vents configured thereon to provide comfort to the user.
In an embodiment, the sensing unit is mounted on a plurality of brackets mounted at the crown region.
The present disclosure also envisages a method for assisting a visually impaired user. The method comprises the following steps:
capturing, by the image capturing unit, at least one media of the environment surrounding the user;
scanning and calculating, by the sensing unit, a distance to each object in the environment;
generating, by the sensing unit, a map of the surrounding environment based on the calculated distances;
receiving, by the mini-computer, the captured media and the map of the surrounding environment from the image capturing unit and the sensing unit;
processing, by the mini-computer, the received map to generate a first set of output signals;
analyzing, by the mini-computer, the captured media to generate a second set of output signals;
receiving, by a feedback unit, the first and second set of output signals from the mini-computer; and
providing, by the feedback unit, the user with at least one of an audio feedback and a haptic feedback based on the first set of output signals and the second set of output signals, respectively, to assist the visually impaired user in sensing and navigating their surroundings.
In an embodiment, the step of providing feedback includes further method steps of:
assisting, by the feedback unit via a vibration motor, the user to safely navigate their surrounding environment based on the haptic feedback; and
enabling, by the feedback unit via a plurality of headphones, the user to understand at least one of objects, scenes, faces, and text in the surrounding environment based on the audio feedback.
BRIEF DESCRIPTION OF THE ACCOMPANYING DRAWING
An arrangement for assisting a visually impaired user of the present disclosure will now be described with the help of the accompanying drawing, in which:
Figure 1 illustrates an isometric view of a helmet with the arrangement removably mounted thereon, in accordance with an embodiment of the present disclosure;
Figure 2 illustrates a top view of the helmet depicting a light detection and ranging (LiDAR) sensor of the arrangement of Figure 1 mounted thereon;
Figure 3 illustrates a rear view of the helmet depicting a mini-computer and a battery of the arrangement of Figure 1 disposed in a compartment;
Figure 4A illustrates an isometric view of the sensing unit of Figure 1;
Figure 4B illustrates an isometric view of the image capturing unit of Figure 1;
Figure 4C illustrates an isometric view of the mini-computer of Figure 1;
Figure 4D and Figure 4E illustrate isometric views of the feedback unit of Figure 1;
Figure 5 illustrates a schematic block diagram depicting the internal components and modules of the arrangement of Figure 1; and
Figure 6A and Figure 6B illustrate a flow chart depicting method steps of assisting a visually impaired user in navigating their surroundings.
LIST OF REFERENCE NUMERALS
1 - Helmet
2 - Forehead region
4 - Crown region
5 - USB cable
6 - Occipital region
8 - Temple region
12 - Compartment
14 - Air vents
100 - Body
102 - Image capturing unit
104 - Sensing unit
105 - Wired communication media
106 - Mini-computer
106A - Navigation assistance module
106B - Environment detection module
107 - Brackets
108 - Feedback unit
108A - Vibration motor
108B - Headphones
110 - Memory card
110A - Face recognition module
110B - Scene recognition module
110C - Optical character recognition (OCR) module
111 - Global positioning system (GPS) module
112 - Mapping module
112A - Object detection module
112B - Lane detection module
114 - Battery
DETAILED DESCRIPTION
Embodiments of the present disclosure will now be described with reference to the accompanying drawing.
Embodiments are provided so as to thoroughly and fully convey the scope of the present disclosure to the person skilled in the art. Numerous details are set forth, relating to specific components and methods, to provide a complete understanding of embodiments of the present disclosure. It will be apparent to the person skilled in the art that the details provided in the embodiments should not be construed to limit the scope of the present disclosure. In some
embodiments, well-known processes, well-known apparatus structures, and well-known techniques are not described in detail.
The terminology used in the present disclosure is only for the purpose of explaining a particular embodiment and such terminology shall not be considered to limit the scope of the present disclosure. As used in the present disclosure, the forms "a", "an", and "the" may be intended to include the plural forms as well, unless the context clearly suggests otherwise. The terms "including" and "having" are open-ended transitional phrases and therefore specify the presence of stated features, steps, operations, elements and/or components, but do not forbid the presence or addition of one or more other features, steps, operations, elements, components, and/or groups thereof.
The present disclosure envisages an arrangement for assisting a visually impaired user. The arrangement is provided in a helmet 1. The helmet 1 is defined by a hollow body 100 having a forehead region 2, a crown region 4, an occipital region 6, and a temple region 8 on either side of the forehead region 2 of the helmet 1. The helmet 1 encloses a head of the visually impaired user.
The arrangement is now described with reference to Figure 1 through Figure 5.
Figure 1 and Figure 2 depict the arrangement for assisting a visually impaired user. The arrangement comprises an image capturing unit 102 mounted at the forehead region 2, a sensing unit 104 mounted at the crown region 4, a compartment 12 (refer Figure 3) configured at the occipital region 6 having a mini-computer 106, and a battery 114 disposed therein, and a feedback unit 108 mounted at the temple region 8 and at the occipital region 6. In an embodiment, the arrangement of the present disclosure can be mounted on any type of helmet which encloses or covers the head of a user.
Figures 4A to 4E depict the image capturing unit 102, the sensing unit 104, the mini-computer 106, and the feedback unit 108, which together assist the visually impaired user to sense and navigate their surroundings.
In an embodiment, the image capturing unit 102, the sensing unit 104, and the mini-computer 106 are powered by a rechargeable battery 114 via a USB cable 5.
The image capturing unit 102 is configured to capture at least one media of an environment surrounding a user. In an embodiment, the captured media is an image or a video. In an embodiment, the image capturing unit 102 is a miniature camera which is communicatively coupled with the mini-computer 106 using a wired communication media 105. In another embodiment, the image capturing unit 102 can communicate wirelessly with the mini-computer 106.
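The disclosure does not prescribe a particular camera interface or software stack. Purely as a non-limiting illustration, a miniature camera could be polled from the mini-computer as sketched below, assuming the OpenCV (cv2) library is available; the device index and function names are assumptions for illustration only.

```python
# Illustrative sketch (assumption): grabbing one frame from a miniature camera
# attached to the mini-computer using OpenCV. Device index 0 is assumed.
import cv2

def capture_media(device_index: int = 0):
    """Return a single captured frame (image), or None if the camera fails."""
    cap = cv2.VideoCapture(device_index)   # open the wired camera
    try:
        ok, frame = cap.read()              # grab one frame
        return frame if ok else None
    finally:
        cap.release()                       # always release the camera handle

if __name__ == "__main__":
    image = capture_media()
    print("captured a frame" if image is not None else "camera not available")
```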
The sensing unit 104 is configured to scan the surrounding environment and calculate a distance between the user and the object in the environment. The sensing unit 104 is further configured to generate a map of the surrounding environment based on the calculated distances. In an embodiment, the sensing unit 104 is a light detection and ranging (LiDAR) sensor communicatively coupled with the mini-computer 106 using wire or wireless communication technology. In another embodiment, the LiDAR is a 360° LiDAR configured to sense the entire surrounding of the user.
The sensing unit 104 is mounted on a plurality of brackets 107 at the crown region 4 (refer Figure 1). The LiDAR includes an emitter, a scanner, a receiver, and a signal conditioning unit. The emitter is configured to emit pulses of laser light. The scanner is configured to direct the emitted pulses of light in accordance with a scanned pattern to illuminate a field of view of the LiDAR module.
The receiver is configured to detect the emitted pulses of light scattered by one or more nearby objects.
The signal conditioning unit is configured to receive the reflected light beams and is further configured to convert the light beams into digital values corresponding to distance.
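By way of a non-limiting illustration, the conversion from reflected light to a distance value typically follows the time-of-flight relation, distance = (speed of light × round-trip time) / 2. The short sketch below demonstrates that relation; the function name and sample value are assumptions, not part of the disclosure.

```python
# Illustrative sketch (assumption): converting a measured round-trip
# (time-of-flight) value into a distance, as a signal conditioning unit might.
SPEED_OF_LIGHT_M_S = 299_792_458.0

def tof_to_distance_m(round_trip_seconds: float) -> float:
    """Distance to the reflecting object; the light travels out and back."""
    return SPEED_OF_LIGHT_M_S * round_trip_seconds / 2.0

if __name__ == "__main__":
    # A round trip of about 33.3 ns corresponds to an object roughly 5 m away.
    print(round(tof_to_distance_m(33.3e-9), 2))
```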
Figure 5 depicts a block diagram illustrating the internal modules of the mini-computer 106. The mini-computer 106 is communicatively coupled to the image capturing unit 102 and the sensing unit 104 to receive the captured media and the map of the surrounding environment. The mini-computer 106 is further configured to process the received map to generate a first set of output signals and analyze the captured media to generate a second set of output signals.
In an embodiment, the mini-computer 106 comprises a memory card 110, a navigation assistance module 106A, and an environment detection module 106B.
The memory card 110 is configured to store coordinates of predetermined locations and pathways, algorithms, a look-up table containing a list of names and images of known people and friends, a list of abstract words used to describe scenes, and a list of names and images of pre-stored objects.
The navigation assistance module 106A is configured to plot a user-defined goal on the map, determine a pathway to the plotted goal, and detect objects in the pathway using one or more pre-trained models.
In an embodiment, the navigation assistance module 106A includes a mapping module 112, an object detection module 112A, and a lane detection module 112B.
The mapping module 112 is configured to receive the data points from the sensing unit 104 to compute the co-ordinates of the user, and is further configured to cooperate with the memory card 110 to extract the pre-determined co-ordinates of the user. The mapping module 112 is further configured to compare the computed co-ordinates with the pre-determined co-ordinates to generate a map and locate the user in the surroundings. The mapping module 112 is configured to wirelessly communicate the exact location of the visually impaired user to his family and friends. In an embodiment, the mapping module includes a global positioning system (GPS) module 111 configured to determine the co-ordinates of the user.
The mapping module 112 is further configured to compute the following quantities (a non-limiting illustrative sketch of these computations is provided after this list):
• distance between the user and the nearby objects (steady and moving or approaching objects) in the pathway;
• speed of the one or more objects based on the received distance measurements;
• acceleration of the one or more objects based on the received distance measurements; and
• direction of one or more objects based on the received distance measurements.
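A non-limiting sketch of these computations is given below. The reading format (timestamp, distance, bearing) and the finite-difference approach are illustrative assumptions; the disclosure does not prescribe a particular algorithm.

```python
# Illustrative sketch (assumption): deriving distance, speed, acceleration, and
# direction of a tracked object from successive LiDAR readings by finite
# differences. Reading format (time_s, distance_m, bearing_deg) is assumed.
from typing import List, Tuple

def motion_from_readings(readings: List[Tuple[float, float, float]]) -> dict:
    """Use the last three readings of one tracked object."""
    (t0, d0, _), (t1, d1, _), (t2, d2, b2) = readings[-3:]
    v1 = (d1 - d0) / (t1 - t0)        # radial speed over the first interval (m/s)
    v2 = (d2 - d1) / (t2 - t1)        # radial speed over the second interval (m/s)
    accel = (v2 - v1) / (t2 - t1)     # change of radial speed (m/s^2)
    return {"distance_m": d2,
            "speed_m_s": abs(v2),
            "acceleration_m_s2": accel,
            "direction": "approaching" if v2 < 0 else "receding",
            "bearing_deg": b2}

if __name__ == "__main__":
    # An object closing in: 6.0 m -> 5.0 m -> 3.8 m over one-second intervals.
    print(motion_from_readings([(0.0, 6.0, 10.0), (1.0, 5.0, 9.0), (2.0, 3.8, 8.0)]))
```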
The object detection module 112A is configured to detect objects in the pathway. The object detection module 112A is further configured to cooperate with the memory card 110 to extract the pre-stored list of names and images of objects, and to classify and determine the object using one or more pre-trained object detection models.
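Purely as a non-limiting illustration, the cooperation between the detector and the pre-stored list could be organised as sketched below; `run_pretrained_detector` is a hypothetical placeholder for whichever pre-trained object detection model is deployed, and the score threshold is an assumption.

```python
# Illustrative sketch (assumption): filtering the output of a pre-trained
# detector against the pre-stored list of object names from the memory card.
from typing import Callable, Dict, List

def detect_known_objects(frame,
                         run_pretrained_detector: Callable[[object], List[Dict]],
                         known_object_names: List[str],
                         min_score: float = 0.5) -> List[Dict]:
    """Keep only detections whose label appears in the pre-stored list."""
    detections = run_pretrained_detector(frame)   # e.g. [{"label": "car", "score": 0.9}, ...]
    return [d for d in detections
            if d["label"] in known_object_names and d["score"] >= min_score]

if __name__ == "__main__":
    fake_detector = lambda _frame: [{"label": "car", "score": 0.92},
                                    {"label": "kite", "score": 0.88}]
    print(detect_known_objects(None, fake_detector, ["car", "person", "bus"]))
```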
The lane detection module 112B is configured to receive the determined pathway of the plotted goal on the map. The lane detection module 112B is further configured to cooperate with the object detection module 112A to align the visually impaired user to the left or right waypoint of the determined pathway using pre-trained lane detection models to determine the lane. The mapping module 112, the object detection module 112A, and the lane detection module 112B are implemented using one or more processor(s).
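As a non-limiting illustration of the alignment behaviour, once a pre-trained lane detection model has estimated the user's lateral offset from the determined pathway, the corrective cue can be chosen as sketched below; the offset sign convention and tolerance are assumptions.

```python
# Illustrative sketch (assumption): choosing a corrective cue from the user's
# lateral offset relative to the determined pathway (negative = drifted left).
def lane_correction(lateral_offset_m: float, tolerance_m: float = 0.3) -> str:
    """Return the cue needed to keep the user aligned with the pathway."""
    if lateral_offset_m < -tolerance_m:
        return "move right"    # user has drifted left of the waypoint
    if lateral_offset_m > tolerance_m:
        return "move left"     # user has drifted right of the waypoint
    return "on path"

if __name__ == "__main__":
    print(lane_correction(-0.6))   # -> "move right"
    print(lane_correction(0.1))    # -> "on path"
```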
The environment detection module 106B is configured to recognize scenes in the captured media, detect and identify faces in the captured media, and recognize text printed on a hoarding, flex, or road sign in the captured media using one or more pre-trained models. In an embodiment, the environment detection module 106B includes a face recognition module 110A, a scene recognition module 110B, and an optical character reader (OCR) module 110C.
The face recognition module 110A is configured to receive the captured media from the image capturing unit and is configured to cooperate with the memory card 110 to extract the list of images and names of known persons and detect the face of the approaching person. The face recognition module 110A is further configured to match the face of the approaching person along with his name and image and determine whether the approaching person is a friend, a family member, or an unknown person via the feedback unit using pre-trained face recognition models.
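As a non-limiting illustration, the matching of an approaching face against the stored list can be sketched as follows; `embed_face` is a hypothetical placeholder for whichever pre-trained face recognition model produces face embeddings, and the distance threshold is an assumption.

```python
# Illustrative sketch (assumption): matching a detected face against the names
# and images stored on the memory card using embedding distances.
from typing import Callable, Dict, Optional
import numpy as np

def identify_person(face_image,
                    embed_face: Callable[[object], np.ndarray],
                    known_embeddings: Dict[str, np.ndarray],
                    threshold: float = 0.6) -> Optional[str]:
    """Return the stored name of the closest match, or None for an unknown person."""
    query = embed_face(face_image)
    best_name, best_dist = None, float("inf")
    for name, stored in known_embeddings.items():
        dist = float(np.linalg.norm(query - stored))   # Euclidean distance
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist <= threshold else None
```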
The scene recognition module 110B is configured to receive the captured media from the image capturing unit 102 and the list of abstract names from the memory card 110. The scene recognition module 110B is further configured to detect the objects present in the surrounding environment, and determine the scene via the feedback unit 108 using pre-trained scene recognition models.
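Purely as a non-limiting illustration, a scene can be inferred by relating the detected objects to the stored list of abstract scene descriptions, as sketched below; the keyword-to-scene mapping shown is an assumption.

```python
# Illustrative sketch (assumption): picking an abstract scene word whose
# associated keywords best overlap the objects detected in the captured media.
from typing import Dict, List, Set

def describe_scene(detected_labels: List[str],
                   scene_keywords: Dict[str, Set[str]]) -> str:
    """Return the stored scene description with the largest keyword overlap."""
    labels = set(detected_labels)
    best_scene, best_overlap = "unknown scene", 0
    for scene, keywords in scene_keywords.items():
        overlap = len(labels & keywords)
        if overlap > best_overlap:
            best_scene, best_overlap = scene, overlap
    return best_scene

if __name__ == "__main__":
    stored = {"street": {"car", "bus", "traffic light"}, "park": {"tree", "bench"}}
    print(describe_scene(["car", "person", "bus"], stored))   # -> "street"
```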
The optical character recognition (OCR) module 110C is configured to receive captured media having text from the image capturing unit 102 and the list of words from the memory card 110. The OCR module 110C is further configured to detect the text, determine words from the text, and convey the determined words via the feedback unit 108 using pre-trained OCR models.
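The disclosure does not name a specific OCR engine. A non-limiting sketch assuming the pytesseract package and Pillow are installed on the mini-computer is shown below; the image path is a hypothetical placeholder.

```python
# Illustrative sketch (assumption): extracting text from a captured frame so it
# can be conveyed to the user, using the pytesseract OCR package.
from PIL import Image
import pytesseract

def read_text_from_media(image_path: str) -> str:
    """Return the words recognised in the captured image."""
    image = Image.open(image_path)
    return pytesseract.image_to_string(image).strip()

if __name__ == "__main__":
    # Hypothetical path to a frame saved by the image capturing unit.
    print(read_text_from_media("captured_frame.jpg"))
```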
The feedback unit 108 is configured to cooperate with the mini-computer 106 to receive the first and second set of output signals. The feedback unit 108 is further configured to provide the user at least one of an audio feedback or a haptic feedback to assist the user to navigate the surrounding environment based on the first set of output signals, and to enable the user to understand at least one of objects, scenes, faces, and text in the surrounding environment based on the second set of output signals.
In an embodiment, the feedback unit 108 comprises a vibration motor 108A and headphones 108B. The vibration motor 108A is coupled to a body part of the user. The vibration motor 108A is configured to activate and vibrate upon receiving the first set of output signals to notify the user regarding the approaching detected objects, and when the user tries to approach a wrong lane. In an embodiment, the vibration motor 108A is configured to be removably coupled to the temple region 8 of the helmet 1.
The headphones 108B are coupled to the mini-computer 106. The headphones 108B extend from the mini-computer 106 at the occipital region 6 to the ears of the user. The headphones 108B are configured to receive the second set of output signals and produce a corresponding sound output to allow the user to listen to the words, text, and name of the person.
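Purely as a non-limiting illustration, the two feedback channels could be driven as sketched below, assuming a Raspberry-Pi-style mini-computer with the RPi.GPIO and pyttsx3 packages; the GPIO pin number and the libraries are assumptions, not part of the disclosure.

```python
# Illustrative sketch (assumption): pulsing a vibration motor for haptic alerts
# and speaking messages through the headphones on a Raspberry-Pi-style board.
import time
import RPi.GPIO as GPIO
import pyttsx3

MOTOR_PIN = 18   # assumed GPIO pin wired to the vibration motor

def haptic_alert(duration_s: float = 0.5) -> None:
    """Pulse the vibration motor (driven by the first set of output signals)."""
    GPIO.setmode(GPIO.BCM)
    GPIO.setup(MOTOR_PIN, GPIO.OUT)
    GPIO.output(MOTOR_PIN, GPIO.HIGH)
    time.sleep(duration_s)
    GPIO.output(MOTOR_PIN, GPIO.LOW)

def audio_feedback(message: str) -> None:
    """Speak a message through the headphones (second set of output signals)."""
    engine = pyttsx3.init()
    engine.say(message)
    engine.runAndWait()
```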
The image capturing unit 102, the sensing unit 104, the mini-computer 106, and the feedback unit 108 are configured to enable the visually impaired user to safely and independently navigate their surroundings and help them know whether a person approaching them is known or unknown. Also, the arrangement helps in enabling the visually impaired person to know whether a vehicle is approaching them. The arrangement helps the visually impaired user
to know whether they are approaching a wrong lane or path, thereby making their navigation safe.
Figures 6A and 6B illustrate a flow chart depicting a method 200 for assisting a visually impaired user. The method 200 comprises the following steps (a non-limiting illustrative sketch of the overall flow is provided after the listed steps):
i. At step 202, capturing, by the image capturing unit 102, at least one media of the environment surrounding the user;
ii. At step 204, scanning and calculating, by the sensing unit 104, a distance between the user and an object (stationary or moving) in the environment;
iii. At step 206, generating, by the sensing unit 104, a map of the surrounding environment based on the calculated distances;
iv. At step 208, receiving, by the mini-computer 106, the captured media and the map of the surrounding environment from the image capturing unit 102 and the sensing unit 104;
v. At step 210, processing, by the mini-computer 106, the received map to generate a first set of output signals;
vi. At step 212, analyzing, by the mini-computer 106, the captured media to generate a second set of output signals;
vii. At step 214, receiving, by the feedback unit 108, the first and second set of output signals from the mini-computer 106; and
viii. At step 216, providing, by the feedback unit 108, the user with at least one of an audio feedback and a haptic feedback based on the first set of output signals and the second set of output
signals respectively to assist the user to sense and navigate their surroundings.
In an embodiment, the step of providing feedback (216) includes further method steps of:
ix. At step 218, assisting, by the feedback unit 108, the user to navigate the surrounding environment based on the haptic feedback; and
x. At step 220, enabling, by the feedback unit 108, the user to understand at least one of objects, scenes, faces, and text in the surrounding environment based on the audio feedback.
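A non-limiting sketch tying the listed steps together as a processing loop on the mini-computer is given below; every helper named here is a hypothetical placeholder for the corresponding module described above.

```python
# Illustrative sketch (assumption): the method 200 as a continuous loop.
# All helpers (capture_media, scan_environment, ...) are hypothetical placeholders.
def assistive_loop(capture_media, scan_environment, build_map,
                   process_map, analyse_media, haptic_alert, audio_feedback):
    while True:
        media = capture_media()                       # step 202
        distances = scan_environment()                # step 204
        surroundings_map = build_map(distances)       # step 206
        first_signals = process_map(surroundings_map)     # steps 208-210
        second_signals = analyse_media(media)             # step 212
        for signal in first_signals:                  # steps 214 and 218: haptic cues
            haptic_alert(signal)
        for message in second_signals:                # steps 214 and 220: audio cues
            audio_feedback(message)
```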
The foregoing description of the embodiments has been provided for purposes of illustration and is not intended to limit the scope of the present disclosure. Individual components of a particular embodiment are generally not limited to that particular embodiment, but are interchangeable. Such variations are not to be regarded as a departure from the present disclosure, and all such modifications are considered to be within the scope of the present disclosure.
TECHNICAL ADVANCEMENTS AND ECONOMICAL SIGNIFICANCE
The present disclosure described herein above has several technical advantages including, but not limited to, the realization of an arrangement for assisting a visually impaired user that:
• is portable;
• is user friendly;
• is compact;
• helps visually impaired users to independently sense and navigate their surroundings; and
• can be easily coupled to any helmet.
The embodiments herein, the various features, and advantageous details thereof are explained with reference to the non-limiting embodiments in the following description. Descriptions of well-known components and processing techniques are omitted so as to not unnecessarily obscure the embodiments herein. The examples used herein are intended merely to facilitate an understanding of ways in which the embodiments herein may be practiced and to further enable those of skill in the art to practice the embodiments herein. Accordingly, the examples should not be construed as limiting the scope of the embodiments herein.
The foregoing description of the specific embodiments so fully reveals the general nature of the embodiments herein that others can, by applying current knowledge, readily modify and/or adapt for various applications such specific embodiments without departing from the generic concept, and, therefore, such adaptations and modifications should and are intended to be comprehended within the meaning and range of equivalents of the disclosed embodiments. It is to be understood that the phraseology or terminology employed herein is for the purpose of description and not of limitation. Therefore, while the embodiments herein have been described in terms of preferred embodiments, those skilled in the art will recognize that the embodiments herein can be practiced with modification within the spirit and scope of the embodiments as described herein.
While considerable emphasis has been placed herein on the components and component parts of the preferred embodiments, it will be appreciated that many embodiments can be made and that many changes can be made in the preferred embodiments without departing from the principles of the
disclosure. These and other changes in the preferred embodiment as well as other embodiments of the disclosure will be apparent to those skilled in the art from the disclosure herein, whereby it is to be distinctly understood that the foregoing descriptive matter is to be interpreted merely as illustrative of the disclosure and not as a limitation.
WE CLAIM:
1. A helmet (1) having an arrangement for assisting a visually impaired
user, said helmet (1) defining a hollow body (100) having a forehead
region (2), a crown region (4), an occipital region (6), and a temple region
(8) on either side of the forehead region (2), said helmet (1) enclosing a
head of a user, said arrangement comprising:
a. an image capturing unit (102) mounted at the forehead region (2);
b. a sensing unit (104) mounted at the crown region (4);
c. a compartment (12) defined at the occipital region (6), having a
mini-computer (106) and a battery (114) disposed therein; and
d. a feedback unit (108) disposed at the temple region (8) and at the
occipital region (6);
wherein said image capturing unit (102), said sensing unit (104), said mini-computer (106), and said feedback unit (108) assist the visually impaired user to sense and navigate the user's surroundings.
2. The arrangement as claimed in claim 1, wherein said image capturing unit (102) is configured to capture at least one media of an environment surrounding a user.
3. The arrangement as claimed in claim 1, wherein said sensing unit (104) is configured to sense the surrounding environment and calculate a distance between the user and an object in the environment, and further configured to generate a map of the surrounding environment based on said calculated distances.
4. The arrangement as claimed in claim 1, wherein said mini-computer (106) is communicatively coupled to said image capturing unit (102) and
said sensing unit (104) to receive the captured media and the map of the surrounding environment, said mini-computer (106) further configured to process the received map to generate a first set of output signals and analyze the captured media to generate a second set of output signals.
5. The arrangement as claimed in claim 4, wherein said feedback unit (108)
is configured to cooperate with said mini-computer (106) to receive said
first and second set of output signals, said feedback unit (108) further
configured to provide the user at least one of an audio feedback or a
haptic feedback:
i. to assist the user to navigate the surrounding environment based on said first set of output signals; and
ii. to enable the user to understand at least one of objects, scenes, faces, and text in the surrounding environment based on said second set of output signals.
6. The arrangement as claimed in claim 1, wherein said battery (114) is configured to supply power to said image capturing unit (102), said sensing unit (104), said mini-computer (106), and said feedback unit (108).
7. The arrangement as claimed in claim 1, wherein said sensing unit (104) is a light detection and ranging (LiDAR) sensor configured to transmit light onto near objects and further configured to receive modulated light from the scanned objects to determine the distance between the user and the stationary or moving objects.
8. The arrangement as claimed in claim 1, wherein said captured media is an image or a video.
9. The arrangement as claimed in claim 1, wherein said sensing unit (104) is operatively disposed over said mini-computer (106).
10. The arrangement as claimed in claim 4, wherein said mini-computer (106) comprises:
a. a memory card (110) configured to store coordinates of
predetermined locations and pathways, algorithms, a list of names
and images of known people and friends, a list of abstract words
used to describe scenes, and a list of names and images of pre-stored
objects;
b. a navigation assistance module (106A) configured to plot a user-
defined goal on the map, determine a pathway to the plotted goal,
and detect objects in the pathway using one or more pre-trained
models; and
c. an environment detection module (106B) configured to recognize
scenes in the captured media, detect and identify faces in the
captured media, and recognize text in the captured media using
one or more pre-trained models;
wherein said navigation assistance module (106A) and said environment detection module (106B) are implemented using one or more processor(s).
11. The arrangement as claimed in claim 10, wherein said navigation
assistance module (106A) includes:
a. a mapping module (112) configured to receive the data points from said sensing unit (104), and further configured to cooperate with said memory card (110) to extract said stored coordinates of
predetermined target location data points to help navigate the visually impaired person to the set destination;
b. an object detection module (112A) configured to detect objects in
the pathway, and further configured to cooperate with memory
card (110) to extract said pre-stored list of names and images of
objects and classify and determine the object using one or more
pre-trained object detection models; and
c. a lane detection module (112B) configured to receive the determined
pathway of the plotted goal on the map, and further configured to
cooperate with said object detection module (112A) to align the
visually impaired user to the left or right waypoint of the determined
pathway using pre-trained models to determine the lane.
12. The arrangement as claimed in claim 10, wherein said environment detection module (106B) includes:
a. a face recognition module (110A) configured to receive said
captured media from said image capturing unit (102) and
configured to cooperate with said memory card (110) to extract
the list of images and names of known persons and detect the face
of the approaching person, and further configured to match the
face of the approaching person along with his name and image
and determine whether the approaching person is a friend, a
family member, or an unknown person via an audio feedback
generated by said feedback unit (108) using pre-trained face
recognition models;
b. a scene recognition module (110B) configured to receive said
captured media from said image capturing unit (102) and the list
of abstract names from said memory card (110), and further
configured to detect the objects present in the surrounding environment, and determine the scene via an audio feedback generated by said feedback unit (108) using pre-trained scene recognition models; and
c. an optical character recognition (OCR) module (110C) configured to receive captured media having text from said image capturing unit (102) and the list of words from said memory card (110), and further configured to detect the text, determine words from the text, and convey the determined words via an audio feedback generated by said feedback unit (108) using pre-trained OCR models.
13. The arrangement as claimed in claim 5, wherein said feedback unit (108)
comprises:
a. a vibration motor (108A) coupled to the temple region (8), said
motor (108A) configured to activate and vibrate upon receiving
the first set of output signals to notify the user regarding the
approaching detected objects, and when the user tries to approach
a wrong lane; and
b. a plurality of headphones (108B) coupled to said mini-computer
(106), said headphones (108B) configured to receive the second
set of output signals and produce a corresponding sound output.
14. The arrangement as claimed in claim 1, wherein said image capturing unit
(102), said sensing unit (104), said mini-computer (106), said battery
(114), and said feedback unit (108) are removably mounted to the body
(100).
15. The arrangement as claimed in claim 1, wherein said image capturing unit (102) is a miniature camera which is communicatively coupled with said mini-computer (106) using a wired communication media (105).
16. The arrangement as claimed in claim 1, wherein the crown region (4) of the helmet (1) includes a plurality of air vents (14) configured thereon to provide comfort to the user.
17. The arrangement as claimed in claim 1, wherein said sensing unit (104) is mounted on a plurality of brackets (107) mounted at the crown region (4).
18. A method (200) for assisting a visually impaired user, said method (200) comprising the following steps:
i. capturing (202), by an image capturing unit (102), at least one media of the environment surrounding the user;
ii. scanning and calculating (204), by a sensing unit (104), a distance between the user and the object in the environment;
iii. generating (206), by said sensing unit (104), a map of the surrounding environment based on said calculated distances;
iv. receiving (208), by a mini-computer (106), the captured media and the map of the surrounding environment from said image capturing unit (102) and said sensing unit (104);
v. processing (210), by said mini-computer (106), the received map to generate a first set of output signals;
vi. analyzing (212), by said mini-computer (106), the captured media to generate a second set of output signals;
vii. receiving (214), by a feedback unit (108), said first and second set of output signals from said mini-computer (106); and
viii. providing (216), by said feedback unit (108), the user with at least one of an audio feedback and a haptic feedback based on the first set of output signals and the second set of output signals respectively to allow the visually impaired user to sense and navigate the user's surroundings.
19. The method (200) as claimed in claim 18, wherein said step of providing feedback (216) includes further method steps of:
i. assisting (218), by said feedback unit (108) via a vibration motor (108A), the user to navigate their surrounding environment based on said haptic feedback; and
ii. enabling (220), by said feedback unit (108) via a plurality of headphones (108B), the user to understand at least one of objects, scenes, faces, and text in the surrounding environment based on said audio feedback.