Abstract: A system (100) to detect the position of objects, comprising sensors (104) configured to provide a first signal representative of the presence or absence of objects at a distance away; a processing module (102) configured to receive the first signal from the sensors (104), determine a distance of the objects from the sensors (104) based on the received first signal, generate a second signal indicative of the distance of the objects from the sensors (104), generate a third signal indicative of the absence of the objects, and generate a fourth signal indicative of an alert; and a user interface (106) configured to receive the second, the third and the fourth signal, provide an audio output indicative of the distance of the objects from the sensors (104), provide an audio output indicative of the absence of the objects, and provide an audio output indicative of the objects being at a location equal to or less than a predetermined distance. [Figure 1]
FIELD OF THE INVENTION
Embodiments of the present invention generally relate to smart devices and assistive technology. More particularly, the invention relates to a system and method to detect the position of one or more objects and give their distance in real time in audio form.
BACKGROUND OF THE INVENTION
Smart devices encompass various surveying methods of providing useful information about the surrounding environment to a person, including, but not limited to, a temperature value, the speed of an object (hand-held speedometer), laser scales, etc.
There is a lack of systems which can measure the distance between two points and actively provide the measured distance data as an audio output. Such technology has many potential applications in measuring devices used in commercial products, defense, exploration, etc. Anywhere the eye cannot see and gauge the distance, the proposed system comes in handy to know, in real time, how far an object is, in the form of real time audio feedback.
Assistive technology is a term for assistive, adaptive, and rehabilitative devices for people with disabilities and the elderly. Disabled people often have difficulty performing activities of daily living independently, or even with assistance. Assistive technology reduces the need for formal health and support services, long-term care and the work of caregivers. Without assistive technology, people are often excluded, isolated, and locked into poverty, thereby increasing the impact of disease and disability on a person, their family, and society.
There are many low-tech visual aids that can help older adults with visual impairment, such as magnifying glasses, a long cane, glasses, optoelectronic reading systems (i.e., video magnifiers), large-print books, audiobooks, a touch watch, a phone with enlarged buttons, books in Braille, and walking aids. There are various warning systems available which alert a visually impaired person before a collision. However, such systems lack the ability to provide real time audio output and provide suggestions.
Therefore, there is a need for an improved system and method to detect the position of one or more objects and give their distance in real time in audio form, which does not suffer from the above-mentioned limitations. Such a system and method should be far more efficient than the prior art.
OBJECT OF THE INVENTION
An object of the present invention is to provide a system to detect the position of one or more objects and give their distance in real time in audio form.
Another object of the present invention is to provide a method to detect the position of one or more objects and give their distance in real time in audio form.
Yet another object of the present invention is to provide a system for converting and providing real time spoken (audio) output of the measured data in relevant units, e.g., speaking out the distance from a nearby obstacle or solid body.
SUMMARY OF THE INVENTION
This summary is provided to introduce a selection of concepts, in a simple manner, which is further described in the detailed description of the invention. This summary is neither intended to identify key or essential inventive concepts of the subject matter, nor to determine the scope of the invention.
According to a first aspect of the present invention, there is provided a system to detect the position of one or more objects and give their distance in real time in audio form. The system comprises: one or more sensors configured to provide a first signal representative of the presence or absence of one or more objects at a distance away from the one or more sensors; a processing module in communication with the one or more sensors, configured to: receive the first signal from the one or more sensors; determine a distance of the one or more objects from the one or more sensors based on the received first signal; in case of the presence of the one or more objects, generate a second signal periodically, indicative of the real time distance of the one or more objects from the one or more sensors; generate a third signal in case of the absence of the one or more objects, indicative of the absence of the one or more objects; and generate a fourth signal indicative of an alert where the determined distance is less than or equal to a predetermined distance; and a user interface in communication with the processing module, configured to: receive the second signal, the third signal and the fourth signal; provide a real time audio output indicative of the real time distance of the one or more objects from the one or more sensors, based on the second signal; provide a real time audio output indicative of the absence of the one or more objects, based on the third signal; and provide a real time audio output indicative of the one or more objects being at a location equal to or less than the predetermined distance, based on the fourth signal.
In accordance with an embodiment of the present invention, the one or more sensors are selected from a group comprising laser sensors, ultrasonic sensors, capacitive sensors, photoelectric sensors, inductive sensors, or magnetic sensors.
In accordance with an embodiment of the present invention, the user interface is configured to provide a warning signal suggestive of a probable collision with the detected one or more objects at a location equal to or less than the predetermined distance.
In accordance with an embodiment of the present invention, the one or more sensors are further configured to provide a fifth signal indicative of an empty space near the one or more sensors.
In accordance with an embodiment of the present invention, the processing module is configured to receive the second signal, the third signal and the fourth signal, and provide a suggestive signal to the user interface, indicative of an empty space and/or a suggested path to dodge the detected one or more objects.
In accordance with an embodiment of the present invention, the user interface is selected from an audio output, a buzzer, a vibration output, or a combination thereof.
According to a second aspect of the present invention, there is provided a method to detect the position of one or more objects and give their distance in real time in audio form. The method comprises the steps of: providing a first signal representative of the presence or absence of one or more objects at a distance away from one or more sensors for detecting one or more objects; receiving the first signal from the one or more sensors; determining a distance of the one or more objects from the one or more sensors based on the received first signal; in case of the presence of the one or more objects, generating a second signal periodically, indicative of the real time distance of the one or more objects from the one or more sensors; generating a third signal in case of the absence of the one or more objects, indicative of the absence of the one or more objects; generating a fourth signal indicative of an alert where the determined distance is less than or equal to a predetermined distance; receiving the second signal, the third signal and the fourth signal; providing a real time audio output periodically, indicative of the real time distance of the one or more objects from the one or more sensors, based on the second signal; providing a real time audio output indicative of the absence of the one or more objects, based on the third signal; and providing a real time audio output indicative of the one or more objects being at a location equal to or less than the predetermined distance, based on the fourth signal.
In accordance with an embodiment of the present invention, the method comprises the step of providing a warning signal suggestive of a change in the distance between the one or more objects and the one or more sensors, and of a probable collision with the detected one or more objects at a location equal to or less than the predetermined distance.
In accordance with an embodiment of the present invention, the method comprises the step of providing a fifth signal indicative of an empty space near the one or more sensors.
In accordance with an embodiment of the present invention, the method comprises the step of receiving the second signal, the third signal, and the fourth signal, and providing a suggestive signal indicative of an empty space nearby and/or a suggested path to dodge the detected one or more objects.
BRIEF DESCRIPTION OF THE DRAWINGS
So that the manner in which the above recited features of the present invention can be understood in detail, a more particular description of the invention, briefly summarized above, may be had by reference to embodiments, some of which are illustrated in the appended drawings. It is to be noted, however, that the appended drawings illustrate only typical embodiments of this invention and are therefore not to be considered limiting of its scope, for the invention may admit to other equally effective embodiments.
These and other features, benefits and advantages of the present invention will become apparent by reference to the following text and figures, with like reference numbers referring to like structures across the views, wherein:
Fig. 1 illustrates a system to detect the position of one or more objects and give their distance in real time in audio form, in accordance with the present invention;
Fig. 2 illustrates a method to detect the position of one or more objects and give their distance in real time in audio form, in accordance with an embodiment of the present invention;
Fig. 3A and 3B illustrate the information flow and an exemplary implementation of the system and method shown in Fig. 1 and Fig. 2, in accordance with an embodiment of the present invention;
Fig. 4 illustrates an exemplary implementation, in accordance with an embodiment of the present invention, in the form of a wearable assembly of such a system; and
Fig. 5 shows a flow chart illustrating an exemplary implementation, in accordance with an embodiment of the present invention.
DETAILED DESCRIPTION OF THE DRAWINGS
While the present invention is described herein by way of example using embodiments and illustrative drawings, those skilled in the art will recognize that the invention is not limited to the embodiments of drawing or drawings described and are not intended to represent the scale of the various components. Further, some components that may form a part of the invention may not be illustrated in certain figures, for ease of illustration, and such omissions do not limit the embodiments outlined in any way. It should be understood that the drawings and detailed description thereto are not intended to limit the invention to the particular form disclosed, but on the contrary, the invention is to cover all modifications, equivalents, and alternatives falling within the scope of the present invention as defined by the appended claims. As used throughout this description, the word "may" is used in a permissive sense (i.e., meaning having the potential to), rather than the mandatory sense, (i.e., meaning must).
Further, the words "a" or "an" mean "at least one" and the word "plurality" means "one or more" unless otherwise mentioned. Furthermore, the terminology and phraseology used herein is solely used for descriptive purposes and should not be construed as limiting in scope. Language such as "including," "comprising," "having," "containing," or "involving," and variations thereof, is intended to be broad and encompass the subject matter listed thereafter, equivalents, and additional subject matter not recited, and is not intended to exclude other additives, components, integers or steps. Likewise, the term "comprising" is considered synonymous with the terms "including" or "containing" for applicable legal purposes. Any discussion of documents, acts, materials, devices, articles and the like is included in the specification solely for the purpose of providing a context for the present invention. It is not suggested or represented that any or all of these matters form part of the prior art base or were common general knowledge in the field relevant to the present invention.
The present invention is described hereinafter by various embodiments with reference to the accompanying drawings, wherein reference numerals used in the accompanying drawing correspond to the like elements throughout the description. This invention may, however, be embodied in many different forms and should not be construed as limited to the embodiment set forth herein. Rather, the embodiment is provided so that this disclosure will be thorough and complete and will fully convey the scope of the invention to those skilled in the art. In the following detailed description, numeric values and ranges are provided for various aspects of the implementations described. These values and ranges are to be treated as examples only and are not intended to limit the scope of the claims. In addition, a number of materials are identified as suitable for various facets of the implementations. These materials are to be treated as exemplary and are not intended to limit the scope of the invention.
Figure 1 illustrates a system (100) to detect position of one or more objects and give its distance in real time in audio form, in accordance with an embodiment of the present invention. As shown in figure 1, the system (100) comprises one or more sensors (104), a processing module (102) and a user interface (106). In an embodiment of the present invention, the one or more sensors are selected from a group comprising laser sensors, ultrasonic sensors, capacitive sensors, photoelectric sensors, inductive sensors, or magnetic sensors. The one or more sensors (104) may be disposed on a wearable to be worn by the user or the one or more sensors (104) may be disposed on a vehicle such as, but not limited to, a car or a bike.
Additionally, the processing module (102) is envisaged to include computing capabilities, such as a memory unit (1022) configured to store machine readable instructions. The machine-readable instructions may be loaded into the memory unit (1022) from a non-transitory machine-readable medium, such as, but not limited to, CD-ROMs, DVD-ROMs and Flash Drives. Alternatively, the machine-readable instructions may be loaded in the form of a computer software program into the memory unit (1022). The memory unit (1022) may in that manner be selected from a group comprising EPROM, EEPROM and Flash memory. Further, the processing module (102) includes a processor (1024) operably connected with the memory unit (1022). In various embodiments, the processor (1024) may be a microprocessor selected from, but not limited to, an ARM based or Intel based processor, or may be in the form of a field-programmable gate array (FPGA), a general-purpose processor or an application specific integrated circuit (ASIC).
Further, the processing module (102) comprises a communication module (1026) configured for enabling connection of the one or more sensors (104), the processing module (102), and the user interface (106). The connection may be wired or wireless. In that sense, the communication module (1026) may include a Power over Ethernet switch, USB ports, etc. These may allow the transfer of data from the one or more sensors (104) to the processing module (102), and of data from the processing module (102) to the user interface (106), via an Ethernet cable, a USB cable, etc. Additionally, or alternatively, the communication module (1026) may be an Internet of Things (IoT) module, Wi-Fi module, Bluetooth module, RF module, etc. adapted to enable wireless communication between the one or more sensors (104), the processor (1024) and the user interface (106) via a wireless communication network (110).
The wireless communication network (110) may be, but is not limited to, a Bluetooth network, RF network, NFC, Wi-Fi network, Local Area Network (LAN) or a Wide Area Network (WAN). The wireless communication network (110) may be implemented using a number of protocols, such as, but not limited to, TCP/IP, 3GPP, 3GPP2, LTE, IEEE 802.x, etc. In one embodiment, all the components of the system (100) are connected with each other via the communication network (110).
Further, the processing module (102) may also include the user interface (106). The user interface (106) may include a display envisaged to show the data received from the one or more sensors (104) and the processing module (102), and the results of the analysis. In an embodiment of the present invention, the user interface (106) is selected from an audio output, a buzzer, a vibration output, or a combination thereof. However, a skilled addressee may appreciate that the user interface (106) may include a display that may be, but is not limited to, a light-emitting diode (LED) display, electroluminescent display (ELD), liquid crystal display (LCD), organic light-emitting diode (OLED) display or AMOLED display. Furthermore, the user interface (106) may include accessories like a keyboard, a mouse, etc. envisaged to provide input capability to enable a user to enter his/her details. In another embodiment, the user interface (106) may be a touch input-based display that integrates the input-output functionalities.
Additionally, the system (100) may comprise one or more user devices connected with the processing module (102) via a wired or wireless connection. Herein, the one or more user devices (106) may be selected from computing devices such as a desktop PC, laptop, PDA or hand-held computing device such as smartphones and tablets. The one or more user devices may be associated with one or more users. In another embodiment, instead of the processing module (102) and the one or more sensors (104) being a stand-alone device, the one or more user devices (106) may house the processing module (102) and the one or more sensors (104) along with their functionalities. The one or more user devices (106) already include a microprocessor (1024) for processing and communication capabilities via wired or wireless connections, so that the one or more user devices (106) may be provided with the one or more sensors (104).
In yet another embodiment, the system (100) could be implemented as a distributed system (100), where the one or more sensors (104), the processing module (102) and the user interface (106) may be disposed at different locations from each other, and/or the processing module (102) could be implemented in a server-side computing device or cloud computing environment. It will be appreciated by a skilled addressee that there are multiple arrangements in which the present invention can be implemented, without departing from the scope of the present invention. The processing module (102) is also envisaged to implement Artificial Intelligence, Machine Learning and deep learning for data collation and processing.
In accordance with an embodiment of the present invention, the system (100) may also include a data repository (108). The data repository (108) may be a local storage (such as SSD, eMMC, Flash, SD card, etc.) or a cloud-based storage. In any manner, the data repository (108) is envisaged to be capable of providing the data to the processing module (102) when the data is queried appropriately, using applicable security and other data transfer protocols. The data repository (108) may store, but is not limited to, previous and/or live images, videos, audio, 3D immersive content, data from the one or more sensors (104), and predetermined data. It is also envisaged to store various charts, tables and manipulatable 3D content prepared for the users. In one embodiment of the present invention, the processing module (102) may include AI and deep learning-based models trained using the above data, to compare, assess and update the database based on the data received from the one or more sensors (104) or the internet in real time.
Figure 2 illustrates a method (200) to detect the position of one or more objects and give their distance in real time in audio form. As shown in figure 2, the method (200) begins at step 210 by providing a first signal representative of the presence or absence of one or more objects at a distance away from one or more sensors (104) for detecting one or more objects. Figure 3A and figure 3B illustrate the information flow and an exemplary implementation of the system (100) and method (200) shown in Figure 1 and Figure 2, in accordance with an embodiment of the present invention. As shown in figure 3A, the one or more sensors (104) are configured to provide a first signal representative of the presence or absence of one or more objects at a distance away from the one or more sensors (104).
Further, as shown in figure 3A, at step 220, the processing module (102), in communication with the one or more sensors (104), is configured to receive the first signal from the one or more sensors (104). For example, the one or more sensors (104) may be an ultrasonic sensor, a LIDAR, or a LASER based sensor, which may emit a sound wave or light that is reflected from one or more objects and creates a signal called the first signal. However, in the absence of any object, i.e., in case the sensors (104) do not receive a reflected signal within a specified time, say within 1 second to 60 seconds, it may be considered as the absence of any objects in the vicinity of the one or more sensors (104).
Next, as shown in figure 3A, at step 230, the processing module (102) may determine a distance of the one or more objects from the one or more sensors (104) based on the received first signal, in case of the presence of the one or more objects. As stated above, when the reflected signal is received within the predetermined time period, it suggests that one or more objects may be present in front of the one or more sensors (104). Based on the reflected signal, the processing module (102) may determine the distance of the one or more objects from the one or more sensors (104).
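By way of illustration only, the time-of-flight principle described above may be sketched as follows for an ultrasonic sensor. This is a minimal, non-limiting sketch, not the claimed implementation; the function name, the timeout value and the use of the speed of sound in air are assumptions for the example.

```python
SPEED_OF_SOUND_M_S = 343.0  # approximate speed of sound in air at 20 degrees C

def distance_from_echo(round_trip_s, timeout_s=1.0):
    """Convert a round-trip echo time (seconds) into a one-way distance in metres.

    Returns None when no echo arrives within the timeout, which the system
    treats as the absence of any object near the sensor."""
    if round_trip_s is None or round_trip_s > timeout_s:
        return None  # no reflection received within the time window
    # The pulse travels to the object and back, so halve the path length.
    return SPEED_OF_SOUND_M_S * round_trip_s / 2.0
```

For instance, an echo returning after 10 milliseconds corresponds to an object roughly 1.7 metres away; a missing echo maps to the "absence" case handled by the third signal.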
Next, at step 240, as shown in figure 3B, the processing module (102) is further configured to generate a second signal periodically, indicative of real time distance of the one or more objects from the one or more sensors (104). The processing module (102) may use the communication module (1026) to generate and transmit the second signal periodically. Further, as shown in figure 3B, at step 250, the processing module (102) may be configured to generate a third signal in case of the absence of the one or more objects, indicative of absence of the one or more objects. Later, at step 260, as shown in figure 3B, the processing module (102) is further configured to generate a fourth signal indicative of an alert where the determined distance is less than or equal to a predetermined distance.
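The decision logic of steps 240 to 260 may, purely as an illustrative sketch, be expressed as a mapping from one distance reading to the signals to emit. The threshold value and the dictionary representation are assumptions for the example, not part of the claimed system.

```python
ALERT_DISTANCE_FT = 3.0  # assumed example value for the predetermined distance

def generate_signals(distance_ft):
    """Map one distance reading to the second, third and fourth signals
    described above. A reading of None denotes that no object was detected."""
    if distance_ft is None:
        return {"third": True}            # third signal: absence of objects
    signals = {"second": distance_ft}     # second signal: periodic real time distance
    if distance_ft <= ALERT_DISTANCE_FT:
        signals["fourth"] = "alert"       # fourth signal: object within predetermined distance
    return signals
```

Run periodically, this yields the second signal on every reading while an object is present, and adds the fourth signal only once the object crosses the predetermined distance.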
Furthermore, as illustrated in figure 3B, at step 270, the user interface (106), in communication with the processing module (102), is configured to receive the second signal, the third signal and the fourth signal. The user interface (106) may receive the second signal, the third signal and the fourth signal via the communication module (1026). Next, at step 280, as shown in figure 3B, the user interface (106) may provide a real time audio output indicative of the real time distance of the one or more objects from the one or more sensors (104), based on the second signal. For example, if an object is at a distance of 15 feet and the user is moving towards the object, the user interface (106), after receiving the second signal, may announce to the user the real time distance of the object. In the present case, the distance may be decreasing, since the user is moving towards the object, and the user interface (106) may after some time announce the distance as 10 feet, later 8 feet, and so on.
Further, at step 290, as shown in figure 3B, the user interface (106) may be configured to provide a real time audio output indicative of the absence of the one or more objects, based on the third signal. At this step, the processing module (102) may provide the third signal, and based on this signal, the user interface (106) may provide an audio or vibration alert indicating the absence of the one or more objects. For example, in this case, the signals emitted by the one or more sensors (104) may not be reflected and read within the predetermined time period.
Further, as shown in figure 3B, at step 300, the user interface (106) receives the fourth signal and provides a real time audio output indicative of the one or more objects being at a location equal to or less than the predetermined distance, based on the fourth signal. For example, in case the one or more objects are dangerously close to the user, the user interface (106) may rapidly give an alert and the real time distance in order for the user to avoid a collision.
In an additional or alternative embodiment, the user interface (106) is configured to provide a warning signal suggestive of a probable collision with the detected one or more objects at a location equal to or less than the predetermined distance. For example, if the one or more objects are at a distance of 1 or 2 feet, which is less than the predetermined distance of 3 feet to 5 feet, the processing module (102) may provide a warning signal to the user interface (106). The user interface (106) may use audio output and/or vibrational output to alert the user about the incoming one or more objects to avoid a collision.
In an additional or alternative embodiment, the one or more sensors (104) are further configured to provide a fifth signal indicative of an empty space near the one or more sensors (104). For example, in the case of a busy road or crowded area, the one or more sensors (104) may detect an empty space and provide the fifth signal indicating the empty space. On reception of the fifth signal, the processing module (102) may generate a safety signal to guide the user to the empty space. The audio output and vibration output may provide directions to the empty space so that the user can avoid the heavy crowd and the busy road.
In an alternative or additional embodiment, the processing module (102) is configured to receive the second signal, the third signal and the fourth signal and provide a suggestive signal to the user interface (106), indicative of an empty space and/or a suggested path to dodge the detected one or more objects. For example, the processing module (102) may determine a path for the user, to guide the user along a non-colliding and safe path of travel. The processing module (102) may provide the suggestive signal to the user interface (106), which provides an audio output and/or vibrational output to guide the user through crowded places and busy roads.
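One simple way the path suggestion above could work, offered only as a hedged sketch under assumed names and units, is to compare distance readings taken in several directions and steer the user towards the direction with the most clearance.

```python
def suggest_path(readings):
    """Pick the direction with the most clearance from per-direction distance
    readings (in feet). A reading of None means the sensor saw no object at
    all in that direction (fully empty space), which is preferred outright."""
    best_direction, best_clearance = None, -1.0
    for direction, dist in readings.items():
        if dist is None:
            return direction                 # empty space: safest option
        if dist > best_clearance:
            best_direction, best_clearance = direction, dist
    return best_direction
```

For example, with readings of 4 feet to the left, 1.5 feet ahead and 9 feet to the right, the sketch would suggest moving right; the actual system may of course use a more sophisticated planner.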
Further, figure 4 illustrates an exemplary implementation, in accordance with an embodiment of the present invention. As shown in figure 4, the one or more sensors (104) may be disposed on a wearable to be worn by the user. The one or more sensors (104) may gather the first signal and send it to the processing module (102) inbuilt in the wearable. The processing module (102) may function in the same way as disclosed above and provide the second signal, the third signal, the fourth signal, the suggestive signal, the warning signal, and the fifth signal for the user. The wearable may be provided with an audio and/or vibrational output (402) to process the signals from the processing module (102) and provide audio/vibrational output as disclosed above.
Further, figure 5 shows a flow chart illustrating an exemplary implementation, in accordance with an embodiment of the present invention. The system (100) may be a stand-alone, easy-to-use wearable product, or disposed on vehicles. The user can switch it ON and go about their regular commute; meanwhile, it keeps scanning the nearby environment of the user and helps them navigate by notifying the user about any obstacle coming within a vicinity of 10-15 feet, with real time spoken guidance of the path. The system (100) is highly robust and active, and may have a refresh rate in the range of 10 milliseconds to 500 milliseconds, hence giving an immediate response. Various usage cases elaborating the output depending on the obstacle distance are given below -
1. Normal walking (clear) – When no obstacle is in the vicinity of the user up to 10 feet, the system (100) may respond: "clear for 10-15 feet".
2. Something in between (3-10 feet) –
When any obstacle is encountered in the path of the user, the system (100) detects it, calculates the distance from the user and speaks the distance out through the speaker and/or vibrational output; the response is "obstacle at X feet" (X between 3 feet and 10 feet).
Now, if the obstacle is not moving with respect to the user, i.e., the effective distance is constant, then the device stays silent for certain iterations (5-10 seconds, depending on the distance) and then loops back to give another notification.
3. Something very close (<3 feet) –
In case of something being closer than 2-3 feet to the user, the user receives an alert signal from the system (100) through the speaker, along with a vibration alert from the direction of the obstacle (the vibrational output vibrates at the back or front depending on whether the obstacle is behind or in front of the user).
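The three usage cases above can be summarised, as an illustrative sketch only, by a single mapping from obstacle distance to spoken output. The function name, wording of the messages and default thresholds are assumptions taken from the example figures quoted in the cases.

```python
def spoken_feedback(distance_ft, clear_range_ft=10.0, close_ft=3.0):
    """Map an obstacle distance (in feet) to the spoken output of the three
    usage cases above; threshold values are the example figures quoted."""
    if distance_ft is None or distance_ft > clear_range_ft:
        return "clear for 10 feet"                    # case 1: normal walking
    if distance_ft > close_ft:
        return f"obstacle at {distance_ft:.0f} feet"  # case 2: something in between
    return "alert: obstacle very close"               # case 3: speaker plus vibration
```

In an actual device this mapping would run on every refresh cycle, with the repeat-suppression behaviour for static obstacles described in case 2 layered on top.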
Other potential applications of the concept include a car parking sensor assembly with distance sensors (104) mounted on the front and back of the car and a speaker inside, giving real time spoken feedback notifying the driver about the distance remaining between the car and a wall/obstacle. Further, the system (100) may be implemented as a hand-held sensor and speaker assembly, like a torch, to measure any distance. Furthermore, the system (100) may be used in an exploration jacket, or may be used by firefighters in smoky conditions, by cave explorers operating in the dark, and by defense personnel. Basically, anywhere the eyes cannot be used to gauge the distance, the system (100) comes in handy to know in real time how far an object is, in the form of real time audio feedback.
In this manner, the present invention may provide various advantages. The present invention provides a cost effective and precise approach to provide safety to a user without visual abilities. The system can give a sense of the surroundings in a completely dark place. Further, the system can increase the situational awareness of any person by making him/her aware in advance based on artificial sensors. With better situational awareness, the system can help a person make better decisions about their location.
In general, the word "module," as used herein, refers to logic embodied in hardware or firmware, or to a collection of software instructions, written in a programming language, such as, for example, Java, C, or assembly. One or more software instructions in the modules may be embedded in firmware, such as an EPROM. It will be appreciated that modules may comprise connected logic units, such as gates and flip-flops, and may comprise programmable units, such as programmable gate arrays or processors. The modules described herein may be implemented as either software and/or hardware modules and may be stored in any type of computer-readable medium or other computer storage device.
Further, while one or more operations have been described as being performed by or otherwise related to certain modules, devices or entities, the operations may be performed by or otherwise related to any module, device or entity. As such, any function or operation that has been described as being performed by a module could alternatively be performed by a different server, by the cloud computing platform, or a combination thereof. It should be understood that the techniques of the present disclosure might be implemented using a variety of technologies. For example, the methods described herein may be implemented by a series of computer executable instructions residing on a suitable computer readable medium. Suitable computer readable media may include volatile (e.g., RAM) and/or non-volatile (e.g., ROM, disk) memory, carrier waves and transmission media. Exemplary carrier waves may take the form of electrical, electromagnetic or optical signals conveying digital data streams along a local network or a publicly accessible network such as the Internet.
It should also be understood that, unless specifically stated otherwise as apparent from the following discussion, it is appreciated that throughout the description, discussions utilizing terms such as "controlling" or "obtaining" or "computing" or "storing" or "receiving" or "determining" or the like, refer to the action and processes of a computer system, or similar electronic computing device, that processes and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage, transmission or display devices.
Various modifications to these embodiments are apparent to those skilled in the art from the description and the accompanying drawings. The principles associated with the various embodiments described herein may be applied to other embodiments. Therefore, the description is not intended to be limited to the embodiments shown along with the accompanying drawings but is to be accorded the broadest scope consistent with the principles and the novel and inventive features disclosed or suggested herein. Accordingly, the invention is intended to embrace all such alternatives, modifications, and variations that fall within the scope of the present invention and the appended claims.
CLAIMS:
I claim:
1. A system (100) to detect position of one or more objects and give its distance in real time in audio form, the system (100) comprising:
one or more sensors (104), wherein the one or more sensors (104) are configured to provide a first signal representative of presence or absence of one or more objects at a distance away from the one or more sensors (104);
a processing module (102) in communication with the one or more sensors (104), configured to:
receive the first signal from the one or more sensors (104);
determine a distance of the one or more objects from the one or more sensors (104) based on the received first signal, in case of presence of the one or more objects;
generate a second signal periodically, indicative of real time distance of the one or more objects from the one or more sensors (104);
generate a third signal in case of the absence of the one or more objects, indicative of absence of the one or more objects;
generate a fourth signal indicative of an alert where the determined distance is less than or equal to a predetermined distance;
a user interface (106) in communication with the processing module (102), configured to:
receive the second signal, the third signal and the fourth signal;
provide a real time audio output indicative of the real time distance of the one or more objects from the one or more sensors (104) based on the second signal;
provide a real time audio output indicative of the absence of the one or more objects, based on the third signal; and
provide a real time audio output indicative of the one or more objects being at a location equal to or less than the predetermined distance away, based on the fourth signal.
2. The system (100) as claimed in claim 1, wherein the one or more sensors (104) are selected from a group comprising laser sensors (104), ultrasonic sensors (104), capacitive sensors (104), photoelectric sensors (104), inductive sensors (104), or magnetic sensors (104).
3. The system (100) as claimed in claim 1, wherein the user interface (106) is configured to provide a warning signal suggestive of probable collision with the detected one or more objects at the location equal to or less than the predetermined distance.
4. The system (100) as claimed in claim 1, wherein the one or more sensors (104) are further configured to provide a fifth signal indicative of an empty space near the one or more sensors (104).
5. The system (100) as claimed in claim 1, wherein the processing module (102) is configured to:
receive the second signal, the third signal and the fourth signal;
provide a suggestive signal to the user interface (106), indicative of an empty space and/or a suggested path to dodge the detected one or more objects.
6. The system (100) as claimed in claim 1, wherein the user interface (106) is selected from an audio output, a buzzer, a vibration output, or a combination thereof.
7. A method (200) to detect position of one or more objects and give its distance in real time in audio form, the method (200) comprising steps of:
providing (210) a first signal representative of presence or absence of one or more objects at a distance away from one or more sensors (104) for detecting one or more objects;
receiving (220) the first signal from the one or more sensors (104);
determining (230) a distance of the one or more objects from the one or more sensors (104) based on the received first signal, in case of presence of the one or more objects;
generating (240) a second signal periodically, indicative of real time distance of the one or more objects from the one or more sensors (104);
generating (250) a third signal in case of the absence of the one or more objects, indicative of absence of the one or more objects;
generating (260) a fourth signal indicative of an alert where the determined distance is less than or equal to a predetermined distance;
receiving (270) the second signal, the third signal and the fourth signal;
providing (280) a real time audio output periodically indicative of the real time distance of the one or more objects from the one or more sensors (104) based on the second signal;
providing (290) a real time audio output indicative of the absence of the one or more objects, based on the third signal; and
providing (300) a real time audio output indicative of the one or more objects being at a location equal to or less than the predetermined distance away, based on the fourth signal.
8. The method (200) as claimed in claim 7, comprising step of providing a warning signal suggestive of a change in the distance between the one or more objects and the one or more sensors (104) and probable collision with the detected one or more objects at the location equal to or less than the predetermined distance.
9. The method (200) as claimed in claim 7, comprising the step of providing a fifth signal indicative of an empty space near the one or more sensors (104).
10. The method (200) as claimed in claim 7, comprising step of:
receiving the second signal, the third signal, and the fourth signal; and
providing a suggestive signal, indicative of an empty space nearby and/or suggested path to dodge the detected one or more objects.
| # | Name | Date |
|---|---|---|
| 1 | 202111023743-PROVISIONAL SPECIFICATION [28-05-2021(online)].pdf | 2021-05-28 |
| 2 | 202111023743-FORM 1 [28-05-2021(online)].pdf | 2021-05-28 |
| 3 | 202111023743-APPLICATIONFORPOSTDATING [26-05-2022(online)].pdf | 2022-05-26 |
| 4 | 202111023743-FORM-26 [27-07-2022(online)].pdf | 2022-07-27 |
| 5 | 202111023743-DRAWING [27-07-2022(online)].pdf | 2022-07-27 |
| 6 | 202111023743-COMPLETE SPECIFICATION [27-07-2022(online)].pdf | 2022-07-27 |
| 7 | 202111023743-GPA-220822.pdf | 2022-09-02 |
| 8 | 202111023743-Correspondence-220822.pdf | 2022-09-02 |
| 9 | 202111023743-FORM 18 [13-09-2023(online)].pdf | 2023-09-13 |
| 10 | 202111023743-FER.pdf | 2024-12-31 |
| 11 | 202111023743-FORM 3 [25-03-2025(online)].pdf | 2025-03-25 |
| 12 | 202111023743-OTHERS [27-06-2025(online)].pdf | 2025-06-27 |
| 13 | 202111023743-FORM-5 [27-06-2025(online)].pdf | 2025-06-27 |
| 14 | 202111023743-FER_SER_REPLY [27-06-2025(online)].pdf | 2025-06-27 |
| 15 | 202111023743-CLAIMS [27-06-2025(online)].pdf | 2025-06-27 |
| 1 | 202111023743E_30-12-2024.pdf | 2024-12-30 |