Abstract: The present disclosure pertains to a wearable monitoring system (100) for the vocally disabled. The system (100) includes a device (102) adapted to be worn by a user and to collect vibration information from the throat of the user. Upon detecting vibration in the throat, an image acquisition unit (108) may be activated to collect images of a pre-defined area, and the images are analysed to detect a person. Upon detection of a known person in the image, the name of the person (such as ‘mom’) is spoken by a speaker (112), and when an unidentified person is detected in the image, an alert unit (114) is activated to inform nearby entities that the user needs help. Also, the device (102) is communicatively coupled with a mobile computing device associated with the caretaker, who may check the location and health information of the user at any time from a remote location.
TECHNICAL FIELD
[0001] The present disclosure relates generally to the health and medical field. More particularly, the present disclosure provides a wearable monitoring system for a vocally disabled person, to call nearby entities, i.e. a caretaker, for assistance.
BACKGROUND
[0002] Background description includes information that may be useful in understanding the present invention. It is not an admission that any of the information provided herein is prior art or relevant to the presently claimed invention, or that any publication specifically or implicitly referenced is prior art.
[0003] Improvements in living conditions and advances in health care have resulted in a marked prolongation of life expectancy for the elderly and disabled population. Children who are disabled due to delayed growth or who suffer from a syndrome are generally seen to have speech issues as well. Some children put in a great deal of effort to speak but are unable to make a sound due to structural deficiencies or other internal issues. This creates significant communication gaps. In certain situations, for example when the child has fallen and urgently needs to call someone, this can be highly risky. Moreover, there are no facilities for continuous monitoring of the child, and thus no care in times of emergency, except in fortuitous circumstances. Therefore, a device is required that can detect the urgency of the situation and understand what the child wants to say, so that such situations may be avoided.
[0004] Existing communication aids are costly and tedious to use. Besides, most of them require comprehension skills, and such children often have comprehension issues as well. There are devices catering only to health monitoring, and no single device for the vocally disabled is available that can perform health monitoring, location tracking, and speaking the names of their close ones. In addition, working parents or caretakers of such children need to remotely assess the current basic health statistics and location of such individuals; therefore, a device is needed that enables the parents of the vocally disabled to track them from a remote location.
[0005] Hence, there is a need in the art for a solution that assists a vocally disabled person to speak a name, and also assists caretakers in tracking the health and location of the vocally disabled.
OBJECTS OF THE PRESENT DISCLOSURE
[0006] Some of the objects of the present disclosure, which at least one embodiment herein satisfies are as listed herein below.
[0007] It is an object of the present disclosure to provide a system that assists a vocally disabled person.
[0008] It is another object of the present disclosure to provide a system that speaks a name, thus bridging the communication gap.
[0009] It is another object of the present disclosure to provide a system that calls others for help automatically when the vocally disabled person needs help.
[0010] It is another object of the present disclosure to provide a system that is wearable and easily accessible.
[0011] It is another object of the present disclosure to provide a system that provides health information and the current emotion of the vocally disabled person to the caretaker.
[0012] It is another object of the present disclosure to provide a system that assists in location tracking, in case the vocally disabled person strays.
SUMMARY
[0013] Aspects of the present invention relate to the health and medical field. More particularly, the present invention provides a wearable monitoring system for a vocally disabled person, to call nearby entities, i.e. a caretaker, for assistance.
[0014] An aspect of the present disclosure pertains to a wearable monitoring system for the vocally disabled. The system may include a device adapted to be worn by a subject; a vibration sensor disposed in the device to receive vibration information from the throat of the subject; a set of sensors configured to detect one or more health parameters of the subject and correspondingly generate a set of signals; an image acquisition unit disposed in the device to capture one or more images of one or more entities in a pre-defined area; and a processing unit including one or more processors coupled with a memory, the memory storing instructions executable by the one or more processors to: activate the image acquisition unit upon receiving the vibration information from the throat of the subject; extract facial information from the received one or more images; compare the extracted facial information with a database storing facial information associated with a plurality of entities; based on a match, cause an audio unit to produce an indicator associated with the identified entity; and actuate an alert unit upon detection of an unidentified entity and correspondingly generate a first warning signal, wherein the generated first warning signal is transmitted to a mobile computing device.
[0015] In another aspect of the present disclosure, the processing unit may be further configured to extract a value for each of the health parameters from the received set of signals; analyse the extracted values to detect the health of the subject and determine the emotional state of the subject; and generate a second warning signal upon detection of at least one value of the one or more health parameters beyond a threshold value, wherein the generated second warning signal is transmitted to the mobile computing device.
[0016] In an aspect, the device may include any or a combination of a pendant and a wearable band.
[0017] In an aspect, the audio unit may include a speaker, wherein the indicator indicates the name of the identified entity.
[0018] In an aspect, when no facial information is extracted from the received one or more images, the alert unit may be activated to notify the one or more entities that the subject needs help.
[0019] In an aspect, a push button may be mounted in the device, and configured to deactivate the alert unit.
[0020] In an aspect, the image acquisition unit may include any or a combination of camera, scanner, and face recognition sensor.
[0021] In an aspect, the set of sensors may include any or a combination of heart rate sensor, temperature sensor, accelerometer, and touch sensor.
[0022] In an aspect, the device may be in communication with the mobile computing device through a communication unit, and the communication unit comprises any or a combination of GSM, Wireless Fidelity (Wi-Fi) Module, Bluetooth Module, Li-Fi Module, optical fiber, Wireless Local Area Network (WLAN), and ZigBee.
[0023] In an aspect, the processing unit may be configured to transmit the determined emotional state of the subject to the associated mobile computing device.
[0024] In an aspect, a location identifier may be disposed in the device, and may be configured to detect location information of the subject and transmit the detected location information to the mobile computing device, enabling the associated entity to check the live location of the subject.
[0025] Various objects, features, aspects and advantages of the inventive subject matter will become more apparent from the following detailed description of preferred embodiments, along with the accompanying drawing figures in which like numerals represent like components.
BRIEF DESCRIPTION OF THE DRAWINGS
[0026] The accompanying drawings are included to provide a further understanding of the present disclosure, and are incorporated in and constitute a part of this specification. The drawings illustrate exemplary embodiments of the present disclosure and, together with the description, serve to explain the principles of the present disclosure.
[0027] FIG. 1 illustrates a block diagram of the proposed wearable monitoring system for the vocally disabled, in accordance with an embodiment of the present disclosure.
[0028] FIG. 2 illustrates exemplary functional components of a processing unit of the proposed wearable monitoring system for the vocally disabled, in accordance with an embodiment of the present disclosure.
[0029] FIG. 3 illustrates an exemplary flowchart disclosing the working of the proposed wearable monitoring system for the vocally disabled, in accordance with an embodiment of the present disclosure.
DETAILED DESCRIPTION
[0030] In the following description, numerous specific details are set forth in order to provide a thorough understanding of embodiments of the present invention. It will be apparent to one skilled in the art that embodiments of the present invention may be practiced without some of these specific details.
[0031] If the specification states a component or feature “may”, “can”, “could”, or “might” be included or have a characteristic, that particular component or feature is not required to be included or have the characteristic.
[0032] As used herein in the description and throughout the claims that follow, the meaning of “a,” “an,” and “the” includes plural reference unless the context clearly dictates otherwise. Also, as used in the description herein, the meaning of “in” includes “in” and “on” unless the context clearly dictates otherwise.
[0033] The following is a detailed description of embodiments of the disclosure depicted in the accompanying drawings. The embodiments are in such detail as to clearly communicate the disclosure. However, the amount of detail offered is not intended to limit the anticipated variations of embodiments; on the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the present disclosure as defined by the appended claims. Embodiments explained herein relate to the health and medical field. In particular, the present invention provides a wearable monitoring system for a vocally disabled person, to call nearby entities, i.e. a caretaker, for assistance.
[0034] As illustrated in FIG. 1, a proposed wearable monitoring system (100) (interchangeably referred to as system (100)) for the vocally disabled is disclosed. The system (100) can include a device (102) adapted to be worn by a subject (interchangeably referred to as user or vocally disabled person hereinafter), and the device (102) can include a vibration sensor (104), a set of sensors (106), an image acquisition unit (108), and a processing unit (110). The processing unit (110) can be operatively coupled to the vibration sensor (104), the set of sensors (106), and the image acquisition unit (108). In another embodiment, the device (102) can be a pendant, a necklace, or a wearable band that can be worn around the neck such that the pendant rests near the throat. In an exemplary embodiment, the device (102) is less bulky, which enables the subject to carry the device (102) comfortably.
[0035] In an embodiment, the vibration sensor (104) can be disposed in the device (102), and can be configured to receive vibration information from the throat of the subject. For example, when the vocally disabled person tries to speak, the vibration sensor (104) can detect vibration information in the throat of the person, and the vibration information can be transmitted to the processing unit (110).
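By way of a non-limiting illustration, such a vibration-gated trigger might be implemented in software as in the following sketch; the sensor-read function, threshold, and sampling rate are hypothetical placeholders, not part of the disclosed device.

```python
import time

# Hypothetical values; real thresholds would be calibrated per sensor and wearer.
VIBRATION_THRESHOLD = 0.15   # normalized sensor amplitude
SAMPLE_INTERVAL_S = 0.02     # 50 Hz polling

def read_vibration_sensor() -> float:
    """Placeholder for an ADC read of the throat vibration sensor (104)."""
    raise NotImplementedError("board-specific ADC access goes here")

def wait_for_speech_attempt() -> None:
    """Block until sustained throat vibration suggests a speech attempt."""
    consecutive_hits = 0
    while True:
        amplitude = read_vibration_sensor()
        # Require several consecutive samples above threshold to reject noise.
        consecutive_hits = consecutive_hits + 1 if amplitude > VIBRATION_THRESHOLD else 0
        if consecutive_hits >= 5:
            return  # processing unit (110) would now activate the camera (108)
        time.sleep(SAMPLE_INTERVAL_S)
```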
[0036] In an embodiment, the set of sensors (106) can be configured to detect one or more health parameters of the subject, and correspondingly generate a set of signals. The set of sensors (106) can include any or a combination of a heart rate sensor, a temperature sensor, an accelerometer, and a touch sensor. The heart rate sensor can be configured to detect the heart rate of the user, and the temperature sensor can be configured to detect the temperature of the user's body. Similarly, the accelerometer and the touch sensor can be configured to detect the motion of the user, such as a gesture of the user.
[0037] In an embodiment, the image acquisition unit (108) can be disposed on the device (102), and can be configured to capture one or more images of one or more entities in a pre-defined area. In another embodiment, the image acquisition unit (108) can include any or a combination of a camera, a camcorder, a video recorder, a scanner, a face recognition sensor, and the like.
[0038] In an embodiment, the processing unit (110) can include one or more processors coupled with a memory, the memory storing instructions executable by the one or more processors to activate the image acquisition unit (108) upon receiving the vibration information from the throat of the subject via the vibration sensor (104). Upon activation, the image acquisition unit (108) can acquire one or more images of the pre-defined area and transmit them to the processing unit (110). The processing unit (110) can extract facial information from the received one or more images, and further compare the extracted facial information with a database storing facial information associated with the one or more entities, which can be a caretaker, a family member such as a mother, father, sister, or brother, or a friend.
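By way of a non-limiting illustration, the comparison of extracted facial information against the stored database could resemble the following sketch; the face_recognition library and the 0.6 tolerance are illustrative choices, not the disclosed implementation.

```python
import face_recognition  # illustrative choice; any face-embedding library would do

def identify_entity(frame, known_encodings, known_names, tolerance=0.6):
    """Return the name of a known entity in the frame, 'unknown' for an
    unrecognized face, or None when no face is present."""
    encodings = face_recognition.face_encodings(frame)
    if not encodings:
        return None  # no facial information extracted -> alert-unit path
    matches = face_recognition.compare_faces(known_encodings, encodings[0], tolerance)
    if True in matches:
        return known_names[matches.index(True)]
    return "unknown"  # unidentified entity -> first warning signal
```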
[0039] In an embodiment, based on the match, the processing unit (110) can cause an audio unit (112) to produce an indicator associated with the identified entity, for example, produce ‘Mom’ or ‘Dad’. The audio unit (112) can be a speaker, a microphone, and the like. Moreover, upon detection of an unidentified entity, the processing unit (110) can actuate an alert unit (114). The alert unit (114) can be an alarm, a horn, and the like, and can generate a first warning signal upon detection of the unidentified entity, the generated first warning signal being transmitted to a mobile computing device. In another embodiment, when no facial information is extracted from the received one or more images, the alert unit (114) can be activated to notify the one or more entities that the subject needs help. For example, if the user is trying to speak and the vibration sensor (104) detects vibration in the throat but no person is detected in the pre-defined area, the alert unit (114) can produce a sound to notify nearby entities that the user needs help.
[0040] In an embodiment, the processing unit (110) can be further configured to extract a value for each of the health parameters from the received set of signals, and analyse the extracted values to detect the health of the subject and determine the emotional state of the subject. Further, upon detection of at least one value of the one or more health parameters beyond a threshold value, the processing unit (110) can generate a second warning signal, and the generated second warning signal can be transmitted to the mobile computing device through a communication unit (116). In another illustrative embodiment, the one or more mobile computing devices can include any or a combination of a smart phone, a portable digital hand-held device, a laptop, a digital assistant, and the like. In an exemplary embodiment, the mobile computing device can include a web client or application to facilitate communication and interaction between the device (102) and the mobile computing device. Accordingly, during a communication session with the mobile computing device, the device (102) can provide the mobile computing device with a set of machine-readable instructions that, when interpreted by the mobile computing device using the web client or the application, cause the mobile computing device to present a user interface (UI), and transmit input received through such UIs back to the device (102).
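A minimal sketch of the threshold check that could raise the second warning signal is shown below; the parameter names and ranges are hypothetical and would be set per subject.

```python
# Hypothetical per-parameter ranges; clinical values would be set by a caretaker.
THRESHOLDS = {
    "heart_rate_bpm": (50, 120),
    "body_temp_c": (35.5, 38.0),
}

def check_health(readings: dict) -> list:
    """Return the parameters whose values fall outside their threshold range;
    a non-empty list would trigger the second warning signal."""
    breaches = []
    for name, value in readings.items():
        low, high = THRESHOLDS.get(name, (float("-inf"), float("inf")))
        if not (low <= value <= high):
            breaches.append(name)
    return breaches
```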
[0041] In an embodiment, the communication unit (116) can be configured to establish communication in between the processing unit (110) and the mobile computing device. In an embodiment, the communication unit (116) can be configured to facilitate wireless Internet technology. Examples of such wireless Internet technology include GSM, Wireless LAN (WLAN), Wireless Fidelity (Wi-Fi), Wi-Fi Direct, Digital Living Network Alliance (DLNA), Wireless Broadband (WiBro), Worldwide Interoperability for Microwave Access (WiMAX), High Speed Downlink Packet Access (HSDPA), HSUPA (High Speed Uplink Packet Access), Long Term Evolution (LTE), LTE-A (Long Term Evolution-Advanced), and the like.
[0042] In addition, the communication unit (116) can be configured to facilitate short-range communication. For example, short-range communication can be supported using at least one of Bluetooth, Radio Frequency Identification (RFID), Infrared Data Association (IrDA), Ultra-Wideband (UWB), ZigBee, Near Field Communication (NFC), Wireless-Fidelity (Wi-Fi), Wi-Fi Direct, Wireless USB (Wireless Universal Serial Bus), and the like.
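By way of a non-limiting illustration, the warning signals could be pushed to the caretaker's mobile computing device as in the following sketch; the HTTP transport and endpoint URL are hypothetical stand-ins for whichever channel (e.g. GSM) the communication unit (116) provides.

```python
import json
import requests  # illustrative transport; the device could equally use GSM/SMS

CARETAKER_ENDPOINT = "https://example.com/api/alerts"  # hypothetical endpoint

def send_warning(kind: str, payload: dict) -> bool:
    """Push a first/second warning signal to the caretaker's mobile app."""
    body = {"warning": kind, **payload}
    try:
        resp = requests.post(CARETAKER_ENDPOINT, data=json.dumps(body),
                             headers={"Content-Type": "application/json"},
                             timeout=5)
        return resp.ok
    except requests.RequestException:
        return False  # a real device would queue and retry here
```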
[0043] In an embodiment, the system (100) can include a push button (118) that can be mounted in the device (102), and the push button (118) can be configured to deactivate the alert unit (114). For example, when the alarm produces a sound to notify nearby entities, the alarm continues until the push button (118) is pressed.
[0044] In an embodiment, the system (100) can include a location identifier (120) disposed in the device (102), and the location identifier (120) can be configured to detect location information of the subject, and the detected location information can be transmitted to the mobile computing device to enable the associated entity to check the live location of the subject. For example, family members of the vocally disabled person can track the person's location, which can be made visible on a smart phone, allowing them to easily see where the vocally disabled person is.
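A minimal sketch of this live-location path follows, reusing the hypothetical send_warning helper from the earlier sketch; the GPS-read function and reporting period are placeholders.

```python
import time

def read_gps() -> tuple:
    """Placeholder driver for the location identifier (120); returns (lat, lon)."""
    raise NotImplementedError("GPS module access goes here")

def stream_location(period_s: int = 30) -> None:
    """Periodically push the wearer's live location to the caretaker's device,
    using the hypothetical send_warning helper sketched earlier."""
    while True:
        lat, lon = read_gps()
        send_warning("location", {"lat": lat, "lon": lon, "ts": time.time()})
        time.sleep(period_s)
```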
[0045] In an embodiment, the system (100) can include a power source operatively coupled to the vibration sensor (104), the set of sensors (106), the image acquisition unit (108), the processing unit (110), the audio unit (112), the alert unit (114), and the location identifier (120). The power source can facilitate providing electric power to the system (100), and the power source included in the device can include any type of alternating current (AC) power source and/or direct current (DC) power source. In certain embodiments, the power source can include one or more batteries (e.g., rechargeable and/or non-rechargeable batteries). The power source can additionally, or alternatively, include wires and/or plugs that can be connected to outlets.
[0046] As illustrated in FIG. 2, the processing unit (110) can include one or more processor(s) (202). The one or more processor(s) (202) can be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, logic circuitries, and/or any devices that manipulate data based on operational instructions. Among other capabilities, the one or more processor(s) (202) are configured to fetch and execute computer-readable instructions stored in a memory (204) of the processing unit (110). The memory (204) can store one or more computer-readable instructions or routines, which may be fetched and executed to create or share the data units over a network service. The memory (204) can include any non-transitory storage device including, for example, volatile memory such as RAM, or non-volatile memory such as EPROM, flash memory, and the like.
[0047] In an embodiment, the processing unit (110) can also include an interface(s) (206). The interface(s) (206) may include a variety of interfaces, for example, interfaces for data input and output devices, referred to as I/O devices, storage devices, and the like. The interface(s) (206) may facilitate communication of the processing unit (110) with various devices coupled to the processing unit (110). The interface(s) (206) may also provide a communication pathway for one or more components of the processing unit (110). Examples of such components include, but are not limited to, learning engine(s) (208) and a database (210).
[0048] In an embodiment, the learning engine(s) (208) can be implemented as a combination of hardware and programming (for example, programmable instructions) to implement one or more functionalities of the learning engine(s) (208). In examples described herein, such combinations of hardware and programming may be implemented in several different ways. For example, the programming for the learning engine(s) (208) may be processor executable instructions stored on a non-transitory machine-readable storage medium and the hardware for the learning engine(s) (208) may include a processing resource (for example, one or more processors), to execute such instructions. In the present examples, the machine-readable storage medium may store instructions that, when executed by the processing resource, implement the learning engine(s) (208). In such examples, the processing unit (110) can include the machine-readable storage medium storing the instructions and the processing resource to execute the instructions, or the machine-readable storage medium may be separate but accessible to the processing unit (110) and the processing resource. In other examples, the learning engine(s) (208) may be implemented by electronic circuitry. The database (210) can include data that is either stored or generated as a result of functionalities implemented by any of the components of the learning engine(s) (208).
[0049] In an embodiment, the learning engine(s) (208) can include a facial recognition unit (212), a text to speech converter unit (214), a health monitoring unit (216), an emotion prediction unit (218), and other unit(s) (220). The other unit(s) (220) can implement functionalities that supplement applications or functions performed by the system (100) or the learning engine(s) (208).
[0050] It would be appreciated that the units described herein are only exemplary units, and any other unit or sub-unit may be included as part of the system (100). These units too may be merged or divided into super-units or sub-units as may be configured.
[0051] As illustrated in FIG. 2, the processing unit (110) can be configured to receive vibration information from the throat of the subject (i.e. the wearer of the device) and correspondingly activate an image acquisition unit (108) configured to capture one or more images of one or more entities in a pre-defined area (i.e. the area towards which the wearer is looking). The images received from the image acquisition unit (108) can be transmitted to a facial recognition unit (212), where various facial attribute extraction algorithms can be applied, and the extracted facial attributes can be compared with a dataset storing facial attributes of entities such as family members, friends, relatives, and the like. Upon detection of a stored face, the information can be transmitted to a text to speech converter unit (214), which can produce an indicator associated with the identified entity using an audio unit (112). The indicator can include the sound of the name (i.e. speaking the name) of the identified entity, for example, ‘mom’.
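By way of a non-limiting illustration, this end-to-end path might look like the following sketch; pyttsx3 is an illustrative offline text-to-speech engine, trigger_alert is a hypothetical wrapper around the alert unit (114), and identify_entity refers to the earlier face-matching sketch.

```python
import pyttsx3  # illustrative offline text-to-speech engine

def trigger_alert(kind: str) -> None:
    """Hypothetical wrapper around the alert unit (114): sound alarm, notify app."""
    raise NotImplementedError("alert-unit driver goes here")

def announce(name: str) -> None:
    """Speak the identified entity's name through the audio unit (112)."""
    engine = pyttsx3.init()
    engine.say(name)      # e.g. "mom"
    engine.runAndWait()

def on_speech_attempt(frame, known_encodings, known_names) -> None:
    """End-to-end path of FIG. 2: vibration detected -> capture -> recognise ->
    speak the name or raise an alert. identify_entity is the earlier sketch."""
    result = identify_entity(frame, known_encodings, known_names)
    if result is None:
        trigger_alert("help")          # no face in view: subject needs help
    elif result == "unknown":
        trigger_alert("unidentified")  # first-warning-signal path
    else:
        announce(result)               # speak the stored name, e.g. 'mom'
```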
[0052] In an embodiment, the facial recognition unit (212) can be further configured to generate a first warning signal upon detection of an unidentified entity. The generated first warning signal can be transmitted to a mobile computing device associated with the entity, and the alert unit (114) can be activated to produce a sound, for example, ‘help’ or any other pre-stored message.
[0053] In an embodiment, when no facial information is extracted from the received one or more images, the facial recognition unit (212) can activate the alert unit (114) to produce a sound, for example, ‘help’ or any other pre-stored message. For example, when vibration is detected in the throat but no one is near the user, this means the user needs help or needs something.
[0054] In an embodiment, the health monitoring unit (216) can be configured to receive a set of signals in response to the detected one or more health parameters of the subject, collected from the set of sensors (106) positioned on the device (102). The health monitoring unit (216) can be further configured to extract a value for each of the health parameters from the received set of signals and analyse the extracted values to detect the health of the subject, and the health information can be transmitted to the emotion prediction unit (218), which can determine the emotional state (such as happy, neutral, sad, or the like) of the subject based on the received health information. For example, due to a fever, the user may be feeling sleepy. The emotion prediction unit (218) can further generate a second warning signal upon detection of at least one value of the one or more health parameters beyond a threshold value, and the generated second warning signal can be transmitted to the mobile computing device to notify the caretaker of the health information and emotional state of the user.
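A crude rule-based stand-in for the emotion prediction unit (218) is sketched below; in the disclosure this mapping would be learned, and the rules and thresholds here are purely illustrative.

```python
def predict_emotion(heart_rate_bpm: float, body_temp_c: float,
                    motion_level: float) -> str:
    """Illustrative rule-based stand-in for the learned emotion prediction
    unit (218); all thresholds are hypothetical."""
    if body_temp_c > 38.0:
        return "unwell"          # e.g. feverish and sleepy
    if heart_rate_bpm > 110 and motion_level < 0.2:
        return "distressed"      # elevated pulse while nearly still
    if motion_level > 0.7:
        return "excited"
    return "neutral"
```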
[0055] In an embodiment, each of the units can be trained and updated based on associated values including received facial attributes, health parameters, emotional states, and the like. In another embodiment, a deep learning model can be trained based on the received facial attributes, health parameters, and emotional states, where the deep learning model can be stored in the database (210). In yet another embodiment, once the model is trained correctly, a deep learning algorithm can be configured to perform repetitive and routine tasks within a shorter period of time.
[0056] In an exemplary embodiment, the learning engine (208) can be further configured in the form of an Artificial Neural Network, such as, but not limited to, a Convolutional Neural Network (CNN) or a Deep Neural Network (DNN).
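By way of a non-limiting illustration, a small CNN of the kind the learning engine (208) could employ might be defined as follows (Keras shown for brevity); the input size and layer widths are illustrative, not disclosed values.

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_face_cnn(num_entities: int) -> keras.Model:
    """Small CNN classifying cropped face images into known entities;
    layer sizes are illustrative, not tuned."""
    model = keras.Sequential([
        layers.Input(shape=(96, 96, 3)),           # cropped RGB face image
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(num_entities, activation="softmax"),  # one class per entity
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```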
[0057] As illustrated in FIG. 3, at step (302), a database (210) can be created storing the names and images of entities (i.e. family members, relatives, friends, etc.).
[0058] At step (304), a vibration sensor (104) disposed in a device (102) (i.e. a pendant positioned near the throat) can detect vibration from the throat of the user (i.e. the vocally disabled person).
[0059] At step (306), a processing unit (110) can activate an image acquisition unit (108) upon receiving the vibration information from the vibration sensor (104).
[0060] At step (308), the image acquisition unit (108) can capture images of a pre-defined area (i.e. images in the direction in which the disabled person is looking), and the captured images can be transmitted to the processing unit (110).
[0061] At step (310), the processing unit (110) can identify the person captured in the images through facial recognition analysis, and the name of the person stored in the database (210) can be extracted.
[0062] At step (312), a text to speech converter (214) can convert the name of the person into speech, and the name of the person can be produced by a speaker (i.e. the audio unit (112)) disposed in the device (102).
[0063] At step (314), when no image match is found, a help message can be produced by the speaker (i.e. the audio unit (112)).
[0064] At step (316), the push button (118) provided on the back side of the device (102) can be pressed to deactivate the audio unit. Also, after the name has been repeated five times, the audio unit can be deactivated automatically; a minimal sketch of this cutoff behaviour follows.
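The sketch below assumes the announce helper from the earlier sketch and a hypothetical push_button_pressed GPIO poll.

```python
def push_button_pressed() -> bool:
    """Hypothetical GPIO poll of the push button (118)."""
    raise NotImplementedError("board-specific GPIO read goes here")

def announce_with_cutoff(name: str, max_repeats: int = 5) -> None:
    """Repeat the name until the push button (118) is pressed or it has been
    spoken five times, per step (316). announce is the earlier TTS sketch."""
    for _ in range(max_repeats):
        if push_button_pressed():
            break
        announce(name)
```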
[0065] At step (318), health parameters such as heart rate, airflow pressure, motion pattern, and the like, collected from the set of sensors (106), can be transmitted to a mobile computing device of a caretaker (i.e. an entity) through the communication unit (116), i.e. a GSM module.
[0066] At step (320), the location of the user can be transmitted to the associated mobile computing device through a GPS module (i.e. the location identifier (120)), and the emotional state of the user can be transmitted to the associated mobile computing device.
[0067] As used herein, and unless the context dictates otherwise, the term “coupled” is intended to include both direct coupling (in which two elements that are coupled to each other contact each other) and indirect coupling (in which at least one additional element is located between the two elements). Therefore, the terms “coupled to” and “coupled with” are used synonymously. Within the context of this document, the terms “coupled to” and “coupled with” are also used euphemistically to mean “communicatively coupled with” over a network, where two or more devices are able to exchange data with each other over the network, possibly via one or more intermediaries.
[0068] While the foregoing describes various embodiments of the invention, other and further embodiments of the invention may be devised without departing from the basic scope thereof. The scope of the invention is determined by the claims that follow. The invention is not limited to the described embodiments, versions or examples, which are included to enable a person having ordinary skill in the art to make and use the invention when combined with information and knowledge available to the person having ordinary skill in the art.
ADVANTAGES OF THE PRESENT DISCLOSURE
[0069] The present disclosure provides a system that assists a vocally disabled person.
[0070] The present disclosure provides a system that speaks a name, thus bridging the communication gap.
[0071] The present disclosure provides a system that calls others for help automatically when the vocally disabled person needs help.
[0072] The present disclosure provides a system that is wearable and easily accessible.
[0073] The present disclosure provides a system that provides health information and the current emotion of the vocally disabled person to the caretaker.
[0074] The present disclosure provides a system that assists in location tracking, in case the vocally disabled person strays.
Claims:
1. A wearable monitoring system (100) for the vocally disabled, comprising:
a device (102) adapted to be worn by a subject;
a vibration sensor (104) disposed in the device (102), and configured to receive vibration information from the throat of the subject;
a set of sensors (106) configured to detect one or more health parameters of the subject, and correspondingly generate a set of signals;
an image acquisition unit (108) disposed in the device (102) and configured to capture one or more images of one or more entities in a pre-defined area;
a processing unit (110) disposed in the device (102), wherein the processing unit (110) includes one or more processors coupled with a memory, the memory storing instructions executable by the one or more processors to:
activate the image acquisition unit (108), upon receiving the vibration information from the throat of the subject;
extract facial information from the received one or more images;
compare the extracted facial information with a database storing facial information associated with a plurality of entities;
based on a match, cause an audio unit (112) to produce an indicator associated with the identified entity;
actuate an alert unit (114), upon detection of an unidentified entity, and correspondingly generate a first warning signal, wherein the generated first warning signal is transmitted to a mobile computing device; and
the processing unit (110) is further configured to:
extract a value for each of the health parameters from the received set of signals;
analyse the extracted values of the health parameters to detect the health of the subject and determine the emotional state of the subject;
generate a second warning signal upon detection of at least one value of the one or more health parameters beyond a threshold value, wherein the generated second warning signal is transmitted to the mobile computing device.
2. The system (100) as claimed in claim 1, wherein the device (102) comprises any or a combination of a pendant and a wearable band.
3. The system (100) as claimed in claim 1, wherein the audio unit (112) comprises a speaker, and wherein the indicator indicates the name of the identified entity.
4. The system (100) as claimed in claim 1, wherein, when no facial information is extracted from the received one or more images, the alert unit (114) is activated to notify the one or more entities that the subject needs help.
5. The system (100) as claimed in claim 1, wherein a push button (118) is mounted in the device, and configured to deactivate the alert unit (114).
6. The system (100) as claimed in claim 1, wherein the image acquisition unit (108) comprises any or a combination of a camera, a scanner, and a face recognition sensor.
7. The system (100) as claimed in claim 1, wherein the set of sensors (106) comprises any or a combination of a heart rate sensor, a temperature sensor, an accelerometer, and a touch sensor.
8. The system (100) as claimed in claim 1, wherein the device (102) is in communication with the mobile computing device through a communication unit (116), wherein the communication unit comprises any or a combination of a GSM module, a Wireless Fidelity (Wi-Fi) module, a Bluetooth module, a Li-Fi module, optical fiber, a Wireless Local Area Network (WLAN), and ZigBee.
9. The system (100) as claimed in claim 1, wherein the processing unit (110) is configured to transmit the determined emotional state of the subject to the associated mobile computing device.
10. The system (100) as claimed in claim 1, wherein a location identifier (120) is disposed in the device (102), wherein the location identifier is configured to detect location information of the subject, and the detected location information is transmitted to the mobile computing device to enable the associated entity to check live location of the subject.