
A System And Method For Performing One Or More Functions Associated With An Assistive Device

Abstract: The present invention relates to a system and method for performing one or more functions associated with an assistive device. The method may include receiving a user input to perform one or more functions. The method may include receiving context information associated with the user and the assistive device based on the received user input. The method may include identifying at least one function from among a plurality of functions based on the received context information. The method may include determining an instruction to perform the identified function using a trained model. Subsequently, the method may include presenting the determined instruction to the user to perform the identified function.


Patent Information

Application #:
Filing Date: 28 January 2022
Publication Number: 05/2022
Publication Type: INA
Invention Field: COMPUTER SCIENCE
Status:
Email: shivani@lexorbis.com
Parent Application:
Patent Number:
Legal Status:
Grant Date: 2023-02-06
Renewal Date:

Applicants

Enligence Technology Labs LLP
C/o Manvi Gupta, 20/20/1, Padmavati Nagar, Venkateswara Colony, Vizianagaram, Andhra Pradesh -535001, India

Inventors

1. DAYAL, Abhinav
Professor, Department of CSE, Vishnu Institute of Technology, Bhimavaram, Andhra Pradesh -534202
2. PONNADA, Sreenu
Associate Professor, Department of CSE, Vishnu Institute of Technology, Bhimavaram, Andhra Pradesh – 534202
3. BONTHU, Sridevi
Associate Professor, Department of CSE, Vishnu Institute of Technology, Bhimavaram, Andhra Pradesh – 534202
4. KIRAN, Kompella Bhargav
Associate Professor, Department of CSE, Vishnu Institute of Technology, Bhimavaram, Andhra Pradesh -534202
5. SHANKAR, Saripalle Ravi
Gayatri Vidya Parishad College of Engineering, Center for Innovation and Incubation, Madhurawada, Visakhapatnam 530048
6. GUPTA, Sumit
HOD, Department of CSE, Vishnu Institute of Technology, Bhimavaram, Andhra Pradesh -534202

Specification

Claims: We claim:
1. A method for performing one or more functions associated with an assistive device, comprising one or more processors, and memory storing one or more programs for execution by the one or more processors, the method comprising:
- receiving a user input to perform one or more functions;
- receiving context information associated with the user and assistive device based on the received user input;
- identifying at least one function from among a plurality of functions based on the received context information;
- determining an instruction to perform the identified function using a trained model; and
- presenting the determined instruction to the user to perform the identified function.
2. The method as claimed in claim 1, wherein the user input includes at least one of one or more voice inputs, one or more movements, and one or more gestures.
3. The method as claimed in claim 1, further comprising:
- receiving, by a central server, a device registration request from the assistive device;
- assigning, by the central server, a unique device id to the assistive device; and
- establishing secure communication between the central server and the assistive device.
4. The method as claimed in claim 1, wherein the context information associated with the user and assistive device is received by one or more sensors.
5. The method as claimed in claim 1, wherein the context information includes at least one of user-based context information, location-based context information, and time-based context information.
6. The method as claimed in claim 1, wherein determining the instruction to perform the one or more functions using the trained model comprises:
- acquiring one or more sensor data related to performing the one or more functions, wherein the acquired sensor data represent at least one of scenarios, objects, and locations;
- classifying and annotating the acquired sensor data based on the context information, wherein the annotation is performed in such a way as to enable at least one function;
- augmenting the annotated sensor data;
- training the model based on augmented sensor data;
- testing the inferences of the trained model; and
- deploying the tested trained model at the central server.
7. The method as claimed in claim 1, wherein the determined instruction is presented by at least one of a voice notification, a visual cue, and a vibration.
8. A system for performing one or more functions associated with an assistive device, comprising one or more processors, and memory storing one or more programs for execution by the one or more processors, the system comprising:
- a receiving unit (210) configured to receive a user input to perform one or more functions;
- a context information module (212) configured to receive context information associated with the user and assistive device based on the received user input;
- a function identifier engine (214) configured to
- identify at least one function from among a plurality of functions based on the received context information; and
- determine an instruction to perform the identified function using a trained model; and
- the assistive device configured to present the determined instruction to the user to perform the identified function.
9. The system as claimed in claim 8, wherein the user input includes at least one of one or more voice inputs, one or more movements, and one or more gestures.
10. The system as claimed in claim 8, wherein a central server is configured to
- receive a device registration request from the assistive device;
- assign a unique device id to the assistive device; and
- establish secure communication with the assistive device.
11. The system as claimed in claim 8, wherein the context information associated with the user and assistive device is received by one or more sensors.
12. The system as claimed in claim 8, wherein the context information includes at least one of user-based context information, location-based context information, and time-based context information.
13. The system as claimed in claim 8, wherein a trained model (216) is trained to determine the instruction to perform the one or more functions by:
- acquiring one or more sensor data related to performing the one or more functions, wherein the acquired sensor data represent at least one of scenarios, objects, and locations;
- classifying and annotating the acquired sensor data based on the context information, wherein the annotation is performed in such a way as to enable at least one function;
- augmenting the annotated sensor data;
- training the model based on augmented sensor data;
- testing the inferences of the trained model; and
- deploying the tested trained model at the central server.
14. The system as claimed in claim 8, wherein the assistive device is configured to present the determined instruction by at least one of voice notification, a visual cue, and vibration.
Description:

FIELD OF THE INVENTION

The present invention relates to a system and method for performing one or more functions associated with an assistive device.

BACKGROUND

With an ageing global population and a rise in non-communicable diseases, more people will need one or more assistive devices. Hearing aids, wheelchairs, communication aids, spectacles, pill organizers, and memory aids are all examples of such devices. Assistive devices are designed to maintain or improve an individual's functioning and thereby promote their well-being. Each of these devices is designed to perform a particular task. Thus, elderly or blind people may need to carry many devices, and managing multiple assistive devices becomes burdensome for them.
Further, most assistive devices designed so far for disabled persons or patients are designed, made, or adapted to assist a person in performing a particular task, i.e., they serve a single purpose. A disabled person has many requirements, such as crossing roads, boarding a bus, identifying whether a person is present nearby, or checking whether they have reached a market, bus stop, or supermarket. Most existing solutions are developed and tested in structured and predictable environments and surroundings. However, designing an assistive device for unstructured and unpredictable environments and surroundings, such as crowded buses, people hanging on the door, buses not stopping at designated spots, poor road conditions, and unregulated traffic in and around the bus, is challenging. There is therefore a need for a mechanism for performing one or more functions associated with an assistive device.

SUMMARY

This summary is provided to introduce a selection of concepts in a simplified format that is further described in the detailed description of the invention. This summary is not intended to identify key or essential inventive concepts of the invention, nor is it intended for determining the scope of the invention.
In an implementation, the present invention relates to a system and method for performing one or more functions associated with an assistive device. The method may include receiving a user input to perform one or more functions. The method may include receiving context information associated with the user and the assistive device based on the received user input. The method may include identifying at least one function from among a plurality of functions based on the received context information. The method may include determining an instruction to perform the identified function using a trained model. Subsequently, the method may include presenting the determined instruction to the user to perform the identified function.
To further clarify the advantages and features of the present invention, a more particular description of the invention will be rendered by reference to specific embodiments thereof, which are illustrated in the appended drawings. It is appreciated that these drawings depict only typical embodiments of the invention and are therefore not to be considered limiting of its scope. The invention will be described and explained with additional specificity and detail with the accompanying drawings.

BRIEF DESCRIPTION OF THE DRAWINGS

These and other features, aspects, and advantages of the present invention will become better understood when the following detailed description is read with reference to the accompanying drawings in which like characters represent like parts throughout the drawings, wherein:
Figure 1 illustrates an exemplary network environment implementing a system for performing one or more functions associated with an assistive device, according to an embodiment of the present subject matter;
Figure 2 illustrates a block diagram of a system for performing one or more functions associated with an assistive device, according to an embodiment of the present subject matter;
Figure 3 illustrates a flow diagram depicting a method for performing one or more functions associated with an assistive device, according to an embodiment of the present subject matter;
Figure 4 illustrates a diagram depicting a method for guiding blind or elderly people to board a bus by providing continuous instructions, according to an exemplary embodiment of the present subject matter; and
Figure 5 illustrates a diagram depicting scenarios involving crowded buses and a bus stopping far away from the bus stop, according to an exemplary embodiment of the present subject matter.
Further, skilled artisans will appreciate that elements in the drawings are illustrated for simplicity and may not have necessarily been drawn to scale. For example, the flow charts illustrate the method in terms of the most prominent steps involved to help to improve understanding of aspects of the present invention. Furthermore, in terms of the construction of the device, one or more components of the device may have been represented in the drawings by conventional symbols, and the drawings may show only those specific details that are pertinent to understanding the embodiments of the present invention so as not to obscure the drawings with details that will be readily apparent to those of ordinary skill in the art having the benefit of the description herein.

DETAILED DESCRIPTION
It should be understood at the outset that although illustrative implementations of the embodiments of the present disclosure are illustrated below, the present invention may be implemented using any number of techniques, whether currently known or in existence. The present disclosure should in no way be limited to the illustrative implementations, drawings, and techniques illustrated below, including the exemplary design and implementation illustrated and described herein, but may be modified within the scope of the appended claims along with their full scope of equivalents.
The term “some” as used herein is defined as “none, or one, or more than one, or all.” Accordingly, the terms “none,” “one,” “more than one,” “more than once, but not all” or “all” would all fall under the definition of “some.” The term “some embodiments” may refer to no embodiments or one embodiment or several embodiments or all embodiments. Accordingly, the term “some embodiments” is defined as meaning “no embodiment, or one embodiment, or more than one embodiment, or all embodiments.”
The terminology and structure employed herein is for describing, teaching, and illuminating some embodiments and their specific features and elements and does not limit, restrict or reduce the spirit and scope of the claims or their equivalents.
More specifically, any terms used herein such as but not limited to “includes,” “comprises,” “has,” “consists,” and grammatical variants thereof do NOT specify an exact limitation or restriction and certainly do NOT exclude the possible addition of one or more features or elements, unless otherwise stated, and must NOT be taken to exclude the possible removal of one or more of the listed features and elements, unless otherwise stated with the limiting language “MUST comprise” or “NEEDS TO include.”
Whether or not a certain feature or element was limited to being used only once, either way, it may still be referred to as “one or more features” or “one or more elements” or “at least one feature” or “at least one element.” Furthermore, the use of the terms “one or more” or “at least one” feature or element do NOT preclude there being none of that feature or element unless otherwise specified by limiting language such as “there NEEDS to be one or more . . . ” or “one or more element is REQUIRED.”
Unless otherwise defined, all terms, and especially any technical and/or scientific terms, used herein may be taken to have the same meaning as commonly understood by one having ordinary skill in the art.
Embodiments of the present invention will be described below in detail with reference to the accompanying drawings. The present invention relates to the field of artificial intelligence-powered assistive devices. The invention particularly supports ICU admitted patients, blind people, elderly people, and people with other disabilities.
Figure 1 illustrates an exemplary network environment 100 implementing a system for performing one or more functions associated with an assistive device, according to an embodiment of the present subject matter. As shown, the network environment 100 comprises a central server 102 configured for performing one or more functions associated with an assistive device, a plurality of assistive devices 106a and 106b (only two assistive devices are shown, referred to hereafter as the assistive device 106) associated with a plurality of users 108a and 108b, and a communication network 104, wherein the central server 102 is communicatively connected via the communication network 104 to the plurality of assistive devices 106a and 106b.
The assistive device 106 may be any computing device that often accompanies its user 108 to perform various activities; by way of example, the assistive device 106 may include, but is not limited to, a smart wrist band, a wearable device, a smartwatch, a computer, a laptop, a notebook computer, a tablet, and a smartphone having communication capabilities. The assistive device 106 may communicate with the central server 102 through the communication network 104 in one or more ways, such as wired connections, wireless connections, or a combination thereof. It will be appreciated by those skilled in the art that the assistive device 106 comprises one or more functional elements capable of communicating through the communication network 104 to receive one or more services offered by the system. In one embodiment of the present disclosure, the system 102 may be configured to support ICU-admitted patients, blind people, elderly people, and people with other disabilities. The system 102 may be configured to guide blind or elderly people to board a bus by providing continuous and easy instructions. The system 102 may be configured to help people from rural backgrounds, or uneducated people, easily move around in big hospitals, airports, railway stations, or shopping malls. The central server 102 may be configured to receive a device registration request from the assistive device. The central server 102 may be configured to assign a unique device id to the assistive device. The central server 102 may be configured to establish secure communication between the central server and the assistive device.
The communication network 104 may be a wireless network, a wired network, or a combination thereof. The wireless network may include long-range wireless radio, wireless personal area network (WPAN), wireless local area network (WLAN), and mobile data communications such as 3G, 4G, or any other similar technologies. The communication network 104 may be implemented as one of the different types of networks, such as an intranet, local area network (LAN), wide area network (WAN), the internet, and the like. The communication network 104 may either be a dedicated network or a shared network. The shared network represents an association of the different types of networks that use a variety of protocols, for example, Hypertext Transfer Protocol (HTTP), Transmission Control Protocol/Internet Protocol (TCP/IP), Wireless Application Protocol (WAP), and the like. Further, the communication network 104 may include a variety of network devices, including routers, bridges, servers, modems, computing devices, storage devices, and the like. In one implementation, the communication network 104 is the internet, which enables communication between the system 102 and the plurality of assistive devices 106.
Figure 2 illustrates a block diagram 200 of a system for performing one or more functions associated with an assistive device, according to an embodiment of the present subject matter. The system 202 includes a processor 204, a memory 206, a receiving unit 210, a context information engine 212, a function identifier engine 214, a trained model 216, and data 218. In an embodiment, the processor 204, the memory 206, the receiving unit 210, the context information engine 212, the function identifier engine 214, the trained model 216, and the data 218 may be communicatively coupled to one another. At least one of the plurality of modules 208 may be implemented through an AI model. A function associated with AI may be performed through the non-volatile memory or the volatile memory, and/or the processor.
The processor 204 may include one or a plurality of processors. At this time, one or a plurality of processors may be a general-purpose processor, such as a central processing unit (CPU), an application processor (AP), or the like, a graphics-only processing unit such as a graphics processing unit (GPU), a visual processing unit (VPU), and/or an AI-dedicated processor such as a neural processing unit (NPU).
A plurality of processors controls the processing of the input data in accordance with a predefined operating rule or artificial intelligence (AI) model stored in the non-volatile memory or the volatile memory. The predefined operating rule or artificial intelligence model is provided through training or learning. Here, being provided through learning means that, by applying a learning technique to a plurality of learning data, a predefined operating rule or AI model of the desired characteristic is made. The learning may be performed on the device itself in which AI according to an embodiment is performed, and/or may be implemented through a separate server/system. The AI model may consist of a plurality of neural network layers. Each layer has a plurality of weight values and performs a layer operation using the output of the previous layer and the plurality of weights. Examples of neural networks include, but are not limited to, convolutional neural network (CNN), deep neural network (DNN), recurrent neural network (RNN), restricted Boltzmann machine (RBM), deep belief network (DBN), bidirectional recurrent deep neural network (BRDNN), generative adversarial networks (GAN), and deep Q-networks.
As would be appreciated, the system 202, may be understood as one or more of a hardware, a software, a logic-based program, a configurable hardware, and the like. In an example, the processor 204 may be a single processing unit or a number of units, all of which could include multiple computing units. The processor 204 may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, processor cores, multi-core processors, multiprocessors, state machines, logic circuitries, application-specific integrated circuits, field-programmable gate arrays, and/or any devices that manipulate signals based on operational instructions. Among other capabilities, the processor 204 may be configured to fetch and/or execute computer-readable instructions and/or data stored in the memory 206.
In an example, the memory 206 may include any non-transitory computer-readable medium known in the art including, for example, volatile memory, such as static random-access memory (SRAM) and/or dynamic random-access memory (DRAM), and/or non-volatile memory, such as read-only memory (ROM), erasable programmable ROM (EPROM), flash memory, hard disks, optical disks, and/or magnetic tapes. The memory 206 may include the data. The data serves, amongst other things, as a repository for storing data processed, received, and generated by one or more of the processor 204, the memory 206, the receiving unit 210, the context information engine 212, the function identifier engine 214, the trained model 216 and the data 218.
The module(s) 210, amongst other things, may include routines, programs, objects, components, data structures, etc., which perform particular tasks or implement data types. The module(s) 210 may also be implemented as signal processor(s), state machine(s), logic circuitries, and/or any other device or component that manipulates signals based on operational instructions.
Further, the module(s) 210 may be implemented in hardware, as instructions executed by at least one processing unit, e.g., the processor 204, or by a combination thereof. The processing unit may be a general-purpose processor that executes instructions to cause the general-purpose processor to perform operations, or the processing unit may be dedicated to performing the required functions. In another aspect of the present disclosure, the module(s) 210 may be machine-readable instructions that, when executed by the processor 204/processing unit, perform any of the described functionalities.
In an embodiment, the receiving unit 210 may be configured to receive by a plurality of sensors, a user input to perform one or more functions. The user input includes at least one of voice input, one or more movements, and one or more gestures.
The context information module 212 may be configured to receive context information associated with the user and the assistive device based on the received user input. The context information module 212 may be configured to receive the context information by one or more sensors. The context information includes at least one of user-based context information, location-based context information, and time-based context information. For example, the context may include the current view as an RGB image to help with navigation, a depth image for obstacle detection, a spoken audio command from an ICU patient, a gesture expressing a question or query, etc.
Continuing with the above embodiment, the function identifier engine 214 may be configured to identify at least one function from among a plurality of functions based on the received context information. Further, the function identifier engine 214 may be configured to determine an instruction to perform the identified function using the trained model 216. The function identifier engine 214 may be configured to acquire one or more sensor data related to performing the one or more functions. The acquired sensor data represent at least one of scenarios, objects, and locations. The function identifier engine 214 may be configured to classify and annotate the acquired sensor data based on the context information. The annotation is performed in such a way as to enable at least one function. For example, the function identifier engine 214 may be configured to determine the objects of interest and their precise locations, and respond with the necessary voice or vibration-based instructions to the visually challenged, based on the image provided. In another example, the function identifier engine 214 may be configured to analyze a voice command, identify the need, and respond with an appropriate answer, for example, which pill to take now, or whether a nurse is to be called. In the case of calling the nurse, the system may be smarter and include more context information, e.g., the expression of the patient, voice analysis, the exact need inferred from a gesture, the status of medication being delivered, say via an IV, etc. The function identifier engine 214 may be configured to augment the annotated sensor data, train the model based on the augmented sensor data, test the inferences of the trained model, and deploy the tested trained model at the central server.
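The context-to-function identification described above can be sketched as a simple lookup over context fields. The `Context` fields, function names, and table entries below are hypothetical illustrations, not part of the specification:

```python
from dataclasses import dataclass

# Hypothetical context record combining the three context types named in the
# description: user-based, location-based, and time-based information.
@dataclass
class Context:
    user_need: str      # e.g. "board_bus", "take_medication"
    location: str       # e.g. "bus_stop", "icu_ward"
    time_of_day: str    # e.g. "morning"

# Illustrative mapping from (need, location) pairs to supported functions.
FUNCTION_TABLE = {
    ("board_bus", "bus_stop"): "bus_boarding_guidance",
    ("take_medication", "icu_ward"): "medication_reminder",
    ("call_nurse", "icu_ward"): "nurse_call",
}

def identify_function(ctx: Context) -> str:
    """Pick at least one function from the plurality based on context."""
    return FUNCTION_TABLE.get((ctx.user_need, ctx.location), "general_assistance")

ctx = Context(user_need="board_bus", location="bus_stop", time_of_day="morning")
print(identify_function(ctx))  # bus_boarding_guidance
```

A production system would replace the static table with the trained model's inference, but the dispatch shape is the same.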
The assistive device 106 may be configured to present the determined instruction to the user to perform the identified function. The assistive device 106 may be configured to present the determined instruction by at least one of voice notification, a visual cue, and vibration.
Figure 3 illustrates a flow diagram 300 depicting a method for performing one or more functions associated with an assistive device, according to an embodiment of the present subject matter. The method 300 may include receiving, at step 301, a user input to perform one or more functions. The user input includes at least one of one or more voice inputs, one or more movements, and one or more gestures.
The method 300 may include receiving, at step 303, context information associated with the user and the assistive device based on the received user input. This step may be performed by the context information engine 212. The context information associated with the user and assistive device is received by one or more sensors. Further, the context information includes at least one of user-based context information, location-based context information, and time-based context information.
The method 300 may include identifying, at step 305, at least one function from among a plurality of functions based on the received context information. This step may be performed by the function identifier engine 214.
The method 300 may include determining, at step 307, an instruction to perform the identified function using a trained model. This step may be performed by the function identifier engine 214. Determining the instruction to perform the one or more functions using the trained model includes acquiring one or more sensor data related to performing the one or more functions. Further, the acquired sensor data represent at least one of scenarios, objects, and locations. The method may include classifying and annotating the acquired sensor data based on the context information. The annotation is performed in such a way as to enable at least one function. The determination may include augmenting the annotated sensor data. The determination may include training the model based on the augmented sensor data. The method may include testing the inferences of the trained model. The method may include deploying the tested trained model at the central server.
Subsequently, the method 300 may include presenting, at step 309, the determined instruction to the user to perform the identified function. The determined instruction is presented by at least one of a voice notification, a visual cue, and a vibration.
The method may include receiving, by a central server 102, a device registration request from the assistive device 106. The method may include assigning a unique device id to the assistive device 106. The method may include establishing secure communication between the central server 102 and the assistive device 106.
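The registration flow just described (registration request, unique device id, secure communication) might be sketched as follows. The class name, request fields, and the hash-derived session key are illustrative assumptions; a real deployment would establish the secure channel with TLS or an authenticated key exchange rather than a bare hash:

```python
import hashlib
import uuid

class CentralServer:
    """Hypothetical sketch of the registration flow handled by the central server 102."""

    def __init__(self):
        self.registry = {}  # device_id -> device record

    def register_device(self, request: dict) -> dict:
        # Assign a unique device id to the requesting assistive device.
        device_id = str(uuid.uuid4())
        # Derive a per-device session key as a stand-in for "secure communication".
        session_key = hashlib.sha256((device_id + request["model"]).encode()).hexdigest()
        self.registry[device_id] = {"model": request["model"], "key": session_key}
        return {"device_id": device_id, "session_key": session_key}

server = CentralServer()
resp = server.register_device({"model": "smart-wristband-402"})
assert resp["device_id"] in server.registry
```

Each wristband would then present its id and key on subsequent requests, matching the unique-ID mapping described for Figure 4 below.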
Figure 4 illustrates a diagram 400 depicting a method for guiding blind or elderly people to board a bus by providing continuous instructions, according to an exemplary embodiment of the present subject matter. In an example, the system 202 may be implemented in a wearable smart wristband 402. The smart wristband 402 may be configured to perform one or more functionalities, such as helping people locate bus stops, board a bus on high-traffic roads, and operate smart home applications. The smart wristband 402 may be configured to navigate the visually impaired to board a bus. As shown in Figure 4, the smart wristband 402 may be configured to work in at least one alert mode. For example, the smart wristband 402 may be configured to work in six alert modes: 0 - Sleep, 1 - Bus Stop, 2 - Person, 3 - Road, 4 - Bus, and 5 - Bus Door. The default mode may be bus detection. Further, the system 202 may be configured to help blind or elderly people board the bus by continuously instructing the person from the point where he locates the bus stop to the point where he boards the bus, using depth sensors. This step corresponds to step 309. The system 202 helps him safely board the bus with simple instructions like:
a. Bus is approaching, and it is 55 meters away from you,
b. Turn right to face the front door, and
c. Door is 2 steps away from you.
People who are blind and travel regularly face issues boarding buses successfully on high-traffic roads. Every wristband 402 may be assigned a unique ID, and that ID may be mapped to the respective function that the user intends to use. The wristband may be utilized for other applications by simply changing the mapping of the ID to the application. The smart wristband may be configured to use a depth camera to obtain both an image and depth information, helping with correct identification of the bus door and its relative location and avoiding static and moving obstacles along the way, giving the visually challenged a more self-confident way to board the desired bus. The smart wristband 402 may be configured to help the blind navigate to the bus stop, guide whether the user is facing the road or not by tracking moving objects, guide the user to board the bus by continuously instructing him, and identify a person nearby from whom immediate help can be taken, if required.
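The six alert modes and the simple voice instructions listed above can be sketched as a small mode table plus a formatter. The function signature and fallback message are hypothetical; only the mode labels and example phrasings come from the description:

```python
from typing import Optional

# The six alert modes listed for the smart wristband 402.
ALERT_MODES = {0: "Sleep", 1: "Bus Stop", 2: "Person", 3: "Road", 4: "Bus", 5: "Bus Door"}
DEFAULT_MODE = 4  # the description says the default mode may be bus detection

def instruction(mode: int, distance_m: Optional[float] = None,
                steps: Optional[int] = None) -> str:
    """Format a simple voice instruction like those in the example above."""
    label = ALERT_MODES[mode]
    if label == "Bus" and distance_m is not None:
        return f"Bus is approaching, and it is {distance_m:.0f} meters away from you"
    if label == "Bus Door" and steps is not None:
        return f"Door is {steps} steps away from you"
    return f"{label} detected"

print(instruction(4, distance_m=55))  # Bus is approaching, and it is 55 meters away from you
print(instruction(5, steps=2))        # Door is 2 steps away from you
```

On the device, the mode and distances would come from the depth-camera detections rather than being passed in directly.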
Figure 5 illustrates a diagram 500 depicting classifying and annotating the acquired sensor data, according to an exemplary embodiment of the present subject matter. The system may be configured to collect various images containing buses, bus stops, and other vehicles such as autos, cars, two-wheelers, and trucks, and annotate them according to the requirement. The collected image dataset represents buses in Indian rural, urban, and suburban scenarios, under various lighting and rush conditions. The annotation is done in such a way that the system can safely navigate the user to board the bus. The total number of classes is 11, and the class labels are auto, bus, bus stop, car, driver door, front door, person, rear door, route, truck, and two-wheeler. A MobileNet SSD model (mobilenet_ssd_v2), pre-trained on the Common Objects in Context (COCO) dataset, may be taken from the TensorFlow Model Garden library and fine-tuned with the newly created image dataset. The train, validation, and test split of the data is 85% - 10% - 5%. The model is trained for, for example, but not limited to, 60,000 steps with a batch size of 80. The number of evaluation steps is 50. The iou_threshold is 0.5. The images are resized to a height of 300 and a width of 300. The dropout used is 0.5. Once the smart wristband detects the bus, it guides the person to move a little to the right or left with voice instructions until it locates the bus door, and keeps instructing him until he successfully boards the bus. The system requires a small bag (or the person can use the same bag in which he carries his belongings) to hold the computing unit and other devices. The user can select the modes through which he can board the bus safely.
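The data preparation above can be reproduced in outline: the 11 annotated class labels and the 85% / 10% / 5% train-validation-test split are stated in the text, while the splitting helper itself is a generic sketch, not the authors' exact tooling.

```python
# Illustrative data-preparation sketch. CLASS_LABELS and HPARAMS restate
# values given in the specification; split_dataset() is a generic helper.
import random

CLASS_LABELS = [
    "auto", "bus", "bus stop", "car", "driver door", "front door",
    "person", "rear door", "route", "truck", "two-wheeler",
]  # 11 classes, as stated in the text

HPARAMS = {
    "num_steps": 60_000,      # training steps
    "batch_size": 80,
    "num_eval_steps": 50,
    "iou_threshold": 0.5,
    "image_size": (300, 300),  # resize height x width
    "dropout": 0.5,
}

def split_dataset(image_paths, seed=0):
    """Shuffle and split image paths 85% / 10% / 5% into train/val/test."""
    paths = list(image_paths)
    random.Random(seed).shuffle(paths)
    n = len(paths)
    n_train = int(n * 0.85)
    n_val = int(n * 0.10)
    return paths[:n_train], paths[n_train:n_train + n_val], paths[n_train + n_val:]
```

These values would typically be carried into the fine-tuning configuration of the pre-trained mobilenet_ssd_v2 checkpoint.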
After training and running a test inference to check the model’s functionality, the model is deployed to an OAK-D device by converting the custom model to the OpenVINO intermediate representation and then to a DepthAI-compatible format.
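The two-stage conversion above (TensorFlow model → OpenVINO IR → device blob) can be sketched by composing the usual tool invocations. This is a hedged illustration: the exact flags, paths, shape, and SHAVE count below are assumptions, not values from the specification.

```python
# Illustrative sketch of the OAK-D deployment step. mo (OpenVINO Model
# Optimizer) and compile_tool are real OpenVINO utilities; the specific
# arguments used here are assumed for illustration.

def mo_command(saved_model_dir: str, shape=(1, 300, 300, 3)):
    """Compose a Model Optimizer call producing OpenVINO IR (.xml/.bin)."""
    return [
        "mo",
        "--saved_model_dir", saved_model_dir,
        "--input_shape", str(list(shape)),
        "--data_type", "FP16",  # OAK-D's Myriad X VPU runs FP16
    ]

def compile_command(xml_path: str, shaves: int = 6):
    """Compose a compile_tool call producing a MyriadX .blob for DepthAI."""
    return [
        "compile_tool",
        "-m", xml_path,
        "-d", "MYRIAD",
        "-ip", "U8",
        "-VPU_NUMBER_OF_SHAVES", str(shaves),
    ]
```

The resulting .blob would then be loaded into a DepthAI pipeline running on the OAK-D.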
In another example, the wristband may be given to patients at hospitals. Many people from rural areas visit hospitals in cities for their medical needs. Most super-specialty hospitals have different departments housed in different buildings. A wristband may be configured to guide the patient to move around easily within a given hospital. In another example, a wristband may be configured to help turn a home into a smart home.
In view of the aforesaid, various advantageous features of the present disclosure are provided:
● the present invention aims at developing an easy-to-wear, easy-to-use, low-cost assistive device; and
● a single assistive device may be configured to help people locate bus stops, board buses on high-traffic roads, operate smart home applications, etc.
While specific language has been used to describe the disclosure, any limitations arising on account of the same are not intended. As would be apparent to a person skilled in the art, various working modifications may be made to the method to implement the inventive concept as taught herein. The drawings and the foregoing description give examples of embodiments. Those skilled in the art will appreciate that one or more of the described elements may well be combined into a single functional element. Alternatively, certain elements may be split into multiple functional elements. Elements from one embodiment may be added to another embodiment. For example, orders of processes described herein may be changed and are not limited to the manner described herein.
Moreover, the actions of any flow diagram need not be implemented in the order shown; nor do all of the acts necessarily need to be performed. Also, those acts that are not dependent on other acts may be performed in parallel with the other acts. The scope of embodiments is by no means limited by these specific examples. Numerous variations, whether explicitly given in the specification or not, such as differences in structure, dimension, and use of material, are possible. The scope of embodiments is at least as broad as given by the following claims.
Benefits, other advantages, and solutions to problems have been described above concerning specific embodiments. However, the benefits, advantages, solutions to problems and any component(s) that may cause any benefit, advantage, or solution to occur or become more pronounced are not to be construed as a critical, required, or essential feature or component of any or all the claims.

Documents

Application Documents

# Name Date
1 202241004876-STATEMENT OF UNDERTAKING (FORM 3) [28-01-2022(online)].pdf 2022-01-28
2 202241004876-FORM FOR STARTUP [28-01-2022(online)].pdf 2022-01-28
3 202241004876-FORM FOR SMALL ENTITY(FORM-28) [28-01-2022(online)].pdf 2022-01-28
4 202241004876-FORM 1 [28-01-2022(online)].pdf 2022-01-28
5 202241004876-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [28-01-2022(online)].pdf 2022-01-28
6 202241004876-EVIDENCE FOR REGISTRATION UNDER SSI [28-01-2022(online)].pdf 2022-01-28
7 202241004876-DRAWINGS [28-01-2022(online)].pdf 2022-01-28
8 202241004876-DECLARATION OF INVENTORSHIP (FORM 5) [28-01-2022(online)].pdf 2022-01-28
9 202241004876-COMPLETE SPECIFICATION [28-01-2022(online)].pdf 2022-01-28
10 202241004876-STARTUP [01-02-2022(online)].pdf 2022-02-01
11 202241004876-FORM28 [01-02-2022(online)].pdf 2022-02-01
12 202241004876-FORM-9 [01-02-2022(online)].pdf 2022-02-01
13 202241004876-FORM 18A [01-02-2022(online)].pdf 2022-02-01
14 202241004876-FER.pdf 2022-03-07
15 202241004876-Proof of Right [18-03-2022(online)].pdf 2022-03-18
16 202241004876-FORM-26 [18-03-2022(online)].pdf 2022-03-18
17 202241004876-OTHERS [24-06-2022(online)].pdf 2022-06-24
18 202241004876-FER_SER_REPLY [24-06-2022(online)].pdf 2022-06-24
19 202241004876-COMPLETE SPECIFICATION [24-06-2022(online)].pdf 2022-06-24
20 202241004876-CLAIMS [24-06-2022(online)].pdf 2022-06-24
21 202241004876-US(14)-HearingNotice-(HearingDate-16-11-2022).pdf 2022-10-20
22 202241004876-FORM-26 [14-11-2022(online)].pdf 2022-11-14
23 202241004876-Correspondence to notify the Controller [14-11-2022(online)].pdf 2022-11-14
24 202241004876-Written submissions and relevant documents [30-11-2022(online)].pdf 2022-11-30
25 202241004876-PatentCertificate06-02-2023.pdf 2023-02-06
26 202241004876-IntimationOfGrant06-02-2023.pdf 2023-02-06

Search Strategy

1 202241004876E_07-03-2022.pdf

ERegister / Renewals

3rd: 05 Dec 2023

From 28/01/2024 - To 28/01/2025

4th: 05 Dec 2023

From 28/01/2025 - To 28/01/2026

5th: 05 Dec 2023

From 28/01/2026 - To 28/01/2027

6th: 05 Dec 2023

From 28/01/2027 - To 28/01/2028

7th: 05 Dec 2023

From 28/01/2028 - To 28/01/2029