
System And Method For Assisting Users By An Intelligent Assistive Robot

Abstract: Exemplary embodiments of the present disclosure are directed towards a system and method for assisting users by an intelligent assistive robot, comprising collecting sensor information using sensors through a controller and obtaining odometry information by a navigation stack; obtaining target location tags or location coordinates from voice-based interaction of a user using a microphone; enabling the intelligent assistive robot to load a static map from a cloud server and initialising path parameters by the navigation stack; planning and generating a global path from the current location to the target location using the Dijkstra algorithm; dividing the global path into smaller segments and converting them to a timed elastic band for local path planning; and enabling the intelligent assistive robot to move to the target location by avoiding obstacles, providing assistance and health services to the user, and executing tasks based on user requests through voice-based interaction, a communication subsystem, and an assistance module embedded in a first computing device, a second computing device, and the intelligent assistive robot. FIG. 1


Patent Information

Application #
Filing Date
25 April 2021
Publication Number
43/2022
Publication Type
INA
Invention Field
COMPUTER SCIENCE
Status
Email
patentagent@prometheusip.com
Parent Application

Applicants

ACHALA HEALTH SERVICES PRIVATE LIMITED
Plot 253, Road No 6, Kakateeya Hills, Hyderabad-500081, Telangana, India
RAJESH RAJU
Plot 253, Road No 6, Kakateeya Hills, Hyderabad-500081, Telangana, India

Inventors

1. DEEPA BHUPATHIRAJU
Plot 253, Road No 6, Kakateeya Hills, Hyderabad-500081, Telangana, India
2. RAJESH RAJU
Plot 253, Road No 6, Kakateeya Hills, Hyderabad-500081, Telangana, India
3. JYOTISH VARMA VEGESNA
E-53, Madhura Nagar, Hyderabad–500045, Telangana, India

Specification

This patent application claims the priority benefit of Provisional Patent Application No. 202141013007, entitled “SYSTEM AND METHOD FOR ASSISTING USERS BY AN INTELLIGENT ASSISTIVE ROBOT”, filed on 25-Mar-2021, the entire contents of which are hereby incorporated by reference herein.

TECHNICAL FIELD
[001] The disclosed subject matter relates generally to an intelligent assistive robot. More particularly, the present disclosure relates to a system and method for assisting users by an intelligent assistive robot for handling tasks, engaging in activities, monitoring the health of the users, connecting to doctors in case of emergencies, and the like.

BACKGROUND
[002] The number and proportion of the geriatric population are increasing across the globe, and even more so in developing countries. Currently, this population is around 1 billion in number and is expected to more than double in the future. This continual increase in this demographic requires a structural change in all sectors of society to adapt to the upcoming range of challenges. In particular, health and social care of the elderly must be revamped to address the needs of this rising population.

[003] The current public and private healthcare system is highly dependent on infrastructure. In its present state, most first points of care are incapable of providing the help that elderly people need. These first points of care are plagued with several issues, ranging from a lack of trained resources and qualified clinicians to under-equipped clinics, poor infrastructure, improper management, and more. There is a need to bring the first point of care to the homes of these elders and deliver more effective care to them.

[004] With increasing advancements in wireless connectivity, it is possible to build telemedicine platforms that deliver a range of health care services to the home. Through these platforms, many services such as doctor consultations, prescription refilling, nursing services, and more are delivered in the comfort of the home. Such solutions can satisfy many health needs but have little impact on improving the overall quality of life, because social and mental needs are also major driving factors in improving quality of life. These solutions have extremely limited access to a user's social life and mental status and therefore cannot understand their needs in a holistic way. The social aspect must be studied from multiple perspectives, such as daily routines, interests, family, social circle, and more.

[005] In light of the aforementioned discussion, there exists a need for an intelligent assistive robot that would overcome or ameliorate the above-mentioned limitations.

SUMMARY
[006] The following presents a simplified summary of the disclosure in order to provide a basic understanding to the reader. This summary is not an extensive overview of the disclosure and it does not identify key/critical elements of the invention or delineate the scope of the invention. Its sole purpose is to present some concepts disclosed herein in a simplified form as a prelude to the more detailed description that is presented later.

[007] An objective of the present invention is directed towards designing an intelligent automated navigational assistive robot for assisting a user in handling tasks, engaging in activities, monitoring health, connecting to doctors if needed, and triggering alarms in case of emergency.

[008] Another objective of the present invention is directed towards programming the automated navigation robot (intelligent assistive robot) to navigate to the user's desired location from its current location.

[009] Another objective of the present invention is directed towards a navigation stack that uses the sensor information to avoid both dynamic and static obstacles.

[0010] Another objective of the present invention is directed towards providing the capability of object and people recognition using the sensor information.

[0011] Another objective of the present invention is directed towards motion detection to shadow the user or automatically wake up if the user is nearby.

[0012] Another objective of the present invention is directed towards displaying the floor plan map on the first or second computing device and providing a single-click control to save desired geo-locations within the map.

[0013] Another objective of the present invention is directed towards an intelligent assistive robot powered by artificial intelligence and the internet of things.

[0014] Another objective of the present invention is directed towards providing an interactive voice response system using conversational artificial intelligence and semantics to act as a virtual voice assistant.

[0015] Another objective of the present invention is directed towards providing the capability of automated task execution.

[0016] Another objective of the present invention is directed towards facilitating multimedia capabilities with inbuilt speakers, microphones, and a display system.

[0017] Another objective of the present invention is directed towards providing a video conference capability with an inbuilt camera.

[0018] Another objective of the present invention is directed towards offering a range of personalized health care experiences.

[0019] Another objective of the present invention is directed towards providing an automatic pill dispenser and reminder.

[0020] Another objective of the present invention is directed towards capturing user health vitals (random glucose level, blood pressure, blood oxygen level, body temperature, and the like) on a user request and detecting abnormalities based on the captured health vitals.

[0021] Another objective of the present invention is directed towards providing an SOS button for the user to connect to the emergency services.

[0022] Another objective of the present invention is directed towards collecting highly valuable social data points and integrating them with the periodic healthcare data to provide a holistic way to augment the quality of life.

[0023] Another objective of the present invention is directed towards integrating mental and physical well-being aspects to provide a holistic solution to augment the overall quality of life of senior citizens.

[0024] Another objective of the present invention is directed towards allowing elderly people to live in safe, healthier, more engaging environments and have an active, quality life.

[0025] In an embodiment of the present disclosure, collecting sensor information using the one or more sensors through a controller.

[0026] In another embodiment of the present disclosure, obtaining odometry information from the sensor information by the navigation stack.

[0027] In another embodiment of the present disclosure, obtaining target location tags or coordinates from voice-based interaction of a user using a microphone.
[0028] In another embodiment of the present disclosure, enabling the intelligent assistive robot to load a static map from a cloud server as per the user interaction, whereby the navigation stack initializes path parameters as per the static map.

[0029] In another embodiment of the present disclosure, planning and generating a global path from the current location of the intelligent assistive robot to the target location using the Dijkstra algorithm.

[0030] In another embodiment of the present disclosure, dividing the global path into smaller segments and converting them to a timed elastic band using a timed elastic band planner for local path planning; the segments are divided such that the current segment is within the sensor range of the intelligent assistive robot.

[0031] In another embodiment of the present disclosure, changing the local path (segment) dynamically upon the detection of any obstacles.

[0032] In another embodiment of the present disclosure, calculating the angular and linear velocities of the intelligent assistive robot.

[0033] In another embodiment of the present disclosure, enabling the intelligent assistive robot to move to the target location by avoiding the obstacles and providing assistance and one or more health services to the user.

[0034] In another embodiment of the present disclosure, executing one or more tasks based on the user request through at least one of: the voice-based interaction using an interactive voice response system; a communication subsystem; an assistance module embedded in a first computing device, a second computing device, and the intelligent assistive robot.

[0035] In another embodiment of the present disclosure, the intelligent assistive robot comprises a communication subsystem, an interactive voice response system, a navigation subsystem, a vital monitoring subsystem, one or more sensors, an internal memory, and an SOS button, which are electrically coupled to a processing device.

[0036] In another embodiment of the present disclosure, the communication subsystem comprises a display system, an inbuilt camera, an audio system, and a wireless connectivity system.

[0037] In another embodiment of the present disclosure, the communication subsystem is configured to enable the user to interact with a doctor, family, and friends through at least one of: a touchscreen display system on the intelligent assistive robot; one or more voice-based interactions using a microphone; and an assistance module embedded in an internal memory of the intelligent assistive robot, a first computing device, and a second computing device.

[0038] In another embodiment of the present disclosure, the interactive voice response system is embedded with artificial intelligence-based semantics and an artificial intelligence-based proactive conversational agent configured to interpret and process the user's voice-based interaction and generate replies to the user using an algorithm-generated voice through a speaker.

[0039] In another embodiment of the present disclosure, the interactive voice response system is configured to generate voice-based reminders to the user to execute one or more tasks and engage the user by initiating conversations based upon the user's interest and mood.

[0040] In another embodiment of the present disclosure, the interactive voice response system is configured to provide seamless teleconsultation with the doctor through at least one of: the voice-based interaction of the user; or when an abnormality is detected using the inbuilt camera.

In another embodiment of the present disclosure, the navigation subsystem is configured to receive sensor information from the one or more sensors and process the sensor information through the processing device to execute a navigation plan; the navigation subsystem navigates the intelligent assistive robot to a required place upon a user request through the voice-based interaction and is also configured to track a user location upon identifying a fall for elderly care.

[0041] In another embodiment of the present disclosure, the global planner engine is configured to obtain a user position and deliver it to the local planner engine, and the local planner engine is configured to guide the intelligent assistive robot to move to the user location or required place by avoiding the obstacles in the path.

[0042] In another embodiment of the present disclosure, the navigation subsystem is configured to enable the intelligent assistive robot to locate a target in any location using name tags assigned by the user, and to identify people or objects and their positions using computer vision algorithms and proximity analysis techniques.

[0043] In another embodiment of the present disclosure, the vital monitoring subsystem is configured to perform automated diagnostic tests on the user for elderly care; the vital monitoring subsystem comprises an automatic pill dispensing and reminder unit, a heart rate and blood oxygen monitoring unit, an infrared thermometer, a glucometer, a blood pressure monitoring unit, a digital stethoscope, an inspection camera, and an electrocardiogram capturing device.

[0044] In another embodiment of the present disclosure, the vital monitoring subsystem is configured to monitor the health of the elderly on a regular basis to capture one or more health vitals; the vital monitoring subsystem is configured to detect one or more abnormalities based on the one or more health vitals captured and predict diseases and infections using artificial intelligence-based algorithms.

[0045] In another embodiment of the present disclosure, an SOS button is configured to initiate SOS operations by connecting to one or more emergency services over the network; the SOS operations are initiated automatically upon detection of abnormalities in at least one of: a user posture; a facial expression; and a fall detection.

BRIEF DESCRIPTION OF THE DRAWINGS
[0046] In the following, numerous specific details are set forth to provide a thorough description of various embodiments. Certain embodiments may be practiced without these specific details or with some variations in detail. In some instances, certain features are described in less detail so as not to obscure other aspects. The level of detail associated with each of the elements or features should not be construed to qualify the novelty or importance of one feature over the others.

[0047] FIG. 1A is a diagram depicting a schematic representation of a system for assisting users by an intelligent assistive robot, in accordance with one or more exemplary embodiments.

[0048] FIG. 1B is another exemplary diagram depicting a schematic representation of a system for assisting users by an intelligent assistive robot, in accordance with one or more exemplary embodiments.

[0049] FIG. 1C is an example diagram depicting an intelligent assistance robot, in accordance with one or more exemplary embodiments.

[0050] FIG. 1D and FIG. 1E are example diagrams depicting a schematic representation of the navigation peripherals, in accordance with one or more exemplary embodiments.

[0051] FIG. 2A is an example diagram depicting a schematic representation of the vitals monitoring subsystem 102 and communication subsystem 108, in accordance with one or more exemplary embodiments.

[0052] FIG. 2B is an example diagram depicting a schematic representation of the communication subsystem 108, in accordance with one or more exemplary embodiments.

[0053] FIG. 3 is a flowchart depicting an exemplary method for executing tasks by the intelligent assistive robot, in accordance with one or more exemplary embodiments.

[0054] FIG. 4 is a flowchart depicting an exemplary method for initiating the call with the doctor, in accordance with one or more exemplary embodiments.

[0055] FIG. 5A is a flowchart depicting an exemplary method for driving the intelligent assistive robot using navigation subsystem, in accordance with one or more exemplary embodiments.

[0056] FIG. 5B is an example diagram depicting a schematic representation of the robot operating system navigation stack, in accordance with one or more exemplary embodiments.

[0057] FIG. 6A and FIG. 6B are example diagrams depicting the localization for a robot moving in 2D, in accordance with one or more exemplary embodiments.

[0058] FIG. 7A and FIG. 7B are example diagrams depicting the G-Mapping Package, in accordance with one or more exemplary embodiments.

[0059] FIG. 7C is an example diagram depicting the map visualization to the user on the computing devices by assistance module, in accordance with one or more exemplary embodiments.

[0060] FIG. 8 is a flowchart depicting an exemplary method for driving the intelligent assistive robot using navigation subsystem, in accordance with one or more exemplary embodiments.

[0061] FIG. 9 is a flowchart depicting an exemplary method for the state of the functionality of the buttons, in accordance with one or more exemplary embodiments.

[0062] FIG. 10 is a flowchart depicting an exemplary method for driving the intelligent assistive robot using navigation subsystem, in accordance with one or more exemplary embodiments.

[0063] FIG. 11 is a flowchart depicting another exemplary method for driving the intelligent assistive robot using navigation subsystem, in accordance with one or more exemplary embodiments.

[0064] FIG. 12 is a block diagram illustrating the details of a digital processing system in which various aspects of the present disclosure are operative by execution of appropriate software instructions.

DETAILED DESCRIPTION OF EXAMPLE EMBODIMENTS
[0065] It is to be understood that the present disclosure is not limited in its application to the details of construction and the arrangement of components set forth in the following description or illustrated in the drawings. The present disclosure is capable of other embodiments and of being practiced or of being carried out in various ways. Also, it is to be understood that the phraseology and terminology used herein is for the purpose of description and should not be regarded as limiting.

[0066] The use of “including”, “comprising” or “having” and variations thereof herein is meant to encompass the items listed thereafter and equivalents thereof as well as additional items. The terms “a” and “an” herein do not denote a limitation of quantity, but rather denote the presence of at least one of the referenced items. Further, the use of terms “first”, “second”, and “third”, and so forth, herein do not denote any order, quantity, or importance, but rather are used to distinguish one element from another.

[0067] Referring to FIG. 1A, a diagram 100a depicts a schematic representation of a system for assisting users by an intelligent assistive robot, in accordance with one or more exemplary embodiments. The diagram 100a includes an intelligent assistive robot 101. The intelligent assistive robot 101 includes edge intelligence 103, digital processors and controllers 105, an operating system 109, sensors 111, peripherals 113, a power source 115, and communication ports 117. The edge intelligence 103 includes embedded AI for vitals, navigation and algorithms, semantics and conversational AI, computer vision algorithms, and data processing and analytics. The digital processors and controllers 105 include a CPU, primary memory, secondary memory, a microcontroller, and an input interface. The operating system 109 includes Android, an RTOS, and Robot OS. The sensors 111 include mapping sensors, navigation peripherals, motion sensors, and vision sensors. The peripherals 113 include vital monitors, multimedia peripherals, a pill dispenser, and a storage compartment. The cloud server 119 includes artificial intelligence, a server, API integrations, and a database.

[0068] The intelligent assistive robot 101 may be configured to capture the voice-based interaction through the peripherals (such as a microphone), which are controlled by the digital controllers 105. The captured interaction data is sent to the intelligent semantics and conversational AI engines embedded in the edge intelligence 103, which run on the operating systems 109 installed on the digital processor 105. These intelligent semantics and conversational AI engines embedded in the edge intelligence 103 may be configured to derive the context and meaning of this voice interaction and respond to it through execution of a task or a reply in voice conversation. The task or activity may include, but is not limited to, vital monitoring, teleconsultation, attending the doorbell, replying to somebody in the home, attending incoming calls, playing games with users, providing hints, pushing reminders and notifications to users, playing activity videos to help the user, starting proactive conversations, and so forth.

[0069] The task execution may include enabling the intelligent assistive robot 101 to move or to perform internal processing and respond through the voice-based interaction. For navigation, the floor plan is downloaded from the cloud 119, and the navigation stack (as shown in FIG. 1B) embedded in the edge intelligence 103 plans the movement of the robot within the floor map. The movement of the intelligent assistive robot 101 is executed using the navigation peripherals 113 through the controller 105. The movement of the intelligent assistive robot 101 is dynamically adapted based on the obstacles in the path identified by the navigation stack 114e (as shown in FIG. 1B) located in the edge intelligence 103 and by analysing the data obtained from the mapping and vision sensors 111. The intelligent assistive robot 101 is configured to load a static map from the cloud server 119 as per the user interaction, and the navigation stack 114e initializes path parameters as per the static map. The path parameters may include navigation parameters, robot parameters, optimization parameters, obstacle parameters, planning parameters, obstacle distance, obstacle pose, obstacle radius, turning radius, dynamic obstacle, motion, orientation, poses, obstacle heading threshold, and so forth.
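By way of a non-limiting illustration, the path parameters listed above may be initialized as a simple key-value structure in the style of a local-planner configuration. The parameter names and values below are representative placeholders and are not specified by this disclosure:

    # Hypothetical initialization of path parameters for the navigation stack
    # (names and values are illustrative, not the robot's actual configuration).
    path_parameters = {
        "max_vel_x": 0.4,                   # linear velocity limit (m/s)
        "max_vel_theta": 0.8,               # angular velocity limit (rad/s)
        "acc_lim_x": 0.3,                   # linear acceleration limit (m/s^2)
        "min_obstacle_dist": 0.25,          # minimum clearance from obstacles (m)
        "inflation_dist": 0.5,              # extra buffer kept around obstacles (m)
        "turning_radius": 0.0,              # 0 allows turning in place
        "obstacle_heading_threshold": 0.45, # ignore obstacles behind the robot
        "include_dynamic_obstacles": True,  # also track moving obstacles
    }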

[0070] The internal processing activities include utilizing multimedia to play videos, songs, infographics, and so forth. The intelligent assistive robot 101 also captures vitals data of the user using the vital monitors and transmits this data to the cloud 119 for storage and for analysis of abnormalities by the AI engines. The artificial intelligence embedded in the cloud 119 works on the collected data to find any abnormalities instantly and raise emergency responses if needed. The motion sensor and vision sensor data is analysed by the computer vision algorithms through the processor 105 and helps in detecting the user. The pill dispenser dispenses the medication as per the prescription and checks whether the user has consumed the medication with the computer vision algorithms through the vision sensors. The data captured by the intelligent assistive robot 101 is transferred to the remote server/cloud server 119 through Wi-Fi/GSM and other modes of wireless transmission.
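As a minimal sketch of the abnormality screening described above, a simple threshold check may be performed on captured vitals before or alongside the cloud-based AI analysis. The ranges below are illustrative placeholders only and do not represent clinical thresholds used by the robot:

    # Illustrative vitals screening (placeholder ranges, not clinical values).
    NORMAL_RANGES = {
        "spo2_percent": (94, 100),
        "pulse_bpm": (60, 100),
        "body_temp_c": (36.1, 37.5),
        "systolic_mmhg": (90, 140),
    }

    def flag_abnormal_vitals(readings):
        """Return the subset of readings that fall outside their normal range."""
        abnormal = {}
        for name, value in readings.items():
            low, high = NORMAL_RANGES.get(name, (float("-inf"), float("inf")))
            if not low <= value <= high:
                abnormal[name] = value
        return abnormal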

[0071] Referring to FIG. 1B, another exemplary diagram 100b depicts a schematic representation of a system for assisting users by an intelligent assistive robot, in accordance with one or more exemplary embodiments. The system 100b includes the intelligent assistive robot 101. The intelligent assistive robot 101 may include a vitals monitoring subsystem 102. The vitals monitoring subsystem 102 includes an automatic pill dispensing and reminder unit 104a, a heart rate and blood oxygen monitoring unit 104b, an infrared thermometer 104c, a glucometer 104d, a blood pressure monitoring unit 104e, a digital stethoscope 104f, an inspection camera 104g, and an electrocardiogram capturing device 104h.

[0072] The intelligent assistive robot 101 includes a processing device 106; a communication subsystem 108 comprising a display system 110a, an inbuilt camera 110b, an audio system 110c, and a wireless connectivity module 110d; and a navigation subsystem 112 comprising navigation peripherals 114a, mapping sensors 114b, a power system 114c, a robot operating system 114d, and a navigation stack 114e. The intelligent assistive robot 101 further includes an interactive voice response system 116, an internal memory 118, and an SOS button 120. The navigation peripherals 114a include a motor 122a, a driver 122b, an encoder 122c, and wheels 122d. The mapping sensors 114b may include a depth sensor 124a, a cliff sensor 124b, and a LiDAR sensor 124c. The navigation stack 114e includes a localization engine 115a, a global planner engine 115b, and a local planner engine 115c. The audio system 110c includes a high-resolution microphone 111a and a high-definition speaker 111b.

[0073] The heart rate and blood oxygen monitoring unit 104b includes an integrated pulse oximeter 126a and a heart rate monitoring sensor 126b. The blood pressure monitoring unit 104e includes a non-invasive sphygmomanometer 126c. Further, the system 100b may include a network 128, a first computing device 130, a second computing device 132, and a cloud server 119.

[0074] The first computing device 130 and the second computing device 132 may be connected to one or more computing devices via the network 128. The computing devices 130, 132 may include, but are not limited to, a personal digital assistant, smartphones, personal computers, a mobile station, computing tablets, a handheld device, an internet enabled calling device, an internet enabled calling software, a telephone, a mobile phone, a digital processing system, and so forth.

[0075] The network 128 may include, but is not limited to, an Internet of things (IoT network devices), an Ethernet, a wireless local area network (WLAN), or a wide area network (WAN), a Bluetooth low energy network, a ZigBee network, a WIFI communication network e.g., the wireless high speed internet, or a combination of networks, a cellular service such as a 4G (e.g., LTE, mobile WiMAX) or 5G cellular data service, a RFID module, a NFC module, wired cables, such as the world-wide-web based Internet, or other types of networks that may include Transport Control Protocol/Internet Protocol (TCP/IP) or device addresses (e.g., network-based MAC addresses, or those provided in a proprietary networking protocol, such as Modbus TCP, or by using appropriate data feeds to obtain data from various web services, including retrieving XML data from an HTTP address, then traversing the XML for a particular node), and so forth, without limiting the scope of the present disclosure.

[0076] Although only the first computing device 130 and the second computing device 132 are shown in FIG. 1B, an embodiment of the system 100b may support any number of computing devices. The first computing device 130 may be operated by the user. The user may include, but is not limited to, an individual, a client, an operator, a patient, old age people, elderly people, aged people, senior citizens, and so forth. The second computing device 132 may be operated by the doctors and/or the family members. The doctor and/or the family members may include, but are not limited to, a doctor, a physician, a surgeon, a specialist, a consultant, neighbours, relatives, friends, and so forth. The first computing device 130 and the second computing device 132 supported by the system 100b are realized as computer-implemented or computer-based devices having the hardware or firmware, software, and/or processing logic needed to carry out the computer-implemented methodologies described in more detail herein.

[0077] The first computing device 130 and the second computing device 132 may include the assistance module 136. The term “module” is used broadly herein and refers generally to a program resident in the memory of the computing devices 130 and 132. The assistance module 136 may be downloaded from the cloud server 119. For example, the assistance module 136 may be any suitable application downloaded from GOOGLE PLAY® (for Google Android devices), Apple Inc.'s APP STORE® (for Apple devices), or any other suitable database. In some embodiments, the assistance module 136 may be software, firmware, or hardware that is integrated into the computing devices 130, 132. The assistance module 136, which is accessed as mobile applications, web applications, or software that offers the functionality of accessing mobile applications and viewing/processing of interactive pages, for example, is implemented in the first and second computing devices 130, 132, as will be apparent to one skilled in the relevant arts by reading the disclosure provided herein. The assistance module 136 may be configured to enable the doctor and the family members to track the health history/health vitals of the user. The assistance module 136 may be configured to provide assistance and services to the user. The assistance module 136 may be configured to provide various health care services and customized offerings based on the user-generated health data. The health care services may include, but are not limited to, on-request doctor/specialist video consultation, prescription refilling, counselling services, on-demand diagnostic tests, an artificial intelligence-based clinician decision support system to identify abnormalities from health data and connect to a clinician automatically or with user consent based on urgency, emergency response services, delivery of nursing services, and so forth. The intelligent assistive robot 101 may be trained about various locations in the house using the assistance module 136 on the first and second computing devices 130, 132.

[0078] The assistance module 136 may be configured to enable the user to perform the human-robot interaction. This interaction may be through voice interaction, through the assistance module 136, or through the touch interface on the robot 101. The user inputs from the voice-based interaction, the assistance module 136, or the touch interface may be automatically converted to ROS commands for the robot to execute a task using state machines. The assistance module 136 consists of a visualization of the map along with the robot's position represented by a marker. The robot 101 may also be controlled from the computing devices 130/132 using the assistance module 136 by giving the robot goals on the map. The assistance module 136 consists of a dynamic map representing the static obstacles in the environment, and the robot 101 may get the destination points to reach a goal by the user clicking on the part of the map that the user wants the robot to go to. The assistance module 136 may also provide the status of the robot navigation along with buttons to save the robot position and go-to buttons to move the robot to the previously saved positions. Multiple locations can be stored in a house. The user can provide voice-based interaction to order the intelligent assistive robot 101 to reach the stored locations or to reach the user or another person. The intelligent assistive robot 101 is at the docking position and is equipped to move from the home position to various locations of the home, such as the TV area, bedroom, study table, dressing area, kitchen area, and so forth.
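By way of a non-limiting sketch, converting a saved location tag or a map click into a ROS navigation goal may be done with the standard move_base action interface; the node name, saved coordinates, and frame used below are assumptions for illustration rather than details specified in this disclosure:

    # Hypothetical sketch: sending a saved location to the navigation stack as a
    # ROS move_base goal (names and coordinates below are illustrative only).
    import rospy
    import actionlib
    from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

    SAVED_LOCATIONS = {"kitchen": (3.2, 1.5), "bedroom": (-1.0, 4.7)}  # example tags

    def go_to(tag):
        x, y = SAVED_LOCATIONS[tag]
        client = actionlib.SimpleActionClient("move_base", MoveBaseAction)
        client.wait_for_server()
        goal = MoveBaseGoal()
        goal.target_pose.header.frame_id = "map"       # goal expressed in the map frame
        goal.target_pose.header.stamp = rospy.Time.now()
        goal.target_pose.pose.position.x = x
        goal.target_pose.pose.position.y = y
        goal.target_pose.pose.orientation.w = 1.0      # keep the current heading
        client.send_goal(goal)
        client.wait_for_result()
        return client.get_state()

    if __name__ == "__main__":
        rospy.init_node("assistance_module_goal_sender")
        go_to("kitchen")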

[0079] The intelligent assistive robot 101 may include the vitals monitoring subsystem 102 embedded within its body. The vitals monitoring subsystem 102 may include the automatic pill dispensing and reminder unit 104a, the heart rate and blood oxygen monitoring unit 104b, the infrared thermometer 104c, the glucometer 104d, the blood pressure monitoring unit 104e, the digital stethoscope 104f, and the inspection camera 104g, and these subsystems may be configured to capture health vitals of the user and send the information (vital data) to the processing device 106. The health vitals may include, but are not limited to, random glucose level, blood pressure, blood oxygen level, ECG, body temperature, SpO2, pulse, and so forth. The processing device 106 may include, but is not limited to, a microcontroller (for example, ARM 7 or ARM 11), a Raspberry Pi, a microprocessor, a digital signal processor, a microcomputer, a field programmable gate array, a programmable logic device, a state machine or logic circuitry, an Arduino board, and the like. The processing device 106 may be referred to as a base controller or controller in the following description. The vitals monitoring subsystem 102 may be configured to capture the health vitals of the user based on the user request. Based on the health vitals data, the intelligent assistive robot 101 may be configured to detect abnormalities and predict diseases and infections using artificial intelligence-based algorithms. The intelligent assistive robot 101 may include both linear and angular motion with acceleration capabilities.

[0080] The automatic pill dispensing and reminder unit 104a may be configured to dispense the required medicines to the user as per the preloaded prescription, based on the timings and dosage suggested to the user by the doctor. The automatic pill dispensing and reminder unit 104a may be configured to order the prescription to refill the medicines based on the available stock. The automatic pill dispensing and reminder unit 104a may be configured to assist the user with voice-based reminders to consume the pills to ensure compliance with the prescription.

[0081] The heart rate and blood oxygen monitoring unit 104b may include an integrated pulse oximeter 126a and a heart rate monitoring sensor 126b. The heart rate and blood oxygen monitoring unit 104b may be configured to combine two LEDs, a photodetector, optimized optics, and low-noise analog signal processing to detect pulse oximetry and heart-rate signals using a spectrophotometry technique. The heart rate and blood oxygen monitoring unit 104b may be configured to detect blood oxygen using a touch-based sensor that works on the principle of photometry. The infrared thermometer 104c may be configured to detect the body temperature of the user and is designed for non-contact temperature sensing. The infrared thermometer 104c includes an internal high-resolution ADC and a powerful DSP configured to contribute high accuracy and resolution.
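As a conceptual illustration of the spectrophotometry principle described above, pulse oximeters commonly estimate oxygen saturation from the ratio of pulsatile to baseline absorption at the red and infrared LEDs; the calibration constants below are a widely used empirical approximation, not the actual calibration of the monitoring unit 104b:

    # Conceptual ratio-of-ratios SpO2 estimate (illustrative calibration only).
    def estimate_spo2(red_ac, red_dc, ir_ac, ir_dc):
        # Pulsatile (AC) over baseline (DC) absorption at each LED wavelength.
        r = (red_ac / red_dc) / (ir_ac / ir_dc)
        spo2 = 110.0 - 25.0 * r      # common empirical linear approximation
        return max(0.0, min(100.0, spo2))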

[0082] The glucometer 104d is a medical device for determining the blood glucose levels of the body. An electrochemical strip may be provided to the user. The user can take a small drop of blood from their finger and apply it to the strip before inserting it into the glucometer 104d. Once the strip is inserted, the glucometer 104d may calculate the glucose levels in 15 to 20 seconds and display them on the display system 110a or announce them using the audio system 110c through the processing device 106.

[0083] The blood pressure monitoring unit 104e may be embedded with an intermittent non-invasive sphygmomanometer 126c and touch-based cuffless spectrophotometry configured to identify the systolic and diastolic blood pressure values of the user. The intermittent non-invasive sphygmomanometer 126c may capture the blood pressure values in two ways: a photometric method or an oscillatory method. For example, in the oscillatory method, the user is asked to put his wrist inside the cuff and air is pumped into the cuff to measure the systolic and diastolic values, whereas in the photometric method the user is asked to put his finger on the non-invasive sphygmomanometer 126c to obtain the systolic and diastolic values.

[0084] The digital stethoscope 104f may be configured to capture and convert the heart sounds of the user into a digital format. The user must take the digital stethoscope 104f out of the slot provided in the intelligent assistive robot 101 and place it on various locations on the chest to capture the heart sounds. The digital stethoscope 104f may be positioned in the mid-region of the intelligent assistive robot 101. The intelligent assistive robot 101 may include a press-and-release functionality, where the user has to press a round-shaped button to remove the digital stethoscope 104f and press it again to place the digital stethoscope 104f back. There is a storage cabinet on the back of the intelligent assistive robot 101 to hold the digital stethoscope 104f.

[0085] The inspection camera 104g is a supplementary camera in the intelligent assistive robot 101 with full high-definition clarity and zooming up to 1000x to capture and stream high-resolution video of the user's body parts and organs. The inspection camera 104g may be a portable, handheld full HD inspection camera for full HD multiple zooms of the user's skin, eyes, throat, and so forth. The inspection camera 104g may be configured to detect and recognize the motion of the user or an object coming near the intelligent assistive robot 101. The inspection camera 104g may be used during the doctor consultation to check for any ENT or skin-related issues and so forth.

[0086] The intelligent assistive robot 101 may be configured to interact with the user using the touchscreen display system 110a, which captures the user input and transmits it to the processing device 106. The display system 110a may also help in displaying information (health vitals) received from the processing device 106 and the inbuilt camera 110b, and in playing videos. The display system 110a may be used for video conferencing with the doctor, family, friends, and so forth with the help of the inbuilt camera 110b. The inbuilt camera 110b and/or the inspection camera 104g may be configured to recognize the object and user using the edge computer vision algorithms. The audio system 110c may be configured to communicate with the user with the help of the high-resolution microphone 111a and the high-definition speaker 111b.

[0087] The intelligent assistive robot 101 may be used to care for and monitor the health of the user with proactive conversations. The interactive voice response system 116 may be configured to provide seamless teleconsultation with the doctor, either by voice-based interaction or when an abnormality is detected. The interactive voice response system 116 may be configured to generate voice-based reminders to the user. The interactive voice response system 116 may be configured to support multiple languages. The multiple languages may include, but are not limited to, user-known languages, English, Hindi, Telugu, French, and so forth. The interactive voice response system 116 may be operated by coordinating with artificial intelligence-based natural language processing algorithms, which interpret and process the user's voice-based interaction. The interactive voice response system 116 may also be configured to provide replies to the user using an algorithm-generated voice through the speaker 111b.

[0088] The intelligent assistive robot 101 may be configured to initiate the SOS operations automatically through a button or a voice-based interaction, or by detecting the user's fall through the user's posture and facial expression recognition. The interactive voice response system 116 may be configured to engage the user by initiating conversations based on their interest and mood. The interactive voice response system 116 may use artificial intelligence-based semantics and proactive conversational artificial intelligence to initiate the conversations with the user. The intelligent assistive robot 101 may be configured to assess the user's mood based on the facial expressions and responses to the conversation.

[0089] The intelligent assistive robot 101 may include the SOS button 120 configured to enable the user to connect to the emergency services. The intelligent assistive robot 101 may be configured to interact with other IoT devices. The IoT devices may include, but are not limited to, a fall detector device, a sleep monitoring device, and so forth. The internal memory 118 may be configured to store the geo-locations, floor plans of the house (environment), and the objects' measurements in both two dimensions and three dimensions. The depth sensor 124a and cliff sensor 124b may be configured to detect glass or transparent obstacles in the pathway. The cliff sensor 124b may be configured to sense negative obstacles with the help of high-precision IR sensors and to prevent any downhill motion of the intelligent assistive robot 101 when it is moving towards a goal. The depth sensor 124a may be configured to assess depth and helps in obstacle detection and mapping.
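A minimal sketch of the cliff (negative obstacle) check follows, assuming a downward-facing IR range reading in metres; the threshold and expected floor distance are illustrative placeholders rather than the robot's actual values:

    # Hypothetical cliff check from a downward-facing IR range reading (metres).
    CLIFF_THRESHOLD_M = 0.08      # assumed maximum drop the wheels can tolerate
    EXPECTED_FLOOR_M = 0.03       # assumed nominal sensor-to-floor distance

    def is_cliff(ir_range_m):
        # A reading much longer than the nominal floor distance means the floor
        # falls away ahead, so the navigation stack should stop or reroute.
        return (ir_range_m - EXPECTED_FLOOR_M) > CLIFF_THRESHOLD_M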

[0090] The intelligent assistive robot 101 may be configured to provide biological, social, and psychological automated assessments through the interactive voice response system 116. The intelligent assistive robot 101 may be configured to detect the user's motion and posture through the LiDAR and three-dimensional depth camera technologies. The intelligent assistive robot 101 may be configured to drive near the user, dispense medications stored in the automatic pill dispensing and reminder unit 104a, and generate the voice-based reminders to the user for taking pills. The intelligent assistive robot 101 may be configured to trigger the activity reminders based on the doctors' recommendations or through assessing the user's choices. The intelligent assistive robot 101 may be configured to perform the automated diagnostic tests on the user. The intelligent assistive robot 101 may be configured to send notifications to the first computing device 130 and the second computing device 132. The notifications may include alerts, SMS, text messages, e-mails, and so forth. The notifications may include the health vital information, user health history, task reminders, activity reminders, and so forth.

[0091] The intelligent assistive robot 101 may be configured to calculate the cognitive levels of the user based on the user's performance evaluation in the gaming session. The intelligent assistive robot 101 may include a routine learning engine which learns the user's routine and behaviour patterns on a daily basis. The intelligent assistive robot 101 may be configured to collect highly valuable social data points and integrate them with the periodic healthcare data to provide a holistic way to augment the quality of life. The intelligent assistive robot 101 may be configured to provide interest-based engagement groups and chat rooms using artificial intelligence-based automated moderators.

[0092] The wireless connectivity module 110d may be configured to establish the communication between the intelligent assistive robot 101 and the first and second computing devices 130 and 132. The vitals monitoring subsystem 102 may include the electrocardiogram capturing device 104h configured to capture a single-lead electrocardiogram from the tips of the fingers of both hands of the user. This is a single-lead ECG captured using metal electrodes placed on the system. The electrocardiogram capturing device 104h may be configured to monitor small electrical changes on the skin of the patient's body (user's body) arising from the activities of the human heart. This simple and non-invasive measurement easily indicates a variety of heart diseases. The inbuilt camera 110b may use the LiDAR sensor 124c and three-dimensional depth camera (depth sensor) technologies to detect the user's motion and posture.

[0093] Referring to FIG. 1C, an example diagram 100c depicts a schematic representation of the intelligent assistive robot, in accordance with one or more exemplary embodiments. The diagram 100c includes the vital monitoring subsystem 102, the wheels 122d, the inbuilt camera 110b, and the navigation subsystem 112. The navigation subsystem 112 may include indoor mapping technology to customize the robot's free-movement regions, restricted regions, and boundaries. The navigation subsystem 112 may be configured to collect the pathway information from the LiDAR sensor 124c and the depth sensor 124a and transform it into a floor map. The wheels 122d may be shape-shifting wheels with linear and angular motion and acceleration capabilities. The intelligent assistive robot may be capable of descending or climbing a staircase using cliff detectors and the shape-shifting wheels 122d. The wheels 122d may convert the angular motion of the motor to the linear motion of the intelligent assistive robot 101.

[0094] The intelligent assistive robot 101 may directly connect through WIFI or LTE as per the user's network connectivity. The intelligent assistive robot 101 may include the assistance module 136, which offers various health care services and customized offerings based on the user-generated health data (health vitals). The health care services may include, but are not limited to, on-request doctor/specialist video consultation, prescription refilling, counselling services, on-demand diagnostic tests, an artificial intelligence-based clinician decision support system to identify abnormalities from health data and connect to a clinician automatically or with user consent based on urgency, emergency response services, delivery of nursing services, and so forth.

[0095] The intelligent assistive robot 101 may include the navigation subsystem 112. The intelligent assistive robot 101 may avoid static and dynamic obstacles using the navigation subsystem 112. The intelligent assistive robot 101 may automatically calculate a precise path to the target using the local and global planner methodology. The intelligent assistive robot 101 may include an autonomous indoor navigation capability with path planning and navigation. The navigation subsystem 112 works on the global planner and local planner methodology: the global planner works on Dijkstra's algorithm, whereas the local planner works on the timed elastic band principle. The global planner and the local planner utilize cost map principles to store the obstacles. The intelligent assistive robot 101 may locate the target in any location using name tags assigned by the user, and identify people or objects and their positions using computer vision algorithms and proximity analysis techniques.
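For illustration only, the global planning step may be understood as Dijkstra's algorithm run over an occupancy-grid representation of the floor map; the grid encoding and neighbourhood used below are assumptions, not details fixed by this disclosure:

    # Illustrative Dijkstra global path search on a 2D occupancy grid
    # (0 = free cell, 1 = occupied cell); 4-connected neighbourhood assumed.
    import heapq

    def dijkstra_global_path(grid, start, goal):
        rows, cols = len(grid), len(grid[0])
        dist, parent = {start: 0.0}, {}
        queue = [(0.0, start)]
        while queue:
            d, cell = heapq.heappop(queue)
            if cell == goal:
                break
            if d > dist.get(cell, float("inf")):
                continue                      # stale queue entry
            r, c = cell
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                    nd = d + 1.0
                    if nd < dist.get((nr, nc), float("inf")):
                        dist[(nr, nc)] = nd
                        parent[(nr, nc)] = cell
                        heapq.heappush(queue, (nd, (nr, nc)))
        if goal not in parent and goal != start:
            return []                         # no reachable path
        path, cell = [goal], goal
        while cell != start:
            cell = parent[cell]
            path.append(cell)
        return list(reversed(path))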

[0096] The intelligent assistive robot 101 may be trained about the various locations in the house over voice- or touch-based interactions. The intelligent assistive robot 101 may be equipped with a multi-core high-speed computing processor for processing the data from the sensors and executing the navigation plan. The mapping sensors 114b include the depth sensor 124a, the cliff sensor 124b, and the LiDAR sensor 124c, which may help in three-dimensional and two-dimensional mapping of the area. The intelligent assistive robot 101 may be configured to detect a cliff or staircase (obstacles) automatically and is capable of rerouting the direction of the wheels 122d to avoid the obstacles. The navigation subsystem 112 may be configured to perform auto mapping to capture a static map. The auto mapping may include exploratory algorithms. The navigation subsystem 112 may be configured to navigate the intelligent assistive robot 101 to move towards the charging dock, and the intelligent assistive robot 101 may connect to the charging dock automatically.

[0097] The intelligent assistive robot 101 may store the user's health history in the internal memory 118. The intelligent assistive robot 101 may include an internet of things enabled elderly finder with fall detection and connect to the emergency services in the event of emergencies. The intelligent assistive robot 101 may include user detection, fall detection, or shadowing capabilities configured to support users as they walk and keep track of their location to help them navigate to a place if required.

[0098] The intelligent assistive robot 101 may include an audio and video player and entertainment such as games and so forth. The intelligent assistive robot 101 may include augmented reality for gaming, exercise, and activities for the elderly people. The intelligent assistive robot 101 may be configured to provide a virtual reality module for an immersive social engagement experience. The intelligent assistive robot 101 may integrate with a sleep monitor device (a device placed under the user's bed that wirelessly connects to the intelligent assistive robot 101) and a fall detector device, for example a locket worn around the neck which will initiate an alarm in case of a fall and also has an emergency alarm button integrated into it. The intelligent assistive robot 101 may be configured to perform automated task execution based on the user's routine, choices, and doctor recommendation. The intelligent assistive robot 101 may be configured to generate reminders, dispense pills, and engage activities based on the user's routine, choices, and doctor recommendation.

[0099] The mapping sensors 114b include the depth sensor 124a, the cliff sensor 124b, and the LiDAR sensor 124c. The depth sensor 124a, the cliff sensor 124b, and the LiDAR sensor 124c may be configured to help in 3D and 2D mapping of the area. The LiDAR sensor 124c may use a laser beam to identify the distance from the objects in and around its path. The LiDAR sensor 124c may publish the identified distance data to the robot operating system 114d. The depth sensor 124a may be configured to measure depth and helps in obstacle detection and mapping. The cliff sensor 124b may be configured to sense negative obstacles with the help of high-precision IR sensors and prevent downhill motion of the intelligent assistive robot 101 when the intelligent assistive robot 101 is moving towards a goal. The power system 114c may be configured to supply the power for the intelligent assistive robot 101, which includes the battery charging and battery resources. The robot operating system 114d includes an instruction set which helps in collecting the information from the LiDAR sensor 124c and depth sensor 124a and transforms that into a floor map. The embedded firmware includes a set of instructions which help in programming the controller 107 and in capturing and processing the data from the mapping sensors 114b. It also helps in controlling the navigation peripherals 114a. The navigation stack 114e includes a localization engine 115a, a global planner engine 115b, and a local planner engine 115c. The localization engine 115a may maintain the current local position mapping of the intelligent assistive robot 101. The global planner engine 115b may be configured to obtain the target position of the user and deliver it to the local planner engine 115c. The local planner engine 115c may be configured to guide the robot to move depending upon the obstacles in the path.
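As noted in the summary, the local planner engine 115c optimizes only the portion of the global path that lies within the robot's sensor range. A minimal sketch of splitting the global path into such segments follows; the sensor range value and waypoint format are assumptions for illustration:

    # Illustrative split of a global path (list of (x, y) waypoints in metres)
    # into local segments whose length stays within an assumed sensor range.
    import math

    def split_path_into_segments(path, sensor_range_m=4.0):
        segments, current, length = [], [path[0]], 0.0
        for prev, nxt in zip(path, path[1:]):
            step = math.hypot(nxt[0] - prev[0], nxt[1] - prev[1])
            if length + step > sensor_range_m and len(current) > 1:
                segments.append(current)      # close the current segment
                current, length = [prev], 0.0
            current.append(nxt)
            length += step
        segments.append(current)
        return segments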

[00100] The user or the robot 101 can initiate the social engagement feature on the first and second computing devices and/or through the voice-based interaction. The social engagement includes virtual chat rooms, online discussion forums, talks from KOLs and influencers, blogging pages, gaming, exercises, hobbies, and more. If the social engagement feature is initiated by the robot 101, it is termed a recommendation. This recommendation is given based on the user's prior choices or a physician/psychiatrist prescription. The user can receive this recommendation as a voice-based reminder. The user can reject or accept the recommendation through voice or the touch-based UI. If the social engagement is initiated by the user, it is termed a user-initiated activity, wherein the user gives a random instruction to the robot and the robot utilizes semantics AI to decode the intent and meaning of the user's instruction and accordingly tries to execute the task. If the robot cannot properly understand the intent/meaning of the user's instruction, it can ask the user to repeat the instruction or convey to the user through the audio system that it cannot perform the instructed task.

[00101] The intelligent assistive robot understands the daily schedule of the user. It identifies the user's free time to start a conversation with the user. The robot locates the user in the house, assesses the current activity through computer vision, and tries to bring in a topic of conversation based on the user's prior choices. These conversations may be in terms of conveying related news or asking a trivia question. After this ice-breaking session, the intelligent assistive robot 101 analyses the response of the user and adds feedback to an adaptive user profile which contains the user's choices and behavioural patterns. It tries to collect more information from the user based on their life story to analyse mood and cognitive ability. These conversations are timed so they do not interrupt other important daily activities.

[00102] Referring to FIG. 1D and FIG. 1E, example diagrams 100d and 100e depict a schematic representation of the navigation peripherals 114a, in accordance with one or more exemplary embodiments. The diagrams 100d and 100e include the motor 122a, a driver 122b, an encoder 122c (not shown), wheels 122d, and a controller 107. The navigation peripherals 114a and the navigation subsystem 112 help in the movement of the intelligent assistive robot 101. The motor 122a may be a DC motor which helps in rotating the wheels 122d for the navigation of the intelligent assistive robot 101. The navigation peripherals 114a are driven by the driver 122b controlled by the encoder 122c and the controller 107. The driver 122b and encoder 122c may be configured to drive the motor 122a through the controller 107 and help in controlling the speed and accuracy of the motor 122a rotation. The encoder 122c may act as an odometer and issue odometry (odom) messages. The wheels 122d may be configured to translate the angular motion of the motor 122a to the linear motion of the intelligent assistive robot 101.
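For illustration of how the encoder 122c can serve as an odometer, a minimal differential-drive odometry update is sketched below; the wheel radius, track width, and encoder resolution are assumed placeholder values rather than the robot's actual mechanical constants:

    # Hypothetical differential-drive odometry from incremental encoder ticks.
    import math

    WHEEL_RADIUS_M = 0.05     # assumed wheel radius
    TRACK_WIDTH_M = 0.30      # assumed distance between the two wheels
    TICKS_PER_REV = 1024      # assumed encoder resolution

    def update_odometry(x, y, theta, left_ticks, right_ticks):
        # Ticks accumulated since the last update -> distance each wheel travelled.
        left_dist = 2 * math.pi * WHEEL_RADIUS_M * left_ticks / TICKS_PER_REV
        right_dist = 2 * math.pi * WHEEL_RADIUS_M * right_ticks / TICKS_PER_REV
        linear = (left_dist + right_dist) / 2.0
        angular = (right_dist - left_dist) / TRACK_WIDTH_M
        # Integrate the pose; this is the "odom" information the navigation
        # stack consumes alongside the LiDAR and depth data.
        x += linear * math.cos(theta + angular / 2.0)
        y += linear * math.sin(theta + angular / 2.0)
        theta += angular
        return x, y, theta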

[00103] Referring to FIG. 2A, an example diagram 200a depicts a schematic representation of the vitals monitoring subsystem 102 and the communication subsystem 108, in accordance with one or more exemplary embodiments. The diagram 200a includes the inbuilt camera 110b, the display system 110a, and the vitals monitoring subsystem 102. The display system 110a may be used for video conferencing with the doctor, family, friends, and so forth with the help of the inbuilt camera 110b. The inbuilt camera 110b may also be configured to recognize objects and people (for example, the user) using the edge computer vision algorithms. The vitals monitoring subsystem 102 may be configured to capture the health vitals of the user based on the user request. Based on the health vitals data, the intelligent assistive robot 101 may be configured to detect abnormalities and predict diseases and infections using artificial intelligence-based algorithms.

[00104] Referring to FIG. 2B, an example diagram 200b depicts a schematic representation of the communication subsystem 108, in accordance with one or more exemplary embodiments. The diagram 200b includes the audio system 110c and the communication subsystem 108. The audio system 110c includes the speaker 111b and the microphone 111a. The communication subsystem 108 may be used for video conferencing with the doctor, family, and friends through the inbuilt camera 110b. The communication subsystem 108 may be configured to receive the health data and the sensor data from the vital monitoring subsystem and the navigation subsystem, thereby displaying them on the display system 110a.

[00105] Referring to FIG. 3, a flowchart 300 depicts an exemplary method for executing tasks by the intelligent assistive robot, in accordance with one or more exemplary embodiments. As an option, the method 300 may be carried out in the context of the details of FIG. 1A, FIG. 1B, FIG. 1C, FIG. 1D, FIG. 1E, FIG. 2A, and FIG. 2B. However, the method 300 may be carried out in any desired environment. Further, the aforementioned definitions apply equally to the description below.

[00106] The exemplary method 300 commences at step 302, providing a voice-based interaction to the intelligent assistive robot by a user through a microphone. Thereafter at step 304, decoding the voice-based interaction by the intelligent assistive robot. Determining whether the voice-based interaction matches the predefined executable task list by the intelligent assistive robot, at step 306. If the answer at step 306 is No, the method continues at step 308, where the intelligent assistive robot notifies the user that no matches were found in the predefined executable task list using the voice-based response system. If the answer at step 306 is Yes, the method continues at step 310, executing the task by the intelligent assistive robot based on the decoded voice-based interaction. Determining whether the task requires the movement of the intelligent assistive robot, at step 312. If the answer at step 312 is Yes, the method continues at step 314, executing the task as per the floor plan and equipment installations of the user. Thereafter at step 316, executing the task dynamically by avoiding the obstacles. If the answer at step 312 is No, the method continues at step 318, connecting to the doctors through WIFI or LTE as per the user's network availability. Thereafter at step 320, enabling the doctor to capture the health vitals of the user by the intelligent assistive robot using the assistance module on the first computing device. Thereafter at step 322, requesting further interaction from the user by the intelligent assistive robot once the task is executed. Thereafter at step 324, executing the tasks by the intelligent assistive robot upon the user request, or else returning to its docking region. The user can request the intelligent assistive robot through at least one of: the touchscreen display system on the intelligent assistive robot; voice-based interaction using a microphone; and the assistance module embedded in the internal memory of the intelligent assistive robot, the first computing device, and the second computing device to execute the required task.
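A minimal sketch of the matching step (steps 304-310) follows; keyword matching against a dictionary of task handlers stands in for the state-machine and semantics AI described elsewhere, and the task names are hypothetical placeholders:

    # Hypothetical dispatch of a decoded voice request against an executable task list.
    def handle_request(decoded_text, task_table, notify):
        for keyword, task in task_table.items():
            if keyword in decoded_text.lower():
                return task()                 # step 310: execute the matched task
        # Step 308: no match found in the predefined executable task list.
        notify("Sorry, no matching task was found for that request.")
        return None

    task_table = {
        "blood pressure": lambda: "start_bp_measurement",
        "kitchen": lambda: "navigate_to_kitchen",
        "call doctor": lambda: "start_teleconsultation",
    }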

[00107] Referring to FIG. 4 is a flowchart 400 depicting an exemplary method for initiating the call with the doctor, in accordance with one or more exemplary embodiments. As an option, the method 400 is carried out in the context of the details of FIG. 1A, FIG. 1B, FIG. 1C, FIG. 1E, FIG. 2A and FIG. 2B, FIG. 3. However, the method 400 is carried out in any desired environment. Further, the aforementioned definitions are equally applied to the description below.

[00108] The exemplary method 400 commences at step 402, passing the voice-based interaction to the intelligent assistive robot for collecting vitals or to consult the doctor. Thereafter at step 404, providing instructions to the user by the intelligent assistive robot for the collection of health vitals. Thereafter at step 406, sending the health vitals data to the cloud server by the intelligent assistive robot. Thereafter at step 408, analysing the health vitals data of the user by using the clinical decision support system for identifying abnormalities. Determining whether the values of the health vitals are normal, at step 410. If the answer at step 410 is Yes, notifying the same to the user by the intelligent assistive robot and awaiting further instructions from the user on whether to consult the doctor, at step 412. If the answer at step 410 is No, the method continues at step 414, recommending the user to consult the doctor immediately or initiating the emergency response automatically if the values indicate an emergency. Determining whether the user accepts to consult the doctor, at step 416. If the answer at step 416 is Yes, initiating the call (video consultation) with the doctor by the intelligent assistive robot using the inspection camera, at step 418. If the answer at step 416 is No, the method continues at step 420, scheduling an appointment by the user based on the user convenience.
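
A minimal sketch of the abnormality check at step 410 is given below; the normal ranges and field names are assumed examples for illustration and are not the clinical decision support system disclosed here.

    # Illustrative abnormality check for step 410. Ranges are assumed examples only.
    NORMAL_RANGES = {
        "heart_rate_bpm": (60, 100),
        "spo2_percent": (94, 100),
        "temperature_c": (36.1, 37.5),
    }

    def find_abnormalities(vitals: dict) -> list:
        """Return the names of vitals that fall outside the assumed normal ranges."""
        abnormal = []
        for name, value in vitals.items():
            low, high = NORMAL_RANGES.get(name, (float("-inf"), float("inf")))
            if not (low <= value <= high):
                abnormal.append(name)
        return abnormal

    # Steps 412/414: an empty list means the values are normal and the robot
    # notifies the user; otherwise it recommends an immediate consultation.
    print(find_abnormalities({"heart_rate_bpm": 112, "spo2_percent": 97}))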

[00109] Referring to FIG. 5A is a flowchart 500a depicting an exemplary method for driving the intelligent assistive robot using navigation subsystem, in accordance with one or more exemplary embodiments. As an option, the method 500a is carried out in the context of the details of FIG. 1A, FIG. 1B, FIG. 1C, FIG. 1E, FIG. 2A, FIG. 2B, FIG. 3, and FIG. 4. However, the method 500a is carried out in any desired environment. Further, the aforementioned definitions are equally applied to the description below.

[00110] The method commences at step 502, where the Lidar sensor 124c uses a laser beam to identify the distance from the objects in and around its path, whereas the depth sensor, which may be a depth camera, uses an RGB camera to provide depth maps with the pattern and texture of the objects around it. These depth maps are published to the robot operating system (ROS) 114d. Thereafter at step 504, the ROS 114d with the navigation stack 114e processes the data from the Lidar sensor 124c and transforms it into a floor map. This map is used by the system to plan the precise path to navigate towards the target automatically. This map also sets the boundaries for operation of the device. It helps in detection of any static or dynamic obstruction in the path within the configured range. It changes the navigation path within the specified boundaries to avoid the obstruction and reach the target. Thereafter at step 506, the ROS 114d publishes information to the controller 107 to run the navigation peripherals 114a. Robot control commands are published from the ROS 114d and provide instructions to the driver 122b, which in turn executes the DC motion using the motors 122a. The controller 107 can collect data from the motors 122a and send feedback to the ROS 114d. Thereafter at step 508, the driver 122b receives the command from the controller 107 and delivers the required current to drive the motors 122a accordingly. The motors 122a provide the required torque to drive the chassis. The motors 122a are capable of providing a differential drive for the system.
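
Because the motors 122a provide a differential drive, the conversion of a commanded linear and angular velocity into left and right wheel speeds can be sketched as below; the wheel radius and track width are placeholder values assumed for illustration, not the dimensions of the disclosed chassis.

    import math

    # Assumed geometry of a differential-drive base (placeholder values).
    WHEEL_RADIUS_M = 0.05   # wheel radius in metres
    TRACK_WIDTH_M = 0.30    # distance between the two drive wheels in metres

    def cmd_vel_to_wheel_speeds(v_linear: float, w_angular: float):
        """Convert body velocities (m/s, rad/s) into left/right wheel speeds (rad/s)."""
        v_left = v_linear - (w_angular * TRACK_WIDTH_M / 2.0)
        v_right = v_linear + (w_angular * TRACK_WIDTH_M / 2.0)
        return v_left / WHEEL_RADIUS_M, v_right / WHEEL_RADIUS_M

    left_rad_s, right_rad_s = cmd_vel_to_wheel_speeds(0.2, 0.5)
    print(left_rad_s, right_rad_s)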

[00111] Referring to FIG. 5B is an example diagram 500b depicting a schematic representation of the robot operating system navigation stack, in accordance with one or more exemplary embodiments. The diagram 500b includes sensor sources 503, an encoder source 505, the base controller 107, a sensor transform 509, a cost map configuration 511, a localization 513, a G-Mapping package 515, a global planner 517, and a Timed Elastic Band local planner 519. The navigation stack 114e collects information from the odometer/encoder, the sensor streams, and the goal pose (target) and evaluates them to issue safe velocity commands to the mobile base (the robot).

[00112] The navigation stack 114e may be configured to use sensor information from the mapping sensors to avoid obstacles in the environment; the navigation stack assumes that these sensors are publishing either sensor_msgs/LaserScan or sensor_msgs/PointCloud messages over the ROS 114d. The sensor information may include distance information, depth information, route, path, odometry information, target location information, user location information, depth camera information, lidar information, cliff information, distance from the objects in and around its path, and so forth. The navigation stack 114e requires the encoder source information (odometry information) to be published using the sensor transform and nav_msgs/Odometry messages. The navigation stack 114e assumes that it can send velocity commands using geometry_msgs/Twist messages. The robot subscribes to a "cmd_vel" topic, which is used to move the intelligent assistive robot 101. "cmd_vel" consists of (vx, vy, vtheta) <==> (cmd_vel.linear.x, cmd_vel.linear.y, cmd_vel.angular.z) velocities, which are converted into motor commands by the base controller 107 and sent to the mobile base (the intelligent assistive robot). The intelligent assistive robot 101 must be publishing coordinate frame information using the sensor transform 509. It should receive sensor_msgs/LaserScan or sensor_msgs/PointCloud messages from the mapping sensors that are to be used with the navigation stack 114e. It will be publishing odometry information using both the sensor transform 509 and the nav_msgs/Odometry message.
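
A minimal rospy sketch of publishing the "cmd_vel" velocities described above is shown below; the node name, publishing rate and velocity values are illustrative assumptions, while the geometry_msgs/Twist fields follow the (vx, vy, vtheta) mapping stated in this paragraph.

    #!/usr/bin/env python
    # Minimal sketch: publishing geometry_msgs/Twist on the "cmd_vel" topic (ROS 1).
    import rospy
    from geometry_msgs.msg import Twist

    rospy.init_node("cmd_vel_example")            # illustrative node name
    pub = rospy.Publisher("cmd_vel", Twist, queue_size=10)
    rate = rospy.Rate(10)                         # 10 Hz, illustrative

    cmd = Twist()
    cmd.linear.x = 0.2    # vx (m/s)
    cmd.linear.y = 0.0    # vy (m/s), zero for a differential drive base
    cmd.angular.z = 0.3   # vtheta (rad/s)

    while not rospy.is_shutdown():
        pub.publish(cmd)  # the base controller converts this into motor commands
        rate.sleep()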

[00113] The navigation stack 114e uses two cost maps to store information about obstacles in the environment. One cost map configuration 511 is used for global planning, meaning creating long-term plans over the entire environment, and the other is used for local planning and obstacle avoidance. The cost map configuration has three sections: common configuration options, global configuration options, and local configuration options.
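
The three cost map configuration sections can be illustrated with a hedged example that loads typical parameters onto the ROS parameter server; in practice these are usually kept in YAML files, and the specific values below are assumptions, not the configuration used by the disclosed robot.

    # Illustrative cost map parameters (assumed values, not the disclosed tuning).
    import rospy

    costmap_common = {
        "obstacle_range": 2.5,          # metres at which obstacles are inserted
        "raytrace_range": 3.0,          # metres at which free space is cleared
        "footprint": [[-0.2, -0.2], [-0.2, 0.2], [0.2, 0.2], [0.2, -0.2]],
        "inflation_radius": 0.4,
    }
    global_costmap = {"global_frame": "map", "update_frequency": 1.0, "static_map": True}
    local_costmap = {"global_frame": "odom", "update_frequency": 5.0,
                     "rolling_window": True, "width": 4.0, "height": 4.0}

    rospy.init_node("costmap_param_loader")      # illustrative node name
    for ns, params in [("move_base/global_costmap", {**costmap_common, **global_costmap}),
                       ("move_base/local_costmap", {**costmap_common, **local_costmap})]:
        for key, value in params.items():
            rospy.set_param(ns + "/" + key, value)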

[00114] The AMCL localization 513 is a probabilistic localization system for the intelligent assistive robot moving in 2D. The localization 513 implements the adaptive (or KLD-sampling) Monte Carlo localization approach (as described by Dieter Fox), which uses a particle filter to track the pose of the intelligent assistive robot against a known map. The AMCL configuration has many configuration options that affect the performance of localization.
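
The particle filter idea behind Monte Carlo localization can be sketched in a few lines; this is a didactic one-dimensional illustration with assumed motion and measurement noise, not the AMCL package itself.

    import math
    import random

    # Didactic 1-D particle filter illustrating the Monte Carlo localization idea.
    # Motion noise, measurement noise and the measurement model are assumed here.
    def mcl_step(particles, control, measurement, motion_noise=0.05, meas_noise=0.2):
        # 1. Motion update: shift every particle by the control input plus noise.
        moved = [p + control + random.gauss(0.0, motion_noise) for p in particles]
        # 2. Measurement update: weight particles by agreement with the measurement.
        weights = [math.exp(-((measurement - p) ** 2) / (2.0 * meas_noise ** 2)) for p in moved]
        total = sum(weights)
        weights = [w / total for w in weights] if total > 0.0 else [1.0 / len(moved)] * len(moved)
        # 3. Resampling: draw a new particle set in proportion to the weights.
        return random.choices(moved, weights=weights, k=len(moved))

    particles = [random.uniform(0.0, 5.0) for _ in range(200)]     # initial belief
    particles = mcl_step(particles, control=0.1, measurement=1.2)  # one filter iteration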

[00115] The G-mapping package 515 is a ROS wrapper for OpenSlam's GMapping. The G-mapping package 515 provides laser-based SLAM (simultaneous localization and mapping) as a ROS node called slam_gmapping. Using slam_gmapping, a 2-D occupancy grid map may be created from laser and pose data collected by the intelligent assistive robot 101. Navfn provides a fast interpolated navigation function that can be used to create plans for the robot 101. The global planner 517 assumes a circular robot and operates on a cost map to find a minimum cost plan from a start point to an end point in a grid. The navigation function is computed with Dijkstra's algorithm.
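
A compact sketch of the Dijkstra computation on an occupancy grid is shown below; the grid encoding (0 = free, 1 = occupied) and the 4-connected moves are assumptions chosen for brevity, not the Navfn implementation.

    import heapq

    # Illustrative Dijkstra planner on a small occupancy grid (0 = free, 1 = occupied).
    def dijkstra(grid, start, goal):
        rows, cols = len(grid), len(grid[0])
        dist = {start: 0}
        prev = {}
        heap = [(0, start)]
        while heap:
            d, cell = heapq.heappop(heap)
            if cell == goal:
                break
            if d > dist.get(cell, float("inf")):
                continue
            r, c = cell
            for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):  # 4-connected
                if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                    nd = d + 1
                    if nd < dist.get((nr, nc), float("inf")):
                        dist[(nr, nc)] = nd
                        prev[(nr, nc)] = cell
                        heapq.heappush(heap, (nd, (nr, nc)))
        # Reconstruct the minimum-cost path from the goal back to the start.
        path, cell = [], goal
        while cell != start:
            path.append(cell)
            cell = prev[cell]
        path.append(start)
        return list(reversed(path))

    grid = [[0, 0, 0], [1, 1, 0], [0, 0, 0]]
    print(dijkstra(grid, (0, 0), (2, 0)))
    # -> [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0)]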

[00116] The local planner 519 is an important module in the navigation stack 114e that determines runtime robot motion planning and obstacle avoidance. The commonly known planners are DWA (Dynamic Window Approach to local robot navigation on a plane), e-band, and the pure pursuit planner. The local planner used in the robot is a state-of-the-art planner which works well for a differential drive robot and dynamic obstacle avoidance. The TEB (Timed Elastic Band) planner 519 is configured and tuned in a way that helps the robot determine the local horizon for its motion and makes the dynamic obstacle avoidance best in class.

[00117] The teb_local_planner package implements a plugin to the base_local_planner of the 2D navigation stack 114e. The underlying method, called Timed Elastic Band, locally optimizes the robot's trajectory with respect to trajectory execution time, separation from obstacles, and compliance with kinodynamic constraints at runtime. The planner primarily tries to seek a time-optimal solution, but it can also be configured for global reference path fidelity. This approach discretizes the trajectory along the prediction horizon in terms of time and is a continuous numerical optimization scheme. Depending on the resolution, the degrees of freedom along the prediction horizon can be very high. The constrained optimization problem is transformed to achieve shorter computation times.

[00118] The TEB planner 519 may optimize multiple trajectories in different topologies at once in order to find the solution. With a certain amount of computation power, the TEB planner 519 achieves a better controller performance when compared to other local planners. The TEB planner 519 may provide support for dynamic obstacle avoidance. The TEB planner 519 is well suited for robot types which have different kinds of dynamic models.
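
A hedged sketch of selecting and tuning the teb_local_planner plugin through ROS parameters is given below; the parameter values are assumptions for a small differential drive base and are not the tuning used by the disclosed robot.

    import rospy

    # Illustrative teb_local_planner parameters (assumed values, not the disclosed tuning).
    teb_params = {
        "base_local_planner": "teb_local_planner/TebLocalPlannerROS",
        "TebLocalPlannerROS/max_vel_x": 0.4,
        "TebLocalPlannerROS/max_vel_theta": 0.6,
        "TebLocalPlannerROS/acc_lim_x": 0.5,
        "TebLocalPlannerROS/acc_lim_theta": 0.5,
        "TebLocalPlannerROS/min_obstacle_dist": 0.3,
        "TebLocalPlannerROS/inflation_dist": 0.5,
        "TebLocalPlannerROS/enable_homotopy_class_planning": True,  # multiple topologies
    }

    rospy.init_node("teb_param_loader")          # illustrative node name
    for key, value in teb_params.items():
        rospy.set_param("move_base/" + key, value)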

[00119] Referring to FIG. 6A and FIG. 6B are example diagrams 600a and 600b depicting the localization for a robot moving in 2D, in accordance with one or more exemplary embodiments. The diagrams 600a and 600b depict the AMCL localization 513 for the intelligent assistive robot moving in 2D. The localization 513 implements the adaptive (or KLD-sampling) Monte Carlo localization approach (as described by Dieter Fox), which uses a particle filter to track the pose of the intelligent assistive robot against a known map. The AMCL configuration has many configuration options that affect the performance of localization.

[00120] Referring to FIG. 7A and FIG. 7B are example diagrams 700a and 700b depicting the G-Mapping package, in accordance with one or more exemplary embodiments. The diagrams 700a and 700b depict the G-mapping package 515, a ROS wrapper for OpenSlam's GMapping. The G-mapping package 515 provides laser-based SLAM (simultaneous localization and mapping) as a ROS node called slam_gmapping. Using slam_gmapping, a 2-D occupancy grid map may be created from laser and pose data collected by the intelligent assistive robot 101.

[00121] Referring to FIG. 7C is an example diagram 700c depicting the map visualization provided to the user on the computing devices by the assistance module, in accordance with one or more exemplary embodiments. The diagram 700c consists of a visualization of the map along with the robot's position represented by a marker. The robot can also be controlled from the webpage by giving the robot goals on the map.

[00122] Referring to FIG. 8 is a flowchart 800 depicting an exemplary method for driving the intelligent assistive robot using navigation subsystem, in accordance with one or more exemplary embodiments. As an option, the method 800 is carried out in the context of the details of FIG. 1A, FIG. 1B, FIG. 1C, FIG. 1E, FIG. 2A, FIG. 2B, FIG. 3, FIG. 4, FIG. 5A, FIG. 5B, FIG. 6A, FIG. 6B, FIG. 7A, and FIG. 7B. However, the method 800 is carried out in any desired environment. Further, the aforementioned definitions are equally applied to the description below.

[00123] The method commences at step 802, loading the static map and initializing parameters. Thereafter at step 804, obtaining the target pose. Thereafter at step 806, planning the global path using Dijkstra's algorithm. Thereafter at step 808, converting the global path to a Timed Elastic Band initial trajectory sequence. Thereafter at step 810, planning the local trajectory with the Timed Elastic Band. Thereafter at step 812, calculating the control variables. Thereafter at step 814, moving the robot to the target point according to the movement instruction. Determining whether the current pose of the robot is the target pose, at step 816. If the answer at step 816 is Yes, the method ends at step 818. If the answer at step 816 is No, the method reverts to step 808.
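
The loop of steps 802-818 can be summarized with a hedged, self-contained sketch in which the robot is reduced to a 2-D point and the global and local planners are abstracted away; every quantity below (step size, tolerance, target) is an assumption used only to show the loop structure.

    import math

    # Hedged simulation of the loop in steps 802-818: the "robot" is a 2-D point
    # that steps toward the target until it is within the tolerance (step 816).
    def navigate_to(target, pose=(0.0, 0.0), step=0.1, tolerance=0.05):
        while math.hypot(target[0] - pose[0], target[1] - pose[1]) > tolerance:
            # Steps 808-812: in the real stack the TEB planner produces the next
            # control; here we simply step straight toward the target.
            dx, dy = target[0] - pose[0], target[1] - pose[1]
            d = math.hypot(dx, dy)
            pose = (pose[0] + step * dx / d, pose[1] + step * dy / d)  # step 814
        return pose  # step 818: target pose reached

    print(navigate_to((1.0, 0.5)))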

[00124] Referring to FIG. 9 is an exemplary flowchart 900 depicting an exemplary method for determining the state of the functionality of the buttons, in accordance with one or more exemplary embodiments. As an option, the method 900 is carried out in the context of the details of FIG. 1A, FIG. 1B, FIG. 1C, FIG. 1E, FIG. 2A, FIG. 2B, FIG. 3, FIG. 4, FIG. 5A, FIG. 5B, FIG. 6A, FIG. 6B, FIG. 7A, FIG. 7B, and FIG. 8. However, the method 900 is carried out in any desired environment. Further, the aforementioned definitions are equally applied to the description below.

[00125] The method commences at step 902, enabling the user to select a button available in the assistance module on the first/second computing device/robot. Thereafter at step 904, determining the state of the button selected by the user. Thereafter at step 906, enabling the user to select get stored location to obtain the previously stored location of the robot. Thereafter at step 908, enabling the user to select move base to obtain the status of the moving robot with positions in the map. Thereafter at step 910, enabling the user to select store pose for saving the location of the room/robot.

[00126] The state machines bring modularity to the navigation stack 114e by separating each task in the form of states and behaviour. State machines in the robot connect the (assistance module) webpage and navigation stack 114e for smooth transition between the states and better overall performance.
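
A minimal sketch of the state-machine idea is given below; the state names (IDLE, NAVIGATING, REPLANNING, DOCKED) and the transition table are illustrative assumptions rather than the states used in the disclosed navigation stack.

    # Illustrative state machine separating navigation tasks into states.
    # State names and transitions are assumptions for illustration only.
    TRANSITIONS = {
        ("IDLE", "goal_received"): "NAVIGATING",
        ("NAVIGATING", "goal_reached"): "IDLE",
        ("NAVIGATING", "obstacle_blocked"): "REPLANNING",
        ("REPLANNING", "path_found"): "NAVIGATING",
        ("IDLE", "low_battery"): "DOCKED",
    }

    class RobotStateMachine:
        def __init__(self):
            self.state = "IDLE"

        def on_event(self, event: str) -> str:
            """Move to the next state if the (state, event) pair has a transition."""
            self.state = TRANSITIONS.get((self.state, event), self.state)
            return self.state

    sm = RobotStateMachine()
    print(sm.on_event("goal_received"))     # -> NAVIGATING
    print(sm.on_event("obstacle_blocked"))  # -> REPLANNING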

[00127] Referring to FIG. 10 is a flowchart 1000 depicting an exemplary method for driving the intelligent assistive robot using navigation subsystem, in accordance with one or more exemplary embodiments. As an option, the method 1000 is carried out in the context of the details of FIG. 1A, FIG. 1B, FIG. 1C, FIG. 1E, FIG. 2A, FIG. 2B, FIG. 3, FIG. 4, FIG. 5A, FIG. 5B, FIG. 6A, FIG. 6B, FIG. 7A, FIG. 7B, FIG. 8 and FIG. 9. However, the method 1000 is carried out in any desired environment. Further, the aforementioned definitions are equally applied to the description below.

[00128] The method commences at step 1002, identifying the distance from the objects in and around its path using the Lidar sensor. Thereafter at step 1004, providing depth maps with the pattern and texture of the objects around it using the depth sensor. Thereafter at step 1006, transmitting the distance and depth map data to the robot operating system. Thereafter at step 1008, processing the distance and depth map data received from the Lidar and transforming the data into a floor map using the robot operating system with the navigation stack. Thereafter at step 1010, planning the precise path using the floor map by the robot operating system to navigate the intelligent assistive robot towards the target/user automatically. Thereafter at step 1012, detecting static or dynamic obstruction in the path within the configured range by the robot operating system. Thereafter at step 1014, transmitting the path information to the controller to run the navigation peripherals by the robot operating system. Thereafter at step 1016, providing path instructions to the driver from the robot operating system, which in turn executes the DC motion using the motors. Thereafter at step 1020, collecting the path information from the motors and sending feedback to the robot operating system. Thereafter at step 1022, receiving the command by the driver from the controller and delivering the required power to drive the motors accordingly. Thereafter at step 1024, providing the required torque to drive the chassis using the motors and driving the intelligent assistive robot to the target location by avoiding obstacles.

[00129] Referring to FIG. 11 is a flowchart 1100 depicting another exemplary method for driving the intelligent assistive robot using navigation subsystem, in accordance with one or more exemplary embodiments. As an option, the method 1100 is carried out in the context of the details of FIG. 1A, FIG. 1B, FIG. 1C, FIG. 1E, FIG. 2A, FIG. 2B, FIG. 3, FIG. 4, FIG. 5A, FIG. 5B, FIG. 6A, FIG. 6B, FIG. 7A, FIG. 7B, FIG. 8 and FIG. 9. However, the method 1100 is carried out in any desired environment. Further, the aforementioned definitions are equally applied to the description below.

[00130] The method commences at step 1102, collecting the sensor information using the sensors through the controller. Thereafter at step 1104, obtaining the Odometry information from the sensor information by a navigation stack. Thereafter at step 1106, obtaining the target location tags or the location coordinates from the user's voice-based interaction. Thereafter at step 1108, enabling the intelligent assistive robot to load the static map from the cloud server as per the voice-based interaction of the user and initialising the path parameters as per the map. Thereafter at step 1110, planning and generating the global path from the current location of the intelligent assistive robot to the target location using the Dijkstra algorithm. Thereafter at step 1112, dividing the global path into smaller segments and converting to the timed elastic band using the TEB planner for local path planning. The segments are divided such that the current segment is within the sensor range of the intelligent assistive robot; this is also called local path planning. Thereafter at step 1114, changing the local path (segment) dynamically upon the detection of any obstacles. Thereafter at step 1116, calculating the angular and the linear velocity of the intelligent assistive robot. Thereafter at step 1118, enabling the intelligent assistive robot to move to the target location by avoiding the obstacles and providing assistance and one or more health services to the user. Thereafter at step 1120, executing one or more tasks based on the user request through at least one of: the voice-based interaction using an interactive voice response system; a communication subsystem; an assistance module embedded in a first computing device, a second computing device, and the intelligent assistive robot.

[00131] Referring to FIG. 12 is a block diagram 1200 illustrating the details of a digital processing system 1200 in which various aspects of the present disclosure are operative by execution of appropriate software instructions. The digital processing system 1200 may correspond to the computing devices 130, 132 (or any other system in which the various features disclosed above can be implemented).

[00132] Digital processing system 1200 may contain one or more processors such as a central processing unit (CPU) 1210, random access memory (RAM) 1220, secondary memory 1230, graphics controller 1260, display unit 1270, network interface 1280, and input interface 1290. All the components except display unit 1270 may communicate with each other over communication path 1250, which may contain several buses as is well known in the relevant arts. The components of Figure 12 are described below in further detail.

[00133] CPU 1210 may execute instructions stored in RAM 1220 to provide several features of the present disclosure. CPU 1210 may contain multiple processing units, with each processing unit potentially being designed for a specific task. Alternatively, CPU 1210 may contain only a single general-purpose processing unit.

[00134] RAM 1220 may receive instructions from secondary memory 1230 using communication path 1250. RAM 1220 is shown currently containing software instructions, such as those used in threads and stacks, constituting shared environment 1225 and/or user programs 1226. Shared environment 1225 includes operating systems, device drivers, virtual machines, etc., which provide a (common) run time environment for execution of user programs 1226.

[00135] Graphics controller 1260 generates display signals (e.g., in RGB format) to display unit 1270 based on data/instructions received from CPU 1210. Display unit 1270 contains a display screen to display the images defined by the display signals. Input interface 1290 may correspond to a keyboard and a pointing device (e.g., touch-pad, mouse) and may be used to provide inputs. Network interface 1280 provides connectivity to a network 128 (e.g., using Internet Protocol), and may be used to communicate with other systems (such as those shown in Figure 1) connected to the wireless connectivity module 120d.

[00136] Secondary memory 1230 may contain hard drive 1235, flash memory 1236, and removable storage drive 1237. Secondary memory 1230 may store the data and software instructions (e.g., for performing the actions noted above with respect to the Figures), which enable digital processing system 1200 to provide several features in accordance with the present disclosure.

[00137] Some or all of the data and instructions may be provided on removable storage unit 1240, and the data and instructions may be read and provided by removable storage drive 1237 to CPU 1210. Floppy drive, magnetic tape drive, CD-ROM drive, DVD Drive, Flash memory, removable memory chip (PCMCIA Card, EEPROM) are examples of such removable storage drive 1237.

[00138] Removable storage unit 1240 may be implemented using medium and storage format compatible with removable storage drive 1237 such that removable storage drive 1237 can read the data and instructions. Thus, removable storage unit 1240 includes a computer readable (storage) medium having stored therein computer software and/or data. However, the computer (or machine, in general) readable medium can be in other forms (e.g., non-removable, random access, etc.).

[00139] In this document, the term "computer program product" is used to generally refer to removable storage unit 1240 or hard disk installed in hard drive 1235. These computer program products are means for providing software to digital processing system 1200. CPU 1210 may retrieve the software instructions, and execute the instructions to provide various features of the present disclosure described above.

[00140] The term “storage media/medium” as used herein refers to any non-transitory media that store data and/or instructions that cause a machine to operate in a specific fashion. Such storage media may comprise non-volatile media and/or volatile media. Non-volatile media includes, for example, optical disks, magnetic disks, or solid-state drives, such as secondary memory 1230. Volatile media includes dynamic memory, such as RAM 1220. Common forms of storage media include, for example, a floppy disk, a flexible disk, a hard disk, a solid-state drive, magnetic tape, or any other magnetic data storage medium, a CD-ROM, any other optical data storage medium, any physical medium with patterns of holes, a RAM, a PROM, an EPROM, a FLASH-EPROM, an NVRAM, or any other memory chip or cartridge.

[00141] In accordance with one or more exemplary embodiments of the present disclosure, a system and method for assisting a user by the intelligent assistive robot comprises collecting sensor information using the sensors 114b through the controller 107. Obtaining the Odometry information from the sensor information by the navigation stack 114e. Obtaining at least one of: target location tags; location coordinates from the voice-based interaction of a user using the microphone 111a. Enabling the intelligent assistive robot 101 to load a static map from the cloud server 119 as per the voice-based interaction of the user and initialising path parameters as per the static map by the navigation stack 114e. Planning and generating a global path from the current location of the intelligent assistive robot to a target location using the Dijkstra algorithm. Dividing the global path into smaller segments and converting to a timed elastic band using a timed elastic band planner 519 for local path planning, the segments are divided such that the current segment is within the sensor range of the intelligent assistive robot. Changing the local path (segment) dynamically upon the detection of the obstacles. Calculating angular and linear velocity of the intelligent assistive robot 101; and enabling the intelligent assistive robot to move to the target location by avoiding the obstacles and providing assistance and health services to the user and executing one or more tasks based on the user request through at least one of: the voice-based interaction using the interactive voice response system 116; a communication subsystem 108; an assistance module 136 embedded in the first computing device 130, the second computing device 132, and the intelligent assistive robot 101.

[00142] In accordance with one or more exemplary embodiments of the present disclosure, receiving sensor information from the lidar sensor 124c, the cliff sensor 124b and the depth sensor 124a and processing the sensor information through the controller 107 for mapping and obstacle detection to execute the navigation plan by the navigation subsystem 112. Identifying the distance from the objects in and around its path and providing depth maps with the pattern and texture of the objects using the Lidar sensor 124c and the three-dimensional camera (depth sensor). Transmitting the distance data and depth map data to the robot operating system 114d and processing the data received from the sensors 114b, thereby transforming the data into a floor map with the navigation stack 114e using computer vision algorithms and artificial intelligence algorithms. Planning the precise path using the floor map by the robot operating system 114d to navigate the intelligent assistive robot towards the target/user automatically. Detecting and recognizing static or dynamic obstruction in the path within the configured range by the robot operating system 114d. Transmitting the path information to the controller 107 to run the navigation peripherals 114a by the robot operating system 114d. Providing path instructions to drive a chassis using the encoders 122c and motors 122a, thereby navigating the intelligent assistive robot 101 to the target location by avoiding obstacles. Collecting the motion information from the motor 122a and the encoder 122c and sending feedback to the robot operating system 114d.

[00143] In accordance with one or more exemplary embodiments of the present disclosure, performing auto mapping using exploratory algorithms to capture a static map and a dynamic map of the surroundings by the navigation subsystem 112. Calibrating a precise path to a user’s desired location automatically using a global planner engine 115b and a local planner engine 115c. Obtaining the target position from the global planner engine 115b and delivering it to the local planner engine 115c. Guiding the intelligent assistive robot 101 to move to any target location by avoiding the obstacles in the path by the local planner engine 115c. Tracking the user, identifying fall detection for elderly care, and tracking a user location to navigate the intelligent robot 101 by the navigation subsystem 112. Enabling the intelligent assistive robot 101 to locate the user in any location of the room using name tags assigned by the user, and identifying people or objects and their positions using computer vision algorithms and proximity analysis techniques.

[00144] In accordance with one or more exemplary embodiments of the present disclosure, storing locations, floor plans and objects’ measurements both in two dimensions and three dimensions in the cloud server 119 and enabling access through at least one of: the first computing device 130; the second computing device 132; and the intelligent assistive robot 101 over the network 128. Enabling the intelligent assistive robot to drive near the user and dispense medications stored in an automatic pill dispensing and reminder unit and generate voice-based reminders to the user for taking pills. Detecting the user posture by the in-built camera 110b to check whether the user consumed the medication using computer vision algorithms as soon as the automatic pill dispensing and reminder unit dispenses the medication. Capturing health vitals by the vitals monitoring subsystem 102, detecting abnormalities based on the captured health vitals, and predicting diseases and infections using artificial intelligence based algorithms. Enabling the user to interact with a doctor, family and friends through at least one of: the touchscreen display system 110a; voice-based interaction using the microphone 111a; and the assistance module 136 on the first and second computing devices 130 and 132, and providing a seamless teleconsultation with the doctor by the interactive voice response system 116 through at least one of: the voice-based interaction of the user; or when an abnormality is detected.

[00145] In accordance with one or more exemplary embodiments of the present disclosure, initiating SOS operations automatically by connecting to emergency services over the network 128, the SOS operations are initiated automatically by connecting to the emergency services upon detecting at least one of: a user posture; a facial expression; and a fall detection. Enabling at least one of: the user; and the intelligent assistive robot 101 to initiate the social engagement through at least one of: the voice-based interaction; and the assistance module 136 installed on the first and the second computing devices 130, 132. The social engagement comprises virtual chat rooms, online discussion forums, talks from key opinion leaders and influencers, blogging pages, gaming, exercises, and hobbies.

[00146] In accordance with one or more exemplary embodiments of the present disclosure, enabling the intelligent assistive robot 101 to locate the user in the house and assess current activity of the user through computer vision algorithms and tries to bring in a topic of conversation based on the prior choices of the user. Enabling the intelligent assistive robot 101 to collect more information from the user using cognitive games, activities and voice interactions to analyse mood and cognitive ability using artificial intelligence engines and algorithms.

[00147] Storage media is distinct from but may be used in conjunction with transmission media. Transmission media participates in transferring information between storage media. For example, transmission media includes coaxial cables, copper wire and fiber optics, including the wires that comprise bus (communication path) 1250. Transmission media can also take the form of acoustic or light waves, such as those generated during radio-wave and infra-red data communications.

[00148] Reference throughout this specification to “one embodiment”, “an embodiment”, or similar language means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present disclosure. Thus, appearances of the phrases “in one embodiment”, “in an embodiment” and similar language throughout this specification may, but do not necessarily, all refer to the same embodiment.

[00149] Although the present disclosure has been described in terms of certain preferred embodiments and illustrations thereof, other embodiments and modifications to preferred embodiments may be possible that are within the principles of the invention. The above descriptions and figures are therefore to be regarded as illustrative and not restrictive.

[00150] Furthermore, the described features, structures, or characteristics of the disclosure may be combined in any suitable manner in one or more embodiments. In the above description, numerous specific details are provided such as examples of programming, software modules, user selections, network transactions, database queries, database structures, hardware modules, hardware circuits, hardware chips, etc., to provide a thorough understanding of embodiments of the disclosure.

[00151] Thus the scope of the present disclosure is defined by the appended claims and includes both combinations and sub-combinations of the various features described herein above as well as variations and modifications thereof, which would occur to persons skilled in the art upon reading the foregoing description.
CLAIMS:
We Claim:
1. A method for assisting user by an intelligent assistive robot, comprising:

collecting sensor information using the one or more sensors through a controller;

obtaining Odometry information from the sensor information by a navigation stack;

obtaining at least one of: target location tags; location coordinates from voice-based interaction of a user using a microphone;

enabling the intelligent assistive robot to load a static map from a cloud server as per voice-based interaction of the user and initialising path parameters as per the static map by the navigation stack;

planning and generating a global path from current location of the intelligent assistive robot to a target location using a Dijkstra algorithm;

dividing the global path into smaller segments and converting to timed elastic band using a timed elastic band planner for local path planning, the segments are divided such that the current segment is within the sensor range of the intelligent assistive robot;

changing the local path(segment) dynamically upon the detection of one or more obstacles;

calculating angular and linear velocity of the intelligent assistive robot; and

enabling the intelligent assistive robot to move to the target location by avoiding the obstacles and providing assistance, one or more health services to the user and executing one or more tasks based on the user request through at least one of: the voice-based interaction using an interactive voice response system; a communication subsystem; an assistance module embedded in a first computing device, a second computing device, and the intelligent assistive robot.

2. The method as claimed in claim 1, comprising a step of receiving sensor information from the lidar sensor, a cliff sensor and a depth sensor and processes the sensor information through a controller for mapping and obstacle detection to execute a navigation plan by a navigation subsystem.

3. The method as claimed in claim 1, comprising a step of identifying distance from the objects in and around its path and providing depth maps with pattern and texture of the objects using the Lidar sensor and the three-dimensional camera.

4. The method as claimed in claim 1, comprising a step of transmitting distance data and depth maps data to a robot operating system and processing the data received from one or more sensors thereby transforming the data into floor map with a navigation stack using computer vision algorithms and artificial intelligence algorithms.

5. The method as claimed in claim 1, comprising a step of planning the precise path using the floor map by the robot operating system to navigate the intelligent assistive robot towards the target/user automatically.

6. The method as claimed in claim 1, comprising a step of detecting and recognizing static or dynamic obstruction in the path within the configured range by the robot operating system.

7. The method as claimed in claim 1, comprising a step of transmitting the path information to a controller to run one or more navigation peripherals by the robot operating system.

8. The method as claimed in claim 1, comprising a step of providing path instructions to drive a chassis using the encoders and motors thereby navigating the intelligent assistive robot to the target location by avoiding obstacles.

9. The method as claimed in claim 1, comprising a step of collecting the motion information from the motor, the encoder and send feedback to the robot operating system.

10. The method as claimed in claim 1, comprising a step of performing auto mapping using exploratory algorithms to capture a static map and a dynamic map of the surroundings by the navigation subsystem.

11. The method as claimed in claim 1, comprising a step of calibrating a precise path to a user’s desired location automatically using a global planner engine and a local planner engine.

12. The method as claimed in claim 1, comprising a step of obtaining the target position from the global planner engine and delivering to the local planner engine.

13. The method as claimed in claim 1, comprising a step of guiding the intelligent assistive robot to move to the any target location by avoiding the obstacles in the path by the local planner engine.

14. The method as claimed in claim 1, comprising a step of tracking the user and identifies fall detection for elderly care and tracks a user location to navigate the intelligent robot by the navigation subsystem.

15. The method as claimed in claim 1, comprising a step of enabling the intelligent assistance robot to locate the user in any location of the room using name tags assigned by the user, and identifies people or object and their position using computer vision algorithms and proximity analysis techniques.

16. The method as claimed in claim 1, comprising a step of storing locations, floor plans and objects’ measurements both in two dimensions and three dimensions in a cloud server and enabling access through at least one of: the first computing device; the second computing device; and the intelligent assistive robot over the network.

17. The method as claimed in claim 1, comprising a step of enabling the intelligent assistive robot to drive near the user and dispense medications stored in an automatic pill dispensing and reminder unit and generates the voice-based reminders to the user for taking pills.

18. The method as claimed in claim 1, comprising a step of detecting the user posture by the in-built camera to check whether the user consumed the medication using computer vision algorithms as soon as the automatic pill dispensing and reminder unit dispenses the medication.

19. The method as claimed in claim 1, comprising a step of capturing one or more health vitals by a vitals monitoring subsystem, detecting one or more abnormalities based on the one or more captured health vitals and predicting diseases and infections using artificial intelligence based algorithms.

20. The method as claimed in claim 1, comprising a step of enabling the user to interact with a doctor, family and friends through at least one of: a touchscreen display system; voice-based interaction; and an assistance module on a first and second computing devices and providing a seamless teleconsultation with the doctor by an interactive voice response system through at least one of: the voice-based interaction of the user; or when an abnormality is detected.

21. The method as claimed in claim 1, comprising a step of initiating SOS operations automatically by connecting to one or more emergency services over a network, the SOS operations are initiated automatically by connecting to the emergency services upon detecting at least one of: a user posture; a facial expression; and a fall detection.

22. The method as claimed in claim 1, comprising a step of enabling at least one of: the user; and the intelligent assistive robot to initiate the social engagement through at least one of: the voice-based interaction; and the assistance module installed on the first and the second computing devices.

23. The method as claimed in claim 22, wherein the social engagement comprises virtual chat rooms, online discussion forums, talks from key opinion leaders and influencers, blogging pages, gaming, exercises, and hobbies.

24. The method as claimed in claim 1, comprising a step of enabling the intelligent assistive robot to locate the user in the house and assess current activity of the user through computer vision algorithms and tries to bring in a topic of conversation based on the prior choices of the user.

25. The method as claimed in claim 1, comprising a step of enabling the intelligent assistive robot to collect more information from the user using cognitive games, activities and voice interactions to analyse mood and cognitive ability using artificial intelligence engines and algorithms.

26. A system for assisting users by an intelligent assistive robot, comprising:
a communication subsystem, an interactive voice response system, a navigation subsystem, a vital monitoring subsystem, one or more sensors, an internal memory, and a SOS button are electrically coupled to a processing device, whereby the communication subsystem is configured to enable the user to interact with a doctor, family and friends through at least one of: a touchscreen display system on the intelligent assistive robot; one or more voice-based interactions using a microphone; and an assistance module embedded in an internal memory of the intelligent assistive robot; a first computing device, and a second computing device, the communication subsystem comprises a display system, an inbuilt camera, an audio system, and a wireless connectivity system;

the interactive voice response system embedded with an artificial intelligence-based semantics and artificial intelligence based proactive conversational agent configured to interpret and process the user’s voice-based interaction and generate replies to the user using algorithm generated voice through a speaker,

the interactive voice response system configured to provide seamless teleconsultation with the doctor through at least of: the voice-based interaction of the user; or when an abnormality is detected using the inbuilt camera, the interactive voice response system configured to generate voice-based reminders to the user to execute one or more tasks and engage the user by initiating conversations based up on user interest and mood;

the navigation subsystem configured to receive sensor information from the one or more sensors and processes the sensor information through the processing device to execute a navigation plan, the navigation subsystem configured to track a user location and identifies fall detection for elderly care and navigates the intelligent assistive robot to a required place upon a user request through the voice-based interaction;
the navigation subsystem configured to calibrate a precise path to the user location automatically using a global planner engine and a local planner engine and performs auto mapping using exploratory algorithms to capture a static map and a dynamic map, wherein the global planner engine configured to obtain a user position and deliver to the local planner engine, the local planner engine configured to guide the intelligent assistive robot to move to the user location by avoiding the obstacles in the path, the navigation subsystem configured to enable the intelligent assistance robot to locate a target in any location of the room using an image recognition, an object recognition, and proximity analysis techniques;

the vital monitoring subsystem is configured to perform automated diagnostic tests to the user for elderly care, the vital monitoring subsystem comprises an automatic pill dispensing and reminder unit, a heart rate and blood oxygen monitoring unit, an infrared thermometer, a glucometer, a blood pressure monitoring unit, a digital stethoscope, an inspection camera, and an Electrocardiogram Capturing Device, whereby the vital monitoring subsystem is configured to monitor health of the elderly on a regular basis to capture one or more health vitals, the vital monitoring subsystem is configured to detect one or more abnormalities based on the one or more health vitals captured and predict diseases and infections using artificial intelligence based algorithms, and

a SOS button configured to initiate SOS operations automatically by connecting to one or more emergency services over the network, the SOS operations are initiated automatically upon detection of abnormalities by connecting to the emergency services upon detecting at least one of: a user posture; a facial expression; and a fall detection.

27. The system as claimed in claim 26, wherein the intelligent assistive robot, a first computing device, and a second computing device comprise the assistance module configured to customize the intelligent assistive robot’s free movement regions, restricted regions and boundaries using an indoor mapping technology.

28. The system as claimed in claim 26, wherein the assistance module is configured to enable at least one of: the doctor; and the family member; to view the health history of the user.

29. The system as claimed in claim 26, wherein the assistance module is configured to enable the user to personalise and tag one or more locations based on user choice.

30. The system as claimed in claim 26, wherein the assistance module is configured to provide assistance and one or more health services to the user.

31. The system as claimed in claim 26, wherein the assistance module is configured to provide interest based engagement groups, chat rooms using artificial intelligence based automated moderators.

32. The system as claimed in claim 26, wherein the navigation subsystem comprises a navigation peripherals, mapping sensors, and a navigation stack.

33. The system as claimed in claim 32, wherein the navigation peripherals comprising a motor, a driver, an encoder and wheels are configured to obtain Odometry information from the controller for the navigation of the intelligent assistive robot.

34. The system as claimed in claim 32, wherein the mapping sensors comprises the depth sensor, the cliff sensor and the lidar sensor configured to create a floor map and detect static and dynamic obstacles.

35. The system as claimed in claim 32, wherein the depth sensor and cliff sensor are configured to detect a cliff, a staircase, one or more transparent obstacles in the pathway and reroutes the pathway of the intelligent assistive robot.

36. The system as claimed in claim 32, wherein the Lidar sensor, depth sensor, three dimensional camera configured to identify the distance from the objects in and around its path and publish the identified distance data to a robot operating system to transform into a floor map.

37. The system as claimed in claim 26, wherein the navigation subsystem is configured to enable the intelligent assistive robot to drive near the user and dispense medications stored in the automatic pill dispensing and reminder unit and generates the voice-based reminders to the user for taking pills using an audio system.

38. The system as claimed in claim 26, wherein the internal memory and a cloud server are configured to store geolocations, floor plans and objects’ measurements both in two dimensions and three dimensions and enable access through at least one of: the first computing device; the second computing device; and the intelligent assistive robot.

39. The system as claimed in claim 26, wherein the heart rate and blood oxygen monitoring unit comprises an integrated pulse oximeter and a heart rate monitoring sensor are configured to detect pulse oximetry and heart-rate signals using a spectrophotometry technique and announces using the audio system.

40. The system as claimed in claim 26, wherein the infrared thermometer comprises the inbuilt camera is configured to detect body temperature of the user and is designed for non-contact temperature sensing capable of capturing from predetermined distance.

41. The system as claimed in claim 26, wherein the glucometer is configured to enable the user to take a small drop of blood from the finger and apply on a strip to calculate the glucose levels of the body in 15 to 20 seconds and display on a display system and/or announces using the audio system.

42. The system as claimed in claim 26, wherein the blood pressure monitoring unit is embedded with an intermittent non-invasive sphygmomanometer and touch based cuff less technology using spectrophotometry is configured to identify the systolic and diastolic blood pressure values of the user.

43. The system as claimed in claim 26, wherein the vital monitoring subsystem comprises the digital stethoscope is configured to capture heart sounds of the user and converts into a digital format to display on at least one of: the display system; the first computing device; and the second computing device and announces using the audio system.

44. The system as claimed in claim 26, wherein the electrocardiogram capturing device configured to capture a single lead Electrocardiography from the fingertips of the both hands of the user.

45. The system as claimed in claim 26, wherein at least one of: the inspection camera; and the inbuilt camera; are configured to detect the user motion and posture using the Lidar sensor and the depth sensor (three-dimensional depth camera technologies) and connects to the emergency services by detecting at least one of: the user posture; the facial expression; and the fall detection.

46. The system as claimed in claim 26, wherein the inspection camera is a supplementary camera with Full High-definition clarity and Zooming up to 1000x to capture and stream high resolution video during the doctor consultation to check the user body parts, organs, ENT, and skin related issues.

47. The system as claimed in claim 26, wherein the inbuilt camera is configured to detect and recognize the motion of the user or object coming near to the intelligent assistive robot and assess the user interest and mood based on the one or more facial expressions of the user.

48. The system as claimed in claim 26, wherein the automatic pill dispensing and reminder unit is configured to dispense the required medicines to the user as per the preloaded prescription based on the timings and dosage suggested by the doctor and order prescription automatically to refill the medicines based on the available stock.

49. The system as claimed in claim 26, wherein the automatic pill dispensing and reminder unit is configured to assist the user with the voice-based reminders using an audio system to consume the pills to ensure compliance with the prescription.

50. The system as claimed in claim 26, wherein the inbuilt camera is configured to detect the user posture to check whether the user consumed the medication as soon as the automatic pill dispensing and reminder unit dispenses the medication.

51. The system as claimed in claim 26, wherein the display system is configured to display the health vitals received from the processing device and plays multimedia videos.

52. The system as claimed in claim 26, wherein the display system is configured to display notifications, info graphics for the user to guide through activities such as exercise, cooking, painting, hobbies.

Documents

Application Documents

# Name Date
1 202141013007-STATEMENT OF UNDERTAKING (FORM 3) [25-03-2021(online)].pdf 2021-03-25
2 202141013007-PROVISIONAL SPECIFICATION [25-03-2021(online)].pdf 2021-03-25
3 202141013007-POWER OF AUTHORITY [25-03-2021(online)].pdf 2021-03-25
4 202141013007-FORM FOR STARTUP [25-03-2021(online)].pdf 2021-03-25
5 202141013007-FORM FOR SMALL ENTITY(FORM-28) [25-03-2021(online)].pdf 2021-03-25
6 202141013007-FORM 1 [25-03-2021(online)].pdf 2021-03-25
7 202141013007-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [25-03-2021(online)].pdf 2021-03-25
8 202141013007-EVIDENCE FOR REGISTRATION UNDER SSI [25-03-2021(online)].pdf 2021-03-25
9 202141013007-DRAWINGS [25-03-2021(online)].pdf 2021-03-25
10 202141013007-DECLARATION OF INVENTORSHIP (FORM 5) [25-03-2021(online)].pdf 2021-03-25
11 202141013007-Correspondence_Form1. Form3, Form5, Form28, Power of Attorney_29-03-2021.pdf 2021-03-29
12 202141013007-PostDating-(16-03-2022)-(E-6-66-2022-CHE).pdf 2022-03-16
13 202141013007-APPLICATIONFORPOSTDATING [16-03-2022(online)].pdf 2022-03-16
14 202141013007-Response to office action [04-04-2022(online)].pdf 2022-04-04
15 202141013007-DRAWING [25-04-2022(online)].pdf 2022-04-25
16 202141013007-COMPLETE SPECIFICATION [25-04-2022(online)].pdf 2022-04-25
17 202141013007-FORM-26 [12-03-2024(online)].pdf 2024-03-12
18 202141013007-FORM 18 [12-03-2024(online)].pdf 2024-03-12
19 202141013007-STARTUP [02-09-2024(online)].pdf 2024-09-02
20 202141013007-FORM28 [02-09-2024(online)].pdf 2024-09-02
21 202141013007-FORM 18A [02-09-2024(online)].pdf 2024-09-02