
Hybrid Reality Based i-BOT Navigation and Control

Abstract: A system for interaction, navigation and control of an autonomous robot. The system comprises two electronic devices, which are compatible with all operating systems and with the magnetic trigger of a virtual reality device. When a virtual reality application runs, it presents two scenes on the display. Interaction, navigation and control of the autonomous robot are driven by movements of the virtual reality device. Moving the head upwards or downwards moves the robot forward or backward, respectively, while moving the head left or right results in servo rotation of the robot. The robot can interact with people in real time and can recognize whether it has already interacted with a person. It identifies people by gender, age, voice and face recognition, and can also make calls, send texts, take selfies, and so on.


Patent Information

Application #: 4671/MUM/2015
Filing Date: 11 December 2015
Publication Number: 24/2017
Publication Type: INA
Invention Field: PHYSICS
Status: Granted
Grant Date: 2023-07-20

Applicants

Tata Consultancy Services Limited
Nirmal Building, 9th Floor, Nariman Point, Mumbai, Maharashtra 400021, India

Inventors

1. TOMMY, Robin
Tata Consultancy Services Limited, CLC Bodhipark Building, Technopark Phase 1 Campus, Kariyavattom P.O Kazhakuttom, Trivandrum, Kerala, India - 695581
2. VERMA, Arvind
Tata Consultancy Services Limited, CLC Bodhipark Building, Technopark Phase 1 Campus, Kariyavattom P.O Kazhakuttom, Trivandrum, Kerala, India - 695581
3. BHATIA, Aaditya
Tata Consultancy Services Limited, CLC Bodhipark Building, Technopark Phase 1 Campus, Kariyavattom P.O Kazhakuttom, Trivandrum, Kerala, India - 695581
4. DESHMUKH, Lokesh
Tata Consultancy Services Limited, CLC Bodhipark Building, Technopark Phase 1 Campus, Kariyavattom P.O Kazhakuttom, Trivandrum, Kerala, India - 695581
5. CHAKRABORTY, Kaustav
Tata Consultancy Services Limited, CLC Bodhipark Building, Technopark Phase 1 Campus, Kariyavattom P.O Kazhakuttom, Trivandrum, Kerala, India - 695581

Specification

FORM 2

THE PATENTS ACT, 1970
(39 of 1970)
&
THE PATENT RULES, 2003

COMPLETE SPECIFICATION
(See Section 10 and Rule 13)

Title of invention:
HYBRID REALITY BASED i-BOT NAVIGATION AND CONTROL

Applicant:
Tata Consultancy Services Limited
A company incorporated in India under the Companies Act, 1956
Having address:
Nirmal Building, 9th floor,
Nariman Point, Mumbai 400021,
Maharashtra, India

The following specification particularly describes the invention and the manner in which it is to be performed.

CROSS-REFERENCE TO RELATED APPLICATIONS AND PRIORITY

[001] The present application claims priority from Indian provisional specification no. 4671/MUM/2015, filed on 11 December 2015, the complete disclosure of which, in its entirety, is incorporated herein by reference.

FIELD OF THE INVENTION

[002] The present subject matter described herein relates, in general, to navigation and control of an autonomous robot and, more particularly, to systems and methods that facilitate virtual reality based interaction, navigation and control of the autonomous robot.

BACKGROUND OF THE INVENTION

[003] In today's world an autonomous robot is needed in every sphere of industry. An example of an autonomous robot is the i-BOT. The i-BOT is a powered social robot with a number of features distinguishing it from most powered social robots. By rotating its two sets of powered wheels about each other, the i-BOT can walk up and down stairs. The wheels can roll slightly at each step to compensate for a wide range of stair dimensions. When stair climbing without assistance, the user requires a sturdy handrail and a strong grip.

[004] Existing autonomous robots require human effort for navigation and for interaction with humans and IoT devices. Navigation, control and interaction of autonomous robots with humans and the Internet of Things (IoT) have become more prominent. Control of, and interaction with, autonomous robots from a virtual world nevertheless remains out of reach today due to technological challenges.

[005] Virtual reality (hereinafter VR) is supplemented by augmented reality technology. The real world is the environment that an observer can see, feel, hear, taste, or smell using the observer's own senses. The virtual world is defined as a generated environment stored in a storage medium or calculated using a processor.

[006] There are a number of situations in which it would be advantageous to superimpose computer-generated information on a scene being viewed by a human viewer. For example, a mechanic working on a complex piece of equipment would benefit from having the relevant portion of the maintenance manual displayed within his field of view while he is looking at the equipment. Display systems that provide this feature are often referred to as “Augmented Reality” systems. Typically, these systems utilize a head mounted display that allows the user’s view of the real world to be enhanced or added to by “projecting” into it computer generated annotations or objects.

[007] In today’s world, there is an untapped need for an autonomous robot which can navigate and avoid obstacles without manual interference. Interaction, control and communication with people and IoT devices are equally important for an autonomous robot. Current state-of-the-art systems for autonomous robots, however, lack several capabilities, including: (a) virtual reality based control of the autonomous robot; (b) navigation of the autonomous robot along a predefined path; (c) interaction and communication of the autonomous robot with IoT devices; and (d) remote control based face recognition and gender and age based human interaction by the autonomous robot.

OBJECT OF THE INVENTION

[008] In accordance with the present invention, the primary objective is to provide a system and a method to facilitate a virtual reality based interaction, navigation and control of the autonomous robot.

[009] Another objective of the invention is to provide a system and a method for navigation of the autonomous robot based on a predefined path.

[0010] Another objective of the invention is to provide a system and a method for interaction and communication of the autonomous robot with IoT based devices.

[0011] Another objective of the invention is to provide a system and a method for a remote control based face recognition, gender, age and emotion based human interaction by the autonomous robot.

[0012] Other objectives and advantages of the present invention will be more apparent from the following description when read in conjunction with the accompanying figures, which are not intended to limit the scope of the present disclosure.

SUMMARY OF THE INVENTION
[0013] The following presents a simplified summary of some embodiments of the disclosure in order to provide a basic understanding of the embodiments. This summary is not an extensive overview of the embodiments. It is not intended to identify key/critical elements of the embodiments or to delineate the scope of the embodiments. Its sole purpose is to present some embodiments in a simplified form as a prelude to the more detailed description that is presented below.

[0014] The present subject matter relates to virtual reality based interaction, navigation and control of an autonomous robot. The system includes an i-BOT which is controlled by a mobile electronic device. It would be appreciated that the i-BOT can be used as the autonomous robot. The system comprises two electronic devices compatible with all operating systems: a mobile electronic device and a handheld electronic device. The mobile electronic device is mounted on the autonomous robot and acts as a controller of the i-BOT. The controller enables controlling the motion of the i-BOT, giving instructions, and monitoring IoT based devices in the surroundings of the i-BOT.

[0015] The handheld electronic device is also compatible with a virtual reality based platform and hosts a virtual reality application. On running the application, the handheld electronic device enables interaction, navigation and control of the autonomous robot from the virtual environment. The system supports setting reminders, human detection and activation, and age and gender identification. The system also has features such as taking photos, making calls, sending texts, and the like.

[0016] In the present invention, a system for virtual reality based interaction, navigation and control of the autonomous robot is provided. The system comprises an input/output interface; a memory with a plurality of instructions; a processor in communication with the memory; an autonomous robot electronically coupled with the processor; a mobile electronic device electronically coupled with the processor and mounted on the autonomous robot, acting as a controller to interact with one or more humans and internet of things (IoT) based objects; a virtual reality device electronically coupled with the processor, the virtual reality device to visualize a stereoscopic image with a wide field of view in a virtual environment; and a handheld electronic device electronically coupled with the processor and placed within the virtual reality device, to execute a plurality of actions over the autonomous robot from the virtual environment.

[0017] In the present invention, a computer implemented method for interaction between an autonomous robot and a human is provided. The method comprises receiving a real time image feed from the autonomous robot on a virtual reality application installed in a handheld electronic device; visualizing a stereoscopic image with a wide field of view of the received real time image, using a virtual reality device, in a virtual environment; determining whether one or more humans are present in the received image using the cognitive intelligence of the autonomous robot; learning one or more characteristics of the determined one or more humans using the social intelligence of the autonomous robot; and communicating accordingly with the determined one or more humans, based on the learned one or more characteristics, through a mobile electronic device.

BRIEF DESCRIPTION OF THE FIGURES
[0018] The embodiments herein will be better understood from the following detailed description with reference to the drawings, in which:

[0019] Figure 1 is a block diagram showing the system for virtual reality based navigation, control and interaction of the autonomous robot; and

[0020] Figure 2 illustrates a flow diagram showing a method for virtual reality based navigation, control and interaction of the autonomous robot.

DETAILED DESCRIPTION OF THE INVENTION
[0021] Some embodiments of this invention, illustrating all its features, will now be discussed in detail. The words "comprising," "having," "containing," and "including," and other forms thereof, are intended to be equivalent in meaning and be open ended in that an item or items following any one of these words is not meant to be an exhaustive listing of such item or items, or meant to be limited to only the listed item or items.

[0022] It must also be noted that, as used herein and in the appended claims, the singular forms "a," "an," and "the" include plural references unless the context clearly dictates otherwise. Although any systems and methods similar or equivalent to those described herein can be used in the practice or testing of embodiments of the present invention, the preferred systems and methods are now described. In the following description, for the purpose of explanation and understanding, reference has been made to numerous embodiments, with no intent to limit the scope of the invention.

[0023] One or more components of the invention are described as modules for the understanding of the specification. For example, a module may include a self-contained component in a hardware circuit comprising logic gates, semiconductor devices, integrated circuits or any other discrete components. The module may also be part of any software program executed by any hardware entity, for example a processor. The implementation of a module as a software program may include a set of logical instructions to be executed by the processor or any other hardware entity.

[0024] The disclosed embodiments are merely exemplary of the invention, which may be embodied in various forms.

[0025] The elements illustrated in the Figures interoperate as explained in more detail below. Before setting forth the detailed explanation, however, it is noted that all of the discussion below, regardless of the particular implementation being described, is exemplary in nature, rather than limiting. For example, although selected aspects, features, or components of the implementations are depicted as being stored in memories, all or part of the systems and methods consistent with the described system and method may be stored on, distributed across, or read from other machine-readable media.

[0026] Method steps of the invention may be performed by one or more computer processors executing a program tangibly embodied on a computer-readable medium to perform functions of the invention by operating on input and generating output. Suitable processors include, by way of example, both general and special purpose microprocessors. Generally, the processor receives (reads) instructions and data from a memory (such as a read-only memory and/or a random access memory) and writes (stores) instructions and data to the memory. Storage devices suitable for tangibly embodying computer program instructions and data include, for example, all forms of non-volatile memory, such as semiconductor memory devices, including EPROM, EEPROM, and flash memory devices; magnetic disks such as internal hard disks and removable disks; magneto-optical disks; and CD-ROMs. Any of the foregoing may be supplemented by, or incorporated in, specially-designed ASICs (application-specific integrated circuits) or FPGAs (Field-Programmable Gate Arrays). A computer can generally also receive (read) programs and data from, and write (store) programs and data to, a non-transitory computer-readable storage medium such as an internal disk (not shown) or a removable disk.

[0027] In view of the foregoing, an embodiment herein provides a system and a method for interaction, navigation and control of an autonomous robot. The system includes an autonomous robot and two electronic devices which are compatible with all operating systems: a mobile electronic device and a handheld electronic device. The handheld electronic device has magnetometer functionality. Further, the system includes a virtual reality device (such as Google Cardboard), a memory, and a processor in communication with the memory.

[0028] In the preferred embodiment, the system further comprises a plurality of sensors in the autonomous robot for communicating with the autonomous robot within a predefined range. It should be appreciated that the predefined range is equivalent to the communication range of the plurality of sensors. The system is useful for general, handicapped and elderly users. The system provides a platform for the user to control the autonomous robot and to interact with humans and IoT based devices from a virtual environment. Upward and downward movement of the Google Cardboard moves the autonomous robot forward and backward, respectively. Left and right movement of the Google Cardboard results in servo rotation of the autonomous robot. The system also provides a virtual reality environment in which the user can control the navigation, ultrasonic detection and obstacle avoidance of the autonomous robot.
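
By way of a minimal illustrative sketch only, the head-pose mapping described above may be expressed as follows; the dead-zone thresholds and the Command encoding are assumptions for illustration, not part of this specification:

    // Maps the head pose reported by the Cardboard-mounted handheld device
    // to a drive command for the autonomous robot. Thresholds are assumed.
    enum class Command : unsigned char { Stop, Forward, Backward, RotateLeft, RotateRight };

    // pitchDeg: positive when the head tilts up; yawDeg: positive when it turns right.
    Command commandFromHeadPose(float pitchDeg, float yawDeg) {
        const float kPitchThreshold = 15.0f;  // assumed dead zone, in degrees
        const float kYawThreshold   = 15.0f;
        if (pitchDeg >  kPitchThreshold) return Command::Forward;     // head up: move forward
        if (pitchDeg < -kPitchThreshold) return Command::Backward;    // head down: move backward
        if (yawDeg   >  kYawThreshold)   return Command::RotateRight; // servo rotation
        if (yawDeg   < -kYawThreshold)   return Command::RotateLeft;
        return Command::Stop;
    }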

[0029] In the preferred embodiment, the mobile electronic device acts as a controller of the autonomous robot. The system provides a remote control which is operated by the mobile electronic device. The system can be extended to provide fully hands-free interaction with humans and with the plurality of IoT based devices, such as a TV or the lighting system of a room, in the surroundings of the autonomous robot. The system determines the presence of an individual or a crowd in the surroundings. The system learns characteristics such as age and gender of an individual using the cognitive intelligence of the autonomous robot. The system also learns whether it has communicated with the individual before. If so, the system may resume communication in accordance with the previous communication, using the social intelligence of the autonomous robot. If the system has not communicated with the individual previously, it may start communication according to the emotions of the individual, using the predictive intelligence of the autonomous robot. The system also provides physical assistance for elderly people and children.
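
A minimal sketch of this interaction-memory behaviour follows; the PersonProfile structure, the face-ID keying and the greeting strings are hypothetical illustrations, not the claimed implementation:

    #include <map>
    #include <string>

    // Hypothetical record of a person the robot has met before.
    struct PersonProfile {
        std::string name;
        int age = 0;
        std::string gender;
        std::string lastTopic;  // where the previous conversation left off
    };

    // Keyed by an identifier produced by the face-recognition step (assumed).
    std::map<std::string, PersonProfile> knownPeople;

    std::string greetingFor(const std::string& faceId, const std::string& emotion) {
        auto it = knownPeople.find(faceId);
        if (it != knownPeople.end()) {
            // Seen before: resume from the previous communication (social intelligence).
            return "Welcome back, " + it->second.name +
                   ". Last time we talked about " + it->second.lastTopic + ".";
        }
        // New person: open according to the detected emotion (predictive intelligence).
        return emotion == "happy" ? "Hello! You look cheerful today."
                                  : "Hello! How can I help you?";
    }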

[0030] According to an embodiment of the invention, a system 100 for interaction, navigation and control of an autonomous robot 102 is provided. The system 100 includes a mobile electronic device 104, a handheld electronic device 106, a virtual reality application 108 installed in the handheld electronic device 106, a virtual reality device 110, a user interface 112, a memory 114, and a processor 116 in communication with the memory 114.

[0031] In an embodiment of the invention, the mobile electronic device 104 is mounted on the autonomous robot 102 and the handheld electronic device 106 hosts the virtual reality application 108. The mobile electronic device 104 and the handheld electronic device 106 are configured to work with all operating systems. The virtual reality application 108 is compatible with operating systems such as Android, iOS and Windows. The handheld electronic device 106 is configured to be placed within the virtual reality device 110. The virtual reality device 110 is used to visualize a stereoscopic image with a wide field of view in a virtual environment. In this case a Google Cardboard, which can be mounted on the head, may be used as the virtual reality device.
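
As a rough sketch of the stereoscopic presentation, assuming OpenCV for image handling, the robot's camera feed can be rendered as a side-by-side pair for the Cardboard's two lenses; the fixed horizontal offset below stands in for true per-eye rendering and is an illustrative assumption:

    #include <opencv2/opencv.hpp>

    // Produces a side-by-side stereo frame from a single camera image.
    cv::Mat toSideBySideStereo(const cv::Mat& frame, int eyeOffsetPx = 12) {
        cv::Mat out(frame.rows, frame.cols * 2, frame.type());
        cv::Mat leftEye  = out(cv::Rect(0, 0, frame.cols, frame.rows));
        cv::Mat rightEye = out(cv::Rect(frame.cols, 0, frame.cols, frame.rows));
        frame.copyTo(leftEye);
        // Shift the right-eye view slightly to create parallax.
        cv::Mat shift = (cv::Mat_<double>(2, 3) << 1, 0, -eyeOffsetPx,
                                                   0, 1, 0);
        cv::warpAffine(frame, rightEye, shift, frame.size());
        return out;
    }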

[0032] In the preferred embodiment of the invention, the mobile electronic device 104 mounted on the autonomous robot 102 acts as a remote controller of the autonomous robot 102. The mobile electronic device 104 is incorporated with a voice controlled interface. When the autonomous robot 102 detects one or more humans, the mobile electronic device 104 initializes and starts interaction with the detected humans. The mobile electronic device 104 stores details of the people with whom the robot has interacted. The mobile electronic device 104 can provide assistance to the autonomous robot 102 in making reminders, calls and texts and in taking selfies. The mobile electronic device 104 also has face recognition to recognize people with whom the robot has already interacted. The user interface 112 allows the user to see live video of the surroundings, and the processor 116 provides the controlling functions of the system 100.

[0033] In the preferred embodiment of the invention, the plurality of IoT based devices in the surroundings of the autonomous robot 102 are distinct from one another. The system 100 allows the user to control the plurality of IoT devices by moving the head upwards and downwards. On moving the head to the right, the system 100 provides instructions on the display of the user interface 112 to guide the user in switching the IoT based devices in the surroundings of the autonomous robot 102.

[0034] In the preferred embodiment of the invention, the autonomous robot 102 comprises a plurality of sensors to establish interaction with one or more humans and a plurality of IoT based devices. The plurality of sensors of the autonomous robot includes an IR sensor, WiFi, Bluetooth, a temperature sensor, a magnetometer, an accelerometer and a gyroscope. The autonomous robot 102 further comprises cognitive intelligence, social intelligence and predictive intelligence. The cognitive intelligence of the autonomous robot 102 involves its ability to reason, plan, learn quickly and learn from experience. The social intelligence of the autonomous robot 102 determines characteristics such as age, gender and emotion of an individual or a crowd. According to the age and gender of an individual, the autonomous robot 102 may start communication. The autonomous robot 102 stores the characteristics of, and the communication established with, an individual. The autonomous robot 102 is configured to recall the previous communication when the individual interacts with it again, and can resume from the last communication. The autonomous robot can also move along a predefined path.
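
A minimal sketch of the range gate implied here and in paragraph [0028], where the predefined range equals the sensors' communication range; the nominal ranges below are assumptions:

    #include <algorithm>

    struct SensorRanges {             // assumed nominal ranges, in metres
        double bluetooth = 10.0;
        double wifi      = 30.0;
        double infrared  = 5.0;
    };

    // The robot may interact only while some channel can reach the target.
    bool withinInteractionRange(double distanceM, const SensorRanges& r) {
        double maxRange = std::max({r.bluetooth, r.wifi, r.infrared});
        return distanceM <= maxRange;
    }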

[0035] According to an illustrative embodiment of the invention, figure 2 illustrates a method 200 for interaction, navigation and control of the autonomous robot 102. At step 202, the user may install a virtual reality application 108 on the handheld electronic device 106. At step 204, when the user runs the virtual reality application 108 on the handheld electronic device 106, the application 108 splits the screen of the handheld electronic device 106 into two halves. At step 206, the user places the handheld electronic device 106 in the Google Cardboard/virtual reality device 110 and wears it on the head. The virtual reality application 108 consists of two scenes. The first scene is a simple menu screen providing two options to the user: to play, or to exit the application. The second scene, which presents a stereoscopic image with a wide field of view, is the virtual environment. Initially the user is able to see a real time image feed from the autonomous robot, captured by the camera of the mobile electronic device.

[0036] In the preferred embodiment of the invention, at step 208, when the user moves the head upwards or downwards, the autonomous robot 102 moves forward or backward, respectively. Moving the head left or right results in servo rotation of the autonomous robot 102. The movement of the head can also control the plurality of IoT devices in the surroundings of the autonomous robot 102. The user can select options by pulling a triggering object in the virtual reality environment, such as the magnetic trigger on the left side of the cardboard, to select control of the autonomous robot or to enable the plurality of IoT devices in its surroundings. At step 210, on moving the head to the right, the user may receive guidance and an image on the display of the user interface 112 describing how to switch the plurality of IoT devices in the surroundings of the autonomous robot 102 on and off.
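
A minimal sketch of this trigger-driven mode switch, reusing the Command type from the earlier sketch; the mode names and dispatch targets are hypothetical stubs, not the filed design:

    enum class ControlMode { Robot, IoTDevices };

    struct VrController {
        ControlMode mode = ControlMode::Robot;

        // Pulling the Cardboard's magnetic trigger toggles what head
        // gestures control: the robot itself, or nearby IoT devices.
        void onMagneticTrigger() {
            mode = (mode == ControlMode::Robot) ? ControlMode::IoTDevices
                                                : ControlMode::Robot;
        }

        // Dispatch a head gesture according to the current mode.
        void onHeadGesture(Command c) {
            if (mode == ControlMode::Robot) sendToRobot(c);
            else toggleIoTDevice(c);
        }

        void sendToRobot(Command) { /* e.g. a byte over Bluetooth (assumed) */ }
        void toggleIoTDevice(Command) { /* e.g. an assumed smart-home bridge call */ }
    };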

[0037] In the preferred embodiment of the invention, at step 212, the user can continue or exit the application 108, based on the selection of buttons from the menu, and enter a remote control mode. In the remote control mode the mobile electronic device 104 acts as the brain and may interact with, navigate and control the autonomous robot 102. In the remote control mode the user has a voice recognition application, a face recognition application, and control options as well as button clicking options. At step 214, the mobile electronic device 104 mounted on top of the autonomous robot 102, having a voice controlled interface, provides assistance in making reminders, calls and texts and in taking selfies, whether the system is in the virtual environment control mode or in the remote control mode.

[0038] In an example of the invention, the autonomous robot consists of high torque motors connected to motor drivers. The motor drivers are guided by Arduino instructions. The ultrasonic sensors of the autonomous robot help in navigation by detecting and avoiding obstacles. The Arduino receives Bluetooth instructions from the mobile electronic device mounted on the autonomous robot. The mobile electronic device is connected to the servo motors of the autonomous robot, which turn in the direction in which an obstacle is detected. This enables the autonomous robot to determine whether it has encountered a human or not.
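
An illustrative Arduino-style sketch of this example follows; the pin assignments, the single-character Bluetooth command protocol and the 20 cm stopping distance are assumptions, not the filed firmware:

    const int TRIG_PIN = 9, ECHO_PIN = 10;        // ultrasonic sensor
    const int IN1 = 4, IN2 = 5, IN3 = 6, IN4 = 7; // motor-driver inputs

    long distanceCm() {
      digitalWrite(TRIG_PIN, LOW);  delayMicroseconds(2);
      digitalWrite(TRIG_PIN, HIGH); delayMicroseconds(10);
      digitalWrite(TRIG_PIN, LOW);
      return pulseIn(ECHO_PIN, HIGH) / 58;        // echo time (us) to centimetres
    }

    void drive(bool forward) {                    // both motors, one direction
      digitalWrite(IN1, forward);  digitalWrite(IN2, !forward);
      digitalWrite(IN3, forward);  digitalWrite(IN4, !forward);
    }

    void stopMotors() {
      digitalWrite(IN1, LOW); digitalWrite(IN2, LOW);
      digitalWrite(IN3, LOW); digitalWrite(IN4, LOW);
    }

    void setup() {
      Serial.begin(9600);                         // Bluetooth module on serial
      pinMode(TRIG_PIN, OUTPUT); pinMode(ECHO_PIN, INPUT);
      pinMode(IN1, OUTPUT); pinMode(IN2, OUTPUT);
      pinMode(IN3, OUTPUT); pinMode(IN4, OUTPUT);
    }

    void loop() {
      if (distanceCm() < 20) { stopMotors(); return; }  // obstacle ahead: stop
      if (Serial.available()) {
        char c = Serial.read();                   // 'F' forward, 'B' back, else stop
        if (c == 'F') drive(true);
        else if (c == 'B') drive(false);
        else stopMotors();
      }
    }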
CLAIMS:

1. A system comprising:
an input/output interface;
a memory with a plurality of instructions;
a processor in communication with the memory;
an autonomous robot electronically coupled with the processor;
a mobile electronic device electronically coupled with the processor, the mobile electronic device mounted on the autonomous robot, wherein the mobile electronic device is a controller to interact with one or more human and one or more IoT based devices present in a predefined range of the autonomous robot;
a virtual reality device electronically coupled with the processor, the virtual reality device to visualize a stereoscopic image with wide field view, in a virtual environment; and
a handheld electronic device electronically coupled with the processor, placed in the virtual reality device, the handheld electronic device configured to execute a plurality of actions using the autonomous robot from the virtual environment.

2. The system claimed in claim 1, wherein the autonomous robot communicates with one or more human according to the age, gender and emotions of the human using the mobile electronic device.

3. The system claimed in claim 1, wherein the mobile electronic device and the handheld electronic device are compatible with Android, iOS, and Windows.

4. The system claimed in claim 1, wherein the plurality of actions executed by the handheld electronic device from a virtual environment includes interaction, navigation and control of the autonomous robot.

5. The system claimed in claim 1, wherein the autonomous robot comprises a plurality of sensors to establish interaction with one or more human and the one or more IoT based devices, the plurality of sensors include IR sensor, WiFi, Bluetooth, temperature sensor, magnetometer sensor, accelerometer sensor and gyroscope sensor.

6. The system claimed in claim 1, wherein the mobile electronic device acts as a remote controller of the autonomous robot.

7. The system claimed in claim 1, wherein the mobile electronic device is configured to provide assistance to the autonomous robot in making calls, sending texts, and taking selfies.

8. The system claimed in claim 1, wherein the mobile electronic device is configured to store details of the one or more human with whom the autonomous robot has interacted.

9. A computer implemented method for providing interaction between an autonomous robot and one or more human, the method comprising:
receiving a real time image feed from the autonomous robot on a virtual reality application installed in a handheld electronic device;
visualizing a stereoscopic image with wide field view, of the received real time image using a virtual reality device, in a virtual environment;
determining the presence of one or more human in the received image using cognitive intelligence of the autonomous robot;
learning one or more characteristics of the determined one or more human using social intelligence of the autonomous robot; and
communicating with the determined one or more human based on the learned one or more characteristics using a mobile electronic device.

10. A computer implemented method for providing interaction between an autonomous robot and one or more IoT based devices, the method comprising:
receiving a real time image feed from the autonomous robot on a virtual reality application installed in a handheld electronic device;
visualizing a stereoscopic image with wide field view, of the received real time image using a virtual reality device, in a virtual environment;
determining the presence of one or more IoT based devices in the received image using cognitive intelligence of the autonomous robot;
learning one or more characteristics of the determined one or more IoT based devices using cognitive intelligence of the autonomous robot;
interacting with the determined one or more IoT based devices based on the learned one or more characteristics through a mobile electronic device; and
executing at least one action over the one or more IoT based devices.

Documents

Application Documents

# Name Date
1 Form 3 [11-12-2015(online)].pdf 2015-12-11
2 Drawing [11-12-2015(online)].pdf 2015-12-11
3 Description(Provisional) [11-12-2015(online)].pdf 2015-12-11
4 REQUEST FOR CERTIFIED COPY [29-02-2016(online)].pdf 2016-02-29
5 Drawing [14-03-2016(online)].pdf 2016-03-14
6 Description(Complete) [14-03-2016(online)].pdf 2016-03-14
7 Request For Certified Copy-Online.pdf 2018-08-11
8 Form-2(Online).pdf 2018-08-11
9 ABSTRACT1.jpg 2018-08-11
10 4671-MUM-2015-Power of Attorney-220316.pdf 2018-08-11
11 4671-MUM-2015-Form 1-060516.pdf 2018-08-11
12 4671-MUM-2015-Correspondence-220316.pdf 2018-08-11
13 4671-MUM-2015-Correspondence-060516.pdf 2018-08-11
14 4671-MUM-2015-FER.pdf 2020-02-25
15 4671-MUM-2015-OTHERS [25-08-2020(online)].pdf 2020-08-25
16 4671-MUM-2015-FER_SER_REPLY [25-08-2020(online)].pdf 2020-08-25
17 4671-MUM-2015-COMPLETE SPECIFICATION [25-08-2020(online)].pdf 2020-08-25
18 4671-MUM-2015-CLAIMS [25-08-2020(online)].pdf 2020-08-25
19 4671-MUM-2015-PatentCertificate20-07-2023.pdf 2023-07-20
20 4671-MUM-2015-IntimationOfGrant20-07-2023.pdf 2023-07-20

Search Strategy

1 TPOsearch_22-02-2020.pdf

ERegister / Renewals

3rd: 20 Oct 2023 (from 11/12/2017 to 11/12/2018)
4th: 20 Oct 2023 (from 11/12/2018 to 11/12/2019)
5th: 20 Oct 2023 (from 11/12/2019 to 11/12/2020)
6th: 20 Oct 2023 (from 11/12/2020 to 11/12/2021)
7th: 20 Oct 2023 (from 11/12/2021 to 11/12/2022)
8th: 20 Oct 2023 (from 11/12/2022 to 11/12/2023)
9th: 20 Oct 2023 (from 11/12/2023 to 11/12/2024)
10th: 11 Dec 2024 (from 11/12/2024 to 11/12/2025)