Abstract: The present invention is directed to a system for a self-driving wheelchair (C) for providing real time navigational assistance in both indoor and outdoor environments. The system comprises a controller unit (100) comprising a main controller (101), a sub-controller (102), a sensor (S), a communication network module (W), and a motor driver (108) to drive the motor (109); a software development unit (200) comprising a software monitoring unit (201) and a communication network module (W) capable of functioning in a closed network environment (106) or an open network environment (107); and a handheld device (300) comprising a user interface application (301) for controlling the accurate movement of the self-driving wheelchair (C) to predetermined destinations based on a navigation map, by identifying obstacles in real time using a LiDAR (103).
DESC:FIELD OF THE INVENTION
The present invention relates to a system for a self-driving wheelchair that provides navigational assistance in both indoor and outdoor environments. More particularly, the present invention relates to a software enabled self-driving wheelchair with a single LiDAR (Light Detection And Ranging) sensor, without the need for internet or GPS. The present invention also provides a method for operating the self-driving wheelchair.
BACKGROUND OF THE INVENTION
Wheelchairs are the primary means of mobility for people with different kinds of impairments. Over the last decade there has been considerable development in the field of science and technology, yet the share of technology devoted to assistive robots remains low compared to industrial robots. Traditional joystick-controlled electric powered wheelchairs have been used for decades, and there are many issues related to their control. People with certain physical impairments cannot properly control a powered wheelchair and have to depend on others, reducing their independence and quality of life.
There are two basic requirements for any self-driving wheelchair system. First, a self-driving wheelchair must navigate safely for long periods of time with minimum human intervention. The system should be programmed efficiently so that it is able to prevent harm from coming to the user. Second, the self-driven system should interact effectively with the user. Besides these two requirements, desirable features may include outdoor as well as indoor navigation, automatic mode selection based upon the current environment and task to reduce the cognitive overhead of the user, and easily adaptable user interfaces. Most people take low-level control for granted when walking or driving. For example, when walking down a busy corridor, a person is not usually aware of all of the small adjustments he makes to avoid people and other obstacles. There is therefore an urgent need for a self-driving E-wheelchair which can address the aforementioned issues.
Reference is made to US Patent Application US20120316884A1, which discloses a personal mobility vehicle, such as a wheelchair system, that includes an input audio transducer having an output coupled to a speech recognition system and an output audio transducer having an input coupled to a speech synthesis system. The wheelchair system further includes a control unit having a data processor and a memory. The data processor is coupled to the speech recognition system and to the speech synthesis system and is operable, in response to a recognized utterance made by a user, to present the user with a menu containing wheelchair system functions. The data processor is further configured, in response to at least one further recognized utterance made by the user, to select from the menu at least one wheelchair system function, to activate the selected function, and to provide audible feedback to the user via the speech synthesis system.
Another reference is made to US Patent Application US20160052139A1, which discloses systems, devices, and methods for providing, among other things, a wheelchair-assist robot for assisting a wheelchair user with everyday tasks or activities at work, at home, and the like. Further, it discloses that the mobile wheelchair-assist robot includes a wheelchair interface component configured to exchange control information with a wheelchair controller. Moreover, it provides a wheelchair-assist robot mount assembly for electrically and physically coupling to an associated wheelchair.
Another reference is made to PCT international application no. PCT/CN2017/072100, which discloses systems and methods for controlling an intelligent wheelchair. The system may include a processor that performs operations including receiving information, building a map, planning a route and generating control parameters; a movement module that executes the control parameters to move around and carries sensors for sensing information; and a gimbal for holding sensors to sense information. However, that application discloses building a map using an image processing method and planning a route; it does not provide dynamic obstacle detection and avoidance, or re-planning of the route to avoid dynamic obstacles. It also requires networking for connecting multiple robots, and GPS, GLONASS, COMPASS, QZSS and WiFi for positioning, all of which are used for outdoor use of the wheelchair.
However, the existing inventions are costly, and none of the existing prior art provides a user-friendly, self-propelled system which can function with minimum human intervention and maximum accuracy. There is therefore an urgent need for a self-driving system which is dynamic, works on real time information, and does not require positioning or networking devices. There is also a dire need for a system which is affordable, adaptable to any indoor environment, and offers an easier-to-operate user interface than existing technologies, which demand frequent concentration or support from the user for the movement of the wheelchair.
OBJECT OF THE INVENTION
In order to obviate the drawbacks in the existing state of the art, the main object of the present invention is to provide a system for self-driving wheelchair that provides navigational assistance in both indoor and outdoor environments without the need of internet or GPS.
Another object of the present invention is to provide a system for self-driving wheelchair wherein wheelchair is coupled with user device enabling creation of navigation map thereby moving the wheelchair to the predetermined destinations.
Yet another object of the present invention is to provide a system for software enabled self-driving wheelchair using single LiDAR sensor.
Yet another object of the present invention is to provide a system for self-driving wheelchair capable of functioning in multiple driving modes depending upon the necessity of the user so that the user can effectively drive wheelchair for longer duration without any restrictions.
Yet another object of the present invention is to provide a system for self-driving wheelchair adaptable to any indoor or outdoor environment with easy to operate user interface.
Yet another object of the present invention is to provide a system for self-driving wheelchair capable of functioning with minimum human intervention and maximum accuracy.
Yet another object of the present invention is to provide a method of operating software enabled self-driving wheelchair to provide navigational assistance in both indoor and outdoor environments with minimum human intervention and maximum accuracy.
SUMMARY OF THE INVENTION
The present invention relates to a system for a self-driving wheelchair that provides navigational assistance in both indoor and outdoor environments. More particularly, the present invention relates to a software enabled self-driving wheelchair system with a single LiDAR (Light Detection And Ranging) sensor.
The present invention relates to a system for a self-driving wheelchair comprising a driving unit such as a wheelchair, a processor unit also referred to as a controller unit, a sensor unit, a memory unit, and a communication module. The system further includes a user interface unit in operative connection with the processor unit and at least one application stored on the memory unit and executable by the processor unit. The at least one application is executable to provide information via the user interface unit to a user of the wheelchair, related to data from the sensor, to assist the user in navigating the chair in accordance with parameters stored in the memory unit.
The said processor unit or controller unit of the self-driving system comprises two controllers, i.e. the main controller and a sub-controller. The main controller is used for computational processing and decision making, and is equipped with the Robot Operating System (ROS) framework, a widely used meta-operating system for robotic applications. The main controller is capable of performing all the tasks; however, to reduce the load on it, the sub-controller is used as an interface for the encoder sensors and the Inertial Measurement Unit (IMU) sensor, and for controlling the motor drivers.
The sensor unit of the said system comprises encoder sensors and an Inertial Measurement Unit (IMU) sensor. The system uses a 360-degree 2D LiDAR (Light Detection and Ranging) sensor to identify objects in the environment. The LiDAR emits pulses of light and calculates the time taken by each pulse to return to the sensor, which is called the time of flight. In this way, a 360-degree point cloud of the laser scan can be generated and the distance between the LiDAR and the objects can be measured. Using the wheel rotary encoders and the IMU, the current position and orientation of the wheelchair can be estimated. The encoders convert the rotations of the wheels into electric signals which can be used for detecting the speed and distance travelled by the wheelchair. The IMU is used to determine the orientation of the wheelchair.
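By way of a non-limiting illustration, the time-of-flight principle described above can be sketched in a few lines of Python; the speed-of-light constant and the function names are illustrative only and form no part of the claimed system:

```python
# Illustrative sketch: converting a LiDAR time-of-flight measurement and a
# beam angle into a 2D point. The pulse travels out and back, so the one-way
# distance is half the round-trip distance.
import math

SPEED_OF_LIGHT = 299_792_458.0  # metres per second


def tof_to_distance(time_of_flight_s: float) -> float:
    """One-way distance to the object from the round-trip time of flight."""
    return SPEED_OF_LIGHT * time_of_flight_s / 2.0


def beam_to_point(distance_m: float, angle_rad: float) -> tuple:
    """Convert one beam of the 360-degree scan into Cartesian coordinates."""
    return (distance_m * math.cos(angle_rad), distance_m * math.sin(angle_rad))
```

For example, a pulse returning after roughly 33 nanoseconds corresponds to an object about 5 metres away; repeating this for every beam angle of a full rotation yields the 360-degree point cloud referred to above.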
The communication module comprises a Bluetooth enabled Android application. Bluetooth is integrated with the Android application to communicate with the wheelchair wirelessly. The application can be easily installed on a device such as a smartphone or tablet, and can establish the wireless connection to the wheelchair at any time after the system is powered on.
There are three main phases in performing self-navigation by the wheelchair: Mapping, Localization and Navigation.
In the Mapping phase, the said self-driving system creates a map of the environment, with the help of the processor and sensors, so that a destination can be reached. To create a map of the environment, the system has to know the position and orientation of the wheelchair, which is called pose estimation; conversely, a proper pose estimate needs a good map of the environment. With the help of the fused data from the encoders and the IMU, the map of the environment is created and saved in ROS files. The wheelchair has to be moved manually to every corner of the environment to generate a complete map in ROS.
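A non-limiting sketch of the mapping idea follows: each laser beam marks the cells it crosses as free and the cell at its endpoint as occupied. The grid resolution and the simple ray-stepping used here are illustrative assumptions, not the behaviour of the actual ROS mapping package:

```python
# Illustrative occupancy-grid update: cells along a beam are marked free and
# the beam's endpoint cell is marked occupied. The grid is a sparse dict
# mapping (i, j) cell indices to 'free' or 'occupied'.
def update_grid(grid, pose_xy, hit_xy, resolution=0.05):
    """Trace one laser beam from the wheelchair pose to the detected hit."""
    x0, y0 = pose_xy
    x1, y1 = hit_xy
    # Step along the beam at roughly one cell per step.
    steps = max(1, int(max(abs(x1 - x0), abs(y1 - y0)) / resolution))
    for k in range(steps):
        t = k / steps
        cell = (int((x0 + t * (x1 - x0)) / resolution),
                int((y0 + t * (y1 - y0)) / resolution))
        grid[cell] = 'free'
    # The endpoint is where the pulse reflected: an obstacle.
    grid[(int(x1 / resolution), int(y1 / resolution))] = 'occupied'
    return grid
```

Accumulating such updates while the wheelchair is moved manually around the environment, as described above, gradually fills in the complete map.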
In the Localization phase, after the map of the environment has been created, the wheelchair has to be localized properly in it. In order to obtain the position and orientation of the wheelchair, the system determines the probability of each candidate wheelchair pose. As the wheelchair moves in the environment, the sensor readings are compared against the probable wheelchair poses. By this method, the wheelchair pose at any arbitrary point on the map can be obtained.
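The probabilistic pose comparison described above can be illustrated, in a non-limiting way, by a particle-weighting step in which each candidate pose is scored by how well a measured range agrees with the range the map would predict from that pose; the Gaussian sensor model and its standard deviation are assumptions of this sketch, not parameters of the actual system:

```python
# Illustrative particle weighting: candidate poses whose predicted sensor
# reading matches the actual reading receive high weight; poor matches
# receive weight near zero. Weights are normalized to sum to one.
import math


def weight_particles(particles, measured_range, predicted_range_fn, sigma=0.2):
    """particles: list of (x, y, theta) poses. Returns normalized weights."""
    weights = []
    for pose in particles:
        error = measured_range - predicted_range_fn(pose)
        weights.append(math.exp(-(error ** 2) / (2 * sigma ** 2)))
    total = sum(weights)
    return [w / total for w in weights]
```

Repeating this weighting (and resampling) as the wheelchair moves concentrates probability on the true pose, which is how the pose at any arbitrary point on the map is obtained.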
In the Navigation phase, the navigation stack of a typical ROS environment, which comprises a set of packages and nodes, is used. The navigation stack uses algorithms to calculate the shortest path to the destination while avoiding dynamic obstacles throughout the path.
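As a non-limiting illustration of the shortest-path computation, the toy planner below runs a breadth-first search over a 4-connected occupancy grid; the real navigation stack uses costmaps and planners such as Dijkstra or A*, so this sketch only conveys the principle:

```python
# Illustrative shortest-path search on a small occupancy grid. '#' marks an
# obstacle cell. Returns the length of the shortest obstacle-free route in
# grid steps, or None if the destination cannot be reached.
from collections import deque


def shortest_path(grid, start, goal):
    rows, cols = len(grid), len(grid[0])
    queue, seen = deque([(start, 0)]), {start}
    while queue:
        (r, c), dist = queue.popleft()
        if (r, c) == goal:
            return dist
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] != '#' and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append(((nr, nc), dist + 1))
    return None  # no obstacle-free route exists
```

The `None` result corresponds to the situation, described later, in which no available path to the destination remains and navigation stops with an error.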
For communication between the user and the self-driving wheelchair, Bluetooth connectivity is established between the mobile device and the wheelchair through the Bluetooth enabled Android application. After establishing the Bluetooth connection, the user can control the wheelchair in three different modes: manual, fixed destination, and auto mode. Both the fixed and auto modes run the wheelchair in a self-driving manner. In the manual mode, the user controls the wheelchair simply by touching the direction tabs in the application. The fixed destination mode is used for navigating to common places, such as shops in an airport or rooms in a hospital; in this mode, the wheelchair always drives itself to a predetermined fixed destination, using data from the sensors for its movement. The auto mode is used in any facility for self-driving to a destination that is not fixed, meaning that the user can set the destination dynamically to go anywhere.
The present invention also discloses the method for self-driving of the wheelchair. When the self-driving system is initiated, the sub-controller waits for the destination coordinates from the Android phone. Once the destination is set through the app, by touching the required destination coordinate on the map, the Bluetooth module transmits the coordinates of the destination to the sub-controller. The data is then processed and compiled into final grid coordinates, which are transferred to the main controller. As described earlier, the main controller comprises the navigation stack packages. The navigation stack is initialized and provided with the destination coordinates; it uses a shortest path algorithm and calculates the most suitable path to the destination. Once the path calculation is completed, velocity commands (linear and angular velocities) are generated and sent back to the sub-controller. The sub-controller processes the data and converts the commands into Pulse Width Modulation (PWM) values, which are used to drive the wheels of the wheelchair through the motor drivers.
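The final conversion step may be illustrated, without limitation, by differential-drive kinematics mapping a linear and angular velocity command onto per-wheel PWM duty cycles; the wheel base, the maximum wheel speed and the velocity-to-PWM scale are assumed values for the sketch only:

```python
# Illustrative conversion of a (linear, angular) velocity command into left
# and right PWM duty cycles for a differential-drive base. Duty cycles are
# clamped to the common 8-bit range [-255, 255] (sign encodes direction).
def velocity_to_pwm(linear, angular, wheel_base=0.55, max_wheel_speed=1.0):
    # Differential-drive kinematics: the turning rate adds speed to one
    # wheel and subtracts it from the other.
    left = linear - angular * wheel_base / 2.0
    right = linear + angular * wheel_base / 2.0
    scale = 255.0 / max_wheel_speed
    clamp = lambda v: max(-255, min(255, int(v * scale)))
    return clamp(left), clamp(right)
```

A pure forward command drives both wheels equally, while a pure rotation command drives them in opposite directions, which is how the sub-controller's PWM outputs steer the chair.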
If any dynamic obstacles are detected along the path, the navigation system generates a new route to the destination, avoiding collisions with the obstacles. The navigation system can reroute any number of times, provided there is some way to reach the destination. If the wheelchair is stuck with obstacles all around, it performs a 360-degree rotation for a certain period of time to check whether the obstacles have cleared. If the obstacles are cleared and a path is available, the wheelchair tries to reach the destination. In the worst case, if the obstacles have not cleared and there is no available path to the destination, the navigation process stops with an error message. Throughout the navigation process, the LiDAR plays an important role: it is used for generating the map of the environment, localizing the wheelchair, and detecting dynamic obstacles.
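A non-limiting sketch of this rerouting and recovery logic follows, with the planner and the 360-degree rotation behaviour represented by stand-in callables rather than the actual navigation stack:

```python
# Illustrative replan/recover loop: replan when blocked, perform one
# rotation-in-place recovery to rescan the surroundings, and stop with an
# error if the route is still unavailable.
def navigate(plan_fn, rotate_recovery_fn, max_recoveries=1):
    """plan_fn returns a path, or None when the goal is unreachable now."""
    recoveries = 0
    while True:
        path = plan_fn()
        if path is not None:
            return 'reached'
        if recoveries >= max_recoveries:
            return 'error: no path to destination'
        rotate_recovery_fn()  # 360-degree spin to check if obstacles cleared
        recoveries += 1
```

In the real system the loop would execute the path rather than merely return it; the sketch only shows the decision structure of rerouting, recovery and the terminal error.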
Thus, the present invention overcomes most of the above issues using minimal hardware and a stable software system with an easy user interface. The present invention can reduce the dependence of people with mobility issues on others and improve their quality of life through smart transportation.
BRIEF DESCRIPTION OF DRAWINGS
Fig. 1 depicts the system architecture of self-driving wheelchair.
Fig. 2a, 2b, 2c depicts steps for installation of application for self-driving wheelchair on handheld device.
Fig. 3a, 3b, 3c depicts different modes of navigation.
Fig. 4a depicts virtual environment created in gazebo simulator.
Fig. 4b depicts map generated in Rviz visualizer.
Fig. 5 depicts real time mapping of indoor area for testing.
DETAILED DESCRIPTION OF THE INVENTION WITH ILLUSTRATIONS AND NON-LIMITING EXAMPLES
The present invention provides a self-driving wheelchair system for providing real time navigational assistance in both indoor and outdoor environments, where the user can select a destination simply by touching the desired position on the generated map in the mobile application. Once the destination is provided, the shortest path is calculated using different algorithms and techniques, and velocity commands are given to the motor drives, which ultimately move the wheelchair accordingly while avoiding all possible obstacles. With the help of the present system, users can freely move to all available places without the help of a second person.
As shown in Fig. 1, the self-driving wheelchair system (C) comprises a controller unit (100) comprising a main controller (101), a sub-controller (102), at least one sensor (S), a communication network module (W), a motor driver (108) to drive the motor (109), and a memory unit (not shown in the Figure). The sensor (S) in the system comprises a LiDAR (103) capable of identifying surrounding and moving objects in the closed network environment (106) to avoid any sudden obstacle in the path, and rotary encoders (104) or an inertial measurement unit (IMU) (105) or both, configured to collect data from the surrounding environment to identify the accurate pose of the wheelchair.
The sub-controller (102) is configured to receive the data from the sensors (S) directly and transmit the received data to the main controller (101). The main controller (101) is powered by the Robot Operating System (ROS) and is configured for compilation and decoding of the data received from the sub-controller (102) in the form of navigation stack packages.
The system also comprises a software development unit (200) comprising a software monitoring unit (201) and a communication network module (W) capable of functioning in a closed network environment (106) or an open network environment (107). The software monitoring unit (201) comprises the Robot Operating System (ROS) that powers the main controller (101), said ROS being installed in an operating system configured to visualize 2D and 3D values from ROS data in both the closed network environment (106) and the open network environment (107), to create at least one navigation map, and to debug the codes so as to enable tracking and monitoring of said self-driving wheelchair (C) remotely.
The system is operated through a handheld device (300) comprising a user interface application (301) and a communication network module (W) to enable the user to access a preferable mode of navigation as well as to control and monitor the navigation of said self-driving wheelchair (C). The user interface application (301) is stored on the memory unit and executable by the processor to provide information, via the user interface, to a user of the wheelchair related to data from the sensor (S), so as to navigate the chair in accordance with parameters stored in the memory unit. The communication module (W) couples the sub-controller (102) with the handheld device (300), which in turn couples the wheelchair (C) with said handheld device (300).
The communication network module (W) comprises communication networks capable of operating in a closed network environment (106) having a short range wireless interconnection such as Bluetooth, as well as an open network environment (107) having global wireless interconnected networks such as the internet.
The system of the present invention is deployed in such a manner that said self-driving wheelchair (C) is capable of accurate movement to predetermined destinations based on said navigation map, by identifying obstacles in real time using said LiDAR (103) and correcting the pose of the wheelchair (C) in real time using the rotary encoder (104) and the IMU (105), either alone or in combination, in the closed network environment (106) as well as in the open network environment (107), based on said navigation map created in real time by the visualization tool, facilitating accurate movement of said self-driving wheelchair. The system enables creation of the navigation map in any mode of navigation to move the wheelchair (C) to predetermined destinations depending upon the necessity of the user, so that the user can effectively drive the wheelchair for a long duration without any restrictions.
Sub-Controller
The sub-controller (102) is used to receive data from the sensors (S) directly and transmit the data to the main controller (101). The rotary encoders (104) convert the rotations of the wheels into electrical signals which are used to identify the direction of rotation of the wheels, which is needed to identify the displacement made by the wheelchair. Through the IMU sensor (105), the orientation of the wheelchair is determined; the IMU sensor (105) includes a gyroscope which outputs yaw values from which the orientation of the wheelchair can be determined. Although the rotary encoders (104) alone can be used to identify the pose of the wheelchair, the IMU sensor (105) is used to increase the accuracy of the pose. The communication network module (W) provides wireless serial communication between the handheld device (300) and the sub-controller (102).
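The combined use of the encoders and the IMU can be illustrated, without limitation, by the following pose update, in which encoder ticks give the distance travelled and the IMU gyroscope supplies the yaw; the ticks-per-metre calibration is an assumed value for the sketch:

```python
# Illustrative fused odometry step: distance comes from the average of the
# left and right wheel encoder ticks, heading comes from the IMU yaw.
import math


def update_pose(pose, left_ticks, right_ticks, yaw_rad, ticks_per_metre=1000.0):
    """pose: (x, y, theta). Returns the new pose after one update."""
    distance = (left_ticks + right_ticks) / (2.0 * ticks_per_metre)
    x, y, _ = pose
    return (x + distance * math.cos(yaw_rad),
            y + distance * math.sin(yaw_rad),
            yaw_rad)
```

Taking the heading from the gyroscope rather than from the encoder difference reflects the point above that the IMU is added to improve the accuracy of the pose estimate.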
The sub-controller (102) also receives two sets of data from the main controller (101). One set of said data is compiled and converted to Pulse Width Modulation (PWM) values which are directed to motor drivers (108) to drive the motor (109). The other set of said data is feedback data from navigation block which contains the status of the wheelchair on the map. This feedback data is sent back to handheld device (300) through the communication network module (W).
Main-Controller
A Raspberry Pi 3 is used as the main controller (101), where the entire software framework is stored. The data from all the sensors (S) is directed to the main controller (101) for compilation and decoding. The main controller (101) is powered by the Robot Operating System (ROS) installed on an Ubuntu Core platform. ROS is an open source platform widely used for robotic applications; owing to its package division and node handling capabilities, it stands out as one of the most utilized frameworks for robotics. All ROS communications take place between bundles of code known as packages. ROS services are structured in two parts: on one side there is the service server, which provides the functionality, while on the other side there is the service client, which calls or requests the service functionality. Also, since ROS is a meta operating system, it can be installed directly on any Linux distribution. All the raw data is compiled in ROS, which outputs the final commands back to the sub-controller (102).
Software Development Unit
A computing device with ROS installed on the Ubuntu operating system is used as the software development unit (200). The main controller (101) can be accessed remotely through SSH (Secure Shell) to monitor the navigation process of the wheelchair. The software development unit (200) is only used for development purposes in the system, such as monitoring the system remotely and debugging the codes; it can be omitted for users who do not require development. An Rviz visualization tool is used to monitor the ROS communications. Rviz is a 3D visualization tool for ROS which can visualize both 2D and 3D values from ROS topics.
Rviz contains several panels. The top panel is the Tools panel, which contains a set of tools, among which 2D Pose Estimate and 2D Nav Goal are the most important. The 2D Pose Estimate tool allows the developer to set an initial pose of the wheelchair on the map to seed the localization system. The 2D Nav Goal tool lets the developer set a goal, which is sent on the /goal ROS topic. The left panel is the Displays panel, in which the data from the topics can be viewed; using the Add button, the required display types are selected from the list to view the information coming from the topics. The right panel is the Views panel, where the viewing parameters of Rviz can be configured. Using Rviz, data such as robot models and data from different sensors such as cameras and lasers can be visualized. After debugging and finalizing all the parameters, the use of the software development unit (200) can be omitted for the user. The software development unit (200) is also used to simulate the entire set of self-driving ROS packages using the Gazebo simulator.
Handheld Device
The end user can access the self-driving wheelchair using any handheld device (300), such as a smartphone or a tablet. The handheld device, preferably an Android device, connects through a communication network module (W), enabling the creation of the navigation map and thereby moving the wheelchair (C) to the predetermined destinations; it is capable of functioning in any mode of navigation depending upon the necessity of the user, so that the user can effectively drive the wheelchair for a long duration without any restrictions.
An Android application, or user interface application (301), is installed on the handheld device (300) of the user, enabling the handheld device (300) to transmit commands to the sub-controller (102) and connecting the application with the communication network module (W) of the controller unit (100); the application can establish the wireless connection to the wheelchair (C) at any time after the system is powered on. The communication network module (W) is capable of operating in a closed network environment (106) having a short range wireless interconnection such as Bluetooth, as well as an open network environment (107) having global wireless interconnected networks such as the internet.
The present invention also discloses a method of operating the system for the self-driving wheelchair (C) for providing real time navigational assistance in both indoor and outdoor environments. The sensors (S) identify the position of the wheelchair (C), including its pose and orientation, convert it into electrical signals to obtain data in real time, and transmit the obtained data relating to the position of the wheelchair (C) to the sub-controller (102). The user sets the destination coordinates on the navigation map, in the form of input data, through said handheld device (300), and the input data of the destination coordinates is transmitted to the sub-controller (102) via the communication network module (W). The input data is received and processed by the sub-controller (102) for transmission to the main controller (101) for compilation and decoding. The main controller (101), powered by the Robot Operating System (ROS), calculates the shortest path, the service server providing the functionality for the user request and the service client calling for the service functionality.
Two sets of data are generated by the main-controller (101) in order to transmit to the sub-controller (102), said two sets of data being:
- Pulse Width Modulation (PWM) values directed to the motor drivers (108), which are used to drive the motor (109), and
- Feedback data from the navigation block containing the status of the wheelchair (C) on said navigation map, directed to the handheld device (300) through the communication network module (W).
A user interface application (301) is installed on said handheld device (300), enabling the handheld device (300) to transmit the commands to the sub-controller (102) and connecting the application with the communication network module (W), enabling a network connection to the wheelchair in real time.
The method enables the self-driving wheelchair (C) to move accurately to predetermined destinations based on said navigation map, by identifying obstacles in real time using said LiDAR (103) and by correcting the pose of the wheelchair (C) in real time using the rotary encoder (104) and the IMU (105), either alone or in combination, in the closed network environment (106) as well as in the open network environment (107), based on said navigation map created in real time by the visualization tool, facilitating accurate movement of said self-driving wheelchair.
The Android application (301) uses the touch screen of the handheld device (300) to obtain the goal coordinates, and the inbuilt communication network module (W) transmits the commands to the sub-controller (102). The application in the handheld device (300) must be associated with the communication network module (W) that is associated with the sub-controller (102). The application (301) requests the user for permission to access the communication network module (W) in the handheld device (300), as shown in Fig. 2a. Once the permission is granted, the home screen appears as shown in Fig. 2b, where the next step is to connect the application (301) with the communication network module (W). The application contains three distinct modes: Manual, Fixed and Auto, as shown in Fig. 2c.
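By way of a non-limiting illustration, a touch on the displayed map could be converted into goal coordinates in the map frame as follows; the map resolution and origin follow the usual occupancy-grid convention, but the specific default values are assumptions for the sketch:

```python
# Illustrative conversion from a touched screen pixel to map-frame goal
# coordinates. Screen pixel rows grow downward while map y grows upward,
# hence the vertical flip using the image height.
def touch_to_goal(pixel_x, pixel_y, image_height,
                  resolution=0.05, origin=(-10.0, -10.0)):
    """Returns (map_x, map_y) in metres for the touched pixel."""
    map_x = origin[0] + pixel_x * resolution
    map_y = origin[1] + (image_height - pixel_y) * resolution
    return (map_x, map_y)
```

The resulting coordinates are what the application would transmit to the sub-controller as the destination.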
In manual mode, the wheelchair (C) is directly operated by the user without the intervention of the ROS. The user presses the desired direction button provided on the application (301), as shown in Fig. 3a. This mode also enables the user to take control of the wheelchair at any time while in auto or fixed mode (Fig. 3b, 3c).
The fixed and auto modes are the auto navigation modes. The generated map is shown on the Android application (301) screen so that the user can provide the destination coordinates. In fixed mode, the destination places are restricted to a limited set within the environment; as shown in Fig. 3b, only the predetermined destinations are accessible for auto navigation. The most frequently visited places in the area, such as food corners, inquiry centres and washrooms, are identified and used for the fixed destination mode, so that the user can set the destination without searching the entire map. The auto mode provides the complete navigation feature: as shown in Fig. 3c, the user can set a destination anywhere on the provided map. The user is also notified about the goal status, warnings and error messages. At any time, the user can change from one mode to another without any problem; during auto mode, the manual mode can be accessed in case the user wants to take over control.
Experimental data:
To check the stability and performance of the system, a test scenario was created. Since the wheelchair is developed for indoor purposes, a room was considered and its map generated. Before testing the wheelchair in real time, the entire system was simulated to check the stability of the ROS packages. The wheelchair model was created in SolidWorks software and imported as a URDF (Unified Robot Description Format) file, which is an XML format for representing a robot model. Using the Gazebo simulator, as shown in Fig. 4a, the entire environment was created and the wheelchair was simulated in it. The ROS packages were then deployed, and the wheelchair was navigated in the created virtual environment. Using Rviz, the navigation flow of all the topics in ROS can be visualized. The map generation of the virtual environment is shown in Fig. 4b.
To test the ROS packages in simulation, five different destinations were fixed, and tests were performed with and without dynamic obstacles. The average values over all the iterations of the simulation are shown in Table 1.
Table 1: Simulation results from source 'A' to multiple destinations
Source: A

Destination           B              C              D              E
Dynamic obstacles     No     Yes     No     Yes     No     Yes     No     Yes
Task success          Yes    Yes     Yes    Yes     Yes    Yes     Yes    Yes
Time (sec)            12.89  16.6    28.15  34.66   34.88  41.02   29.5   36.02
Collisions            0      0       0      0       0      0       0      0
Similar to the simulation test cases, five destinations were fixed and represented on the map in fixed mode, and the wheelchair self-navigated to each destination. Fig. 5 shows the real time map of the indoor area in which the tests were performed. The readings for the above-mentioned parameters were noted, both with and without dynamic obstacles. The average readings over all the trials are given in Table 2.
Table 2: Real-time navigation results from source 'A' to multiple destinations, without (No) and with (Yes) dynamic obstacles (path length is per destination)

| Parameter | B (No) | B (Yes) | C (No) | C (Yes) | D (No) | D (Yes) | E (No) | E (Yes) |
|---|---|---|---|---|---|---|---|---|
| Task success | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
| Path length (m) | 3.17 | 3.17 | 5.3 | 5.3 | 4.75 | 4.75 | 4.41 | 4.41 |
| Time (sec) | 17.656 | 24.036 | 35.746 | 36.822 | 35.682 | 41.826 | 28.882 | 40.008 |
| Path length optimality ratio | 0.645 | 0.616 | 0.598 | 0.661 | 0.693 | 0.737 | 0.694 | 0.547 |
| Collisions | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| Average speed (m/sec) | 0.278 | 0.214 | 0.248 | 0.218 | 0.192 | 0.154 | 0.22 | 0.202 |
| Response time (sec) | 1.534 | 1.572 | 1.444 | 1.162 | 1.782 | 1.504 | 1.698 | 1.48 |
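The tabulated values are mutually consistent if the path length is read as the optimal (shortest) path length, the optimality ratio as optimal length divided by actual travelled length, and the average speed as actual length divided by time. This interpretation is inferred from the numbers rather than stated in the specification; a quick check over the no-obstacle runs:

```python
# (optimal path length m, optimality ratio, time s, tabulated avg speed m/s)
# from Table 2, runs without dynamic obstacles
rows = {
    "B": (3.17, 0.645, 17.656, 0.278),
    "C": (5.30, 0.598, 35.746, 0.248),
    "D": (4.75, 0.693, 35.682, 0.192),
    "E": (4.41, 0.694, 28.882, 0.220),
}
for dest, (optimal, ratio, t, tabulated) in rows.items():
    actual = optimal / ratio   # path length actually travelled
    speed = actual / t         # m/s over the run
    # reproduces the tabulated average speed to within rounding
    assert abs(speed - tabulated) < 1e-3, dest
```
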
The ability of the wheelchair to pass through a door was also tested. The wheelchair width is 0.63 m and the door width is 1.22 m, leaving a total lateral clearance of 0.59 m (about 0.30 m per side).
A self-driving wheelchair system is developed and presented in which the user can simply set a destination by touching the desired position on the generated map of the indoor environment in the mobile app. The wheelchair then takes the user to the desired destination automatically, without the need for continuous manual control.
CLAIMS:
1. A system for self-driving wheelchair (C) for providing real-time navigational assistance in both indoor and outdoor environments, said system comprising:
- a controller unit (100) comprising a main controller (101), a sub-controller (102), at least one sensor (S), a communication network module (W), and a motor driver (108) to drive the motor (109),
- a software development unit (200) comprising a software monitoring unit (201) and a communication network module (W) capable of functioning in a closed network environment (106) or an open network environment (107), and
- a handheld device (300) comprising a user interface application (301) and a communication network module (W) to enable the user to access a preferred mode of navigation as well as to control and monitor the navigation of said self-driving wheelchair (C),
wherein,
- said communication network module (W) comprises communication networks capable of operating in a closed network environment (106) having a short-range wireless interconnection such as Bluetooth, as well as an open network environment (107) having globally interconnected wireless networks such as the Internet,
- said software monitoring unit (201) comprises a Robot Operating System (ROS) that powers the main controller (101), said ROS being configured to visualize 2D and 3D values from ROS data in both the closed network environment (106) and the open network environment (107) to create at least one navigation map, and to debug the code so as to enable tracking and monitoring of said self-driving wheelchair (C) remotely,
- said at least one sensor (S) comprises a LiDAR (103) capable of identifying the surroundings and moving objects in said closed network environment (106) to avoid any sudden obstacle in the path identified in said navigation map, rotary encoders (104) to identify the pose of the self-driving wheelchair (C), and an inertial measurement unit (IMU) (105) to increase the accuracy of the pose of the self-driving wheelchair (C),
said system being deployed in a manner such that said self-driving wheelchair (C) is capable of accurate movement to predetermined destinations based on said navigation map, by identifying obstacles in real time by said LiDAR (103) and correcting the pose of the wheelchair (C) in real time by the rotary encoders (104) and IMU (105), either alone or in combination, in the closed network environment (106) as well as in the open network environment (107), based on said navigation map created in real time by the visualization tool, thereby facilitating accurate movement of said self-driving wheelchair.
2. The system as claimed in claim 1, wherein said rotary encoders (104) convert rotations of the wheels into electrical signals that are used to identify the direction of rotation of each wheel, thereby computing the displacement made by the wheelchair.
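A minimal sketch of the encoder-based displacement computation described above, for a differential-drive base; the encoder resolution, wheel radius and wheel base values are hypothetical and not given in the specification:

```python
import math

TICKS_PER_REV = 360   # hypothetical encoder resolution (ticks per revolution)
WHEEL_RADIUS = 0.15   # m, hypothetical
WHEEL_BASE = 0.55     # m, distance between the two wheels, hypothetical

def wheel_distance(ticks):
    """Signed encoder ticks (sign gives rotation direction) -> wheel travel in m."""
    return 2 * math.pi * WHEEL_RADIUS * ticks / TICKS_PER_REV

def odometry_step(left_ticks, right_ticks):
    """Forward displacement and heading change over one sampling interval."""
    d_left = wheel_distance(left_ticks)
    d_right = wheel_distance(right_ticks)
    d_center = (d_left + d_right) / 2           # displacement of chair centre
    d_theta = (d_right - d_left) / WHEEL_BASE   # heading change in radians
    return d_center, d_theta

# One full revolution of both wheels: straight-line motion, no turn
d, th = odometry_step(360, 360)
assert abs(d - 2 * math.pi * WHEEL_RADIUS) < 1e-9 and th == 0.0
```
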
3. The system as claimed in claim 1 wherein said inertial measurement unit (IMU) (105) determines the orientation of the wheelchair.
4. The system as claimed in claim 1, wherein said closed network environment (106) coupling the sub-controller (102) with the handheld device (300) is Bluetooth.
5. The system as claimed in claim 1, wherein said open network environment (107) is the Internet.
6. The system as claimed in claim 1, wherein said LiDAR (103) is a 360-degree 2D LiDAR (Light Detection and Ranging) sensor for identifying objects in the surrounding environment, generating the map of the environment, localizing the wheelchair and detecting dynamic obstacles.
7. The system as claimed in claim 1, wherein the Robot Operating System (ROS) is installed on the Ubuntu operating system.
8. The system as claimed in claim 1, wherein the operating system for said handheld device (300) is Android.
9. The system as claimed in claim 1, wherein said main controller (101) is a Raspberry Pi 3.
10. The system as claimed in claim 1, wherein the sub-controller (102) is used as an interface for the rotary encoders (104) and the inertial measurement unit (IMU) (105) and for controlling the motor driver (108), to reduce the load on the main controller (101).
11. The system as claimed in claim 1, wherein self-navigation is performed by mapping, localization and navigation, namely by:
- estimating the pose by determining the position and orientation of the wheelchair through said rotary encoders (104) and said IMU (105) to create a map of the environment for reaching the predetermined destination with the help of the processor and sensors, said created map being stored in the ROS system files,
- determining probable wheelchair poses by the system, so that when the wheelchair (C) moves in the environment, the sensor (S) readings are compared against the probable wheelchair poses,
- calculating the shortest path to the destination by the navigation stack, in the form of a set of packages and nodes, while avoiding dynamic obstacles throughout the path.
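The shortest-path computation in the steps above can be illustrated with a Dijkstra search over a small occupancy grid. This is a generic sketch of the technique, not the actual ROS navigation stack planner; re-planning around a newly detected obstacle amounts to re-running the search on the updated grid:

```python
import heapq

def shortest_path(grid, start, goal):
    """Dijkstra over a 4-connected occupancy grid (1 = obstacle, 0 = free)."""
    rows, cols = len(grid), len(grid[0])
    dist = {start: 0}
    prev = {}
    heap = [(0, start)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) == goal:
            # Reconstruct the route by walking the predecessor links back
            path = [(r, c)]
            while path[-1] != start:
                path.append(prev[path[-1]])
            return path[::-1]
        if d > dist.get((r, c), float("inf")):
            continue  # stale heap entry
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                nd = d + 1
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(heap, (nd, (nr, nc)))
    return None  # no route: every way to the goal is blocked

# A wall forces the path around the obstacle column
grid = [[0, 1, 0],
        [0, 1, 0],
        [0, 0, 0]]
path = shortest_path(grid, (0, 0), (0, 2))
assert path[0] == (0, 0) and path[-1] == (0, 2) and len(path) == 7
```
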
12. The system as claimed in claim 1, wherein said ROS data is monitored by the RViz visualization tool providing 2D and 3D values on a graphical user interface.
13. The system as claimed in claim 1, wherein said modes of navigation are manual mode, fixed mode and auto mode, of which both the fixed and auto modes run the wheelchair in a self-driving manner.
14. The system as claimed in claim 13, wherein in the manual mode the user can control the wheelchair by touching the direction tabs on the Bluetooth-enabled application, without the intervention of ROS.
15. The system as claimed in claim 13 wherein the fixed mode needs data from the sensors (S) for its movement so that the wheelchair self-drives itself to a predetermined fixed destination.
16. The system as claimed in claim 13 wherein said auto mode allows the user to dynamically set a destination on the provided map and notifies the user about the goal status, warnings and error messages.
17. The system as claimed in claim 13 wherein the user can change from one mode to another at any time.
18. The system as claimed in claim 1 wherein said system is capable of functioning with minimum human intervention and maximum accuracy.
19. A method for operating the system for self-driving wheelchair (C) providing real-time navigational assistance in both indoor and outdoor environments as claimed in claim 1, wherein said method comprises the steps of:
(a) identifying the pose of the wheelchair (C), including its position and orientation, by said at least one sensor (S), which converts the readings into electrical signals in real time to obtain data on the position of the wheelchair (C),
(b) input of destination coordinates on the navigation map by a user through said handheld device (300), and transmission of said input data of the destination coordinates to the sub-controller (102) via the communication network module (W) capable of functioning in the closed network environment (106) or the open network environment (107),
(c) receiving and processing of said input data by said sub-controller (102) for transmission to the main controller (101),
receiving data from said at least one sensor (S) by the sub-controller (102) and transmitting the data directly to said main controller (101) for compilation and decoding by said main controller (101) powered by the Robot Operating System (ROS), wherein ROS calculates the shortest path by:
- providing the functionality to the user request through service server
- calling for the service functionality by the service client
(d) generating two sets of data by the main controller (101) and transmitting them to the sub-controller (102), said two sets of data being:
- Pulse Width Modulation (PWM) values directed to the motor driver (108), which are used to drive the motor (109), and
- feedback data from the navigation block containing the status of the wheelchair (C) on said navigation map, directed to the handheld device (300) through the communication network module (W),
(e) installing an application on said handheld device (300), enabling the handheld device (300) to transmit commands to the sub-controller (102), and connecting the application with the communication network module (W), enabling a network connection to the wheelchair in real time,
wherein the method enables the self-driving wheelchair (C) to move accurately to predetermined destinations based on said navigation map, by identifying obstacles in real time by said LiDAR (103) and correcting the pose of the wheelchair (C) in real time by the rotary encoders (104) and IMU (105), either alone or in combination, in the closed network environment (106) as well as in the open network environment (107), based on said navigation map created in real time by the visualization tool, thereby facilitating accurate movement of said self-driving wheelchair.
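Step (d) of the method maps a velocity command to PWM values for the motor driver. A minimal sketch of such a mapping for a differential-drive base follows; the `MAX_SPEED` and `WHEEL_BASE` constants are hypothetical, and the real system's scaling may differ:

```python
MAX_SPEED = 0.5     # m/s, hypothetical wheel speed at 100% duty cycle
WHEEL_BASE = 0.55   # m, hypothetical distance between the drive wheels

def cmd_vel_to_pwm(linear, angular):
    """Map a (linear m/s, angular rad/s) command to signed PWM duty cycles.

    The sign of each duty cycle encodes the motor direction; magnitudes
    are clamped to the 0-100% range the motor driver accepts.
    """
    left = linear - angular * WHEEL_BASE / 2
    right = linear + angular * WHEEL_BASE / 2
    to_pwm = lambda v: max(-100, min(100, round(100 * v / MAX_SPEED)))
    return to_pwm(left), to_pwm(right)

assert cmd_vel_to_pwm(0.25, 0.0) == (50, 50)  # straight ahead at half speed
l, r = cmd_vel_to_pwm(0.0, 1.0)               # rotate in place
assert l == -r and r > 0                      # wheels spin in opposite directions
```
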
20. The method as claimed in claim 19 wherein said communication module (106) is Bluetooth integrated with the android application to communicate with the wheelchair (C) in a wireless manner.
21. The method as claimed in claim 19, wherein, in case any dynamic obstacles are detected along the path, the navigation system generates a new route to the destination avoiding collisions with the obstacles.
22. The method as claimed in claim 19, wherein the navigation system has the ability to reroute any number of times, provided there is at least one way to reach the destination.
23. The method as claimed in claim 19, wherein, if the wheelchair (C) is stuck with obstacles around it, it performs a 360-degree rotation for a certain period of time to check whether the obstacles have cleared.
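The recovery behaviour of claim 23 amounts to sweeping the LiDAR through 360 degrees and checking whether any heading has become clear. A simulated sketch follows; the sector count and safety distance are hypothetical parameters, not values from the specification:

```python
def recovery_rotation(scan_ranges, safe_distance=0.5, sectors=8):
    """Divide one full LiDAR sweep into angular sectors and report which
    headings are clear enough to resume navigation.

    scan_ranges: one range reading (m) per beam over 360 degrees.
    Returns the list of clear sector indices; an empty list means the
    wheelchair is still surrounded and should keep waiting/rotating.
    """
    per_sector = len(scan_ranges) // sectors
    clear = []
    for s in range(sectors):
        chunk = scan_ranges[s * per_sector:(s + 1) * per_sector]
        if min(chunk) > safe_distance:  # nothing within the safety radius
            clear.append(s)
    return clear

# 360 beams: blocked at 0.3 m everywhere except a gap covering 90-134 degrees
ranges = [0.3] * 360
for i in range(90, 135):
    ranges[i] = 2.0
assert recovery_rotation(ranges) == [2]  # only the third sector is clear
```
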
| # | Name | Date |
|---|---|---|
| 1 | 201841029066-STATEMENT OF UNDERTAKING (FORM 3) [02-08-2018(online)].pdf | 2018-08-02 |
| 2 | 201841029066-PROVISIONAL SPECIFICATION [02-08-2018(online)].pdf | 2018-08-02 |
| 3 | 201841029066-FORM 1 [02-08-2018(online)].pdf | 2018-08-02 |
| 4 | 201841029066-DECLARATION OF INVENTORSHIP (FORM 5) [02-08-2018(online)].pdf | 2018-08-02 |
| 5 | 201841029066-PETITION UNDER RULE 137 [06-02-2019(online)].pdf | 2019-02-06 |
| 6 | 201841029066-FORM-26 [06-02-2019(online)].pdf | 2019-02-06 |
| 7 | Correspondence By Agent_Power of Attorney_15-02-2019.pdf | 2019-02-15 |
| 8 | 201841029066-Proof of Right (MANDATORY) [26-02-2019(online)].pdf | 2019-02-26 |
| 9 | 201841029066-ENDORSEMENT BY INVENTORS [26-02-2019(online)].pdf | 2019-02-26 |
| 10 | 201841029066-PETITION UNDER RULE 137 [28-02-2019(online)].pdf | 2019-02-28 |
| 11 | Correspondence by Agent_Form1 And Form5_04-03-2019.pdf | 2019-03-04 |
| 12 | 201841029066-DRAWING [01-08-2019(online)].pdf | 2019-08-01 |
| 13 | 201841029066-COMPLETE SPECIFICATION [01-08-2019(online)].pdf | 2019-08-01 |
| 14 | 201841029066-FORM 18 [31-07-2020(online)].pdf | 2020-07-31 |
| 15 | 201841029066-FER.pdf | 2021-11-23 |
| 16 | 201841029066-FORM-26 [10-03-2022(online)].pdf | 2022-03-10 |
| 17 | 201841029066-Correspondence_Power of Attorney_14-03-2022.pdf | 2022-03-14 |
| 18 | 201841029066-Proof of Right [23-05-2022(online)].pdf | 2022-05-23 |
| 19 | 201841029066-MARKED COPIES OF AMENDEMENTS [23-05-2022(online)].pdf | 2022-05-23 |
| 20 | 201841029066-FORM 13 [23-05-2022(online)].pdf | 2022-05-23 |
| 21 | 201841029066-FER_SER_REPLY [23-05-2022(online)].pdf | 2022-05-23 |
| 22 | 201841029066-ENDORSEMENT BY INVENTORS [23-05-2022(online)].pdf | 2022-05-23 |
| 23 | 201841029066-EDUCATIONAL INSTITUTION(S) [23-05-2022(online)].pdf | 2022-05-23 |
| 24 | 201841029066-DRAWING [23-05-2022(online)].pdf | 2022-05-23 |
| 25 | 201841029066-CLAIMS [23-05-2022(online)].pdf | 2022-05-23 |
| 26 | 201841029066-AMMENDED DOCUMENTS [23-05-2022(online)].pdf | 2022-05-23 |
| 27 | 201841029066-ABSTRACT [23-05-2022(online)].pdf | 2022-05-23 |
| 28 | 201841029066-PatentCertificate15-03-2024.pdf | 2024-03-15 |
| 29 | 201841029066-IntimationOfGrant15-03-2024.pdf | 2024-03-15 |