Abstract: The present disclosure relates to an apparatus (100) for navigating in a region of interest. The apparatus includes a motion control unit (106) configured on the apparatus and one or more wheels that, upon operation of the motion control unit, are adapted to move the apparatus in the region of interest. A planning unit (104) is configured to receive, from a perception unit (102), a first set of data pertaining to distance attributes; receive, from a position estimation unit (118), a second set of data pertaining to location attributes of the apparatus; and process the received first set of data and second set of data to generate an occupancy grid map, wherein, based on the generated occupancy grid map, the planning unit is configured to execute a navigational routine by commanding the motion control unit to navigate payloads of variable height.
Claims:
1. An apparatus (100) for navigating in a region of interest, said apparatus comprising:
a motion control unit (106) configured on the apparatus; one or more wheels provided on the apparatus, the one or more wheels, upon operation of the motion control unit, adapted to move the apparatus in the region of interest;
a perception unit (102) configured on a chassis of the apparatus, the perception unit, upon detection of distance attributes between the apparatus and a physical object in the region of interest, generates a first set of data;
a position estimation unit (118) configured on the chassis of the apparatus, the position estimation unit, upon determining location attributes of the apparatus, generates a second set of data; and
a planning unit (104) operatively coupled to the perception unit, position estimation unit and motion control unit, the planning unit configured to:
receive, from the perception unit (102), the first set of data pertaining to distance attributes;
receive, from the position estimation unit (118), the second set of data pertaining to location attributes of the apparatus; and
process the received first set of data and the second set of data to generate an occupancy grid map, wherein, based on the generated occupancy grid map, the planning unit is configured to execute a navigational routine to control the movement of the motion control unit to navigate payloads of variable height.
2. The apparatus as claimed in claim 1, wherein the perception unit (102) comprises one or more sensors that comprise light detection and ranging (LIDAR) sensors (112), ultrasonic sensors (110) and a depth sensor (114), wherein the LIDARs and the depth camera are mounted on the chassis in any or a combination of a horizontal plane and a vertical plane to cover an entire 360 degrees in the horizontal plane and 90 degrees in the vertical plane to avoid obstacles, and wherein the ultrasonic sensors are located on a left side and a right side of the apparatus to detect transparent obstacles undetected by the LIDARs.
3. The apparatus as claimed in claim 1, wherein the position estimation unit (118) comprises an inertial measurement unit (IMU), and the IMU determines the location attributes of the apparatus.
4. The apparatus as claimed in claim 1, wherein the payload (132) of variable height is mounted on top of the apparatus.
5. The apparatus as claimed in claim 1, wherein the planning unit (104) is an on-board computer, and wherein the planning unit (104) executes a mapping routine that collects a point cloud dataset from the one or more sensors and incorporates a mapping algorithm to generate a 2-D binary occupancy grid map.
6. The apparatus as claimed in claim 1, wherein the planning unit (104) is configured to execute a localization routine configured to update the occupancy grid map based on the received first set of data and the second set of data.
7. The apparatus as claimed in claim 1, wherein the planning unit (104) executes the navigation routine to perform global and local path planning algorithms and global and local cost map generation, and wherein a Unified Robot Description Format (URDF) is used to generate a model of the moving apparatus.
8. The apparatus as claimed in claim 1, wherein the planning unit (104) is configured to execute the navigation routine with multiple waypoints, and wherein a user interface is configured to operate the apparatus to perform any or a combination of teleoperation, mapping, and navigation with multiple waypoints starting from an initial location of the apparatus.
9. The apparatus as claimed in claim 1, wherein the motion control unit (106) comprises two motor driver units (116-1, 116-2), capacitive encoders and a drive controller, and wherein the planning unit generates motion commands that are passed to the motor driver units for actuation control.
10. A method (400) for navigating in a region of interest, said method comprising:
operating (402), by a computing device, a motion control unit to move an apparatus in the region of interest, wherein one or more wheels, upon operation of the motion control unit, are adapted to move the apparatus in the region of interest;
generating (404), at a perception unit, a first set of data upon detection of distance attributes between the apparatus and a physical object in the region of interest, the perception unit configured on a chassis of the apparatus;
generating (406), at a position estimation unit, a second set of data upon determining location attributes of the apparatus, the position estimation unit configured on the chassis of the apparatus;
receiving (408), at the computing device, the first set of data from the perception unit and the second set of data from the position estimation unit; and
processing (410), at the computing device, the received first set of data from the perception unit and the second set of data from the position estimation unit to generate an occupancy grid map,
wherein, based on the generated occupancy grid map, the computing device is configured (412) to execute a navigational routine to control the movement of the motion control unit to navigate payloads of variable height.
Description:
TECHNICAL FIELD
[0001] The present disclosure relates, in general, to robots, and more specifically, relates to a multi-Light Detection and Ranging (LIDAR) based indoor autonomous robot.
BACKGROUND
[0002] In recent years, autonomous indoor mobile robots that can be used for delivery, sanitization and the like have emerged from research organizations. Such robots can navigate through an environment by sensing and acting according to nearby conditions. The essential tasks involved in autonomous navigation are mapping, localization, navigation and obstacle avoidance.
[0003] Robots using laser-based sensors have a wide field of view and can sense long-range stationary or moving obstacles. However, scanning performed by a single 2-dimensional LIDAR is limited to one plane, which does not provide adequate reliability for robot navigation. Also, perception using only depth cameras has a narrower field of view and a shorter effective range of obstacle detection. A robot with a limited field of view may often get stuck and may not recover, requiring frequent human interventions during operation.
[0004] An autonomous indoor robot using a combination of a 2-D LIDAR and depth cameras for mapping and localization is more economical to design and less computationally demanding than robots based on the point cloud of a 3-dimensional LIDAR. By using a combination of a 2-D LIDAR and depth cameras, navigation becomes more reliable than in the existing known arts, which use either 2-D LIDARs or depth cameras alone.
[0005] In an existing system, a mobile robot includes an elongated body having at least one depth camera mounted near the top of the body and having at least one field of view, and a drive mechanism disposed within the body. Sensor data from the camera images, odometers and position beacons are used to localize the position of the robot and perform navigation. Here, obstacle detection is limited by the depth cameras. The depth cameras included in the robot have a much narrower field of view (typically less than 90°) and a much shorter effective range of obstacle detection. In contrast, LIDARs have a wide field of view (up to 360°) and can sense long-range stationary or moving obstacles.
[0006] Another existing indoor robot navigation system comprises a power supply module, a sensor module, a motion execution module and a control module. The control module comprises a network module, a positioning module and a path planning module; the positioning module comprises a data fusion unit, a Simultaneous Localization and Mapping (SLAM) mapping unit and an Adaptive Monte Carlo Localization (AMCL) positioning unit; the path planning module comprises a global planning unit, a local planning unit and an obstacle identification unit; the motion execution module comprises a double-wheel differential driving structure and a millimetre. The existing system has the beneficial effects that the environment does not need to be modified or marked, the method has good environmental adaptability, and flexible obstacle avoidance can be realized. However, the robot is equipped with a single 2D LIDAR, which provides obstacle avoidance in only one horizontal plane and does not account for payloads of varying height.
[0007] Therefore, there is a need in the art for a means that enables indoor robot navigation with enhanced environment perception and robot localization.
OBJECTS OF THE PRESENT DISCLOSURE
[0008] An object of the present disclosure is, in general, to provide a robot, and more specifically, a multi-Light Detection and Ranging (LIDAR) based indoor autonomous robot.
[0009] Another object of the present disclosure is to provide an apparatus that enables 360 degrees coverage by amalgamating range data from LIDARs and ultrasonic sensors.
[0010] Another object of the present disclosure is to provide enhancement of the vertical field of view using LIDAR for reliable navigation by vertical obstacle detection.
[0011] Another object of the present disclosure is to provide an apparatus that enables precise localization in a global positioning system (GPS) denied indoor environment using IMU and odometry sensor data fusion.
[0012] Another object of the present disclosure is to provide an apparatus that autonomously and reliably navigates payloads of variable height by enhanced environment perception and robot localization with the use of sensors.
[0013] Another object of the present disclosure is to provide an apparatus with an enhanced ability to detect and avoid obstacles.
[0014] Yet another object of the present disclosure is to provide a combination of 2-D LIDARs and depth cameras for mapping and localization that is more economical to design.
SUMMARY
[0015] The present disclosure relates, in general, to robots, and more specifically, relates to a multi-Light Detection and Ranging (LIDAR) based indoor autonomous robot. The present disclosure uses multiple 2-D LIDARs, ultrasonic sensors, a depth camera and an inertial measurement unit (IMU) to provide indoor robot navigation with enhanced environment perception and robot localization.
[0016] In an aspect, the present disclosure provides an apparatus for navigating in a region of interest, the apparatus including a motion control unit configured on the apparatus and one or more wheels provided on the apparatus, the one or more wheels, upon operation of the motion control unit, adapted to move the apparatus in the region of interest. A perception unit is configured on a chassis of the apparatus and, upon detection of distance attributes between the apparatus and a physical object in the region of interest, generates a first set of data. A position estimation unit is configured on the chassis of the apparatus and, upon determining location attributes of the apparatus, generates a second set of data. A planning unit is operatively coupled to the perception unit, the position estimation unit and the motion control unit, and is configured to: receive, from the perception unit, the first set of data pertaining to distance attributes; receive, from the position estimation unit, the second set of data pertaining to location attributes of the apparatus; and process the received first set of data and second set of data to generate an occupancy grid map, wherein, based on the generated occupancy grid map, the planning unit is configured to execute a navigational routine to control the movement of the motion control unit to navigate payloads of variable height.
[0017] In an embodiment, the perception unit can include one or more sensors that comprise light detection and ranging (LIDAR) sensors, ultrasonic sensors and a depth sensor, wherein the LIDARs and the depth camera are mounted on the chassis in any or a combination of a horizontal plane and a vertical plane to cover an entire 360 degrees in the horizontal plane and 90 degrees in the vertical plane to avoid obstacles, and wherein the ultrasonic sensors are located on a left side and a right side of the apparatus to detect transparent obstacles undetected by the LIDARs.
[0018] In an embodiment, the position estimation unit can include an inertial measurement unit (IMU), and the IMU determines the location attributes of the apparatus.
[0019] In an embodiment, the payload of variable height can be mounted on top of the apparatus.
[0020] In an embodiment, the planning unit is an on-board computer, wherein the planning unit executes a mapping routine that collects a point cloud dataset from the one or more sensors and incorporates a mapping algorithm to generate a 2-D binary occupancy grid map.
[0021] In an embodiment, the planning unit is configured to execute a localization routine configured to update the occupancy grid map based on the received first set of data and the received second set of data.
[0022] In an embodiment, the planning unit executes the navigation routine to perform global and local path planning algorithms and global and local cost map generation, wherein a Unified Robot Description Format (URDF) is used to generate a model of the moving apparatus.
[0023] In an embodiment, the planning unit is configured to execute the navigation routine with a plurality of waypoints, wherein a user interface is configured to operate the apparatus to perform any or a combination of teleoperation, mapping and navigation with the plurality of waypoints starting from an initial location of the apparatus.
[0024] In an embodiment, the motion control unit can include two motor driver units, capacitive encoders and a drive controller, wherein the planning unit generates motion commands that are passed to the motor driver units for actuation control.
[0025] In an aspect, the present disclosure provides a method for navigating in a region of interest. The method includes operating, by a computing device, a motion control unit to move an apparatus in the region of interest, one or more wheels, upon operation of the motion control unit, adapted to move the apparatus in the region of interest; generating, at a perception unit, a first set of data upon detection of distance attributes between the apparatus and a physical object in the region of interest, the perception unit configured on a chassis of the apparatus; generating, at a position estimation unit, a second set of data upon determining location attributes of the apparatus, the position estimation unit configured on the chassis of the apparatus; receiving, at the computing device, the first set of data from the perception unit and the second set of data from the position estimation unit; and processing, at the computing device, the received first set of data from the perception unit and the second set of data from the position estimation unit to generate an occupancy grid map, wherein, based on the generated occupancy grid map, the computing device is configured to execute a navigational routine to control the movement of the motion control unit to navigate payloads of variable height.
[0026] Various objects, features, aspects, and advantages of the inventive subject matter will become more apparent from the following detailed description of preferred embodiments, along with the accompanying drawing figures in which like numerals represent like components.
BRIEF DESCRIPTION OF THE DRAWINGS
[0027] The following drawings form part of the present specification and are included to further illustrate aspects of the present disclosure. The disclosure may be better understood by reference to the drawings in combination with the detailed description of the specific embodiments presented herein.
[0028] FIG. 1A illustrates an exemplary functional component of multi-LIDAR based autonomous robot, in accordance with an embodiment of the present disclosure.
[0029] FIG. 1B illustrates an exemplary front side view of an autonomous robot, in accordance with an embodiment of the present disclosure.
[0030] FIG. 1C illustrates an exemplary view of a payload mounted in an autonomous robot, in accordance with an embodiment of the present disclosure.
[0031] FIG. 1D illustrates an exemplary rear side view of an autonomous robot, in accordance with an embodiment of the present disclosure.
[0032] FIG. 2 is a perspective view illustrating the two fields of view (FOV) of the autonomous robot, in accordance with an embodiment of the present disclosure.
[0033] FIG. 3 is a top view illustrating four fields of view (FOV) in the horizontal plane of the autonomous robot, in accordance with an embodiment of the present disclosure.
[0034] FIG. 4 illustrates an exemplary method for navigating in a region of interest, in accordance with an embodiment of the present disclosure.
DETAILED DESCRIPTION
[0035] The following is a detailed description of embodiments of the disclosure depicted in the accompanying drawings. The embodiments are in such detail as to clearly communicate the disclosure. If the specification states a component or feature “may”, “can”, “could”, or “might” be included or have a characteristic, that particular component or feature is not required to be included or have the characteristic.
[0036] As used in the description herein and throughout the claims that follow, the meaning of “a,” “an,” and “the” includes plural reference unless the context clearly dictates otherwise. Also, as used in the description herein, the meaning of “in” includes “in” and “on” unless the context clearly dictates otherwise.
[0037] The present disclosure relates, in general, to robots, and more specifically, relates to a multi-Light Detection and Ranging (LIDAR) based indoor autonomous robot. The present disclosure uses multiple 2-D LIDARs, ultrasonic sensors, a depth camera and an inertial measurement unit (IMU) to provide indoor robot navigation with enhanced environment perception and robot localization. The present disclosure is described in enabling detail in the following examples, which may represent more than one embodiment of the present disclosure.
[0038] FIG. 1A illustrates an exemplary functional component of multi-LIDAR based autonomous robot, in accordance with an embodiment of the present disclosure.
[0039] Referring to FIG. 1A, an autonomous robot 100 (also referred to as an apparatus 100, herein) is configured to provide navigation by enhanced environment perception and robot localization. The apparatus 100 can include a perception unit 102, a planning unit 104, a motion control unit 106, a position estimation unit 118, and a power supply unit 108. The present disclosure has the advantage of autonomously navigating payloads of variable height reliably by enhanced environment perception and robot localization with the use of multiple sensors.
[0040] In an exemplary embodiment, the autonomous robot as presented in the example may be an indoor autonomous robot. As can be appreciated, the present disclosure may not be limited to this configuration but may be extended to other configurations. The present disclosure deals with a robotic vehicle that can perform indoor autonomous navigation along with dynamic obstacle avoidance while carrying different types of payloads.
[0041] In an embodiment, the perception unit 102 can include one or more sensors, also interchangeably referred to as range sensor units, where the one or more sensors can include ultrasonic sensors 110, LIDARs 112 and a depth camera 114. In an exemplary embodiment, the LIDARs 112 can be two-dimensional (2D) LIDARs. The perception unit 102 can detect the distance between the robot and a physical object in an environment. The position estimation unit 118 can include an inertial measurement unit (IMU) to determine the location of the robot. The present disclosure uses a LIDAR 112 that is mounted vertically to get a 90° field of view to cater for payloads of varying height. The present disclosure uses a combination of horizontally and vertically mounted 2D LIDARs 112 and a depth camera 114 that covers an entire 360 degrees in the horizontal plane and 90 degrees in the vertical plane in the direction of the robot to perform autonomous operation reliably for various payloads, which may vary in height up to 1.5 m.
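The disclosure does not specify how the range data from the individual sensors are combined. The following Python sketch, with assumed sensor bearings, angular resolution and data layout, only illustrates one way the front and rear LIDAR scans and the side ultrasonic readings could be merged into a single 360-degree range array in the horizontal plane.

```python
# Illustrative sketch (not from the disclosure): merging range data from a
# front LIDAR, a rear LIDAR and two side ultrasonic sensors into one
# 360-degree scan in the horizontal plane. Bearings, resolution and the
# per-sensor data layout are assumed values.
import numpy as np

def merge_horizontal_ranges(front_scan, rear_scan, left_us, right_us,
                            angle_res_deg=1.0):
    """Combine per-sensor range readings into a single 360-degree array.

    front_scan, rear_scan: dict mapping bearing (deg, robot frame) -> range (m)
    left_us, right_us: single ultrasonic range readings (m) at fixed bearings
    """
    n = int(360 / angle_res_deg)
    merged = np.full(n, np.inf)            # start with "no obstacle seen"

    # Fold both LIDAR scans into a common robot-frame grid,
    # keeping the closest return per bearing.
    for scan in (front_scan, rear_scan):
        for bearing_deg, rng in scan.items():
            idx = int(round((bearing_deg % 360) / angle_res_deg)) % n
            merged[idx] = min(merged[idx], rng)

    # Ultrasonic sensors cover the side blind spots and can see
    # transparent obstacles (e.g. glass) that the LIDARs may miss.
    for bearing_deg, rng in ((90.0, left_us), (270.0, right_us)):
        idx = int(round(bearing_deg / angle_res_deg)) % n
        merged[idx] = min(merged[idx], rng)

    return merged
```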
[0042] In another embodiment, the motion control unit 106 can perform precise velocity control for the robot motion, and the motion control unit 106 can include two motor driver units (116-1, 116-2). In an exemplary embodiment, the two motor driver units (116-1, 116-2) can include two brushless direct current (BLDC) motors. The motion control unit 106 can further include capacitive encoders and a drive controller, where the capacitive encoders may be connected to the motor driver units (116-1, 116-2) and read the angular position/odometry data of the motor driver units (116-1, 116-2).
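As an illustration of how such encoder readings could feed the odometry used later for localization, the sketch below integrates a differential-drive pose from per-wheel encoder tick counts. The tick resolution, wheel radius and wheel base are assumed values, not taken from the disclosure.

```python
# Assumed sketch: differential-drive odometry from encoder tick counts,
# as a motion control unit with two drive wheels and encoders might report it.
import math

TICKS_PER_REV = 4096        # assumed encoder resolution
WHEEL_RADIUS = 0.075        # metres, assumed
WHEEL_BASE = 0.40           # distance between drive wheels, metres, assumed

def update_odometry(x, y, theta, d_ticks_left, d_ticks_right):
    """Integrate one encoder interval into the pose (x, y, theta)."""
    d_left = 2 * math.pi * WHEEL_RADIUS * d_ticks_left / TICKS_PER_REV
    d_right = 2 * math.pi * WHEEL_RADIUS * d_ticks_right / TICKS_PER_REV
    d_center = (d_left + d_right) / 2.0          # forward travel
    d_theta = (d_right - d_left) / WHEEL_BASE    # change in heading

    x += d_center * math.cos(theta + d_theta / 2.0)
    y += d_center * math.sin(theta + d_theta / 2.0)
    theta = (theta + d_theta + math.pi) % (2 * math.pi) - math.pi
    return x, y, theta
```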
[0043] In another embodiment, the power supply unit 108 can be configured to supply power to the apparatus 100. In another exemplary embodiment, the power supply unit 108 can include a battery, a charger and a battery management system (BMS), which can provide power for robotic vehicle operation.
[0044] In an exemplary embodiment, the planning unit 104 can include onboard computer (OBC), which controls full functioning of the robotic vehicle, where the planning unit 104 can be coupled to the perception unit 102, the position estimation unit 118 and the motion control unit 106 to execute a mapping routine configured to maintain an occupancy grid map and a navigational routine configured to control the motor driver units (116-1, 116-2) to move the mobile robot. The planning unit 104 can perform any or a combination of area mapping, localization, path planning and dynamic obstacle avoidance.
[0045] In an exemplary embodiment, the motion control unit 106 is configured on the apparatus 100, and one or more wheels (128, 130, as illustrated in FIG. 1B) are adapted, upon operation of the motion control unit 106, to move the apparatus 100 in the region of interest, where the region of interest can be any indoor environment. The perception unit 102, upon detection of distance attributes between the apparatus 100 and the physical object in the region of interest, generates a first set of data. The position estimation unit 118, upon determining the location attributes of the apparatus 100, generates a second set of data.
[0046] The planning unit 104 is operatively coupled to the perception unit 102, the position estimation unit 118 and the motion control unit 106, and is configured to receive, from the perception unit 102, the first set of data pertaining to distance attributes. The planning unit 104 can receive, from the position estimation unit 118, the second set of data pertaining to location attributes of the apparatus 100. The planning unit 104 processes the received first set of data and the second set of data to generate an occupancy grid map. The planning unit 104 executes a localization routine configured to update the occupancy grid map based on the received first set of data and the second set of data.
[0047] The planning unit 104 executes a mapping routine that collects a point cloud dataset from the one or more sensors and incorporates a mapping algorithm to generate a 2-D binary occupancy grid map. The planning unit 104 is configured to execute a navigational routine to control the movement of the motion control unit 106 to navigate payloads of variable height. The planning unit is configured to execute the navigation routine with multiple waypoints, where a user interface is configured to operate the apparatus 100 to perform any or a combination of teleoperation, mapping, and navigation with multiple waypoints starting from the initial location of the apparatus 100.
[0048] The OBC executes the navigation software, processes sensor input and commands from an operator, and controls the components of the apparatus. For example, the navigation software may include routines for a waypoint-based operation that can be selected on a teleoperation console, as well as various concurrent routines such as an obstacle avoidance behaviour that functions automatically during operation of the apparatus 100.
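The disclosure does not name the underlying software framework, but the use of a DWA planner, URDF model and cost maps is consistent with a ROS-based stack. Purely as an assumed illustration, the following sketch sends a sequence of waypoints to the ROS 1 move_base action server; the choice of ROS, the waypoint coordinates and the frame names are assumptions, not part of the disclosure.

```python
# Assumed illustration: waypoint-based navigation via the ROS 1 move_base
# action interface. Coordinates and frame names are hypothetical.
import rospy
import actionlib
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

def send_waypoints(waypoints):
    """Send each (x, y) waypoint in the map frame and wait for the result."""
    client = actionlib.SimpleActionClient('move_base', MoveBaseAction)
    client.wait_for_server()
    for x, y in waypoints:
        goal = MoveBaseGoal()
        goal.target_pose.header.frame_id = 'map'
        goal.target_pose.header.stamp = rospy.Time.now()
        goal.target_pose.pose.position.x = x
        goal.target_pose.pose.position.y = y
        goal.target_pose.pose.orientation.w = 1.0   # face the default heading
        client.send_goal(goal)
        client.wait_for_result()

if __name__ == '__main__':
    rospy.init_node('waypoint_navigator')
    # Hypothetical waypoints starting from the initial location of the robot.
    send_waypoints([(1.0, 0.0), (2.0, 1.5), (0.0, 0.0)])
```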
[0049] FIG. 1B illustrates an exemplary front side view of an autonomous robot, in accordance with an embodiment of the present disclosure. As shown in FIG. 1B, the apparatus 100 as presented in the example can include a chassis 120 of rectangular configuration. As can be appreciated, the present disclosure may not be limited to this configuration but may be extended to other configurations. One or more sensors can be mounted in a front portion of the apparatus 100 such that they can cover an entire 360 degrees in the horizontal plane and 90 degrees in the vertical plane.
[0050] In an embodiment, the LIDAR 112 and the depth camera 114 can be mounted horizontally on the chassis 120. In another embodiment, the LIDAR 112 can be mounted vertically on the chassis of the apparatus 100. The LIDARs 112 and the depth camera 114 can be mounted on the chassis in any or a combination of the horizontal plane and the vertical plane to cover an entire 360 degrees in the horizontal plane and 90 degrees in the vertical plane, and their data are merged to avoid obstacles, where the ultrasonic sensors 110 are located on the left side and right side of the apparatus 100 to detect transparent obstacles undetected by the LIDARs. The data from the horizontally mounted LIDARs 112, with one LIDAR at the front and another at the rear of the robot, and from the ultrasonic sensors 110 on the periphery are merged to provide 360-degree coverage of scanning in one plane.
[0051] In another embodiment, the front panel of the apparatus 100 can include a power on/off button 122, a liquid-crystal display (LCD) 124 and an emergency stop button 126. The indoor autonomous robot further includes a plurality of caster wheels 128 and a pair of drive wheels 130. FIG. 1C illustrates an exemplary view of a payload mounted on an autonomous robot 100, in accordance with an embodiment of the present disclosure. A payload 132, as illustrated in FIG. 1C, can be of variable height and can be mounted on top of the autonomous robot 100. The one or more sensors can cover an entire 360 degrees in the horizontal plane and 90 degrees in the vertical plane in the direction of the robot to perform autonomous operation reliably for various payloads, which may vary in height up to 1.5 m or any suitable height.
[0052] FIG. 1D illustrates an exemplary rear side view of an autonomous robot, in accordance with an embodiment of the present disclosure. One or more sensors can be mounted in the rear portion of the apparatus 100. The LIDAR 112 can be located horizontally on the rear side of apparatus 100 and the ultrasonic sensor 110 can be located on the rear side of apparatus 100. In another embodiment, the ultrasonic sensor 110 can be located on the left side of apparatus 100.
[0053] The tasks accomplished by the robot 100 are perception, planning and control of the motion. The perception unit 102 with the LIDARs 112 can be used for autonomous navigation along with dynamic obstacle avoidance. Ultrasonic sensors 110 around the robotic vehicle help in detecting transparent obstacles which may be missed by the LIDARs 112. In another embodiment, the planning unit 104 can control the full functioning of the robotic vehicle. The planning unit 104 can perform the following operations:
• The mapping routine collects the laser scan and point cloud data from the one or more sensors and incorporates a mapping algorithm to generate a 2-D binary occupancy grid map.
• The localization routine performs IMU and odometry fusion using an Extended Kalman Filter (EKF) and scan matching algorithms to estimate the precise position of the robot (a simplified fusion sketch is given after this list).
• The navigation routine performs global and local path planning algorithms and global and local cost map generation.
• It also uses a Unified Robot Description Format (URDF) to generate a model of the moving robot.
• The dynamic window approach (DWA) planner generates vehicle motion commands that are passed to the motor driver units (116-1, 116-2) for actuation control.
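The localization routine above fuses IMU and odometry data with an EKF and scan matching; the disclosure gives no implementation detail. As a minimal, assumed sketch of the predict/update idea only, the following one-dimensional Kalman-style filter fuses the heading integrated from wheel odometry with the yaw reported by the IMU. The noise parameters are hypothetical, and a full EKF would estimate the complete planar pose rather than a single heading.

```python
# Minimal assumed sketch: scalar Kalman-style fusion of the odometry-derived
# heading increment (prediction) with the IMU yaw reading (correction).
import math

class HeadingFuser:
    def __init__(self, q_odom=0.01, r_imu=0.05):
        self.theta = 0.0      # fused heading estimate (rad)
        self.var = 1.0        # estimate variance
        self.q = q_odom       # process noise added per odometry step (assumed)
        self.r = r_imu        # IMU yaw measurement noise (assumed)

    def predict(self, d_theta_odom):
        """Propagate the heading using the odometry increment."""
        self.theta = self._wrap(self.theta + d_theta_odom)
        self.var += self.q

    def update(self, yaw_imu):
        """Correct the prediction with the IMU yaw reading."""
        innovation = self._wrap(yaw_imu - self.theta)
        gain = self.var / (self.var + self.r)
        self.theta = self._wrap(self.theta + gain * innovation)
        self.var *= (1.0 - gain)

    @staticmethod
    def _wrap(a):
        # Keep angles in the range (-pi, pi].
        return (a + math.pi) % (2 * math.pi) - math.pi
```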
[0054] In another embodiment, each of the two motor driver units (116-1, 116-2) can be used for actuation control. Each unit drives its motor based on the vehicle motion commands received from the planning unit 104 and in turn sends the motor revolution per minute (RPM) information back to the planning unit 104. In another embodiment, the power supply unit 108 can provide a continuous supply for operation of the robot for up to 2.5 hours. The robot 100 can accommodate multiple payloads of different heights and has the provision of power supply for payloads if required.
[0055] The embodiments of the present disclosure described above provide several advantages. One or more of the embodiments provide an apparatus that enables 360 degrees coverage by amalgamating range data from the LIDARs 112 and ultrasonic sensors 110. The present disclosure provides enhancement of the vertical field of view using the LIDARs 112 for reliable navigation by vertical obstacle detection. The apparatus 100 enables precise localization in a global positioning system (GPS) denied indoor environment using IMU and odometry sensor data fusion. The apparatus 100 autonomously navigates payloads of variable height reliably by enhanced environment perception and robot localization with the use of sensors, and avoids driving into obstacles. Further, the present disclosure provides a combination of 2-D LIDARs and depth cameras 114 for mapping and localization that is more economical to design.
[0056] FIG. 2 is a perspective view illustrating the two fields of view (FOV) of the autonomous robot, in accordance with an embodiment of the present disclosure. Referring to FIG. 2, the vertical front FOV of the perception unit 102 is provided by the depth camera 114 and the vertically mounted LIDAR 112. Of the two fields of view 202, 204 of the autonomous robot, the FOV 202 corresponds to the depth camera 114 and the FOV 204 corresponds to the vertically mounted LIDAR 112.
[0057] FIG. 3 is a top view illustrating four fields of view (FOV) in the horizontal plane 300 of the autonomous robot, in accordance with an embodiment of the present disclosure. As shown in FIG. 3, of the four fields of view (302 to 308), the field of view 302 corresponds to the FOV of the front LIDAR 112, the FOV 304 corresponds to the FOV of the rear LIDAR 112, and the FOVs 306 and 308 correspond to the FOVs of the left-side and right-side ultrasonic sensors 110, respectively.
[0058] FIG. 4 illustrates an exemplary method 400 for navigating in a region of interest, in accordance with an embodiment of the present disclosure.
[0059] Referring to FIG. 4, the method 400 can be implemented using a computing device, which can include the OBC. At block 402, the computing device is configured to operate a motion control unit to move an apparatus in a region of interest, where one or more wheels, upon operation of the motion control unit, are adapted to move the apparatus in the region of interest. At block 404, the perception unit generates a first set of data upon detection of distance attributes between the apparatus and a physical object in the region of interest, the perception unit configured on a chassis of the apparatus.
[0060] At block 406, the position estimation unit generates a second set of data upon determining location attributes of the apparatus, the position estimation unit configured on the chassis of the apparatus. At block 408, the first set of data from the perception unit and the second set of data from the position estimation unit are received at the computing device. At block 410, the computing device processes the received first set of data from the perception unit and the second set of data from the position estimation unit to generate an occupancy grid map. At block 412, based on the generated occupancy grid map, the computing device is configured to execute a navigational routine to control the movement of the motion control unit to navigate payloads of variable height.
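The disclosure does not describe how the occupancy grid map of blocks 404 to 410 is populated. The sketch below is an assumed illustration of one common approach: for each range reading taken at a known robot pose, cells along the beam are marked free and the cell at the measured range is marked occupied. The grid resolution, map size and data layout are hypothetical, not taken from the disclosure.

```python
# Assumed sketch: marking a 2-D binary occupancy grid from one set of range
# readings taken at a known robot pose. Parameters are hypothetical.
import math
import numpy as np

RESOLUTION = 0.05          # metres per cell, assumed
GRID_SIZE = 400            # 20 m x 20 m map, assumed
FREE, OCCUPIED, UNKNOWN = 0, 1, -1

def update_grid(grid, pose, ranges, max_range=8.0):
    """pose = (x, y, theta) in the map frame; ranges = {bearing_rad: range_m}."""
    x, y, theta = pose
    for bearing, rng in ranges.items():
        if rng <= 0.0 or rng > max_range:
            continue
        # Walk along the beam in cell-sized steps, clearing free space.
        for s in range(int(rng / RESOLUTION)):
            px = x + s * RESOLUTION * math.cos(theta + bearing)
            py = y + s * RESOLUTION * math.sin(theta + bearing)
            i, j = int(px / RESOLUTION), int(py / RESOLUTION)
            if 0 <= i < GRID_SIZE and 0 <= j < GRID_SIZE:
                grid[i, j] = FREE
        # The beam endpoint is the detected obstacle.
        ox = x + rng * math.cos(theta + bearing)
        oy = y + rng * math.sin(theta + bearing)
        i, j = int(ox / RESOLUTION), int(oy / RESOLUTION)
        if 0 <= i < GRID_SIZE and 0 <= j < GRID_SIZE:
            grid[i, j] = OCCUPIED
    return grid

# Example usage with an initially unknown map.
grid = np.full((GRID_SIZE, GRID_SIZE), UNKNOWN, dtype=np.int8)
grid = update_grid(grid, pose=(10.0, 10.0, 0.0), ranges={0.0: 2.5, math.pi / 2: 4.0})
```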
[0061] It will be apparent to those skilled in the art that the apparatus 100 of the disclosure may be provided using some or all of the mentioned features and components without departing from the scope of the present disclosure. While various embodiments of the present disclosure have been illustrated and described herein, it will be clear that the disclosure is not limited to these embodiments only. Numerous modifications, changes, variations, substitutions, and equivalents will be apparent to those skilled in the art, without departing from the scope of the disclosure, as described in the claims.
ADVANTAGES OF THE PRESENT DISCLOSURE
[0062] The present disclosure provides an apparatus that enables 360 degrees coverage by amalgamating range data from LIDARs and ultrasonic sensors.
[0063] The present disclosure provides enhancement of the vertical field of view using LIDAR for reliable navigation by vertical obstacle detection.
[0064] The present disclosure provides an apparatus that enables precise localization in a global positioning system (GPS) denied indoor environment using IMU and odometry sensor data fusion.
[0065] The present disclosure provides an apparatus that autonomously and reliably navigates payloads of variable height by enhanced environment perception and robot localization with the use of sensors.
[0066] The present disclosure provides a combination of 2-D LIDARs and depth cameras for mapping and localization that is more economical to design.
[0067] The present disclosure provides an apparatus that can enhance the ability to detect and avoid obstacles.