
A Method For Hybrid Mapping For Robotic Navigation

Abstract: A method for optimizing the usage of time and memory in a robot is disclosed. The method (100) includes receiving (102) a plurality of data sets from one or more sensors, initialising (104) a particle filter to obtain one or more particle filter subsets corresponding to the plurality of data sets, sampling (106) the plurality of data sets, predicting (108) the robot's new position, estimating (110) an angle of elevation of the robot, determining (112) whether the estimated angle of elevation lies within a traversable region, switching (114) to generating 3D mapping, determining (116) the efficiency of the robot's predicted position, and resampling (118) the particle filter subsets based on the efficiency of the robot's predicted position. FIG. 1


Patent Information

Application #
202243062997
Filing Date
04 November 2022
Publication Number
12/2023
Publication Type
INA
Invention Field
ELECTRONICS
Status
Parent Application
202141006605

Applicants

Amrita Vishwa Vidyapeetham
Amrita Vishwa Vidyapeetham, Amritapuri, Vallikavu, Kerala 690546, India

Inventors

1. MEGALINGAM, Rajesh Kannan
HuT Labs, Department of Electronics and Communication Engineering, Amrita Vishwa Vidyapeetham, Amritapuri, Vallikavu, Kerala 690546, India
2. DEEPAK, Nagalla
HuT Labs, Department of Electronics and Communication Engineering, Amrita Vishwa Vidyapeetham, Amritapuri, Vallikavu, Kerala 690546, India
3. GEESALA, Raviteja
HuT Labs, Department of Electronics and Communication Engineering, Amrita Vishwa Vidyapeetham, Amritapuri, Vallikavu, Kerala 690546, India
4. GONTU, Vamsi
HuT Labs, Department of Electronics and Communication Engineering, Amrita Vishwa Vidyapeetham, Amritapuri, Vallikavu, Kerala 690546, India
5. CHANDA, Ruthvik Rangaiah
HuT Labs, Department of Electronics and Communication Engineering, Amrita Vishwa Vidyapeetham, Amritapuri, Vallikavu, Kerala 690546, India
6. ALLADA, Phanindra Kumar
HuT Labs, Department of Electronics and Communication Engineering, Amrita Vishwa Vidyapeetham, Amritapuri, Vallikavu, Kerala 690546, India
7. PASUMARTI, Ravikiran
HuT Labs, Department of Electronics and Communication Engineering, Amrita Vishwa Vidyapeetham, Amritapuri, Vallikavu, Kerala 690546, India
8. NIGAM, Katti
HuT Labs, Department of Electronics and Communication Engineering, Amrita Vishwa Vidyapeetham, Amritapuri, Vallikavu, Kerala 690546, India

Specification

Description: FORM 2
THE PATENTS ACT, 1970
(39 of 1970)
COMPLETE SPECIFICATION
(See section 10, rule 13)

TITLE: A METHOD FOR HYBRID MAPPING FOR ROBOTIC NAVIGATION

INVENTORS
MEGALINGAM, Rajesh Kannan, Indian Citizen
DEEPAK, Nagalla, Indian Citizen
GEESALA, Raviteja, Indian Citizen
GONTU, Vamsi, Indian Citizen
CHANDA, Ruthvik Rangaiah, Indian Citizen
ALLADA, Phanindra Kumar, Indian Citizen
PASUMARTI, Ravikiran, Indian Citizen
NIGAM, Katti, Indian Citizen
HuT Labs, Department of Electronics and Communication Engineering, Amrita Vishwa Vidyapeetham, Amritapuri, Vallikavu, Kerala 690546, India

APPLICANT
AMRITA VISHWA VIDYAPEETHAM
Amrita Vishwa Vidyapeetham, Amritapuri, Vallikavu, Kerala 690546, India

THE FOLLOWING SPECIFICATION PARTICULARLY DESCRIBES THE INVENTION AND THE MANNER IN WHICH IT IS TO BE PERFORMED

A METHOD FOR HYBRID MAPPING FOR ROBOTIC NAVIGATION

CROSS-REFERENCES TO RELATED APPLICATIONS
The present application is a patent of addition to Indian Patent Application No. 202141006605, filed on February 17, 2021.

FIELD OF THE INVENTION
The present invention generally relates to three-dimensional mapping, and more particularly to methods for hybrid mapping using multiple data sets.
BACKGROUND OF THE RELATED ART
Three-dimensional (3D) maps used by autonomous or semi-autonomous robotic vehicles provide a volumetric representation of an environment and offer useful insights into the parameters needed to map an unknown environment and to interact safely with it. With a consistent 3D map, the robot can detect vacant spaces, obstructions, and landmarks in order to navigate precisely and safely, and such maps can be used for a variety of applications including exploration, self-driving cars, subsea analysis, mining applications, search and rescue, structural inspection, and so on.
Traditionally, maps have been created using either two-dimensional (2D) laser data from LiDAR sensors or three-dimensional (3D) data from 3D sensors such as Kinect sensors. These maps usually make use of a single type of data, either 2D or 3D. Although the 2D occupancy grid map is memory efficient, it fails to take into account the 3D character of the area that the robot traverses. OctoMap depicts the real world in 3D in some applications; however, it also takes into consideration non-essential data from the 3D environment even in circumstances where it is not required. Additionally, OctoMaps require considerable processing power and storage resources. Hence, to overcome the problems discussed above, a method to optimize time and memory usage is needed.
These and other advantages will be more readily understood by referring to the detailed description disclosed hereinafter with reference to the accompanying drawings.
SUMMARY OF THE INVENTION
The present subject matter relates to a method for hybrid mapping to optimize time and memory usage.
According to one embodiment of the present subject matter, a method for hybrid mapping to optimize time and memory usage is disclosed. In various embodiments, the method generates a hybrid map based on 2D and 3D mapping for time and memory optimization in a robot traversing in an environment. The method includes receiving a plurality of data sets from one or more sensors, wherein the sensors are configured to obtain data for 2D and 3D mapping. A particle filter is initialised by a processing unit to obtain one or more particle filter subsets corresponding to the plurality of data sets received from the one or more sensors, to generate 2D mapping. The plurality of data sets received from the one or more sensors is sampled to determine the robot's position in the environment based on a map and a control command at time t-1 provided to the robot. This is followed by predicting the robot's new position and generating weights corresponding to the new position by the processing unit. After prediction is performed, the processing unit estimates an angle of elevation of the robot. The method further includes determining, by the processing unit, whether the estimated angle of elevation lies within a traversable region. This is followed by switching to generation of a 3D map when the estimated angle of elevation lies within the traversable region, wherein the switching is not performed when the estimated angle of elevation lies outside the traversable region. The efficiency of the robot's predicted position is determined using the weights. This is followed by resampling the particle filter subsets based on the efficiency of the robot's predicted position.
According to some embodiments, the one or more sensors comprise Kinect sensors, LiDAR sensors, ultrasonic sensors, or any combination thereof. In some embodiments, the 2D map generated is a 2D occupancy grid map. In some embodiments, the 3D map generated is a 3D octomap. In another embodiment, the operation of the robot is performed using control commands, including forward, reverse, left, right and stop. In one embodiment, the processing unit is connected to a memory unit configured to store the generated 2D and 3D maps.
This and other aspects are described herein.

BRIEF DESCRIPTION OF THE DRAWINGS
The invention has other advantages and features which will be more readily apparent from the following detailed description of the invention and the appended claims, when taken in conjunction with the accompanying drawings, in which:
FIG. 1 represents a method for generating a hybrid map based on 2D and 3D mapping for time and memory optimization.
FIG. 2 represents a block diagram of a robotic system for generating a hybrid map based on 2D and 3D mapping for time and memory optimization, according to one embodiment of the present subject matter.
FIG. 3A shows a sample environment, namely scenario-1, with placement of obstacles, according to an embodiment of the present subject matter.
FIG. 3B illustrates a hybrid map of scenario-1 generated using the method of the invention with robotic systems.
FIG. 4A shows a sample environment, namely scenario-2, with placement of obstacles, according to an embodiment of the present subject matter.
FIG. 4B illustrates a hybrid map of scenario-2 generated using the method of the invention with robotic systems.
FIG. 5 is a graphical representation of a comparative analysis in terms of time taken to map the environment.
FIG. 6 is a graphical representation of a comparative analysis in terms of memory required to map the environment.
Referring to the figures, like numbers indicate like parts throughout the various views.

DETAILED DESCRIPTION OF THE EMBODIMENTS
While the invention has been disclosed with reference to certain embodiments, it will be understood by those skilled in the art that various changes may be made and equivalents may be substituted without departing from the scope of the invention. In addition, many modifications may be made to adapt a particular situation or material to the teachings of the invention without departing from its scope.
Throughout the specification and claims, the following terms take the meanings explicitly associated herein unless the context clearly dictates otherwise. The meaning of "a", "an", and "the" include plural references. The meaning of "in" includes "in" and "on." Referring to the drawings, like numbers indicate like parts throughout the views. Additionally, a reference to the singular includes a reference to the plural unless otherwise stated or inconsistent with the disclosure herein.
A method 100 for generating a hybrid map to optimize time and memory usage in robotic navigation is illustrated in FIG. 1, according to an embodiment of the present subject matter. In various embodiments, the method 100 generates a hybrid map based on 2D and 3D mapping for time and memory optimization in a robot traversing in an environment. In various embodiments, step 102 includes receiving a plurality of data sets from one or more sensors. This is followed by initializing a particle filter to obtain particle filter subsets to generate a two-dimensional map in step 104. Step 106 includes sampling the plurality of data sets to determine the robot's position at time t-1. Step 108 includes predicting the robot's new position and generating corresponding weights. This is followed by estimating an angle of elevation of the robot in step 110. Determining whether the angle of elevation lies within a traversable region takes place in step 112, depending on which a two-dimensional or a three-dimensional map is generated in step 114. Step 116 includes determining the efficiency of the robot's position using the weights. Finally, resampling of the particle filter subsets is performed in step 118.
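For concreteness, the flow of steps 102-118 can be illustrated with a short, self-contained Python sketch. This is a toy model only: the motion noise, the random stand-in weights, and the traversability threshold are assumptions for illustration and do not come from the specification.

```python
import random

# Toy, self-contained sketch of the flow of method 100 (steps 102-118).
# The motion noise, weighting, and thresholds are illustrative assumptions.

def hybrid_step(particles, cmd, elevation_deg, traversable_deg=15.0):
    # Steps 106-108: predict each particle's new (x, y) pose from the
    # control command and assign a weight (a random stand-in for the
    # sensor-likelihood weighting the specification describes).
    particles = [((x + cmd[0] + random.gauss(0, 0.05),
                   y + cmd[1] + random.gauss(0, 0.05)),
                  random.random())
                 for (x, y), _ in particles]

    # Steps 110-114: switch to 3D mapping only when the estimated angle
    # of elevation lies within the traversable region.
    map_mode = "3D" if abs(elevation_deg) <= traversable_deg else "2D"

    # Steps 116-118: judge prediction efficiency from the normalized
    # weights and resample the particle set when it has degenerated.
    total = sum(w for _, w in particles)
    norm = [w / total for _, w in particles]
    n_eff = 1.0 / sum(w * w for w in norm)
    if n_eff < len(particles) / 2:
        poses = [p for p, _ in particles]
        particles = [(random.choices(poses, weights=norm)[0], 1.0)
                     for _ in particles]
    return particles, map_mode

particles = [((0.0, 0.0), 1.0) for _ in range(100)]
particles, mode = hybrid_step(particles, cmd=(0.1, 0.0), elevation_deg=8.0)
print(mode)  # "3D" with these toy inputs
```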
A block diagram of a system for generating a hybrid map based on 2D and 3D mapping for time and memory optimization is illustrated in FIG. 2, according to one embodiment of the present subject matter. The system 200 may primarily include one or more sensors 208, with LiDAR sensors 202, Kinect sensors 204, ultrasonic sensors 206, or any combination thereof. The system further includes a processing unit 210 connected to a memory unit 212. In various embodiments, a user may command the system to move all over the environment. The operation of the robot may be performed using control commands, including forward, reverse, left, right and stop. In one embodiment, the command is provided using teleoperation.
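The specification does not name a particular command interface. As one plausible realisation, a ROS 1 teleoperation node could map the five control commands onto geometry_msgs/Twist messages published on /cmd_vel; the topic name and speed values below are conventional assumptions, not part of the disclosure.

```python
import rospy
from geometry_msgs.msg import Twist

# Hypothetical teleoperation sketch: maps the five control commands named
# in the specification onto Twist messages. /cmd_vel and the speeds are
# conventional ROS choices, not taken from the patent.
COMMANDS = {
    "forward": ( 0.2,  0.0),
    "reverse": (-0.2,  0.0),
    "left":    ( 0.0,  0.5),
    "right":   ( 0.0, -0.5),
    "stop":    ( 0.0,  0.0),
}

def send(cmd, pub):
    linear, angular = COMMANDS[cmd]
    msg = Twist()
    msg.linear.x = linear    # m/s, forward/reverse
    msg.angular.z = angular  # rad/s, left/right turn
    pub.publish(msg)

if __name__ == "__main__":
    rospy.init_node("teleop_sketch")
    pub = rospy.Publisher("/cmd_vel", Twist, queue_size=10)
    rospy.sleep(1.0)  # allow the publisher to register
    send("forward", pub)
```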
In various embodiments, the method may include receiving a plurality of data sets from one or more sensors 208 at block 102. The plurality of data sets may include data related to the robot's pose and the map at a command delivered by the user. The one or more sensors provided in step 102 may include the LiDAR sensors 202, the Kinect sensors 204, the ultrasonic sensors 206, or any combination thereof. In various embodiments, the one or more sensors 208 may be configured to obtain data for 2D mapping. In various embodiments, the data for 2D mapping may be based on a 2D occupancy grid map. In various embodiments, the 2D occupancy grid map may be built from the data obtained from the Light Detection and Ranging (LiDAR) sensors 202. In occupancy grid mapping, the entire space in which the robot environment map is built may be divided into grid cells. The map m is divided into smaller map units called grid cells m_i, where each unit m_i is a binary random variable whose value denotes the state of the cell. The state can be either occupied or free; logic '1' represents occupied and logic '0' represents unoccupied. The overall map may be computed from the smaller map units as:
p(m | z_(1:t), x_(1:t)) = ∏_i p(m_i | z_(1:t), x_(1:t))        (1)

Here, z_(1:t) accounts for the observations up to time t, and x_(1:t) accounts for the robot's poses up to time t. p(m | z_(1:t), x_(1:t)) is the overall map estimate at time t, which is the product of the smaller map unit estimates p(m_i | z_(1:t), x_(1:t)) at time t.
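Equation (1) factorises the map posterior into independent per-cell posteriors, so each cell can be updated on its own. In practice the binary per-cell posterior is usually kept in log-odds form, making each update a simple addition. A minimal sketch follows; the inverse-sensor-model increments are assumed values, not from the specification.

```python
import numpy as np

# Per-cell log-odds occupancy update implementing the factorisation of
# equation (1): each grid cell m_i is updated independently.
# L_OCC / L_FREE are illustrative inverse-sensor-model increments.
L_OCC, L_FREE = 0.85, -0.4

def update_cells(log_odds, hit_cells, free_cells):
    """log_odds: 2D array of per-cell log-odds; hit/free_cells: index lists."""
    for (r, c) in hit_cells:
        log_odds[r, c] += L_OCC   # evidence the cell is occupied
    for (r, c) in free_cells:
        log_odds[r, c] += L_FREE  # evidence the beam passed through
    return log_odds

def occupancy_prob(log_odds):
    # Recover p(m_i | z_(1:t), x_(1:t)) from the log-odds representation.
    return 1.0 - 1.0 / (1.0 + np.exp(log_odds))

grid = np.zeros((5, 5))            # log-odds 0: unknown (p = 0.5) prior
grid = update_cells(grid, hit_cells=[(2, 3)], free_cells=[(2, 1), (2, 2)])
print(occupancy_prob(grid)[2, 3])  # > 0.5: cell now likely occupied
```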
In various embodiments, the data generated by the ultrasonic sensors 206 may be used to determine whether data from the Kinect sensor 204 is required to generate a complete map of the environment. In various embodiments, the Kinect sensors 204 may be configured to obtain data for 3D mapping. In various embodiments, once the plurality of data sets is received at block 102, the processing unit 210 may acquire the received data sets from the one or more sensors and then initialise a particle filter at block 104. In various embodiments, the particle filter is initialised to obtain one or more particle filter subsets. In various embodiments, the one or more particle filter subsets correspond to the plurality of data sets received from the one or more sensors 208. In various embodiments, the particle filter may be initialised to generate a 2D map. Next, when new laser data and control commands are obtained by the processing unit, the necessary updating takes place for all the particles.
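As an illustration of block 104, a particle filter may be initialised as N pose hypotheses with uniform weights scattered around the robot's starting pose; N and the spread below are placeholder choices, not disclosed values.

```python
import random

# Illustrative initialisation of the particle filter (block 104):
# N pose hypotheses (x, y, theta) with uniform weights. N and the
# standard deviations are placeholder values.
N = 100
particles = [(random.gauss(0.0, 0.1),    # x (m)
              random.gauss(0.0, 0.1),    # y (m)
              random.gauss(0.0, 0.05))   # theta (rad)
             for _ in range(N)]
weights = [1.0 / N] * N
```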
In various embodiments, the method may include sampling the plurality of data sets received from the one or more sensors 208 at block 106. In various embodiments, sampling at block 106 may be carried out to determine the robot's position in the environment. In various embodiments, the robot's position is determined based on a map and a control command at time t-1 provided to the robot. The robot's new position may be predicted at block 108. Step 108 may further include generating weights corresponding to the new position by the processing unit 210. The processing unit 210 may estimate an angle of elevation of the robot at block 110. The processing unit 210 may further determine whether the estimated angle of elevation lies within a traversable region at block 112.
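One common way to realise the prediction of block 108 is to sample each particle's next pose from a noisy velocity motion model driven by the control command issued at time t-1. A minimal sketch, with assumed noise parameters:

```python
import math
import random

# Illustrative prediction step (block 108): each particle's pose
# (x, y, theta) is sampled from a velocity motion model driven by the
# control command at time t-1. The noise standard deviations are
# assumed values, not taken from the specification.

def predict_particle(pose, v, omega, dt, sigma_v=0.02, sigma_w=0.01):
    x, y, theta = pose
    v_n = v + random.gauss(0.0, sigma_v)      # noisy linear velocity
    w_n = omega + random.gauss(0.0, sigma_w)  # noisy angular velocity
    x += v_n * math.cos(theta) * dt
    y += v_n * math.sin(theta) * dt
    theta += w_n * dt
    return (x, y, theta)

# Control command at time t-1: 0.2 m/s forward, no rotation, for 0.1 s.
particles = [(0.0, 0.0, 0.0)] * 50
particles = [predict_particle(p, v=0.2, omega=0.0, dt=0.1) for p in particles]
```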
In various embodiments, the processing unit 210 may check whether the estimated angle of elevation lies within the traversable region. If the estimated angle of elevation lies within the traversable region, a 3D map is generated at block 114. In various embodiments, the 3D map generated may be a 3D octomap. The 3D octomap may be regarded as a generalized version of 2D occupancy grid mapping and uses a tree-based representation of 3D space composed of cubic volumes. In various embodiments, if the estimated angle of elevation lies outside the traversable region, the processing unit may retain 2D map generation at block 114. In various embodiments, in step 116, the efficiency of the robot's predicted position may be determined using the weights. In step 118, the particle filter subsets may be resampled. In various embodiments, the processing unit 210 may be connected to a memory unit 212 configured to store the generated 2D and 3D maps.
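The decision of blocks 112-114 reduces to a threshold test on the estimated angle of elevation. A minimal sketch, assuming the traversable region is a symmetric angular band (the bound is a placeholder, not a disclosed value):

```python
# Minimal sketch of blocks 112-114: pick the map representation from the
# estimated angle of elevation. MAX_TRAVERSABLE_DEG is a placeholder bound.
MAX_TRAVERSABLE_DEG = 15.0

def select_map_mode(elevation_deg):
    """Return '3D' when the elevation lies within the traversable region,
    otherwise stay with the memory-efficient 2D occupancy grid."""
    if abs(elevation_deg) <= MAX_TRAVERSABLE_DEG:
        return "3D"  # switch to octree-based 3D mapping (block 114)
    return "2D"      # no switch: keep 2D occupancy grid mapping

print(select_map_mode(8.0))   # 3D
print(select_map_mode(32.0))  # 2D
```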
In various embodiments, the particle filter subsets may be resampled based on the efficiency of the robot's predicted position. In various embodiments, pose estimation of the robot traversing in an environment may be performed using Rao-Blackwellized Particle Filter Simultaneous Localization and Mapping (RBPF SLAM). RBPF SLAM may further facilitate sampling, generating weights, estimating maps, and resampling as per user requirement. In various embodiments, RBPF SLAM may facilitate localization of the robot's pose. In various embodiments, a posterior p(x_(1:t) | z_(1:t), u_(0:t)) relating to the robot's pose x at time t, i.e., x_t, may be estimated when the observations until time t, given by z_(1:t), and the control commands until time t, given by u_(0:t), are provided by the user. Further, the posterior over maps p(m | x_(1:t), z_(1:t)) may be computed if knowledge of x_(1:t) and z_(1:t) is available.
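In RBPF SLAM, the efficiency check and resampling of blocks 116-118 are commonly implemented with the effective sample size N_eff = 1 / Σ w_i², resampling only when N_eff falls below a threshold such as N/2. The sketch below pairs that criterion with low-variance (systematic) resampling; the threshold is a conventional choice, not taken from the specification.

```python
import random

def effective_sample_size(weights):
    # N_eff = 1 / sum(w_i^2) for normalized weights; a small value means
    # a few particles carry almost all the weight (degeneracy).
    s = sum(weights)
    norm = [w / s for w in weights]
    return 1.0 / sum(w * w for w in norm)

def systematic_resample(particles, weights):
    # Low-variance resampling: one random offset, N evenly spaced pointers.
    n = len(particles)
    s = sum(weights)
    norm = [w / s for w in weights]
    positions = [(random.random() + i) / n for i in range(n)]
    resampled, cum, j = [], norm[0], 0
    for pos in positions:
        while pos > cum and j < n - 1:
            j += 1
            cum += norm[j]
        resampled.append(particles[j])
    return resampled

particles = list(range(5))
weights = [0.70, 0.10, 0.10, 0.05, 0.05]
if effective_sample_size(weights) < len(particles) / 2:
    particles = systematic_resample(particles, weights)
print(particles)  # dominated by the high-weight particle
```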
EXAMPLES
Example-1: 3D map generation performance in various scenarios
Experiments were conducted using a ground robot. The robot was navigated through various sample areas with static objects of different shapes and sizes. The robot was equipped with the ROS-based Robot Visualization (RViz) tool to visualize the map being built while the robot was traversing the environment. The ground robot was used to map a total of 44 scenarios developed with the help of the ROS Gazebo simulator for testing the mapping technique illustrated in the embodiments. The various parts of the scenarios considered for the experiments were as follows:
Point   Obstacle
A       Initial robot position
B       Final position of the robot
C       Bookshelf
D       Cubes
E       Table-1
F       Table-2
G       Metal staircase
H       Cylinder
I       Cuboid
J       Staircase
Two scenarios providing three-dimensional maps using the method of the invention are further illustrated with reference to FIG. 3A and FIG. 3B, and FIG. 4A and FIG. 4B, respectively.
Scenario 1: FIG. 3A represents the random sample area under study, with multiple rooms each connected to the other through a doorway and equipped with various obstacles. The sample area included one big room that contained obstacles such as a bookshelf (C), cubes (D) and a table-1 (E). The big rectangular room was connected to two smaller areas through a passage, and each area was equipped with a table-2 (F). The big room further included one small room connected through a doorway, in which a metal staircase (G) was placed. The sample area further included two smaller rooms adjacent to the big room, wherein one small room was connected to the big room and to another small room through a doorway. The other small room was equipped with one staircase (J) and a table-1 (E). The robot started traversing from point A and stopped at point B after covering the entire sample area and scanning through obstacles C, D, E, F, G and J. The 3D mapped diagram in FIG. 3B clearly showed the sample area with the arranged obstacles and eliminated unnecessary aspects of the environment. Creating the hybrid map of the environment took about 12.6 minutes and consumed 6.5 KB of memory.
Scenario 2: FIG. 4A represents the sample area with random trajectories and shapes, each connected to the other through a doorway and equipped with various obstacles. The sample area included a triangular path, around a triangular area, that contained an obstacle in the form of a table-2 (F). The sample area further included one L-shaped room adjoining the triangular path and connected through a doorway. The L-shaped room contained two tables (E), another table (F) and a bookshelf (C). The sample area further included one rectangular room at the base of the triangular path, connected through a doorway and furnished with one staircase (J). The robot started traversing from point A and stopped at point B after covering the entire sample area and scanning through obstacles E, F, and J. The 3D mapped diagram in FIG. 4B clearly showed the sample area with the arranged obstacles and eliminated unnecessary aspects of the environment. Creating the hybrid map of the environment took about 12.6 minutes and consumed 6.5 KB of memory.
Example-2: Comparison of proposed 3D mapping technique with OctoMap technique
A comparison of the working of the proposed 3D mapping technique with OctoMap was carried out. FIG. 5 shows a comparison with OctoMap in terms of the time taken to map the environment. It was observed that the time taken by the proposed mapping technique was less than that of OctoMapping. With hybrid mapping of the environment, the robot took about 12.6 minutes on average across 10 scenarios, whereas OctoMapping took 24.7 minutes on average, almost double the time taken by the proposed technique. Further, FIG. 6 shows a comparison with respect to the memory requirements for 10 different scenarios. In terms of memory optimization, while creating a map of the environment, the hybrid map consumed 6.5 KB of memory on average for mapping the 10 scenarios, whereas OctoMapping consumed 510 KB.
Although the detailed description contains many specifics, these should not be construed as limiting the scope of the invention but merely as illustrating different examples and aspects of the invention. It should be appreciated that the scope of the invention includes other embodiments not discussed herein. Various other modifications, changes and variations which will be apparent to those skilled in the art may be made in the arrangement, operation and details of the system and method of the present invention disclosed herein without departing from the spirit and scope of the invention as claimed.
While the invention has been disclosed with reference to certain embodiments, it will be understood by those skilled in the art that various changes may be made and equivalents may be substituted without departing from the scope of the invention. In addition, many modifications may be made to adapt a particular situation or material to the teachings of the invention without departing from its scope as defined in the claims that follow.


Claims:
WE CLAIM:
1. A method (100) for generating a hybrid map based on 2D and 3D mapping for time and memory optimization in a robot traversing in an environment, the method comprising:
- receiving (102) a plurality of data sets from one or more sensors, wherein the sensors are configured to obtain data for 2D and 3D mapping;
- initialising (104) a particle filter by a processing unit (210) to obtain one or more particle filter subsets corresponding to the plurality of data sets received from the one or more sensors to generate 2D mapping;
- sampling (106) the plurality of data sets received from the one or more sensors to determine the robot's position in the environment based on a map and a control command at time t-1 provided to the robot;
- predicting (108) the robot's new position and generating weights corresponding to the new position by the processing unit (210);
- estimating (110) an angle of elevation of the robot by the processing unit (210);
- determining (112) whether the estimated angle of elevation lies within a traversable region by the processing unit;
- switching (114) to generating 3D mapping when the estimated angle of elevation lies within the traversable region, wherein the switching is not performed when the estimated angle of elevation lies outside the traversable region;
- determining (116) the efficiency of the robot's predicted position using the weights; and
- resampling (118) the particle filter subsets based on the efficiency of the robot's predicted position.
2. The method (100) as claimed in claim 1, wherein the one or more sensors (208) comprise LiDAR sensors (202), Kinect sensors (204), ultrasonic sensors (206), or any combination thereof.
3. The method (100) as claimed in claim 1, wherein the 2D map generated is a 2D occupancy grid map.
4. The method (100) as claimed in claim 1, wherein the 3D map generated is a 3D octomap.
5. The method (100) as claimed in claim 1, wherein the operation of the robot is performed using control commands including forward, reverse, left, right and stop.
6. The method (100) as claimed in claim 1, wherein the processing unit (210) is connected to a memory unit (212) configured to store the generated 2D and 3D maps.

Sd.- Dr V. SHANKAR IN/PA-1733
For and on behalf of the Applicants

Documents

Application Documents

# Name Date
1 202243062997-STATEMENT OF UNDERTAKING (FORM 3) [04-11-2022(online)].pdf 2022-11-04
2 202243062997-FORM FOR SMALL ENTITY(FORM-28) [04-11-2022(online)].pdf 2022-11-04
3 202243062997-FORM 1 [04-11-2022(online)].pdf 2022-11-04
4 202243062997-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [04-11-2022(online)].pdf 2022-11-04
5 202243062997-EVIDENCE FOR REGISTRATION UNDER SSI [04-11-2022(online)].pdf 2022-11-04
6 202243062997-EDUCATIONAL INSTITUTION(S) [04-11-2022(online)].pdf 2022-11-04
7 202243062997-DRAWINGS [04-11-2022(online)].pdf 2022-11-04
8 202243062997-DECLARATION OF INVENTORSHIP (FORM 5) [04-11-2022(online)].pdf 2022-11-04
9 202243062997-COMPLETE SPECIFICATION [04-11-2022(online)].pdf 2022-11-04
10 202243062997-Proof of Right [04-01-2023(online)].pdf 2023-01-04
11 202243062997-FORM-26 [04-01-2023(online)].pdf 2023-01-04
12 202243062997-FORM-9 [17-03-2023(online)].pdf 2023-03-17
13 202243062997-FORM 18 [30-06-2023(online)].pdf 2023-06-30
14 202243062997-FORM-8 [03-02-2025(online)].pdf 2025-02-03
15 202243062997-RELEVANT DOCUMENTS [20-03-2025(online)].pdf 2025-03-20
16 202243062997-POA [20-03-2025(online)].pdf 2025-03-20
17 202243062997-FORM 13 [20-03-2025(online)].pdf 2025-03-20
18 202243062997-OTHERS [07-05-2025(online)].pdf 2025-05-07
19 202243062997-EDUCATIONAL INSTITUTION(S) [07-05-2025(online)].pdf 2025-05-07
20 202243062997-FER.pdf 2025-09-29

Search Strategy

1 202243062997_SearchStrategyNew_E_SearchHistoryE_26-09-2025.pdf