Abstract: A LiDAR AND GNSS BASED SYSTEM AND METHOD FOR PRECISE LOCALIZATION OF AUTONOMOUS VEHICLE. The present invention relates to a LiDAR and GNSS based system and method for precise localization of an autonomous vehicle. The system comprises a LiDAR sensor (1) to perceive the environment around the vehicle in 3D, a GNSS sensor (2) providing the global latitude and longitude coordinates of a point, and a vehicle with a computer system (3) for mounting the sensors and processing the information provided by the sensors through the cloud server. Published with Figure 1
Description: FORM 2
THE PATENTS ACT, 1970
(39 of 1970)
&
The Patent Rules, 2003
COMPLETE SPECIFICATION
(See section 10 and rule 13)
1. TITLE OF THE INVENTION
A LiDAR AND GNSS BASED SYSTEM AND METHOD FOR PRECISE LOCALIZATION OF AUTONOMOUS VEHICLE
2. APPLICANT (S)
S. No. NAME NATIONALITY ADDRESS
1 NMICPS Technology Innovation Hub on Autonomous Navigation Foundation IN C/o Indian Institute of Technology Hyderabad, Kandi, Sangareddy, Telangana – 502284, India.
2 Indian Institute of Technology Hyderabad IN Kandi, Sangareddy, Telangana – 502284, India.
3. PREAMBLE TO THE DESCRIPTION
COMPLETE SPECIFICATION
The following specification particularly describes the invention and the manner in which it is to be performed.
FIELD OF INVENTION:
[001] The present invention relates to the field of autonomous vehicles. The present invention in particular relates to a system and method for precise localization of an autonomous vehicle.
DESCRIPTION OF THE RELATED ART:
[002] Positioning solutions based on matching technology have been widely used in the field of autonomous driving and are known for their high matching accuracy and precise attitude estimation results. However, such solutions place very strict requirements on the initial values: only when the initial position and attitude are good can the algorithm ensure accuracy and timeliness.
[003] In the absence of an initial value, directly matching the current frame data with the global map will result in a search range that is too large and a platform computational load that is too heavy, thus rendering the algorithm ineffective.
[004] In open environments, pure satellite positioning performs well.
[005] However, in complex environments, such as areas with dense buildings and tall trees, multipath effects are prone to occur. Likewise, in enclosed areas such as tunnels and underground, signal loss becomes a problem.
[006] In addition, when the GNSS signal changes, attitude calculation may also produce errors. Therefore, for the above reasons, the pure GNSS solution is gradually being abandoned and replaced by a multi-sensor fusion positioning solution.
[007] When using 3D LiDAR data to construct large-scale environmental maps, an inevitable problem is determining whether the robot has moved to a place that has been mapped before.
[008] Due to the existence of cumulative errors, the map constructed by the robot after passing through the same place twice often does not close properly. Therefore, in order to ensure the accuracy and precision of the map, it is necessary to detect whether the robot has already traversed a given environment.
[009] Reference may be made to the following:
[010] IN Publication No. 202441016942 relates to a navigation system to generate one or more navigation instructions for display on a display device of a vehicle. The system includes a portable vehicle dimension-capturing device (PVDCD) and a street dimension-capturing device (SDCD). The PVDCD is configured to determine the dimension attributes of the vehicle in which the PVDCD is positioned. The SDCD is configured to determine a real-time dimension attribute of one or more streets connecting a destination. The SDCD is remotely located and the destination is received as a user input. The display device of the vehicle is communicably coupled to the PVDCD and to the SDCD. The display device is configured to generate a three-dimensional perspective view of a model representative of map data showing at least one street selected from one or more streets.
[011] IN Publication No. 202311084026 relates to a high-precision localization system designed for autonomous vehicles. The system features an integrated array of advanced sensors, including LiDAR, radar, cameras, and GPS, which work in unison to gather real-time environmental data. The core of the system lies in the sophisticated sensor fusion module, which adeptly combines the diverse data streams from said sensors, creating a unified and enhanced environmental perception. Additionally, the system utilizes high-definition maps that provide detailed information about the road network, essential for precise navigation. Central to the system's functionality are advanced localization algorithms, which process the fused sensor data in conjunction with the map information to accurately determine the vehicle's position within its environment. The combination of sensor fusion and algorithmic processing ensures unparalleled accuracy and reliability in vehicle localization, important for the safe and efficient operation of autonomous vehicles.
[012] Publication No. CN117169942 relates to an unmanned vehicle repositioning method based on LiDAR/GPS/IMU fusion, and the method comprises the steps: firstly completing the construction of a point cloud map of a surrounding environment of an unmanned vehicle driving region through the information of a laser radar and an IMU sensor; performing coarse positioning by adopting a GPS module, determining an approximate position of the unmanned vehicle in a built prior map, and performing further positioning by using an NDT algorithm to obtain a global initial pose; the high-frequency IMU information is used for preprocessing the point cloud, the screened point cloud is used for feature extraction and is matched with a prior map to obtain a laser odometer result, the system state and a covariance matrix are updated, and finally, a relatively accurate local pose is output. Compared with a traditional NDT positioning method, the positioning robustness and accuracy are greatly improved.
[013] Patent No. US8346480 relates to a navigation and control system including a sensor configured to locate objects in a predetermined field of view from a vehicle. The sensor has an emitter configured to repeatedly scan a beam into a two-dimensional sector of a plane defined with respect to a first predetermined axis of the vehicle, and a detector configured to detect a reflection of the emitted beam from one of the objects. The sensor includes a panning mechanism configured to pan the plane in which the beam is scanned about a second predetermined axis to produce a three dimensional field of view. The navigation and control system includes a processor configured to determine the existence and location of the objects in the three dimensional field of view based on a position of the vehicle and a time between an emittance of the beam and a reception of the reflection of the emitted beam from one of the objects.
[014] IN Publication No. 202341054143 relates to systems and methods for enhancing autonomous vehicle control using machine learning and computer vision. The invention utilizes a combination of data processing, sensor integration, and decision-making algorithms to enable safe and efficient autonomous navigation in various environments.
[015] IN Publication No. 201847030672 relates to methods and computing devices implementing the methods for analyzing sensor information to identify an abnormal vehicle behavior. A computing device may monitor sensors (e.g., a closely-integrated vehicle sensor, a loosely-integrated vehicle sensor, a non-vehicle sensor, etc.) in the vehicle to collect the sensor information, analyze the collected sensor information to generate an analysis result, and use the generated analysis result to determine whether a behavior of the vehicle is abnormal. The computing device may also generate a communication message in response to determining that the behavior of the vehicle is abnormal and send the generated communication message to an external entity.
[016] Publication No. US2020401823 relates to operations on a point cloud representing a region. The operations may comprise identifying a cluster of points in the point cloud having a higher intensity than points outside the cluster of points. The operations may also comprise determining a bounding box around the cluster of points. The operations may also comprise identifying a traffic sign within the bounding box. The operations may also comprise projecting the bounding box to coordinates of an image of the region captured by a camera. The operations may also comprise employing a deep learning model to classify a traffic sign type of the traffic sign in a portion of the image within the projected bounding box. The operations may also comprise storing information regarding the traffic sign and the traffic sign type in a high definition (HD) map of the region.
[017] Publication No. CN113359782 relates to an unmanned aerial vehicle autonomous site selection landing method fusing LIDAR point cloud and image data. The method comprises the steps of generating a color point cloud image based on fusion of a laser radar point cloud data stream and an image data stream; calculating at least one smooth area in the color point cloud picture, taking the smooth area closest to the unmanned aerial vehicle as an initial landing point, and controlling the unmanned aerial vehicle to move towards the initial landing point; in the process of moving to the initial landing point, performing visual analysis on semantic information corresponding to all smooth areas, and screening to obtain at least one safe smooth area; and determining the safe smooth area closest to the current position of the unmanned aerial vehicle as a final landing point, and controlling the unmanned aerial vehicle to land to the final landing point. According to the method, the optimal landing point in the area is autonomously selected for landing over various terrains without relying on unreliable sensor information such as a GPS and an IMU, and the landing sites with potential safety hazards can be quickly and effectively screened.
[018] Publication No. US2021011164 relates to mobile LiDAR platforms for vehicle tracking. In various embodiments, a time-series of point clouds is received from a LiDAR sensor. Each point cloud of the time series of point clouds is projected onto a plane. The projected point clouds are provided to a convolutional neural network. At least one bounding box of a vehicle within each of the projected point clouds is received from the convolutional neural network. A relative speed of the vehicle is determined from the bounding boxes with the projected point clouds. A reference speed of the LiDAR sensor is determined from the GPS receiver. From the relative speed and the reference speed, an absolute speed of the vehicle is determined.
[019] IN Publication No. 201717027077 relates to methods and apparatus for real-time machine vision and point cloud data analysis provided for remote sensing and vehicle control. Point cloud data can be analyzed via scalable centralized cloud computing systems for extraction of asset information and generation of semantic maps. A data storage/preprocessor subdivides a data set for streaming to a distributed processing unit and operation via data analysis mechanisms. The output of the processing unit is aggregated by a map generator. Machine learning components can optimize data analysis mechanisms to improve asset and feature extraction from sensor data. Optimized data analysis mechanisms can be downloaded to vehicles for use in on-board systems analyzing vehicle sensor data. Semantic map data can be used locally in vehicles along with onboard sensors to derive precise vehicle localization and provide input to vehicle control systems.
[020] IN Publication No. 202217043406 relates to systems and methods are described for refining first point cloud data using at least second point cloud data and one or more sets of quantizer shifts. An example point cloud decoding method includes obtaining data representing at least a first point cloud and a second point cloud; obtaining information identifying at least a first set of quantizer shifts associated with the first point cloud; and obtaining refined point cloud data based on at least the first point cloud, the first set of quantizer shifts, and the second point cloud. The obtaining of the refined point cloud data may include performing a subtraction based on at least the first set of quantizer shifts. Corresponding encoding systems and methods are also described.
[021] Patent No. US10503171 relates to autonomous vehicle, and more particularly to method and system for determining a drivable navigation path for an autonomous vehicle. In one embodiment, a method may be provided for determining a drivable navigation path for the autonomous vehicle. The method may include receiving a base navigation path on a navigation map, a position and an orientation of the autonomous vehicle with respect to the base navigation path, and an environmental field of view (FOV). The method may further include deriving a navigational FOV based on the environmental FOV, the base navigation path, and the orientation of the vehicle. The method may further include determining a set of navigational data points from the navigational FOV, and generating at least a portion of the drivable navigation path for the autonomous vehicle based on the set of navigational data points.
[022] Publication No. US2024111056 relates to an enhanced light detection and ranging (LiDAR) assisted vehicle navigation system includes a global positioning system (GPS). A LiDAR device is in communication with the GPS generating and transmitting LiDAR signals reflected off a target proximate to a vehicle. Multiple vehicle sensors including at least a LiDAR sensor receive the LiDAR signals reflected off the target as a point cloud of data. A line profile of the target is used to identify if blooming is present. A characterization device performs a characterization analysis to identify an existence and extent of blooming of the LiDAR signals using the line profile of the target. A filter receives an output of the characterization device and removes the blooming if present to provide edge detection of the target.
[023] Publication No. GB2615100 relates to a point of interest and/or a road type is determined in a map by acquiring processed sensor data collected from one or more vehicles. A set of classification parameters is extracted from the processed sensor data and the one or more points of interest and its geographic location and/or one or more road types are determined based upon the set of classification parameters. A trained convolutional neural network classifier may be used. A plurality of individual trails for a plurality of object classes may be determined and potentially aggregated in a grid cell representation of a map. Class specific histograms may be determined for each cell in the map, where the histograms may include an average speed, angle deviation, creation time or directions of the trails. The sensor data may be LIDAR based or RADAR based with GPS.
[024] Publication No. EP3707469 relates to a system for registration of point clouds for autonomous driving vehicles is provided. The system receives a number of point clouds and corresponding poses from the autonomous driving vehicles equipped with LIDAR sensors capturing point clouds of a navigable area to be mapped, where the point clouds correspond to a first coordinate system. The system partitions the point clouds and the corresponding poses into one or more loop partitions based on navigable loop information captured by the point clouds. For each of the loop partitions, the system applies an optimization model to point clouds corresponding to the loop partition to register the point clouds. The system merges the one or more loop partitions together using a pose graph algorithm, where the merged partitions of point clouds are utilized to perceive a driving environment surrounding the autonomous driving vehicles.
[025] Publication No. CN114114215 relates to an improved calibration of a vehicle sensor is disclosed based on static objects detected in an environment in which the vehicle is traveling. A first sensor, such as LiDAR, may be calibrated to a global coordinate system via a second pre-calibration sensor, such as GPS IMU. Static objects, such as flags, present in the environment are detected. The type of the detected object is determined from static map data. Point cloud data representative of the static object is captured by the first sensor and a first transformation matrix for performing a transformation from a local coordinate system of the first sensor to a local coordinate system of the second sensor is iteratively re-determined until a desired calibration accuracy is achieved. The transformation to the global coordinate system is then effected via application of the first transformation matrix, followed by a second known transformation matrix.
[026] Publication No. US2022066006 relates to a vehicle sensor based on static objects detected within an environment being traversed by the vehicle is disclosed. A first sensor such as a LiDAR can be calibrated to a global coordinate system via a second pre-calibrated sensor such as a GPS IMU. Static objects present in the environment are detected such as signage. Point cloud data representative of the static objects are captured by the first sensor and a first transformation matrix for performing a transformation from a local coordinate system of the first sensor to a local coordinate system of the second sensor is iteratively redetermined until a desired calibration accuracy is achieved. Transformation to the global coordinate system is then achieved via application of the first transformation matrix followed by application of a second known transformation matrix to transition from the local coordinate system of the second pre-calibrated sensor to the global coordinate system.
[027] Publication No. US2020025578 relates to a process for constructing highly accurate three-dimensional mappings of objects along a rail tunnel in which GPS signal information is not available includes providing a vehicle for traversing the tunnel on the rails, locating on the vehicle a LiDAR unit, a mobile GPS unit, an inertial navigation system, and a speed sensor to determine the speed of said vehicle. A stationary GPS, whose geolocation is well-defined, is located near the entrance of the tunnel. Image-identifiable targets having a well-defined geodetic locations are located at preselected locations within the tunnel. The vehicle traverses the tunnel, producing mass point cloud datasets along said tunnel. Precise measurements of 3D rail coordinates are also obtained. The datasets are adjusted based on the mobile GPS unit, the inertial navigation system, the speed sensor, the location of the image-identifiable targets, and the precise measurements of 3D rail coordinates, to thereby produce highly accurate, and substantially geodetically correct, three-dimensional mappings of objects along the tunnel.
[028] The article entitled “Abandoned technology maps lidar blind spot” by Sally Ward-Foxton; eetimes.eu; September 30, 2019 discusses how advanced driver assistance systems (ADAS) and autonomous vehicles need to see the world in 3D. Technologies like radar and LiDAR are addressing the challenge, but roof-mounted sensors must often contend with the shadow cast by the body of the vehicle, where curbs, bushes, and, most importantly, pedestrians and cyclists can be in danger. LiDAR works by shining laser light at the environment, with a typical vertical field of view between 80° and 120° (Waymo’s LiDAR module has a vertical field of view of 95°, for example). If the LiDAR module is mounted on the roof of the vehicle, there will naturally be some blind spots in the shadows created by the vehicle body. Most companies get around this by mounting more LiDAR modules on the vehicle’s sides and/or bumpers, but the modules are expensive, and the range and accuracy are overkill for the near-field detection of pedestrians and curbs. An ultrasound system covering the near field neatly complements a single roof-mounted LiDAR, as it has exactly the opposite coverage. Most autonomous-driving implementations today rely on some combination of LiDAR, radar, GPS, and camera systems. LiDAR is currently the front-runner for intelligent vision in autonomous driving, given its impressive range and accuracy, though it is not a done deal. (Tesla is a notable exception; CEO Elon Musk has famously called LiDAR a “fool’s errand”.)
[029] The article entitled “Lidar technology in autonomous vehicles” by Sujata Yadav; intellect-partners; January 23, 2024 discusses LiDAR, an acronym for “light detection and ranging” or “laser imaging, detection, and ranging”, a sensor used for determining ranges by targeting an object or a surface with a laser and measuring the time for the reflected light to return to the receiver. With the functionality of scanning its environment, it is also sometimes called 3D laser scanning. In particular, LiDAR image registration (LIR) is a critical task that focuses on techniques for aligning or registering LiDAR point cloud data with corresponding images. Regarding the advantages of mounting LiDAR above autonomous vehicles: within an autonomous vehicle, the LiDAR sensor captures extensive data through rapid analysis of numerous laser pulses. This information, forming a ‘3D point cloud’ from laser reflections, undergoes processing by an integrated computer to generate a dynamic three-dimensional representation of the surroundings. Training the onboard AI model with meticulously annotated point cloud datasets becomes pivotal to ensuring the precise creation of this 3D environment by LiDAR.
[030] The HD-map-based localization discussed above uses a GPS sensor to localize vehicles on the map, and for that, centimeter-level accurate GPS data using a costly RTK connection is required. Also, the range of the RTK base station is limited to a few kilometers.
[031] However, a significant challenge arises when the vehicle initiates its journey at a considerable distance from the map's origin point. In such cases, the LiDAR-based localization algorithm fails to localize the vehicle within the mapped environment, a phenomenon commonly known as the kidnapped robot problem. LiDAR-based localization is typically employed only from a fixed starting point, usually the origin of the map, to the goal position. Consequently, if a vehicle commences autonomous navigation from a midpoint somewhere on the map, away from the origin, localization may fail due to the unavailability of similar features for reference. Also, the cost of a GNSS sensor that gives an accurate and precise position in the global map is very high, and it requires a base station setup, which provides a precise position only within a limited range of the base station. The existing GPS-based systems often encounter limitations in indoor environments.
[032] In order to overcome the limitations of the above-listed prior art, the present invention aims to provide a LiDAR and GNSS based system and method for precise localization of an autonomous vehicle. The system operates seamlessly in both indoor and outdoor settings, providing the same high level of accuracy, down to the centimeter, regardless of the environment.
[033] Whereas conventional LiDAR-based localization is able to localize only near the origin of the map, the present system involves creating equidistant unique nodes on the point cloud map by LiDAR and GNSS data fusion. Then, with live GNSS data, the closest node is identified, and its corresponding initial pose is used to initialize the localization using LiDAR at intermediate points on the map.
OBJECTS OF THE INVENTION:
[034] The principal object of the present invention is to provide a LiDAR and GNSS based system and method for precise localization of an autonomous vehicle.
[035] Another object of the present invention is to provide a system and method for precise localization of an autonomous vehicle which operates seamlessly in both indoor and outdoor settings.
[036] Yet another object of the present invention is to provide a system and method for precise localization of an autonomous vehicle which provides a high level of accuracy, down to the centimeter, regardless of the environment.
SUMMARY OF THE INVENTION:
[037] The present invention relates to a LiDAR and GNSS based system and method for precise localization of an autonomous vehicle. The system comprises a LiDAR sensor (1) to perceive the environment around the vehicle in 3D, a GNSS sensor (2) providing the global latitude and longitude coordinates of a point, and a vehicle with a computer system (3) for mounting the sensors and processing the information provided by the sensors through the cloud server.
[038] The process begins by recording LiDAR and GPS data to construct the map. Within this 3D point cloud map generated from LiDAR data, distinct identification nodes are established. These unique ID nodes contain GPS data (latitude and longitude) along with their corresponding spatial points in the map (x, y, and z coordinates). When the vehicle operates in autonomous mode and commences navigation from any point within the map, the system identifies the nearest ID node using real-time GPS data matched with the GPS data of unique nodes in the map. Subsequently, the nearest node is identified, and the corresponding position and orientation of the vehicle with respect to the map frame are provided as the initial pose to the LiDAR localization method. The localization method then commences matching from the received initial pose and gradually corrects and converges to accurately localize the vehicle after several iterations. This correction process is based on matching similar features between the map cloud and the real-time point cloud data received from the LiDAR sensor.
[039] The system merges LiDAR and GPS data to create distinct identification nodes. Using GPS data to estimate an approximate position offers an initial reference point for the LiDAR localization method to accurately align with the point cloud map. By identifying the nearest ID node using real-time GPS data, the system can initiate autonomous navigation from any point within the mapped area without requiring a fixed starting point. Through iterative correction based on matching features between the map cloud and real-time LiDAR data, the system continuously refines and converges to accurately localize the vehicle, even in dynamic environments.
BRIEF DESCRIPTION OF THE DRAWINGS:
[040] It is to be noted, however, that the appended drawings illustrate only typical embodiments of this invention and are therefore not to be considered limiting of its scope, for the invention may admit to other equally effective embodiments.
[041] Figure 1 shows the block diagram according to the present invention;
[042] Figure 2 shows the flowchart according to the present invention;
[043] Figure 3 shows the creation of unique nodes;
[044] Figure 4 shows finding the nearest unique node and publishing the initial pose.
DETAILED DESCRIPTION OF THE INVENTION:
[045] The present invention provides a LiDAR and GNSS based system and method for precise localization of an autonomous vehicle. Referring to Figure 1, the system comprises a LiDAR sensor (1) to perceive the environment around the vehicle in 3D, a GNSS sensor (2) providing the global latitude and longitude coordinates of a point, and a vehicle with a computer system (3) for mounting the sensors and processing the information provided by the sensors through the cloud server.
[046] Figure 1 shows the block diagram of the proposed system. A LiDAR sensor is utilized to construct a 3D map by stitching together continuous frames. Utilizing GNSS data, unique nodes are generated on the 3D map at equidistant intervals between each node. Subsequently, this map is employed for the localization of autonomous vehicles, with the nearest node identified in real time using GNSS data. The initial pose is then published and subscribed to by the localizer node. The localizer node optimizes the matching score computed by the normal distribution transform using Newton's nonlinear optimization technique to determine the position and orientation of the autonomous vehicle.
[047] Localizing an autonomous vehicle based on a LiDAR map presents challenges, particularly when attempting to localize from distant intermediate points within the 3D map. The difficulty arises because the algorithm typically starts matching from the map's origin. This poses an issue when the vehicle begins from an intermediate point, leading to difficulties in aligning the current data with the map. To address this issue, we propose a solution involving the creation of equidistant unique nodes on the point cloud map through LiDAR and GNSS data fusion. Subsequently, with real-time GNSS data, the nearest node is identified, and its corresponding initial pose is utilized to initialize localization using LiDAR at intermediate points on the map.
[048] Figure 2 shows the flow chart of the proposed system. LiDAR and GNSS are utilized to create a map through LiDAR odometry and mapping techniques, which involve matching edge and plane features to generate a 3D representation of the environment. Along the trajectory of the data collection vehicle used for mapping, unique nodes are generated through the fusion of LiDAR and GNSS data. These nodes contain both local and global positions on the map. Subsequently, this map is employed for real-time vehicle localization.
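By way of illustration (not forming part of the claimed subject matter), the equidistant node-generation step can be sketched as follows. This is a minimal sketch, assuming the mapping trajectory is available as a list of samples, each pairing the local LiDAR-odometry pose with the GNSS fix recorded at the same instant; the function name generate_unique_nodes, the plain-dict representation, and the 5-meter spacing are illustrative assumptions, not values specified by the embodiment.

```python
import math

def generate_unique_nodes(trajectory, spacing_m=5.0):
    """Emit a unique node every `spacing_m` meters along the mapping run.

    Each trajectory sample is assumed to be a dict carrying the local map
    pose (x, y, z, quaternion) from LiDAR odometry together with the GNSS
    fix (lat, lon) recorded at the same instant.
    """
    nodes = []
    last = None
    travelled = 0.0
    for sample in trajectory:
        if last is not None:
            # distance travelled in the map frame since the last node
            travelled += math.hypot(sample["x"] - last["x"],
                                    sample["y"] - last["y"])
        if last is None or travelled >= spacing_m:
            nodes.append({"id": len(nodes), **sample})
            travelled = 0.0
        last = sample
    return nodes
```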
[049] During live operation, GNSS data is compared using the Haversine formula to determine the nearest node, and the initial pose is then published to the localizer node.
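The Haversine formula itself is standard; a minimal Python sketch is given below, with the function name and the mean-Earth-radius constant as illustrative choices. It returns the great-circle distance between two latitude/longitude fixes, which is the quantity used to rank the pre-recorded nodes against the live GNSS data.

```python
import math

EARTH_RADIUS_M = 6371000.0  # mean Earth radius in meters

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon fixes (degrees)."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2.0) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2.0) ** 2)
    return 2.0 * EARTH_RADIUS_M * math.asin(math.sqrt(a))
```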
[050] The localizer node computes the matching score between the current point cloud and the map cloud using the normal distribution transform, starting from the initial pose information provided. By employing iterative scan matching and optimization techniques, the algorithm aligns the sensor data with the map to accurately estimate the robot's position and orientation. This optimization process refines the score to precisely determine the vehicle's position through successive iterations employing Newton's nonlinear optimization principles.
[051] The system records LiDAR and GPS data to construct the map. Within this 3D point cloud map generated from LiDAR data, distinct identification nodes are established. These unique ID nodes contain GPS data (latitude and longitude) along with their corresponding spatial points in the map (x, y, and z coordinates). When the vehicle operates in autonomous mode and commences navigation from any point within the map, the system identifies the nearest ID node using real-time GPS data matched with the GPS data of unique nodes in the map. Subsequently, the nearest node is identified, and the corresponding position and orientation of the vehicle with respect to the map frame are provided as the initial pose to the LiDAR localization method. The localization method then commences matching from the received initial pose and gradually corrects and converges to accurately localize the vehicle after several iterations. This correction process is based on matching similar features between the map cloud and the real-time point cloud data received from the LiDAR sensor (Figure 2).
[052] The system merges LiDAR and GPS data to create distinct identification nodes. Using GPS data to estimate an approximate position offers an initial reference point for the LiDAR localization method to accurately align with the point cloud map. By identifying the nearest ID node using real-time GPS data, the system can initiate autonomous navigation from any point within the mapped area without requiring a fixed starting point. Through iterative correction based on matching features between the map cloud and real-time LiDAR data, the system continuously refines and converges to accurately localize the vehicle, even in dynamic environments.
[053] For LiDAR-based autonomous navigation, the NDT (Normal Distributions Transform) localizer method is used to solve the localization problem. The NDT localizer method facilitates precise localization in robotics by comparing sensor data with a pre-existing map. Through iterative scan matching and optimization techniques, the method aligns the sensor data with the map to estimate the robot's position and orientation accurately. One challenge, however, is that if the initial estimate of the robot's pose is significantly inaccurate, the method may converge to a local minimum during optimization, resulting in incorrect localization. This may happen when the vehicle is far from the origin of the LiDAR HD map.
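For illustration only, the toy two-dimensional sketch below shows what the NDT matching score computes: the map cloud is voxelized into cells, each summarized by a mean and covariance, and every transformed scan point is scored against its cell's normal distribution. The embodiment operates in 3D and maximizes this score with Newton's nonlinear optimization; the optimizer is omitted here, and all names (build_ndt_grid, ndt_score), the cell size, and the covariance regularization are illustrative assumptions rather than the production implementation.

```python
import numpy as np

def build_ndt_grid(map_xy, cell=2.0):
    """Voxelize a 2-D map cloud: per-cell mean and covariance."""
    grid = {}
    keys = np.floor(map_xy / cell).astype(int)
    for k in np.unique(keys, axis=0):
        pts = map_xy[(keys == k).all(axis=1)]
        if len(pts) >= 3:  # a stable covariance needs a few points
            grid[tuple(k)] = (pts.mean(axis=0),
                              np.cov(pts.T) + 1e-3 * np.eye(2))
    return grid

def ndt_score(pose, scan_xy, grid, cell=2.0):
    """Negative NDT matching score for pose = (tx, ty, yaw).

    Newton's method, as used by the embodiment, would iteratively update
    the pose with the gradient and Hessian of this objective until it
    converges; only the objective itself is sketched here.
    """
    tx, ty, yaw = pose
    c, s = np.cos(yaw), np.sin(yaw)
    R = np.array([[c, -s], [s, c]])
    pts = scan_xy @ R.T + np.array([tx, ty])  # transform scan into map frame
    score = 0.0
    for p in pts:
        entry = grid.get(tuple(np.floor(p / cell).astype(int)))
        if entry is not None:
            mu, cov = entry
            d = p - mu
            score += np.exp(-0.5 * d @ np.linalg.solve(cov, d))
    return -score
```

A poor initial pose places the scan points in the wrong cells, so the optimizer can settle in a local minimum of this score; this is precisely the failure mode that the unique-node initialization described below avoids.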
[054] Distinctive identification nodes are generated through the fusion of LiDAR and GPS data. GPS localization aids in approximating the position, providing an initial estimate for the LiDAR localization method (NDT localizer) to precisely align the point cloud.
[055] The methodology includes the following steps:
[056] 1.) Creating unique data nodes:
[057] For LiDAR-based autonomous vehicle navigation, a high-definition (HD) LiDAR map is used. The (x, y) coordinates are recorded as waypoints along the desired path. In the GNSS-fused approach, the latitude and longitude values provided by the GNSS sensor are additionally recorded. Finally, the data includes the z-axis height and the quaternion (q1, q2, q3, q4) for orientation, provided by the NDT localizer at each node.
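A minimal sketch of one such node record follows, assuming Python; the field names are illustrative, while the contents mirror the paragraph above.

```python
from dataclasses import dataclass

@dataclass
class UniqueNode:
    """One pre-recorded unique node on the HD LiDAR map."""
    node_id: int
    x: float    # map-frame waypoint, meters
    y: float
    z: float    # z-axis height from the NDT localizer
    lat: float  # GNSS latitude, degrees
    lon: float  # GNSS longitude, degrees
    q: tuple    # orientation quaternion (q1, q2, q3, q4)
```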
[058] 2.) Finding nearest unique node:
[059] If the vehicle starts at any point on the HD map other than the origin, the NDT localizer cannot determine its initial position using the pre-recorded data alone. To address this challenge, our approach leverages real-time data from the GNSS sensor. A search method is employed to identify the closest pre-recorded node (waypoint) based on the GNSS data.
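A minimal sketch of that search is given below, reusing the hypothetical haversine_m function and UniqueNode record from the earlier sketches; a linear scan is shown because the node list is sparse, though any nearest-neighbour index would serve equally well.

```python
def find_nearest_node(nodes, live_lat, live_lon):
    """Return the pre-recorded node closest to the live GNSS fix."""
    return min(nodes, key=lambda n: haversine_m(n.lat, n.lon,
                                                live_lat, live_lon))
```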
[060] 3.) Publishing appropriate initial pose:
[061] Once the search method identifies the nearest waypoint with GNSS data (unique node), a ROS topic containing its information is created. This published topic is then subscribed to by the NDT localizer method to set the initial pose of the vehicle.
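A minimal sketch of this publishing step follows, assuming rospy and the UniqueNode record from the earlier sketch; geometry_msgs/PoseWithCovarianceStamped is the message type named later in this specification, while the /initialpose topic name and the "map" frame id are illustrative conventions rather than mandated values.

```python
import rospy
from geometry_msgs.msg import PoseWithCovarianceStamped

def publish_initial_pose(node):
    """Publish the nearest unique node's pose for the NDT localizer."""
    pub = rospy.Publisher("/initialpose", PoseWithCovarianceStamped,
                          queue_size=1, latch=True)  # latch for late subscribers
    msg = PoseWithCovarianceStamped()
    msg.header.stamp = rospy.Time.now()
    msg.header.frame_id = "map"  # pose is expressed in the LiDAR map frame
    msg.pose.pose.position.x = node.x
    msg.pose.pose.position.y = node.y
    msg.pose.pose.position.z = node.z
    (msg.pose.pose.orientation.x,
     msg.pose.pose.orientation.y,
     msg.pose.pose.orientation.z,
     msg.pose.pose.orientation.w) = node.q
    pub.publish(msg)
```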
[062] 4.) Refined Pose Estimation for NDT localizer:
[063] The NDT localization method now initializes matching with the initial pose provided by the previous step. It iteratively adjusts the translation and rotation values and recalculates the score until it converges, indicating the best alignment between the live scan and the map. This adjustment automatically corrects positional errors, if any. This provides the precise position and orientation of the vehicle to start autonomous navigation from any intermediate point on the map.
[064] Figure 3 shows the creation of unique nodes and Figure 4 shows finding the nearest unique node and publishing the initial pose. The LiDAR and GPS sensors are mounted on top of the vehicle and connected to the on-board compute system in the vehicle. The method runs on the Robot Operating System (ROS).
[065] The system identifies a unique node ID, where each ID incorporates the corresponding local position and orientation relative to the LiDAR map frame. Utilizing this data, an initial pose message of type "geometry_msgs/PoseWithCovarianceStamped" is generated, comprising header information, pose, and orientation. Subsequently, the localizer node subscribes to this initial pose topic to commence the point cloud matching procedure against the map for localizing the vehicle. The localizer node optimizes the matching score using both the normal distribution transform and Newton's nonlinear optimization techniques to refine the position and orientation of the autonomous vehicle. Eventually, the vehicle's precise localization is achieved.
[066] The system can be used in any autonomous vehicle for autonomous navigation with centimeter-level accuracy. The localization system can be adapted to any vehicle or robot, which makes the proposed system a unique solution for autonomous navigation. Moreover, the LiDAR and GPS-based localization system is versatile enough to operate seamlessly in both indoor and outdoor environments.
[067] The system requires only a low-cost GPS sensor to initialize localization within a 3D point cloud map created using LiDAR. The LiDAR sensor is not an additional sensor required for autonomous navigation, as the autonomous vehicle already uses it for obstacle detection and tracking.
[068] The precise localization provided by GPS-based systems relies on setting up a base station for an RTK connection with satellites, necessitating additional infrastructure, and the range of RTK is limited. However, the in-vehicle deployment of LiDAR and low-cost GPS sensor-based localization significantly reduces this complexity. By leveraging onboard sensors, such as LiDAR, and affordable GPS technology, this approach offers a streamlined solution for achieving accurate localization without the need for extensive external infrastructure.
[069] Numerous modifications and adaptations of the system of the present invention will be apparent to those skilled in the art, and thus it is intended by the appended claims to cover all such modifications and adaptations which fall within the true spirit and scope of this invention.
Claims: WE CLAIM:
1. A LiDAR and GNSS based system for precise localization of an autonomous vehicle, comprising a LiDAR sensor (1) to perceive the environment around the vehicle in 3D, a GNSS sensor (2) providing the global latitude and longitude coordinates of a point, and a vehicle with a computer system (3) for mounting the sensors and processing the information provided by the sensors through the cloud server, wherein unique nodes are generated on the 3D map at equidistant intervals between each node and this map is employed for the localization of autonomous vehicles, with the nearest node identified in real time using GNSS data; the initial pose is published and subscribed to by the localizer node, which optimizes the matching score computed by the normal distribution transform using Newton's nonlinear optimization technique to determine the position and orientation of the autonomous vehicle.
2. The method for precise localization of an autonomous vehicle includes the following steps:
a) recording LiDAR and GPS data to construct the map; within this 3D point cloud map generated from LiDAR data, distinct identification nodes are established;
b) when the vehicle operates in autonomous mode and commences navigation from any point within the map, the system identifies the nearest ID node using real-time GPS data matched with the GPS data of unique nodes in the map;
c) subsequently, the nearest node is identified, and the corresponding position and orientation of the vehicle with respect to the map frame are provided as the initial pose to the LiDAR localization method;
d) the localization method then commences matching from the received initial pose and gradually corrects and converges to accurately localize the vehicle after several iterations;
e) this correction process is based on matching similar features between the map cloud and the real-time point cloud data received from the LiDAR sensor;
f) using GPS data to estimate an approximate position offers an initial reference point for the LiDAR localization method to accurately align with the point cloud map; and
g) by identifying the nearest ID node using real-time GPS data, the system can initiate autonomous navigation from any point within the mapped area without requiring a fixed starting point, and through iterative correction based on matching features between the map cloud and real-time LiDAR data, the system continuously refines and converges to accurately localize the vehicle, even in dynamic environments.
3. The method for precise localization of an autonomous vehicle, as claimed in claim 2, wherein the unique ID nodes contain GPS data (latitude and longitude) along with their corresponding spatial points in the map (x, y, and z coordinates).
4. The method for precise localization of an autonomous vehicle, as claimed in claim 2, wherein the creation of unique data nodes includes the following steps:
a) recording the (x, y) coordinates as waypoints along the desired path;
b) additionally recording the latitude and longitude values provided by the GNSS sensor; and
c) finally, including in the data the z-axis height and the quaternion (q1, q2, q3, q4) for orientation, provided by the NDT localizer at each node.