7. ABSTRACT
Mapping for autonomous systems, such as self-driving vehicles, relies on stereo vision systems for environment perception. However, stereo vision struggles with texture-less surfaces such as walls or glass. Traditional solutions use lasers to project textures onto these surfaces so that they can be detected. This invention introduces a novel approach by integrating radar technology into the sensor fusion process. By combining radar data with stereo vision, the system reconstructs texture-less planes that are invisible to the stereo camera alone. Through map creation, clustering, planar regression modeling, and projection, detailed 3D representations are achieved. Edge detection and contour matching ensure precise spatial alignment, enhancing the system's ability to perceive and navigate environments with texture-less surfaces. This innovation improves the reliability and safety of autonomous systems. The figure associated with the abstract is Fig. 1.
4. DESCRIPTION
Technical Field of the Invention
The present invention relates to environmental mapping. More particularly, it focuses on the integration of RADAR and camera data for planar reconstruction, with emphasis on real-time processing and dynamic scene perception.
Background of the Invention
Autonomous systems, including self-driving cars and UAVs, rely heavily on accurate perception systems to navigate effectively and safely in dynamic environments. Mapping is fundamental to their operation, providing detailed representations of the surroundings crucial for localization, path planning, and decision-making. Traditionally, mapping combines sensor data from LiDAR, cameras, and GPS to create digital maps. However, these methods encounter difficulties when mapping texture-less surfaces that lack distinct visual features or depth cues.
Stereo vision, inspired by human binocular vision, is commonly employed in autonomous systems to provide depth perception and object detection. By analyzing images from two slightly offset cameras, stereo vision systems can infer depth information and detect objects in the environment. Yet, stereo vision struggles with texture-less surfaces such as blank walls or featureless objects, posing challenges for accurate perception, particularly in urban or indoor settings where such surfaces are common.
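As context for this limitation, the standard depth-from-disparity relation for a rectified stereo pair (a well-known result, not taken from the specification) makes the failure mode concrete:

```latex
% Depth from disparity for a rectified stereo pair (pinhole model):
Z = \frac{f\,B}{d}
% Z: depth, f: focal length, B: baseline between the two cameras,
% d: disparity between matched pixels in the left and right images.
```

Disparity d is obtained by matching corresponding pixels between the two images; on a texture-less surface there is nothing distinctive to match, so d, and hence Z, cannot be recovered. This is precisely the gap the RADAR fusion described below addresses.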
To address these challenges, traditional approaches have used laser-based methods to project artificial textures onto texture-less surfaces. These textures act as reference points for stereo vision systems, aiding in the detection and recognition of otherwise difficult surfaces. However, laser-based solutions come with drawbacks, including high cost, complexity, and potential interference with other sensors or environmental conditions.
The invention seeks to overcome these limitations by integrating RADAR data with camera data to enhance planar reconstruction for real-time environmental mapping. RADAR provides complementary capabilities to stereo vision, particularly in scenarios where visual information alone may be insufficient. By combining RADAR data with stereo vision, the system can accurately reconstruct texture-less planes invisible to the stereo camera due to the lack of visual features. This fusion enables the system to overcome the limitations of traditional stereo vision approaches and enhance perception capabilities in various real-world scenarios.
Moreover, this innovative approach eliminates the need for laser-based methods, reducing complexity, cost, and potential points of failure in autonomous systems. By leveraging RADAR technology and advanced processing techniques, the invention aims to create detailed and comprehensive 3D representations of the environment, facilitating precise navigation and decision-making for autonomous systems in diverse and challenging environments.
Brief Summary of the Invention
The following presents a simplified summary of the disclosure in order to provide a basic understanding to the reader. This summary is not an extensive overview of the disclosure and it does not identify key/critical elements of the invention or delineate the scope of the invention. Its sole purpose is to present some concepts disclosed herein in a simplified form as a prelude to the more detailed description that is presented later.
It is a primary objective of the invention to enhance planar reconstruction by integrating RADAR and camera data, improving accuracy and robustness for real-time environmental mapping.
It is yet another object of the invention to address the limitations of conventional planar reconstruction methods by leveraging RADAR data, particularly in challenging environmental conditions or scenarios with sparse visual features.
It is yet another object of the invention to enhance the effectiveness and resilience of the reconstruction process through the integration of RADAR, contributing to improved spatial analysis and immersive experiences.
It is yet another object of the invention to facilitate seamless integration of multi-sensor information by employing a systematic approach to store processed data, including clustered groups, plane coefficients, and IDs, in a designated data buffer.
According to an aspect of the present invention, a method for planar reconstruction with RADAR and camera fusion (100) is disclosed. The method comprises RADAR sensors, high-resolution cameras, a map creation module, a clustering module, a planar regression model, a projection module, and a 3D spatial data conversion module.
In accordance with the aspect of the present invention, the process begins with the creation of a map, a volumetric representation of the environment, by merging odometry and RADAR data. This provides a foundation for the subsequent steps in the reconstruction process.
In accordance with the aspect of the present invention, RADAR data undergoes analysis to group nearby points based on spatial proximity, revealing distinct objects or surfaces within the environment. Following clustering, a planar regression model is applied to estimate plane coefficients, defining mathematical equations representing planar surfaces in 3D space. This step contributes to the accurate characterization of planar surfaces, enhancing the overall fidelity of the reconstruction.
In accordance with the aspect of the present invention, the projection module utilizes the derived plane coefficients to project additional points onto identified planes, extending the spatial coverage of the reconstruction. Simultaneously, edge detection algorithms are applied to filtered 2D camera data to identify prominent contours, which serve as boundaries between different regions or surfaces in the environment. Centroids of clustered groups are then matched against these identified contours to establish spatial correspondence.
In accordance with the aspect of the present invention, the retained points, along with their corresponding contour IDs, undergo a conversion process to transform them back into three-dimensional spatial data. This comprehensive conversion, guided by the identified contours and associated IDs, results in a detailed and accurate representation of the environment in three-dimensional space.
The processed data, including clustered groups, plane coefficients, and IDs, are stored in a designated data buffer, facilitating seamless integration of multi-sensor information.
Further scope of applicability of the present invention will become apparent from the detailed description given hereinafter. However, the detailed description and specific examples, while indicating preferred embodiments of the invention, are given by way of illustration along with the complete specification.
Brief Summary of the Drawings
The invention will be further understood from the following detailed description of a preferred embodiment taken in conjunction with the appended drawings, in which:
Fig. 1 (100) illustrates the block diagram of a method for planar reconstruction with RADAR and camera fusion, in accordance with the exemplary embodiment of the present invention.
Fig. 2 illustrates the diagram of a planar reconstruction, in accordance with the exemplary embodiment of the present invention.
Fig. 3 illustrates the diagram of a planar reconstruction, in accordance with the exemplary embodiment of the present invention.
Detailed Description of the Invention
The present disclosure emphasizes that its application is not restricted to the specific details of construction and component arrangement illustrated in the drawings. It is adaptable to various embodiments and implementations. The phraseology and terminology used should be regarded as descriptive, not limiting.
The terms "including," "comprising," or "having" and variations thereof are meant to encompass listed items and their equivalents, as well as additional items. The terms "a" and "an" do not denote quantity limitations but signify the presence of at least one of the referenced items. Terms like "first," "second," and "third" are used to distinguish elements without implying order, quantity, or importance.
The present disclosure underscores the adaptability and versatility of the invention across various embodiments and implementations. It clarifies that the application is not confined to specific construction details or component arrangements depicted in the drawings. Instead, it emphasizes the descriptive nature of the phraseology and terminology used, stressing that they should be interpreted as informative rather than limiting.
In this invention, the innovation focuses on enhancing the detection and reconstruction of texture-less surfaces in autonomous systems by integrating radar technology with stereo vision systems. Radar, renowned for its ability to detect objects through radio wave reflections, complements stereo vision, particularly in scenarios where visual information alone may be insufficient. By fusing radar data with stereo vision, the system can reconstruct texture-less planes that may not be visible to the stereo camera due to the absence of visual features. Radar sensors provide additional depth and distance information, enabling the system to accurately detect the presence of texture-less surfaces and reconstruct their geometries in the 3D environment. This fusion of radar and stereo vision data surpasses the limitations of traditional stereo vision approaches and enhances perception capabilities in various real-world scenarios.
Moreover, integrating radar and stereo vision eliminates the necessity for laser-based methods to project artificial textures onto texture-less surfaces. Instead of relying on external sources to create texture, the system leverages the inherent capabilities of radar sensors to detect and reconstruct texture-less planes. This approach reduces complexity, cost, and potential points of failure in autonomous systems. By combining odometry and RADAR data, along with advanced processing techniques, the method enables the creation of detailed and comprehensive 3D representations of the environment.
The method involves several key steps spanning RADAR data processing and stereo image optimization: map creation, clustering, planar regression modeling, projection, edge detection, contour matching, and conversion to 3D spatial data. These operations collectively contribute to enhancing the fidelity and reliability of the reconstructed environment, facilitating precise visualization and navigation for autonomous systems.
In an exemplary embodiment of the invention, the method for planar reconstruction with RADAR and camera fusion involves the utilization of RADAR sensors and high-resolution cameras. The RADAR-related operations include map creation, clustering, planar regression modeling, and projection, while camera data processing involves edge detection, contour matching, and conversion to 3D spatial data.
The process begins with the integration of odometry and RADAR data to construct a volumetric representation of the environment known as a map. Odometry data provides insights into the robot's motion and position, while RADAR data offers information about the spatial distribution of surrounding objects. By merging these datasets, a comprehensive 3D model of the environment is generated. To ensure the completeness and accuracy of the map, any gaps or inconsistencies in the RADAR data are addressed by utilizing the last 10 frames of odometry data to fill in missing information.
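While the specification describes this step only at the system level, the following minimal Python sketch illustrates one way to realize it: radar returns are transformed into a common world frame using odometry poses, and a rolling buffer of the last 10 poses is retained so that dropped radar frames do not break map continuity. All class and variable names are illustrative assumptions, not part of the specification.

```python
import numpy as np
from collections import deque


class RadarMap:
    """Accumulates world-frame radar points using odometry poses.

    Illustrative sketch only: the specification's volumetric map is
    approximated here by a flat point set for brevity.
    """

    def __init__(self, history: int = 10):
        self.points = []                            # world-frame point blocks
        self.pose_history = deque(maxlen=history)   # last 10 odometry poses

    def add_frame(self, radar_xyz: np.ndarray, R: np.ndarray, t: np.ndarray):
        """radar_xyz: (N, 3) points in the sensor frame; R (3x3) rotation
        and t (3,) translation from the odometry for this frame."""
        self.pose_history.append((R, t))
        if radar_xyz.size:
            # Transform sensor-frame points into the world frame.
            self.points.append(radar_xyz @ R.T + t)
        # When a radar frame is empty or dropped, the retained pose
        # history still advances, so later frames register correctly.

    def as_array(self) -> np.ndarray:
        return np.vstack(self.points) if self.points else np.empty((0, 3))
```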
Once the map is created, the RADAR data undergoes clustering analysis. Clustering involves grouping nearby RADAR points together based on their spatial proximity to identify clusters corresponding to distinct objects or surfaces within the environment. This clustering process forms the foundation for subsequent analysis and reconstruction tasks.
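The specification does not name a clustering algorithm; density-based clustering such as DBSCAN is one common way to group points by spatial proximity. A minimal sketch, assuming the world-frame points produced above:

```python
import numpy as np
from sklearn.cluster import DBSCAN  # one common proximity-based clusterer


def cluster_radar_points(points: np.ndarray, eps: float = 0.5,
                         min_samples: int = 5) -> dict:
    """Group (N, 3) radar points by spatial proximity.

    Returns {cluster_id: (M, 3) points}. DBSCAN labels sparse noise
    points as -1; those are dropped here. eps (metres) and min_samples
    are illustrative tuning values, not prescribed by the specification.
    """
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(points)
    return {int(cid): points[labels == cid]
            for cid in np.unique(labels) if cid != -1}
```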
Subsequently, a planar regression model is applied to the clustered points to compute plane coefficients, which define mathematical equations representing planar surfaces in three-dimensional space. The planar regression model employs statistical techniques to estimate these coefficients, enhancing the fidelity of the reconstruction process by accurately characterizing planar surfaces.
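The specification says only that statistical techniques estimate the coefficients. A total-least-squares fit via SVD is one standard estimator (a RANSAC wrapper would add outlier robustness); a sketch:

```python
import numpy as np


def fit_plane(points: np.ndarray) -> np.ndarray:
    """Fit a*x + b*y + c*z + d = 0 to an (N, 3) cluster by total least
    squares: the plane normal is the singular vector of the centered
    points with the smallest singular value. Returns (a, b, c, d) with
    (a, b, c) a unit normal."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]              # direction of least variance
    d = -normal @ centroid       # offset so the centroid lies on the plane
    return np.append(normal, d)
```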
Leveraging the derived plane coefficients, additional points are projected onto the identified planes. This projection process extends the spatial coverage of the reconstruction by adding points that conform to the estimated surfaces, thereby expanding the spatial representation of the environment and leading to a more comprehensive and detailed reconstruction.
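Given the fitted coefficients, extending a plane reduces to the standard orthogonal projection p' = p - (n . p + d) n for a unit normal n; a sketch consistent with fit_plane above:

```python
import numpy as np


def project_onto_plane(points: np.ndarray, coeffs: np.ndarray) -> np.ndarray:
    """Orthogonally project (N, 3) points onto the plane
    a*x + b*y + c*z + d = 0, where coeffs = (a, b, c, d) and (a, b, c)
    is a unit normal (as returned by fit_plane)."""
    normal, d = coeffs[:3], coeffs[3]
    signed_dist = points @ normal + d          # distance along the normal
    return points - np.outer(signed_dist, normal)
```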
The retained points, along with their corresponding contour IDs, undergo a comprehensive conversion process to transform them back into three-dimensional spatial data. Guided by the identified contours and associated IDs, this conversion facilitates the reconstruction of a detailed and accurate representation of the environment in three-dimensional space. The resulting 3D spatial data serve as a foundational framework for subsequent analysis, visualization, and interpretation tasks, enabling comprehensive exploration and understanding of the environment's geometric properties and spatial relationships.
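The specification does not detail the conversion itself; under a pinhole camera model it is the usual back-projection of pixel coordinates through the inverse intrinsic matrix, scaled by depth. A hypothetical sketch (the intrinsics K and per-point depths are assumed inputs, and the contour IDs simply ride along):

```python
import numpy as np


def pixels_to_3d(uv: np.ndarray, depth: np.ndarray,
                 contour_ids: np.ndarray, K: np.ndarray):
    """Back-project retained 2D points: X = depth * K^-1 [u, v, 1]^T.

    uv: (N, 2) pixel coordinates; depth: (N,) metric depths;
    contour_ids: (N,) IDs carried through unchanged; K: 3x3 intrinsics.
    Returns ((N, 3) camera-frame points, contour_ids).
    """
    homo = np.hstack([uv, np.ones((uv.shape[0], 1))])   # homogeneous pixels
    rays = homo @ np.linalg.inv(K).T                    # normalized rays
    return rays * depth[:, None], contour_ids
```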
Furthermore, processed data, including clustered groups, plane coefficients, and IDs, are stored in a designated data buffer for efficient organization and retrieval. Each cluster or group is assigned a unique identifier to facilitate organization and retrieval. Additionally, centroid IDs and cluster IDs obtained from the edge detection processing of 2D camera data are integrated into the data buffer, enabling seamless integration of multi-sensor information.
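A minimal sketch of such a buffer, with field names invented for illustration (the specification prescribes only that clustered groups, plane coefficients, and IDs be stored under unique identifiers):

```python
from dataclasses import dataclass, field
from typing import Optional
import numpy as np


@dataclass
class PlaneRecord:
    """One buffer entry; field names are illustrative assumptions."""
    cluster_id: int                    # unique identifier for the cluster
    points: np.ndarray                 # clustered world-frame radar points
    coefficients: np.ndarray           # plane coefficients (a, b, c, d)
    centroid_id: Optional[int] = None  # from 2D edge-detection processing
    contour_id: Optional[int] = None   # matched contour, if any


@dataclass
class DataBuffer:
    """Keyed store enabling retrieval of fused multi-sensor results."""
    records: dict = field(default_factory=dict)

    def store(self, record: PlaneRecord) -> None:
        self.records[record.cluster_id] = record

    def get(self, cluster_id: int) -> Optional[PlaneRecord]:
        return self.records.get(cluster_id)
```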
Now referring to the drawings:
Fig. 1 illustrates a block diagram of the method for planar reconstruction with RADAR and camera fusion, and Figs. 2 and 3 illustrate diagrams of a planar reconstruction, in accordance with the exemplary embodiment of the present invention. The map is created by merging odometry and RADAR data, and clustering analysis is applied to the RADAR data to identify distinct objects. A planar regression model is then employed to estimate plane coefficients, contributing to the accurate characterization of planar surfaces. Projection of additional points onto the identified planes extends spatial coverage. Simultaneously, edge detection algorithms process 2D camera data, identifying contours and matching centroids of clustered groups against these contours to establish spatial correspondence. The retained points, along with corresponding contour IDs, undergo a conversion process, transforming them into detailed three-dimensional spatial data. Processed data, including clustered groups, plane coefficients, and IDs, are stored in a designated data buffer for efficient organization and retrieval.
The detailed operations involved in the method include map creation, clustering, planar regression modeling, projection, edge detection, contour matching, and conversion to 3D spatial data. Each operation contributes to the overall objective of enhancing the reconstruction of texture-less surfaces in autonomous systems through the fusion of RADAR and camera data.
The invention's operational framework comprises several key processes:
1. Map Creation (Operation 230): Integrating odometry and RADAR data initiates the construction of a comprehensive 3D model of the environment. Odometry data informs about the robot's motion, while RADAR data provides insights into object distribution. The last frames of odometry data help address any RADAR data gaps or inconsistencies, ensuring map completeness and accuracy.
2. Clustering (Operation 300): After map creation, RADAR data undergoes clustering analysis, grouping nearby points based on spatial proximity. This identifies clusters corresponding to distinct environmental objects or surfaces, forming the basis for subsequent analysis and reconstruction tasks.
3. Planar Regression Model (Operation 310): Following clustering, a planar regression model computes plane coefficients, defining mathematical equations for planar surfaces in 3D space. Statistical techniques estimate these coefficients, enhancing reconstruction fidelity by accurately characterizing planar surfaces.
4. Projection (Operation 320): Using derived plane coefficients, additional points are projected onto identified planes, extending spatial coverage and enhancing environment representation. Aligning new points with identified planes leads to a more comprehensive and detailed reconstruction.
In parallel, stereo image optimization occurs:
1. Edge Detection and Contour Matching (Operations 220-240): Edge detection algorithms identify prominent contours in filtered 2D camera data, delineating boundaries between different environmental regions or surfaces. Centroids of clustered groups are matched with the identified contours to establish spatial correspondence, ensuring accurate spatial alignment for precise reconstruction (a code sketch of this stage follows this list).
2. Conversion to 3D Spatial Data (Operations 260-280): Retained points, along with corresponding contour IDs, undergo a conversion process to transform them into detailed 3D spatial data. This guided conversion facilitates the reconstruction of a detailed and accurate representation of the environment in 3D space, enabling comprehensive exploration of its geometric properties and spatial relationships.
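The sketch below illustrates the edge-detection and contour-matching stage using OpenCV's Canny detector and contour extraction. The thresholds, and the assumption that cluster centroids have already been projected into pixel coordinates, are illustrative choices rather than requirements of the specification:

```python
import cv2
import numpy as np


def match_centroids_to_contours(gray: np.ndarray,
                                centroids_px: np.ndarray) -> dict:
    """Map each projected cluster centroid to a contour ID.

    gray: filtered single-channel camera image (uint8);
    centroids_px: (N, 2) cluster centroids already projected to pixels.
    Returns {centroid_index: contour_index}.
    """
    edges = cv2.Canny(gray, 50, 150)                 # prominent edges
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    matches = {}
    for cen_id, (u, v) in enumerate(centroids_px):
        # pointPolygonTest with measureDist=True gives the signed distance
        # to each contour (positive inside); the maximum therefore picks
        # the containing or nearest contour.
        dists = [cv2.pointPolygonTest(c, (float(u), float(v)), True)
                 for c in contours]
        if dists:
            matches[int(cen_id)] = int(np.argmax(dists))
    return matches
```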
The data buffer serves the purpose of storing processed data, including clustered groups, plane coefficients, and IDs, for efficient organization and retrieval. Each cluster or group is assigned a unique identifier, and centroid IDs and cluster IDs from 2D camera data processing are integrated, facilitating seamless integration of multi-sensor information.
In summary, the invention represents a significant advancement in the field of autonomous systems by addressing the limitations of traditional stereo vision approaches and providing a more robust and accurate method for reconstructing texture-less surfaces. By integrating RADAR technology with stereo vision systems, the invention offers enhanced perception capabilities and facilitates precise navigation and decision-making in diverse and challenging environments.
5. CLAIMS
I/We Claim:
1. A method for planar reconstruction with RADAR and camera fusion (100), comprising:
RADAR sensors and high-resolution cameras, the method including map creation, clustering (200), planar regression modelling (210), and projection (220);
a map created by merging odometry and RADAR data, providing a volumetric representation of the environment;
said clustering analysis performed on RADAR data to identify distinct objects or surfaces within the environment;
said planar regression model then applied to the clustered points to estimate plane coefficients, improving the fidelity of the reconstruction process;
projecting additional points onto identified planes based on the derived coefficients, extending the spatial coverage of the reconstruction;
edge detection algorithms applied on filtered 2D camera data to identify contours, and matching centroids of clustered groups against these contours to establish spatial correspondence.
2. The method (100) as claimed in claim 1, wherein the integration of RADAR and camera data enables real-time planar reconstruction suitable for dynamic environments.
3. The method (100) as claimed in claim 1, wherein the clustering analysis involves grouping nearby RADAR points together based on spatial proximity, thereby identifying clusters of points corresponding to distinct objects or surfaces within the environment.
4. The method (100) as claimed in claim 1, wherein the planar regression model employs statistical techniques to estimate plane coefficients, providing valuable insights into the geometric properties of the environment.
5. The method (100) as claimed in claim 1, wherein the edge detection algorithms are applied to filtered 2D camera data to identify prominent contours, which delineate boundaries between different regions or surfaces in the environment.
6. The method (100) as claimed in claim 1, wherein the conversion process involves transforming retained points, along with their corresponding contour IDs, back into three-dimensional spatial data, facilitating a detailed and accurate representation of the environment.
7. The method (100) as claimed in claim 1, wherein the data buffer stores processed data, including clustered groups, plane coefficients, and IDs, with each cluster or group assigned a unique identifier for efficient organization and retrieval.
6. DATE AND SIGNATURE
Dated this 22nd day of February, 2025
| # | Name | Date |
|---|---|---|
| 1 | 202441012783-PROVISIONAL SPECIFICATION [22-02-2024(online)].pdf | 2024-02-22 |
| 2 | 202441012783-FORM FOR STARTUP [22-02-2024(online)].pdf | 2024-02-22 |
| 3 | 202441012783-FORM FOR SMALL ENTITY(FORM-28) [22-02-2024(online)].pdf | 2024-02-22 |
| 4 | 202441012783-FORM 1 [22-02-2024(online)].pdf | 2024-02-22 |
| 5 | 202441012783-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [22-02-2024(online)].pdf | 2024-02-22 |
| 6 | 202441012783-EVIDENCE FOR REGISTRATION UNDER SSI [22-02-2024(online)].pdf | 2024-02-22 |
| 7 | 202441012783-DRAWINGS [22-02-2024(online)].pdf | 2024-02-22 |
| 8 | 202441012783-Proof of Right [14-03-2024(online)].pdf | 2024-03-14 |
| 9 | 202441012783-FORM 3 [14-03-2024(online)].pdf | 2024-03-14 |
| 10 | 202441012783-ENDORSEMENT BY INVENTORS [14-03-2024(online)].pdf | 2024-03-14 |
| 11 | 202441012783-DRAWING [22-02-2025(online)].pdf | 2025-02-22 |
| 12 | 202441012783-COMPLETE SPECIFICATION [22-02-2025(online)].pdf | 2025-02-22 |