Abstract: 7. ABSTRACT The proposed apparatus, integrating stereo vision (101, 102) and mmWave RADAR technologies (104) with a central processor (114), generates dense 3D spatial data for enhanced perception in challenging environments. By strategically combining these sensor modalities, the system compensates for individual weaknesses, such as stereo vision's susceptibility to adverse weather and RADAR's limitations in object recognition. Stereo vision (100) excels in depth perception and object recognition but falters in poor visibility, while RADAR provides robust performance in adverse conditions but lacks detailed object recognition. The fusion of these technologies leverages their strengths, enabling precise mapping, obstacle detection, and surveillance beyond visual obstructions. The modular design of the apparatus allows for flexibility and scalability, accommodating various configurations, placements, and sensor combinations tailored to specific applications. This comprehensive solution demonstrates adaptability and resilience, with potential applications spanning autonomous navigation, surveillance, and environmental mapping. The figure associated with the abstract is Fig. 1.
Description: 4. DESCRIPTION
Technical Field of the Invention
The present invention relates to environmental mapping, and more particularly to sensor fusion for 3D spatial data generation and mapping in autonomous navigation systems.
Background of the Invention
Mapping technologies have evolved significantly, experiencing a fundamental shift in how spatial information is perceived and interacted with. This transformation is driven by the strategic integration of advanced sensors and data sources, enhancing the capabilities of mapping technologies in generating precise spatial models. The innovation at hand centres on 3D mapping technology tailored for Autonomous Navigation, involving the representation of spatial environments through techniques that utilise various types of 3D spatial data. This approach entails capturing detailed spatial information in three dimensions, where each data element encapsulates information about the physical environment, including spatial coordinates and potential additional attributes like colour or intensity, depending on the specific requirements of the application.
It is important to note that the density of the 3D spatial data emerges as a critical factor influencing map accuracy, reflecting the quantity of information within a designated unit of space. A higher density of 3D spatial data results in more intricate and nuanced maps, providing a more comprehensive dataset for autonomous systems to navigate and make informed decisions. This progress fosters high-precision mapping and navigation, catering to a diverse range of industries such as robotics, virtual reality, and geospatial analysis.
Within the landscape of 3D mapping technologies, LiDAR (Light Detection and Ranging) has emerged as a leading player, acclaimed for its capability to generate detailed and accurate 3D maps. Operating on the principle of laser light emission and reflection, LiDAR systems measure distances and create point data by precisely recording the time taken for laser beams to reflect off surfaces. However, traditional LiDAR technologies exhibit inherent limitations that have spurred the quest for innovative solutions. One notable constraint lies in their performance in scenarios involving transparent surfaces, most notably glass. Traditional LiDAR struggles to accurately perceive and map objects beyond transparent barriers, posing challenges in environments where such surfaces are prevalent. Additionally, the efficacy of LiDAR diminishes in conditions characterised by low light or fog, hindering its practicality in real-world applications where diverse environmental challenges are commonplace.
As such, in the realm of mapping technologies, several mechanisms have been devised to address evolving mapping requirements. Notable among these is US10578742B2, which presents a LIDAR-based mapping method. This method involves acquiring initial data frames from LIDAR scans and Inertial Measurement Unit (IMU) navigation data. By extracting features and employing transformation matrices, it updates local coordinate frames to create global maps. While effective, this method relies heavily on LIDAR technology, which, as discussed, may struggle with transparent surfaces and adverse weather conditions, limiting its applicability in dynamic environments.
Another approach, as outlined in US5173947A, introduces an apparatus utilizing voxel-based processing cells to generate 3D maps. These cells store condition indicators corresponding to different sets of coordinates, facilitating the creation of comprehensive spatial models. However, this method relies on voxel-based representations, which may lack the detail and accuracy required for precise environmental mapping, especially in scenarios involving transparent surfaces or challenging lighting conditions.
A further advancement, described in US20210192762A1, presents a method leveraging machine learning to generate updated 2D depth maps from 3D radar heat maps and stereo images. While this approach offers improved depth perception, it falls short in providing comprehensive 3D maps necessary for dynamic mapping capabilities. Additionally, its reliance on 2D representations limits its effectiveness in scenarios requiring precise spatial understanding in three dimensions.
In contrast, the innovation proposed in this document introduces a multi-sensor apparatus tailored for environmental mapping and autonomous navigation. By integrating cameras, mmWave RADAR, and advanced processing algorithms, this apparatus overcomes the limitations of traditional mapping technologies. Unlike methods reliant solely on LIDAR or voxel-based processing, this apparatus offers a holistic approach by fusing data from multiple sensors to create comprehensive 3D spatial models. This fusion process ensures redundancy and enhanced accuracy, vital for reliable navigation in dynamic environments.
Moreover, the apparatus's ability to visualize combined 3D spatial data through mesh generation provides tangible representations for analysis and navigation. This feature sets it apart from methods focusing solely on 2D representations, enabling precise, real-time mapping and obstacle detection crucial for autonomous systems.
Additionally, the modular design of the apparatus allows for real-time adjustment of sensor configurations, ensuring adaptability to evolving environmental factors and operational requirements. This flexibility distinguishes it from static mapping methods, enabling dynamic mapping capabilities crucial for autonomous navigation systems.
Furthermore, the mmWave RADAR technology employed in the apparatus penetrates adverse weather conditions, ensuring reliable performance regardless of environmental challenges. This capability addresses a significant limitation of traditional LiDAR-based methods, making the apparatus suitable for real-world applications in various industries.
While prior art methods offer insights into mapping technologies, they often exhibit limitations in addressing the dynamic spatial mapping requirements of autonomous navigation systems. The innovation presented in this document bridges these gaps by offering a comprehensive multi-sensor approach that combines the strengths of different data sources. By overcoming the limitations of traditional methods and providing enhanced accuracy, adaptability, and reliability, this innovation represents a significant advancement in environmental mapping and autonomous navigation technology.
Brief Summary of the Invention
The following presents a simplified summary of the disclosure in order to provide a basic understanding to the reader. This summary is not an extensive overview of the disclosure and it does not identify key/critical elements of the invention or delineate the scope of the invention. Its sole purpose is to present some concepts disclosed herein in a simplified form as a prelude to the more detailed description that is presented later.
The presented invention introduces a multi-sensor apparatus and method designed to revolutionize environmental mapping and autonomous navigation. Traditional methods often rely on single-sensor systems, which may encounter limitations in accuracy and reliability, especially in dynamic and unpredictable environments. To overcome these challenges, the multi-sensor apparatus integrates sophisticated sensor fusion techniques to achieve precise mapping, accurate obstacle detection, and reliable autonomous navigation.
The primary objectives of the multi-sensor apparatus and method are to enhance accuracy and reliability, enable adaptability and versatility, facilitate comprehensive spatial representation, and provide tangible visualization of environmental data. The apparatus comprises a modular housing unit, cameras, mmWave RADAR, a central processor, and extendable cables. Cameras capture visual data, while mmWave RADAR generates RADAR data. The central processor processes the data, employing advanced algorithms to optimize the fusion process. Extendable cables allow for flexible configurations and adaptable sensor placement, ensuring redundancy and an expanded field of view.
The integrated cameras replicate human binocular vision, providing precise depth perception and object recognition. However, they may face challenges in adverse weather conditions and low-light environments. To complement the capabilities of cameras, mmWave RADAR technology is integrated into the apparatus. RADAR ensures consistent performance regardless of environmental factors but may lack precision in fine-grained object recognition.
The core innovation lies in the sensor fusion techniques employed by the system. By integrating data from multiple sensors, including cameras and RADAR, the system achieves a comprehensive spatial representation of the environment. The fusion process concatenates 3D spatial data obtained from both cameras and RADAR into a unified dataset, resulting in a detailed and accurate representation of the surroundings. Sophisticated algorithms align and synchronize the captured data, ensuring accurate spatial integration.
Once the 3D spatial data is fused, the system visualizes the combined data through mesh generation. This visualization provides a tangible representation of the merged data, enabling users to inspect, analyse, and generate maps of their surroundings with precision. Rendering techniques enhance the clarity and detail of the visual representation, facilitating comprehensive analysis and decision-making.
In summary, the multi-sensor apparatus and method offer a solution to challenges in environmental mapping and autonomous navigation. With its integration of advanced sensor technologies, sophisticated fusion techniques, and comprehensive spatial representation capabilities, the apparatus sets a new standard for environmental sensing and autonomous navigation systems.
Further scope of applicability of the present invention will become apparent from the detailed description given hereinafter. However, the detailed description and specific examples, while indicating preferred embodiments of the invention, will be given by way of illustration along with complete specification.
Brief Summary of the Drawings
The drawings accompanying the detailed description of the invention illustrate various aspects of the multi-sensor apparatus and method for environmental mapping and autonomous navigation. Here's a brief summary of the key figures:
Fig. 1.1 and Fig. 1.2: Illustrate the front and isometric views, respectively, of a single module sensor system with housing, providing an overview of the apparatus's physical configuration.
Fig. 2.1 and Fig. 2.2: Show the front and isometric views, respectively, of a single module sensor system without housing, highlighting the internal components and layout.
Fig. 3.1 and Fig. 3.2: Depict a single module with a multi-RADAR configuration, both with and without housing, showcasing variations in sensor setup.
Fig. 4.1 and Fig. 4.2: Present front and isometric views of a single module with a multi-RADAR configuration without housing, providing additional perspectives on the sensor arrangement.
Fig. 5a: Displays a field view of a single module with an obstacle in the RADAR field, demonstrating the apparatus's obstacle detection capabilities.
Fig. 5b: Shows a field view of multiple modules with obstacles in the RADAR field of view, highlighting the scalability and coverage of the system.
Fig. 6, Fig. 7, and Fig. 8: Illustrate the apparatus mounted on different types of robots, including an indoor navigation drone, a legged robot, and a mobile robot, showcasing the versatility of the invention across various platforms.
Fig. 9a and Fig. 9b: Present front and top views of the sensor fields of view with RADAR spatial data projected on a wall, simulating real-time data acquisition through RADAR and cameras.
Fig. 10: Depicts the apparatus with extendable cables, demonstrating the system's adaptability to different configurations and sensor placements.
Fig. 11: Shows a flowchart of the data acquisition pipeline, detailing the process of capturing and processing raw data from RADAR and stereo camera pairs.
Fig. 12: Illustrates the data processing prerequisite, outlining the initial filtering and formatting steps before further processing.
Fig. 13: Presents a flowchart of the mesh generation pipeline, describing the concatenation process to generate mesh from processed RADAR and vision 3D data.
Fig. 14.1, Fig. 14.2, and Fig. 14.3: Demonstrate the apparatus's glass detection capabilities, including setup, RADAR data, and comparison with LIDAR data.
Fig. 15.1 and Fig. 15.2: Showcase the apparatus's ability to detect objects behind thin walls, with setup and RADAR data presentation.
These drawings provide a comprehensive visual representation of the invention's components, configurations, and capabilities, supplementing the detailed description provided in the accompanying text.
Detailed Description of the Invention
The present disclosure emphasises that its application is not restricted to specific details of construction and component arrangement, as illustrated in the drawings. It is adaptable to various embodiments and implementations. The phraseology and terminology used should be regarded for descriptive purposes, not as limitations.
According to an exemplary embodiment of the present invention, a multi-sensor apparatus for environmental mapping and autonomous navigation is disclosed. The multi-sensor apparatus comprises a modular housing unit, cameras, mmWave RADAR, a central processor, and extendable cables. The cameras of the multi-sensor apparatus are configured for capturing visual data. The mmWave RADAR of the multi-sensor apparatus is configured for generating RADAR data. The central processor of the multi-sensor apparatus is configured for processing the visual data and RADAR data. The modules of the multi-sensor apparatus are extendable through extension cables, which are configured to allow flexible configurations and adaptable sensor placement.
In accordance with an exemplary embodiment of the present invention, the multi-sensor apparatus comprises communication modules configured to transmit processed data to external devices for further analysis or control purposes.
In accordance with an exemplary embodiment of the present invention, the multi-sensor apparatus comprises a power management system configured to optimize energy consumption and prolong the operational lifespan of the apparatus.
In accordance with an exemplary embodiment of the present invention, the cameras and mmWave RADAR of the multi-sensor apparatus may be positioned within the housing unit in multi-modular configurations to ensure redundancy and expanded field of view. The multi-sensor apparatus performs a fusion process on the 3-D spatial data obtained from both cameras and RADAR through the concatenation of the two sets of spatial data into a unified dataset, resulting in a more comprehensive and detailed representation of the environment.
In accordance with an exemplary embodiment of the present invention, the multi-sensor apparatus visualizes the combined 3D spatial data, now representing the environment as captured by both cameras and RADAR, through mesh generation, providing a tangible representation of the merged data for inspection, analysis, and map generation.
In accordance with an exemplary embodiment of the present invention, the fusion process of the multi-sensor apparatus includes a data alignment algorithm executed by the central processor to synchronize the captured data from cameras and mmWave RADAR for accurate spatial integration.
In accordance with an exemplary embodiment of the present invention, the fusion process of the multi-sensor apparatus includes outlier removal algorithms to eliminate noise and enhance the accuracy of the combined 3D spatial data.
In accordance with an exemplary embodiment of the present invention, the mesh generation of the multi-sensor apparatus for visualizing the combined 3D spatial data employs rendering techniques to enhance the clarity and detail of the representation.
In accordance with an exemplary embodiment of the present invention, the extendable cables of the multi-sensor apparatus allow for real-time adjustment of sensor placement based on surrounding environmental factors such as intrusive obstacles and operational requirements.
In accordance with an exemplary embodiment of the present invention, the mmWave RADAR technology of the multi-sensor apparatus is configured to penetrate adverse weather conditions, ensuring consistent performance regardless of environmental factors.
In accordance with an exemplary embodiment of the present invention, the cameras of the multi-sensor apparatus are calibrated to replicate human binocular vision, providing precise depth perception and fine-grained object recognition.
In accordance with an exemplary embodiment of the present invention, the modular housing unit of the multi-sensor apparatus comprises protective enclosures within the modular housing unit for the cameras and mmWave RADAR to ensure operational integrity in varying environmental conditions.
In accordance with an exemplary embodiment of the present invention, a method for environmental mapping and autonomous navigation using the multi-sensor apparatus is disclosed. The method comprises the steps of:
capturing visual data using cameras configured within a modular housing unit;
generating RADAR data using mmWave RADAR technology integrated within the modular housing unit;
processing the visual data and RADAR data using a central processor configured within the modular housing unit;
performing a fusion process on the 3-D spatial data obtained from both cameras and RADAR through the concatenation of the two sets of spatial data into a unified dataset;
visualizing the combined 3D spatial data, representing the environment as captured by both cameras and RADAR, through mesh generation;
utilizing the fused 3D spatial data for precise mapping, accurate obstacle detection, and reliable autonomous navigation.
Referring to the drawings now,
Figs. 1-10 illustrate the apparatus, which introduces sensor fusion techniques integrating cameras (100, 102), mmWave RADAR technologies (104), and a central processor (114) for 3-D spatial data generation. This fusion strategically addresses individual limitations, enhancing overall perception accuracy by compensating for each technology's weaknesses. For example, in challenging weather conditions, where stereo vision may encounter difficulties, the RADAR ensures robust performance. Conversely, stereo vision excels in fine-grained object recognition, complementing the RADAR's broader environmental awareness.
We go into further detail below:
Stereo Vision, as a sensor technology, offers distinct advantages in terms of precise depth perception and fine-grained object recognition. Leveraging binocular vision, it accurately gauges the distance to objects and discerns intricate details, making it ideal for applications requiring meticulous depth information and object identification. However, Stereo Vision faces challenges in adverse weather conditions and low-light environments. Its reliance on visual cues makes it less effective when visibility is compromised, limiting its applicability in scenarios with poor weather or lighting. Hence, while Stereo Vision excels in specific circumstances, its effectiveness diminishes under unfavorable conditions.
Conversely, RADAR emerges as a robust alternative, particularly in adverse weather and light conditions. With its ability to penetrate fog, rain, and darkness, RADAR ensures consistent performance regardless of environmental factors. Moreover, it provides accurate distance measurements and velocity data, enabling efficient surveillance even beyond obstacles like thin walls and corners. Despite these strengths, RADAR falls short in fine-grained object recognition, lacking the precision of Stereo Vision in discerning detailed features. Thus, while RADAR offers reliability in challenging environments, its efficacy in object identification is comparatively limited.
The fusion of Stereo Vision and RADAR capitalizes on the strengths of both technologies while mitigating their individual weaknesses. By integrating these sensors, the system achieves dense 3D spatial data generation, enabling precise mapping and obstacle detection. This fusion of sensors enhances versatility by enabling operation in low-light, and adverse weather conditions such as fog or rain, overcoming individual sensor limitations.
The combined system can detect transparent surfaces such as glass, expanding its applicability. This is showcased in Figures 14.1 to 14.3, where glass is visualized through point data. Figures 14.1 and 14.2 show this demonstration in action: with the mapping data collected by the apparatus (117) travelling from point A to point B, we can see dense spatial data, with chair (138) and chair (140) appearing as clusters (146) and (148) respectively, and glass plane (142) being represented by the cluster within rectangle (150). In Figure 14.3 specifically, a comparison with traditional LiDAR (144) demonstrates the superior accuracy of the fusion system in pinpointing the glass. The LiDAR data, represented by (152), is scattered in nature compared to the dense fused data. An additional feature of the apparatus is its ability to provide surveillance beyond obstacles like walls and to detect objects and movement outside camera visibility, as illustrated in Figures 15.1 and 15.2. Moving from point A to point B, detection continues even with the wall acting as a visual blockade, with radar points (154) representing the human in plain sight and radar points (156) detecting the human behind the wall.
Hence, the synergistic performance of Stereo Vision and RADAR offers a comprehensive solution across a wide range of scenarios, showcasing adaptability and resilience in challenging environments.
The collaborative output of stereo vision and mmWave RADAR technologies generates dense 3D data. This serves as a comprehensive spatial representation, facilitating precise mapping, accurate obstacle detection, and reliable autonomous navigation. Method 300 for data acquisition is detailed in Fig. 11 and Fig. 12. We first detail the data acquisition methodology for the stereo cameras and then discuss the RADAR, such that a person skilled in the field may understand its working.
The stereo vision component incorporates a calibrated pair of high-resolution cameras replicating human binocular vision. Taking a wall (134) as an example in Fig. 9a and Fig. 9b, we can visualize the camera field of view (130) on the wall surface. Operations 310-316 (excluding 312) involve initial checks for camera readiness and left and right side image filtering. Upon confirming the camera frame's validity in operation 340, the stereo vision component proceeds with the subsequent operations:
Operation 360 captures images as RGB data using the calibrated pair of cameras and converts the captured RGB data to grayscale through the following equation:
RGB[A] to Gray: Y = 0.299 · R + 0.587 · G + 0.114 · B
In Operation 370, stereo rectification is performed using the camera intrinsic and extrinsic parameters. Here, extrinsic parameters define the camera's position in the three-dimensional scene, while intrinsic parameters determine the optical centre's position and the camera's focal length. For a given image, a correction map is generated based on the rectification parameters, where u and v are the 2D image points and x and y are the calculated corresponding points given by the map.
Using this correction map, a remapping of the original image is performed, resulting in the destination image of our rectification process.
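By way of illustration only, the following sketch shows how Operations 360-370 could be realised with OpenCV, assuming a prior stereo calibration has produced the intrinsic matrices, distortion coefficients, and the rotation and translation between the two cameras; all function and parameter names here are illustrative and do not form part of the claimed apparatus.

```python
# Illustrative sketch of Operations 360-370 (grayscale conversion and rectification).
# Calibration inputs (K_left, dist_left, K_right, dist_right, R, T) are assumed
# to come from a prior stereo calibration.
import cv2

def rectify_pair(img_left, img_right, K_left, dist_left, K_right, dist_right, R, T):
    h, w = img_left.shape[:2]

    # Grayscale conversion (Y = 0.299*R + 0.587*G + 0.114*B).
    gray_left = cv2.cvtColor(img_left, cv2.COLOR_BGR2GRAY)
    gray_right = cv2.cvtColor(img_right, cv2.COLOR_BGR2GRAY)

    # Rectification transforms from intrinsic/extrinsic parameters;
    # Q is the disparity-to-depth reprojection matrix used later.
    R1, R2, P1, P2, Q, _, _ = cv2.stereoRectify(
        K_left, dist_left, K_right, dist_right, (w, h), R, T)

    # Correction maps: for each rectified pixel (u, v), where to sample
    # in the original image.
    map1_l, map2_l = cv2.initUndistortRectifyMap(
        K_left, dist_left, R1, P1, (w, h), cv2.CV_32FC1)
    map1_r, map2_r = cv2.initUndistortRectifyMap(
        K_right, dist_right, R2, P2, (w, h), cv2.CV_32FC1)

    # Remap the original images to obtain the rectified (destination) images.
    rect_left = cv2.remap(gray_left, map1_l, map2_l, cv2.INTER_LINEAR)
    rect_right = cv2.remap(gray_right, map1_r, map2_r, cv2.INTER_LINEAR)
    return rect_left, rect_right, Q
```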
Operation 380 utilizes stereo images to create a detailed disparity map which it then translates into a depth map.
In the context of the present operation, the Left Image Intensity Function, denoted as: IL(x,y) represents the intensity of the left image at pixel (x,y).
The Right Image Intensity Function, denoted as IR(x+d,y), represents the intensity of the right image at pixel (x+d,y), where d is the disparity.
The matching cost function, C(x,y,d), is defined as the absolute intensity difference between corresponding pixels in the left and right images:
C(x, y, d) = | IL(x, y) - IR(x + d, y) |
For a range of possible disparities d, the matching cost is calculated at each pixel, and the aggregate matching costs are computed to mitigate noise and enhance disparity estimation.
AggregatedCost(x, y, d) = min_d' ( AggregatedCost(x - 1, y, d') + C(x, y, d) )
The final disparity map D(x,y) is determined by selecting the disparity that minimizes the aggregated cost:
D(x, y) = argmin_d ( AggregatedCost(x, y, d) )
To convert the obtained disparity map into a depth map, a reprojection step is performed using the Q matrix. This involves transforming the disparity values into corresponding depth values, effectively generating a 3D representation of the scene.
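A minimal sketch of how Operation 380 could be implemented is given below, using OpenCV's semi-global block matcher as one possible realisation of the cost aggregation described above and the Q matrix from rectification for the reprojection step; the matcher parameters are illustrative assumptions only.

```python
# Illustrative sketch of Operation 380: disparity estimation on the rectified
# pair and reprojection to 3D points using the Q matrix.
import cv2
import numpy as np

def disparity_to_points(rect_left, rect_right, Q, num_disparities=128, block_size=9):
    # Semi-global matching aggregates matching costs over several paths,
    # analogous to the aggregated-cost minimisation described above.
    matcher = cv2.StereoSGBM_create(
        minDisparity=0,
        numDisparities=num_disparities,  # must be a multiple of 16
        blockSize=block_size)

    # StereoSGBM returns fixed-point disparities scaled by 16.
    disparity = matcher.compute(rect_left, rect_right).astype(np.float32) / 16.0

    # Reproject the disparity map into per-pixel 3D points (X, Y, Z) using Q.
    points_3d = cv2.reprojectImageTo3D(disparity, Q)
    valid = disparity > 0                # keep pixels with a valid match
    return disparity, points_3d[valid]
```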
Depth perception is crucial for autonomous systems, such as robots or vehicles, aiming for reliable navigation and obstacle avoidance. Without accurate depth perception, autonomous systems might struggle to interpret their surroundings, leading to potential navigation errors, inefficient path planning, or collisions with obstacles.
In Operation 390, the data undergoes texture-based outlier removal. Depth calculation for low-texture surfaces is often challenging, resulting in depth points with inaccurate values. To address this issue, we employ a high-texture pass filter, derived from the source image, to identify and remove outliers from the generated depth map.
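One possible realisation of this high-texture pass filter is sketched below, using a local gradient-magnitude measure on the source image; the window size and threshold are illustrative assumptions rather than prescribed values.

```python
# Illustrative sketch of Operation 390: texture-based outlier removal.
# A local gradient-magnitude measure acts as a high-texture pass filter;
# the threshold and window size are assumed example values.
import cv2

def filter_low_texture(depth_map, gray_image, texture_threshold=15.0, window=7):
    # Gradient magnitude as a simple local texture measure.
    gx = cv2.Sobel(gray_image, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray_image, cv2.CV_32F, 0, 1, ksize=3)
    texture = cv2.boxFilter(cv2.magnitude(gx, gy), -1, (window, window))

    # Invalidate depth values in low-texture regions, where stereo matching
    # tends to be unreliable.
    filtered = depth_map.copy()
    filtered[texture < texture_threshold] = 0.0
    return filtered
```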
Operation 400 then utilizes the filtered depth map to generate comprehensive 3D spatial data, enhancing the stereo vision component's capabilities in mapping and understanding the environment.
Finally, Operation 410 transmits the depth map, spatial data, and rectified frames to the data buffer for further processing. This structured sequence ensures the efficient capture, processing, and translation of visual data by the stereo vision component.
Emitting millimeter-wave signals, the mmWave RADAR sensors provide accurate distance measurements and velocity data by measuring reflections off objects. This data, combined with spatial coordinates acquired through reflections, significantly contributes to the overall 3-D spatial data generation process.
As a data acquisition prerequisite for the RADAR, a series of operations is carried out, as seen in Fig. 12:
Operation 312: Upload Configuration to RADAR
In the initial step, the configuration settings specific to the mmWave RADAR are uploaded. This involves transmitting parameters such as operating frequency, sweep patterns, and modulation settings to the RADAR device. Configuring the RADAR is crucial for optimizing its performance and tailoring it to the specific requirements of the application or environment.
Operation 317: Retrieve Binary Data
Following the configuration upload, the mmWave RADAR begins its operation, emitting signals and capturing the reflected signals from the surroundings as data points, as shown in Fig. 9a, where the radar field of view (118) is shown to encapsulate the wall (134), giving us radar data points such as (132). The RADAR system processes these signals and generates binary data representing the raw RADAR measurements. This binary data encapsulates information about the objects, obstacles, or features within the RADAR's field of view.
Operation 318: Format the Data in 3D Spatial Data Format
To enhance the usability of the RADAR data, the binary information obtained in the previous step is formatted into a 3D spatial data format. Before converting raw RADAR data to 3D points, however, several preprocessing steps refine the initial measurements. The raw data, composed of Analog-to-Digital Converter (ADC) packets, undergoes Fourier transforms to calculate critical parameters like range, azimuth, and elevation angles, facilitating spatial interpretation. Clutter removal techniques then eliminate unwanted signals, enhancing data accuracy. Constant False Alarm Rate (CFAR) thresholding distinguishes signals from background noise, aiding target detection in diverse conditions. The refined data is subsequently transmitted to the PC for further analysis.
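The following schematic sketch, assuming a generic mmWave device, illustrates a range FFT over the ADC samples and a simple 1-D cell-averaging CFAR threshold of the kind described above; actual packet layouts, window functions, and CFAR parameters depend on the specific RADAR hardware and configuration.

```python
# Schematic sketch of the RADAR preprocessing: range FFT followed by
# 1-D cell-averaging CFAR detection. All parameters are illustrative.
import numpy as np

def range_fft(adc_samples):
    # adc_samples: complex ADC samples for one chirp; the FFT maps them
    # to range bins (a window reduces spectral leakage).
    return np.fft.fft(adc_samples * np.hanning(len(adc_samples)))

def ca_cfar(power, guard=2, train=8, scale=3.0):
    # Cell-averaging CFAR: compare each cell against the mean power of the
    # surrounding training cells (excluding guard cells) times a scale factor.
    detections = np.zeros_like(power, dtype=bool)
    n = len(power)
    for i in range(train + guard, n - train - guard):
        lead = power[i - train - guard : i - guard]
        lag = power[i + guard + 1 : i + guard + 1 + train]
        noise = np.mean(np.concatenate([lead, lag]))
        detections[i] = power[i] > scale * noise
    return detections

# Example (hypothetical chirp data):
# spectrum = range_fft(adc_chirp)
# target_bins = ca_cfar(np.abs(spectrum) ** 2)
```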
This refined 3D spatial data goes through filtering for greater accuracy, as seen in Fig. 11: Operations 420-460 involve removing outliers along the x, y, and z axes, along with any intensity outliers that may be present, before generating the map using the filtered 3-D spatial data. RADAR's effectiveness in challenging scenarios, such as unexpected weather conditions, makes it a valuable component. RADAR enables surveillance beyond obstacles, enhancing environmental awareness and obstacle detection.
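A minimal sketch of such axis and intensity outlier removal is given below, using a simple 3-sigma bound as an illustrative criterion; the actual thresholds applied in Operations 420-460 may differ.

```python
# Illustrative sketch of Operations 420-460: outlier removal along the x, y,
# and z axes and on intensity, using an assumed 3-sigma bound.
import numpy as np

def remove_outliers(points, intensities, n_sigma=3.0):
    # points: (N, 3) array of x, y, z coordinates; intensities: (N,) array.
    values = np.column_stack([points, intensities])
    mean, std = values.mean(axis=0), values.std(axis=0)

    # Keep only samples within n_sigma of the mean on every channel.
    mask = np.all(np.abs(values - mean) <= n_sigma * std, axis=1)
    return points[mask], intensities[mask]
```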
The 3-D spatial data obtained from both cameras and RADAR undergoes a fusion process through the concatenation of the two sets of spatial data into a unified dataset, as seen in Fig. 13 (method 500, operations 510-540). It can be represented as follows:
Stereo Vision 3D Data = { (x1, y1, z1), (x2, y2, z2), ..., (xn, yn, zn) }
RADAR 3D Data = { (x1', y1', z1'), (x2', y2', z2'), ..., (xm', ym', zm') }
Concatenated Data = { (x1, y1, z1), (x2, y2, z2), ..., (xn, yn, zn), (x1', y1', z1'), (x2', y2', z2'), ..., (xm', ym', zm') }
This merging process results in a more comprehensive and detailed representation of the environment, leveraging the strengths of both camera and RADAR technologies. Finally, in operation 550, the combined 3D spatial data, now representing the environment as captured by both cameras and RADAR, is further visualized through mesh generation. This visualization provides a tangible representation of the merged data, allowing users to inspect and analyze spatial information with precision and generate a map of their surroundings.
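As an illustration of operations 510-550, the sketch below concatenates the two point sets and generates a mesh from the unified dataset; Open3D is used here merely as one possible visualisation backend, and the Poisson reconstruction depth is an illustrative assumption.

```python
# Illustrative sketch of the fusion and visualisation steps (operations 510-550):
# concatenating the stereo-vision and RADAR point sets and generating a mesh.
import numpy as np
import open3d as o3d

def fuse_and_mesh(stereo_points, radar_points):
    # Concatenate the two 3D point sets into a unified dataset.
    combined = np.vstack([stereo_points, radar_points])

    # Wrap the combined points and estimate normals for surface reconstruction.
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(combined)
    pcd.estimate_normals()

    # Surface reconstruction provides a tangible mesh of the merged data.
    mesh, _ = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=8)
    return mesh

# Example usage (hypothetical arrays of shape (N, 3) and (M, 3)):
# mesh = fuse_and_mesh(stereo_xyz, radar_xyz)
# o3d.visualization.draw_geometries([mesh])
```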
The sequence of operations and information presented in this description is provided for explanatory purposes only and may not accurately reflect the simultaneous or parallel nature of certain processes. The depiction of steps in a linear order is for ease of understanding and does not imply a strict chronological sequence in the actual implementation of the described methods. In practice, certain operations, especially those related to data acquisition and processing in RADAR and camera, may occur concurrently or in parallel rather than strictly following the written order. The purpose of this portrayal is to enhance clarity and facilitate comprehension. Readers are advised to consider the dynamic and interconnected nature of real-world implementations, recognizing that certain activities may overlap or occur simultaneously.
Description of Preferred Embodiments:
In certain preferred embodiments, the benefits of modularity go beyond physical configurations, with the modular housing units being designed to be extendable in nature. As depicted in Fig. 10, individual modules (107) may extend from the central processor (114) using extension cables (136) enabling user-friendly design configurations for enhanced mapping. This design facilitates efficient mapping of larger areas while bypassing any obtrusive objects or immovable parts close to the apparatus that may block the sensors.
In some preferred embodiments, the apparatus may exist as a single module while in others it may be a collection of modules based on field of view necessity. N number of modules may be considered depending on the various processing capabilities of the used processing unit.
In scenarios where the number of radar sensors exceeds two, additional odometry data may be generated to enhance localization accuracy and further augment the mapping process. This supplementary odometry information serves to refine the spatial understanding of the environment.
In some embodiments, the sensor housing system may contain only a combination of a plurality of cameras and RADAR, while in others additional sensors may be integrated to optimize performance and add functionalities, including but not limited to location sensors (e.g., GPS sensors, mobile device transmitters for location triangulation), proximity sensors (e.g., ultrasonic sensors, lidar, time-of-flight cameras), inertial sensors (e.g., accelerometers, gyroscopes, IMUs), altitude sensors, pressure sensors (e.g., barometers), audio sensors (e.g., microphones), and field sensors (e.g., magnetometers, electromagnetic sensors). These sensors can be combined in any suitable number and combination, ranging from one to multiple sensors. Therefore, the housing unit is not restricted to the listed module components and may be integrated with these additional sensors based on specific application requirements.
In some preferred embodiments, sensor placement is strategically executed to establish redundancy and expand the field of view through deliberate overlap of RADAR coverage when multiple modules are connected to the central processor, thereby improving blind spot visibility and fortifying system reliability. This can be seen in Fig. 5a, where we first see an example of how blind spots are generated when an obstacle (120) blocks the RADAR's field of view (118). The obscured region is marked by the shaded portion. Upon the placement of other modules, we are able to utilize RADAR from multiple perspectives to limit our blind spots, as denoted in Fig. 5b. The blind spot is observed to be reduced drastically, allowing for more comprehensive mapping.
In certain preferred embodiments, the adaptability of cameras is emphasised, allowing for their interchangeable use to cater to specific operational requirements. This adaptability is particularly advantageous when addressing varying environmental conditions, such as those encountered in outdoor or indoor settings, as well as in low-light situations. The cameras employed in these embodiments may encompass a broad spectrum, including but not limited to visible, infrared, or ultraviolet light ranges. Such flexibility ensures optimal performance and efficiency across diverse scenarios, enhancing the overall functionality and effectiveness of the system.
In some preferred embodiments, the module/modules can exist independently as a handheld apparatus while in others, the sensor system can be mounted, or attached, or placed on other objects, vehicles, or surfaces that are either static or capable of motion.
When incorporated onto vehicles, the system's deployment may vary depending on the intended purpose and the size of the vehicle. These vehicles encompass a wide spectrum, ranging from manned to unmanned platforms, each chosen according to specific operational requirements and constraints. This versatility allows for integration onto vehicles of varying scales and functionalities, enhancing the adaptability and applicability of the sensor system across different domains. The vehicles onto which the sensor system can be mounted span an extensive array, inclusive of but not limited to:
i. Land Vehicles: These encompass automobiles, trucks, buses, and off-road vehicles, among others. The sensor system's integration onto land vehicles facilitates applications such as, but not limited to, autonomous driving, navigation assistance, and obstacle detection.
ii. Aerial Vehicles: This category includes unmanned aerial vehicles (UAVs) and drones. Mounting the sensor system on such aerial platforms enables tasks such as, but not limited to, aerial reconnaissance, surveillance, and mapping.
iii. Personal Mobility Devices: This category encompasses bicycles, scooters, wheelchairs, and other personal transportation devices. Integration onto such devices enhances safety features such as navigation assistance, and environmental awareness for users.
For example, as seen in Fig. 6, the apparatus in its multi-modular structure (122) may be placed on indoor mapping drones (124) to facilitate precise indoor mapping for autonomous navigation and urban planning features. This may further be used to provide automated delivery services and automated transportation of goods. Similarly, the apparatus may also be placed on other robots, such as, but not limited to, legged robots (126) (as seen in Fig. 7) and mobile robots (128) (as seen in Fig. 8) to facilitate the same purposes depending on the need of the user.
The disclosed embodiments are provided for illustrative purposes to convey the technical solutions of the present invention. It is crucial to emphasise that the protection scope of this invention extends beyond the specific embodiments presented herein. Any person skilled in the relevant art can readily envision numerous equivalent modifications or substitutions within the technical scope of this invention. Such modifications or substitutions, even if not explicitly detailed, are deemed to be encompassed within the broad protection scope of the present invention.
Any provided applications are only a few examples and not the entire scope, or use of said invention.
Claims: 5. CLAIMS
We Claim:
1. A multi-sensor apparatus for environmental mapping and autonomous navigation, comprising:
a modular housing unit (107), cameras (100), mmWave RADAR (104), a central processor (114), and extendable cables (136);
wherein the cameras (100) are configured for capturing visual data, the mmWave RADAR (104) is configured for generating RADAR data, the central processor (114) is configured for processing the visual data and RADAR data, and each module is equipped with extendable cables (136) to allow flexible configurations and adaptable sensor placement;
Characterized in that,
the cameras (100) and mmWave RADAR (104) can be positioned within housing units of the apparatus in multi-modular configurations such as (107, 122) to ensure redundancy and expanded field of view;
the apparatus performs a fusion process on the 3-D spatial data obtained from both cameras and RADAR through the concatenation of the two sets of spatial data into a unified dataset, resulting in a more comprehensive and detailed representation of the environment; and
the apparatus visualizes the combined 3D spatial data, now representing the environment as captured by both cameras and RADAR, through mesh generation, providing a tangible representation of the merged data for inspection, analysis, and map generation.
2. The multi-sensor apparatus as claimed in claim 1, wherein the fusion process includes a data alignment algorithm executed by the central processor (114) to synchronize the captured data from cameras (100) and mmWave RADAR (104) for accurate spatial integration.
3. The multi-sensor apparatus as claimed in claim 1, wherein the fusion process includes outlier removal algorithms to eliminate noise and enhance the accuracy of the combined 3D spatial data.
4. The multi-sensor apparatus as claimed in claim 1, wherein the mesh generation for visualizing the combined 3D spatial data employs rendering techniques to enhance the clarity and detail of the representation.
5. The multi-sensor apparatus as claimed in claim 1, wherein the extendable cables (136) allow for real-time adjustment of sensor placements based on surrounding environmental factors and operational requirements.
6. The multi-sensor apparatus as claimed in claim 1, wherein the mmWave RADAR technology (104) is configured to penetrate adverse weather conditions, ensuring consistent performance regardless of environmental factors.
7. The multi-sensor apparatus as claimed in claim 1, wherein the cameras (100) are calibrated to replicate human binocular vision, providing precise depth perception and fine-grained object recognition.
8. The multi-sensor apparatus as claimed in claim 1, wherein the modular housing unit (107) comprises protective enclosures within the modular housing unit (107) for the cameras (100) and mmWave RADAR (104) to ensure operational integrity in varying environmental conditions.
9. The multi-sensor apparatus as claimed in claim 1, wherein the apparatus comprises communication modules configured to transmit processed data to external devices for further analysis or control purposes.
10. The multi-sensor apparatus as claimed in claim 1, wherein the apparatus comprises a power management system configured to optimize energy consumption and prolong the operational lifespan of the apparatus.
11. The multi-sensor apparatus as claimed in claim 1, wherein the apparatus permits the interchangeable configuration of cameras within a module, offering adaptability for diverse applications, whether in indoor or outdoor scenarios and varying lighting conditions throughout the day.
12. The multi-sensor apparatus as claimed in claim 1, wherein the apparatus acts as a universally adaptable sensor system for 3D mapping, utilizing RADAR and camera fusion, applicable to deployment on stationary, mobile, autonomous, and non-autonomous systems, including vehicles, stationary platforms, or as a standalone sensor.
13. The multi-sensor apparatus as claimed in claim 1, wherein the apparatus is configured to provide real-time 3D spatial data and map generation, facilitating dynamic obstacle detection within real-world environments.
14. A method for environmental mapping and autonomous navigation using the multi-sensor apparatus as claimed in claim 1, wherein the method comprises the steps of:
a. capturing visual data using cameras configured within a modular housing unit;
b. generating RADAR data using mmWave RADAR technology integrated within the modular housing unit;
c. processing the visual data and RADAR data using a central processor configured within the modular housing unit;
d. performing a fusion process on the 3-D spatial data obtained from both cameras and RADAR through the concatenation of the two sets of spatial data into a unified dataset;
e. visualizing the combined 3D spatial data, representing the environment as captured by both cameras and RADAR, through mesh generation;
f. utilizing the fused 3D spatial data for precise mapping, accurate obstacle detection, and reliable autonomous navigation.