Abstract: The robot combines quantum dot cameras, which are known for their high sensitivity in the low-light conditions encountered when searching for survivors and hazards behind debris. A UKF-based SLAM algorithm provides accurate localization and mapping, which is essential for estimating the robot's position on uneven ground and for building an accurate map of the environment. The locomotion drive system allows the robot to traverse difficult terrain effectively and stably, while environmental conditions such as temperature, humidity, and gases are continuously assessed before the robot performs any operation. On-board computing is powered by an NVIDIA Jetson Xavier NX, making the system highly efficient in its use of available power. Integrated wireless communication over LTE allows remote control of the robot as well as two-way data transfer with command centers and first-aid teams in the disaster area. This all-encompassing approach is proposed to enhance post-earthquake response by employing state-of-the-art technology in a modular robotic system that facilitates rescue and reconnaissance tasks with improved operational and decision-making capability. 3 claims and 2 figures
Description:
Field of the Invention
This invention pertains to autonomous robotics, specifically Simultaneous Localization and Mapping (SLAM) technology. It aims to improve disaster rescue by enabling autonomous robots to navigate, map, and collect vital data in disaster-hit areas, thereby supporting rescue operations and improving situational awareness.
Background of the Invention
Rescue robots assist in operations where the environment is too hostile for human presence, aiding in the search and rescue of victims and performing tasks that could harm the people involved. These robots can cover areas that are difficult or dangerous for people to enter, including collapsed buildings, toxic zones, and areas with extreme climates. The history of rescue robots began with mine-clearing and military robots, whose primary function was to operate in hazardous environments. Since then, improvements in robotics, sensors, and AI have given these machines the ability to navigate autonomously, map difficult environments, and perform delicate operations. These advancements have made them very useful in disaster incidents, because in such situations they save lives and reduce risks.
The field of SLAM (Simultaneous Localization and Mapping) emerged in the late 1980s and early 1990s with techniques that enable robots and automated systems to construct a map of the environment in which they operate while simultaneously determining their own location within it. With advances in sensor technology, the smaller and more integrated systems of the early 2000s brought multi-modal SLAM, which fuses data from several kinds of sensors, for instance cameras, LiDARs, and IMUs, to enhance the robustness and precision of localization and mapping. By now, multi-modal SLAM research has reached a mature status; a vast body of published scientific contributions and experimental studies has considered the problem in robotics and autonomous vehicles.
The innovation in US11347237 describes a system that combines cameras and LiDAR for enhanced environmental perception. That patent uses sophisticated SLAM algorithms to suppress Gaussian noise while simultaneously using SLAM techniques for localization. However, it relies on standard, basic cameras, with the LiDAR providing 2D or 3D object detection in the robot's environment.
US20230194306A1 details a hierarchical sensor data processing technique, where cues from multiple sensors are fused at different levels in accordance with SLAM practice to manage the complexity of multi-sensor fusion. The approach uses closed-loop detection to avoid error accumulation and to refine maps, a common procedure in SLAM. In common with most SLAM systems, it combines sensor data (point clouds, images, IMU, GNSS) to extend and improve its mapping and localization performance. The patent lists distinctive strategies for calibrating and aligning the different streams of sensor data, which may differ from conventional SLAM techniques; to some extent, these methods are critical for the accuracy of the system and can be a differentiating factor.
US11858138B2 describes how the SLIP and LIP approaches have been implemented. SLIP and LIP stand for the spring-loaded inverted pendulum and linear inverted pendulum models, which are generally used for controlling posture, balance, and landing. After landing, the LIP model is used to control the robot's center of mass to a set height. The approach uses Proportional-Derivative (PD) control, a standard method for maintaining robot posture: it operates by adjusting joint angles and minimizes the error between desired and actual positions.
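The PD-control idea described above can be sketched generically as follows. This is an illustrative single-joint sketch with made-up gains, not the implementation from US11858138B2: the command is proportional to the position error plus its rate of change.

```python
def pd_control(desired, actual, prev_error, dt, kp=2.0, kd=0.5):
    """Return (control command, current error) for one joint angle.

    kp scales the position error; kd damps the motion via the error's
    rate of change, reducing overshoot. Gains here are illustrative.
    """
    error = desired - actual
    derivative = (error - prev_error) / dt
    command = kp * error + kd * derivative
    return command, error
```

In a posture-balance loop the returned error would be carried into the next cycle as `prev_error`, so the derivative term reflects how fast the error is shrinking or growing.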
US9908240B1 describes how a robotic system utilizes a variety of sensors, including cameras for the environmental awareness that is crucial for SLAM. It mentions the use of vision systems; RADAR, LiDAR, SONAR, and GPS are used to capture information about the environment. The patent discusses advanced vision sensors and cameras, which are essential for detailed environmental mapping and necessary for obstacle detection, similar to what quantum dot cameras would provide.
CN107255795B describes a method whose purpose is to enhance the positioning accuracy of mobile robots in indoor settings, which is important for tasks that involve navigation to a specific point, and which achieves considerably more effective interaction with the surroundings. The method entails online performance monitoring of the EKF algorithm and defines when to switch to an EFIR filter in order to obtain higher accuracy. This type of filtering increases reliability, particularly in indoor environments whose conditions change over time.
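The monitor-and-switch idea attributed to CN107255795B can be sketched in simplified scalar form. This is an assumption-laden illustration, not the patented method: the filter watches its own innovation (measurement minus prediction) and, when it grows suspiciously large, falls back to a finite-impulse-response estimate built only from the last few measurements, which is immune to accumulated model error. All gains and thresholds are illustrative.

```python
from collections import deque

class SwitchingEstimator1D:
    """Scalar Kalman filter with an FIR fallback (illustrative sketch)."""

    def __init__(self, q=0.01, r=0.25, window=5, innovation_limit=1.5):
        self.x, self.p = 0.0, 1.0          # state estimate and covariance
        self.q, self.r = q, r              # process / measurement noise
        self.recent = deque(maxlen=window) # buffer for the FIR fallback
        self.limit = innovation_limit
        self.mode = "EKF"

    def update(self, z):
        self.recent.append(z)
        # --- standard scalar Kalman predict/update ---
        p_pred = self.p + self.q
        innovation = z - self.x
        k = p_pred / (p_pred + self.r)
        self.x = self.x + k * innovation
        self.p = (1 - k) * p_pred
        # --- online monitoring: large innovation => trust recent data only ---
        if abs(innovation) > self.limit and len(self.recent) == self.recent.maxlen:
            self.mode = "EFIR"
            return sum(self.recent) / len(self.recent)  # unweighted FIR estimate
        self.mode = "EKF"
        return self.x
```

The real EFIR filter is a weighted finite-window estimator rather than a plain moving average; the sketch only shows the switching logic that the patent description emphasizes.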
Summary of the Invention
The invention is a small, highly maneuverable disaster response robot with the potential to dramatically improve search and rescue operations in disaster zones such as earthquake-devastated areas. It combines a new generation of versatile and robust SLAM (Simultaneous Localization and Mapping) with a manipulator arm and legged locomotion. By integrating data from a number of cameras, LiDAR sensors, IMU units, and tactile sensors mounted on a commercial mobile manipulator through a network of algorithms, the robot is able to plot accurate real-time maps and safely traverse rubble.
The manipulator arm can be used to scoop debris and to carry out search and rescue operations such as freeing trapped victims, while the legged mobility system gives the robot the stability and flexibility to move over rough terrain. Equipped with solid physical-sensing capabilities as well as autonomous and semi-autonomous operation based on sensor fusion and AI, the robot provides effective real-time information on the situation and environment to human operators, thus hastening the pace of disaster management and boosting the safety and efficiency of disaster relief workers.
Brief Description of Drawings
The invention will be described in detail with reference to the exemplary embodiments shown in the figures, wherein:
Figure-1: Architecture and brief working of the proposed system
Figure-2: Diagrammatic representation of SLAM architecture
Detailed Description of the Invention
The invention is a modern disaster response robot designed to optimize search and rescue missions in areas heavily affected by natural disasters such as earthquakes. The proposed system incorporates a range of state-of-the-art technologies, including multi-modal SLAM, a dexterous manipulator arm, and a stable legged mobility platform, to overcome the numerous obstacles and challenges posed by such a dynamic environment.
The main component of this invention is the multi-modal SLAM feature, which uses cameras, LiDAR, IMUs, and tactile sensors to create real-time maps of the environment. These sensors allow the robot to constantly update its location and awareness of its surroundings, even in changing conditions and when the view is obscured. The main hardware interfaces for SLAM consist of high-resolution quantum dot camera inputs, LiDAR sensors for depth detection, IMUs for orientation and motion sensing, and tactile sensors for interaction feedback. SLAM algorithms, sensor fusion algorithms, and data processing modules are driven by high-performance but energy-efficient processors such as the NVIDIA Jetson Xavier NX.
The manipulator arm is built with flexibility in as many axes as possible to replicate the dexterity of a human arm. It consists of mechatronic elements such as actuators, motors, joints, and end-effectors like grippers or other application-dependent tools. At the software level, sophisticated control-engineering algorithms achieve precise and adaptive manipulation. The control system, which may be based on an embedded platform such as an ARM Cortex processor, governs the manipulator arm's movement and response. In addition, the robot's on-board AI includes a machine learning component that enables it to analyze how to handle the arm to overcome obstacles and adapt to different scenes.
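The kind of multi-sensor fusion referred to above can be illustrated with a minimal complementary filter that blends a fast-but-drifting IMU heading with a slow-but-absolute camera heading. This is a generic sketch under assumed interfaces, not the patented fusion pipeline; the blend gain `alpha` is a tuning assumption.

```python
def fuse_heading(prev_heading, gyro_rate, dt, camera_heading, alpha=0.98):
    """Blend a dead-reckoned gyro heading with an absolute camera heading.

    The gyro term tracks fast motion; the small (1 - alpha) camera term
    continuously pulls the estimate back toward an absolute reference,
    cancelling gyro drift over time.
    """
    gyro_heading = prev_heading + gyro_rate * dt   # integrate angular rate
    return alpha * gyro_heading + (1 - alpha) * camera_heading
```

A full SLAM stack would replace this scalar blend with a UKF over the robot's pose, but the principle of weighting complementary sensor strengths is the same.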
The invention’s legged locomotion system is one of the essential components that distinguishes it from wheeled and tracked robots. The main embodiments are legs with multiple hinged joints, active elements in the form of actuators, and passive elements in the form of gyroscopes and accelerometers responsible for stability. The legs are fully powered by actuators that allow the robot to step over obstacles or even climb inclined surfaces. The software enabling legged locomotion comprises kinematic and dynamic models, gait planning algorithms, and feedback control systems. The hardware components consist of quantum dot cameras, Light Detection and Ranging (LiDAR), an Inertial Measurement Unit (IMU), and tactile sensors, and the software includes sophisticated sensor fusion procedures that improve the sensed environment.
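The gait-planning layer mentioned above can be sketched as a simple trot scheduler. This is a minimal illustration under my own assumptions (leg names, a fixed 50% duty cycle), not the specification's planner: diagonal leg pairs alternate between stance and swing as the gait phase advances.

```python
def trot_schedule(phase):
    """Return stance/swing assignment for each leg at gait phase in [0, 1).

    In a trot, diagonal pairs (front-left + hind-right, front-right +
    hind-left) move together, half a cycle out of phase with each other.
    """
    first_half = (phase % 1.0) < 0.5
    return {
        "front_left":  "stance" if first_half else "swing",
        "hind_right":  "stance" if first_half else "swing",
        "front_right": "swing"  if first_half else "stance",
        "hind_left":   "swing"  if first_half else "stance",
    }
```

A real gait planner would additionally modulate swing trajectories from the terrain map and feedback control, but the phase-based stance/swing assignment above is the scaffolding those layers build on.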
The system runs on a real-time operating system (RTOS), and by deploying artificial intelligence algorithms, including convolutional neural networks (CNNs) for object recognition and sensor analysis, the robot is capable of smooth functioning in low-light conditions.
One of the significant functions of the invention is to give real-time communication and situational information to the human operator or rescue teams. The communication hardware includes wireless modules such as Wi-Fi, LTE, and satellite modems, together with a high-definition camera for visualizing the task. The software comprises communication protocol applications, data encryption algorithms, and the user interface software used at remotely located operator consoles. This setup enables the robot to report vital information, for instance the layout of the surrounding environment and the positions of dangers, to the command post, improving the coordination and productivity of rescue missions.
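The status report transmitted to the command post might look like the following sketch. The field names and the choice of JSON are my assumptions for illustration, not taken from the specification; the real system would additionally apply the encryption layer the description mentions.

```python
import json

def build_status_report(position, hazards, battery_pct):
    """Serialize robot position, detected hazards, and battery level.

    `hazards` is a list of dicts, e.g. [{"type": "gas", "level": "high"}].
    Sorted keys make the payload deterministic, which simplifies logging
    and testing on the operator-console side.
    """
    report = {
        "position": {"x": position[0], "y": position[1]},
        "hazards": hazards,
        "battery_pct": battery_pct,
    }
    return json.dumps(report, sort_keys=True)
```

Such a payload would then be handed to whichever link (Wi-Fi, LTE, or satellite) is currently available, with the map layout sent as a separate, larger message.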
The hardware specification for the multi-modal SLAM system includes high-resolution quantum dot cameras of model QD VISION QD CAM with 4K Ultra HD at 60 FPS, a 120-degree field of view, and USB 3.0 connectivity. The LiDAR (RPLIDAR A2) has a range of 200 meters, a plane tilt angle of 2 degrees, a rotation speed in the range of 10-20 Hz, and an accuracy of ±2 cm. Inertial Measurement Units of model Bosch BNO055 provide the necessary data, with gyroscopes at ±2000°/sec, accelerometers at ±16 g, magnetometers at ±8 Gauss, and a data rate of 1 kHz. The tactile sensors, a TEKSCAN FLEXIFORCE A20 and a tactile capacitive touch sensor, have a sensitivity range from 0. to 10 N, a spatial resolution of 1 mm, and a data rate of 1 kHz for detecting physical interaction with the environment. The manipulator arm is driven by brushless DC motors with a maximum torque of 30 Nm, a maximum speed of 500 RPM, seven axes, and 360-degree motion.
With reference to Figure 1, the flowchart describes the process of initializing and updating the Simultaneous Localization and Mapping (SLAM) system by feeding it the diverse data received through the sensors. The process begins with an initialization step for the UKF and the map. Once initialized, the system receives sensor data, which is then categorized by sensor type, such as LiDAR, camera, or IMU. Depending on the type of sensor, different operations are performed: LiDAR data is processed and used to update the map and the pose of the SLAM system; likewise, camera data is processed to update the map and pose estimates; and the pose estimate is revised from IMU data and other sensors through data processing and fusion. This structured method allows the SLAM system to map the environment it is in and localize itself using the inputs from its sensors.
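The dispatch structure of Figure 1 can be sketched as a loop that routes each reading to a sensor-specific handler. This is a structural sketch only; the state representation, handler names, and toy update rules are my assumptions, and the real UKF and map updates are far more involved.

```python
def slam_step(state, measurement):
    """Route one sensor reading to the matching update, as in Figure 1.

    `state` holds a map (here just a log of observations) and a scalar
    pose stand-in; `measurement` is a (sensor_type, data) pair.
    """
    kind, data = measurement
    if kind == "lidar":
        state["map"].append(("scan", data))              # update map from scan
        state["pose"] = data.get("pose", state["pose"])  # and refine pose
    elif kind == "camera":
        state["map"].append(("image", data))             # refine map and pose
    elif kind == "imu":
        state["pose"] = state["pose"] + data["dtheta"]   # fuse into pose estimate
    return state
```

In the actual system each branch would feed a UKF measurement update rather than these placeholder assignments, but the categorize-then-update control flow matches the flowchart.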
Figure 2 highlights the components of the control architecture of a quadruped robot and their interactions. The pipeline starts with the Goal Pose Generator, which takes a goal pose or velocity command when the robot is required to reach a certain pose or velocity and delivers an intermediate goal pose to the Locomotion Planner. Within the Locomotion Planner, the Footstep Generator and Footstep Optimizer generate the footstep sequence according to the slope plane angle and send it to the Free Gait Core; the Pose Optimizer and Swing Leg Planner then decide the resulting motion plan according to the terrain data. The Range Measurements Module converts range measurements and the current state of the robot into a terrain map and foothold scores, which serve as input to the Map Processing Module for defining surfaces. The State Estimator continuously updates the state of the robot on the basis of the information sensed from the robot's surroundings.
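The Footstep Generator stage can be illustrated with a toy one-dimensional sketch: space footsteps evenly between the current position and the intermediate goal pose. The interface and fixed step length are assumptions for illustration, not Figure 2's actual modules, which also account for slope angle and foothold scores.

```python
def generate_footsteps(current_x, goal_x, step_length=0.15):
    """Return a list of footstep x-positions from current toward goal.

    Steps advance by `step_length` until the remaining distance is less
    than one step, then a final shorter step lands exactly on the goal.
    """
    steps, x = [], current_x
    direction = 1.0 if goal_x >= current_x else -1.0
    while abs(goal_x - x) > step_length:
        x += direction * step_length
        steps.append(round(x, 6))
    steps.append(goal_x)   # final step lands exactly on the goal
    return steps
```

The Footstep Optimizer would then adjust each of these candidate placements against the terrain map and foothold scores before handing the sequence to the Free Gait Core.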
Equivalents
The present invention, multisensor fusion with quantum dot cameras, could be implemented as a system that uses Gaussian Sum Filters or Ensemble Kalman Filters, rather than particle or Kalman filters, for data fusion and state estimation. Instead of the quantum dot camera, it may incorporate thermal imaging cameras or LiDAR sensors as its scanning device. These alternatives would attain the same disaster-monitoring function, meaning they would be functionally equivalent but implemented differently.
Claims:
The scope of the invention will be defined by the following claims:
1. An advanced multi-sensor SLAM system for dynamic programming in earthquake scenarios, comprising:
a. quantum dot cameras providing the light sensitivity needed when searching for survivors or lethal threats after an earthquake; and
b. powerful wheels or tracks for movement, enabling the robot to conveniently traverse difficult terrain and reach locations that are hard to access or risky.
2. The system according to claim 1, wherein a set of environmental sensors gathers the necessary data on temperature, humidity, and different gases, taking into account the safety and environmental conditions of the disaster area, and wherein an NVIDIA Jetson Xavier NX provides powerful on-board computing that can handle the algorithms' heavy computations without compromising power consumption.
3. The system according to claim 1, wherein technological features such as LTE and intercom support communication for remote control and interaction with bases and people on the scene, and wherein SLAM technology backed by a UKF assures the accurate localization and mapping needed to navigate and map an environment with instabilities or debris on the floor.
| # | Name | Date |
|---|---|---|
| 1 | 202541014974-REQUEST FOR EARLY PUBLICATION(FORM-9) [21-02-2025(online)].pdf | 2025-02-21 |
| 2 | 202541014974-FORM-9 [21-02-2025(online)].pdf | 2025-02-21 |
| 3 | 202541014974-FORM FOR STARTUP [21-02-2025(online)].pdf | 2025-02-21 |
| 4 | 202541014974-FORM FOR SMALL ENTITY(FORM-28) [21-02-2025(online)].pdf | 2025-02-21 |
| 5 | 202541014974-FORM 1 [21-02-2025(online)].pdf | 2025-02-21 |
| 6 | 202541014974-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [21-02-2025(online)].pdf | 2025-02-21 |
| 7 | 202541014974-EVIDENCE FOR REGISTRATION UNDER SSI [21-02-2025(online)].pdf | 2025-02-21 |
| 8 | 202541014974-EDUCATIONAL INSTITUTION(S) [21-02-2025(online)].pdf | 2025-02-21 |
| 9 | 202541014974-DRAWINGS [21-02-2025(online)].pdf | 2025-02-21 |
| 10 | 202541014974-COMPLETE SPECIFICATION [21-02-2025(online)].pdf | 2025-02-21 |