
System and Method for Integrating Machine Learning into Open Source Robotics Operating System 2 (ROS2)

Abstract: The present disclosure relates to a system (102) and a method (400B) for integrating Machine Learning (ML) into open-source Robotics Operating System 2 (ROS2). The system (102) comprises several modules, including a data acquisition module (226) for collecting sensor data from one or more ROS2 nodes (104), a learning module (228) for selecting and applying one or more learning techniques to training data sets to generate one or more ML models (106), a training module (230) for training ML models (106) using processed sensor data, a deployment module (234) for encapsulating trained ML models (106) into deployable units, and an ensemble learning module (236) for combining outputs from each of the one or more trained ML models (106) to improve prediction accuracy.


Patent Information

Application #
Filing Date
22 April 2025
Publication Number
20/2025
Publication Type
INA
Invention Field
COMPUTER SCIENCE
Status
Parent Application

Applicants

Amrita Vishwa Vidyapeetham
Amrita Vishwa Vidyapeetham, Bengaluru Campus, Kasavanahalli, Carmelaram P.O., Bengaluru - 560035, Karnataka, India.

Inventors

1. KOCHUVILA, Sreeja
18B, SLP Nest, Owners Court West, Kasavanahalli, Carmelaram Post, Bengaluru, Karnataka - 560035, India.
2. VARGHESE T., Glace
Thekkiniyath Thekkumpeedika (House), Konikkara P.O, Thrissur, Kerala - 680306, India.
3. KUMAR, Navin
21-A, 3rd Main, 10th Cross, Shreyas Colony, J P Nagar, Bengaluru, Karnataka - 560078, India.

Specification

Description:

TECHNICAL FIELD
[0001] The present disclosure relates to the field of robotics and Artificial Intelligence (AI). More particularly, the present disclosure relates to a system and a method for integrating Machine Learning (ML) into an open-source Robotics Operating System 2 (ROS2).

BACKGROUND
[0002] Robotics Operating Systems (ROS) are open-source frameworks widely used in the research and development of robots and autonomous vehicles. Initially developed by the research community, ROS has evolved through contributions from engineers and researchers, resulting in different versions such as ROS1 and ROS2. ROS2 is a widely adopted open-source framework for developing a plurality of robotic applications.
[0003] Artificial Intelligence (AI) and Machine Learning (ML) are critical for advancing the functionality of robotic systems, enabling them to perform complex tasks such as autonomous navigation, real-time decision-making, and adaptive learning. The existing ML package available in ROS1 is incompatible with ROS2 due to the significant architectural differences of ROS2. Additionally, most ROS1 versions are obsolete or close to end of life, while ROS2 is still without AI/ML packages.
[0004] Therefore, there is a need for a comprehensive system facilitating the development of advanced robotic applications.

OBJECTS OF THE PRESENT DISCLOSURE
[0005] An object of the present disclosure is to provide a system and a method for seamlessly integrating Machine Learning into an open-source ROS2.
[0006] Another object of the present disclosure is to provide the system with an inbuilt ML package to seamlessly integrate with one or more ROS2 nodes and simplify the incorporation of one or more ML models into robotic systems.
[0007] Another object of the present disclosure is to provide the system with the inbuilt ML package using a modular approach, allowing a user to independently utilize learning, training, evaluation, and classification modules, thereby providing flexibility and encouraging reusability and customization in application development.
[0008] Another object of the present disclosure is to facilitate training of the one or more ML models using one or more processed sensor data compatible with one or more selected learning techniques and to dynamically adjust one or more internal parameters of each of the one or more ML models during training to optimize their predictive performance and minimize one or more errors.
[0009] Another object of the present disclosure is to include an ensemble learning module within the one or more ROS2 nodes to improve the accuracy and robustness of the one or more trained ML models compared to a single model approach.
[0010] Another object of the present disclosure is to integrate a model evaluation and classification module within the one or more ROS2 nodes, providing a comprehensive evaluation framework that includes real-time performance monitoring and automated test runs, thereby reducing the risk of deploying the one or more underperforming ML models.
[0011] Another object of the present disclosure is to support a deep learning-based exploration/exploitation technique for exploring the environment, as well as large-scale data training, transfer learning, and even the creation of generative models for many applications such as warehouse automation, navigation, SLAM, workspace-sharing robots, and so on, leading to better model performance and robust feature representations.
[0012] Another object of the present disclosure is to significantly enhance the functionality, efficiency, and adaptability of the one or more ROS2 nodes, making the one or more trained ML models accessible for a plurality of robotic applications.

SUMMARY
[0013] Aspects of the present disclosure relate to a field of robotics and Artificial Intelligence (AI). More particularly, the present disclosure pertains to a system and a method for integrating Machine Learning (ML) into an open-source Robotics Operating System 2 (ROS2).
[0014] An aspect of the present disclosure pertains to a system for integrating ML into an open-source ROS2. The system includes a data acquisition module, executed by one or more processors comprised in the system, to collect one or more sensor data from one or more ROS2 nodes. The one or more sensor data may be collected from a plurality of sensing devices associated with the one or more ROS2 nodes, wherein the plurality of sensing devices includes one or a combination of at least one of: a camera, a LiDAR, an ultrasonic sensor, a force sensor, or one or more other robotics hardware. The system further includes a learning module, executed by the one or more processors, to select one or more learning techniques to apply on one or more training data sets stored in the one or more ROS2 nodes to generate one or more ML models. The one or more learning techniques may include one or more supervised learning techniques, one or more unsupervised learning techniques, or one or more reinforcement learning techniques. The one or more learning techniques may be selected based on one or more predetermined criteria, including an accuracy, a precision, a computational efficiency, and one or more other performance requirements. The system includes a training module, executed by the one or more processors, to train each of the one or more ML models by feeding one or more processed sensor data to predict an output. The system includes a deployment module, executed by the one or more processors, to encapsulate the one or more trained ML models into one or more deployable units compatible with the one or more ROS2 nodes. Further, the system includes an ensemble learning module, executed by the one or more processors, to combine the predicted output generated by the one or more trained ML models to generate a more accurate prediction than each of the one or more trained ML models.
[0015] In some embodiments, the system may include a model evaluation module, executed by the one or more processors, to evaluate the performance of each of the one or more trained ML models against one or more test data. The one or more test data may include one or more real-time data and/or one or more simulated data. The system may include a classification module, executed by the one or more processors, to classify the one or more real-time data based on the predicted output generated by the one or more trained ML models.
[0016] In some embodiments, the system may include a feature extraction module, executed by the one or more processors, to refine the one or more processed sensor data based on a deep learning-based feature extraction technique. The deep learning-based feature extraction technique may empower the one or more ML models to determine one or more features of the one or more processed sensor data and remove complexity within the one or more processed sensor data without a human intervention.
[0017] In some embodiments, the system may include a model optimization module, executed by the one or more processors, to enable a user to fine-tune each of the one or more ML models with one or more learning techniques through a ROS2 parameter server. The ROS2 parameter server may be configured to dynamically adjust one or more internal parameters of each of the one or more ML models to further improve prediction accuracy and minimize one or more errors.
[0018] In some embodiments, the ensemble learning module may be configured to improve accuracy of each of the one or more trained ML models by incorporating the combined predicted output as a feedback for selecting the one or more learning techniques to generate the one or more ML models.
[0019] In some embodiments, the system is based on a modular architecture providing distinct independent components that are reusable and customizable.
[0020] Another aspect of the present disclosure pertains to a method for integrating Machine Learning (ML) into an open-source Robotics Operating System 2 (ROS2).
[0021] Various objects, features, aspects, and advantages of the inventive subject matter will become more apparent from the following detailed description of preferred embodiments, along with the accompanying drawing figures in which numerals represent like components.

BRIEF DESCRIPTION OF THE DRAWINGS
[0022] The accompanying drawings are included to provide a further understanding of the present disclosure and are incorporated in and constitute a part of this specification. The drawings illustrate exemplary embodiments of the present disclosure and, together with the description, serve to explain the principles of the present disclosure.
[0023] FIG. 1 illustrates an example block diagram of an environment depicting a schematic representation of the interaction between a Machine Learning (ML) model and one or more Robotics Operating System 2 (ROS2) nodes, in accordance with an embodiment of the present disclosure.
[0024] FIG. 2 illustrates an example block diagram of a proposed system, in accordance with an embodiment of the present disclosure.
[0025] FIG. 3 illustrates an example block diagram depicting a ROS2 node, in accordance with an embodiment of the present disclosure.
[0026] FIG. 4A illustrates an example flow diagram of a proposed system for integrating ML into an open-source ROS2, in accordance with an embodiment of the present disclosure.
[0027] FIG. 4B illustrates an example flow chart for implementing the proposed method, in accordance with an embodiment of the present disclosure.

DETAILED DESCRIPTION
[0028] The following is a detailed description of embodiments of the disclosure depicted in the accompanying drawings. The embodiments are in such detail as to clearly communicate the disclosure. However, the amount of detail offered is not intended to limit the anticipated variations of embodiments; on the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the scope of the present disclosure as defined by the appended claims.
[0029] In an aspect, the present disclosure pertains to a system and a method for integrating Machine Learning (ML) into an open-source Robotics Operating System 2 (ROS2).
[0030] ML has become an indispensable component in modern robotics, enabling advanced functionalities such as real-time decision-making, autonomous navigation, object detection, and adaptive control. The proposed system aims to enhance the capabilities of the ROS2 by enabling one or more ROS2 nodes to incorporate one or more learning techniques to meet the growing demand for AI and ML applications in robotics. One or more ML models are developed, trained, and tested to provide a comprehensive ML package tailored for the ROS2. The inbuilt ML package enables seamless integration of a plurality of ML modules with each of the one or more ROS2 nodes, facilitating one or more tasks such as data collection, data pre-processing, feature extraction, data visualization, model selection, model training, model evaluation, model deployment, ensemble learning, model monitoring, and maintenance, etc.
[0031] In some embodiments, the system for integration of the ML package into the one or more ROS2 nodes leverages a modular approach that allows a user to independently utilize a plurality of modules and provides flexibility in application development, encourages reusability, and customization for a plurality of robotic applications without redundant coding. Various embodiments of the present disclosure will be explained in detail with reference to FIGs. 1 to 4B.
[0032] Referring to FIG. 1, an example block diagram 100 of an environment depicting a schematic representation of the interaction between a Machine Learning (ML) model 106 and one or more Robotics Operating System 2 (ROS2) nodes 104 within a system 102 is illustrated. Each block labeled as the ROS2 Node (as depicted in FIG. 3 of the present disclosure) represents individual nodes in a ROS ecosystem, which may be responsible for specific tasks such as collecting one or more sensor data, pre-processing the one or more sensor data, controlling actuators, fault detection and recovery, or performing computations. The nodes are numbered (for example: 104-1, 104-2…104-N) to indicate the one or more ROS2 nodes 104 potentially handling different applications, for example path planning, Simultaneous Localization and Mapping (SLAM), Visual Simultaneous Localization and Mapping (VSLAM), etc., that may interact with the ML model 106.
[0033] In some embodiments, the ML model 106 and the one or more ROS2 nodes 104 may be integrated within the system 102 to enhance the functionality of the ROS2. This integration enables the one or more ROS2 nodes 104 to utilize multiple modules contained within the ML model 106. In an example embodiment, the dashed lines illustrate the data flow and interaction between the ML model 106 and the one or more ROS2 nodes 104. The connections between the ML model 106 and the one or more ROS2 nodes 104 may signify that the ML model 106 exchanges information with each of the one or more ROS2 nodes 104, possibly in a bidirectional manner. Each of the one or more ROS2 nodes 104 may provide an input, such as the one or more sensor data, to the ML model 106. The ML model 106 may send a predicted output and/or feedback back to each of the one or more ROS2 nodes 104. FIG. 1 highlights the modularity and scalability of the system 102, where the single ML model 106 may interface with the one or more ROS2 nodes 104 simultaneously within the system 102.
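By way of a non-limiting illustration, the bidirectional publish/subscribe flow of FIG. 1 may be sketched as follows. The `Bus` class below is merely a stand-in for ROS2 topic transport (an actual implementation would use `rclpy` publishers and subscribers); all topic names and values are illustrative assumptions, not part of the claimed system:

```python
class Bus:
    """Minimal topic bus standing in for ROS2 publish/subscribe."""

    def __init__(self):
        self._subs = {}

    def subscribe(self, topic, callback):
        # Register a callback to be invoked for every message on `topic`.
        self._subs.setdefault(topic, []).append(callback)

    def publish(self, topic, msg):
        # Deliver `msg` to every subscriber of `topic`.
        for callback in self._subs.get(topic, []):
            callback(msg)


received = []
bus = Bus()
# The ML model subscribes to sensor data and publishes a prediction.
bus.subscribe("/sensor_data", lambda m: bus.publish("/prediction", m > 1.0))
# A ROS2 node subscribes to the model's predictions (the feedback path).
bus.subscribe("/prediction", received.append)
bus.publish("/sensor_data", 2.5)  # node -> model -> node round trip
```

The round trip mirrors the dashed arrows in FIG. 1: sensor input flows from a node to the model, and a predicted output flows back.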
[0034] Referring to FIG. 3, an example block diagram 300 depicting a Robotics Operating System 2 (ROS2) node, in accordance with an embodiment of the present disclosure, is illustrated. In some embodiments, each of the one or more ROS2 nodes typically refers to an individual computational unit within a larger robotic system, often implemented in a framework like the ROS2. The one or more ROS2 nodes 104 may be both attended and unattended robots that may automate various systems and applications, including, but not limited to, mainframes, web applications, VMs, enterprise applications (e.g., those produced by SAP®, Salesforce®, Oracle®, etc.), and computing system applications (e.g., desktop and laptop applications, mobile device applications, wearable computer applications, etc.).
[0035] In an embodiment, each of the one or more ROS2 nodes 104 may include one or more processor(s) 302 implemented as one or more microprocessors, microcomputers, microcontrollers, edge or fog microcontrollers, digital signal processors, central processing units, logic circuitries, and/or any devices that process data based on operational instructions. Among other capabilities, the one or more processor(s) 302 may be configured to fetch and execute computer-readable instructions stored in a memory 304 of each of the one or more ROS2 nodes 104. The memory 304 may be configured to store one or more computer-readable instructions or routines in a non-transitory computer-readable storage medium, which may be fetched and executed to perform one or more robotic tasks. The memory 304 may include any non-transitory storage device, including, for example, a volatile memory such as a Random-Access Memory (RAM), or a non-volatile memory such as an Erasable Programmable Read-Only Memory (EPROM), a flash memory, and the like. The memory 304 may be configured to store the ROS2 318 and various modules, such as a data collection module 320, a packages and capabilities module 322, an actuation control module 330, a cloud connectivity module 332, a fault detection and recovery module 328, a real-time communication module 334, etc., associated with the one or more ROS2 nodes 104 to perform one or more robotic tasks.
[0036] In an embodiment, each of the one or more ROS2 nodes 104 may also include an interface(s) 306. The interface(s) 306 may include a variety of interfaces, for example, interfaces for data input and output devices, referred to as I/O devices, storage devices, and the like. The interface(s) 306 may also provide a communication pathway for one or more components of the system 102. Examples of such components include, but are not limited to, the processor 302 and one or more databases 308, 310, 312, 314, and 316. In some embodiments, the one or more databases 308, 310, 312, 314, and 316 may store therein data generated or received by each of the one or more ROS2 nodes 104. For example, the database 308 may be configured to store raw data received from a plurality of sensing devices or other robotics hardware. The database 310 may be configured to store training-related data, and the database 312 may be configured to store one or more test data. The one or more test data may include one or more real-time data and/or one or more simulated data. Further, the database 314 may be configured to store available data, including various types of information that each of the one or more ROS2 nodes 104 processes, publishes, or subscribes to, for example, a robot’s internal state information, communication data, map or navigation data, simulation data, etc. The database 316 may be configured to store any other type of data.
[0037] In an example embodiment, the data collection module 320 associated with the one or more ROS2 nodes 104 may be configured to collect data from the plurality of sensing devices, such as a camera, a LiDAR, an ultrasonic sensor, a force sensor, or one or more other robotics hardware. The pre-processing module 324 may be configured to pre-process sensor information and store one or more pre-processed data. The sensor fusion module 326 may be configured to combine data from one or more sensors to produce a more accurate and reliable representation of the node's environment or state. The fault detection and recovery module 328 may be configured to ensure robustness and reliability by monitoring for one or more errors, diagnosing problems, and implementing recovery strategies to maintain functionality.
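As a non-limiting sketch of one way the sensor fusion module 326 might combine readings, the following fuses two noisy scalar estimates of the same quantity by inverse-variance weighting, so that the more reliable sensor dominates the fused estimate. The function name and numeric values are illustrative assumptions only:

```python
def fuse_estimates(estimates, variances):
    """Fuse scalar sensor estimates via inverse-variance weighting.

    Lower-variance (more reliable) sensors receive higher weight; the
    fused variance is smaller than any individual sensor's variance.
    """
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    fused = sum(w * e for w, e in zip(weights, estimates)) / total
    fused_variance = 1.0 / total
    return fused, fused_variance


# Example: a low-noise LiDAR range reading and a high-noise ultrasonic
# reading of the same obstacle distance (values in metres, illustrative).
fused, var = fuse_estimates([2.00, 2.20], [0.01, 0.09])
```

Here the fused distance lies much closer to the LiDAR reading, reflecting its lower variance, and the fused variance is below both input variances.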
[0038] In an example embodiment, the actuation control module 330 may be configured to ensure the node's actuators operate smoothly, efficiently, and within safety limits. The cloud connectivity module 332 may be configured to enable the node to communicate with cloud services for data exchange, remote monitoring, control, and analytics. The real-time communication module 334 may be configured to ensure low-latency, reliable communication between robotic components, external controllers, or cloud systems. The output and monitoring module 336 may be configured to present node status, performance metrics, and outputs to external interfaces and enables real-time monitoring of the node performance, debugging, and user interaction. The packages and capabilities module 322 may be configured to act as a centralized system to manage available ROS2 packages and their functionalities (capabilities). The packages and capabilities module 322 is essential for dynamic system configuration, runtime capability discovery, and task orchestration, enabling the one or more ROS2 nodes 104 to adapt to its environment or user commands by leveraging the installed packages like a ML package, etc.
[0039] Referring to FIG. 2, an example block diagram 200 of the system 102 for integrating ML into the open-source ROS2 is illustrated. In an embodiment, the system 102 may include one or more processor(s) 202 implemented as one or more microprocessors, microcomputers, microcontrollers, edge or fog microcontrollers, digital signal processors, central processing units, logic circuitries, and/or any devices that process data based on operational instructions. Among other capabilities, the one or more processor(s) 202 may be configured to fetch and execute computer-readable instructions stored in a memory 204 of the system 102. The memory 204 may be configured to store one or more computer-readable instructions or routines in a non-transitory computer-readable storage medium, which may be fetched and executed to perform the functionalities of the system 102. The memory 204 may include any non-transitory storage device, including, for example, a volatile memory such as a Random-Access Memory (RAM), or a non-volatile memory such as an Erasable Programmable Read-Only Memory (EPROM), a flash memory, and the like. The memory 204 may be configured to store various modules associated with the system 102 for integrating ML into the ROS2 nodes.
[0040] The system 102 may include the plurality of modules, including a data acquisition module 226 that may be configured to collect the one or more sensor data from the one or more ROS2 nodes 104. The one or more sensor data are collected from the plurality of sensing devices connected to the one or more ROS2 nodes 104. The system 102 may include a data pre-processing module 242 to generate one or more processed sensor data by transforming the one or more collected sensor data into a format compatible with the one or more ML models 106.
[0041] In an embodiment, the system 102 may include a learning module 228 configured to select and apply one or more learning techniques to one or more training data sets that may be stored in the one or more ROS2 nodes 104 to generate the one or more ML models 106. The learning module 228 may apply at least one of one or more supervised learning techniques, one or more unsupervised learning techniques, or one or more reinforcement learning techniques, based on the type of data available in the ROS2 nodes. The system 102 may include a training module 230 that may train the one or more ML models 106 using the one or more processed sensor data, a deployment module 234 that may encapsulate the one or more trained ML models 106 into one or more deployable units that may be compatible with the one or more ROS2 nodes 104.
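The selection of a learning-technique family "based on the type of data available" may be illustrated, purely as one assumed heuristic and not as the learning module's actual logic, as follows (the dataset keys are hypothetical):

```python
def select_learning_technique(dataset):
    """Pick a family of learning techniques from the shape of the data.

    Assumed heuristic: a reward signal suggests reinforcement learning,
    labels suggest supervised learning, and otherwise the data is treated
    as unlabeled (unsupervised learning).
    """
    if dataset.get("rewards") is not None:
        return "reinforcement"
    if dataset.get("labels") is not None:
        return "supervised"
    return "unsupervised"


# A labeled training set maps to a supervised technique.
technique = select_learning_technique({"features": [[0.1], [0.2]], "labels": [0, 1]})
```

In the proposed system this choice could additionally weigh the predetermined criteria (accuracy, precision, computational efficiency) before a technique is applied.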
[0042] In an example embodiment, the deployment module 234 may include a containerization framework to encapsulate each of the one or more trained machine learning models and facilitate cross-platform deployment within the ROS2 ecosystem. Additionally, the deployment module 234 may include a rollback functionality to revert to a previous version of each of the one or more trained ML models 106 in the event of a performance degradation or failure of the current model.
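A minimal sketch of such rollback functionality, assuming a simple in-memory version registry (illustrative only; a real deployment would persist versions, for example as container images):

```python
class ModelRegistry:
    """Versioned store of deployed models with rollback.

    Every deployment is retained; on degradation or failure of the
    current model, the previous version becomes active again.
    """

    def __init__(self):
        self._versions = []

    def deploy(self, model):
        self._versions.append(model)

    def active(self):
        return self._versions[-1]

    def rollback(self):
        # Keep at least one version so the node is never left without a model.
        if len(self._versions) > 1:
            self._versions.pop()
        return self.active()


registry = ModelRegistry()
registry.deploy("detector-v1")
registry.deploy("detector-v2")   # suppose this version underperforms
restored = registry.rollback()   # revert to the prior version
```

The model identifiers above are hypothetical placeholders.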
[0043] The system 102 may include an ensemble learning module 236 that may be configured to combine outputs from each of the one or more trained ML models 106 to improve prediction accuracy. In an example embodiment, the ensemble learning module 236 may utilize a weighted voting mechanism to combine predictions from the one or more trained machine learning models based on their individual performance metrics. Further, the ensemble learning module 236 may support distributed execution of the one or more trained ML models across multiple ROS2 nodes, allowing for parallel processing and aggregation of predictions from geographically distributed sensors.
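By way of a non-limiting illustration, the weighted voting mechanism described above may be sketched as follows, with each model's vote scaled by a performance-based weight such as its validation accuracy (the function, labels, and weights are illustrative assumptions):

```python
from collections import defaultdict


def weighted_vote(predictions, weights):
    """Combine class predictions from several trained models.

    Each model's vote counts in proportion to its weight; the class
    with the highest total weight wins.
    """
    scores = defaultdict(float)
    for label, weight in zip(predictions, weights):
        scores[label] += weight
    return max(scores, key=scores.get)


# Three models classify an obstacle; the two weaker models agree,
# and their combined weight outvotes the single stronger model.
result = weighted_vote(["wall", "door", "door"], [0.90, 0.80, 0.75])
```

Raising the first model's weight enough would instead let it win alone, which is how individual performance metrics shape the ensemble's output.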
[0044] Further, the system 102 may include a model evaluation module 232 that may evaluate the performance of the one or more trained ML models 106, a classification module that may be configured to classify one or more real-time data, and a feature extraction module 238 that may refine the one or more processed sensor data using a deep learning-based technique. The deep learning-based feature extraction technique empowers the one or more ML models 106 to extract one or more precise features of the one or more processed sensor data and remove complexity within them without a human intervention. The feature extraction module 238 may apply one or more signal processing techniques to pre-process the one or more sensor data from vision, LiDAR, and audio sensors, including but not limited to noise reduction, normalization, and feature scaling.
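Of the signal-processing steps named above, feature scaling may be illustrated with a per-feature min-max scaler. This is a generic sketch, not the claimed implementation; the sample values are illustrative:

```python
def min_max_scale(samples):
    """Scale each feature column to [0, 1] independently.

    Constant features (max == min) are mapped to 0.0 to avoid
    division by zero.
    """
    n_features = len(samples[0])
    lows = [min(row[j] for row in samples) for j in range(n_features)]
    highs = [max(row[j] for row in samples) for j in range(n_features)]
    scaled = []
    for row in samples:
        scaled.append([
            (x - lo) / (hi - lo) if hi > lo else 0.0
            for x, lo, hi in zip(row, lows, highs)
        ])
    return scaled


# Two features with very different ranges, e.g. a range in metres and a
# raw intensity count, brought to a common [0, 1] scale before training.
data = [[0.5, 100.0], [1.0, 300.0], [1.5, 500.0]]
scaled = min_max_scale(data)
```

Scaling keeps one high-magnitude sensor channel from dominating the learned model, which is the usual motivation for this pre-processing step.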
[0045] The system 102 may further include a model optimization module 244 to fine-tune each of the one or more ML models 106 and the one or more learning techniques through a ROS2 parameter server. The system 102 may use techniques such as hyperparameter tuning, cross-validation, and pruning to enhance the model's accuracy and efficiency.
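A hedged sketch of parameter-driven fine-tuning: the `ParameterServer` class below merely mimics the role of the ROS2 parameter server (in practice accessed through `rclpy` node parameters or the `ros2 param set` command), letting hyperparameters change at run time without modifying the training code. All names and values are illustrative:

```python
class ParameterServer:
    """Toy stand-in for the ROS2 parameter server."""

    def __init__(self, defaults):
        self._params = dict(defaults)

    def get(self, name):
        return self._params[name]

    def set(self, name, value):
        # In ROS2 this would be triggered externally, e.g. `ros2 param set`.
        self._params[name] = value


def configure_trainer(server):
    """Read hyperparameters from the server instead of hard-coding them."""
    return {
        "learning_rate": server.get("learning_rate"),
        "batch_size": server.get("batch_size"),
    }


server = ParameterServer({"learning_rate": 0.01, "batch_size": 32})
server.set("learning_rate", 0.001)   # fine-tune without touching code
config = configure_trainer(server)
```

Because the trainer only reads the server, the same code serves every tuning run; only the declared parameters change.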
[0046] In another embodiment, the system 102 may include a model selection module 240 that allows a user to select the one or more learning techniques based on one or more predetermined criteria, including an accuracy, a precision, a computational efficiency and one or more other performance requirements. The system 102 may further include a data augmentation module 246 to generate synthetic sensor data to improve the diversity and robustness of the one or more trained machine learning models 106. The system 102 may also include a data visualization module 248 to understand data distributions, relationships, and trends, as well as for evaluating the performance of the one or more trained ML models 106.
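The data augmentation module 246 might, for example, synthesize additional scans by perturbing a real scan with Gaussian noise, a simple and common augmentation strategy. This is a sketch under that assumption, not the claimed implementation:

```python
import random


def augment_scan(scan, n_copies, noise_std=0.02, seed=0):
    """Generate synthetic LiDAR-like scans from one real scan.

    Each synthetic copy adds independent zero-mean Gaussian noise to
    every range reading, diversifying the training data.
    """
    rng = random.Random(seed)  # seeded for reproducible augmentation
    augmented = []
    for _ in range(n_copies):
        augmented.append([r + rng.gauss(0.0, noise_std) for r in scan])
    return augmented


# Five noisy variants of a three-beam scan (ranges in metres, illustrative).
synthetic = augment_scan([1.0, 2.0, 3.0], n_copies=5)
```

More realistic pipelines would also apply geometric transforms (rotations, dropouts) appropriate to the sensor, which this sketch omits for brevity.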
[0047] In another embodiment, the system 102 may include a cloud integration module 250 to allow seamless deployment, training, monitoring, and inference of the one or more ML models 106 in a cloud environment. Cloud integration leverages the scalable, flexible, and secure infrastructure of cloud platforms to support various stages of a ML pipeline. The system 102 may include a monitoring and maintenance module 252 to ensure that deployed models operate efficiently, provide accurate predictions, and adapt to changing data or environments.
[0048] In an embodiment, the system 102 may include an interface(s) 206. The interface(s) 206 may include a variety of interfaces, for example, interfaces for data input and output devices, referred to as I/O devices, storage devices, and the like. The interface(s) 206 may facilitate communication of the system 102 and may also provide a communication pathway for one or more components of the system 102. Examples of such components include, but are not limited to, a processing engine(s) 208 and a database 224. The database 224 may comprise data that may be either stored or generated as a result of functionalities implemented by any of the components of the processor(s) 202 or the system 102.
[0049] In an embodiment, the processing engine(s) 208 may be implemented as a combination of hardware and software (for example, programmable instructions) to implement one or more functionalities of the processing engine(s) 208. In examples described herein, such combinations of hardware and software may be implemented in several different ways. For example, the software for the processing engine(s) 208 may be processor-executable instructions stored on a non-transitory machine-readable storage medium, and the hardware for the processing engine(s) 208 may include a processing resource (for example, one or more processors) to execute such instructions. In other embodiments, the processing engine(s) 208 may be implemented by electronic circuitry.
[0050] In some embodiments, the processing engine 208 may include a receiving engine 210, a determination engine 212, a testing engine 214, a feedback engine 216, an optimization engine 218, an integration engine 220, and other engine(s) 222. The other engine(s) 222 may implement functionalities that supplement applications/functions performed by the processor 202. The other engine(s) 222 may include one or more components selected from a detection engine, a monitoring engine, and the like.
[0051] In an embodiment, the processing engine 208 may be associated with the one or more processor(s) 202, and the memory 204 operatively coupled with the one or more processor(s) 202. The memory 204 may include one or more instructions that, when executed, cause the processing engine 208 to integrate the package and capabilities module 322 of each of the one or more ROS2 nodes 104 with the one or more trained ML models 106.
[0052] In some embodiments, the processor 202, via the receiving engine 210, may receive the input from the data collection module 320 associated with the one or more ROS2 nodes 104 using the data acquisition module 226. The receiving engine 210 may also receive processed data from the data pre-processing module 324 associated with the one or more ROS2 nodes 104 using the pre-processing module 242. Once the one or more data are collected, the user may select and apply the one or more learning techniques, including one or more supervised, unsupervised, or reinforcement learning techniques, to the one or more training datasets using the learning module 228. The determination engine 212 may generate the one or more ML models 106 based on the one or more predetermined criteria, including an accuracy, a precision, a computational efficiency, and one or more other performance requirements. Each of the one or more ML models 106 may be trained by feeding, as input, the one or more processed sensor data transformed into one or more formats compatible with the ML models 106. The determination engine 212 may determine the pattern and relationship within the data to predict an output. The system 102, via the feedback engine 216, may send a predicted output, predictions, or a feedback to the one or more ROS2 nodes 104.
[0053] In an embodiment, the testing engine 214 may conduct comprehensive testing and validation within the one or more ROS2 nodes 104, including one or more test runs, and evaluate performance of each of the one or more ML models 106 using the one or more real-time and the one or more simulated data using the model evaluation module 232. Testing and validation reduce the risk of deploying underperforming models in production environments, offering better confidence in the model’s reliability and safety. The system 102 using the ensemble learning module 236 enhances the robustness of each of the one or more trained ML models 106 by validating predictions of each of the individual trained models and combining outputs from multiple trained models to improve accuracy.
[0054] In an example embodiment, the inclusion of ensemble learning techniques such as bagging, boosting, and stacking provides improved accuracy and robustness over single model approaches. Bagging reduces variance and improves model stability by training multiple models in parallel on different subsets of the data and averaging their predictions. Boosting, however, reduces bias and variance by training multiple models sequentially, where each of the one or more ML models 106 corrects the one or more errors of its predecessor. Stacking combines multiple models (of different types) by using a meta-model to learn how to best combine their predictions.
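The variance-reduction step of bagging described above can be demonstrated on a deliberately trivial "model" (the sample mean): each model is trained on a bootstrap resample of the data and the models' predictions are averaged. This is a generic, non-limiting sketch; data and parameters are illustrative:

```python
import random


def bagging_mean(data, n_models, seed=0):
    """Bagging with the sample mean as the base model.

    Each base model is 'trained' on a bootstrap resample (sampling with
    replacement) and 'predicts' that resample's mean; averaging across
    the resamples reduces the variance of the final estimate.
    """
    rng = random.Random(seed)
    predictions = []
    for _ in range(n_models):
        sample = [rng.choice(data) for _ in data]      # bootstrap resample
        predictions.append(sum(sample) / len(sample))  # base-model prediction
    return sum(predictions) / n_models


estimate = bagging_mean([1.0, 2.0, 3.0, 4.0, 5.0], n_models=50)
```

With real learners (e.g. decision trees), the same resample-train-average structure yields the stability improvement the paragraph describes; boosting and stacking replace the parallel averaging with sequential error correction and a learned meta-model, respectively.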
[0055] In an embodiment, the feedback engine 216 may send the combined output as the feedback for selecting the one or more learning techniques to continuously improve system performance by adjusting the one or more ML models 106 based on the one or more prediction errors.
[0056] In an embodiment, the optimization engine 216 may enable the user to fine-tune each of the one or more ML models 106 with the one or more learning techniques through a ROS2 parameter server. The ROS2 parameter server may adjust one or more internal parameters of each of the one or more ML models 106 to further improve prediction accuracy and minimize the one or more errors in real time. Extensive parameter configurability through the ROS2 parameter server allows the user to fine-tune the one or more models and/or techniques without modifying the underlying code.
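For illustration only, the parameter-driven fine-tuning described above can be mimicked with a small stdlib sketch: a parameter store notifies a model of changes so its internals are retuned without touching its code. The class names are hypothetical stand-ins for the ROS2 parameter machinery; in practice a user would change such values with the `ros2 param set` command or a parameter YAML file.

```python
# Hypothetical sketch of live parameter reconfiguration (paragraph [0056]);
# ParameterStore stands in for the ROS2 parameter server, and all names
# are illustrative.

class ParameterStore:
    def __init__(self, **defaults):
        self._params = dict(defaults)
        self._callbacks = []

    def on_change(self, cb):
        self._callbacks.append(cb)

    def set(self, name, value):
        self._params[name] = value
        for cb in self._callbacks:
            cb(name, value)

    def get(self, name):
        return self._params[name]


class TunableModel:
    def __init__(self, store):
        self.learning_rate = store.get("learning_rate")
        store.on_change(self.reconfigure)

    def reconfigure(self, name, value):
        # Internal parameters are adjusted live, without code changes.
        if name == "learning_rate":
            self.learning_rate = value


store = ParameterStore(learning_rate=0.01)
model = TunableModel(store)
store.set("learning_rate", 0.001)  # e.g. via `ros2 param set` in practice
print(model.learning_rate)  # → 0.001
```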
[0057] In an embodiment, the optimized and trained one or more ML models 106 are deployed using the deployment module 234. The integration engine 218 may encapsulate each of the one or more trained ML models 106 into deployable units compatible with the one or more ROS2 nodes 104, simplifying deployment into robotic systems. The system 102 using the feature extraction module 238 may extract the one or more precise features from raw sensor data using deep learning-based methods, enabling the identification of complex patterns for accurate predictions. The system 102 may leverage the robust ROS community ecosystem, offering extensive support, active forums, and collaborative development.
[0058] In an example embodiment, Reinforcement Learning (RL) is a type of machine learning technique in which an agent learns to make decisions by interacting with an environment, focusing on learning one or more values of the environment's actions or states. Q-Learning is an example of an RL technique that learns the value of actions (Q-values) for each state-action pair. It uses a table to represent Q-values and updates them based on observed rewards. Deep Q-Network (DQN) further extends Q-learning by using neural networks to approximate Q-values. Tables 1 and 2 compare the performance of different reinforcement learning techniques from a reinforcement package for a simple and a complex environment, respectively. For this experimentation, a fixed count of 100 episodes is used for every RL algorithm, and three different TurtleBot Burger robots were used to perform the whole process. The count of 100 episodes is chosen because of the computation capacity limitations of the system, and also because there is no significant improvement in coverage when the episode count is increased.
Performance Metrics:
Exploration: The process of an agent attempting different actions to discover the environment
CPU Usage: Average CPU usage during execution
Average Memory Usage: Memory consumed by the technique
Total Time Taken (m): Time required for the technique to complete (in minutes)

Table 1
As per the experimental results shown above, in the simple environment the DQN-Modified technique uses the lowest CPU (4.7%), indicating the lowest resource utilization, and is the most efficient technique, completing in 110 minutes with the highest exploration and reward.

Table 2
As per the experimental results shown above, in the complex environment the DQN-Modified technique uses the lowest CPU (4.96%), indicating the lowest resource utilization, and is the most efficient technique, completing in 119 minutes with the highest exploration and reward.
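For illustration only, the tabular Q-learning update described in paragraph [0058] can be sketched on a toy 5-cell corridor (this is not the TurtleBot experiment; the environment and hyper-parameters are hypothetical). The agent starts at cell 0, actions are 0 = left and 1 = right, and reward 1 is received on reaching the goal cell.

```python
import random

# Tabular Q-learning sketch of the update Q(s,a) += alpha * (r + gamma *
# max_a' Q(s',a') - Q(s,a)). Toy corridor environment; all values illustrative.
N_STATES, GOAL, ALPHA, GAMMA, EPS = 5, 4, 0.5, 0.9, 0.1
Q = [[0.0, 0.0] for _ in range(N_STATES)]
rng = random.Random(0)

def choose(s):
    # Epsilon-greedy action selection with random tie-breaking.
    if rng.random() < EPS or Q[s][0] == Q[s][1]:
        return rng.choice([0, 1])
    return 0 if Q[s][0] > Q[s][1] else 1

for episode in range(300):
    s = 0
    while s != GOAL:
        a = choose(s)
        s2 = max(0, s - 1) if a == 0 else s + 1
        r = 1.0 if s2 == GOAL else 0.0
        # Core Q-learning update based on the observed reward.
        Q[s][a] += ALPHA * (r + GAMMA * max(Q[s2]) - Q[s][a])
        s = s2

# After training, the greedy policy moves right in every non-goal state.
policy = [0 if Q[s][0] > Q[s][1] else 1 for s in range(GOAL)]
print(policy)  # → [1, 1, 1, 1]
```

A DQN replaces the table `Q` with a neural network that maps states to Q-value estimates, which is what makes the approach scale to the continuous sensor spaces of a real robot.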
[0059] In an embodiment, the one or more ML models 106 may be hosted on a cloud service using the cloud integration module 250. The package and capabilities module 322 associated with the one or more ROS2 nodes 104 may interact with the cloud service via one or more Application Programming Interfaces (APIs) using the cloud connectivity module 332 over a communication network. In some embodiments, the communication network may be a wireless communication means, such as telecommunication networks, wireless-fidelity, Bluetooth, wireless local area networks, near-field communication, satellite networks, and the like, but not limited thereto. Examples of wired communication means may include optical fiber cables, electrical cables, wires, and the like.
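For illustration only, interaction with a cloud-hosted model over an API typically involves serialising sensor data into a request body. The following stdlib sketch shows one possible payload shape; the endpoint, field names, and API format are hypothetical, not part of the disclosure.

```python
import json

# Hypothetical sketch of packaging sensor data for a cloud-hosted model
# (paragraph [0059]); the field names and API shape are illustrative only.

def build_inference_request(node_id, sensor_readings):
    """Serialise a batch of readings into a JSON body for a cloud API call."""
    return json.dumps({
        "node_id": node_id,
        "inputs": sensor_readings,
        "format": "float32",
    })

body = build_inference_request("lidar_front", [[0.4, 1.2], [0.5, 1.1]])
# The body would then be sent over the communication network (e.g. HTTPS
# with urllib.request), and the response would carry the predictions.
print(json.loads(body)["node_id"])  # → lidar_front
```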
[0060] In an embodiment, the system 102 may include a toolkit with the inbuilt ML package specifically designed to integrate seamlessly with ROS2 318, ensuring smooth communication with the one or more ROS2 nodes 104 and simplifying the incorporation of the one or more ML models 106 into robotic systems. This native integration may reduce development time and complexity, which is not addressed by other existing solutions. Further, the ML package may be optimized for real-time data processing, making it suitable for applications where low latency is crucial, such as autonomous navigation and real-time decision-making.
[0061] FIG. 4A illustrates an exemplary flow chart 400A depicting the implementation of the proposed system 102 for integrating Machine Learning (ML) into the open-source Robotics Operating System 2 (ROS2), in accordance with embodiments of the present disclosure. The blocks/steps of the flowchart 400A may be implemented by any of the processing engines 208.
[0062] Referring to FIG. 4A, at step 402A, the system 102 may collect raw data in the form of the one or more sensor data from various sources such as the one or more sensors, other databases, the LiDAR, or one or more simulation environments. The one or more sensor data serves as a foundation for training and evaluating the one or more ML models 106.
[0063] At step 404A, the system 102 may clean and prepare the raw data by handling one or more missing values, removing one or more outliers, normalizing, and standardizing features. The system 102 may convert the sensor data into a format suitable for training the one or more ML models 106 (e.g., structured numerical data, images, etc.) to generate the one or more processed sensor data.
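For illustration only, the cleaning step 404A (missing values, outliers, standardization) can be sketched with the standard library as follows. The function name, the mean-imputation choice, and the outlier threshold are hypothetical, not prescribed by the disclosure.

```python
import statistics

# Illustrative stdlib-only pre-processing sketch for step 404A; the
# z-score cut-off and imputation strategy are hypothetical.

def preprocess(values, z_cut=1.5):
    # 1. Handle missing values by mean imputation.
    known = [v for v in values if v is not None]
    mean = statistics.fmean(known)
    filled = [mean if v is None else v for v in values]
    # 2. Remove outliers beyond z_cut standard deviations.
    sd = statistics.pstdev(filled)
    kept = [v for v in filled if sd == 0 or abs(v - mean) / sd <= z_cut]
    # 3. Standardise to zero mean and unit variance.
    m, s = statistics.fmean(kept), statistics.pstdev(kept)
    return [(v - m) / s if s else 0.0 for v in kept]

out = preprocess([1.0, 2.0, None, 3.0, 100.0])
print(len(out))  # → 4 (the outlier 100.0 is dropped)
```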
[0064] At step 406A, the system 102 may choose the one or more appropriate learning techniques based on the nature of the data and a desired outcome, to apply them to the one or more training data sets and to generate the one or more ML models 106.
[0065] At step 408A, the system 102 may use the processed sensor data to train the one or more generated ML models 106. This step involves feeding input data in the form of the one or more processed sensor data to each of the one or more ML models 106, allowing it to learn one or more patterns, and one or more relationships within the data and predict the output.
[0066] At step 410A, the system 102 may evaluate the performance of each of the one or more trained ML models 106 by testing the model on unseen data in the form of one or more test data to evaluate its performance using one or more metrics such as accuracy, precision, etc. This step ensures that each of the one or more ML models 106 generalizes well and is not overfitting to the training data which may lead to the model being unable to fit additional data or predict output accurately.
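The accuracy and precision metrics named in step 410A have standard definitions, sketched below for illustration on hypothetical test labels: accuracy is the fraction of correct predictions, and precision is the fraction of positive predictions that are actually positive.

```python
# Stdlib sketch of the evaluation metrics in step 410A; the labels
# below are illustrative test data, not experimental results.

def accuracy(y_true, y_pred):
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def precision(y_true, y_pred, positive=1):
    predicted_pos = [(t, p) for t, p in zip(y_true, y_pred) if p == positive]
    if not predicted_pos:
        return 0.0
    return sum(t == positive for t, _ in predicted_pos) / len(predicted_pos)

y_true = [1, 0, 1, 1, 0, 1]
y_pred = [1, 0, 0, 1, 1, 1]
print(accuracy(y_true, y_pred))   # → 0.6666666666666666  (4 of 6 correct)
print(precision(y_true, y_pred))  # → 0.75  (3 true positives of 4 predicted)
```

Evaluating on held-out test data, as the paragraph notes, is what exposes overfitting: a model that memorises the training set scores well there but poorly on these unseen examples.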
[0067] At step 412A, the system 102 may validate the predicted output to evaluate the accuracy of each of the one or more trained ML models 106. The system 102 may also incorporate a feedback mechanism that leverages ensemble learning results in subsequent iterations. The ensemble learning process combines the predicted output generated by the one or more trained ML models 106 to generate a more accurate prediction than each of the one or more trained ML models 106. Thus, the feedback mechanism refines the technique/model selection and training process, ensuring continuous improvement in the prediction accuracy and robustness of each of the ML models 106. Consequently, the predictions from the ensemble learning module 236 play a role in managing future learning and refinement, enabling ongoing enhancement of each model's performance.
[0068] At step 414A, the system 102 may adjust the one or more internal parameters of each of the one or more ML models 106 in real time based on one or more changing conditions/errors or feedback, enabling adaptive optimization to further improve prediction accuracy and minimize the one or more errors.
[0069] At step 416A, the system 102 may embed each of the one or more optimized and trained ML models 106 into the one or more ROS2 nodes 104 for real-world applications. These ROS2 nodes may utilize the model to perform tasks such as perception, navigation, or decision-making in a robotic or automation system.
[0070] FIG. 4B illustrates a flow chart of an example method 400B for integrating ML into the open-source Robotics Operating System 2 (ROS2), in accordance with embodiments of the present disclosure. In some embodiments, the method 400B may be implemented by a system (such as the system 102) in the one or more ROS2 nodes 104.
[0071] Referring to FIG. 4B, at step 402B, the method 400B may include collecting, by a processor (such as processor 202 of FIG. 2), one or more sensor data from one or more ROS2 nodes 104. The one or more sensor data may be collected from the plurality of sensing devices associated with the one or more ROS2 nodes 104, for example, any one or a combination of: a camera, a LiDAR, an ultrasonic sensor, a force sensor, or one or more other robotics hardware. Further, the one or more processed sensor data is generated by transforming the one or more collected sensor data into a format compatible with the one or more ML models 106.
[0072] At step 404B, the method 400B may include selecting the one or more learning techniques to apply on the one or more training data sets stored in the one or more ROS2 nodes 104 to generate one or more ML models 106.
[0073] At step 406B, the method 400B may include training each of the one or more ML models 106 by feeding the one or more processed sensor data to predict an output. During training, the model learns to recognize patterns and correlations in the one or more processed sensor data.
[0074] At step 408B, the method 400B may include encapsulating the one or more trained ML models 106 into one or more deployable units compatible with the one or more ROS2 nodes 104, ensuring seamless deployment and integration.
[0075] At step 410B, the method 400B may include combining the predicted output generated by the one or more trained ML models 106 to generate a more accurate prediction than each of the one or more trained ML models 106 individually. The combined predicted output is fed back through a feedback mechanism for selecting the one or more learning techniques based on criteria including the computational efficiency, the accuracy, etc., to generate a more accurate ML model than the previous one.
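For illustration only, the encapsulation of step 408B can be pictured as wrapping a trained model behind a node-like subscribe/publish interface. The class, the callback names, and the stand-in "trained model" below are hypothetical; a real deployable unit would bind these hooks to ROS2 topics.

```python
# Hypothetical sketch of a deployable unit (step 408B): a trained model
# wrapped behind callbacks resembling a ROS2 node's subscribe/publish
# cycle. All names are illustrative.

class DeployableModelUnit:
    def __init__(self, model, preprocess=lambda x: x):
        self.model = model
        self.preprocess = preprocess
        self.published = []

    def on_sensor_input(self, raw):
        """Would be wired to a ROS2 subscription in a real deployment."""
        features = self.preprocess(raw)
        prediction = self.model(features)
        self.publish(prediction)

    def publish(self, prediction):
        """Would publish on a ROS2 topic in a real deployment."""
        self.published.append(prediction)


# Stand-in 'trained model': classify the nearest range reading by threshold.
unit = DeployableModelUnit(model=lambda x: "obstacle" if x < 0.5 else "clear",
                           preprocess=lambda raw: min(raw))
unit.on_sensor_input([0.9, 0.4, 1.2])
print(unit.published)  # → ['obstacle']
```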
[0076] The proposed system 102 provides a comprehensive ML package tailored for the ROS2 318. Further, the proposed system 102 provides numerous advantages over existing methods, including native compatibility with ROS2 318, real-time processing capabilities, modular architecture, support for ensemble learning, advanced feature extraction techniques, and a user-friendly configuration interface. By integrating the ML package and enhancing the capabilities of the ROS2 framework, the system 102 significantly broadens the scope of ROS2's applications in fields requiring AI-driven solutions, including autonomous vehicles, industrial automation, healthcare robotics, smart warehousing, agriculture, security, retail, construction, consumer electronics, environmental monitoring, telepresence, entertainment, elderly care, retail analytics, and Robotics-as-a-Service (RaaS).

ADVANTAGES OF THE PRESENT DISCLOSURE
[0077] The present disclosure ensures seamless communication of an ML package with the one or more ROS nodes and easy integration into robotic systems.
[0078] The present disclosure provides continuous improvement of one or more trained machine learning models by applying an ensemble learning approach that combines the outputs from the multiple trained machine learning models to improve overall prediction accuracy and robustness.
[0079] The present disclosure provides a system optimized for real-time data processing in a robotic environment, making it suitable for applications where low latency is crucial, such as autonomous navigation and real-time decision-making.
[0080] The present disclosure provides a system that encourages reusability and customization, making it easier to adapt to a plurality of robotic applications without redundant coding.
[0081] The present disclosure offers several advantages over existing methods, including seamless integration with ROS2, real-time data processing optimization, modular architecture, support for ensemble methods, comprehensive testing and validation tools, enhanced feature extraction capabilities, ease of use, and extensive configurability.
Claims:
1. A system (102) for integrating Machine Learning (ML) into an open-source Robotics Operating System 2 (ROS2) (318), comprising:
a data acquisition module (226), executed by one or more processors associated with the system (102), to collect sensor data from one or more ROS2 nodes (104);
a learning module (228), executed by the one or more processors, to select one or more learning techniques to apply on training data sets stored in the one or more ROS2 nodes (104) to generate one or more ML models (106);
a training module (230), executed by the one or more processors, to train each of the one or more ML models (106) by feeding processed sensor data to predict an output;
a deployment module (234), executed by the one or more processors, to encapsulate the one or more trained ML models (106) into one or more deployable units compatible with the one or more ROS2 nodes (104); and
an ensemble learning module (236), executed by the one or more processors, to combine the predicted output generated by each of the one or more trained ML models (106) to generate a final output.
2. The system (102) as claimed in claim 1, wherein the processed sensor data is generated by transforming the collected sensor data into a format compatible with the one or more ML models (106).
3. The system (102) as claimed in claim 1, further comprising:
a model evaluation module (232), executed by the one or more processors, to evaluate performance of each of the one or more trained ML models (106) against test data, wherein the test data comprises at least one of: real-time data and simulated data; and
a classification module, executed by the one or more processors, to classify the real-time data based on the predicted output.
4. The system (102) as claimed in claim 1, further comprising a feature extraction module (238), executed by the one or more processors, to refine the processed sensor data based on a deep learning-based feature extraction technique.
5. The system (102) as claimed in claim 1, wherein the sensor data is collected from a plurality of sensing devices associated with the one or more ROS2 nodes (104), and wherein the plurality of sensing devices comprise any one or a combination of: a camera, a LiDAR, an ultrasonic sensor, a force sensor, and one or more other robotics hardware.
6. The system (102) as claimed in claim 1, wherein the one or more learning techniques comprise: one or more supervised learning techniques, one or more unsupervised learning techniques, and one or more reinforcement learning techniques, and wherein the one or more learning techniques are selected based on a predetermined criteria including at least one of: an accuracy, a precision, a computational efficiency, and one or more other performance requirements.
7. The system (102) as claimed in claim 1, wherein the ensemble learning module (236) is configured to improve accuracy of each of the one or more trained ML models (106) by incorporating the combined predicted outputs as feedback for selecting the one or more learning techniques.
8. The system (102) as claimed in claim 1, further comprising a model optimization module (244), executed by the one or more processors, to enable a user to fine-tune each of the one or more ML models (106) with the one or more learning techniques through a ROS2 parameter server, and wherein the ROS2 parameter server is configured to dynamically adjust one or more internal parameters of each of the one or more ML models (106) to improve prediction accuracy and minimize one or more errors.
9. The system (102) as claimed in claim 1, wherein the system (102) is based on a modular architecture providing distinct independent components that are reusable and customizable.
10. A method (400B) for integrating Machine Learning (ML) into an open-source Robotics Operating System 2 (ROS2), comprising:
collecting, by one or more processors associated with a system (102), sensor data from one or more ROS2 nodes (104);
selecting, by the one or more processors, one or more learning techniques to apply on training data sets stored in the one or more ROS2 nodes (104) to generate one or more ML models (106);
training, by the one or more processors, each of the one or more ML models (106) by feeding processed sensor data to predict an output;
encapsulating, by the one or more processors, the one or more trained ML models (106) into one or more deployable units compatible with the one or more ROS2 nodes (104); and
combining, by the one or more processors, the predicted outputs generated by each of the one or more trained ML models (106) to generate a final output.

Documents

Application Documents

# Name Date
1 202541038730-STATEMENT OF UNDERTAKING (FORM 3) [22-04-2025(online)].pdf 2025-04-22
2 202541038730-REQUEST FOR EXAMINATION (FORM-18) [22-04-2025(online)].pdf 2025-04-22
3 202541038730-REQUEST FOR EARLY PUBLICATION(FORM-9) [22-04-2025(online)].pdf 2025-04-22
4 202541038730-FORM-9 [22-04-2025(online)].pdf 2025-04-22
5 202541038730-FORM FOR SMALL ENTITY(FORM-28) [22-04-2025(online)].pdf 2025-04-22
6 202541038730-FORM 18 [22-04-2025(online)].pdf 2025-04-22
7 202541038730-FORM 1 [22-04-2025(online)].pdf 2025-04-22
8 202541038730-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [22-04-2025(online)].pdf 2025-04-22
9 202541038730-EVIDENCE FOR REGISTRATION UNDER SSI [22-04-2025(online)].pdf 2025-04-22
10 202541038730-EDUCATIONAL INSTITUTION(S) [22-04-2025(online)].pdf 2025-04-22
11 202541038730-DRAWINGS [22-04-2025(online)].pdf 2025-04-22
12 202541038730-DECLARATION OF INVENTORSHIP (FORM 5) [22-04-2025(online)].pdf 2025-04-22
13 202541038730-COMPLETE SPECIFICATION [22-04-2025(online)].pdf 2025-04-22
14 202541038730-Proof of Right [16-07-2025(online)].pdf 2025-07-16
15 202541038730-FORM-26 [16-07-2025(online)].pdf 2025-07-16