SYSTEM AND METHOD FOR DYNAMICALLY BALANCING WORKLOADS ACROSS MULTIPLE CLOUD ENVIRONMENTS

ABSTRACT

A system (100) for dynamically balancing workloads across multiple cloud environments using real-time performance metrics is disclosed. The system (100) comprises a monitoring unit (102) configured to collect performance metrics such as network latency, traffic volume, server health, and available compute resources from a plurality of cloud providers (106a-106n). A processing unit (104), in communication with the monitoring unit (102), analyzes the performance metrics using a machine learning-based model to predict performance degradation or failures. Based on the analysis, the system (100) dynamically reallocates workloads across the cloud environments to maintain optimal performance and availability. The system (100) further includes a multi-cloud integration framework (108) for interfacing with heterogeneous cloud APIs and a visualization interface (110) for real-time monitoring and control.

Claims: 10, Figures: 3
BACKGROUND
Field of Invention
[001] Embodiments of the present invention generally relate to the field of cloud computing, and particularly to a system and a method for dynamically balancing workloads across multiple cloud environments.
Description of Related Art
[002] In recent years, organizations have increasingly adopted multi-cloud strategies to achieve greater flexibility, reliability, and performance in their digital infrastructure. A multi-cloud setup allows applications and services to run across different cloud providers, such as Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP). While this approach offers benefits like avoiding vendor lock-in and improving fault tolerance, it also introduces new complexities in workload management.
[003] Traditional load balancers are generally designed to operate within a single cloud provider’s infrastructure. These conventional systems are typically static in nature, relying on preconfigured routing rules or manual intervention to balance application traffic. As a result, they are limited in their ability to respond dynamically to sudden changes in network conditions, server health, traffic spikes, or resource availability. This lack of adaptability often leads to inefficient utilization of computing resources, unexpected downtime, performance bottlenecks, and increased operational costs.
[004] Current load balancing solutions such as AWS Elastic Load Balancing (ELB), Azure Traffic Manager, and Google Cloud Load Balancer provide efficient distribution of traffic within their respective cloud environments. However, these systems are designed to function only within the boundaries of a single cloud platform. Several third-party solutions attempt to address multi-cloud workload distribution. Tools such as CloudBolt and F5 Networks provide management interfaces and traffic control mechanisms, but they lack the ability to make autonomous decisions based on live performance data. These tools still require significant manual configuration for proactive optimization. The limitations of existing solutions make it difficult for organizations to achieve high availability and optimal performance in a multi-cloud architecture.
[005] There is thus a need for an improved and advanced system and method for dynamically balancing workloads across multiple cloud environments that addresses the aforementioned limitations in a more efficient manner.
SUMMARY
[006] Embodiments in accordance with the present invention provide a system for dynamically balancing workloads across multiple cloud environments. The system comprises a monitoring unit configured to collect real-time performance metrics from a plurality of cloud providers, wherein the performance metrics comprise data selected from a network latency, a traffic volume, a server health status, available compute resources, or a combination thereof. The system further comprises a processing unit in communication with the monitoring unit, wherein the processing unit is configured to: analyze the collected performance metrics using a machine learning-based model; predict potential performance degradation or failures in one or more of the cloud environments based on the analyzed metrics; and dynamically reallocate workloads among the plurality of cloud providers based on the predicted performance outcomes.
[007] Embodiments in accordance with the present invention further provide a method for dynamically balancing workloads across multiple cloud environments. The method comprises the steps of: collecting real-time performance metrics from a plurality of cloud providers using a monitoring unit; analyzing the collected performance metrics using a machine learning-based model; predicting potential failures in one or more cloud environments based on the analyzed performance metrics; and dynamically reallocating workloads among the cloud providers.
[008] Embodiments of the present invention may provide a number of advantages depending on their particular configuration. First, embodiments of the present application may provide a system for dynamically balancing workloads across multiple cloud environments that may be capable of reducing service downtime and improving application performance by proactively identifying and responding to potential performance issues.
[009] Next, embodiments of the present application may provide a system for dynamically balancing workloads across multiple cloud environments that is adaptive and intelligent, leveraging machine learning algorithms trained on historical and real-time performance data to make optimal decisions about workload distribution.
[0010] Next, embodiments of the present application may provide a system for dynamically balancing workloads across multiple cloud environments that is self-improving, as it updates the machine learning-based model based on performance feedback from previous reallocation decisions, leading to more accurate future predictions.
[0011] Next, embodiments of the present application may provide a system for dynamically balancing workloads across multiple cloud environments that is administrator-friendly, featuring a real-time visualization interface to monitor cloud performance analytics and workload distribution status, enhancing operational transparency and control.
[0012] These and other advantages will be apparent from the present application of the embodiments described herein.
[0013] The preceding is a simplified summary to provide an understanding of some embodiments of the present invention. This summary is neither an extensive nor exhaustive overview of the present invention and its various embodiments. The summary presents selected concepts of the embodiments of the present invention in a simplified form as an introduction to the more detailed description presented below. As will be appreciated, other embodiments of the present invention are possible utilizing, alone or in combination, one or more of the features set forth above or described in detail below.
BRIEF DESCRIPTION OF THE DRAWINGS
[0014] The above and still further features and advantages of embodiments of the present invention will become apparent upon consideration of the following detailed description of embodiments thereof, especially when taken in conjunction with the accompanying drawings, and wherein:
[0015] FIG. 1 depicts a block diagram of a system for dynamically balancing workloads across multiple cloud environments, according to an embodiment of the present invention;
[0016] FIG. 2 illustrates components of a processing unit for the system for dynamically balancing workloads across the multiple cloud environments, according to an embodiment of the present invention; and
[0017] FIG. 3 illustrates a flowchart for a method for dynamically balancing workloads across the multiple cloud environments, according to an embodiment of the present invention.
[0018] The headings used herein are for organizational purposes only and are not meant to be used to limit the scope of the description or the claims. As used throughout this application, the word "may" is used in a permissive sense (i.e., meaning having the potential to), rather than the mandatory sense (i.e., meaning must). Similarly, the words “include”, “including”, and “includes” mean including but not limited to. To facilitate understanding, like reference numerals have been used, where possible, to designate like elements common to the figures. Optional portions of the figures may be illustrated using dashed or dotted lines, unless the context of usage indicates otherwise.
DETAILED DESCRIPTION
[0019] The following description includes the preferred best mode of one embodiment of the present invention. It will be clear from this description of the invention that the invention is not limited to these illustrated embodiments but that the invention also includes a variety of modifications and embodiments thereto. Therefore, the present description should be seen as illustrative and not limiting. While the invention is susceptible to various modifications and alternative constructions, it should be understood, that there is no intention to limit the invention to the specific form disclosed, but, on the contrary, the invention is to cover all modifications, alternative constructions, and equivalents falling within the scope of the invention as defined in the claims.
[0020] In any embodiment described herein, the open-ended terms "comprising", "comprises", and the like (which are synonymous with "including", "having", and "characterized by") may be replaced by the respective partially closed phrases "consisting essentially of", "consists essentially of", and the like, or the respective closed phrases "consisting of", "consists of", and the like.
[0021] As used herein, the singular forms “a”, “an”, and “the” designate both the singular and the plural, unless expressly stated to designate the singular only.
[0022] FIG. 1 depicts a block diagram of a system 100 for dynamically balancing workloads across multiple cloud environments, according to an embodiment of the present invention. The system 100 may be capable of dynamically reallocating workloads based on predicted performance degradation, service failures, or resource constraints, without requiring manual intervention. The system 100 may be configured for real-time monitoring and automated decision-making that may make the system 100 suitable for DevOps and enterprise environments.
[0023] According to the embodiments of the present invention, the system 100 may incorporate non-limiting hardware components to enhance processing speed and system efficiency. The system 100 may comprise a monitoring unit 102, a processing unit 104, cloud providers 106a-106n (collectively referred to as the cloud providers 106, or individually as a cloud provider 106), a multi-cloud integration framework 108, and a visualization interface 110. In an embodiment of the present invention, these hardware components may be integrated with computer-executable instructions and software modules specifically designed to address and overcome the limitations of traditional static load balancing systems.
[0024] In an embodiment of the present invention, the monitoring unit 102 may comprise a distributed set of monitoring agent modules deployed across the infrastructure of the cloud providers 106. In an embodiment of the present invention, the monitoring agents may be deployed to continuously monitor the performance metrics of the cloud providers 106.
[0025] The monitoring unit 102 may be adapted to continuously collect real-time performance metrics from the cloud providers 106. The performance metrics may comprise data selected from a network latency, a traffic volume, a server health status, available compute resources, and so forth. Embodiments of the present invention are intended to include or otherwise cover any suitable type of data for performance metrics, including known, related art, and/or later developed technologies. The performance metrics may be transmitted to the processing unit 104 for further analysis. The monitoring unit 102 may be designed to operate with minimal overhead to ensure non-intrusive data collection, and may support scalable deployment to accommodate varying numbers of cloud environments.
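By way of a non-limiting illustration only, the metric collection described above may be sketched in Python as follows. The `MetricSample` structure and the `probe` callable are illustrative assumptions of the editor, not part of the claimed system; a real monitoring agent would replace the stubbed probe with calls to a provider's monitoring API.

```python
import time
from dataclasses import dataclass, field


@dataclass
class MetricSample:
    """One snapshot of the performance metrics named in the specification."""
    provider_id: str
    network_latency_ms: float
    traffic_volume_rps: float
    server_healthy: bool
    available_cpu_cores: int
    timestamp: float = field(default_factory=time.time)


def collect_metrics(provider_id, probe):
    """Poll one cloud provider via a caller-supplied probe callable.

    `probe` is a hypothetical stand-in for a provider-specific metrics
    endpoint; it returns a dict with the four metric fields.
    """
    raw = probe()
    return MetricSample(
        provider_id=provider_id,
        network_latency_ms=raw["latency_ms"],
        traffic_volume_rps=raw["traffic_rps"],
        server_healthy=raw["healthy"],
        available_cpu_cores=raw["cpu_cores"],
    )


# Example with a stubbed probe standing in for a real provider endpoint.
sample = collect_metrics("provider-a", lambda: {
    "latency_ms": 42.0, "traffic_rps": 1500.0, "healthy": True, "cpu_cores": 8,
})
```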
[0026] In an embodiment of the present invention, the processing unit 104 may be configured to receive, analyze, and interpret the performance metrics collected by the monitoring unit 102. The processing unit 104 may include programming modules (as shown in FIG. 2) for executing instructions related to the output of the system 100, such as initiating dynamic workload reallocations in response to predicted resource bottlenecks or latency spikes, generating control commands to optimize compute resource utilization across cloud providers 106, implementing feedback loops to refine machine learning-based models based on historical outcomes, and managing communication with external APIs through the multi-cloud integration framework 108 for real-time task orchestration and service availability adjustments.
[0027] Embodiments of the present invention are intended to include or otherwise cover any suitable output of the processing unit 104, including known, related art, and/or later developed technologies. The suitable output may include, but is not limited to, API instructions to cloud management platforms, migration triggers for application instances, configuration updates to service-level agreements (SLAs), diagnostic logs for system health, or performance dashboards accessible via the visualization interface 110. The output may also encompass machine-readable signals or event-based triggers that facilitate seamless cloud coordination and fault recovery.
[0028] The processing unit 104 may be, but is not limited to, a Programmable Logic Control (PLC) unit, a microprocessor, a development board, and so forth. In some embodiments, the processing unit 104 may also comprise system-on-chip (SoC) architectures, field-programmable gate arrays (FPGAs), single-board computers, embedded controllers, cloud-native orchestration engines, or virtualized container hosts depending on the deployment architecture. Embodiments of the present invention are intended to include or otherwise cover any type of the processing unit 104, including known, related art, and/or later developed technologies. In an embodiment of the present invention, the processing unit 104 may further be explained in conjunction with FIG. 2.
[0029] The processing unit 104 may employ a machine learning-based model to identify patterns, detect anomalies, and forecast potential performance degradation or system failures within the cloud providers 106. Based on the analytical results, the processing unit 104 may be adapted to make real-time decisions regarding workload reallocation to optimize system availability, latency, and cost-efficiency. The processing unit 104 may further include a model training component that continuously improves the accuracy of predictions by incorporating feedback from previous workload balancing outcomes, thereby enabling adaptive and intelligent decision-making over time.
[0030] In an embodiment of the present invention, the cloud providers 106 may represent a diverse set of public or private cloud environments, including but not limited to platforms such as Amazon Web Services (AWS), Microsoft Azure, Google Cloud Platform (GCP), and other infrastructure-as-a-service (IaaS) providers. The cloud providers 106 may host application instances, services, and computational tasks that are subject to reallocation as directed by the processing unit 104. Each of the cloud providers 106 may expose Application Programming Interfaces (APIs) or similar interfaces to enable interaction with the monitoring unit 102 and the multi-cloud integration framework 108. The system 100 may be designed to interact with the APIs in a vendor-agnostic manner to facilitate seamless cross-cloud communication and workload migration.
[0031] In an embodiment of the present invention, the multi-cloud integration framework 108 may be configured to provide a standardized interface for interacting with heterogeneous cloud environments. The multi-cloud integration framework may be configured to enable the processing unit 104 to interface with heterogeneous cloud provider Application Programming Interfaces (APIs).
[0032] The multi-cloud integration framework 108 may abstract underlying differences in protocols, data formats, authentication methods, and control mechanisms of the cloud providers 106. Based on the abstraction of the underlying differences, the multi-cloud integration framework 108 may enable the processing unit 104 to execute unified control commands irrespective of the cloud infrastructure. The multi-cloud integration framework 108 may further support plugin-based extensions or adapters to integrate with newly emerging cloud platforms for enabling a long-term scalability and flexibility of the system 100.
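As a non-limiting sketch only, the abstraction performed by the multi-cloud integration framework 108 resembles the well-known adapter pattern, in which a uniform interface hides provider-specific protocols. The class and function names below are illustrative assumptions; real adapters would wrap provider SDK calls rather than return strings.

```python
from abc import ABC, abstractmethod


class CloudAdapter(ABC):
    """Uniform interface hiding provider-specific APIs, formats, and auth."""

    @abstractmethod
    def migrate_workload(self, workload_id, target):
        ...


class AwsAdapter(CloudAdapter):
    def migrate_workload(self, workload_id, target):
        # A real adapter would invoke the provider SDK here.
        return f"aws:migrated:{workload_id}->{target}"


class GcpAdapter(CloudAdapter):
    def migrate_workload(self, workload_id, target):
        return f"gcp:migrated:{workload_id}->{target}"


# Plugin-style registry: new providers can be registered without
# modifying callers, mirroring the framework's plugin-based extensions.
ADAPTERS = {"aws": AwsAdapter(), "gcp": GcpAdapter()}


def unified_migrate(provider, workload_id, target):
    """Execute one unified control command irrespective of the provider."""
    return ADAPTERS[provider].migrate_workload(workload_id, target)
```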
[0033] In an embodiment of the present invention, the visualization interface 110 may be configured to present real-time and historical data related to the cloud performance metrics, the workload distribution, anomaly alerts, and system recommendations. The visualization interface 110 may further be configured to display real-time cloud performance analytics and workload distribution status to administrators. The visualization interface 110 may include an interactive dashboard (not shown) through which system administrators may monitor the health of applications, view prediction outcomes, and configure policy settings such as thresholds for triggering workload reallocations. The visualization interface 110 may also support manual override capabilities, enabling human intervention when needed. The visualization interface 110 may be accessible via web browsers, remote access tools, and so forth, for comprehensive visibility and control of the multi-cloud operations.
[0034] FIG. 2 illustrates components of the processing unit 104 for the system 100 for dynamically balancing the workloads across the multiple cloud environments, according to an embodiment of the present invention. In an embodiment of the present invention, the processing unit 104 may comprise computer-executable instructions in the form of programming modules including a data capturing module 200, a performance analysis module 202, a prediction module 204, a decision-making module 206, a feedback learning module 208, a workload orchestration module 210, and a cloud API interface module 212.
[0035] In an embodiment of the present invention, the data capturing module 200 may be configured to receive, structure, and/or timestamp the real-time performance metrics such as network latency, central processing unit (CPU) load, memory utilization, throughput, and health indicators collected by the monitoring unit 102. The data capturing module 200 may be configured to normalize incoming data from various cloud environments for uniform processing. Upon normalizing the data, the data capturing module 200 may be configured to generate an analysis signal and may transmit the generated analysis signal to the performance analysis module 202.
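For illustration purposes only, the normalization step performed by the data capturing module 200 may be sketched as a mapping from provider-native metric names onto canonical keys. The `field_map` convention and the sample AWS-style field names are editorial assumptions, not part of the specification.

```python
def normalize(raw, field_map):
    """Map provider-specific metric names onto the system's canonical keys,
    carrying the timestamp through for uniform downstream processing."""
    out = {canonical: raw[native] for canonical, native in field_map.items()}
    out["timestamp"] = raw.get("ts", 0.0)
    return out


# Hypothetical mapping for one provider's native metric names.
aws_map = {"latency_ms": "LatencyMillis", "cpu_load": "CPUUtilization"}
record = normalize(
    {"LatencyMillis": 12.5, "CPUUtilization": 0.7, "ts": 100.0}, aws_map
)
```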
[0036] In an embodiment of the present invention, the performance analysis module 202 may be configured to be activated upon receiving the analysis signal from the data capturing module 200. In an embodiment of the present invention, the performance analysis module 202 may further be configured to analyze the normalized metrics to detect deviations from predefined thresholds, trends in resource usage, and early indicators of performance degradation. The performance analysis module 202 may be configured to classify system behavior patterns and relay the evaluated metrics to the prediction module 204. Additionally, the performance analysis module 202 may be configured to incorporate contextual data, such as scheduled maintenance windows, regional demand spikes, or security alerts, to enhance the precision of its assessments. The performance analysis module 202 may further be configured to prioritize metric elements based on their criticality or historical correlation with prior system failures. Further, the performance analysis module 202 may be configured to continuously refine its analysis parameters using feedback from the prediction module 204 and the workload reallocation outcomes to improve future detection accuracy.
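As a minimal, non-limiting sketch of the threshold-deviation detection described above (the window size and threshold values are illustrative assumptions), a moving average over the most recent samples can flag early degradation:

```python
def detect_deviation(samples, threshold, window=3):
    """Flag degradation when the moving average of the most recent
    `window` samples exceeds the predefined threshold."""
    if len(samples) < window:
        return False
    recent = samples[-window:]
    return sum(recent) / window > threshold


# Latency samples (ms): stable at first, then a sustained spike.
latencies = [20.0, 22.0, 21.0, 90.0, 95.0, 98.0]
degraded = detect_deviation(latencies, threshold=50.0)
```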
[0037] Upon relaying the evaluated metrics, the performance analysis module 202 may be configured to generate a prediction signal and may transmit the generated prediction signal to the prediction module 204.
[0038] The prediction signal may encapsulate a structured dataset summarizing the current performance state, detected anomalies, trend forecasts, and a confidence score associated with the analysis. The performance analysis module 202 may be configured to timestamp the prediction signal and include metadata such as an originating cloud environment, a type of workload affected, and a severity level of anticipated degradation. In some embodiments of the present invention, the performance analysis module 202 may also be configured to tag the prediction signal with a priority flag to enable expedited processing by the prediction module 204 during high-load conditions or critical operational windows.
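For illustration only, the prediction signal's structured contents, including the metadata and priority flag described above, may be modeled as a simple record. The field names below are editorial assumptions chosen to mirror the specification's wording.

```python
import time
from dataclasses import dataclass, field


@dataclass
class PredictionSignal:
    """Structured payload relayed from the performance analysis module
    to the prediction module, per the specification's description."""
    origin_cloud: str        # originating cloud environment
    workload_type: str       # type of workload affected
    severity: str            # severity level of anticipated degradation
    anomalies: list          # detected anomalies
    confidence: float        # confidence score, 0.0 .. 1.0
    priority_flag: bool = False  # expedited processing under high load
    timestamp: float = field(default_factory=time.time)


sig = PredictionSignal(
    origin_cloud="provider-a",
    workload_type="web-frontend",
    severity="high",
    anomalies=["memory_growth", "rising_latency"],
    confidence=0.91,
    priority_flag=True,
)
```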
[0039] In an embodiment of the present invention, the prediction module 204 may be configured to be activated upon receiving the prediction signal from the performance analysis module 202. In an embodiment of the present invention, the prediction module 204 may further be configured to apply the machine learning-based model that may be trained on the historical cloud performance data. The machine learning-based model may be configured to enable the prediction module 204 to forecast potential performance issues such as resource saturation, increased latency, or impending system failure. The prediction module 204 may be configured to use both historical data and real-time analysis results to estimate the likelihood and impact of various failure scenarios.
[0040] The machine learning-based model may be, but is not limited to, a supervised learning model such as a Random Forest, Gradient Boosted Trees, or a Neural Network trained on labeled historical datasets representing normal and abnormal cloud operating conditions. In some embodiments of the present invention, the machine learning-based model may be an unsupervised model such as an Autoencoder, an Isolation Forest, and so forth. The prediction module 204 may further be configured to implement time-series forecasting techniques, such as Long Short-Term Memory (LSTM) networks, Prophet, and so forth, to detect performance degradation trends over time.
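As a deliberately simplified, non-limiting stand-in for the unsupervised detectors named above (Autoencoders, Isolation Forests), the core idea of flagging observations that deviate from historical behavior can be sketched with a z-score test; the threshold `k` is an illustrative assumption:

```python
import statistics


def zscore_anomaly(history, current, k=3.0):
    """Flag `current` as anomalous if it lies more than k standard
    deviations from the historical mean -- a toy stand-in for the
    unsupervised models named in the specification."""
    mu = statistics.mean(history)
    sigma = statistics.stdev(history)
    return abs(current - mu) > k * sigma


# Stable historical latency samples (ms) versus a sudden spike.
normal_latency = [20.0, 21.0, 19.5, 20.5, 20.0, 21.5]
spike_is_anomaly = zscore_anomaly(normal_latency, 95.0)
```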
[0041] For instance, during a routine monitoring cycle, the performance analysis module 202 detects increased memory usage and growing response times in Cloud Provider A. These metrics are relayed as a prediction signal to the prediction module 204. The machine learning model, trained on a dataset containing multiple previous degradation events across various cloud providers, recognizes the combination of elevated memory usage and response latency as a pattern leading to service throttling within 30 minutes in 85% of past cases. The prediction module 204 therefore forecasts a high likelihood of performance degradation in Cloud Provider A and generates an alert with a corresponding confidence score of 0.91. Based on this output, the system may proceed to trigger the workload reallocation process to a more stable cloud environment. The prediction module 204 may further rank the urgency of potential failure events based on business impact metrics or service-level agreements (SLAs), thereby supporting the decision-making process for workload migration strategies. The prediction module 204 may also provide feedback to continuously refine the model using outcomes from executed reallocations, thus enabling adaptive learning over time.
[0042] Upon forecasting the potential performance issues, the prediction module 204 may generate a decision signal and may transmit the generated decision signal to the decision-making module 206.
[0043] In an embodiment of the present invention, the decision-making module 206 may be configured to be activated upon receiving the decision signal from the prediction module 204. In an embodiment of the present invention, the decision-making module 206 may be configured to determine optimized workload redistribution plans based on predictive insights received from the prediction module 204. The decision-making module 206 may consider predefined service-level objectives (SLOs), current system constraints, and performance forecasts to derive an efficient reallocation strategy.
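By way of a non-limiting sketch, the selection of a reallocation target against service-level objectives and capacity constraints may be expressed as a filter-then-rank step. The SLO values, metric names, and provider labels below are illustrative assumptions only.

```python
def choose_target(candidates, max_latency_ms, min_cpu):
    """Pick the candidate with the lowest predicted latency among those
    that satisfy both the latency SLO and the capacity constraint.
    Returns None when no candidate is eligible."""
    eligible = {
        name: m for name, m in candidates.items()
        if m["pred_latency_ms"] <= max_latency_ms and m["free_cpu"] >= min_cpu
    }
    if not eligible:
        return None
    return min(eligible, key=lambda n: eligible[n]["pred_latency_ms"])


providers = {
    "A": {"pred_latency_ms": 120.0, "free_cpu": 16},  # violates latency SLO
    "B": {"pred_latency_ms": 35.0, "free_cpu": 4},    # too little capacity
    "C": {"pred_latency_ms": 40.0, "free_cpu": 12},   # eligible
}
target = choose_target(providers, max_latency_ms=50.0, min_cpu=8)
```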
[0044] In an embodiment of the present invention, the feedback learning module 208 may be configured to gather feedback regarding the effectiveness of previously executed workload reallocation decisions. The feedback learning module 208 may incorporate real-world outcomes into the training dataset used by the prediction module 204 to improve future forecasting accuracy and system adaptiveness.
[0045] In an embodiment of the present invention, the workload orchestration module 210 may be configured to execute the workload reallocation strategies generated by the decision-making module 206. The workload orchestration module 210 may initiate live migration or scaling of application instances across different cloud environments, ensuring state integrity and minimal disruption during the transition. The workload orchestration module 210 may enable the reallocation of the workloads such as a live migration of application instances or tasks to alternate cloud providers 106a-106n with optimal availability.
[0046] For example, following a decision signal indicating performance risk in Cloud Provider A, the decision-making module 206 may be configured to transmit workload migration instructions to the workload orchestration module 210. The workload orchestration module 210 may be configured to reallocate a microservice-based application to Cloud Provider C, which may be identified as having optimal compute availability and reduced latency. The workload orchestration module 210 may be configured to initiate live migration of Docker containers using Kubernetes federation mechanisms, preserving state through volume mounts and session replication. The workload orchestration module 210 may be configured to perform pre-migration health validation, establish data synchronization channels, and seamlessly transition the application workload with minimal service impact. Additionally, the workload orchestration module 210 may be configured to log migration details and update telemetry for monitoring purposes of the Cloud Provider C.
[0047] In an embodiment of the present invention, the cloud API interface module 212 may be configured to facilitate communication between the system 100 and multiple heterogeneous cloud environments via provider-specific application programming interfaces (APIs). The cloud API interface module 212 may manage credential handling, request formatting, session validation, and transactional logging for secure and compliant execution of orchestration commands.
[0048] In an exemplary scenario of the present invention, the system 100 may be required for dynamic provisioning of resources in Cloud Provider C. In such embodiment of the present invention, the workload orchestration module 210 may be configured to invoke the cloud API interface module 212. The cloud API interface module 212 may be configured to authenticate with Cloud Provider C using a secure token exchange protocol, format the orchestration request according to the provider’s API specification, and issue the resource deployment command. The cloud API interface module 212 may be configured to monitor the response for confirmation of resource availability and operational readiness. Upon successful execution, the cloud API interface module 212 may be configured to collect instance metadata, validate provisioning status, and return a response to the orchestration module. Furthermore, the cloud API interface module 212 may be configured to log all interaction data, including request payloads, timestamps, error codes, and outcomes for compliance and diagnostic purposes.
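For illustration only, the request formatting and logging-friendly structure described for the cloud API interface module 212 may be sketched as below. The bearer-token header and the idempotency-key convention are editorial assumptions about one plausible design, not requirements of the specification; the key ensures that a retried provisioning request can be recognized and not executed twice.

```python
import hashlib
import json


def build_provision_request(token, resource):
    """Format a provisioning command with an auth header and a
    deterministic idempotency key derived from the request body."""
    body = json.dumps(resource, sort_keys=True)
    return {
        "headers": {"Authorization": f"Bearer {token}"},
        "idempotency_key": hashlib.sha256(body.encode()).hexdigest()[:16],
        "body": body,
    }


req = build_provision_request("tok-123", {"type": "vm", "cpu": 4})
```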
[0049] FIG. 3 illustrates a flowchart for a method 300 for dynamically balancing workloads across the multiple cloud environments, according to an embodiment of the present invention.
[0050] At step 302, the system 100 may collect the real-time performance metrics from the cloud providers 106a-106n using the monitoring unit 102.
[0051] At step 304, the system 100 may analyze the collected performance metrics using the machine learning-based model.
[0052] At step 306, the system 100 may predict the potential failures in the one or more cloud environments of the cloud providers 106a-106n based on the analyzed performance metrics.
[0053] At step 308, the system 100 may dynamically reallocate the workloads among the cloud providers 106a-106n. The system 100 may further update the machine learning-based model with performance feedback to improve future decisions.
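As a non-limiting end-to-end sketch, one cycle of the method 300 (collect, predict, reallocate, and feed the outcome back) can be wired together with pluggable callables standing in for the modules of FIG. 2; all names and the simple latency rule are illustrative assumptions:

```python
def balance_once(metrics, predict, reallocate, update_model):
    """One cycle of method 300: identify at-risk providers from collected
    metrics, reallocate their workloads, and feed outcomes back into the
    model (steps 302 through 308)."""
    at_risk = [p for p, m in metrics.items() if predict(m)]
    outcomes = {p: reallocate(p) for p in at_risk}
    update_model(outcomes)  # feedback loop of step 308
    return outcomes


feedback_log = []
out = balance_once(
    metrics={"A": {"latency": 95.0}, "B": {"latency": 20.0}},
    predict=lambda m: m["latency"] > 50.0,        # stand-in for the ML model
    reallocate=lambda p: f"moved workloads off {p}",
    update_model=feedback_log.append,
)
```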
[0054] While the invention has been described in connection with what is presently considered to be the most practical and various embodiments, it is to be understood that the invention is not to be limited to the disclosed embodiments, but on the contrary, is intended to cover various modifications and equivalent arrangements included within the scope of the appended claims.
[0055] This written description uses examples to disclose the invention, including the best mode, and also to enable any person skilled in the art to practice the invention, including making and using any devices or systems and performing any incorporated methods. The patentable scope of the invention is defined in the claims, and may include other examples that occur to those skilled in the art. Such other examples are intended to be within the scope of the claims if they have structural elements that do not differ from the literal language of the claims, or if they include equivalent structural elements with insubstantial differences from the literal language of the claims.

CLAIMS
I/We Claim:
1. A system (100) for dynamically balancing workloads across multiple cloud environments, the system (100) comprising:
a monitoring unit (102) configured to collect real-time performance metrics from a plurality of cloud providers (106a-106n), wherein the performance metrics comprise data selected from a network latency, a traffic volume, a server health status, available compute resources, or a combination thereof; and
a processing unit (104) in communication with the monitoring unit (102), characterized in that the processing unit (104) is configured to:
analyze the collected performance metrics using a machine learning-based model;
predict potential performance degradation or failures in one or more of the cloud environments based on the analyzed metrics; and
dynamically reallocate workloads among the plurality of cloud providers (106a-106n) based on the predicted performance outcomes.
2. The system (100) as claimed in claim 1, wherein the monitoring unit (102) comprises monitoring agents deployed across the plurality of cloud providers (106a-106n) to continuously monitor the performance metrics.
3. The system (100) as claimed in claim 1, wherein the processing unit (104) utilizes the machine learning-based model trained on historical cloud performance data.
4. The system (100) as claimed in claim 1, wherein the machine learning-based model is configured to perform anomaly detection to trigger workload reallocation prior to service degradation.
5. The system (100) as claimed in claim 1, wherein the processing unit (104) is configured to update the machine learning-based model based on performance feedback from previously executed workload reallocations.
6. The system (100) as claimed in claim 1, wherein the reallocation of workloads comprises a live migration of application instances or tasks to alternate cloud providers (106a-106n) with optimal availability.
7. The system (100) as claimed in claim 1, further comprising a multi-cloud integration framework (108) that enables the processing unit (104) to interface with heterogeneous cloud provider Application Programming Interfaces (APIs).
8. The system (100) as claimed in claim 1, further comprising a visualization interface (110) configured to display real-time cloud performance analytics and workload distribution status to administrators.
9. A method (300) for dynamically balancing workloads across multiple cloud environments, the method comprising:
collecting real-time performance metrics from cloud providers (106a-106n) using a monitoring unit (102);
analyzing the collected performance metrics using a machine learning-based model;
predicting potential failures in one or more cloud environments based on the analyzed performance metrics; and
dynamically reallocating workloads among the cloud providers (106a-106n).
10. The method (300) as claimed in claim 9, further comprising a step of updating the machine learning-based model with performance feedback to improve future decisions.
Date: May 19, 2025
Place: Noida
Nainsi Rastogi
Patent Agent (IN/PA-2372)
Agent for the Applicant