
A Method For Intelligent Task Scheduling And Resource Allocation In Cloud Edge Computing Environments

Abstract: The present invention provides a method for intelligent task scheduling and resource allocation in cloud-edge computing environments such as Internet of Things (IoT) ecosystems, smart cities, mobile edge computing (MEC), and infrastructures integrated with 5G/6G networks. The method introduces a hybrid framework combining artificial intelligence (AI) techniques, including reinforcement learning and federated learning, with bio-inspired optimization algorithms such as Particle Swarm Optimization (PSO), Ant Colony Optimization (ACO), and Grey Wolf Optimization (GWO). This integration enables dynamic, real-time scheduling and allocation of resources based on network variability, workload changes, and security conditions. The invention ensures energy efficiency, low-latency execution, and data privacy by leveraging federated learning for distributed coordination without sharing raw data. It supports adaptive and secure decision-making across heterogeneous, large-scale, and resource-constrained environments, overcoming the limitations of traditional static models and centralized processing.


Patent Information

Application #
Filing Date
02 June 2025
Publication Number
24/2025
Publication Type
INA
Invention Field
COMPUTER SCIENCE
Status
Email
Parent Application

Applicants

SR UNIVERSITY
ANANTHSAGAR, HASANPARTHY (M), WARANGAL URBAN, TELANGANA - 506371, INDIA

Inventors

1. MR. PADALA SRAVAN
SCHOOL OF COMPUTER SCIENCE & ARTIFICIAL INTELLIGENCE, ANANTHASAGAR, HASANPARTHY (PO), WARANGAL-506371
2. DR. MOHAMMED ALI SHAIK
SCHOOL OF COMPUTER SCIENCE & ARTIFICIAL INTELLIGENCE, ANANTHASAGAR, HASANPARTHY (PO), WARANGAL-506371

Specification

Description:FIELD OF THE INVENTION
This invention relates to a method for intelligent task scheduling and resource allocation in cloud-edge computing environments.
BACKGROUND OF THE INVENTION
Distributed computing infrastructures such as IoT ecosystems, smart cities integrated with mobile edge computing (MEC), and 5G/6G networks pose fundamental challenges in resource management, response-latency reduction, and energy efficiency. Existing scheduling techniques and congestion-management rules cannot adjust quickly to changing workloads and are inflexible with respect to varying security concerns and network variability. Standalone AI and bio-inspired algorithms either demand high computational resources or generalize poorly across environments. Closing these gaps requires a single intelligent system capable of near real-time decisions, efficient resource management, and secure distributed operation.
1. Deep Reinforcement Learning (DRL) techniques are effective for adaptive task scheduling in cloud and edge systems, as they learn optimal strategies in real-time. Models like DL-DRL use a two-level structure to manage task priorities and resource allocation, while Q-learning offers a simpler approach for less complex environments. Advanced models like Heterogeneous GNN-based DRL enhance scalability by leveraging network structure and node diversity, making them suitable for complex, distributed systems.
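The adaptive-scheduling idea behind the Q-learning variant mentioned above can be sketched as follows. This is an illustrative toy only: the state (observed edge load), the two placement actions, and the reward values are assumptions for demonstration, not the model described in this application.

```python
import random
from collections import defaultdict

ACTIONS = ["edge", "cloud"]  # where an incoming task may be placed

class QScheduler:
    """Tabular Q-learning agent for edge-vs-cloud task placement."""

    def __init__(self, alpha=0.1, gamma=0.9, epsilon=0.2):
        self.q = defaultdict(float)  # (state, action) -> estimated value
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def choose(self, state):
        # epsilon-greedy: mostly exploit the best-known action, sometimes explore
        if random.random() < self.epsilon:
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.q[(state, a)])

    def update(self, state, action, reward, next_state):
        # standard one-step Q-learning update rule
        best_next = max(self.q[(next_state, a)] for a in ACTIONS)
        td_target = reward + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (td_target - self.q[(state, action)])

def simulate(steps=2000):
    random.seed(0)
    sched = QScheduler()
    for _ in range(steps):
        load = random.choice(["low", "high"])  # observed edge-node load
        action = sched.choose(load)
        # assumed reward model: the edge is fast when lightly loaded,
        # the cloud pays a fixed network latency but ignores edge load
        reward = {"low": {"edge": 1.0, "cloud": 0.3},
                  "high": {"edge": -0.5, "cloud": 0.3}}[load][action]
        sched.update(load, action, reward, random.choice(["low", "high"]))
    return sched

sched = simulate()
# after training, the learned values favor the edge under low load
# and favor offloading to the cloud when the edge is congested
```

After enough episodes the Q-table encodes a placement policy without any explicit rules, which is the adaptability property the DRL approaches above rely on.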
2. Federated Learning-based scheduling is a privacy-preserving, decentralized task-assignment approach well suited to dispersed edge-cloud systems. Rather than transmitting raw data, devices train their own models locally and share only model updates, keeping data private. Exchanging model weights also enables real-time decision-making across geographically distributed nodes, a significant advantage in environments such as IoT and smart cities. Reported results indicate energy savings of roughly 30% in energy-constrained systems alongside considerably improved resource utilization. Overall, it offers a strong balance of performance, efficiency, and data security for large-scale distributed computing.
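A minimal sketch of the federated-averaging (FedAvg) pattern described above: each node trains a local model on its private shard and only the weight vectors, never the raw samples, reach the coordinator, which averages them weighted by sample count. The linear-regression task, data sizes, and learning rate below are illustrative assumptions.

```python
import numpy as np

def local_train(weights, X, y, lr=0.01, epochs=20):
    """One node's local training: plain gradient descent on mean-squared error."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def fedavg(global_w, node_data):
    """Aggregate local updates weighted by each node's sample count."""
    updates, sizes = [], []
    for X, y in node_data:
        updates.append(local_train(global_w, X, y))  # raw data stays on the node
        sizes.append(len(y))
    return np.average(updates, axis=0, weights=np.array(sizes, dtype=float))

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])

# three edge nodes, each holding a private data shard of a different size
nodes = []
for n in (50, 80, 30):
    X = rng.normal(size=(n, 2))
    nodes.append((X, X @ true_w + rng.normal(scale=0.1, size=n)))

w = np.zeros(2)
for _ in range(30):          # 30 communication rounds
    w = fedavg(w, nodes)
# w converges toward true_w although no node ever shared raw samples
```

The same round structure applies when the shared model is a scheduling policy rather than a regressor; only the local training step changes.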
3. Dynamic Programming (DP)-based models such as DPEETS are effective for scheduling tasks with energy efficiency and deadlines. They achieve meaningful gains in energy savings and latency performance, making them appropriate for static, predictable environments. However, they do not offer adaptability, which limits their performance in dynamic real-time cloud-edge systems.
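The core of a DP-based energy/deadline scheduler of the kind cited above can be shown in a few lines. The task set, per-task energy figures, and deadline here are invented for illustration, not taken from DPEETS: each task can run slow (cheap) or fast (expensive), and the DP picks a speed per task so the total runtime meets the deadline at minimum total energy.

```python
# (runtime_slow, energy_slow, runtime_fast, energy_fast) per task -- assumed values
tasks = [(4, 2, 2, 5), (3, 1, 1, 4), (5, 2, 2, 6)]
DEADLINE = 8

def min_energy(tasks, deadline):
    INF = float("inf")
    # dp[t] = minimum energy to finish the tasks processed so far in total time t
    dp = [INF] * (deadline + 1)
    dp[0] = 0.0
    for slow_t, slow_e, fast_t, fast_e in tasks:
        nxt = [INF] * (deadline + 1)
        for t in range(deadline + 1):
            if dp[t] == INF:
                continue
            # try both speed levels for this task
            for dt, de in ((slow_t, slow_e), (fast_t, fast_e)):
                if t + dt <= deadline:
                    nxt[t + dt] = min(nxt[t + dt], dp[t] + de)
        dp = nxt
    best = min(dp)
    return None if best == INF else best

print(min_energy(tasks, DEADLINE))  # prints 11: run the first two tasks fast, the third slow
```

The table is fixed once the task set is known, which is exactly why DP schedulers excel in static, predictable settings but cannot react when tasks or deadlines change at runtime.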
1. KR102734080B1: This application describes an AI optimization model for improving the efficiency of data processing in a cloud environment. It comprises a multi-level lightweight framework of AI models that selects a model according to the type of processing task, a dynamic resource allocation mechanism that adaptively allocates cloud resources, and a task scheduling framework that drives task execution toward optimal completion of data-processing jobs. A prediction-model synchronization component continuously retrains the models and synchronizes the AI model within each cloud server in real time. The system supports multi-GPU processing and real-time stream processing, and its interoperability helps manage workloads and avoid overload conditions, improving the responsiveness and scalability of data-intensive cloud operations.
2. US9436512B2: This patent describes energy-efficient job scheduling in heterogeneous chip multiprocessors based on dynamic program behavior using the Prim model.
Criteria: Traditional Scheduling (Rule-Based / Static) | AI-Only Models (DRL, FL, etc.) | Bio-Inspired Algorithms (PSO, ACO, etc.) | Proposed Hybrid AI + Bio-Inspired System
Adaptability: Low – cannot respond to real-time workload changes | High – learns from environment and adapts dynamically | Moderate – some adaptive behavior through heuristics | Very High – adapts using reinforcement learning and evolves optimization strategies over time
Energy Efficiency: Poor – static allocation leads to overuse of resources | Good – RL models can optimize energy-aware policies | Good – bio-inspired techniques can minimize energy use | Excellent – energy-aware decision-making using a multi-objective optimization strategy
Scalability: Low – not designed for large-scale or distributed networks | Moderate – DRL/FL scale but need significant resources | High – low-resource footprint enables scalability | High – optimized for distributed edge-cloud environments with lightweight learning mechanisms
Privacy Preservation: None – centralized control and data exchange | Good – federated learning enables local training | Not applicable – heuristic models don’t handle privacy | Excellent – federated learning with local training and secure aggregation ensures privacy compliance
Computation Cost: Low – simple but inflexible | High – deep models require GPUs and long training | Low – efficient for lightweight systems | Moderate – balances compute load with efficiency via hybridization
Real-Time Performance: Poor – cannot adapt on-the-fly | Variable – depends on model size and tuning | Moderate – heuristics converge quickly | Strong – uses real-time RL feedback with lightweight optimization for fast scheduling decisions
Security and Resilience: Low – limited threat detection or response | Moderate – anomaly detection models possible | Low – not designed for cybersecurity | High – supports secure scheduling and adaptive defense mechanisms through hybrid learning
Generalization to New Environments: Very Poor – rule-based and non-adaptive | Moderate – retraining required for new domains | Low – needs parameter retuning | High – combines learning and metaheuristics for better generalization and self-improvement

SUMMARY OF THE INVENTION
This summary is provided to introduce a selection of concepts, in a simplified format, that are further described in the detailed description of the invention.
This summary is neither intended to identify key or essential inventive concepts of the invention and nor is it intended for determining the scope of the invention.
To further clarify advantages and features of the present invention, a more particular description of the invention will be rendered by reference to specific embodiments thereof, which is illustrated in the appended drawings. It is appreciated that these drawings depict only typical embodiments of the invention and are therefore not to be considered limiting of its scope. The invention will be described and explained with additional specificity and detail with the accompanying drawings.
The invention relates to an intelligent hybrid scheduling system for distributed computing infrastructures, such as those found in IoT ecosystems, smart cities, and mobile edge computing (MEC) integrated with 5G/6G networks. Such environments require low-latency response times, high energy efficiency, and secure management of resources, areas where traditional methods and standalone artificial intelligence (AI) or bio-inspired algorithms lack adaptability and scalability, limiting performance and raising computational cost. The invention provides a novel framework that combines learning-based AI approaches (e.g., reinforcement and federated learning) with bio-inspired optimization methods (e.g., Particle Swarm Optimization (PSO), Ant Colony Optimization (ACO), or Grey Wolf Optimization (GWO)) into a single framework that performs task scheduling and resource allocation dynamically and in real time. This dynamic learning capability improves adaptation to variable network conditions, workload changes, and security threats, making the system suitable for heterogeneous, large-scale, and latency-sensitive environments. The invention also addresses gaps identified in the prior literature, including scalability, privacy-preserving coordination, and energy-aware decision making, enabling the method to operate effectively on resource-constrained edge nodes and to support trustworthy distributed coordination across multiple agents while balancing security and privacy.
BRIEF DESCRIPTION OF THE DRAWINGS
The illustrated embodiments of the subject matter will be understood by reference to the drawings, wherein like parts are designated by like numerals throughout. The following description is intended only by way of example, and simply illustrates certain selected embodiments of devices, systems, and methods that are consistent with the subject matter as claimed herein, wherein:
FIGURE 1: SYSTEM ARCHITECTURE
The figures depict embodiments of the present subject matter for the purposes of illustration only. A person skilled in the art will easily recognize from the following description that alternative embodiments of the structures and methods illustrated herein may be employed without departing from the principles of the disclosure described herein.
DETAILED DESCRIPTION OF THE INVENTION
The detailed description of various exemplary embodiments of the disclosure is described herein with reference to the accompanying drawings. It should be noted that the embodiments are described herein in such details as to clearly communicate the disclosure. However, the amount of details provided herein is not intended to limit the anticipated variations of embodiments; on the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the scope of the present disclosure as defined by the appended claims.
It is also to be understood that various arrangements may be devised that, although not explicitly described or shown herein, embody the principles of the present disclosure. Moreover, all statements herein reciting principles, aspects, and embodiments of the present disclosure, as well as specific examples, are intended to encompass equivalents thereof.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of example embodiments. As used herein, the singular forms “a,” “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises,” “comprising,” “includes” and/or “including,” when used herein, specify the presence of stated features, integers, steps, operations, elements and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components and/or groups thereof.
It should also be noted that in some alternative implementations, the functions/acts noted may occur out of the order noted in the figures. For example, two figures shown in succession may, in fact, be executed concurrently or may sometimes be executed in the reverse order, depending upon the functionality/acts involved.
In addition, the descriptions of "first", "second", “third”, and the like in the present invention are used for the purpose of description only, and are not to be construed as indicating or implying their relative importance or implicitly indicating the number of technical features indicated. Thus, features defining "first" and "second" may include at least one of the features, either explicitly or implicitly.
Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which example embodiments belong. It will be further understood that terms, e.g., those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
The invention relates to an intelligent hybrid scheduling system for distributed computing infrastructures, such as those found in IoT ecosystems, smart cities, and mobile edge computing (MEC) integrated with 5G/6G networks. Such environments require low-latency response times, high energy efficiency, and secure management of resources, areas where traditional methods and standalone artificial intelligence (AI) or bio-inspired algorithms lack adaptability and scalability, limiting performance and raising computational cost. The invention provides a novel framework that combines learning-based AI approaches (e.g., reinforcement and federated learning) with bio-inspired optimization methods (e.g., Particle Swarm Optimization (PSO), Ant Colony Optimization (ACO), or Grey Wolf Optimization (GWO)) into a single framework that performs task scheduling and resource allocation dynamically and in real time. This dynamic learning capability improves adaptation to variable network conditions, workload changes, and security threats, making the system suitable for heterogeneous, large-scale, and latency-sensitive environments. The invention also addresses gaps identified in the prior literature, including scalability, privacy-preserving coordination, and energy-aware decision making, enabling the method to operate effectively on resource-constrained edge nodes and to support trustworthy distributed coordination across multiple agents while balancing security and privacy.
NOVELTY:
This invention introduces a hybrid scheduling and resource management system tailored for distributed computing environments, such as IoT networks, smart cities, MEC, and 5G/6G systems. By integrating federated learning, reinforcement learning, and bio-inspired optimization algorithms, it enables intelligent scheduling that adapts to variable workloads, network congestion, and resource constraints in real time. It preserves data privacy through federated learning while achieving global coordination, ensures energy-efficient and latency-sensitive task execution with lightweight bio-inspired techniques, and facilitates secure, decentralized decision-making across heterogeneous infrastructures. This scalable and innovative framework overcomes limitations of static models and centralized processing, providing a transformative solution for next-generation computing infrastructures.
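The bio-inspired half of the hybrid framework can be illustrated with a small Particle Swarm Optimization run that searches for a task-to-node assignment minimizing a weighted latency-plus-energy cost. The node speeds, power figures, cost weighting, random-keys decoding, and PSO parameters below are all illustrative assumptions, not the configuration claimed by this invention.

```python
import random

N_TASKS, N_NODES = 10, 3
random.seed(1)
task_load = [random.uniform(1, 5) for _ in range(N_TASKS)]  # assumed workloads
node_speed = [1.0, 2.0, 4.0]   # faster nodes...
node_power = [1.0, 3.0, 9.0]   # ...draw more power

def cost(assign):
    """Makespan (latest node finish time) plus a weighted energy term."""
    finish = [0.0] * N_NODES
    energy = 0.0
    for load, node in zip(task_load, assign):
        t = load / node_speed[node]
        finish[node] += t
        energy += t * node_power[node]
    return max(finish) + 0.1 * energy

def decode(position):
    # random-keys decoding: clamp each continuous dimension to a node index
    return [min(N_NODES - 1, max(0, int(p))) for p in position]

def pso(iters=100, swarm=20, w=0.7, c1=1.5, c2=1.5):
    pos = [[random.uniform(0, N_NODES) for _ in range(N_TASKS)] for _ in range(swarm)]
    vel = [[0.0] * N_TASKS for _ in range(swarm)]
    pbest = [p[:] for p in pos]
    pbest_cost = [cost(decode(p)) for p in pos]
    g = min(range(swarm), key=lambda i: pbest_cost[i])
    gbest, gbest_cost = pbest[g][:], pbest_cost[g]
    for _ in range(iters):
        for i in range(swarm):
            for d in range(N_TASKS):
                r1, r2 = random.random(), random.random()
                # classic velocity update: inertia + cognitive + social terms
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            c = cost(decode(pos[i]))
            if c < pbest_cost[i]:
                pbest[i], pbest_cost[i] = pos[i][:], c
                if c < gbest_cost:
                    gbest, gbest_cost = pos[i][:], c
    return decode(gbest), gbest_cost

assignment, best = pso()
# the swarm converges on an assignment trading off makespan against energy
```

In the hybrid framework the RL component would supply the cost weighting and react to runtime feedback, while a lightweight swarm like this runs on the edge node itself, which is why the combination suits resource-constrained deployments.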

Claims:
1. A method for intelligent task scheduling and resource allocation in cloud-edge computing environments, the method comprising:
integrating artificial intelligence (AI) learning approaches including reinforcement learning and federated learning, with bio-inspired optimization techniques selected from Particle Swarm Optimization (PSO), Ant Colony Optimization (ACO), and Grey Wolf Optimization (GWO);
dynamically and adaptively allocating computing resources and scheduling tasks in real time, based on varying network conditions, workload changes, and security threats;
preserving data privacy by employing federated learning for distributed model training across edge nodes without sharing raw data; and
enabling energy-efficient and latency-sensitive decision making across heterogeneous computing infrastructures comprising Internet of Things (IoT) devices, smart city systems, mobile edge computing nodes, and 5G/6G networks.
2. The method as claimed in claim 1, wherein the bio-inspired optimization techniques are configured to operate as lightweight agents suitable for energy-constrained edge nodes to ensure energy-aware and resource-efficient operations.
3. The method as claimed in claim 1, wherein reinforcement learning is utilized to optimize task scheduling decisions based on historical task execution patterns and predicted resource availability across the distributed infrastructure.
4. The method as claimed in claim 1, wherein federated learning is configured to enable decentralized coordination between multiple agents while preserving user data privacy and supporting scalable model convergence across the network.
5. The method as claimed in claim 1, wherein the system is adapted to maintain secure management of resources through continuous learning and adaptive threat detection in latency-sensitive and large-scale environments.

Documents

Application Documents

# Name Date
1 202541053276-STATEMENT OF UNDERTAKING (FORM 3) [02-06-2025(online)].pdf 2025-06-02
2 202541053276-REQUEST FOR EARLY PUBLICATION(FORM-9) [02-06-2025(online)].pdf 2025-06-02
3 202541053276-POWER OF AUTHORITY [02-06-2025(online)].pdf 2025-06-02
4 202541053276-FORM-9 [02-06-2025(online)].pdf 2025-06-02
5 202541053276-FORM FOR SMALL ENTITY(FORM-28) [02-06-2025(online)].pdf 2025-06-02
6 202541053276-FORM 1 [02-06-2025(online)].pdf 2025-06-02
7 202541053276-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [02-06-2025(online)].pdf 2025-06-02
8 202541053276-EVIDENCE FOR REGISTRATION UNDER SSI [02-06-2025(online)].pdf 2025-06-02
9 202541053276-EDUCATIONAL INSTITUTION(S) [02-06-2025(online)].pdf 2025-06-02
10 202541053276-DRAWINGS [02-06-2025(online)].pdf 2025-06-02
11 202541053276-DECLARATION OF INVENTORSHIP (FORM 5) [02-06-2025(online)].pdf 2025-06-02
12 202541053276-COMPLETE SPECIFICATION [02-06-2025(online)].pdf 2025-06-02