
A Hybrid System of Fog-Cloud Scheduling Algorithm Utilizing Data Density Metrics for Optimized Latency Reduction

Abstract: The invention discloses a hybrid fog-cloud scheduling system designed to minimize latency in distributed computing environments by leveraging real-time data density metrics. The system comprises a scheduler module that dynamically allocates computational tasks between fog nodes and cloud servers based on data density and latency requirements. A data density metric calculator identifies high-density zones, while a task classifier assigns tasks as latency-sensitive or latency-tolerant. A dynamic task allocator prioritizes latency-critical tasks for fog processing and delegates non-urgent workloads to cloud servers. A continuous feedback loop monitors task performance and refines scheduling strategies using real-time metrics. The method involves calculating data density, classifying tasks, allocating resources dynamically, and adapting decisions using performance feedback. The system is modular, interoperable, and scalable, making it suitable for smart cities, healthcare, industrial automation, and autonomous systems. This invention enhances responsiveness, optimizes bandwidth utilization, and improves overall Quality of Service (QoS) in hybrid fog-cloud infrastructures.


Patent Information

Application #
Filing Date
22 September 2025
Publication Number
43/2025
Publication Type
INA
Invention Field
COMPUTER SCIENCE
Status
Parent Application

Applicants

SR UNIVERSITY
ANANTHSAGAR, HASANPARTHY (M), WARANGAL URBAN, TELANGANA - 506371, INDIA

Inventors

1. RAKESH REDDY GURRALA
SR UNIVERSITY, ANANTHSAGAR, HASANPARTHY (M), WARANGAL URBAN, TELANGANA - 506371, INDIA
2. DR. SAMPATH KUMAR TALLAPALLY
SR UNIVERSITY, ANANTHSAGAR, HASANPARTHY (M), WARANGAL URBAN, TELANGANA - 506371, INDIA

Specification

Description: FIELD OF THE INVENTION
The present invention relates to distributed computing and intelligent resource management, specifically within hybrid fog-cloud computing environments. It particularly concerns adaptive scheduling algorithms designed to optimize task execution between fog nodes and cloud servers. More precisely, the invention addresses latency-sensitive applications in Internet of Things (IoT) ecosystems, where real-time responsiveness and efficient resource utilization are essential.
BACKGROUND OF THE INVENTION
In recent years, the proliferation of Internet of Things (IoT) devices has dramatically increased the volume, velocity, and variety of data being generated at the edge of networks. These edge devices—including sensors, cameras, wearable technologies, and autonomous systems—require immediate data processing for mission-critical applications such as smart transportation, healthcare, and industrial control. Traditional cloud computing, though powerful in terms of processing and storage, introduces significant latency due to the geographical distance between data sources and cloud data centers.
Fog computing was introduced as a complementary paradigm to address these latency and bandwidth challenges. By bringing computational and storage resources closer to the edge, fog nodes enable faster data processing and real-time responsiveness. However, fog infrastructure has limitations in terms of scalability and resource availability, which makes it less suitable for tasks involving large-scale data analytics or long-term storage.
The integration of fog and cloud computing into a unified hybrid architecture presents a promising approach to balancing the benefits of low latency and high computational capacity. In such hybrid systems, time-sensitive tasks can be processed at the fog layer, while computationally intensive or latency-tolerant tasks are offloaded to the cloud. The efficiency of this division depends heavily on how intelligently tasks are scheduled and distributed across fog and cloud nodes.
Existing scheduling mechanisms often use static rules, round-robin algorithms, or priority-based policies, which fail to consider the dynamic characteristics of data generation at the edge. These approaches are not adaptive to real-time changes in data density, network congestion, or node availability, resulting in suboptimal latency performance and inefficient resource utilization.
Data density, defined as the concentration of data generation events in a particular spatial or temporal domain, is a critical metric that can provide deep insights into real-time system demands. Areas with high data density typically require more immediate processing and are prone to congestion and latency spikes. However, conventional scheduling systems do not incorporate data density as a decision-making parameter.
Furthermore, in large-scale IoT deployments, the network environment is highly dynamic: devices frequently join and leave the network, bandwidth fluctuates, and processing loads on fog nodes vary continuously. These factors demand an adaptive and intelligent scheduling framework that can respond in real time to evolving conditions.
While some recent efforts have introduced machine learning and heuristic techniques into fog-cloud scheduling, they often rely on historical data or require significant training overhead, which limits their applicability in dynamic and real-time environments. Moreover, these techniques generally lack a unified strategy that incorporates both data density and task criticality.
Therefore, there is a pressing need for a novel scheduling algorithm in hybrid fog-cloud environments that not only considers the latency requirements and computational demands of tasks but also dynamically adapts based on real-time data density metrics. Such an approach would enhance system responsiveness, reduce end-to-end latency, and improve overall Quality of Service (QoS) in diverse application scenarios.
The present invention pertains to the field of distributed computing and intelligent resource management, with particular emphasis on hybrid fog-cloud computing environments. It specifically addresses the domain of scheduling algorithms designed to optimize task execution in systems where computational resources are spread across both local (fog) and centralized (cloud) nodes.
More particularly, this invention focuses on latency-sensitive applications in Internet of Things (IoT) ecosystems, where real-time responsiveness and efficient data processing are critical. The invention introduces a novel scheduling mechanism that dynamically analyzes the spatial and temporal density of data generated by edge devices to make informed decisions about task allocation between fog and cloud infrastructures.
The invention resides at the intersection of edge analytics, network-aware computing, and intelligent scheduling, aiming to bridge the latency-performance gap often encountered in hybrid distributed systems. It is highly applicable in areas such as smart cities, autonomous systems, industrial automation, and healthcare monitoring, where large-scale and low-latency data processing is essential.
US20240289345: A method for migrating a computing resource across cloud environments is described. According to the method, a data management system may interface with a first cloud environment and a second cloud environment. The data management system may receive a request to migrate a first computing resource stored in the first cloud environment to the second cloud environment. The data management system may generate, based on the request, a first compute job in the first cloud environment to cause the first cloud environment to extract data from a backup of the first computing resource and transfer the data to the second cloud environment. The data management system may instruct the second cloud environment to generate a second computing resource. The data management system may generate a second compute job in the second cloud environment to cause the second cloud environment to load the extracted data into the second computing resource.
US10992752B2: Systems, methods, and computer-readable media are provided for wireless sensor networks (WSNs), including sensor deployment mechanisms for road surveillance. Disclosed embodiments are applied to design roadside infrastructure with optimal perception for a given geographic area. The deployment mechanisms account for the presence of static and dynamic obstacles, as well as symmetry aspects of the underlying environment. The deployment mechanisms minimize the number of required sensors to reduce costs and conserve compute and network resources, and extend the sensing capabilities of the sensor infrastructure. Other embodiments are disclosed and/or claimed.
The increasing deployment of IoT devices generates large volumes of data requiring immediate processing. Traditional cloud-based processing, though powerful, introduces significant latency due to physical distance from data sources. Fog computing reduces latency by bringing resources closer to the edge but lacks large-scale computational capacity. Hybrid fog-cloud architectures combine these advantages but depend on efficient scheduling to achieve optimal performance. Existing scheduling algorithms rely on static rules or simplistic heuristics, ignoring real-time data density, network fluctuations, and node availability. This results in poor responsiveness, inefficient resource use, and congestion under high-density data zones. The present invention solves these issues by introducing a hybrid fog-cloud scheduling system that dynamically incorporates data density metrics to allocate tasks intelligently, thereby minimizing latency, optimizing bandwidth, and enhancing Quality of Service (QoS).
SUMMARY OF THE INVENTION
This summary is provided to introduce a selection of concepts, in a simplified form, that are further described in the detailed description of the invention.
This summary is neither intended to identify key or essential inventive concepts of the invention, nor is it intended to determine the scope of the invention.
The invention discloses a hybrid fog-cloud scheduling framework designed to reduce latency in distributed IoT environments. It comprises a scheduler module located at the fog-cloud interface that dynamically assigns computational tasks based on real-time spatial and temporal data density metrics and the latency sensitivity of tasks.
The system introduces a Data Density Metric Calculator to measure data generation intensity across zones, a Task Classifier to categorize tasks as latency-sensitive or latency-tolerant, and a Dynamic Task Allocator that maps tasks to fog or cloud resources using a multi-criteria decision algorithm.
A continuous feedback mechanism monitors latency, task completion time, resource utilization, and network conditions, enabling adaptive refinement of scheduling policies. This ensures system adaptability in highly dynamic IoT environments, where device density and workloads fluctuate rapidly.
The modular architecture allows interoperability with heterogeneous fog and cloud infrastructures and scalability across diverse deployment sizes, from smart buildings to city-wide networks. The invention ensures optimized responsiveness and efficient resource management in applications such as healthcare, smart cities, industrial automation, and autonomous systems.
To further clarify the advantages and features of the present invention, a more particular description of the invention will be rendered by reference to specific embodiments thereof, which are illustrated in the appended drawings. It is appreciated that these drawings depict only typical embodiments of the invention and are therefore not to be considered limiting of its scope. The invention will be described and explained with additional specificity and detail with the accompanying drawings.
BRIEF DESCRIPTION OF THE DRAWINGS
The illustrated embodiments of the subject matter will be understood by reference to the drawings, wherein like parts are designated by like numerals throughout. The following description is intended only by way of example, and simply illustrates certain selected embodiments of devices, systems, and methods that are consistent with the subject matter as claimed herein, wherein:
Fig. 1 is a schematic diagram of the hybrid fog-cloud scheduling system architecture, showing edge devices, fog nodes, cloud servers, and the scheduler module positioned at the fog-cloud interface.
Fig. 2 is a block diagram illustrating the functional workflow of the scheduler module, including the data density metric calculator, task classifier, dynamic task allocator, and feedback loop mechanism for adaptive scheduling.
Fig. 1 illustrates the overall architecture of the hybrid fog-cloud scheduling system. The figure shows edge devices at the bottom layer, including sensors, actuators, and IoT-enabled devices that generate data streams. Above them, fog nodes are depicted as intermediate processing units located close to the edge, providing low-latency computation. At the upper layer, cloud servers are shown as centralized resources offering high-capacity computation and large-scale data storage. Positioned at the fog-cloud interface is the Scheduler Module, comprising subcomponents such as the Data Density Metric Calculator, Task Classifier, Dynamic Task Allocator, Resource Monitor, and Feedback Loop Mechanism. Arrows between layers represent data flow and task scheduling decisions, demonstrating how latency-sensitive tasks are assigned to fog nodes while latency-tolerant tasks are offloaded to the cloud.
Fig. 2 depicts the functional workflow of the scheduling process within the Scheduler Module. It begins with incoming data streams being analyzed by the Data Density Metric Calculator, which computes density values for different zones. These values are passed to the Task Classifier, which categorizes tasks as latency-sensitive or latency-tolerant and assigns a composite priority score. The Dynamic Task Allocator evaluates task priority, fog resource availability, and network conditions to map each task either to fog nodes or cloud servers. A Feedback Loop monitors task performance, including latency, resource utilization, and throughput, and continuously updates the scheduling parameters. This figure emphasizes the stepwise decision-making process that ensures adaptive, real-time scheduling in the hybrid fog-cloud environment.
The figures depict embodiments of the present subject matter for the purposes of illustration only. A person skilled in the art will easily recognize from the following description that alternative embodiments of the structures and methods illustrated herein may be employed without departing from the principles of the disclosure described herein.
DETAILED DESCRIPTION OF THE INVENTION
The detailed description of various exemplary embodiments of the disclosure is described herein with reference to the accompanying drawings. It should be noted that the embodiments are described herein in such detail as to clearly communicate the disclosure. However, the level of detail provided herein is not intended to limit the anticipated variations of embodiments; on the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the scope of the present disclosure as defined by the appended claims.
It is also to be understood that various arrangements may be devised that, although not explicitly described or shown herein, embody the principles of the present disclosure. Moreover, all statements herein reciting principles, aspects, and embodiments of the present disclosure, as well as specific examples, are intended to encompass equivalents thereof.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of example embodiments. As used herein, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises," "comprising," "includes" and/or "including," when used herein, specify the presence of stated features, integers, steps, operations, elements and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components and/or groups thereof.
It should also be noted that in some alternative implementations, the functions/acts noted may occur out of the order noted in the figures. For example, two figures shown in succession may, in fact, be executed concurrently or may sometimes be executed in the reverse order, depending upon the functionality/acts involved.
In addition, the descriptions of "first," "second," "third," and the like in the present invention are used for the purpose of description only, and are not to be construed as indicating or implying their relative importance or implicitly indicating the number of technical features indicated. Thus, features defining "first" and "second" may include at least one of the features, either explicitly or implicitly.
Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which example embodiments belong. It will be further understood that terms, e.g., those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
System Overview
The invention comprises a multi-layered hybrid computing environment integrating edge devices, fog nodes, and cloud servers. Edge devices act as data generators and include various types of sensors, actuators, smart devices, and embedded systems. Fog nodes serve as intermediate processing units that are geographically closer to the edge, offering low-latency computation. Cloud servers are centralized, high-capacity processing units that support deep analytics and large-scale data processing. The intelligent Scheduler Module orchestrates the task assignment across fog and cloud resources, operating as the core control unit of the system.
The Scheduler Module resides at the fog-cloud interface and interacts with all nodes in the network. It comprises several subcomponents, including the Data Density Metric Calculator, Task Classifier, Resource Monitor, and Dynamic Task Allocator. These modules collectively manage the real-time analysis of system data, classification of incoming tasks, monitoring of resource availability, and adaptive task scheduling decisions.
Data Density Metric Calculator
The Data Density Metric Calculator is responsible for computing the density of data generated in different spatial zones within a specified time window. Each zone Z_i is defined either geographically or logically, depending on the application domain. The data density D(Z_i) is calculated using the formula:
D(Z_i) = (∑_{j=1}^{n} d_j) / T
where d_j represents the data packets generated by device j in zone Z_i, and T denotes the observation time window. Zones with higher density values are considered high-priority due to increased data traffic and potential for latency buildup.
The metric supports both spatial and temporal adaptability, enabling the system to detect dynamic hotspots, such as areas with dense vehicular traffic, healthcare emergencies, or industrial anomalies. The real-time computation allows the scheduler to make proactive decisions to prevent congestion and latency spikes.
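As a concrete illustration, the density computation described above can be sketched in a few lines of Python; the zone labels, packet counts, and window length below are illustrative values, not part of the specification:

```python
def data_density(packet_counts, window_seconds):
    """Compute D(Z_i) = (sum of per-device packet counts d_j) / T for each zone.

    packet_counts: mapping of zone id -> list of packet counts d_j, one per device j
    window_seconds: observation time window T
    """
    return {zone: sum(counts) / window_seconds
            for zone, counts in packet_counts.items()}

# Illustrative example: three devices in zone "Z1", two in "Z2", over a 10 s window.
counts = {"Z1": [120, 80, 100], "Z2": [20, 10]}
density = data_density(counts, window_seconds=10.0)
print(density)  # {'Z1': 30.0, 'Z2': 3.0}
```

Zone "Z1" emerges as the hotspot (30 packets/s versus 3), which is exactly the signal the scheduler uses to pre-empt congestion.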
Task Classification and Prioritization
The Task Classifier component categorizes incoming computational tasks into two primary classes: latency-sensitive and latency-tolerant. Latency-sensitive tasks require real-time or near real-time processing and include applications such as emergency alerts, autonomous vehicle coordination, or live video surveillance. Latency-tolerant tasks include data archiving, long-term analytics, or report generation, which can withstand processing delays.
Each task is assigned a priority weight based on its classification and the data density of the originating zone. A composite score S is calculated as:
S = α·P + β·D(Z_i)
where P is the task priority based on its latency requirement, D(Z_i) is the data density, and α, β are tunable parameters representing system policy. The Scheduler uses this score to determine task urgency and allocate computing resources accordingly.
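The composite score maps directly to code; the weight values α = 0.7 and β = 0.3 below are illustrative policy choices, since the specification leaves them tunable:

```python
def composite_score(priority, density, alpha=0.7, beta=0.3):
    """S = alpha * P + beta * D(Z_i), with alpha and beta as tunable policy weights."""
    return alpha * priority + beta * density

# A latency-sensitive task (P = 1.0) from a dense zone (D = 30 packets/s)
# outranks a latency-tolerant task (P = 0.2) from a sparse zone (D = 3).
s_urgent = composite_score(1.0, 30.0)   # 0.7 * 1.0 + 0.3 * 30.0
s_relaxed = composite_score(0.2, 3.0)   # 0.7 * 0.2 + 0.3 * 3.0
```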
Dynamic Task Allocation
The Dynamic Task Allocator maps tasks to fog or cloud nodes using a multi-criteria decision-making algorithm. It considers the composite score of each task, the current load on fog and cloud nodes, network bandwidth availability, and historical performance data. The mapping process may use techniques such as weighted bipartite graph matching, constraint satisfaction programming, or rule-based heuristics.
For high-density zones with critical latency-sensitive tasks, the algorithm favors allocation to nearby fog nodes. When fog resources are overloaded or unavailable, the scheduler may delay low-priority tasks or reroute them to cloud nodes. This decision-making process is continuously refined using real-time metrics from the system monitor.
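One way to realize the rule-based heuristic branch of the allocator is a simple threshold policy, sketched below; the threshold value and the load/capacity model are assumptions for illustration, as the specification also permits richer techniques such as bipartite matching or constraint programming:

```python
def allocate(task_score, fog_load, fog_capacity, score_threshold=5.0):
    """Route a task to fog when it is urgent and fog capacity remains; else to cloud."""
    if task_score >= score_threshold and fog_load < fog_capacity:
        return "fog"
    return "cloud"

allocate(9.7, fog_load=3, fog_capacity=10)    # urgent, fog has room: "fog"
allocate(9.7, fog_load=10, fog_capacity=10)   # fog saturated: reroute to "cloud"
allocate(1.0, fog_load=3, fog_capacity=10)    # latency-tolerant: "cloud"
```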
Feedback Loop and Adaptation
To maintain optimal system performance, the Scheduler includes a Feedback Loop Mechanism. It collects performance metrics such as average task completion time, fog node utilization, cloud bandwidth consumption, and task rejection rates. These metrics are analyzed using lightweight machine learning models or heuristic adjustments to update scheduling parameters.
The adaptive nature of the feedback loop enables the algorithm to respond to environmental changes such as sudden surges in data volume, node failures, or variations in network latency. Over time, the system evolves to offer better responsiveness and efficient resource usage based on contextual learning.
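A minimal heuristic version of this feedback adjustment might tune the fog-admission threshold against a latency target, as sketched below; the step size and the 0.8 slack factor are illustrative assumptions rather than values from the specification:

```python
def adapt_threshold(threshold, observed_latency_ms, target_latency_ms,
                    step=0.5, lo=0.0, hi=20.0):
    """Lower the fog-admission threshold when latency overshoots the target
    (sending more tasks to fog); raise it when latency is comfortably under
    target (sparing fog capacity). The result is clamped to [lo, hi]."""
    if observed_latency_ms > target_latency_ms:
        threshold -= step
    elif observed_latency_ms < 0.8 * target_latency_ms:
        threshold += step
    return min(max(threshold, lo), hi)
```

Run once per monitoring interval, this converges the admission policy toward the latency target without any training overhead, matching the lightweight-adaptation goal stated above.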
Interoperability and Scalability
The system is designed to be interoperable with diverse hardware and software ecosystems. It supports integration with existing IoT platforms, containerized fog computing environments (e.g., using Kubernetes or OpenFog), and popular cloud providers via APIs. The modular architecture allows for easy customization and scalability, supporting deployments ranging from localized smart buildings to large-scale smart cities.
Scalability is ensured through hierarchical scheduling and zone-based partitioning of the network. Each zone may have its own localized scheduler that reports to a central controller, allowing distributed processing of scheduling decisions. This hierarchical approach reduces overhead and supports millions of concurrent edge devices without performance degradation.
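The hierarchical, zone-partitioned design can be sketched as local schedulers that report compact summaries to a central controller, so global decisions scale with the number of zones rather than the number of tasks; the class and field names here are illustrative:

```python
class ZoneScheduler:
    """Localized scheduler for a single zone; holds that zone's task queue."""
    def __init__(self, zone_id):
        self.zone_id = zone_id
        self.pending_scores = []

    def submit(self, score):
        self.pending_scores.append(score)

    def summary(self):
        # Compact report sent upward instead of the full task queue.
        return {"zone": self.zone_id,
                "queued": len(self.pending_scores),
                "peak_score": max(self.pending_scores, default=0.0)}


class CentralController:
    """Aggregates zone summaries and makes global decisions over zones only."""
    def __init__(self, zone_schedulers):
        self.zone_schedulers = zone_schedulers

    def hottest_zone(self):
        return max((z.summary() for z in self.zone_schedulers),
                   key=lambda s: s["peak_score"])["zone"]
```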
In conclusion, the proposed hybrid fog-cloud scheduling algorithm leveraging data density metrics offers a novel and effective solution to address the latency challenges prevalent in large-scale, distributed IoT environments. By dynamically analyzing spatial and temporal data density patterns and classifying tasks based on latency sensitivity, the invention ensures intelligent and adaptive allocation of computational resources between fog and cloud layers. This approach not only enhances real-time responsiveness but also improves overall system efficiency, bandwidth usage, and resource optimization. The modular and interoperable design of the system makes it adaptable to a wide range of application domains, including smart cities, industrial automation, healthcare monitoring, and autonomous transportation systems.
Looking forward, the invention holds significant potential for future enhancements. The integration of deep reinforcement learning or federated learning models could enable even more precise and autonomous scheduling decisions without centralized training. Additionally, the framework can be extended to incorporate energy-aware scheduling, enhancing its utility in energy-constrained environments such as remote sensing or wearable health devices. Integration with blockchain for decentralized trust management and data integrity is another promising direction for future development.
Overall, this invention provides a robust foundation for intelligent, latency-aware, and scalable task scheduling in hybrid fog-cloud infrastructures. Its adaptability, real-time responsiveness, and application-specific customization capabilities position it as a forward-looking solution in the rapidly evolving landscape of edge computing and distributed intelligent systems.
The present invention introduces a novel hybrid fog-cloud scheduling algorithm that uniquely leverages data density metrics to dynamically allocate computational tasks between fog and cloud layers with the specific goal of minimizing end-to-end latency. Unlike conventional scheduling mechanisms that rely primarily on resource availability or static load-balancing techniques, this invention incorporates real-time analysis of data density profiles at the edge to:
1. Predict task execution efficiency across network layers.
2. Adaptively offload tasks to fog or cloud nodes based on data intensity and latency sensitivity.
3. Minimize data transmission overhead by executing high-density, latency-critical tasks closer to the data source (fog), and delegating sparse or non-urgent tasks to cloud nodes.
This dual-optimization approach—data-driven scheduling and adaptive distribution—has not been disclosed in the known prior art, which typically lacks context-aware, density-driven task assignment models in hybrid fog-cloud environments.
The invention provides a hybrid fog-cloud scheduling system for minimizing latency in distributed IoT environments. The system consists of edge devices, fog nodes, and cloud servers. Edge devices generate data, fog nodes provide localized processing with low latency, and cloud servers handle large-scale data analytics.
At the center of the system is the Scheduler Module, which operates at the fog-cloud interface. This module coordinates task distribution based on data density, task priority, and resource availability.
The Data Density Metric Calculator computes the density of data generated in each zone within a specified time window. High-density zones are identified as critical due to their likelihood of congestion and latency spikes. This allows the scheduler to prioritize these zones.
The Task Classifier separates incoming tasks into latency-sensitive and latency-tolerant categories. Latency-sensitive tasks include real-time video surveillance, emergency alerts, and autonomous system coordination. Latency-tolerant tasks include data archival, periodic reporting, and non-critical analytics. Each task is assigned a composite priority score based on both data density and latency requirements.
The Dynamic Task Allocator uses this score, along with network and resource conditions, to determine whether tasks should be executed at fog nodes or offloaded to cloud servers. High-priority tasks from high-density zones are preferentially allocated to fog nodes, while lower-priority tasks are offloaded to cloud servers when fog capacity is constrained.
The Feedback Loop Mechanism ensures continuous system adaptation. By monitoring metrics such as latency, bandwidth utilization, and node load, the scheduler adjusts allocation strategies in real time. Lightweight machine learning or heuristic models may refine scheduling parameters dynamically.
The architecture supports scalability by adopting a hierarchical scheduling structure. Localized schedulers manage individual zones, while a central controller oversees global optimization. This reduces communication overhead and allows deployment in environments with millions of devices.
The modular design ensures interoperability with existing IoT frameworks, fog computing platforms, and cloud infrastructures. The system supports integration via APIs and containerized environments, enabling flexibility across diverse applications.
In one embodiment, the system is deployed in a smart city network where high-density traffic intersections generate large streams of vehicular data. Latency-sensitive tasks such as collision detection are processed at fog nodes, while long-term traffic analytics are offloaded to cloud servers.
In another embodiment, the system is applied to healthcare monitoring where wearable devices generate real-time physiological data. Critical anomalies are processed at fog nodes for immediate alerts, while long-term health records are transmitted to the cloud for analytics.
In yet another embodiment, the system is utilized in industrial automation, where IoT sensors monitor machinery. Fog nodes process critical fault detection tasks, while trend analysis for predictive maintenance is performed in the cloud.
The invention improves end-to-end latency, optimizes resource use, and enhances responsiveness across diverse domains. Its novelty lies in the integration of data density metrics into scheduling, enabling context-aware task allocation.
Best Method of Working
The best method of working involves deploying the scheduler module at the fog-cloud interface in a distributed IoT environment. The Data Density Metric Calculator continuously monitors zones and calculates density values in real time. Tasks are classified based on urgency and density, and allocation decisions are made dynamically using the composite priority score.
Performance metrics are monitored through the feedback loop, which updates scheduling strategies as system conditions evolve. This ensures minimal latency and efficient bandwidth use. In practice, latency-sensitive tasks from high-density zones are executed at fog nodes, while computationally intensive but non-urgent tasks are offloaded to the cloud.
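The best-method pipeline described in the two paragraphs above (density calculation, classification, composite scoring, then fog/cloud allocation) can be chained into one compact end-to-end sketch. The task fields, the 50/50 score weights, and the 0.6 threshold are all illustrative assumptions:

```python
def schedule(tasks, fog_slots, threshold=0.6):
    """End-to-end sketch of the claimed method under illustrative assumptions:
    compute per-zone density, score each task by density and urgency, then
    split tasks between fog (capacity-limited) and cloud."""
    counts = {}
    for t in tasks:
        counts[t["zone"]] = counts.get(t["zone"], 0) + 1
    peak = max(counts.values())
    plan = {"fog": [], "cloud": []}
    for t in tasks:
        score = 0.5 * counts[t["zone"]] / peak + 0.5 * (1.0 if t["urgent"] else 0.0)
        if score >= threshold and len(plan["fog"]) < fog_slots:
            plan["fog"].append(t["name"])
        else:
            plan["cloud"].append(t["name"])
    return plan

# An urgent task from the densest zone lands on fog; archival and reporting
# tasks are offloaded to the cloud, as in the smart-city embodiment.
```

A real deployment would run this loop continuously per time window, with the feedback mechanism adjusting the threshold between windows.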
This approach is most effective in large-scale, latency-critical deployments such as healthcare, autonomous systems, and smart transportation, where responsiveness and efficiency are paramount.

Claims:
1. A hybrid fog-cloud scheduling system for reducing latency in distributed computing environments, comprising:
• a scheduler module at the fog-cloud interface;
• a data density metric calculator configured to compute spatial and temporal data density values;
• a task classifier configured to categorize tasks as latency-sensitive or latency-tolerant;
• a dynamic task allocator configured to distribute tasks between fog nodes and cloud servers based on task priority, resource availability, and data density;
• a feedback loop mechanism configured to refine scheduling parameters using performance metrics; and
• an interoperable modular architecture supporting heterogeneous fog and cloud infrastructures.
2. The system as claimed in claim 1, wherein the data density metric calculator identifies high-density zones by calculating the rate of data packets generated within a fixed observation window.
3. The system as claimed in claim 1, wherein the task classifier assigns a composite score derived from latency requirement and data density for task prioritization.
4. The system as claimed in claim 1, wherein the dynamic task allocator allocates latency-sensitive tasks to fog nodes and latency-tolerant tasks to cloud servers.
5. The system as claimed in claim 1, wherein the feedback loop mechanism employs lightweight machine learning or heuristic models for adaptive scheduling.
6. A method for hybrid fog-cloud scheduling to reduce latency in distributed computing environments, comprising the steps of:
• computing data density metrics for spatial and temporal zones;
• classifying tasks as latency-sensitive or latency-tolerant;
• calculating a composite score based on data density and latency requirements;
• dynamically allocating tasks between fog nodes and cloud servers according to score and resource conditions; and
• refining scheduling strategies using feedback from system performance metrics.
7. The method as claimed in claim 6, wherein high-priority tasks are preferentially executed at fog nodes to minimize latency.
8. The method as claimed in claim 6, wherein low-priority tasks are routed to cloud servers to optimize computational efficiency.
9. The method as claimed in claim 6, wherein the scheduling decisions are adapted in real time based on bandwidth availability, fog load, and node performance.
10. The method as claimed in claim 6, wherein scalability is achieved through hierarchical scheduling with zone-level schedulers reporting to a central controller.

Documents

Application Documents

# Name Date
1 202541090184-STATEMENT OF UNDERTAKING (FORM 3) [22-09-2025(online)].pdf 2025-09-22
2 202541090184-REQUEST FOR EARLY PUBLICATION(FORM-9) [22-09-2025(online)].pdf 2025-09-22
3 202541090184-POWER OF AUTHORITY [22-09-2025(online)].pdf 2025-09-22
4 202541090184-FORM-9 [22-09-2025(online)].pdf 2025-09-22
5 202541090184-FORM FOR SMALL ENTITY(FORM-28) [22-09-2025(online)].pdf 2025-09-22
6 202541090184-FORM 1 [22-09-2025(online)].pdf 2025-09-22
7 202541090184-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [22-09-2025(online)].pdf 2025-09-22
8 202541090184-EVIDENCE FOR REGISTRATION UNDER SSI [22-09-2025(online)].pdf 2025-09-22
9 202541090184-EDUCATIONAL INSTITUTION(S) [22-09-2025(online)].pdf 2025-09-22
10 202541090184-DRAWINGS [22-09-2025(online)].pdf 2025-09-22
11 202541090184-DECLARATION OF INVENTORSHIP (FORM 5) [22-09-2025(online)].pdf 2025-09-22
12 202541090184-COMPLETE SPECIFICATION [22-09-2025(online)].pdf 2025-09-22