
Method And System For Scheduling Execution Of Plurality Of Tasks

Abstract: A system (120) and a method (500) for scheduling execution of a plurality of tasks are disclosed. The system (120) includes a client module (220) configured to receive a schedule related to execution of a plurality of tasks from a server (115). The system (120) includes a scheduling module (225) configured to schedule execution of the plurality of tasks at predefined time intervals as per the schedule. The system (120) includes an application cache module (230) configured to store information related to triggering of the plurality of tasks. The system (120) includes a monitoring module (235) configured to calculate the duration between triggering of the previous task and the current task from the information stored in the application cache module (230). Further, the monitoring module (235) is configured to apply an auto-correction to the schedule upon detecting a delay in the triggering of the previous and current tasks. Ref. Fig. 2


Patent Information

Application #
Filing Date
06 September 2023
Publication Number
11/2025
Publication Type
INA
Invention Field
COMPUTER SCIENCE
Status
Email
Parent Application

Applicants

JIO PLATFORMS LIMITED
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, India

Inventors

1. Aayush Bhatnagar
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
2. Sandeep Bisht
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
3. Ravindra Yadav
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
4. Ezaj Ansari
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
5. Jyothi Durga Prasad Chillapalli
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India

Specification

DESC:
FORM 2
THE PATENTS ACT, 1970
(39 of 1970)
&
THE PATENTS RULES, 2003

COMPLETE SPECIFICATION
(See section 10 and rule 13)
1. TITLE OF THE INVENTION
METHOD AND SYSTEM FOR SCHEDULING EXECUTION OF PLURALITY OF TASKS
2. APPLICANT(S)
NAME NATIONALITY ADDRESS
JIO PLATFORMS LIMITED INDIAN OFFICE-101, SAFFRON, NR. CENTRE POINT, PANCHWATI 5 RASTA, AMBAWADI, AHMEDABAD 380006, GUJARAT, INDIA
3. PREAMBLE TO THE DESCRIPTION

THE FOLLOWING SPECIFICATION PARTICULARLY DESCRIBES THE NATURE OF THIS INVENTION AND THE MANNER IN WHICH IT IS TO BE PERFORMED.

FIELD OF THE INVENTION
[0001] The present invention relates to the field of wireless communication networks, and more particularly to a method and a system for scheduling execution of a plurality of tasks.
BACKGROUND OF THE INVENTION
[0002] A communication network is subjected to a massive exchange of information over a given time frame. Multiple users send information commands to network servers, which process the commands and send appropriate responses back to the users. To perform satisfactorily, server health and Input/Output Operations Per Second (IOPS) are checked regularly, along with other scheduled maintenance. For example, a server health check includes utilizing a scheduling module for handling response timeouts when a request exceeds a set duration, managing data stream exhaustion, dynamically managing Hypertext Transfer Protocol/2.0 (HTTP/2.0) connections with endpoints, and calculating Transactions Per Second (TPS) at the endpoint.
[0003] The scheduling module is also expected to keep the subscription connection open as long as possible, barring errors in the network, software, hardware, etc., and to incrementally process the response. The scheduling module can only parse the response after the connection is closed; until then, the response cannot be used. In some cases, such as under a high traffic surge or a degradation in network bandwidth, the scheduling module starts misfiring the checkup commands. As a result, the scheduling module starts lagging behind the actual time of firing. Misfiring of the scheduling module also leads to application crashes, as the application will not be able to keep up with the incoming traffic rate. Therefore, there is a need to identify the misfiring and lagging errors of the scheduling module.
[0004] Presently, there is no available solution for misfiring and lagging errors. There is a requirement for a dedicated mechanism to identify such a misfiring abnormality in the server/network and to solve the issue efficiently by increasing the number of threads, by increasing the priority of threads, or both.
[0005] Therefore, from the above cases, it becomes necessary to implement a system and method to effectively manage and monitor the firing of the scheduler, detect any possible abnormality in firing, and auto-correct the abnormality, so as to prevent server shutdowns while maintaining server health. However, the currently available solutions do not offer an optimized scheduler monitoring and auto-correcting system and method with a provision to minimize possible server performance degradation.
SUMMARY OF THE INVENTION
[0006] One or more embodiments of the present disclosure provide a method and a system for scheduling execution of a plurality of tasks.
[0007] In one aspect of the present invention, a method for scheduling execution of a plurality of tasks is disclosed. The method includes the step of receiving, by one or more processors, a schedule related to execution of a plurality of tasks from a server. The method includes the step of scheduling, by the one or more processors, execution of the plurality of tasks at predefined time intervals as per the schedule. The method includes the step of storing, by the one or more processors, in an application cache module, information related to triggering of each of the plurality of tasks, the plurality of tasks including a previous task and a current task. The method includes the step of calculating, by the one or more processors, the duration between triggering of the previous task and the current task from the information stored in the application cache module. When the duration between triggering of the previous task and the current task exceeds a predetermined threshold, a delay is detected. The method includes the step of applying, by the one or more processors, an auto-correction to the schedule upon detecting a delay in the triggering of the previous task and the current task.
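The sequence of method steps above can be sketched as follows. This is purely an illustrative aid, not part of the claimed implementation; the function names, the use of a list as a stand-in for the application cache module, and the threshold form are assumptions:

```python
import time

def run_schedule(task, interval_seconds, cycles, cache):
    """Trigger `task` at a predefined interval, recording each trigger
    time in `cache` (a stand-in for the application cache module)."""
    for _ in range(cycles):
        cache.append(time.monotonic())  # store information about this trigger
        task()
        time.sleep(interval_seconds)
    return cache

def delay_detected(cache, scheduled, threshold):
    """Compare the gap between the previous and current trigger times
    against the scheduled interval plus a predetermined threshold."""
    previous, current = cache[-2], cache[-1]
    return (current - previous) > scheduled + threshold

# usage: record three triggers of a trivial (no-op) task
trigger_times = run_schedule(lambda: None, 0.01, cycles=3, cache=[])
```

Upon `delay_detected` returning true, an auto-correction would be applied to the schedule, as described in the embodiments below.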
[0008] In one embodiment, the schedule includes a plurality of predefined time intervals provided by a Service Communication Proxy (SCP) for executing the plurality of tasks.
[0009] In another embodiment, the delay in the triggering of at least one of the previous task and the current task is detected when the duration between the time of trigger of the previous task and the time of trigger of the current task exceeds a scheduled duration by a predetermined threshold.
[0010] In yet another embodiment, data related to a time of trigger of a previous task and a time of trigger of the current task is retrieved from the application cache module.
[0011] In yet another embodiment, upon detecting the delay, a thread count of a task is increased by a first predetermined count and a priority of the task is increased by a first predetermined value, wherein a maximum thread count of the task is thirty and a maximum priority of the task is ten.
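A minimal sketch of this correction rule is given below, using the caps stated in the embodiment (thread count thirty, priority ten). The step sizes of one are assumptions; the specification leaves the "first predetermined count" and "first predetermined value" open:

```python
MAX_THREAD_COUNT = 30  # maximum thread count per the embodiment
MAX_PRIORITY = 10      # maximum priority per the embodiment

def auto_correct(thread_count, priority, thread_step=1, priority_step=1):
    """On delay detection, increase the thread count and priority by
    predetermined steps, never exceeding the stated maxima."""
    return (min(thread_count + thread_step, MAX_THREAD_COUNT),
            min(priority + priority_step, MAX_PRIORITY))
```

For example, `auto_correct(29, 9)` saturates both values at their caps, so repeated delays cannot push the scheduler beyond thirty threads or priority ten.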
[0012] In another aspect of the present invention, a system for scheduling execution of a plurality of tasks is disclosed. The system includes a client module configured to receive a schedule related to execution of a plurality of tasks from a server. The system includes a scheduler for scheduling execution of the plurality of tasks at predefined time intervals as per the schedule. The system includes an application cache module configured to store information related to triggering of each of the plurality of tasks, the plurality of tasks including a previous task and a current task. The system includes a monitoring module configured to calculate, from the information stored in the application cache module, a duration between triggering of the previous task and the current task. When the duration between triggering of the previous task and the current task exceeds the predetermined threshold, a delay is detected. Further, the monitoring module is configured to apply an auto-correction to the schedule upon detecting a delay in the triggering of the previous task and the current task.
[0013] In yet another aspect of the present invention, a non-transitory computer-readable medium having stored thereon computer-readable instructions that, when executed by a processor, configure the processor as follows is disclosed. The processor is configured to receive a schedule related to execution of a plurality of tasks from a server. The processor is configured to schedule execution of the plurality of tasks at predefined time intervals defined in the schedule. The processor is configured to store in an application cache module information related to triggering of each of the plurality of tasks, the plurality of tasks including a previous task and a current task. The processor is configured to calculate the duration between triggering of the previous task and the current task from the information stored in the application cache module. When the duration between triggering of the previous task and the current task exceeds the predetermined threshold, a delay is detected. The processor is configured to apply an auto-correction to the schedule upon detecting a delay in the triggering of the previous task and the current task.
[0014] Other features and aspects of this invention will be apparent from the following description and the accompanying drawings. The features and advantages described in this summary and in the following detailed description are not all-inclusive, and particularly, many additional features and advantages will be apparent to one of ordinary skill in the relevant art, in view of the drawings, specification, and claims hereof. Moreover, it should be noted that the language used in the specification has been principally selected for readability and instructional purposes and may not have been selected to delineate or circumscribe the inventive subject matter, resort to the claims being necessary to determine such inventive subject matter.
BRIEF DESCRIPTION OF THE DRAWINGS
[0015] The accompanying drawings, which are incorporated herein, and constitute a part of this disclosure, illustrate exemplary embodiments of the disclosed methods and systems in which like reference numerals refer to the same parts throughout the different drawings. Components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present disclosure. Some drawings may indicate the components using block diagrams and may not represent the internal circuitry of each component. It will be appreciated by those skilled in the art that disclosure of such drawings includes disclosure of electrical components, electronic components or circuitry commonly used to implement such components.
[0016] FIG. 1 is an exemplary block diagram of an environment for scheduling execution of plurality of tasks, according to one or more embodiments of the present disclosure;
[0017] FIG. 2 is an exemplary block diagram of a system for scheduling execution of the plurality of tasks, according to one or more embodiments of the present disclosure;
[0018] FIG. 3 is a block diagram of an architecture that can be implemented in the system of FIG. 2, according to one or more embodiments of the present disclosure;
[0019] FIG. 4 is an exemplary block diagram of the system for scheduling execution of the plurality of tasks, according to one or more embodiments of the present disclosure;
[0020] FIG. 5 is a flow diagram illustrating a method for scheduling execution of the plurality of tasks, according to one or more embodiments of the present disclosure; and
[0021] FIG. 6 is a flow chart illustrating the method for scheduling execution of the plurality of tasks, according to one or more embodiments of the present disclosure.
[0022] The foregoing shall be more apparent from the following detailed description of the invention.

DETAILED DESCRIPTION OF THE INVENTION
[0023] Some embodiments of the present disclosure, illustrating all its features, will now be discussed in detail. It must also be noted that as used herein and in the appended claims, the singular forms "a", "an" and "the" include plural references unless the context clearly dictates otherwise.
[0024] Various modifications to the embodiment will be readily apparent to those skilled in the art and the generic principles herein may be applied to other embodiments. However, one of ordinary skill in the art will readily recognize that the present disclosure including the definitions listed here below are not intended to be limited to the embodiments illustrated but is to be accorded the widest scope consistent with the principles and features described herein.
[0025] A person of ordinary skill in the art will readily ascertain that the illustrated steps detailed in the figures and here below are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope and spirit of the disclosed embodiments.
[0026] In an embodiment, if a scheduling module is not operating properly, then the Transaction Per Second (TPS) calculation will be erroneous. For example, during one time interval the scheduling module may report a high TPS value, but in the subsequent time interval it may report a low TPS value. In this regard, misfiring of the scheduling module leads to abnormality in the response timeout timer. For example, a request that was supposed to time out after 11 seconds may, due to an abnormality of the scheduling module, have its timeout trigger fired after 12 seconds, leading to request cluttering in a server. The scheduling module is also configured to check the number of connections against each endpoint and, in case there are fewer channels than required, new channels are created by the scheduling module for that endpoint to facilitate communication. A lag in the scheduling module means a delay in the addition of channels for endpoints, which stresses the currently available connections, resulting in increased latency and even request failure.
[0027] To address these problems, in various embodiments of the present invention, a system and a method to schedule execution of the plurality of tasks, to identify if there is some misfiring condition in the scheduling module and to implement auto-correction to avoid the server crash are disclosed.
[0028] Referring to FIG. 1, FIG. 1 illustrates an exemplary block diagram of an environment 100 for scheduling execution of a plurality of tasks, according to one or more embodiments of the present invention. The environment 100 includes a network 105, a User Equipment (UE) 110, a server 115, and a system 120. The UE 110 aids a user in interacting with the system 120 for scheduling execution of the plurality of tasks. In an embodiment, the user is one of, but not limited to, a network operator or a service provider. Scheduling execution of the plurality of tasks refers to the process of organizing, planning, and managing the order and timing in which the plurality of tasks is executed within the system 120. The purpose of scheduling execution of the plurality of tasks is to ensure that the plurality of tasks is completed efficiently, meeting specific requirements such as deadlines, priorities, and resource availability.
[0029] For the purpose of description and explanation, the description will be explained with respect to the UE 110, or to be more specific will be explained with respect to a first UE 110a, a second UE 110b, and a third UE 110c, and should nowhere be construed as limiting the scope of the present disclosure. Each of the UE 110 from the first UE 110a, the second UE 110b, and the third UE 110c is configured to connect to the server 115 via the network 105.
[0030] In an embodiment, each of the first UE 110a, the second UE 110b, and the third UE 110c is one of, but not limited to, any electrical, electronic, electro-mechanical or an equipment and a combination of one or more of the above devices such as smartphones, virtual reality (VR) devices, augmented reality (AR) devices, laptop, a general-purpose computer, desktop, personal digital assistant, tablet computer, mainframe computer, or any other computing device.
[0031] The network 105 includes, by way of example but not limitation, one or more of a wireless network, a wired network, an internet, an intranet, a public network, a private network, a packet-switched network, a circuit-switched network, an ad hoc network, an infrastructure network, a Public-Switched Telephone Network (PSTN), a cable network, a cellular network, a satellite network, a fiber optic network, or some combination thereof. The network 105 may include, but is not limited to, a Third Generation (3G), a Fourth Generation (4G), a Fifth Generation (5G), a Sixth Generation (6G), a New Radio (NR), a Narrow Band Internet of Things (NB-IoT), an Open Radio Access Network (O-RAN), and the like.
[0032] The server 115 may include, by way of example but not limitation, one or more of a standalone server, a server blade, a server rack, a bank of servers, a server farm, hardware supporting a part of a cloud service or system, a home server, hardware running a virtualized server, one or more processors executing code to function as a server, one or more machines performing server-side functionality as described herein, at least a portion of any of the above, or some combination thereof. In an embodiment, the entity may include, but is not limited to, a vendor, a network operator, a company, an organization, a university, a lab facility, a business enterprise, a defense facility, or any other facility that provides content.
[0033] The environment 100 further includes the system 120 communicably coupled to the server 115 and to each of the first UE 110a, the second UE 110b, and the third UE 110c via the network 105. The system 120 is configured for scheduling execution of the plurality of tasks. The system 120 is adapted to be embedded within the server 115 or is embedded as an individual entity, as per multiple embodiments of the present invention.
[0034] Operational and construction features of the system 120 will be explained in detail with respect to the following figures.
[0035] FIG. 2 is an exemplary block diagram of the system 120 for scheduling execution of the plurality of tasks, according to one or more embodiments of the present disclosure.
[0036] The system 120 includes a processor 205, a memory 210, a user interface 215, and a database 240. For the purpose of description and explanation, the description will be explained with respect to one or more processors 205, or to be more specific, with respect to the processor 205, and should nowhere be construed as limiting the scope of the present disclosure. The one or more processors 205, hereinafter referred to as the processor 205, may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, single board computers, and/or any devices that manipulate signals based on operational instructions.
[0037] As per the illustrated embodiment, the processor 205 is configured to fetch and execute computer-readable instructions stored in the memory 210. The memory 210 may be configured to store one or more computer-readable instructions or routines in a non-transitory computer-readable storage medium, which may be fetched and executed to create or share data packets over a network service. The memory 210 may include any non-transitory storage device including, for example, volatile memory such as RAM, or non-volatile memory such as EPROM, flash memory, and the like.
[0038] The user interface 215 includes a variety of interfaces, for example, interfaces for a Graphical User Interface (GUI), a web user interface, a Command Line Interface (CLI), and the like. The user interface 215 facilitates communication of the system 120. In one embodiment, the user interface 215 provides a communication pathway for one or more components of the system 120. Examples of the one or more components include, but are not limited to, the user equipment 110, and the database 240.
[0039] The database 240 is one of, but not limited to, a centralized database, a cloud-based database, a commercial database, an open-source database, a distributed database, an end-user database, a graphical database, a No-Structured Query Language (NoSQL) database, an object-oriented database, a personal database, an in-memory database, a document-based database, a time series database, a wide column database, a key value database, a search database, a cache database, and so forth. The foregoing examples of database 240 types are non-limiting and may not be mutually exclusive; e.g., a database can be both commercial and cloud-based, or both relational and open-source.
[0040] Further, the processor 205, in an embodiment, may be implemented as a combination of hardware and programming (for example, programmable instructions) to implement one or more functionalities of the processor 205. In the examples described herein, such combinations of hardware and programming may be implemented in several different ways. For example, the programming for the processor 205 may be processor-executable instructions stored on a non-transitory machine-readable storage medium and the hardware for processor 205 may comprise a processing resource (for example, one or more processors), to execute such instructions. In the present examples, the memory 210 may store instructions that, when executed by the processing resource, implement the processor 205. In such examples, the system 120 may comprise the memory 210 storing the instructions and the processing resource to execute the instructions, or the memory 210 may be separate but accessible to the system 120 and the processing resource. In other examples, the processor 205 may be implemented by electronic circuitry.
[0041] In order for the system 120 to perform scheduling execution of the plurality of tasks, the processor 205 includes a client module 220, a scheduling module 225, an application cache module 230, and a monitoring module 235 communicably coupled to each other. In an embodiment, operations and functionalities of the client module 220, the scheduling module 225, the application cache module 230, and the monitoring module 235 can be used in combination or interchangeably.
[0042] The client module 220 is configured to receive a schedule related to execution of the plurality of tasks from the server 115. In an embodiment, the schedule includes a plurality of predefined time intervals provided by a Service Communication Proxy (SCP) for executing the plurality of tasks. The plurality of predefined time intervals is specific, pre-scheduled time slots allocated for the execution of the plurality of tasks. The time intervals are predefined based on various factors such as system load, priority of tasks, and service-level agreements (SLAs). The SCP is a critical component in modern communication networks, particularly in cloud-based and service-oriented architectures. The SCP acts as a centralized entity that facilitates and manages communications between various services or network functions. The SCP handles service requests, routing, load balancing, and ensures that communications are secure and efficient. In the scheduling of the plurality of tasks, the SCP is responsible for defining the time intervals during which tasks are executed. The SCP orchestrates the timing and sequence of tasks across different services to optimize overall system performance.
[0043] As per one embodiment, the task scheduling is queued and scheduled according to the predefined time intervals set by the SCP. Each task includes an assigned start and end time within the predefined time intervals, ensuring that they are executed in the correct order and within the allotted time frame. In an embodiment, the SCP assigns different priorities to tasks, with higher-priority tasks being scheduled at more favorable time intervals, which ensures that critical operations are performed, maintaining the overall efficiency and reliability of the system 120.
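The schedule described above, with SCP-defined intervals and task priorities, might be represented as follows. This sketch is illustrative only; the task names, interval values, and the choice of a dataclass are assumptions, not part of the specification:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ScheduledTask:
    name: str
    interval_seconds: float  # predefined time interval provided by the SCP
    priority: int            # higher value = more favorable scheduling

# hypothetical schedule as it might be received from the server
schedule = [
    ScheduledTask("health_check", 5.0, priority=8),
    ScheduledTask("tps_calculation", 1.0, priority=10),
    ScheduledTask("connection_audit", 30.0, priority=5),
]

# higher-priority tasks are ordered first, so critical operations run first
ordered = sorted(schedule, key=lambda t: t.priority, reverse=True)
```

Under this representation, the `tps_calculation` task would be scheduled ahead of the lower-priority `connection_audit` task.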
[0044] The scheduling module 225 is configured for scheduling execution of the plurality of tasks at predefined time intervals as per the schedule. The scheduling module 225 is a dedicated component within the system 120, specifically designed to handle the scheduling of the plurality of tasks. The plurality of tasks refers to multiple tasks that need to be executed within the system 120. The plurality of tasks includes, but is not limited to, processing data, handling communications, and managing resources. The scheduling module 225 ensures that each task is started and completed as per the schedule.
[0045] The application cache module 230 is configured to store information related to triggering of each of the plurality of tasks. In an embodiment, the triggering of each of the plurality of tasks defines a previous task and a current task. The term triggering refers to the events or conditions that cause the plurality of tasks to be initiated or started. The triggering process involves various triggers, such as a time-based trigger (e.g., a task starts at a specific time), an event-based trigger (e.g., a task starts when a specific event occurs), or a condition-based trigger (e.g., a task starts when certain conditions are met). The information stored in the application cache module 230 includes details about what triggered each task, such as the time of triggering and the event or condition that led to the task being initiated. The stored information can be crucial for debugging, auditing, or optimizing the task execution process.
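One possible shape for such a cache, keeping the (previous, current) trigger record per task, is sketched below. The class and field names are illustrative assumptions; the specification does not prescribe a data layout:

```python
import time

class ApplicationCacheSketch:
    """Illustrative cache keeping the previous and current trigger
    records for each task, as a stand-in for module 230."""

    def __init__(self):
        self._records = {}

    def record_trigger(self, task_id, trigger_type, trigger_time=None):
        entry = {
            "time": trigger_time if trigger_time is not None else time.monotonic(),
            "type": trigger_type,  # "time", "event", or "condition"
        }
        _, current = self._records.get(task_id, (None, None))
        # the formerly current trigger becomes the previous one
        self._records[task_id] = (current, entry)

    def previous_and_current(self, task_id):
        return self._records[task_id]

cache = ApplicationCacheSketch()
cache.record_trigger("health_check", "time", trigger_time=10.0)
cache.record_trigger("health_check", "time", trigger_time=11.2)
previous, current = cache.previous_and_current("health_check")
```

Storing the trigger type alongside the time is what would make the cached records usable for the debugging and auditing purposes mentioned above.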
[0046] As per one embodiment, the triggering of the each of the plurality of tasks defines the previous task and the current task. The previous task refers to the task that has already been executed or is in the process of being completed based on the schedule. Information about the triggering of the previous task is stored in the application cache module 230 for reference, which is useful for understanding the sequence of operations or for ensuring continuity in task execution. The current task refers to the task that is currently being executed or is about to be executed based on the schedule. Information about triggering the current task is also stored, allowing the system 120 to keep track of ongoing operations and to ensure that tasks are executed as per the schedule. If the previous task failed, the system 120 uses the cached trigger information to determine why it failed and adjust the current task accordingly.
[0047] As per the above embodiment, the delay in the triggering of at least one of the previous task and the current task is detected when the duration between the time of trigger of the previous task and the time of trigger of the current task exceeds a scheduled duration by a predetermined threshold value. The scheduled duration is the expected or planned time interval between the triggering of the previous task and the triggering of the current task. The system 120 operates based on the scheduled duration, which dictates how much time should ideally pass between the two tasks. In an embodiment, the predetermined threshold is a specific value that defines the allowable variation or margin of error in the timing between the two tasks. In an embodiment, the predetermined threshold value is five percent.
[0048] The time of trigger refers to the exact moment when each task (previous and current) is initiated or starts. The system 120 records the time of trigger for both the previous and current tasks. The system 120 compares the time difference between the trigger of the previous and current tasks against the scheduled duration. If the time difference exceeds the scheduled duration by more than the predetermined threshold value, it indicates the delay. The information related to the time of trigger of the previous task and the time of trigger of the current task is retrieved from the application cache module 230.
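This comparison can be sketched as a single predicate, using the five percent threshold from the embodiment; the function name and the percentage parameterization are assumptions for illustration:

```python
def delay_detected(prev_trigger, curr_trigger, scheduled_duration, threshold_pct=5.0):
    """Flag a delay when the observed gap between trigger times exceeds
    the scheduled duration by more than the threshold percentage."""
    observed = curr_trigger - prev_trigger
    return observed > scheduled_duration * (1 + threshold_pct / 100.0)
```

For instance, with a scheduled duration of 1.0 second, an observed gap of 1.2 seconds (20 percent over) is flagged as a delay, while a gap of 1.03 seconds falls within the five percent margin and is not.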
[0049] The monitoring module 235 is configured to calculate a duration between triggering of the previous task and the current task from the information stored in the application cache module 230. The monitoring module 235 is further configured to increase a thread count by a first predetermined count and a priority of the task by a first predetermined value when the duration exceeds the scheduled duration by more than the predetermined threshold value. In an embodiment, the maximum thread count is thirty and the maximum priority is ten. The monitoring module 235 is configured to apply an auto-correction to the schedule upon detecting the delay in the triggering of the previous task and the current task. The monitoring module 235 continuously monitors the system's operations, particularly the timing of task triggers, to ensure they align with the predefined schedule.
[0050] Further, the monitoring module 235 is configured to detect any discrepancies or delays in the timing between the previous and current tasks and then apply automatic corrections to the schedule to mitigate the impact of these delays. Upon detecting the delay, the monitoring module 235 adjusts the schedule to compensate for the detected delay. The auto-corrections ensure that the system can recover from timing discrepancies without requiring manual intervention. The auto-correction involves various actions, such as rescheduling tasks, skipping or merging tasks, and resource allocation. In an exemplary embodiment, the duration between triggering of the previous task and the current task is 1.2 sec against a scheduled duration of 1 sec; in this regard, the deviation is about 20 percent. If the deviation is within the predetermined threshold value, then the plurality of tasks is executed normally. If the deviation exceeds the predetermined threshold value, then the auto-correction process is executed. The deviation of the plurality of tasks is reduced by increasing the scheduler thread count. In the current scenario, the deviation of the plurality of tasks is about 20 percent; after increasing the scheduler thread count, the deviation is reduced to about 15 percent. The same process is repeated until the deviation of the plurality of tasks reaches about 5 percent. Once the deviation reaches about 5 percent, the plurality of tasks is executed normally.
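The worked example above can be sketched as a loop. The assumption that each added scheduler thread reduces the deviation by about five percentage points is taken directly from the example's 20% to 15% progression and is not a general rule; the function name and starting thread count are likewise illustrative:

```python
def correct_until_nominal(deviation_pct, thread_count,
                          target_pct=5.0, reduction_per_thread=5.0,
                          max_threads=30):
    """Increase the scheduler thread count until the deviation falls to
    the target, mirroring the 20% -> 15% -> 10% -> 5% progression."""
    while deviation_pct > target_pct and thread_count < max_threads:
        thread_count += 1
        deviation_pct -= reduction_per_thread
    return deviation_pct, thread_count

deviation, threads = correct_until_nominal(deviation_pct=20.0, thread_count=10)
```

Starting from a 20 percent deviation, three thread-count increases bring the deviation down to the 5 percent target, after which tasks execute normally.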
[0051] By auto-correcting the schedule of the execution of the plurality of tasks, the system 120 is able to, advantageously, prevent the dynamic storage of the server 115 from cluttering and prevent lagging of scheduled tasks. Further, the system reduces the amount of redundant/duplicate data stored for similar events, resulting in efficient IOPS (Input/Output Operations Per Second, a measurement unit for storage system performance based on drive speed and workload type), realizes efficient management of system memory by executing the required jobs in a timely manner, and improves the processing speed of the system 120.
[0052] FIG. 3 is a block diagram of an architecture 300 that can be implemented in the system of FIG.2, according to one or more embodiments of the present disclosure.
[0053] The architecture 300 of the system 120 includes the client module 220, the scheduling module 225 for scheduling the plurality of tasks at the predefined time intervals, the application cache module 230 for storing information about the scheduled jobs/tasks, such as the time of trigger of the previous task, and the monitoring module 235 for monitoring the scheduler as per the information from the application cache module 230.
[0054] The client module 220 is configured to interact with the server 115 and a network layer to gather data related to any received requests, as well as related information such as the processing status of the requests, any errors that occurred during processing, and the basic health of the server 115. The scheduling module 225 is configured for scheduling the predefined tasks at regular intervals, which can be configured by the SCP. In an embodiment, the scheduling module 225 is configured to perform the different types of jobs/tasks that it needs to schedule at a desired rate. All the jobs/tasks, their scheduling intervals, and the relevant information are shared and stored in the application cache module 230.
[0055] The application cache module 230 is configured to keep a record of each trigger/firing performed by the scheduler, such as the “last trigger time” and the “current trigger time”. The monitoring module 235 is configured to monitor the performance of the scheduler for any abnormality and to resolve the issue efficiently by increasing the number of scheduler threads, by increasing the priority of the scheduler threads, or both. In preferred embodiments, the monitoring module 235 is a sub-module of the scheduling module 225. The monitoring module 235 is configured to collect from the application cache module 230 the information related to the last trigger time and the current trigger time.
[0056] Further, the monitoring module 235 is configured to calculate the duration between the last trigger time and the current trigger time. In an example, if the last trigger time is 12:00 PM and the current trigger time is 12:05 PM, the duration is 5 minutes. The last trigger time and the current trigger time can be used to assess whether the frequency of events is within expected ranges, or to trigger certain actions if the duration exceeds predefined thresholds. From the difference between the past and present data, the monitoring module 235 determines at what time the scheduling module 225 should be firing the scheduled jobs/tasks and whether there is any delay in doing so. If there is any delay/lag, the optimal auto-correction is applied to minimize the lagging.
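The duration calculation above can be sketched with the standard java.time API; the in-memory map standing in for the application cache module 230 and the cache keys are illustrative assumptions.

```java
import java.time.Duration;
import java.time.LocalTime;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Minimal sketch of the monitoring module's duration calculation, using
// trigger times recorded in an in-memory cache. The map and its keys
// ("lastTriggerTime", "currentTriggerTime") stand in for the application
// cache module and are assumptions for illustration.
public class TriggerDurationMonitor {

    static final Map<String, LocalTime> cache = new ConcurrentHashMap<>();

    /** Minutes elapsed between the last trigger and the current trigger. */
    public static long durationMinutes() {
        return Duration.between(cache.get("lastTriggerTime"),
                                cache.get("currentTriggerTime")).toMinutes();
    }

    public static void main(String[] args) {
        cache.put("lastTriggerTime", LocalTime.of(12, 0));    // 12:00 PM
        cache.put("currentTriggerTime", LocalTime.of(12, 5)); // 12:05 PM
        System.out.println(durationMinutes() + " minutes");   // 5, as in the example
    }
}
```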
[0057] In various embodiments, the system 120 having the scheduling module 225 maintains the current trigger time and the last trigger time. Using these, the actual duration between scheduled job executions is calculated and compared with the predefined time interval set by the SCP. If the deviation is more than the predetermined threshold value, the scheduling module 225 increases the thread count by the first predetermined count and also increases the priority of the scheduler thread by about 1, up to a maximum of about 10. In the next iteration, the scheduler of the client module 220 again checks the last trigger time and the current trigger time, and the duration is calculated. If the deviation is within the predetermined threshold value, it is considered normal; otherwise, the thread count is again increased by 4 (up to a maximum of 30) and the priority of the scheduler thread is again increased by about 1, up to a maximum of about 10. The scheduler of the present system 120 is configured to interact with the SCP, which has the provision to set the predefined time interval based on past and present performance and is also capable of monitoring the performance of the scheduling module 225 to determine if there is any delay.
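A sketch of the clamped correction step, assuming the increments and caps stated above (thread count +4 up to 30, priority +1 up to 10, the latter coinciding with Java's Thread.MAX_PRIORITY); class and method names are illustrative.

```java
// Illustrative sketch of one auto-correction iteration: raise the thread
// count by the first predetermined count (assumed 4, capped at 30) and the
// scheduler thread priority by 1 (capped at 10).
public class AutoCorrectionStep {

    static final int THREAD_INCREMENT = 4;
    static final int MAX_THREAD_COUNT = 30;
    static final int MAX_PRIORITY = 10; // same cap as java.lang.Thread.MAX_PRIORITY

    public static int nextThreadCount(int current) {
        return Math.min(current + THREAD_INCREMENT, MAX_THREAD_COUNT);
    }

    public static int nextPriority(int current) {
        return Math.min(current + 1, MAX_PRIORITY);
    }

    /** True when the actual interval deviates from the scheduled one by more than the threshold. */
    public static boolean needsCorrection(double actualSec, double scheduledSec,
                                          double thresholdPercent) {
        return Math.abs(actualSec - scheduledSec) / scheduledSec * 100.0 > thresholdPercent;
    }

    public static void main(String[] args) {
        int threads = 28, priority = 9;
        if (needsCorrection(1.2, 1.0, 5.0)) {   // 20% deviation exceeds the 5% threshold
            threads = nextThreadCount(threads); // clamped at the maximum of 30
            priority = nextPriority(priority);  // clamped at the maximum of 10
        }
        System.out.println(threads + " threads, priority " + priority);
    }
}
```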
[0058] FIG. 4 is an exemplary block diagram of the system 120 for scheduling execution of the plurality of tasks, according to one or more embodiments of the present disclosure.
[0059] The system 120 includes a Virtual Machine (VM) 405, an application cache module 230, a protocol stack module 415, the monitoring module 420, and a network layer 425.
[0060] The application cache module 230 and the protocol stack module 415 run on the VM 405. The VM 405 is a crucial component of the Java runtime environment that enables at least one of, but not limited to, Java applications to run on any device or operating system without modification. The VM 405 provides the environment necessary for at least one of, but not limited to, the Java applications to execute by interpreting or compiling Java bytecode into machine code. The application cache module 230 and the protocol stack module 415 incorporate Java-based applications executable by the processor 205 with the memory.
[0061] The application cache module 230 is configured to interact with the network 105 to communicate from one node to another node via Hypertext Transfer Protocol 2.0 (HTTP 2.0) utilizing the protocol stack module 415. The application cache module 230 is responsible for handling the main operations of the application, such as processing the plurality of tasks, handling user requests, and communicating with other modules or systems. It is to be noted that the usage of the HTTP 2.0 protocol for transmitting hypermedia data, such as documents, should not be construed as limiting the scope of the present disclosure, as the system is flexible to adopt other similar protocols for data transmission. The HTTP 2.0 protocol is used in the present invention to execute the plurality of tasks, advantageously ensuring efficient management of system memory by executing the required jobs in a timely manner.
[0062] The protocol stack module 415 is configured to interact with the network 105 to communicate from one node to another node via HTTP 2.0. The protocol stack module 415 is configured to provide the necessary protocols and communication mechanisms required for network operations. Specifically, the protocol stack module 415 implements the HTTP 2.0 protocol, which is an advanced version of the HTTP protocol designed to improve performance and efficiency by supporting features such as multiplexing, header compression, and server push. The protocol stack module 415 is configured to provide abstracted Application Programming Interfaces (APIs) for further development of the application around a plurality of components. In an embodiment, the plurality of components includes, but is not limited to, connection management, log management, transport messages, overload protection, rate limit protection, and the like.
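For illustration, HTTP/2 communication of the kind described can be configured with the standard java.net.http client available since Java 11; the placeholder URI and class name are assumptions, and the disclosed protocol stack module is not limited to this particular API.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;

// Sketch of configuring node-to-node communication that prefers HTTP/2
// multiplexed streams, using the standard java.net.http client (Java 11+).
public class Http2Sketch {

    /** Builds a client that prefers HTTP/2 and falls back to HTTP/1.1 if needed. */
    public static HttpClient newHttp2Client() {
        return HttpClient.newBuilder()
                .version(HttpClient.Version.HTTP_2)
                .build();
    }

    public static void main(String[] args) {
        HttpClient client = newHttp2Client();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://example.com/")) // placeholder peer node
                .GET()
                .build();
        // client.send(request, HttpResponse.BodyHandlers.ofString()) would
        // transmit the request; the negotiated version drops to HTTP/1.1
        // when the peer does not support HTTP/2.
        System.out.println(client.version() + " " + request.method());
    }
}
```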
[0063] The system 120 is configured to schedule execution of the plurality of tasks from users and to send alerts via the user interface 215. The user interface 215 includes a variety of interfaces, for example, interfaces for a Graphical User Interface (GUI), a web user interface, a Command Line Interface (CLI), and the like. The user interface 215 facilitates communication of the system 120. In one embodiment, the user interface 215 provides a communication pathway for one or more components of the system 120.
[0064] The application cache module 230 is configured to interact with the server 115 and the network layer 425 to gather the data related to any received requests from the UE 110, as well as related information such as the processing status of the plurality of tasks, any errors that occurred during processing, and the basic health of the server 115.
[0065] The protocol stack module 415 is configured to manage the resources related to the server 115. The protocol stack module 415 includes the monitoring module 420, which is configured to calculate the duration between triggering of the previous task and the current task from the information stored in the application cache module 230. Further, the monitoring module 420 is configured to apply the auto-correction to the schedule upon detecting the delay in the triggering of the previous task and the current task.
[0066] FIG. 5 is a flow diagram illustrating a method 500 for scheduling execution of the plurality of tasks, according to one or more embodiments of the present disclosure. For the purpose of description, the method 500 is described with the embodiments as illustrated in FIG. 2 and should nowhere be construed as limiting the scope of the present disclosure.
[0067] At step 505, the method 500 includes the step of receiving the schedule related to execution of the plurality of tasks from the server 115 by the client module 220. In an embodiment, the schedule includes a plurality of predefined time intervals provided by the Service Communication Proxy (SCP) for executing the plurality of tasks. The plurality of predefined time intervals is specific, pre-scheduled time slots allocated for the execution of the plurality of tasks. The time intervals are predefined based on various factors such as system load, priority of tasks, and service-level agreements (SLAs).
[0068] At step 510, the method 500 includes the step of scheduling execution of the plurality of tasks at predefined time intervals as per the schedule by the scheduling module 225. The plurality of tasks refers to multiple tasks that need to be executed within the system 120. The plurality of tasks includes, but is not limited to, processing data, handling communications, and managing resources. The scheduling module 225 ensures that each task is started and completed as per the schedule.
[0069] At step 515, the method 500 includes the step of storing the information related to triggering of each of the plurality of tasks by the application cache module 230. In an embodiment, the plurality of tasks includes the previous task and the current task. The information stored in the application cache module 230 includes details about what triggered each task, such as the time of triggering and the event or condition that led to the task being initiated. The stored information can be crucial for debugging, auditing, or optimizing the task execution process.
[0070] At step 520, the method 500 includes the step of calculating the duration between triggering of the previous task and the current task from the information stored in the application cache module 230 by the monitoring module 235. In an embodiment, the monitoring module 235 is further configured to increase the thread count by the first predetermined count and the priority of the scheduler by the first predetermined value when the duration deviates from the scheduled duration by more than the predetermined threshold. The maximum thread count is thirty and the maximum priority is ten.
[0071] At step 525, the method 500 includes the step of applying the auto-correction to the schedule upon detecting the delay in the triggering of the previous task and the current task by the monitoring module 235. The monitoring module 235 continuously monitors the system's operations, particularly the timing of task triggers, to ensure they align with the predefined schedule.
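As a minimal end-to-end sketch, steps 505 to 525 can be combined as follows; the class and method names, the in-memory deque standing in for the application cache module, and the millisecond figures are illustrative assumptions, not the claimed implementation.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Sketch of the method flow: trigger times are stored (step 515), the
// duration between the previous and current trigger is calculated
// (step 520), and auto-correction is applied when a delay is detected
// (step 525).
public class TaskScheduleMonitor {

    static final Deque<Long> triggerTimesMs = new ArrayDeque<>(); // application-cache stand-in
    static int threadCount = 4; // assumed starting scheduler thread count

    /** Step 515: record each trigger of a scheduled task, keeping previous + current. */
    public static void recordTrigger(long timestampMs) {
        triggerTimesMs.addLast(timestampMs);
        if (triggerTimesMs.size() > 2) triggerTimesMs.removeFirst();
    }

    /** Step 520: duration between the previous and current triggers, in ms. */
    public static long triggerGapMs() {
        return triggerTimesMs.peekLast() - triggerTimesMs.peekFirst();
    }

    /** Step 525: detect a delay against the scheduled interval and auto-correct. */
    public static boolean autoCorrect(long scheduledIntervalMs, double thresholdPercent) {
        double deviation = Math.abs(triggerGapMs() - scheduledIntervalMs)
                           / (double) scheduledIntervalMs * 100.0;
        if (deviation > thresholdPercent) {
            threadCount = Math.min(threadCount + 4, 30); // first predetermined count, max 30
            return true;                                 // correction applied
        }
        return false;
    }

    public static void main(String[] args) {
        recordTrigger(0);
        recordTrigger(1200); // 1.2 s gap against a 1 s schedule
        System.out.println("corrected: " + autoCorrect(1000, 5.0)
                           + ", threads: " + threadCount);
    }
}
```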
[0072] FIG. 6 is a flow chart illustrating the method for scheduling execution of the plurality of tasks, according to one or more embodiments of the present disclosure.
[0073] At step 605, the scheduler initiates computing and comparing the data related to the time of trigger of the previous task and the time of trigger of the current task against the scheduled duration for the request timeout job.
[0074] At step 610, the method includes the step of determining whether the data related to the time of trigger of the previous task and the time of trigger of the current task is within the predetermined threshold. In an embodiment, the predetermined threshold is five percent.
[0075] At step 615a, if the time of trigger of the previous task and the time of trigger of the current task are within the predetermined threshold, the time of trigger of the previous task and the time of trigger of the current task are computed and compared with the scheduled duration for the connection management job. At step 615b, if the time of trigger of the previous task and the time of trigger of the current task are not within the predetermined threshold, the scheduler increases the thread count by the first predetermined count and the priority of the scheduler by one when the duration deviates from the scheduled duration by more than the predetermined threshold. In an embodiment, the maximum thread count is thirty and the maximum priority is ten.
[0076] At step 620, upon computing and comparing the time of trigger of the previous task and the time of trigger of the current task with the scheduled duration for the connection management job, the determining step 610 is initiated again to check whether the data related to the time of trigger of the previous task and the time of trigger of the current task is within the predetermined threshold.
[0077] At step 625a, if the time of trigger of the previous task and the time of trigger of the current task are within the predetermined threshold, the time of trigger of the previous task and the time of trigger of the current task are computed and compared with the scheduled duration for the stream exhaust job. At step 625b, if the time of trigger of the previous task and the time of trigger of the current task are not within the predetermined threshold, the scheduler increases the thread count by the first predetermined count and the priority of the scheduler by one when the duration deviates from the scheduled duration by more than the predetermined threshold.
[0078] At step 630, upon computing and comparing the time of trigger of the previous task and the time of trigger of the current task with the scheduled duration for the stream exhaust job, the determining step 610 is initiated again to check whether the data related to the time of trigger of the previous task and the time of trigger of the current task is within the predetermined threshold.
[0079] At step 630a, if the time of trigger of the previous task and the time of trigger of the current task are within the predetermined threshold, the time of trigger of the previous task and the time of trigger of the current task are computed and compared with the scheduled duration for the idle connection job. At step 630b, if the time of trigger of the previous task and the time of trigger of the current task are not within the predetermined threshold, the scheduler increases the thread count by the first predetermined count and the priority of the scheduler by one when the duration deviates from the scheduled duration by more than the predetermined threshold.
[0080] At step 635, upon computing and comparing the time of trigger of the previous task and the time of trigger of the current task with the scheduled duration for the idle connection job, the determining step 610 is initiated again to check whether the data related to the time of trigger of the previous task and the time of trigger of the current task is within the predetermined threshold.
[0081] At step 635a, if the time of trigger of the previous task and the time of trigger of the current task are within the predetermined threshold, the scheduler stops the process. At step 635b, if the time of trigger of the previous task and the time of trigger of the current task are not within the predetermined threshold, the scheduler increases the thread count by the first predetermined count and the priority of the scheduler by one when the duration deviates from the scheduled duration by more than the predetermined threshold.
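The sequential per-job checks of FIG. 6 can be sketched as follows; the job names come from the description above, while the class name, the observed gap values, and the assumed starting thread count and priority are illustrative assumptions rather than the claimed implementation.

```java
// Sketch of the FIG. 6 flow: each maintenance job's trigger gap is checked
// against the threshold in sequence; any breach raises the thread count by
// the first predetermined count (assumed 4, max 30) and the priority by 1
// (max 10).
public class SequentialJobCheck {

    static final String[] JOBS = {
        "request timeout", "connection management", "stream exhaust", "idle connection"
    };

    static int threadCount = 4; // assumed starting values
    static int priority = 5;

    /** Returns how many jobs breached the threshold and were corrected. */
    public static int run(long[] observedGapsMs, long scheduledMs, double thresholdPercent) {
        int corrections = 0;
        for (int i = 0; i < JOBS.length; i++) {
            double deviation = Math.abs(observedGapsMs[i] - scheduledMs)
                               / (double) scheduledMs * 100.0;
            if (deviation > thresholdPercent) {
                threadCount = Math.min(threadCount + 4, 30);
                priority = Math.min(priority + 1, 10);
                corrections++;
            }
        }
        return corrections;
    }

    public static void main(String[] args) {
        // two jobs on schedule, two lagging by 20% against a 1 s interval
        long[] gaps = {1000, 1200, 1000, 1200};
        int n = run(gaps, 1000, 5.0);
        System.out.println(n + " corrections, " + threadCount
                           + " threads, priority " + priority);
    }
}
```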
[0082] The present invention further discloses a non-transitory computer-readable medium having stored thereon computer-readable instructions. The computer-readable instructions are executed by the processor 205. The processor 205 is configured to receive a schedule related to execution of a plurality of tasks from a server 115. The processor 205 is configured to schedule execution of the plurality of tasks at predefined time intervals defined in the schedule. The processor 205 is configured to store in the application cache module 230 information related to triggering of each of the plurality of tasks, wherein the triggering of each of the plurality of tasks defines a previous task and a current task. The processor 205 is configured to calculate the duration between triggering of the previous task and the current task from the information stored in the application cache module 230. When the duration between triggering of the previous task and the current task exceeds a predetermined threshold, a delay is detected. The processor 205 is configured to apply an auto-correction to the schedule upon detecting a delay in the triggering of the previous task and the current task.
[0083] A person of ordinary skill in the art will readily ascertain that the illustrated embodiments and steps in the description and drawings (FIG. 1-6) are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope and spirit of the disclosed embodiments.
[0084] The present disclosure provides a technical advancement by applying the auto-correction to the schedule upon detecting the delay in the triggering of the previous task and the current task. The system and the method schedule execution of the plurality of tasks, identify whether there is a misfiring condition in the scheduling module, and implement auto-correction to avoid a server crash when the scheduling module is not able to process at the required request rate.
[0085] The present disclosure provides the advantages of preventing the dynamic storage of the server 115 from cluttering and preventing lagging of scheduled tasks by auto-correction. Further, the system reduces the amount of redundant/duplicate data stored for similar events, resulting in efficient IOPS (input/output operations per second, a measurement unit for storage system performance based on drive speed and workload type), and realizes efficient management of system memory by executing the required jobs in a timely manner.
[0086] The present invention offers multiple advantages over the prior art, and the above listed are a few examples to emphasize some of the advantageous features. The listed advantages are to be read in a non-limiting manner.

REFERENCE NUMERALS

[0087] Environment - 100
[0088] Network - 105
[0089] User equipment - 110
[0090] Server - 115
[0091] System - 120
[0092] Processor - 205
[0093] Memory - 210
[0094] User interface - 215
[0095] Client module - 220
[0096] Scheduling module - 225
[0097] Application cache module - 230
[0098] Monitoring module - 235
[0099] Database - 240
[00100] Virtual machine - 405
[00101] Protocol stack module - 415
[00102] Monitoring module - 420
[00103] Network layer - 425

CLAIMS
We Claim:
1. A method (500) of scheduling execution of a plurality of tasks, the method (500) comprising the steps of:
receiving, by one or more processors (205), a schedule related to execution of a plurality of tasks from a server (115);
scheduling, by the one or more processors (205), execution of the plurality of tasks at predefined time intervals defined in the schedule;
storing, by the one or more processors (205), information related to triggering of each of the plurality of tasks in an application cache module (230), wherein the triggering of each of the plurality of tasks defines a previous task and a current task;
calculating, by the one or more processors (205), from the information stored in the application cache module (230), a duration between triggering of the previous task and the current task, wherein a delay is detected when the duration between triggering of the previous task and the current task exceeds a predetermined threshold; and
applying, by the one or more processors (205), an auto-correction to the schedule upon detecting the delay in the triggering of the previous task and the current task.

2. The method (500) as claimed in claim 1, wherein the schedule includes a plurality of predefined time intervals provided by a Service Communication Proxy (SCP) for executing the plurality of tasks.

3. The method (500) as claimed in claim 1, wherein the delay in the triggering of the at least one of the previous task and the current task is detected when the duration between the time of trigger of the previous task and the time of trigger of the current task exceeds a scheduled duration by a predetermined threshold value.

4. The method (500) as claimed in claim 3, wherein data related to a time of trigger of a previous task and a time of trigger of the current task is retrieved from the application cache module (230).

5. The method (500) as claimed in claim 1, wherein upon detecting the delay, a thread count of a task is increased by a first predetermined count and a priority of the task is increased by a first predetermined value, and wherein a maximum thread count of the task is thirty and a maximum priority of the task is ten.

6. A system (120) for scheduling execution of plurality of tasks, the system (120) comprising:
a client module (220) configured to receive a schedule related to execution of a plurality of tasks from a server (115);
a scheduling module (225) for scheduling execution of the plurality of tasks at predefined time intervals defined in the schedule;
an application cache module (230) configured to store information related to triggering of each of the plurality of tasks, wherein the triggering of each of the plurality of tasks defines a previous task and a current task; and
a monitoring module (235) configured to:
calculate, from the information stored in the application cache module (230), a duration between triggering of the previous task and the current task, wherein a delay is detected when the duration between triggering of the previous task and the current task exceeds a predetermined threshold; and
apply an auto-correction to the schedule upon detecting the delay in the triggering of the previous task and the current task.

7. The system (120) as claimed in claim 6, wherein the schedule includes a plurality of predefined time intervals provided by a Service Communication Proxy (SCP) for executing the plurality of tasks.

8. The system (120) as claimed in claim 6, wherein the delay in the triggering of the at least one of the previous task and the current task is detected when the duration between the time of trigger of the previous task and the time of trigger of a current task exceeds a scheduled duration by a predetermined threshold value.

9. The system (120) as claimed in claim 8, wherein data related to a time of trigger of a previous task and a time of trigger of the current task is retrieved from the application cache module (230).

10. The system (120) as claimed in claim 6, wherein the monitoring module (235) is further configured to:
increase a thread count of the task by a first predetermined count and a priority of the task by a first predetermined value when the duration deviates from the scheduled duration by more than the predetermined threshold value, and wherein a maximum thread count is thirty and a maximum priority of the task is ten.

Documents

Application Documents

# Name Date
1 202321060012-STATEMENT OF UNDERTAKING (FORM 3) [06-09-2023(online)].pdf 2023-09-06
2 202321060012-PROVISIONAL SPECIFICATION [06-09-2023(online)].pdf 2023-09-06
3 202321060012-FORM 1 [06-09-2023(online)].pdf 2023-09-06
4 202321060012-FIGURE OF ABSTRACT [06-09-2023(online)].pdf 2023-09-06
5 202321060012-DRAWINGS [06-09-2023(online)].pdf 2023-09-06
6 202321060012-DECLARATION OF INVENTORSHIP (FORM 5) [06-09-2023(online)].pdf 2023-09-06
7 202321060012-FORM-26 [17-10-2023(online)].pdf 2023-10-17
8 202321060012-Proof of Right [12-02-2024(online)].pdf 2024-02-12
9 202321060012-DRAWING [03-09-2024(online)].pdf 2024-09-03
10 202321060012-COMPLETE SPECIFICATION [03-09-2024(online)].pdf 2024-09-03
11 Abstract 1.jpg 2024-09-25
12 202321060012-Power of Attorney [24-01-2025(online)].pdf 2025-01-24
13 202321060012-Form 1 (Submitted on date of filing) [24-01-2025(online)].pdf 2025-01-24
14 202321060012-Covering Letter [24-01-2025(online)].pdf 2025-01-24
15 202321060012-CERTIFIED COPIES TRANSMISSION TO IB [24-01-2025(online)].pdf 2025-01-24
16 202321060012-FORM 3 [29-01-2025(online)].pdf 2025-01-29
17 202321060012-FORM 18 [20-03-2025(online)].pdf 2025-03-20