
Method And System For Implementing Execution Of One Or More Tasks

Abstract: The present disclosure relates to a method and a system for implementing execution of one or more tasks. In one example, the method comprises obtaining, by a transceiver unit [302], one or more policy provisioning events. The method further comprises creating, by a creation unit [304], a set of task query events based on the one or more policy provisioning events. The method further comprises transmitting, by the transceiver unit [302] to a Platform Scheduler, a dynamic query builder based on the set of task query events, wherein the Platform Scheduler is to execute one or more tasks based on the dynamic query builder. [FIG. 5]


Patent Information

Application #:
Filing Date: 13 September 2023
Publication Number: 14/2025
Publication Type: INA
Invention Field: COMMUNICATION
Status:
Parent Application:

Applicants

Jio Platforms Limited
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India.

Inventors

1. Aayush Bhatnagar
Reliance Corporate Park, Thane- Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
2. Ankit Murarka
Reliance Corporate Park, Thane- Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
3. Rizwan Ahmad
Reliance Corporate Park, Thane- Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
4. Kapil Gill
Reliance Corporate Park, Thane- Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
5. Arpit Jain
Reliance Corporate Park, Thane- Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
6. Shashank Bhushan
Reliance Corporate Park, Thane- Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
7. Jugal Kishore
Reliance Corporate Park, Thane- Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
8. Meenakshi Sarohi
Reliance Corporate Park, Thane- Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
9. Kumar Debashish
Reliance Corporate Park, Thane- Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
10. Supriya Kaushik De
Reliance Corporate Park, Thane- Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
11. Gaurav Kumar
Reliance Corporate Park, Thane- Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
12. Kishan Sahu
Reliance Corporate Park, Thane- Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
13. Gaurav Saxena
Reliance Corporate Park, Thane- Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
14. Vinay Gayki
Reliance Corporate Park, Thane- Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
15. Mohit Bhanwria
Reliance Corporate Park, Thane- Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
16. Durgesh Kumar
Reliance Corporate Park, Thane- Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
17. Rahul Kumar
Reliance Corporate Park, Thane- Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.

Specification

FORM 2
THE PATENTS ACT, 1970 (39 OF 1970) & THE PATENT RULES, 2003
COMPLETE SPECIFICATION
(See section 10 and rule 13)
“METHOD AND SYSTEM FOR IMPLEMENTING EXECUTION OF ONE OR MORE TASKS”
We, Jio Platforms Limited, an Indian National, of Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India.
The following specification particularly describes the invention and the manner in which it is to be performed.

METHOD AND SYSTEM FOR IMPLEMENTING EXECUTION OF ONE OR MORE TASKS
FIELD OF INVENTION
[0001] Embodiments of the present disclosure relate generally to the field of wireless communication systems. More particularly, embodiments of the present disclosure relate to a method and system for implementing execution of one or more tasks.
BACKGROUND
[0002] The following description of the related art is intended to provide background information pertaining to the field of the disclosure. This section may include certain aspects of the art that may be related to various features of the present disclosure. However, it should be appreciated that this section is used only to enhance the understanding of the reader with respect to the present disclosure, and not as admissions of the prior art.
[0003] Wireless communication technology has rapidly evolved over the past few decades, with each generation bringing significant improvements and advancements. The first generation of wireless communication technology was based on analog technology and offered only voice services. However, with the advent of the second-generation (2G) technology, digital communication and data services became possible, and text messaging was introduced. 3G technology marked the introduction of high-speed internet access, mobile video calling, and location-based services. The fourth-generation (4G) technology revolutionized wireless communication with faster data speeds, better network coverage, and improved security. Currently, the fifth-generation (5G) technology is being deployed, promising even faster data speeds, low latency, and the ability to connect multiple devices simultaneously. With each generation, wireless communication technology has become more advanced, sophisticated, and capable of delivering more services to its users.
[0004] The synergy between Capacity Management Platform (CMP) and Platform Scheduler (PS) microservices ensures the seamless execution of task creation, modification, and deletion events, with any instances of violated events promptly acknowledged to guarantee optimal system performance. This approach effectively manages system resources such as CPU, RAM, and storage utilization, thereby preserving the system's overall operational efficiency. The CMP microservice ensures that the dynamic query builder created during the design phase is transferred to the PS microservice as a task, presented as a "create task" event, for execution during the implementation phase. Following completion, the task is then removed in the termination phase. This eliminates the need for the CP microservice to manage task execution, saving both time and effort, as the PS microservice autonomously handles the task's execution and removal processes. However, current existing solutions involve manual effort, thereby leading to significant resource overhead and consuming time.
[0005] Thus, there exists an imperative need in the art to develop methods and systems to provide a solution for creating a dynamic query builder that is transferred to the PS microservice through a sequence of task events.
SUMMARY
[0006] This section is provided to introduce certain aspects of the present disclosure in a simplified form that are further described below in the detailed description. This summary is not intended to identify the key features or the scope of the claimed subject matter.
[0007] An aspect of the present disclosure may relate to a method for implementing execution of one or more tasks. The method comprises obtaining, by a transceiver unit, one or more policy provisioning events. The method further comprises creating, by a creation unit, a set of task query events based on the one or more policy provisioning events. The method further comprises transmitting, by the transceiver unit to a Platform Scheduler, a dynamic query builder based on the set of task query events, wherein the Platform Scheduler is to execute one or more tasks based on the dynamic query builder.
[0008] In an exemplary aspect of the present disclosure, the method further comprises storing, by a storage unit, the one or more policy provisioning events post obtaining the one or more policy provisioning events.
[0009] In an exemplary aspect of the present disclosure, for obtaining the one or more policy provisioning events, the method further comprises fetching, by the transceiver unit from a storage unit, the one or more policy provisioning events.
[0010] In an exemplary aspect of the present disclosure, the dynamic query builder is transmitted by the transceiver unit to the Platform Scheduler through a sequence of one or more task query events among the set of task query events.
[0011] In an exemplary aspect of the present disclosure, the set of task query events is related to one or more of a creation task, a modification task, a deletion task, and an execution task.
[0012] In an exemplary aspect of the present disclosure, the method further comprises transmitting, by the transceiver unit to the Platform Scheduler, the dynamic query builder further based on a resource hysteresis information.
[0013] In an exemplary aspect of the present disclosure, the method further comprises receiving, by the transceiver unit from the Platform Scheduler, a notification related to a breached output, in an event of satisfaction of one or more breach conditions based on the resource hysteresis information.

[0014] Another aspect of the present disclosure may relate to a system for implementing execution of one or more tasks. The system comprises a transceiver unit. The transceiver unit is configured to obtain one or more policy provisioning events. The system further comprises a creation unit connected at least to the transceiver unit. The creation unit is configured to create a set of task query events based on the one or more policy provisioning events. The transceiver unit is further configured to transmit, to a Platform Scheduler, a dynamic query builder based on the set of task query events, wherein the Platform Scheduler is to execute one or more tasks based on the dynamic query builder.
[0015] Yet another aspect of the present disclosure may relate to a non-transitory computer readable storage medium storing instructions for implementing execution of one or more tasks. The instructions include executable code which, when executed by one or more units of a system, causes a transceiver unit of the system to obtain one or more policy provisioning events. Further, the instructions include executable code which, when executed, causes a creation unit to create a set of task query events based on the one or more policy provisioning events. Further, the instructions include executable code which, when executed, causes the transceiver unit to transmit, to a Platform Scheduler, a dynamic query builder based on the set of task query events, wherein the Platform Scheduler is to execute one or more tasks based on the dynamic query builder.
OBJECTS OF THE DISCLOSURE
[0016] Some of the objects of the present disclosure, which at least one embodiment disclosed herein satisfies, are listed herein below.
[0017] It is an object of the present disclosure to provide a system and a method for implementing execution of one or more tasks.

[0018] It is another object of the present disclosure to provide a solution for creating a dynamic query builder that is transferred to the PS microservice through a sequence of task events.
[0019] It is yet another object of the present disclosure to ensure that task execution takes place within the PS microservice, effectively managing resource limitations and minimizing overhead within the CP microservice.
DESCRIPTION OF THE DRAWINGS
[0020] The accompanying drawings, which are incorporated herein, and constitute a part of this disclosure, illustrate exemplary embodiments of the disclosed methods and systems in which like reference numerals refer to the same parts throughout the different drawings. Components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present disclosure. Also, the embodiments shown in the figures are not to be construed as limiting the disclosure, but the possible variants of the method and system according to the disclosure are illustrated herein to highlight the advantages of the disclosure. It will be appreciated by those skilled in the art that disclosure of such drawings includes disclosure of electrical components or circuitry commonly used to implement such components.
[0021] FIG. 1 illustrates an exemplary block diagram representation of 5th generation core (5GC) network architecture.
[0022] FIG. 2 illustrates an exemplary block diagram of a computing device upon which the features of the present disclosure may be implemented in accordance with exemplary implementation of the present disclosure.

[0023] FIG. 3 illustrates an exemplary block diagram of a system for implementing execution of one or more tasks, in accordance with exemplary implementations of the present disclosure.
[0024] FIG. 4 illustrates an exemplary flow diagram for implementing execution of one or more tasks, in accordance with exemplary implementations of the present disclosure.
[0025] FIG. 5 illustrates a method flow diagram for implementing execution of one or more tasks, in accordance with exemplary implementations of the present disclosure.
[0026] The foregoing shall be more apparent from the following more detailed description of the disclosure.
DETAILED DESCRIPTION
[0027] In the following description, for the purposes of explanation, various specific details are set forth in order to provide a thorough understanding of embodiments of the present disclosure. It will be apparent, however, that embodiments of the present disclosure may be practiced without these specific details. Several features described hereafter may each be used independently of one another or with any combination of other features. An individual feature may not address any of the problems discussed above or might address only some of the problems discussed above.
[0028] The ensuing description provides exemplary embodiments only, and is not intended to limit the scope, applicability, or configuration of the disclosure. Rather, the ensuing description of the exemplary embodiments will provide those skilled in the art with an enabling description for implementing an exemplary embodiment. It should be understood that various changes may be made in the function and arrangement of elements without departing from the spirit and scope of the disclosure as set forth.
[0029] Specific details are given in the following description to provide a thorough understanding of the embodiments. However, it will be understood by one of ordinary skill in the art that the embodiments may be practiced without these specific details. For example, circuits, systems, processes, and other components may be shown as components in block diagram form in order not to obscure the embodiments in unnecessary detail.
[0030] Also, it is noted that individual embodiments may be described as a process which is depicted as a flowchart, a flow diagram, a data flow diagram, a structure diagram, or a block diagram. Although a flowchart may describe the operations as a sequential process, many of the operations may be performed in parallel or concurrently. In addition, the order of the operations may be re-arranged. A process is terminated when its operations are completed but could have additional steps not included in a figure.
[0031] The word “exemplary” and/or “demonstrative” is used herein to mean serving as an example, instance, or illustration. For the avoidance of doubt, the subject matter disclosed herein is not limited by such examples. In addition, any aspect or design described herein as “exemplary” and/or “demonstrative” is not necessarily to be construed as preferred or advantageous over other aspects or designs, nor is it meant to preclude equivalent exemplary structures and techniques known to those of ordinary skill in the art. Furthermore, to the extent that the terms “includes,” “has,” “contains,” and other similar words are used in either the detailed description or the claims, such terms are intended to be inclusive, in a manner similar to the term “comprising” as an open transition word, without precluding any additional or other elements.

[0032] As used herein, a “processing unit” or “processor” or “operating processor” includes one or more processors, wherein processor refers to any logic circuitry for processing instructions. A processor may be a general-purpose processor, a special purpose processor, a conventional processor, a digital signal processor, a plurality of microprocessors, one or more microprocessors in association with a Digital Signal Processing (DSP) core, a controller, a microcontroller, Application Specific Integrated Circuits, Field Programmable Gate Array circuits, any other type of integrated circuits, etc. The processor may perform signal coding, data processing, input/output processing, and/or any other functionality that enables the working of the system according to the present disclosure. More specifically, the processor or processing unit is a hardware processor.
[0033] As used herein, “a user equipment”, “a user device”, “a smart-user-device”, “a smart-device”, “an electronic device”, “a mobile device”, “a handheld device”, “a wireless communication device”, “a mobile communication device”, “a communication device” may be any electrical, electronic and/or computing device or equipment, capable of implementing the features of the present disclosure. The user equipment/device may include, but is not limited to, a mobile phone, smart phone, laptop, a general-purpose computer, desktop, personal digital assistant, tablet computer, wearable device or any other computing device which is capable of implementing the features of the present disclosure. Also, the user device may contain at least one input means configured to receive an input from unit(s) which are required to implement the features of the present disclosure.
[0034] As used herein, “storage unit” or “memory unit” refers to a machine or computer-readable medium including any mechanism for storing information in a form readable by a computer or similar machine. For example, a computer-readable medium includes read-only memory (“ROM”), random access memory (“RAM”), magnetic disk storage media, optical storage media, flash memory devices or other types of machine-accessible storage media. The storage unit stores at least the data that may be required by one or more units of the system to perform their respective functions.
[0035] As used herein, “interface” or “user interface” refers to a shared boundary across which two or more separate components of a system exchange information or data. The interface may also refer to a set of rules or protocols that define communication or interaction of one or more modules or one or more units with each other, which also includes the methods, functions, or procedures that may be called.
[0036] All modules, units, components used herein, unless explicitly excluded herein, may be software modules or hardware processors, the processors being a general-purpose processor, a special purpose processor, a conventional processor, a digital signal processor (DSP), a plurality of microprocessors, one or more microprocessors in association with a DSP core, a controller, a microcontroller, Application Specific Integrated Circuits (ASIC), Field Programmable Gate Array circuits (FPGA), any other type of integrated circuits, etc.
[0037] As used herein, the transceiver unit includes at least one receiver and at least one transmitter configured respectively for receiving and transmitting data, signals, information or a combination thereof between units/components within the system and/or connected with the system.
[0038] As discussed in the background section, the current known solutions have several shortcomings. The present disclosure aims to overcome the above-mentioned and other existing problems in this field of technology by providing a method and a system for implementing execution of one or more tasks on a Platform Scheduler. The method is implemented by the system that involves various components of a management and orchestration (MANO) architecture.

[0039] Hereinafter, exemplary embodiments of the present disclosure will be described with reference to the accompanying drawings.
[0040] FIG. 1 illustrates an exemplary block diagram representation of 5th generation core (5GC) network architecture, in accordance with exemplary implementation of the present disclosure. As shown in FIG. 1, the 5GC network architecture [100] includes a user equipment (UE) [102], a radio access network (RAN) [104], an access and mobility management function (AMF) [106], a Session Management Function (SMF) [108], a Service Communication Proxy (SCP) [110], an Authentication Server Function (AUSF) [112], a Network Slice Specific Authentication and Authorization Function (NSSAAF) [114], a Network Slice Selection Function (NSSF) [116], a Network Exposure Function (NEF) [118], a Network Repository Function (NRF) [120], a Policy Control Function (PCF) [122], a Unified Data Management (UDM) [124], an application function (AF) [126], a User Plane Function (UPF) [128], and a data network (DN) [130], wherein all the components are assumed to be connected to each other in a manner as obvious to the person skilled in the art for implementing features of the present disclosure.
[0041] Radio Access Network (RAN) [104] is the part of a mobile telecommunications system that connects user equipment (UE) [102] to the core network (CN) and provides access to different types of networks (e.g., 5G network). It consists of radio base stations and the radio access technologies that enable wireless communication.
[0042] Access and Mobility Management Function (AMF) [106] is a 5G core network function responsible for managing access and mobility aspects, such as UE registration, connection, and reachability. It also handles mobility management procedures like handovers and paging.
[0043] Session Management Function (SMF) [108] is a 5G core network function responsible for managing session-related aspects, such as establishing, modifying, and releasing sessions. It coordinates with the User Plane Function (UPF) for data forwarding and handles IP address allocation and QoS enforcement.
[0044] Service Communication Proxy (SCP) [110] is a network function in the 5G core network that facilitates communication between other network functions by providing a secure and efficient messaging service. It acts as a mediator for service-based interfaces.
[0045] Authentication Server Function (AUSF) [112] is a network function in the 5G core responsible for authenticating UEs during registration and providing security services. It generates and verifies authentication vectors and tokens.
[0046] Network Slice Specific Authentication and Authorization Function (NSSAAF) [114] is a network function that provides authentication and authorization services specific to network slices. It ensures that UEs can access only the slices for which they are authorized.
[0047] Network Slice Selection Function (NSSF) [116] is a network function responsible for selecting the appropriate network slice for a UE based on factors such as subscription, requested services, and network policies.
[0048] Network Exposure Function (NEF) [118] is a network function that exposes capabilities and services of the 5G network to external applications, enabling integration with third-party services and applications.
[0049] Network Repository Function (NRF) [120] is a network function that acts as a central repository for information about available network functions and services. It facilitates the discovery and dynamic registration of network functions.

[0050] Policy Control Function (PCF) [122] is a network function responsible for policy control decisions, such as QoS, charging, and access control, based on subscriber information and network policies.
[0051] Unified Data Management (UDM) [124] is a network function that centralizes the management of subscriber data, including authentication, authorization, and subscription information.
[0052] Application Function (AF) [126] is a network function that represents external applications interfacing with the 5G core network to access network capabilities and services.
[0053] User Plane Function (UPF) [128] is a network function responsible for handling user data traffic, including packet routing, forwarding, and QoS enforcement.
[0054] Data Network (DN) [130] refers to a network that provides data services to user equipment (UE) in a telecommunications system. The data services may include but are not limited to Internet services, private data network related services.
[0055] The 5GC network architecture also comprises a plurality of interfaces for connecting the network functions with a network entity for performing the network functions. The NSSF [116] is connected with the network entity via the interface denoted as (Nnssf) interface in FIG. 1. The NEF [118] is connected with the network entity via the interface denoted as (Nnef) interface in FIG. 1. The NRF [120] is connected with the network entity via the interface denoted as (Nnrf) interface in FIG. 1. The PCF [122] is connected with the network entity via the interface denoted as (Npcf) interface in FIG. 1. The UDM [124] is connected with the network entity via the interface denoted as (Nudm) interface in FIG. 1. The AF [126] is connected with the network entity via the interface denoted as (Naf) interface in FIG. 1. The NSSAAF [114] is connected with the network entity via the interface denoted as (Nnssaaf) interface in FIG. 1. The AUSF [112] is connected with the network entity via the interface denoted as (Nausf) interface in FIG. 1. The AMF [106] is connected with the network entity via the interface denoted as (Namf) interface in FIG. 1. The SMF [108] is connected with the network entity via the interface denoted as (Nsmf) interface in FIG. 1. The SMF [108] is connected with the UPF [128] via the interface denoted as (N4) interface in FIG. 1. The UPF [128] is connected with the RAN [104] via the interface denoted as (N3) interface in FIG. 1. The UPF [128] is connected with the DN [130] via the interface denoted as (N6) interface in FIG. 1. The RAN [104] is connected with the AMF [106] via the interface denoted as (N2). The AMF [106] is connected with the RAN [104] via the interface denoted as (N1). The UPF [128] is connected with other UPF [128] via the interface denoted as (N9). The interfaces such as Nnssf, Nnef, Nnrf, Npcf, Nudm, Naf, Nnssaaf, Nausf, Namf, Nsmf, N9, N6, N4, N3, N2, and N1 can be referred to as a communication channel between one or more functions or modules for enabling exchange of data or information between such functions or modules, and network entities.
[0056] FIG. 2 illustrates an exemplary block diagram of a computing device [200] upon which the features of the present disclosure may be implemented in accordance with exemplary implementation of the present disclosure. In an implementation, the computing device [200] may also implement a method for implementing execution of one or more tasks on a Platform Scheduler utilising a system. In another implementation, the computing device [200] itself implements the method for implementing execution of one or more tasks on a Platform Scheduler using one or more units configured within the computing device [200], wherein said one or more units are capable of implementing the features as disclosed in the present disclosure.
[0057] The computing device [200] may include a bus [202] or other communication mechanism for communicating information, and a hardware processor [204] coupled with bus [202] for processing information. The hardware processor [204] may be, for example, a general-purpose microprocessor. The computing device [200] may also include a main memory [206], such as a random-access memory (RAM), or other dynamic storage device, coupled to the bus [202] for storing information and instructions to be executed by the processor [204]. The main memory [206] also may be used for storing temporary variables or other intermediate information during execution of the instructions to be executed by the processor [204]. Such instructions, when stored in non-transitory storage media accessible to the processor [204], render the computing device [200] into a special-purpose machine that is customized to perform the operations specified in the instructions. The computing device [200] further includes a read only memory (ROM) [208] or other static storage device coupled to the bus [202] for storing static information and instructions for the processor [204].
[0058] A storage device [210], such as a magnetic disk, optical disk, or solid-state drive, is provided and coupled to the bus [202] for storing information and instructions. The computing device [200] may be coupled via the bus [202] to a display [212], such as a cathode ray tube (CRT), Liquid Crystal Display (LCD), Light Emitting Diode (LED) display, Organic LED (OLED) display, etc., for displaying information to a computer user. An input device [214], including alphanumeric and other keys, touch screen input means, etc., may be coupled to the bus [202] for communicating information and command selections to the processor [204]. Another type of user input device may be a cursor controller [216], such as a mouse, a trackball, or cursor direction keys, for communicating direction information and command selections to the processor [204], and for controlling cursor movement on the display [212]. This input device typically has two degrees of freedom in two axes, a first axis (e.g., x) and a second axis (e.g., y), that allow the device to specify positions in a plane.
[0059] The computing device [200] may implement the techniques described herein using customized hard-wired logic, one or more ASICs or FPGAs, firmware and/or program logic which in combination with the computing device [200] causes or programs the computing device [200] to be a special-purpose machine. According to one implementation, the techniques herein are performed by the computing device [200] in response to the processor [204] executing one or more sequences of one or more instructions contained in the main memory [206]. Such instructions may be read into the main memory [206] from another storage medium, such as the storage device [210]. Execution of the sequences of instructions contained in the main memory [206] causes the processor [204] to perform the process steps described herein. In alternative implementations of the present disclosure, hard-wired circuitry may be used in place of or in combination with software instructions.
[0060] The computing device [200] also may include a communication interface [218] coupled to the bus [202]. The communication interface [218] provides a two-way data communication coupling to a network link [220] that is connected to a local network [222]. For example, the communication interface [218] may be an integrated services digital network (ISDN) card, cable modem, satellite modem, or a modem to provide a data communication connection to a corresponding type of telephone line. As another example, the communication interface [218] may be a local area network (LAN) card to provide a data communication connection to a compatible LAN. Wireless links may also be implemented. In any such implementation, the communication interface [218] sends and receives electrical, electromagnetic or optical signals that carry digital data streams representing various types of information.
[0061] The computing device [200] can send messages and receive data, including program code, through the network(s), the network link [220] and the communication interface [218]. In the Internet example, a server [230] might transmit a requested code for an application program through the Internet [228], the ISP [226], the local network [222], the host [224] and the communication interface [218]. The received code may be executed by the processor [204] as it is received, and/or stored in the storage device [210], or other non-volatile storage for later execution.
[0062] Referring to FIG. 3, an exemplary block diagram of a system [300] for implementing execution of one or more tasks, in accordance with the exemplary implementations of the present disclosure, is shown. In one example, the system [300] may be implemented as or within a Capacity Management Platform (CP). As would be understood, a Capacity Management Platform may be referred to as a system or a network component that may aid in optimization of resources within the network.
[0063] FIG. 4 illustrates an exemplary flow diagram for implementing execution of one or more tasks, in accordance with exemplary implementations of the present disclosure.
[0064] It may be noted that FIG. 3 and FIG. 4 have been explained simultaneously and may be read in conjunction with each other.
[0065] In one example, the system [300] may be in communication with other network entities/components as depicted in FIG. 4. It may be further noted that any other network entities/components known to a person skilled in the art and not depicted in FIG. 4, may also be in communication with the system [300]. Such network entities/components have not been explained here for the sake of brevity.
[0066] As depicted in FIG. 3, the system [300] comprises at least one transceiver unit [302], at least one creation unit [304] and at least one storage unit [306]. In cases where the system [300] may be implemented as the Capacity Management Platform, the different aforementioned units may be a part of such Capacity Management Platform.

[0067] Also, all of the components/units of the system [300] are assumed to be connected to each other unless otherwise indicated below. As shown in FIG. 3, all units shown within the system [300] should also be assumed to be connected to each other. Also, in FIG. 3 only a few units are shown, however, the system [300] may comprise multiple such units or the system [300] may comprise any such numbers of said units, as required to implement the features of the present disclosure. Further, in an implementation, the system [300] may be present in a user device/user equipment [102] to implement the features of the present disclosure. The system [300] may be a part of the user device [102] or may be independent of but in communication with the user device [102] (may also be referred to herein as a UE). In another implementation, the system [300] may reside in a server or a network entity. In yet another implementation, the system [300] may reside partly in the server/network entity and partly in the user device.
[0068] The system [300] is configured for implementing execution of one or more tasks, with the help of the interconnection between the components/units of the system [300].
[0069] In order to implement execution of one or more tasks, in operation, the transceiver unit [302] may obtain one or more policy provisioning events.
[0070] The policy provisioning events refer to data that define specific policies or conditions under which tasks may be executed. These policies may involve data such as resource allocation, execution priorities, or operational conditions that the Platform Scheduler needs to follow when performing tasks.
[0071] For example, a policy provisioning event may be a rule for resource allocation, where the policy provisioning event specifies that high-priority tasks must receive 20% of CPU and memory resources, ensuring real-time data processing, while lower-priority tasks use the remaining resources.
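By way of illustration only, such a rule may be captured in a simple data structure as in the following Python sketch. The field names (policy_id, priority, cpu_share, memory_share) are assumptions made for this sketch and are not prescribed by the present disclosure.

    from dataclasses import dataclass

    @dataclass
    class PolicyProvisioningEvent:
        # Illustrative fields only; a real deployment may carry additional policy data.
        policy_id: str
        priority: str        # e.g. "high" or "low"
        cpu_share: float     # fraction of CPU reserved for tasks matching this policy
        memory_share: float  # fraction of memory reserved for tasks matching this policy

    # Example rule: high-priority tasks must receive 20% of CPU and memory resources.
    high_priority_rule = PolicyProvisioningEvent(
        policy_id="policy-001", priority="high", cpu_share=0.20, memory_share=0.20)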

[0072] In an implementation of the present disclosure, the process of obtaining these policy provisioning events involves the transceiver unit [302] communicating with various sources that generate or store such events. These sources may include network management systems, external databases, or other systems responsible for generating policy data. In one example, the transceiver unit [302] may receive the policy provisioning events from a User Interface [402]. This has been depicted by Step [404] in FIG. 4.
[0073] In an example, when the transceiver unit [302] obtains the policy provisioning events, the storage unit [306] may store the same. For example, the policy provisioning events, obtained by the transceiver unit [302], may be stored in the storage unit [306] for future reference.
[0074] For example, if the system [300] receives a policy event related to resource allocation, this event will be stored in the storage unit [306]. This stored information may then be used for later processing.
[0075] In another example, for obtaining the policy provisioning events, the transceiver unit [302] may fetch the one or more policy provisioning events from the storage unit [306]. For example, the transceiver unit [302] may obtain policy provisioning events by retrieving pre-configured policies stored in the storage unit [306]. The transceiver unit [302] accesses the storage unit [306] to fetch previously stored policy provisioning events that may be either pre-defined or generated at an earlier time. This retrieval mechanism may be useful when real-time policy data is unavailable, or when the system operates in a disconnected state.
[0076] For example, if the transceiver unit [302] needs to process a task based on past policy events that were stored previously, it may fetch these events from the storage unit [306]. This ensures continuity and consistency in task execution, even if the original source of the policy events is no longer available or the system [300] is operating offline.
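A minimal Python sketch of the two ways of obtaining policy provisioning events described above follows, with simple in-memory stand-ins for the storage unit [306] and the transceiver unit [302]. The class and method names are illustrative assumptions only.

    class StorageUnit:
        # Stand-in for the storage unit [306]; a real system would use persistent storage.
        def __init__(self):
            self._events = []

        def store_events(self, events):
            self._events.extend(events)

        def fetch_events(self):
            return list(self._events)

    class TransceiverUnit:
        # Stand-in for the transceiver unit [302].
        def __init__(self, storage_unit):
            self.storage = storage_unit

        def obtain_policy_events(self, incoming_events=None):
            if incoming_events:
                # Events received in real time (e.g. from a user interface) are stored
                # in the storage unit for future reference.
                self.storage.store_events(incoming_events)
                return incoming_events
            # Otherwise fall back to previously stored events (e.g. offline operation).
            return self.storage.fetch_events()

    transceiver = TransceiverUnit(StorageUnit())
    transceiver.obtain_policy_events([{"policy_id": "policy-001"}])
    offline_events = transceiver.obtain_policy_events()  # fetched from storage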

[0077] However, it may be noted that the above-mentioned ways of obtaining the policy provisioning events are only exemplary, and are in no manner to be construed as limiting the scope of the present subject matter. The transceiver unit [302] may obtain the policy provisioning events in any other manner as well, known to a person skilled in the art. All such examples would lie within the scope of the present subject matter.
[0078] Continuing further, once the one or more policy provisioning events are obtained, the creation unit [304] may create a set of task query events based on the one or more policy provisioning events.
[0079] In an implementation of the present disclosure, the creation of a set of task query events begins when the transceiver unit [302] obtains one or more policy provisioning events. These events contain system-level information and policies that dictate how resources may be managed, how tasks are to be prioritized, or which actions are to be executed.
[0080] Upon obtaining these policy provisioning events, the creation unit [304] processes them to identify the specific actions that need to be taken. Each policy provisioning event may be analysed to determine the nature of the task it corresponds to, such as whether it involves creating a new task, modifying an existing task, deleting a task that is no longer required, or executing a task that has been delayed until certain conditions are met, such as when enough resources are available or when other tasks are completed.
[0081] The creation unit [304] then organizes these individual task query events into a set, which forms the basis of the dynamic query builder.
[0082] For example, if the policy provisioning events indicate that a high-priority task may execute with specific resource allocation, the creation unit [304] generates a task query event tailored to those requirements. These task query events may define tasks related to resource allocation or task scheduling that need to be performed. The task query events act as intermediate instructions that the Platform Scheduler [406] may use to manage and execute tasks in accordance with the policy provisioning events.
[0083] In another example, a policy provisioning event may specify that a particular computational task must utilize 30% of the system's available memory resources and complete within a defined time window. In response to this, the creation unit [304] may generate a set of task query events specifying the memory allocation and time limitation.
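The classification of policy provisioning events into task query events may be sketched as follows. This is a hedged illustration only; the event fields (task_exists, retire, deferred, constraints) and the mapping rules are assumptions, not part of the present disclosure.

    def create_task_query_events(policy_events):
        # Derive one task query event per policy provisioning event, classifying each
        # as a creation, modification, deletion, or execution task.
        task_query_events = []
        for event in policy_events:
            if event.get("task_exists") and event.get("retire"):
                action = "delete"    # the task is no longer required
            elif event.get("task_exists"):
                action = "modify"    # adjust an existing task's parameters
            elif event.get("deferred"):
                action = "execute"   # run a task that was waiting on conditions
            else:
                action = "create"    # provision a new task
            task_query_events.append({
                "action": action,
                "policy_id": event.get("policy_id"),
                "constraints": event.get("constraints", {}),  # e.g. 30% memory, time window
            })
        return task_query_events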
[0084] Continuing further, after the set of task query events has been created, the transceiver unit [302] may transmit a dynamic query builder based on the set of task query events to a Platform Scheduler [406]. This has been depicted by Step [408] in FIG. 4. The Platform Scheduler [406] may be configured to execute one or more tasks based on the dynamic query builder.
[0085] In an implementation of the present disclosure, the transceiver unit [302] sends a dynamic query builder to the Platform Scheduler [406], where the dynamic query builder captures the instructions provided by the task query events. The dynamic query builder functions as a complete set of instructions that the Platform Scheduler may use to execute one or more tasks. The transmission of the dynamic query builder enables the Platform Scheduler [406] to understand the set of task query events.
[0086] In an example, the transceiver unit [302] may manage the dynamic query builder by transmitting it to the Platform Scheduler [406] through a sequence of one or more task query events. These task query events may relate to specific types of actions, including creation, modification, deletion, or execution tasks, depending on the requirements of the Platform Scheduler [406].

[0087] For example, if the policy provisioning events indicate that a new task needs to be created to handle additional system load, the creation unit [304] generates a task query event for the creation of the task. This task query event is included in the dynamic query builder, which is then transmitted to the Platform Scheduler [406]. Similarly, if an existing task needs to be modified, the creation unit [304] may generate task query events accordingly.
[0088] In another example, to allocate more resources, the transceiver unit [302] may transmit a dynamic query builder containing the modification task query event to adjust the task's parameters. The task query event may indicate the deletion of a task if it is no longer necessary, or the execution of a task if it has been scheduled for immediate processing.
[0089] In an example, the transceiver unit [302] may transmit, to the Platform Scheduler [406], the dynamic query builder further based on a resource hysteresis information.
[0090] The resource hysteresis information refers to data that tracks the historical usage patterns and thresholds of system resources such as CPU, memory, or network bandwidth. The resource hysteresis information provides insight into past resource usage patterns and trends, which are used to make more informed decisions about task scheduling and resource allocation.
[0091] For example, if historical data shows that resource usage tends to peak during specific times or under certain conditions, this information may be used to adjust the dynamic query builder to account for these fluctuations.
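As a hedged illustration, resource hysteresis information could be summarised from historical usage samples as shown below; the specific statistics chosen (average, peak, peak factor) are assumptions made for this sketch.

    def build_hysteresis_info(cpu_usage_history):
        # Summarise past CPU usage so scheduling decisions can account for recurring peaks.
        average = sum(cpu_usage_history) / len(cpu_usage_history)
        peak = max(cpu_usage_history)
        return {
            "average_cpu": average,
            "peak_cpu": peak,
            # Headroom factor reflecting how far observed peaks exceed the average.
            "peak_factor": peak / average if average else 1.0,
        }

    # Example: usage tends to peak around 90% during busy periods.
    hysteresis_info = build_hysteresis_info([0.45, 0.50, 0.90, 0.42, 0.88])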
[0092] Continuing further, in another example, after the dynamic query builder has been created, the same may be stored in a repository [410]. This has been depicted by Step [412] in FIG. 4. The repository [410] may be configured to hold task-related queries that may be part of the dynamic query builder.
[0093] For example, when a set of task query events is generated, it is stored in the repository [410] for future use. If the system [300] later detects an increase in resource demand or a shift in task priorities, the system [300] may fetch these stored dynamic query events from the repository (as depicted by Step [414] in FIG. 4) and accordingly transmit them to the Platform Scheduler [406].
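The repository interaction may be sketched as follows, with an in-memory dictionary standing in for the repository [410]; this is an assumption made for illustration only.

    class Repository:
        # Stand-in for the repository [410] that holds task-related queries.
        def __init__(self):
            self._store = {}

        def save(self, builder_id, dynamic_query_builder):
            self._store[builder_id] = dynamic_query_builder

        def fetch(self, builder_id):
            return self._store.get(builder_id)

    repo = Repository()
    repo.save("dqb-1", {"events": [{"action": "create", "policy_id": "policy-001"}]})
    # Later, when resource demand or task priorities shift, the stored builder is
    # fetched again and re-transmitted to the Platform Scheduler.
    stored_builder = repo.fetch("dqb-1")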
[0094] Continuing further, the Platform Scheduler [406], on receiving the dynamic query builder from the system [300], may generate feedback indicating that the system's resource usage has breached a predefined threshold. The transceiver unit [302] may then receive this feedback from the Platform Scheduler [406] in the form of a notification related to a breached output condition. This has been depicted by Step [416] in FIG. 4.
[0095] The term breached output refers to an alert or notification triggered when resource usage exceeds predefined limits, indicating a potential need for corrective action to maintain system stability and performance.
[0096] The configurable thresholds are limits set by system administrators or predefined by the system [300] to control things like resource usage, performance, and task execution. These thresholds may be adjusted during system setup. The Platform Scheduler monitors the system [300], and if these limits are exceeded, it sends a notification.
[0097] The notification occurs when the Platform Scheduler detects that certain predefined limits or conditions have been exceeded. These notifications may indicate resource overuse, such as when the system consumes more memory, CPU, or bandwidth than allowed. They may also signal performance issues, such as when a task takes longer than expected or doesn't meet performance benchmarks. Additionally, if a task fails during execution due to errors or resource constraints, the scheduler sends a notification to alert the system about the failure.
[0098] The transceiver unit [302], on receiving this notification, may allow the system [300] to take corrective action. Such actions may include adjusting the resource allocation, delaying low-priority tasks, or even terminating non-essential operations to ensure that critical tasks continue to function within the resource limits.
[0099] In cases where the dynamic query builder transmitted by the system [300] is based on the resource hysteresis information, the notification received from the Platform Scheduler [406] may indicate that one or more breach conditions based on the resource hysteresis information have been satisfied.
[0100] For example, if the system's CPU usage exceeds 90% of its maximum capacity for an extended period, this may trigger a breach condition. The Platform Scheduler [406] detects this condition based on the resource hysteresis information and generates a notification to the transceiver unit [302]. This notification indicates that corrective measures need to be taken, such as reallocating resources, optimizing task execution, or scaling up resources to handle the increased load.
[0101] Referring to FIG. 5, an exemplary method flow diagram [500] for implementing execution of one or more tasks, in accordance with exemplary implementations of the present disclosure, is shown. In an implementation, the method [500] is performed by the system [300]. Further, in an implementation, the system [300] may be present in a server device to implement the features of the present disclosure. Also, as shown in FIG. 5, the method [500] starts at step [502].
[0102] At step [504], the method [500] comprises obtaining, by a transceiver unit [302], one or more policy provisioning events.

[0103] In order to implement execution of one or more tasks, in operation, the transceiver unit [302] may obtain one or more policy provisioning events.
[0104] The policy provisioning events refer to data that define specific policies or conditions under which tasks may be executed. These policies may involve data such as resource allocation, execution priorities, or operational conditions that the Platform Scheduler needs to follow when performing tasks.
[0105] In an implementation of the present disclosure, the process of obtaining these policy provisioning events involves the transceiver unit [302] communicating with various sources that generate or store such events. These sources may include network management systems, external databases, or other systems responsible for generating policy data. In one example, the transceiver unit [302] may receive the policy provisioning events from a User Interface [402].
[0106] In an example, when the transceiver unit [302] obtains the policy provisioning events, the storage unit [306] may store the same. For example, the policy provisioning events, obtained by the transceiver unit [302], may be stored in the storage unit [306] for future reference.
[0107] For example, if the system [300] receives a policy event related to resource allocation, this event will be stored in the storage unit [306]. This stored information may then be used for later processing.
[0108] In another example, for obtaining the policy provisioning events, the transceiver unit [302] may fetch the one or more policy provisioning events from the storage unit [306]. For example, the transceiver unit [302] may obtain policy provisioning events by retrieving pre-configured policies stored in the storage unit [306]. The transceiver unit [302] accesses the storage unit [306] to fetch previously stored policy provisioning events that may be either pre-defined or generated at an earlier time. This retrieval mechanism may be useful when real-time policy data is unavailable, or when the system operates in a disconnected state.
[0109] For example, if the transceiver unit [302] needs to process a task based on past policy events that were stored previously, it may fetch these events from the storage unit [306]. This ensures continuity and consistency in task execution, even if the original source of the policy events is no longer available or the system [300] is operating offline.
[0110] At step [506], the method [500] comprises creating, by a creation unit [304], a set of task query events based on the one or more policy provisioning events.
[0111] Continuing further, once the one or more policy provisioning events are obtained, the creation unit [304] may create a set of task query events based on the one or more policy provisioning events.
[0112] In an implementation of the present disclosure, the creation of a set of task query events begins when the transceiver unit [302] obtains one or more policy provisioning events. These events contain system-level information and policies that dictate how resources may be managed, how tasks are to be prioritized, or which actions are to be executed.
[0113] Upon obtaining these policy provisioning events, the creation unit [304] processes them to identify the specific actions that need to be taken. Each policy provisioning event may be analysed to determine the nature of the task it corresponds to, such as whether it involves creating a new task, modifying an existing task, deleting a task that is no longer required, or executing a task that has been delayed until certain conditions are met, such as when enough resources are available or when other tasks are completed.

[0114] The creation unit [304] then organizes these individual task query events into a set, which forms the basis of the dynamic query builder.
[0115] For example, if the policy provisioning events indicate that a high-priority task may execute with specific resource allocation, the creation unit [304] generates a task query event tailored to those requirements. These task query events may define tasks related to resource allocation or task scheduling that need to be performed. The task query events act as intermediate instructions that the Platform Scheduler [406] may use to manage and execute tasks in accordance with the policy provisioning events.
[0116] At step [508], the method [500] comprises transmitting, by the transceiver unit [302] to a Platform Scheduler, a dynamic query builder based on the set of task query events, wherein the Platform Scheduler is to execute one or more tasks based on the dynamic query builder.
[0117] Continuing further, after the set of task query events has been created, the transceiver unit [302] may transmit a dynamic query builder based on the set of task query events to a Platform Scheduler [406]. The Platform Scheduler [406] may be configured to execute one or more tasks based on the dynamic query builder.
[0118] In an implementation of the present disclosure, the transceiver unit [302] sends a dynamic query builder to the Platform Scheduler [406], where the dynamic query builder captures the instructions provided by the task query events. The dynamic query builder functions as a complete set of instructions that the Platform Scheduler may use to execute one or more tasks. The transmission of the dynamic query builder enables the Platform Scheduler [406] to understand the set of task query events.

[0119] In an example, the transceiver unit [302] may manage the dynamic query builder by transmitting it to the Platform Scheduler [406] through a sequence of one or more task query events. These task query events may relate to specific types of actions, including creation, modification, deletion, or execution tasks, depending on the requirements of the Platform Scheduler [406].
[0120] For example, if the policy provisioning events indicate that a new task needs to be created to handle additional system load, the creation unit [304] generates a task query event for the creation of the task. This task query event is included in the dynamic query builder, which is then transmitted to the Platform Scheduler [406]. Similarly, if an existing task needs to be modified, the creation unit [304] may generate task query events accordingly.
[0121] In another example, to allocate more resources, the transceiver unit [302] may transmit a dynamic query builder containing the modification task query event to adjust the task's parameters. The task query event may indicate the deletion of a task if it is no longer necessary, or the execution of a task if it has been scheduled for immediate processing.
[0122] In an example, the transceiver unit [302] may transmit, to the Platform Scheduler [406], the dynamic query builder further based on a resource hysteresis information.
[0123] The resource hysteresis information refers to data that tracks the historical usage patterns and thresholds of system resources such as CPU, memory, or network bandwidth. The resource hysteresis information provides insight into past resource usage patterns and trends, which are used to make more informed decisions about task scheduling and resource allocation.

[0124] For example, if historical data shows that resource usage tends to peak during specific times or under certain conditions, this information may be used to adjust the dynamic query builder to account for these fluctuations.
[0125] Continuing further, in another example, after the dynamic query builder has been created, the same may be stored in a repository [410]. The repository [410] may be configured to hold task-related queries that may be part of the dynamic query builder.
[0126] For example, when a set of task query events is generated, it is stored in the repository [410] for future use. If the system [300] later detects an increase in resource demand or a shift in task priorities, the system [300] may fetch these stored dynamic query events from the repository and accordingly transmit them to the Platform Scheduler [406].
[0127] Continuing further, the Platform Scheduler [406], on receiving the dynamic query builder from the system [300], may generate feedback indicating that the system's resource usage has breached a predefined threshold. The transceiver unit [302] may then receive this feedback from the Platform Scheduler [406] in the form of a notification related to a breached output condition.
[0128] The term breached output refers to an alert or notification triggered when resource usage exceeds predefined limits, indicating a potential need for corrective action to maintain system stability and performance.
[0129] The notification occurs when the Platform Scheduler detects that certain predefined limits or conditions have been exceeded. These notifications may indicate resource overuse, such as when the system consumes more memory, CPU, or bandwidth than allowed. They may also signal performance issues, such as when a task takes longer than expected or doesn't meet performance benchmarks. Additionally, if a task fails during execution due to errors or resource constraints, the scheduler sends a notification to alert the system about the failure.
[0130] The transceiver unit [302], on receiving this notification, may allow the system [300] to take corrective action. Such actions may include adjusting the resource allocation, delaying low-priority tasks, or even terminating non-essential operations to ensure that critical tasks continue to function within the resource limits.
[0131] In cases where the dynamic query builder transmitted by the system [300] is based on the resource hysteresis information, the notification received from the Platform Scheduler [406] may indicate that one or more breach conditions based on the resource hysteresis information have been satisfied.
[0132] For example, if the system’s CPU usage exceeds 90% of its maximum capacity for an extended period, this may trigger a breach condition. The Platform Scheduler [406] detects this breach based on the resource hysteresis information and generates a notification to the transceiver unit [302]. This notification indicates that corrective measures need to be taken, such as reallocating resources, optimizing task execution, or scaling up resources to handle the increased load.
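As an illustrative sketch of this example only, a sustained-usage breach condition over the hysteresis history could be checked as follows; the threshold, duration, and function names are assumptions rather than prescribed values.

    # Hypothetical sustained-usage breach check (illustrative only).
    from datetime import datetime, timedelta

    def breach_condition(samples, threshold=90.0, duration=timedelta(minutes=10)):
        # samples: list of (timestamp, cpu_percent), oldest first.
        # Returns True if usage stayed at or above `threshold` for at least `duration`.
        above_since = None
        for ts, usage in samples:
            if usage >= threshold:
                above_since = above_since or ts
                if ts - above_since >= duration:
                    return True
            else:
                above_since = None
        return False

    now = datetime.utcnow()
    history = [(now + timedelta(minutes=i), 93.0) for i in range(12)]
    breached = breach_condition(history)
    # `breached` is True here; the Platform Scheduler [406] would then notify the transceiver unit [302].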
[0133] Thereafter, the method terminates at step [510].
[0134] The present disclosure further discloses a non-transitory computer readable storage medium storing instructions for implementing execution of one or more tasks. The instructions include executable code which, when executed by one or more units of a system [300], causes a transceiver unit [302] of the system [300] to obtain one or more policy provisioning events. Further, the instructions include executable code which, when executed, causes a creation unit [304] to create a set of task query events based on the one or more policy provisioning events. Further, the instructions include executable code which, when executed, causes the

transceiver unit [302] to transmit, to a Platform Scheduler, a dynamic query builder based on the set of task query events, wherein the Platform Scheduler is to execute one or more tasks based on the dynamic query builder.
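As a purely illustrative, end-to-end sketch of the three instructed steps, the flow could be expressed as follows; the function names and stand-in data are hypothetical and do not limit the disclosure.

    # Hypothetical end-to-end flow of the three instructed steps (illustrative only).
    def obtain_policy_provisioning_events():
        # Transceiver unit [302]: obtain or fetch policy provisioning events (stand-in data).
        return [{"policy": "scale-up", "service": "billing"}]

    def create_task_query_events(policy_events):
        # Creation unit [304]: derive task query events from the policy provisioning events.
        return [{"action": "create", "task": f"task-for-{e['service']}"} for e in policy_events]

    def transmit_to_platform_scheduler(task_events):
        # Transceiver unit [302]: wrap the events in a dynamic query builder and transmit it.
        dynamic_query_builder = {"events": task_events}
        return dynamic_query_builder  # stand-in for the actual transmission

    events = obtain_policy_provisioning_events()
    task_events = create_task_query_events(events)
    transmit_to_platform_scheduler(task_events)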
[0135] As is evident from the above, the present disclosure provides a technically advanced solution for implementing execution of one or more tasks. The present disclosure provides a solution for creating a dynamic query builder that is transferred to the Platform Scheduler microservice through a sequence of task events. The approaches of the present subject matter ensure that all task executions take place within the PS microservice, effectively managing resource limitations and minimizing overhead within the CP microservice. As a result, the system's integrity is upheld.
[0136] While considerable emphasis has been placed herein on the disclosed implementations, it will be appreciated that many implementations can be made and that many changes can be made to the implementations without departing from the principles of the present disclosure. These and other changes in the implementations of the present disclosure will be apparent to those skilled in the art, whereby it is to be understood that the foregoing descriptive matter is to be construed as illustrative and non-limiting.
[0137] Further, in accordance with the present disclosure, it is to be acknowledged that the functionality described for the various components/units can be implemented interchangeably. While specific embodiments may disclose a particular functionality of these units for clarity, it is recognized that various configurations and combinations thereof are within the scope of the disclosure. The functionality of specific units as disclosed in the disclosure should not be construed as limiting the scope of the present disclosure. Consequently, alternative arrangements and substitutions of units, provided they achieve the intended functionality described herein, are considered to be encompassed within the scope of the present disclosure.

We claim:
1. A method for implementing execution of one or more tasks, the method
comprising:
- obtaining, by a transceiver unit [302], one or more policy provisioning events;
- creating, by a creation unit [304], a set of task query events based on the one or more policy provisioning events; and
- transmitting, by the transceiver unit [302] to a Platform Scheduler, a dynamic query builder based on the set of task query events, wherein the Platform Scheduler is to execute one or more tasks based on the dynamic query builder.

2. The method as claimed in claim 1, wherein the method further comprises storing, by a storage unit [306], the one or more policy provisioning events post obtaining the one or more policy provisioning events.
3. The method as claimed in claim 1, wherein for obtaining the one or more policy provisioning events, the method further comprises: fetching, by the transceiver unit [302] from a storage unit [306], the one or more policy provisioning events.
4. The method as claimed in claim 1, wherein the dynamic query builder is transmitted by the transceiver unit [302] to the Platform Scheduler through a sequence of one or more task query events among the set of task query events.
5. The method as claimed in claim 4, wherein the set of task query events is related to one or more of a creation task, a modification task, a deletion task, and an execution task.

6. The method as claimed in claim 1, wherein transmitting, by the transceiver unit [302] to the Platform Scheduler, the dynamic query builder is further based on a resource hysteresis information.
7. The method as claimed in claim 6, wherein the method further comprises receiving, by the transceiver unit [302] from the Platform Scheduler, a notification related to a breached output, in an event of satisfaction of one or more breach conditions based on the resource hysteresis information.
8. A system for implementing execution of one or more tasks, the system comprising:

- a transceiver unit [302], wherein the transceiver unit [302] is configured to obtain one or more policy provisioning events;
- a creation unit [304] connected at least to the transceiver unit [302], wherein the creation unit [304] is configured to create a set of task query events based on the one or more policy provisioning events; and
- the transceiver unit [302] is further configured to transmit, to a Platform Scheduler, a dynamic query builder based on the set of task query events, wherein the Platform Scheduler is to execute one or more tasks based on the dynamic query builder.
9. The system as claimed in claim 8, further comprising a storage unit [306],
wherein the storage unit [306] is configured to store the one or more policy
provisioning events post obtaining the one or more policy provisioning events.

10. The system as claimed in claim 8, wherein for obtaining the one or more policy provisioning events, the system further comprises: the transceiver unit [302] configured to fetch, from a storage unit [306], the one or more policy provisioning events.
11. The system as claimed in claim 8, wherein the transceiver unit [302] is configured to transmit the dynamic query builder to the Platform Scheduler through a sequence of one or more task query events among the set of task query events.
12. The system as claimed in claim 11, wherein the set of task query events is related to one or more of a creation task, a modification task, a deletion task, and an execution task.
13. The system as claimed in claim 8, wherein the transceiver unit [302] is configured to transmit, to the Platform Scheduler, the dynamic query builder further based on a resource hysteresis information.
14. The system as claimed in claim 13, wherein the transceiver unit [302] is further configured to: receive, from the Platform Scheduler, a notification related to a breached output, in an event of satisfaction of one or more breach conditions based on the resource hysteresis information.

Documents

Application Documents

# Name Date
1 202321061576-STATEMENT OF UNDERTAKING (FORM 3) [13-09-2023(online)].pdf 2023-09-13
2 202321061576-PROVISIONAL SPECIFICATION [13-09-2023(online)].pdf 2023-09-13
3 202321061576-POWER OF AUTHORITY [13-09-2023(online)].pdf 2023-09-13
4 202321061576-FORM 1 [13-09-2023(online)].pdf 2023-09-13
5 202321061576-FIGURE OF ABSTRACT [13-09-2023(online)].pdf 2023-09-13
6 202321061576-DRAWINGS [13-09-2023(online)].pdf 2023-09-13
7 202321061576-Proof of Right [09-01-2024(online)].pdf 2024-01-09
8 202321061576-FORM-5 [12-09-2024(online)].pdf 2024-09-12
9 202321061576-ENDORSEMENT BY INVENTORS [12-09-2024(online)].pdf 2024-09-12
10 202321061576-DRAWING [12-09-2024(online)].pdf 2024-09-12
11 202321061576-CORRESPONDENCE-OTHERS [12-09-2024(online)].pdf 2024-09-12
12 202321061576-COMPLETE SPECIFICATION [12-09-2024(online)].pdf 2024-09-12
13 202321061576-Request Letter-Correspondence [20-09-2024(online)].pdf 2024-09-20
14 202321061576-Power of Attorney [20-09-2024(online)].pdf 2024-09-20
15 202321061576-Form 1 (Submitted on date of filing) [20-09-2024(online)].pdf 2024-09-20
16 202321061576-Covering Letter [20-09-2024(online)].pdf 2024-09-20
17 202321061576-CERTIFIED COPIES TRANSMISSION TO IB [20-09-2024(online)].pdf 2024-09-20
18 Abstract 1.jpg 2024-10-08
19 202321061576-FORM 3 [08-10-2024(online)].pdf 2024-10-08
20 202321061576-ORIGINAL UR 6(1A) FORM 1 & 26-070125.pdf 2025-01-14