Abstract: The present disclosure relates to a method and a system for periodic synchronisation of resources. The disclosure encompasses: transmitting, by a transceiver unit [302] from an auditor unit (AU) to a platform scheduler and cron jobs (PSC) unit via an interface, a request message for performing an automatic synchronisation of resources, wherein the request message comprises at least a set of time intervals; receiving, by the transceiver unit [302] at the AU from the PSC unit, an acknowledgement associated with the request message, wherein the acknowledgement is at least one of a positive acknowledgement and a negative acknowledgement; receiving, by the transceiver unit [302] at the AU, a plurality of scheduled tasks associated with the set of time intervals based on the request message and the positive acknowledgement; and synchronising, by a processing unit [304] at the AU, one or more resources at the plurality of scheduled tasks. [FIG. 4]
FORM 2
THE PATENTS ACT, 1970
(39 OF 1970)
&
THE PATENT RULES, 2003
COMPLETE SPECIFICATION
(See section 10 and rule 13)
“METHOD AND SYSTEM FOR PERIODIC SYNCHRONISATION OF
RESOURCES”
We, Jio Platforms Limited, an Indian National, of Office - 101, Saffron, Nr. Centre Point,
Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India.
The following specification particularly describes the invention and the manner in which it is
to be performed.
METHOD AND SYSTEM FOR PERIODIC SYNCHRONISATION OF RESOURCES
FIELD OF INVENTION
[0001] The present disclosure relates generally to the field of wireless communication
systems. More particularly, embodiments of the present disclosure relate to a method and
system for performing periodic synchronisation of resources.
BACKGROUND
[0002] The following description of the related art is intended to provide background
information pertaining to the field of the disclosure. This section may include certain aspects
of the art that may be related to various features of the present disclosure. However, it should
be appreciated that this section is used only to enhance the understanding of the reader with
respect to the present disclosure, and not as admissions of the prior art.
[0003] Wireless communication technology has rapidly evolved over the past few
decades, with each generation bringing significant improvements and advancements. The first
generation of wireless communication technology was based on analog technology and offered
only voice services. However, with the advent of the second-generation (2G) technology,
digital communication and data services became possible, and text messaging was introduced.
3G technology marked the introduction of high-speed internet access, mobile video calling,
and location-based services. The fourth generation (4G) technology revolutionized wireless
communication with faster data speeds, better network coverage, and improved security.
Currently, the fifth generation (5G) technology is being deployed, promising even faster data
speeds, low latency, and the ability to connect multiple devices simultaneously. With each
generation, wireless communication technology has become more advanced, sophisticated, and
capable of delivering more services to its users. The integration of 4G and 5G wireless
technologies with cloud computing facilitates a more agile and scalable ecosystem,
empowering businesses and consumers to leverage advanced computing capabilities from
virtually anywhere.
[0004] In modern computing environments, particularly in cloud computing and
virtualization, the efficient management of resources such as physical memory, Random
Access Memory (RAM), and Central Processing Unit (CPU) is critical. As organizations
increasingly rely on complex systems, maintaining accurate inventory and synchronization of
these resources becomes essential. A mismatch between an Inventory Manager (IM) and actual
hardware resources leads to inefficiencies, resource underutilization, and potential system
failures. To mitigate these challenges, an Auditor Unit (AU) performs audits of the resources
to ensure that the IM reflects the true state of the system. AU leverages a service Adapter (SA)
to facilitate communication between various microservices, enabling the AU to fetch real-time
data regarding resource utilization. AU brings inventory in close sync with real time
available/used resources and minimizes the mismatch between the IM and real time hardware
resources. Current resource management systems often face significant challenges in
maintaining accurate inventories of physical and virtual resources, including memory, RAM,
and CPU. It is to be noted that discrepancies between the Inventory Manager (IM) and actual
hardware may lead to various operational issues.
[0005] In the current scenario, a sync request is manually generated for synchronization
of such resources, and hence resources cannot be synchronized periodically and accurately.
Current solutions for managing resource auditing and synchronization frequently lack robust
fault tolerance mechanisms. As a result, when an instance of the AU encounters a failure during
request processing, it leads to service disruption due to the inability to process such a sync
request. Many existing systems operate with a single instance of auditing services, creating a
vulnerability where the failure of that instance compromises the entire system's functionality.
[0006] Thus, there exists an imperative need in the art to automatically sync resources
via a fault-tolerant interface that operates in a high availability mode, which the present
disclosure aims to address.
SUMMARY
[0007] This section is provided to introduce certain aspects of the present disclosure in a
simplified form that are further described below in the detailed description. This summary is
not intended to identify the key features or the scope of the claimed subject matter.
[0008] An aspect of the present disclosure may relate to a method for periodic
synchronisation of resources. The method includes transmitting, by a transceiver unit from an
auditor unit (AU) to a platform scheduler and cron jobs (PSC) unit via an interface, a request
message for performing an automatic synchronisation of resources, wherein the request
message comprises at least a set of time intervals. Next, the method includes receiving, by the
transceiver unit at the AU from the PSC unit, an acknowledgement associated with the request
message, wherein the acknowledgement is at least one of a positive acknowledgement and a
negative acknowledgement. Next, the method includes receiving, by the transceiver unit at the
AU, a plurality of scheduled tasks associated with the set of time intervals based on the request
message and the positive acknowledgement. Next, the method includes synchronising, by a
processing unit at the AU, one or more resources at the plurality of scheduled tasks.
[0009] In an exemplary aspect of the present disclosure, the method further comprises:
allocating, by the processing unit via the AU, a first instance associated with the AU for
processing at least one scheduled task; detecting, by the processing unit at the AU, a failure
associated with processing the at least one scheduled task by the first instance; and allocating,
by the processing unit via the AU, a second instance associated with the AU for processing the
at least one scheduled task based on the failure.
[0010] In an exemplary aspect of the present disclosure, synchronising the one or more
resources at the plurality of scheduled tasks is based on comparison of a number of resources
available in real time and a number of resources managed by an Inventory Manager (IM).
[0011] In an exemplary aspect of the present disclosure, if the number of resources
available in real time is exhausted, the method comprises transmitting, by the transceiver unit,
from the AU to a service adapter (SA), a notification instructing the SA to migrate one or more
resources to different network function components.
[0012] In an exemplary aspect of the present disclosure, if the number of resources
managed by the IM is exhausted, the method comprises transmitting, by the transceiver unit,
from the AU to the IM, a notification instructing the IM to terminate idle network function
components.
[0013] In an exemplary aspect of the present disclosure, the AU and the PSC unit are
connected with the interface, wherein the interface is an AU_PS interface.
[0014] In an exemplary aspect of the present disclosure, the method further comprises
creating, by the processing unit at the PSC unit, the plurality of scheduled tasks associated with
the automatic synchronisation of the resources.
[0015] In an exemplary aspect of the present disclosure, the method further comprises
storing, by the processing unit via the PSC unit, a set of details associated with the plurality of
scheduled tasks in a database.
[0016] In an exemplary aspect of the present disclosure, the positive acknowledgement is
transmitted by the PSC unit to the AU upon successful creation of the plurality of scheduled
tasks based on the request message, and wherein the negative acknowledgement is transmitted
by the PSC unit to the AU upon failure of creation of the plurality of scheduled tasks based on
the request message.
[0017] In an exemplary aspect of the present disclosure, the one or more resources are
synchronised periodically at the set of time intervals associated with the plurality of scheduled
tasks.
[0018] Another aspect of the present disclosure may relate to a system for periodic
synchronisation of resources. The system comprises: a transceiver unit, wherein the transceiver
unit is configured to: transmit, from an auditor unit (AU) to a platform scheduler and cron jobs (PSC)
unit via an interface, a request message to perform an automatic synchronisation of resources,
wherein the request message comprises at least a set of time intervals; receive, at the AU from
the PSC unit, an acknowledgement associated with the request message, wherein the
acknowledgement is at least one of a positive acknowledgement and a negative
acknowledgement; receive, at the AU, a plurality of scheduled tasks associated with the set of
time intervals based on the request message and the positive acknowledgement; a processing
unit connected at least with the transceiver unit, configured to synchronise, at the AU, one or
more resources at the plurality of scheduled tasks.
OBJECTS OF THE INVENTION
[0019] Some of the objects of the present disclosure, which at least one embodiment
disclosed herein satisfies, are listed herein below.
[0020] It is an object of the present disclosure to provide a system and a method for
periodic synchronisation of resources.
[0021] It is another object of the present disclosure to provide a system and a method for
automatically syncing resources via the AU_PS interface upon the triggering of a plurality of
events related to scheduled tasks, which are triggered periodically based on their scheduled time.
[0022] It is another object of the present disclosure to provide a solution that ensures auto
syncing of resources at regular intervals and eliminates manual request generation.
DESCRIPTION OF THE DRAWINGS
[0023] The accompanying drawings, which are incorporated herein, and constitute a part
of this disclosure, illustrate exemplary embodiments of the disclosed methods and systems in
which like reference numerals refer to the same parts throughout the different drawings.
Components in the drawings are not necessarily to scale, emphasis instead being placed upon
clearly illustrating the principles of the present disclosure. Also, the embodiments shown in the
figures are not to be construed as limiting the disclosure, but the possible variants of the method
and system according to the disclosure are illustrated herein to highlight the advantages of the
disclosure. It will be appreciated by those skilled in the art that disclosure of such drawings
includes disclosure of electrical components or circuitry commonly used to implement such
components.
[0024] FIG. 1 illustrates an exemplary block diagram representation of a management and
orchestration (MANO) architecture.
[0025] FIG. 2 illustrates an exemplary block diagram of a computing device upon which
the features of the present disclosure may be implemented in accordance with exemplary
implementation of the present disclosure.
[0026] FIG. 3 illustrates an exemplary block diagram of a system for periodic
synchronisation of resources, in accordance with exemplary implementations of the present
disclosure.
[0027] FIG. 4 illustrates a method flow diagram for periodic synchronisation of resources
in accordance with exemplary implementations of the present disclosure.
[0028] FIG. 5 illustrates an exemplary system architecture for periodic synchronisation of
resources, in accordance with exemplary implementations of the present disclosure.
[0029] FIG. 6 illustrates an exemplary process flow diagram depicting a method for
periodic synchronization of resources, in accordance with the exemplary implementations of
the present disclosure.
[0030] The foregoing shall be more apparent from the following more detailed description
of the disclosure.
DETAILED DESCRIPTION
[0031] In the following description, for the purposes of explanation, various specific
details are set forth in order to provide a thorough understanding of embodiments of the present
disclosure. It will be apparent, however, that embodiments of the present disclosure may be
practiced without these specific details. Several features described hereafter may each be used
independently of one another or with any combination of other features. An individual feature
may not address any of the problems discussed above or might address only some of the
problems discussed above.
[0032] The ensuing description provides exemplary embodiments only, and is not
intended to limit the scope, applicability, or configuration of the disclosure. Rather, the ensuing
description of the exemplary embodiments will provide those skilled in the art with an enabling
description for implementing an exemplary embodiment. It should be understood that various
changes may be made in the function and arrangement of elements without departing from the
spirit and scope of the disclosure as set forth.
[0033] Specific details are given in the following description to provide a thorough
understanding of the embodiments. However, it will be understood by one of ordinary skill in
the art that the embodiments may be practiced without these specific details. For example,
circuits, systems, processes, and other components may be shown as components in block
diagram form in order not to obscure the embodiments in unnecessary detail.
[0034] Also, it is noted that individual embodiments may be described as a process which
is depicted as a flowchart, a flow diagram, a data flow diagram, a structure diagram, or a block
diagram. Although a flowchart may describe the operations as a sequential process, many of
the operations may be performed in parallel or concurrently. In addition, the order of the
operations may be re-arranged. A process is terminated when its operations are completed but
could have additional steps not included in a figure.
[0035] The word “exemplary” and/or “demonstrative” is used herein to mean serving as
an example, instance, or illustration. For the avoidance of doubt, the subject matter disclosed
herein is not limited by such examples. In addition, any aspect or design described herein as
“exemplary” and/or “demonstrative” is not necessarily to be construed as preferred or
advantageous over other aspects or designs, nor is it meant to preclude equivalent exemplary
structures and techniques known to those of ordinary skill in the art. Furthermore, to the extent
that the terms “includes,” “has,” “contains,” and other similar words are used in either the
detailed description or the claims, such terms are intended to be inclusive—in a manner similar
to the term “comprising” as an open transition word—without precluding any additional or
other elements.
[0036] As used herein, a “processing unit” or “processor” or “operating processor”
includes one or more processors, wherein processor refers to any logic circuitry for processing
instructions. A processor may be a general-purpose processor, a special purpose processor, a
conventional processor, a digital signal processor, a plurality of microprocessors, one or more
microprocessors in association with a Digital Signal Processing (DSP) core, a controller, a
microcontroller, Application Specific Integrated Circuits, Field Programmable Gate Array
circuits, any other type of integrated circuits, etc. The processor may perform signal coding,
data processing, input/output processing, and/or any other functionality that enables the
working of the system according to the present disclosure. More specifically, the processor or
processing unit is a hardware processor.
[0037] As used herein, “a user equipment”, “a user device”, “a smart-user-device”, “a
smart-device”, “an electronic device”, “a mobile device”, “a handheld device”, “a wireless
communication device”, “a mobile communication device”, “a communication device” may
be any electrical, electronic and/or computing device or equipment, capable of implementing
the features of the present disclosure. The user equipment/device may include, but is not limited
to, a mobile phone, smart phone, laptop, a general-purpose computer, desktop, personal digital
assistant, tablet computer, wearable device or any other computing device which is capable of
implementing the features of the present disclosure. Also, the user device may contain at least
one input means configured to receive an input from at least one of a transceiver unit, a
processing unit, a storage unit, a detection unit and any other such unit(s) which are required
to implement the features of the present disclosure.
[0038] As used herein, “storage unit” or “memory unit” refers to a machine or computer-readable
medium including any mechanism for storing information in a form readable by a
computer or similar machine. For example, a computer-readable medium includes read-only
memory (“ROM”), random access memory (“RAM”), magnetic disk storage media, optical
storage media, flash memory devices or other types of machine-accessible storage media. The
storage unit stores at least the data that may be required by one or more units of the system to
perform their respective functions.
[0039] As used herein “interface” or “user interface” refers to a shared boundary across
which two or more separate components of a system exchange information or data. The
interface may also refer to a set of rules or protocols that define communication or interaction
of one or more modules or one or more units with each other, which also includes the methods,
functions, or procedures that may be called.
[0040] All modules, units, components used herein, unless explicitly excluded herein, may
be software modules or hardware processors, the processors being a general-purpose processor,
a special purpose processor, a conventional processor, a digital signal processor (DSP), a
plurality of microprocessors, one or more microprocessors in association with a DSP core, a
controller, a microcontroller, Application Specific Integrated Circuits (ASIC), Field
Programmable Gate Array circuits (FPGA), any other type of integrated circuits, etc.
[0041] As used herein the transceiver unit includes at least one receiver and at least one
transmitter configured respectively for receiving and transmitting data, signals, information or
a combination thereof between units/components within the system and/or connected with the
system.
[0042] As used herein, Physical and Virtual Inventory Manager (PVIM) module maintains
the inventory and its resources. After getting a request to reserve resources from PEEGN,
PVIM adds up the resources consumed by a particular network function as used resources and
removes them from free resources. Further, the PVIM updates this in the NoSQL database.
[0043] As used herein, Container Network Function (CNF) Life Cycle Manager (CNF-LM)
may capture the details of vendors, CNFs, and Container Network Function Components
(CNFCs) via create, read, and update APIs exposed by the service itself. The captured details
are stored in a database and can be further used by SA service. CNF-LM may create CNF or
individual CNFC instances. CNF-LM may scale-out the CNFs or individual CNFCs.
[0044] As used herein, Policy Execution Engine (PEEGN) module provides a network
function virtualisation (NFV) software defined network (SDN) platform functionality to
support dynamic requirements of resource management and network service orchestration in
the virtualized network. Further, the PEEGN is involved during CNF instantiation flow to
check for CNF policy and to reserve resources required to instantiate CNF at PVIM. PEEGN
supports scaling policy for CNFC.
[0045] As used herein, Capacity Manager (CMP) creates a task to monitor the performance
metrics data received for that VNF, VNFC and CNFC. Wherever there is a threshold breach,
CMP sends a trigger to NFV Platform and Decision Analytics (NPDA).
[0046] The foregoing shall be more apparent from the following more detailed description
of the disclosure.
[0047] Hereinafter, exemplary embodiments of the present disclosure will be described
with reference to the accompanying drawings.
[0048] As discussed in the background section, the current known solutions have several
shortcomings. The present disclosure aims to overcome the above-mentioned and other
existing problems in this field of technology by providing a method and a system for periodic
synchronisation of resources.
[0049] FIG. 1 illustrates an exemplary block diagram representation of a management and
orchestration (MANO) architecture/ platform [100], in accordance with exemplary
implementation of the present disclosure. The MANO architecture [100] may be developed for
managing telecom cloud infrastructure automatically, managing design or deployment design,
managing instantiation of network node(s)/ service(s) etc. The MANO architecture [100]
deploys the network node(s) in the form of Virtual Network Function (VNF) and Cloud-native/
25 Container Network Function (CNF). The system as provided by the present disclosure may
comprise one or more components of the MANO architecture [100]. The MANO architecture
[100] may be used to auto-instantiate the VNFs into the corresponding environment of the
present disclosure so that it could help in onboarding other vendor(s) CNFs and VNFs to the
platform.
[0050] As shown in FIG. 1, the MANO architecture [100] comprises a user interface layer
[102], a network function virtualization (NFV) and software defined network (SDN) design
function module [104], a platform foundation services module [106], a platform core services
module [108] and a platform resource adapters and utilities module [112]. All
the components are assumed to be connected to each other in a manner as obvious to the person
skilled in the art for implementing features of the present disclosure.
[0051] The NFV and SDN design function module [104] comprises a VNF lifecycle
manager (compute) [1042], a VNF catalogue [1044], a network services catalogue [1046], a
network slicing and service chaining manager [1048], a physical and virtual resource manager
[1050] and a CNF lifecycle manager [1052]. The VNF lifecycle manager (compute) [1042]
may be responsible for deciding on which server of the communication network the
microservice will be instantiated. The VNF lifecycle manager (compute) [1042] may manage
the overall flow of incoming/ outgoing requests during interaction with the user. The VNF
lifecycle manager (compute) [1042] may be responsible for determining which sequence to be
followed for executing the process. For example, in an AMF network function of the communication
network (such as a 5G network), the sequence for execution of processes P1 and P2, etc. The VNF
catalogue [1044] stores the metadata of all the VNFs (also CNFs in some cases). The network
services catalogue [1046] stores the information of the services that need to be run. The network
slicing and service chaining manager [1048] manages the slicing (an ordered and connected
sequence of network service/ network functions (NFs)) that must be applied to a specific
networked data packet. The physical and virtual resource manager [1050] stores the logical and
physical inventory of the VNFs. Just like the VNF lifecycle manager (compute) [1042], the
CNF lifecycle manager [1052] may be used for the CNFs lifecycle management.
[0052] The platform foundation services module [106] comprises a microservices elastic
load balancer [1062], an identity and access manager [1064], a command line interface (CLI)
[1066], a central logging manager [1068], and an event routing manager [1070]. The
microservices elastic load balancer [1062] may be used for maintaining the load balancing of
the request for the services. The identity and access manager [1064] may be used for logging
purposes. The command line interface (CLI) [1066] may be used to provide commands to
30 execute certain processes which requires changes during the run time. The central logging
manager [1068] may be responsible for keeping the logs of every service. These logs are
generated by the MANO platform [100]. These logs are used for debugging purposes. The
event routing manager [1070] may be responsible for routing the events i.e., the application
programming interface (API) hits to the corresponding services.
[0053] The platform core services module [108] comprises an NFV infrastructure
monitoring manager [1082], an assure manager [1084], a performance manager [1086], a
policy execution engine [1088], a capacity monitoring manager [1090], a release management
(mgmt.) repository [1092], a configuration manager and GCT [1094], an NFV platform
decision analytics [1096], a platform NoSQL DB [1098]; a platform schedulers and cron jobs
[1100], a VNF backup and upgrade manager [1102], a microservice auditor [1104], and a
platform operations, administration and maintenance manager [1106]. The NFV infrastructure
monitoring manager [1082] monitors the infrastructure part of the NFs. For e.g., any metrics
such as CPU utilization by the VNF. The assure manager [1084] may be responsible for
supervising the alarms the vendor may be generating. The performance manager [1086] may
be responsible for managing the performance counters. The policy execution engine (PEEGN)
[1088] may be responsible for managing all of the policies. The capacity monitoring manager
(CMM) [1090] may be responsible for sending the request to the PEEGN [1088]. The release
management (mgmt.) repository (RMR) [1092] may be responsible for managing the releases
and the images of all of the vendor's network nodes. The configuration manager and GCT
[1094] manages the configuration and GCT of all the vendors. The NFV platform decision
analytics (NPDA) [1096] helps in deciding the priority of using the network resources. It may
be further noted that the policy execution engine (PEEGN) [1088], the configuration manager
and GCT [1094] and the NPDA [1096] work together. The platform NoSQL DB [1098] may
be a database for storing all the inventory (both physical and logical) as well as the metadata
of the VNFs and CNFs. The platform schedulers and cron jobs [1100] schedules tasks such
as, but not limited to, triggering of an event, traversing the network graph, etc. The VNF backup
and upgrade manager [1102] takes backup of the images, binaries of the VNFs and the CNFs
and produces those backups on demand in case of server failure. The microservice auditor
[1104] audits the microservices. For e.g., in a hypothetical case, instances not being instantiated
by the MANO architecture [100] may be using the network resources. In such case, the
microservice auditor [1104] audits and informs the same so that resources can be released for
services running in the MANO architecture [100]. The audit assures that the services only run
on the MANO platform [100]. The platform operations, administration and maintenance
manager [1106] may be used for newer instances that are spawning.
[0054] The platform resource adapters and utilities module [112] further comprises a
platform external API adapter and gateway [1122]; a generic decoder and indexer (XML, CSV,
JSON) [1124]; a service adapter [1126]; an API adapter [1128]; and a NFV gateway [1130].
The platform external API adapter and gateway [1122] may be responsible for handling the
external services (to the MANO platform [100]) that require the network resources. The generic
decoder and indexer (XML, CSV, JSON) [1124] directly gets the data of the vendor system in
the XML, CSV, JSON format. The service adapter [1126] may be the interface provided
between the telecom cloud and the MANO architecture [100] for communication. The API
adapter [1128] may be used to connect with the virtual machines (VMs). The NFV gateway
[1130] may be responsible for providing the path to each service going to/incoming from the
MANO architecture [100].
[0055] FIG. 2 illustrates an exemplary block diagram of a computing device [200] (also
referred herein as a computer system [200]) upon which the features of the present disclosure
may be implemented in accordance with exemplary implementation of the present disclosure.
In an implementation, the computing device [200] may also implement a method for periodic
synchronisation of resources utilising the system. In another implementation, the computing
device [200] itself implements the method for periodic synchronisation of resources using one
or more units configured within the computing device [200], wherein said one or more units
are capable of implementing the features as disclosed in the present disclosure.
[0056] The computing device [200] may include a bus [202] or other communication
mechanism for communicating information, and a hardware processor [204] coupled with bus
[202] for processing information. The hardware processor [204] may be, for example, a
general-purpose microprocessor. The computing device [200] may also include a main memory
[206], such as a random-access memory (RAM), or other dynamic storage device, coupled to
the bus [202] for storing information and instructions to be executed by the processor [204].
The main memory [206] also may be used for storing temporary variables or other intermediate
information during execution of the instructions to be executed by the processor [204]. Such
instructions, when stored in non-transitory storage media accessible to the processor [204],
render the computing device [200] into a special-purpose machine that is customized to
perform the operations specified in the instructions. The computing device [200] further
includes a read only memory (ROM) [208] or other static storage device coupled to the bus
[202] for storing static information and instructions for the processor [204].
[0057] A storage device [210], such as a magnetic disk, optical disk, or solid-state drive is
provided and coupled to the bus [202] for storing information and instructions. The computing
device [200] may be coupled via the bus [202] to a display [212], such as a cathode ray tube
(CRT), Liquid crystal Display (LCD), Light Emitting Diode (LED) display, Organic LED
(OLED) display, etc. for displaying information to a computer user. An input device [214],
including alphanumeric and other keys, touch screen input means, etc. may be coupled to the
bus [202] for communicating information and command selections to the processor [204].
Another type of user input device may be a cursor controller [216], such as a mouse, a trackball,
or cursor direction keys, for communicating direction information and command selections to
the processor [204], and for controlling cursor movement on the display [212]. The input device
typically has two degrees of freedom in two axes, a first axis (e.g., x) and a second axis (e.g.,
y), that allow the device to specify positions in a plane.
[0058] The computing device [200] may implement the techniques described herein using
customized hard-wired logic, one or more ASICs or FPGAs, firmware and/or program logic
which in combination with the computing device [200] causes or programs the computing
device [200] to be a special-purpose machine. According to one implementation, the techniques
herein are performed by the computing device [200] in response to the processor [204]
executing one or more sequences of one or more instructions contained in the main memory
[206]. Such instructions may be read into the main memory [206] from another storage
medium, such as the storage device [210]. Execution of the sequences of instructions contained
in the main memory [206] causes the processor [204] to perform the process steps described
herein. In alternative implementations of the present disclosure, hard-wired circuitry may be
used in place of or in combination with software instructions.
[0059] The computing device [200] also may include a communication interface [218]
coupled to the bus [202]. The communication interface [218] provides a two-way data
communication coupling to a network link [220] that is connected to a local network [222]. For
example, the communication interface [218] may be an integrated services digital network
(ISDN) card, cable modem, satellite modem, or a modem to provide a data communication
connection to a corresponding type of telephone line. As another example, the communication
interface [218] may be a local area network (LAN) card to provide a data communication
connection to a compatible LAN. Wireless links may also be implemented. In any such
implementation, the communication interface [218] sends and receives electrical,
electromagnetic or optical signals that carry digital data streams representing various types of
information.
[0060] The computing device [200] can send messages and receive data, including
program code, through the network(s), the network link [220] and the communication interface
[218]. In the Internet example, a server [230] might transmit a requested code for an application
program through the Internet [228], the ISP [226], the local network [222], host [224] and the
communication interface [218]. The received code may be executed by the processor [204] as
it is received, and/or stored in the storage device [210], or other non-volatile storage for later
execution.
[0061] The computing device [200] encompasses a wide range of electronic devices
capable of processing data and performing computations. Examples of the computing device
[200] include, but are not limited to, personal computers, laptops, tablets, smartphones,
servers, and embedded systems. The devices may operate independently or as part of a network
and can perform a variety of tasks such as data storage, retrieval, and analysis. Additionally,
the computing device [200] may include peripheral devices, such as monitors, keyboards, and
printers, as well as integrated components within larger electronic systems, showcasing their
versatility in various technological applications.
[0062] Referring to FIG. 3, an exemplary block diagram of a system [300] for periodic
synchronisation of resources, is shown, in accordance with the exemplary implementations of
the present disclosure. The system [300] comprises at least one transceiver unit [302] and at
least one processing unit [304]. Also, all of the components/ units of the system [300] are
assumed to be connected to each other unless otherwise indicated below. As shown in the
figures, all units shown within the system [300] should also be assumed to be connected to each
other. Also, in FIG. 3 only a few units are shown, however, the system [300] may comprise
multiple such units or the system [300] may comprise any such numbers of said units, as
required to implement the features of the present disclosure. Further, in an implementation, the
system [300] may be present in a user device to implement the features of the present
disclosure. In another implementation, the system [300] may reside in a server or a network
entity. In yet another implementation, the system [300] may reside partly in the server/ network
entity.
[0063] The system [300] is configured for performing periodic synchronisation of
resources, with the help of the interconnection between the components/units of the system
[300].
[0064] The system [300] comprises a transceiver unit [302]. The transceiver unit [302] is
configured to transmit, from an auditor unit (AU) to a platform scheduler and cron jobs (PSC)
unit via an interface, a request message (e.g., a task create request) to perform an automatic
synchronisation of resources. The request message comprises at least a set of time intervals.
For example, the request message may comprise a plurality of events related to scheduling of
a plurality of tasks (also referred to as a plurality of scheduled tasks).
[0065] The AU is a component responsible for monitoring, assessing, and reporting of
various resources at an Inventory Manager (IM) within an environment. The PSC unit is
responsible for managing the scheduling and execution of tasks within an environment. In an
implementation, the AU is implemented as a service. The AU sends a task create request to the
PSC unit.
[0066] In an implementation, the resources comprise physical resources and virtual
resources. The physical resources may include but are not limited to, a physical memory (e.g.,
used memory, free memory, cache memory, etc.), Random Access Memory (RAM) (e.g.,
capacity, type and speed of the RAM), and a Central Processing Unit (CPU) (e.g., CPU speed,
utilization, etc.). The virtual resources may include but are not limited to, virtual memory,
virtual CPU, network resources (e.g., bandwidth allocation, virtual network interfaces, network
latency, etc.).
[0067] The transceiver unit [302] of the system [300] is further configured to receive, at
the AU from the PSC unit, an acknowledgement (also referred to as acknowledgement
message) associated with the request message, wherein the acknowledgement is at least one of a
positive acknowledgement and a negative acknowledgement. The AU and the PSC unit are
connected with the interface. In an implementation, the interface is an AU_PS interface. The
PSC unit transmits via the AU_PS interface to the AU, an acknowledgement message in
response to the request message.
[0068] The AU_PS interface may connect the AU and the PSC unit. The AU_PS interface
allows for bidirectional communication between the AU and the PSC unit. In an embodiment,
the AU_PS interface is configured to facilitate exchange of information using hypertext
transfer protocol (HTTP) REST application programming interface (API). In an embodiment, the
HTTP REST API is used in conjunction with JSON and/or XML communication media. In another
embodiment, the AU_PS interface is configured to facilitate exchange of information by
establishing a web-socket connection between the AU and the PSC unit. A web-socket
connection may involve establishing a persistent connectivity between the AU and the PSC
unit. An example of the web-socket based communication includes, without limitation, a
transmission control protocol (TCP) connection. In such a connection, information, such as
operational status, health, etc. of different components may be exchanged through the interface
using a ping-pong-based communication.
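Purely as a non-limiting illustration, the following sketch shows how the task create request described above could be carried over an HTTP REST realisation of the AU_PS interface with a JSON body. The endpoint path, payload field names and acknowledgement format are assumptions introduced only for this example and are not prescribed by the present disclosure.

```python
# Illustrative sketch only: the endpoint, payload fields and acknowledgement
# format are assumed for this example, not prescribed by the disclosure.
# Requires the third-party 'requests' package.
import requests

PSC_BASE_URL = "http://psc-unit.example:8080"  # assumed address of the PSC unit

def send_task_create_request(time_intervals):
    """AU -> PSC unit: request automatic synchronisation at the given time intervals."""
    payload = {
        "requestType": "AUTO_SYNC",       # assumed field naming
        "timeIntervals": time_intervals,  # e.g. cron-like expressions
    }
    # AU_PS interface realised as an HTTP REST API with a JSON body
    response = requests.post(f"{PSC_BASE_URL}/au-ps/v1/tasks", json=payload, timeout=5)
    ack = response.json()
    if response.ok and ack.get("status") == "POSITIVE_ACK":
        return ack  # the plurality of scheduled tasks was created successfully
    # negative acknowledgement: creation of the scheduled tasks failed
    raise RuntimeError(f"PSC unit rejected the sync request: {ack}")

# Example usage: request synchronisation every 15 minutes and every 6 hours
# ack = send_task_create_request(["*/15 * * * *", "0 */6 * * *"])
```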
[0069] The positive acknowledgement is transmitted by the PSC unit to the AU upon
successful creation of the plurality of scheduled tasks based on the request message. The PSC
unit is configured to create the plurality of scheduled tasks. The negative acknowledgement is
transmitted by the PSC unit to the AU upon failure of creation of the plurality of scheduled
tasks based on the request message.
[0070] Further, the transceiver unit [302] of the system [300] is configured to receive, at
the AU, the plurality of scheduled tasks associated with the set of time intervals based on the
request message and the positive acknowledgement.
[0071] Upon receiving the request message from the AU, the transceiver unit [302]
receives the plurality of scheduled tasks from the PSC unit. Each task may include, but is not
limited to, detailed information regarding execution parameters, ensuring that the AU is
equipped with the necessary data to manage its operations effectively. The plurality of
scheduled tasks is associated with a predefined set of time intervals, which dictate their
execution timing. The transceiver unit [302] ensures that such time intervals are accurately
communicated to the AU, allowing for precise scheduling and execution of the plurality of
scheduled tasks.
[0072] In an implementation, the processing unit [304] is configured to create, at the PSC
unit, the plurality of scheduled tasks associated with the automatic synchronisation of the
resources. The processing unit [304] via the PSC unit stores a set of details associated with the
plurality of scheduled tasks in a database. The set of details may include but is not limited to,
a task identifier, a task description, priority level, status, and a completion criterion.
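By way of a non-limiting example, the set of details stored by the PSC unit for each scheduled task may be modelled as a simple record, as sketched below. The field names, default values and the in-memory stand-in for the database are assumptions made only for illustration.

```python
# Illustrative sketch of a scheduled-task record as stored by the PSC unit.
# Field names, defaults and the in-memory "database" are assumptions.
from dataclasses import dataclass, asdict
from typing import Dict

@dataclass
class ScheduledTask:
    task_id: str                                        # task identifier
    description: str                                    # task description
    priority: int                                       # priority level
    status: str = "SCHEDULED"                           # status of the task
    completion_criterion: str = "IM_MATCHES_REAL_TIME"  # completion criterion
    interval: str = "*/30 * * * *"                      # associated time interval

class TaskStore:
    """Stand-in for the database holding details of the scheduled tasks."""
    def __init__(self) -> None:
        self._tasks: Dict[str, dict] = {}

    def save(self, task: ScheduledTask) -> None:
        self._tasks[task.task_id] = asdict(task)

    def get(self, task_id: str) -> dict:
        return self._tasks[task_id]

# Example usage
store = TaskStore()
store.save(ScheduledTask("sync-001", "Periodic resource sync", priority=1))
```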
[0073] In an implementation, the processing unit [304] is connected at least with the
transceiver unit [302]. The processing unit [304] is configured to synchronise, at the AU, one
or more resources at the plurality of scheduled tasks.
[0074] The synchronisation of the one or more resources at the plurality of scheduled tasks
is based on comparison of a number of resources available in real time and a number of
resources managed by an Inventory Manager (IM). The processing unit [304] at the AU
performs a comparative analysis between the number of resources currently available in real
time and a total number of resources managed by the IM. This comparison identifies potential
shortfalls or excesses in resource allocation for the plurality of scheduled tasks.
[0075] If the number of resources available in real time is exhausted, the transceiver unit
[302] is configured to transmit, from the AU to a service adapter (SA), a notification instructing
the SA to migrate one or more resources to different network function components. In an
implementation, the resources comprise containers, and the network function components
comprise container network function components (CNFCs).
[0076] If the number of resources managed by the IM is exhausted, the transceiver unit
[302] is configured to transmit, from the AU to the IM, a notification instructing the IM to
terminate idle network function components. In an implementation, the network function
components comprise container network function components (CNFCs). By terminating idle
CNFCs, the corresponding resources may be freed.
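A minimal, non-limiting sketch of the comparison and the two notification branches described above is given below. The sa_client and im_client helpers and their method names are hypothetical placeholders standing in for the Service Adapter (SA) and Inventory Manager (IM) interfaces.

```python
# Illustrative sketch of the synchronisation check performed at a scheduled task.
# 'sa_client' and 'im_client' are hypothetical helpers standing in for the
# Service Adapter (SA) and Inventory Manager (IM) interfaces.

def synchronise_resources(sa_client, im_client):
    real_time_free = sa_client.get_free_resource_count()  # resources available in real time
    im_free = im_client.get_free_resource_count()          # resources managed by the IM

    if real_time_free <= 0:
        # Real-time resources exhausted: notify the SA to migrate one or more
        # resources (e.g. containers) to different network function components.
        sa_client.notify_migrate_resources()
    elif im_free <= 0:
        # IM-managed resources exhausted: notify the IM to terminate idle
        # network function components so that their resources are freed.
        im_client.notify_terminate_idle_components()
    elif real_time_free != im_free:
        # Mismatch between the IM and the real-time state: update the inventory
        # so that the IM reflects the true state of the system.
        im_client.update_inventory(real_time_free)
```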
[0077] The processing unit [304] is further configured to allocate, via the AU, a first
instance associated with the AU to process at least one scheduled task. Further, the processing
unit [304] is configured to detect, at the AU, a failure associated with processing the at least
one scheduled task by the first instance.
[0078] For example, during the execution of a scheduled task by the first instance, the
processing unit [304] at the AU actively checks for any failures. Failure may include issues
like timeouts, errors in task execution, or unexpected results.
[0079] Thereafter, the processing unit [304] allocates, via the AU, a second instance
associated with the AU to process the at least one scheduled task based on the failure. For
example, upon detecting a failure, the processing unit [304] responds by allocating the second
instance associated with the AU. Such a second instance acts as a backup or an alternative
execution environment designed to handle the same scheduled task. The processing unit [304]
monitors the second instance to ensure that it successfully completes the scheduled task. If
successful, it logs the outcome and any relevant metrics in the database.
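The failover behaviour described above may be sketched as follows, assuming hypothetical instance objects exposing a run method; the point illustrated is that a failure detected in the first instance causes the same scheduled task to be handed to the second instance and the outcome to be logged.

```python
# Illustrative sketch of instance failover at the AU. The instance objects,
# their 'run' method and the logger are assumptions made for this example.
import logging

logger = logging.getLogger("auditor_unit")

def process_with_failover(task, first_instance, second_instance):
    try:
        result = first_instance.run(task)        # first instance processes the task
    except Exception as failure:                 # timeout, execution error, etc.
        logger.warning("First instance failed for %s: %s", task, failure)
        result = second_instance.run(task)       # second instance handles the same task
    logger.info("Task %s completed: %s", task, result)  # log outcome and metrics
    return result
```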
[0080] Further, in accordance with the present disclosure, it is to be acknowledged that the
functionality described for the various components/units can be implemented interchangeably.
While specific embodiments may disclose a particular functionality of these units for clarity, it
is recognized that various configurations and combinations thereof are within the scope of the
disclosure. The functionality of specific units as disclosed in the disclosure should not be
construed as limiting the scope of the present disclosure. Consequently, alternative
arrangements and substitutions of units, provided they achieve the intended functionality
described herein, are considered to be encompassed within the scope of the present disclosure.
[0081] Referring to FIG. 4, an exemplary method flow diagram [400] for periodic
synchronisation of resources, in accordance with exemplary implementations of the present
disclosure is shown. In an implementation, the method [400] is performed by the system [300].
Further, in an implementation, the system [300] may be present in a server device to implement
the features of the present disclosure. Also, as shown in FIG. 4, the method [400] starts at step
[402].
[0082] At step [404], the method [400] as disclosed by the present disclosure comprises
transmitting, by a transceiver unit [302] from an auditor unit (AU) to a platform scheduler and
cron jobs (PSC) unit via an interface, a request message (e.g., a task create request) for
performing an automatic synchronisation of resources, wherein the request message comprises
at least a set of time intervals. For example, the request message may comprise a plurality of
events related to scheduling of a plurality of tasks (also referred to as a plurality of scheduled
tasks).
[0083] In an implementation, the resources comprise physical resources and virtual
resources. The physical resources may include but are not limited to, a physical memory (e.g.,
used memory, free memory, cache memory, etc.), Random Access Memory (RAM) (e.g.,
capacity, type and speed of the RAM), and a Central Processing Unit (CPU) (e.g., CPU speed,
utilization, etc.). The virtual resources may include but are not limited to, virtual memory,
virtual CPU, network resources (e.g., bandwidth allocation, virtual network interfaces, network
latency, etc.).
[0084] Next, at step [406], the method [400] as disclosed by the present disclosure
comprises receiving, by the transceiver unit [302] at the AU from the PSC unit, an
acknowledgement associated with the request message, wherein the acknowledgement is at least
one of a positive acknowledgement and a negative acknowledgement. The AU and the PSC
unit are connected with the interface. In an implementation, the interface is an AU_PS
interface.
[0085] The PSC unit transmits via the AU_PS interface to the AU, an acknowledgement
message in response to the request message. The positive acknowledgement is transmitted by
the PSC unit to the AU upon successful creation of the plurality of scheduled tasks based on
the request message. The PSC unit is configured to create the plurality of scheduled tasks. The
negative acknowledgement is transmitted by the PSC unit to the AU upon failure of creation
of the plurality of scheduled tasks based on the request message.
[0086] Next, at step [408], the method [400] as disclosed by the present disclosure
comprises receiving, by the transceiver unit [302] at the AU, a plurality of scheduled tasks
associated with the set of time intervals based on the request message and the positive
acknowledgement.
[0087] Upon receiving the request message from the AU, the transceiver unit [302]
receives the plurality of scheduled tasks from the PSC unit. Each task may include, but is not limited
to, detailed information regarding execution parameters, ensuring that the AU is equipped with
the necessary data to manage its operations effectively. The plurality of scheduled tasks is
associated with a predefined set of time intervals, which dictate their execution timing. The
transceiver unit [302] ensures that these time intervals are accurately communicated to the AU,
allowing for precise scheduling and execution of the plurality of scheduled tasks.
[0088] The method further comprises creating, by a processing unit [304] at the PSC unit,
the plurality of scheduled tasks associated with the automatic synchronisation of the resources.
The method comprises storing, by the processing unit [304] via the PSC unit, a set of details
associated with the plurality of scheduled tasks in a database. In an implementation, the
processing unit [304] is connected at least with the transceiver unit [302]. The set of details
may include but is not limited to, a task identifier, a task description, priority level, status, and
a completion criterion.
[0089] Next, at step [410], the method [400] as disclosed by the present disclosure
comprises synchronising, by the processing unit [304] at the AU, one or more resources at the
plurality of scheduled tasks, wherein synchronising the one or more resources at the plurality
of scheduled tasks is based on comparison of a number of resources available in real time and
a number of resources managed by an Inventory Manager (IM). In an example, the number of
resources available in real time is obtained from a Service Adapter (SA).
[0090] For example, the processing unit [304] at the AU performs a comparative analysis
between the number of resources currently available in real time and a total number of
resources managed by the IM. This comparison identifies potential shortfalls or excesses in
resource allocation for the plurality of scheduled tasks.
[0091] If the number of resources available in real time is exhausted, the method [400]
comprises transmitting, by the transceiver unit, from the AU to a service
adapter (SA), a notification instructing the SA to migrate one or more resources to different
network function components.
[0092] If the number of resources managed by the IM is exhausted, the method [400]
comprises transmitting, by the transceiver unit, from the AU to the IM, a notification
instructing the IM to terminate idle network function components.
[0093] The method further comprises: allocating, by the processing unit [304] via the AU,
a first instance associated with the AU for processing at least one scheduled task; detecting, by
10 the processing unit [304] at the AU, a failure associated with processing the at least one
scheduled task by the first instance; and allocating, by the processing unit [304] via the AU, a
second instance associated with the AU for processing the at least one scheduled task based on
the failure.
[0094] For example, during the execution of a scheduled task by the first instance, the
processing unit [304] at the AU actively checks for any failures. Failure may include issues
like timeouts, errors in task execution, or unexpected results. For example, upon detecting a
failure, the processing unit [304] responds by allocating the second instance associated with
the AU. Such a second instance acts as a backup or an alternative execution environment
designed to handle the same scheduled task. The processing unit [304] monitors the second
instance to ensure that it successfully completes the scheduled task. If successful, it logs the
outcome and any relevant metrics in the database.
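As a non-limiting illustration, the overall flow of the method [400] may be sketched as a single driver routine tying steps [404] to [410] together. The transceiver and processor helper objects and their method names are assumptions introduced only for this example.

```python
# Illustrative end-to-end sketch of method [400]: transmit the sync request,
# check the acknowledgement, receive the scheduled tasks and synchronise at
# each trigger. The 'transceiver' and 'processor' objects are hypothetical.

def run_periodic_synchronisation(transceiver, processor, time_intervals):
    ack = transceiver.send_sync_request(time_intervals)      # step [404]
    if ack.get("status") != "POSITIVE_ACK":                  # step [406]
        return  # negative acknowledgement: the tasks were not created
    scheduled_tasks = transceiver.receive_scheduled_tasks()  # step [408]
    for task in scheduled_tasks:                             # step [410]
        processor.synchronise_resources(task)
```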
[0095] Thereafter, the method [400] terminates at step [412].
[0096] FIG. 5 illustrates an exemplary system architecture for periodic synchronisation of
resources, in accordance with exemplary implementations of the present disclosure. Referring
to FIG. 5, the system [500] comprises various sub-systems/units such as: at least one auditor
unit (AU) [502], at least one platform scheduler and cron jobs (PSC) unit [504], a physical
and virtual inventory manager (PVIM) [506], a service adapter (SA) [508] and a database [510].
[0097] In an implementation, the AU [502] is connected with the PSC unit [504] and
transmits a request message (e.g., a task create request) to the PSC unit [504] via an interface
to perform an automatic synchronisation of resources. The request message comprises at least
a set of time intervals. Further, the AU [502] is configured to receive from the PSC unit [504],
an acknowledgement (also referred to as acknowledgement message) associated with the
request message. The acknowledgement is at least one of a positive acknowledgement and a
negative acknowledgement. The AU [502] and the PSC unit [504] are connected with the
interface. In an implementation, the interface is an AU_PS interface which is HTTP based.
[0098] The positive acknowledgement is transmitted by the PSC unit [504] to the AU
[502] upon successful creation of a plurality of scheduled tasks based on the request message.
The PSC unit [504] is configured to create the plurality of scheduled tasks. The negative
acknowledgement is transmitted by the PSC unit [504] to the AU [502] upon failure of creation
of the plurality of scheduled tasks based on the request message. Further, the AU [502] is
configured to receive the plurality of scheduled tasks associated with the set of time intervals
based on the request message and the positive acknowledgement.
[0099] In an implementation, the PSC unit [504] stores a set of details associated with the
plurality of scheduled tasks in the database [510]. The set of details may include but is not
limited to, a task identifier, a task description, priority level, status, and a completion criterion.
[0100] In an implementation, the AU [502] is configured to synchronise one or more
resources at the plurality of scheduled tasks. The synchronisation of the one or more resources
at the plurality of scheduled tasks is based on comparison of a number of resources available
in real time and a number of resources managed by the PVIM [506] (hereinafter also referred
to as an Inventory Manager (IM)). The AU [502] performs a comparative analysis between the
number of resources currently available in real time and a total number of resources managed
by the IM [506]. This comparison identifies potential shortfalls or excesses in resource
allocation for the plurality of scheduled tasks. The IM [506] updates the database [510] with
real-time inventory data.
[0101] In an implementation, the SA [508] forwards collected real-time data from
microservices to the AU [502] for auditing purposes. This ensures that the AU [502] has access
to the latest information needed for verification. After comparing the real-time data received
from the SA [508] with the records in the IM [506], the AU [502] determines if there are any
discrepancies. If discrepancies are found, it sends Application Programming Interface (API)
requests to the IM [506] to update the inventory accordingly.
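A short, non-limiting sketch of this reconciliation step follows, assuming a per-resource dictionary view of the data received from the SA [508] and of the IM [506] records; the record format and the update call on the IM client are illustrative assumptions.

```python
# Illustrative reconciliation sketch: compare the real-time usage reported by
# the SA with the IM records and issue an update request for any discrepancy.
# The dictionary format and 'im_client.update_resource' call are assumptions.

def reconcile_inventory(real_time_data: dict, im_records: dict, im_client) -> list:
    """Return the identifiers of resources whose IM records were corrected."""
    corrected = []
    for resource_id, actual in real_time_data.items():
        recorded = im_records.get(resource_id)
        if recorded != actual:                              # discrepancy found
            im_client.update_resource(resource_id, actual)  # API request to the IM
            corrected.append(resource_id)
    return corrected
```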
[0102] Further, the AU [502] allocates a first instance to process at least one scheduled
task. Further, the AU [502] detects a failure associated with processing of the at least one
scheduled task by the first instance. Thereafter, the AU [502] allocates a second instance to
process the at least one scheduled task. For example, upon detecting a failure, the AU [502]
responds by allocating the second instance. Such a second instance acts as a backup or an
alternative execution environment designed to handle the same scheduled task. The AU [502]
monitors the second instance to ensure that it successfully completes the scheduled task. If
successful, it logs the outcome and any relevant metrics in the database [510]. This way, the
system performs periodic synchronisation of one or more resources: even if one instance goes
down during the processing of at least one scheduled task, the next available instance will
take care of the message request.
[0103] Referring to FIG. 6, an exemplary process flow diagram depicting a method for
periodic synchronization of resources is shown, in accordance with the exemplary implementations
of the present disclosure.
[0104] The process flow is explained as follows: At step S1, the process [600] comprises
transmitting, from an auditor unit (AU) [502] to a platform scheduler and cron jobs unit [504]
(PSCU) (also referred to as PSC unit) via an interface, a request message (e.g., a task create
request) to perform an automatic synchronisation of resources. In an implementation, the AU
[502] is implemented as a service. The AU [502] sends a task create request to the PSC unit
[504].
[0105] At step S2, the process [600] comprises receiving, at the AU [502] from the PSC unit [504], an acknowledgement (also referred to as an acknowledgement message) associated with the request message. The acknowledgement is at least one of a positive acknowledgement and a negative acknowledgement. The AU [502] and the PSC unit [504] are connected with the interface. In an implementation, the interface is an AU_PS interface.
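For illustration only, the exchange at steps S1 and S2 may be sketched as below, modelling the AU_PS interface as an HTTP call; the /tasks endpoint, the payload shape, and the status-code mapping are assumptions, not a definition of the interface.

```python
import requests  # assumed HTTP client; the AU_PS interface is modelled as REST here


def send_task_create_request(psc_url: str, time_intervals: list) -> bool:
    """Send a task create request carrying the set of time intervals and
    interpret the PSC unit's reply as a positive or negative acknowledgement.
    """
    response = requests.post(f"{psc_url}/tasks",
                             json={"time_intervals": time_intervals},
                             timeout=5)
    # 201 is treated here as a positive acknowledgement, anything else as negative
    return response.status_code == 201
```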
[0106] At step S3, the process [600] comprises creating, by the PSC unit [504], a plurality of scheduled tasks and scheduling them at a set of time intervals.
[0107] At step S4, the process [600] comprises storing, by the PSC unit [504], a set of details associated with the plurality of scheduled tasks in a database [510].
[0108] At step S5, the process [600] comprises transmitting, by the PSC unit [504], the plurality of scheduled tasks associated with the set of time intervals to the AU [502]. The AU [502] receives a trigger from the PSC unit [504] to synchronise one or more resources at the plurality of scheduled tasks.
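A minimal sketch of such interval-based triggering is given below, assuming the trigger towards the AU [502] can be represented by a callable; the timer-based re-arming shown here is merely one possible realisation, not the claimed implementation.

```python
import threading


def schedule_periodic_trigger(interval_seconds: float, trigger_au) -> threading.Timer:
    """Fire `trigger_au()` every `interval_seconds`, re-arming the timer each
    time, so the AU is triggered at the agreed time interval.

    `trigger_au` stands in for the PSC unit's call over the AU_PS interface.
    """
    def _tick():
        trigger_au()                                              # notify the AU to start synchronising
        schedule_periodic_trigger(interval_seconds, trigger_au)   # re-arm for the next interval

    timer = threading.Timer(interval_seconds, _tick)
    timer.daemon = True
    timer.start()
    return timer
```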
[0109] At step S6a, the process [600] comprises receiving, at the AU [502], the resource information stored at the IM [506]. The information is received by the AU [502] in response to a request transmitted by the AU [502] to the IM [506] for details relating to the number of resources managed by the IM [506].
[0110] At step S6b, the process [600] comprises receiving, at the AU [502] from the SA [508], information on the resources being utilized in real time. The information is received by the AU [502] in response to a request transmitted by the AU [502] to the SA [508] for details relating to the utilization of resources in real time.
[0111] The AU [502] is configured to compare the information received from the SA [508] and the IM [506].
[0112] At step S7a, in response to determining, at the AU [502], that the resources utilized in real time are exhausted, the process [600] comprises transmitting a notification from the AU [502] to the SA [508], the notification instructing the SA [508] to terminate idle network function components. In an implementation, the network function components comprise container network function components (CNFCs). By terminating idle CNFCs, the corresponding resources may be freed.
[0113] At step S7b, in response to determining, at the AU [502], that the resources managed by the IM [506] are exhausted, the process [600] comprises transmitting a notification from the AU [502] to the IM [506], the notification instructing the IM [506] to migrate one or more resources to different network function components. In an implementation, the resources comprise containers, and the network function components comprise container network function components (CNFCs).
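The decision logic of steps S7a and S7b, as described above, may be sketched as follows; the notify_sa and notify_im callables are placeholders for the notifications towards the SA [508] and the IM [506] respectively, and the threshold checks are illustrative assumptions.

```python
def audit_resource_pools(realtime_available: int, im_managed: int, notify_sa, notify_im) -> None:
    """Apply the S7a/S7b decisions described above: when the real-time pool is
    exhausted, ask the SA to terminate idle CNFCs; when the IM-managed pool is
    exhausted, ask the IM to migrate resources to different CNFCs.
    """
    if realtime_available <= 0:   # S7a: resources utilized in real time are exhausted
        notify_sa("TERMINATE_IDLE_CNFCS")
    if im_managed <= 0:           # S7b: resources managed by the IM are exhausted
        notify_im("MIGRATE_RESOURCES_TO_DIFFERENT_CNFCS")
```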
[0114] As is evident from the above, the present disclosure provides a technically advanced solution for automatically syncing one or more resources via an AU_PS interface upon triggering of a plurality of events related to scheduling tasks. The present solution encompasses many advantages, some of which are mentioned as follows: the present solution ensures auto syncing of resources at regular intervals. The present solution eliminates the need for a manual request generation for synchronizing resources. The present method employs an asynchronous, event-based implementation to utilize the AU_PS interface efficiently.
[0115] While considerable emphasis has been placed herein on the disclosed implementations, it will be appreciated that many implementations can be made and that many changes can be made to the implementations without departing from the principles of the present disclosure. These and other changes in the implementations of the present disclosure will be apparent to those skilled in the art, whereby it is to be understood that the foregoing descriptive matter is to be interpreted as illustrative and non-limiting.
We Claim:
1. A method for periodic synchronisation of resources, the method comprising:
- transmitting, by a transceiver unit [302] from an auditor unit (AU) to a platform scheduler and cron jobs (PSC) unit via an interface, a request message for performing an automatic synchronisation of resources, wherein the request message comprises at least a set of time intervals;
- receiving, by the transceiver unit [302] at the AU from the PSC unit, an acknowledgement associated with the request, wherein the acknowledgement is at least one of a positive acknowledgement and a negative acknowledgement;
- receiving, by the transceiver unit [302] at the AU, a plurality of scheduled tasks associated with the set of time intervals based on the request message and the positive acknowledgement; and
- synchronising, by a processing unit [304] at the AU, one or more resources at the plurality of scheduled tasks.
2. The method as claimed in claim 1, wherein the method further comprises:
- allocating, by the processing unit [304] via the AU, a first instance associated with
the AU for processing at least one scheduled task;
- detecting, by the processing unit [304] at the AU, a failure associated with
processing the at least one scheduled task by the first instance; and
- allocating, by the processing unit [304] via the AU, a second instance associated
with the AU for processing the at least one scheduled task based on the failure.
3. The method as claimed in claim 1, wherein synchronising the one or more resources at the plurality of scheduled tasks is based on a comparison of a number of resources available in real time and a number of resources managed by an Inventory Manager (IM).
4. The method as claimed in claim 3, wherein, if the number of resources available in real time is exhausted, the method comprises transmitting, by the transceiver unit [302], from the AU to a service adapter (SA), a notification instructing the SA to migrate one or more resources to different network function components.
5. The method as claimed in claim 3, wherein if the number of resources managed by the IM is exhausted, the method comprises transmitting, by the transceiver unit [302], from the AU to the IM, a notification instructing the IM to terminate idle network function components.
6. The method as claimed in claim 1, wherein the AU and the PSC unit are connected with the interface, wherein the interface is an AU_PS interface.
7. The method as claimed in claim 2, the method further comprising creating, by the
processing unit [304] at the PSC unit, the plurality of scheduled tasks associated with
the automatic synchronisation of the resources.
8. The method as claimed in claim 1, further comprising storing, by a processing unit [304] via the PSC unit, a set of details associated with the plurality of scheduled tasks in a database.
9. The method as claimed in claim 2, wherein the positive acknowledgement is transmitted
by the PSC unit to the AU upon successful creation of the plurality of scheduled tasks
based on the request message, and wherein the negative acknowledgement is
transmitted by the PSC unit to the AU upon failure of creation of the plurality of
scheduled tasks based on the request message.
10. The method as claimed in claim 1, wherein the one or more resources are synchronised
periodically at the set of time intervals associated with the plurality of scheduled tasks.
11. A system for periodic synchronisation of resources, the system comprising:
a transceiver unit [302], wherein the transceiver unit [302] is configured to:
transmit, from an auditor unit (AU) to a platform scheduler and cron jobs (PSC) unit via an interface, a request message to perform an automatic synchronisation of resources, wherein the request message comprises at least a set of time intervals;
receive, at the AU from the PSC unit, an acknowledgement associated with the request, wherein the acknowledgement is at least one of a positive acknowledgement and a negative acknowledgement;
receive, at the AU, a plurality of scheduled tasks associated with the set of time intervals based on the request message and the positive acknowledgement; and
a processing unit [304] connected at least with the transceiver unit [302], wherein the processing unit [304] is configured to:
synchronise, at the AU, one or more resources at the plurality of scheduled tasks.
12. The system as claimed in claim 11, wherein the processing unit [304] is further configured to:
allocate, via the AU, a first instance associated with the AU to process at least one scheduled task;
detect, at the AU, a failure associated with processing the at least one scheduled task by the first instance; and
allocate, via the AU, a second instance associated with the AU to process the at least one scheduled task based on the failure.
13. The system as claimed in claim 11, wherein synchronising the one or more resources at the plurality of scheduled tasks is based on a comparison of a number of resources available in real time and a number of resources managed by an Inventory Manager (IM).
14. The system as claimed in claim 13, wherein, if the number of resources available in real
time is exhausted, the transceiver unit [302] is configured to transmit, from the AU to
a service adapter (SA), a notification instructing the SA to migrate one or more
resources to different network function components.
15. The system as claimed in claim 13, wherein if the number of resources managed by the
IM is exhausted, the transceiver unit [302] is configured to transmit, from the AU to
the IM, a notification instructing the IM to terminate idle network function components.
16. The system as claimed in claim 11, wherein the AU and the PSC unit are connected with the interface, wherein the interface is an AU_PS interface.
17. The system as claimed in claim 12, wherein the processing unit [304] is configured to
create, at the PSC unit, the plurality of scheduled tasks associated with the automatic
synchronisation of the resources.
18. The system as claimed in claim 11, wherein the processing unit [304] via the PSC unit
stores a set of details associated with the plurality of scheduled tasks in a database.
19. The system as claimed in claim 12, wherein the positive acknowledgement is
transmitted by the PSC unit to the AU upon successful creation of the plurality of
scheduled tasks based on the request message, and wherein the negative
acknowledgement is transmitted by the PSC unit to the AU upon failure of creation of
the plurality of scheduled tasks based on the request message.
20. The system as claimed in claim 11, wherein the one or more resources are synchronised
periodically at the set of time intervals associated with the plurality of scheduled tasks.