
Method And System For Handling Resource Constraints In A Network

Abstract: The present disclosure relates to a method and a system for handling resource constraints in a network. The method comprises receiving, by a transceiver unit [302], an instantiation event associated with a Network Function (NF) from an Inventory Manager (IM). Further, the method comprises generating, by a processing unit [304], a create task event based on the instantiation event. The method further comprises transmitting, by the transceiver unit [302], the create task event to a scheduler service. Further, the method comprises receiving, by the transceiver unit [302], a termination event associated with the NF, from the IM. Furthermore, the method comprises transmitting, by the transceiver unit [302], a delete task event based on the termination event, to the scheduler service. Thereafter, the method comprises halting, by the processing unit [304], a job associated with the create task event, based on the delete task event. FIG. 4


Patent Information

Application #
Filing Date
26 September 2023
Publication Number
14/2025
Publication Type
INA
Invention Field
COMPUTER SCIENCE
Status
Email
Parent Application

Applicants

Jio Platforms Limited
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India.

Inventors

1. Aayush Bhatnagar
Reliance Corporate Park, Thane-Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
2. Ankit Murarka
Reliance Corporate Park, Thane-Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
3. Rizwan Ahmad
Reliance Corporate Park, Thane-Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
4. Kapil Gill
Reliance Corporate Park, Thane-Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
5. Arpit Jain
Reliance Corporate Park, Thane-Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
6. Shashank Bhushan
Reliance Corporate Park, Thane-Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
7. Jugal Kishore
Reliance Corporate Park, Thane-Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
8. Meenakshi Sarohi
Reliance Corporate Park, Thane-Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
9. Kumar Debashish
Reliance Corporate Park, Thane-Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
10. Supriya Kaushik De
Reliance Corporate Park, Thane-Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
11. Gaurav Kumar
Reliance Corporate Park, Thane-Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
12. Kishan Sahu
Reliance Corporate Park, Thane-Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
13. Gaurav Saxena
Reliance Corporate Park, Thane-Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
14. Vinay Gayki
Reliance Corporate Park, Thane-Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
15. Mohit Bhanwria
Reliance Corporate Park, Thane-Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
16. Durgesh Kumar
Reliance Corporate Park, Thane-Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
17. Rahul Kumar
Reliance Corporate Park, Thane-Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.

Specification

FORM 2
THE PATENTS ACT, 1970
(39 OF 1970)
&
THE PATENT RULES, 2003
COMPLETE SPECIFICATION
(See section 10 and rule 13)
“METHOD AND SYSTEM FOR HANDLING RESOURCE
CONSTRAINTS IN A NETWORK”
We, Jio Platforms Limited, an Indian National, of Office - 101, Saffron, Nr.
Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India.
The following specification particularly describes the invention and the manner in
which it is to be performed.
METHOD AND SYSTEM FOR HANDLING RESOURCE CONSTRAINTS
IN A NETWORK
FIELD OF THE DISCLOSURE
[0001] Embodiments of the present disclosure generally relate to the field of
network management. More particularly, embodiments of the present disclosure
relate to a system and a method for handling resource constraints in a network.
BACKGROUND
[0002] The following description of related art is intended to provide background
information pertaining to the field of the disclosure. This section may include
certain aspects of the art that may be related to various features of the present
disclosure. However, it should be appreciated that this section is to be used only to
enhance the understanding of the reader with respect to the present disclosure, and
not as admissions of prior art.
[0003] A scheduler service is a system that manages the execution of jobs, typically
based on a schedule or some other trigger. A scheduler service with an event-driven
architecture makes the jobs highly available, compatible with distributed
environments, extendable and monitorable. With the right technology stack and
design, one can develop a custom scheduler service that meets specific needs. The
scheduling systems are integrated with microservices architecture to optimize
computational resources and enhance the performance of applications. Schedulers
play an essential role in the management of computational resources. They are
responsible for allocating resources to various tasks, ensuring that each task
receives the resources it requires to execute efficiently. In a microservices
environment, a scheduler can be used to manage the distribution of tasks among the
various services, ensuring that the overall system operates efficiently. Schedulers
are particularly important in a microservices environment because they help to
manage the complexity of dealing with multiple, independent services. They can
help to ensure that each service is given the resources it needs to function effectively
and can also help to manage the interdependencies between services, ensuring that
they work together effectively. However, the current network systems face a critical
challenge in efficiently managing and scheduling jobs/tasks within various network
components such as microservice(s). The scheduler services for task creation and
scheduling are struggling to effectively coordinate with the network functions.
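By way of a non-limiting illustration only, the following Python sketch shows how an event-driven scheduler service of the kind described above might register and remove jobs in response to events; the class, event fields and jobs are assumptions made for this example and do not form part of the claimed implementation.

from dataclasses import dataclass, field
from typing import Callable, Dict

@dataclass
class SchedulerService:
    # Hypothetical registry of scheduled jobs, keyed by task identifier.
    jobs: Dict[str, Callable[[], None]] = field(default_factory=dict)

    def handle_event(self, event: dict) -> None:
        # Event-driven dispatch instead of polling a fixed schedule.
        if event["type"] == "createTask":
            self.jobs[event["task_id"]] = event["job"]
        elif event["type"] == "deleteTask":
            self.jobs.pop(event["task_id"], None)

    def run_pending(self) -> None:
        # Execute whatever jobs are currently registered.
        for job in list(self.jobs.values()):
            job()

if __name__ == "__main__":
    scheduler = SchedulerService()
    scheduler.handle_event({"type": "createTask", "task_id": "t1",
                            "job": lambda: print("collect CPU metrics")})
    scheduler.run_pending()
    scheduler.handle_event({"type": "deleteTask", "task_id": "t1"})
    scheduler.run_pending()  # nothing left to execute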
[0004] Moreover, the primary function of a network component such as a capacity
management platform/ capacity monitoring manager (CP) revolves around
monitoring resource usage, including CPU, RAM, storage, bandwidth, and various
parameters. The CP primarily interacts with a centralised platform such as platform
scheduler & cron job (PSC) service by continuously sending queries and receiving
event acknowledgments for breached events, wherein the resource usage may end
up surpassing predefined threshold values. Further, the core services of the PSC
service are struggling to effectively coordinate with the CP. Therefore, this process
has proven to be inefficient and prone to delays, leading to suboptimal task
scheduling in the network systems.
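Purely as an illustrative sketch of the monitoring behaviour described above, the following Python snippet compares reported resource usage against predefined threshold values and emits a breached event; the metric names, threshold values and notify callback are assumptions.

THRESHOLDS = {"cpu": 0.80, "ram": 0.75, "storage": 0.90}  # assumed limits

def check_breaches(usage: dict, notify) -> list:
    # Compare reported usage against predefined threshold values and
    # acknowledge a breached event for every metric that surpasses its limit.
    breached = []
    for metric, limit in THRESHOLDS.items():
        if usage.get(metric, 0.0) > limit:
            breached.append(metric)
            notify({"event": "breach", "metric": metric,
                    "value": usage[metric], "threshold": limit})
    return breached

if __name__ == "__main__":
    check_breaches({"cpu": 0.91, "ram": 0.40, "storage": 0.95}, notify=print)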
[0005] Hence, in view of these and other existing limitations, there arises an
imperative need to provide an efficient solution to overcome the above-mentioned
and other limitations and to provide a method and a system for handling resource
constraints in a network.
SUMMARY
[0006] This section is provided to introduce certain aspects of the present disclosure
in a simplified form that are further described below in the detailed description.
This summary is not intended to identify the key features or the scope of the claimed
subject matter.
[0007] An aspect of the present disclosure may relate to a method for handling
resource constraints in a network. The method comprises receiving, by a transceiver
unit, at a capacity management platform (CP), an instantiation event associated with
a network function (NF) from an Inventory Manager (IM). Further, the method
comprises generating, by a processing unit, at the CP, a create task event based on
the instantiation event. The method further comprises transmitting, by the
transceiver unit, from the CP to a scheduler service, the create task event. Further,
the method comprises receiving, by the transceiver unit at the CP, a termination
event associated with the NF, from the IM. Furthermore, the method comprises
transmitting, by the transceiver unit from the CP, a delete task event based on the
termination event, to the scheduler service. Thereafter, the method comprises
halting, by the processing unit at the scheduler service, a job associated with the
create task event, based on the delete task event.
[0008] In an exemplary aspect of the present disclosure, the NF comprises at least
one of a Virtual Network Function (VNF), a Virtual Network Function Component
(VNFC), a containerized Network Function (CNF), and a containerized Network
Function Component (CNFC).
[0009] In an exemplary aspect of the present disclosure, in response to the create
task event, the method comprises performing, by the processing unit at the
scheduler service, a scale-out operation for the NF.
[0010] In an exemplary aspect of the present disclosure, in response to the delete
task event, the method comprises performing, by the processing unit at the
scheduler service, a scale-in operation for the NF.
[0011] In an exemplary aspect of the present disclosure, the create task event
comprises one of a creation of the job and a modification of the job at the scheduler
service.
[0012] In an exemplary aspect of the present disclosure, prior to transmitting the
delete task event to the scheduler service, a detection unit detects a breach condition
associated with the create task event at the CP.
[0013] Another aspect of the present disclosure may relate to a system for handling
resource constraints in a network. The system comprises a transceiver unit
configured to receive, at a capacity management platform (CP), an instantiation
event associated with a network function from an Inventory Manager (IM). Further,
the system comprises a processing unit connected to at least the transceiver unit,
wherein the processing unit is configured to generate, at the CP, a create task event based on
the instantiation event. Further, the transceiver unit is configured to transmit, from
the CP to a scheduler service, a command based on the create task event. The
transceiver unit further receives, at the CP, a termination event associated with the
NF, from the IM. Also, the transceiver unit transmits, from the CP, a delete task event
based on the termination event, to the scheduler service. Furthermore, the
processing unit is configured to halt, at the scheduler service, a job associated with
the create task event based on the delete task event.
[0014] Yet another aspect of the present disclosure may relate to a non-transitory
computer readable storage medium storing one or more instructions for handling
resource constraints in a network, the instructions include executable code which,
when executed by one or more units of a system, causes a transceiver unit, of the
system, to receive, at a capacity management platform (CP), an instantiation event
associated with a network function from an Inventory Manager (IM). Further, the
executable code when executed causes a processing unit, of the system, to generate,
at the CP, a create task event based on the instantiation event. Further, the executable
code when executed causes the transceiver unit to transmit, from the CP to a
scheduler service, a command based on the create task event. The executable code
when further executed causes the transceiver unit to receive, at the CP, a termination
event associated with the NF, from the IM. Also, the executable code when executed
causes the transceiver unit to transmit, from the CP, a delete task event based on the
termination event, to the scheduler service. Furthermore, the executable code when
executed causes the processing unit to halt, at the scheduler service, a job associated
with the create task event based on the delete task event.
OBJECTS OF THE DISCLOSURE
[0015] Some of the objects of the present disclosure, which at least one
embodiment disclosed herein satisfies, are listed herein below.
[0016] It is an object of the present disclosure to provide a method and a system for
handling resource constraints in a network.
[0017] It is another object of the present disclosure to provide a solution for
harmonious collaboration between a capacity management platform (CMP/CP) and
inventory manager (IM) services.
[0018] It is yet another object of the present disclosure to provide a solution for
smooth execution of task creation, modification, and deletion events.
[0019] It is yet another object of the present disclosure to provide a solution to
transmit the task creation, modification, and deletion events to platform scheduler
(PS) microservice to ensure that resource constraints are comprehensively
addressed.
[0020] It is yet another object of the present disclosure to provide a solution to
maintain the system's optimal operation even in the presence of breached scenarios.
BRIEF DESCRIPTION OF DRAWINGS
[0021] The accompanying drawings, which are incorporated herein, and constitute
a part of this disclosure, illustrate exemplary embodiments of the disclosed methods
and systems in which like reference numerals refer to the same parts throughout the
different drawings. Components in the drawings are not necessarily to scale,
emphasis instead being placed upon clearly illustrating the principles of the present
disclosure. Also, the embodiments shown in the figures are not to be construed as
limiting the disclosure, but the possible variants of the method and system
according to the disclosure are illustrated herein to highlight the advantages of the
disclosure. It will be appreciated by those skilled in the art that disclosure of such
drawings includes disclosure of electrical components or circuitry commonly used
to implement such components.
[0022] FIG. 1 illustrates an exemplary block diagram representation of a
management and orchestration (MANO) architecture, in accordance with
exemplary implementation of the present disclosure.
[0023] FIG. 2 illustrates an exemplary block diagram of a computing device upon
which the features of the present disclosure may be implemented, in accordance
with exemplary implementation of the present disclosure.
[0024] FIG. 3 illustrates an exemplary block diagram of a system for handling
resource constraints in a network, in accordance with exemplary implementation of
the present disclosure.
[0025] FIG. 4 illustrates an exemplary flow diagram of a method for handling
resource constraints in a network, in accordance with exemplary implementation of
the present disclosure.
[0026] FIG. 5 illustrates an exemplary block diagram of system architecture for
handling resource constraints in a network, in accordance with exemplary
implementation of the present disclosure.
[0027] FIG. 6 illustrates an exemplary process flow for handling resource
constraints in a network, in accordance with exemplary implementation of the
present disclosure.
[0028] The foregoing shall be more apparent from the following more detailed
description of the disclosure.
DETAILED DESCRIPTION
[0029] In the following description, for the purposes of explanation, various
specific details are set forth in order to provide a thorough understanding of
embodiments of the present disclosure. It will be apparent, however, that
embodiments of the present disclosure may be practiced without these specific
details. Several features described hereafter can each be used independently of one
another or with any combination of other features. An individual feature may not
address any of the problems discussed above or might address only some of the
problems discussed above. Some of the problems discussed above might not be
fully addressed by any of the features described herein. Example embodiments of
the present disclosure are described below, as illustrated in various drawings in
which like reference numerals refer to the same parts throughout the different
drawings.
[0030] The ensuing description provides exemplary embodiments only, and is not
intended to limit the scope, applicability, or configuration of the disclosure. Rather,
the ensuing description of the exemplary embodiments will provide those skilled in
the art with an enabling description for implementing an exemplary embodiment.
It should be understood that various changes may be made in the function and
arrangement of elements without departing from the spirit and scope of the
disclosure as set forth.
[0031] It should be noted that the terms "mobile device", "user equipment", "user
device", “communication device”, “device” and similar terms are used
interchangeably for the purpose of describing the disclosure. These terms are not
intended to limit the scope of the disclosure or imply any specific functionality or
limitations on the described embodiments. The use of these terms is solely for
convenience and clarity of description. The disclosure is not limited to any
particular type of device or equipment, and it should be understood that other
equivalent terms or variations thereof may be used interchangeably without
departing from the scope of the disclosure as defined herein.
[0032] Specific details are given in the following description to provide a thorough
understanding of the embodiments. However, it will be understood by one of
ordinary skill in the art that the embodiments may be practiced without these
specific details. For example, circuits, systems, networks, processes, and other
components may be shown as components in block diagram form in order not to
obscure the embodiments in unnecessary detail. In other instances, well-known
circuits, processes, algorithms, structures, and techniques may be shown without
unnecessary detail in order to avoid obscuring the embodiments.
[0033] Also, it is noted that individual embodiments may be described as a process
which is depicted as a flowchart, a flow diagram, a data flow diagram, a structure
diagram, or a block diagram. Although a flowchart may describe the operations as
a sequential process, many of the operations can be performed in parallel or
concurrently. In addition, the order of the operations may be re-arranged. A process
is terminated when its operations are completed but could have additional steps not
included in a figure.
[0034] The word “exemplary” and/or “demonstrative” is used herein to mean
serving as an example, instance, or illustration. For the avoidance of doubt, the
subject matter disclosed herein is not limited by such examples. In addition, any
aspect or design described herein as “exemplary” and/or “demonstrative” is not
necessarily to be construed as preferred or advantageous over other aspects or
designs, nor is it meant to preclude equivalent exemplary structures and techniques
known to those of ordinary skill in the art. Furthermore, to the extent that the terms
“includes,” “has,” “contains,” and other similar words are used in either the detailed
description or the claims, such terms are intended to be inclusive in a manner similar
to the term “comprising” as an open transition word without precluding any
additional or other elements.
[0035] As used herein, an “electronic device”, or “portable electronic device”, or
“user device” or “communication device” or “user equipment” or “device” refers
to any electrical, electronic, electromechanical and computing device. The user
device is capable of receiving and/or transmitting one or more parameters, performing
function/s, communicating with other user devices and transmitting data to the other
user devices. The user equipment may have a processor, a display, a memory, a
battery and an input-means such as a hard keypad and/or a soft keypad. The user
equipment may be capable of operating on any radio access technology including
but not limited to IP-enabled communication, Zig Bee, Bluetooth, Bluetooth Low
Energy, Near Field Communication, Z-Wave, Wi-Fi, Wi-Fi direct, etc. For instance,
the user equipment may include, but not limited to, a mobile phone, smartphone,
virtual reality (VR) devices, augmented reality (AR) devices, laptop, a general-purpose computer, desktop, personal digital assistant, tablet computer, mainframe
computer, or any other device as may be obvious to a person skilled in the art for
implementation of the features of the present disclosure.
[0036] Further, the user device and/or a system as described herein to implement
technical features as disclosed in the present disclosure may also comprise a
“processor” or “processing unit”, wherein processor refers to any logic circuitry for
processing instructions. The processor may be a general-purpose processor, a
special purpose processor, a conventional processor, a digital signal processor, a
plurality of microprocessors, one or more microprocessors in association with a
Digital Signal Processor (DSP) core, a controller, a microcontroller, Application
Specific Integrated Circuits, Field Programmable Gate Array circuits, any other
type of integrated circuits, etc. The processor may perform signal coding, data
processing, input/output processing, and/or any other functionality that enables the
working of the system according to the present disclosure. More specifically, the
processor is a hardware processor.
[0037] As used herein, “a user equipment”, “a user device”, “a smart-user-device”,
“a smart-device”, “an electronic device”, “a mobile device”, “a handheld device”,
“a wireless communication device”, “a mobile communication device”, “a
communication device” may be any electrical, electronic and/or computing device
or equipment, capable of implementing the features of the present disclosure. The
user equipment/device may include, but is not limited to, a mobile phone, smart
phone, laptop, a general-purpose computer, desktop, personal digital assistant,
tablet computer, wearable device or any other computing device which is capable
of implementing the features of the present disclosure. Also, the user device may
contain at least one input means configured to receive an input from at least one of
a transceiver unit, a processing unit, a storage unit, a detection unit and any other
such unit(s) which are required to implement the features of the present disclosure.
[0038] As used herein, “storage unit” or “memory unit” refers to a machine or
computer-readable medium including any mechanism for storing information in a
form readable by a computer or similar machine. For example, a computer-readable
medium includes read-only memory (“ROM”), random access memory (“RAM”),
magnetic disk storage media, optical storage media, flash memory devices or other
types of machine-accessible storage media. The storage unit stores at least the data
that may be required by one or more units of the system to perform their respective
functions.
[0039] As used herein, “interface” or “user interface” refers to a shared boundary
across which two or more separate components of a system exchange information
or data. The interface may also be referred to a set of rules or protocols that define
communication or interaction of one or more modules or one or more units with
each other, which also includes the methods, functions, or procedures that may be
called.
[0040] All modules, units, components used herein, unless explicitly excluded
herein, may be software modules or hardware processors, the processors being a
general-purpose processor, a special purpose processor, a conventional processor, a
digital signal processor (DSP), a plurality of microprocessors, one or more
microprocessors in association with a DSP core, a controller, a microcontroller,
Application Specific Integrated Circuits (ASIC), Field Programmable Gate Array
circuits (FPGA), any other type of integrated circuits, etc.
[0041] As used herein, the transceiver unit includes at least one receiver and at least
one transmitter configured respectively for receiving and transmitting data, signals,
information or a combination thereof between units/components within the system
and/or connected with the system.
[0042] As discussed in the background section, the current known solutions have
several shortcomings. The present disclosure aims to overcome the above-mentioned and other existing problems in this field of technology by providing a
method and a system for handling resource constraints in a network. More
particularly, the present disclosure provides a solution for harmonious collaboration
between a capacity management platform (CMP/CP) and an inventory manager
(IM) services. Further, the present disclosure provides a solution for smooth
execution of task creation, modification, and deletion events. Also, the present
disclosure provides a solution to transmit the task creation, modification, and
deletion events to a platform scheduler (PS) microservice to ensure that resource
constraints are comprehensively addressed. Furthermore, the present disclosure
provides a solution to maintain the system's optimal operation even in the presence
of breached scenarios.
[0043] Hereinafter, exemplary embodiments of the present disclosure will be
described with reference to the accompanying drawings.
[0044] Referring to FIG. 1 an exemplary block diagram representation of a
management and orchestration (MANO) architecture/ platform [100], in
accordance with exemplary implementation of the present disclosure is illustrated.
The MANO architecture [100] is developed for managing telecom cloud
infrastructure automatically, managing design or deployment design, managing
instantiation of a network node(s) etc. The MANO architecture [100] deploys the
network node(s) in the form of Virtual Network Function (VNF) and Cloud-native/
Container Network Function (CNF). The MANO architecture [100] is used to auto-instantiate the VNFs into the corresponding environment of the present disclosure
so that it could help in onboarding other vendor(s) CNFs and VNFs to the platform.
[0045] As shown in FIG. 1, the MANO architecture [100] comprises a user
interface layer, a network function virtualization (NFV) and software defined
network (SDN) design function module [104]; a platforms foundation services
module [106], a platform core services module [108] and a platform resource
adapters and utilities module [112], wherein all the components are assumed to be
connected to each other in a manner as obvious to the person skilled in the art for
implementing features of the present disclosure.
[0046] The NFV and SDN design function module [104] further comprises a VNF
lifecycle manager (compute) [1042]; a VNF catalogue [1044]; a network services
catalogue [1046]; a network slicing and service chaining manager [1048]; a
physical and virtual resource manager [1050] and a CNF lifecycle manager [1052].
The VNF lifecycle manager (compute) [1042] is responsible for determining on
which server of the communication network the microservice will be instantiated.
The VNF lifecycle manager (compute) [1042] will manage the overall flow of
incoming/ outgoing requests during interaction with the user. The VNF lifecycle
manager (compute) [1042] is responsible for determining which sequence to be
followed for executing the process. For e.g. in an AMF network function of the
communication network (such as a 5G network), sequence for execution of
processes P1 and P2 etc. The VNF catalogue [1044] stores the metadata of all the
VNFs (also CNFs in some cases). The network services catalogue [1046] stores the
information of the services that need to be run. The network slicing and service
chaining manager [1048] manages the slicing (an ordered and connected sequence
of network service/ network functions (NFs)) that must be applied to a specific
networked data packet. The physical and virtual resource manager [1050] stores the
logical and physical inventory of the VNFs. Just like the VNF lifecycle manager
(compute) [1042], the CNF lifecycle manager [1052] is similarly used for the CNFs
lifecycle management.
[0047] The platforms foundation services module [106] further comprises a
microservices elastic load balancer [1062]; an identity & access manager [1064]; a
command line interface (CLI) [1066]; a central logging manager [1068]; and an
event routing manager [1070]. The microservices elastic load balancer [1062] is
used for maintaining the load balancing of the requests for the services. The identity
& access manager [1064] is used for logging purposes. The command line interface
(CLI) [1066] is used to provide commands to execute certain processes which
require changes during the run time. The central logging manager [1068] is
responsible for keeping the logs of every service. These logs are generated by the
MANO platform [100]. These logs are used for debugging purposes. The event
routing manager [1070] is responsible for routing the events i.e., the application
programming interface (API) hits to the corresponding services.
[0048] The platforms core services module [108] further comprises NFV
infrastructure monitoring manager [1082]; an assure manager [1084]; a
performance manager [1086]; a policy execution engine [1088]; a capacity
monitoring manager [1090]; a release management (mgmt.) repository [1092]; a
configuration manager & (Golden Configuration Template (GCT)) [1094]; an NFV
platform decision analytics [1096]; a platform NoSQL DB [1098]; a platform
schedulers and cron jobs [1100]; a VNF backup & upgrade manager [1102]; a micro
service auditor [1104]; and a platform operations, administration and maintenance
manager [1106]. The NFV infrastructure monitoring manager [1082] monitors the
infrastructure part of the NFs. For e.g., any metrics such as CPU utilization by the
VNF. The assure manager [1084] is responsible for supervising the alarms the
vendor is generating. The performance manager [1086] is responsible for managing
the performance counters. The policy execution engine [1088] is responsible for
managing all the policies. The capacity and performance monitoring manager/
capacity monitoring manager/ capacity management platform (CMP/CP) [1090]
responsible for sending the request to the policy execution engine [1088]. The CP
[1090] is capable of monitoring usage of network resources such as but not limited
to CPU utilization, RAM utilization and storage utilization across all the instances
of the virtual infrastructure manager (VIM) or simply the NFV infrastructure
monitoring manager [1082]. The CP [1090] is also capable of monitoring said
network resources for each instance of the VNF. The CP [1090] is responsible for
constantly tracking the network resource utilization. The release management
(mgmt.) repository [1092] is responsible for managing the releases and the images
of all the vendor network nodes. The configuration manager & (GCT) [1094]
manages the configuration and GCT of all the vendors. The NFV platform decision
analytics [1096] helps in deciding the priority of using the network resources. It is
further noted that the policy execution engine [1088], the configuration manager &
(GCT) [1094] and the NFV platform decision analytics [1096] work together. The
platform NoSQL DB [1098] is a database for storing all the inventory (both physical
and logical) as well as the metadata of the VNFs and CNF. The platform schedulers
and cron jobs [1100] schedules the task such as but not limited to triggering of an
event, traversing the network graph etc. The VNF backup & upgrade manager
[1102] takes backup of the images, binaries of the VNFs and the CNFs and produces
those backups on demand in case of server failure. The micro service auditor [1104]
audits the microservices. For e.g., in a hypothetical case where instances that were
not instantiated by the MANO architecture [100] are using the network resources,
the micro service auditor [1104] audits and reports the same so that resources can
be released for services running in the MANO architecture [100], thereby assuring
that the services only run on the MANO platform [100]. The platform operations,
administration and maintenance manager [1106] is used for newer instances that are
spawning.
[0049] The platform resource adapters and utilities module [112] further comprises
a platform external API adaptor and gateway [1122]; a generic decoder and indexer
(XML, CSV, JSON) [1124]; a docker service adaptor [1126]; an API adapter [1128];
and a NFV gateway [1130]. The platform external API adaptor and gateway [1122]
is responsible for handling the external services (to the MANO platform [100]) that
require the network resources. The generic decoder and indexer (XML, CSV,
JSON) [1124] directly obtains the data of the vendor system in the XML, CSV, JSON
format. The docker service adaptor [1126] is the interface provided between the
telecom cloud and the MANO architecture [100] for communication. The API
adapter [1128] is used to connect with the virtual machines (VMs). The NFV
gateway [1130] is responsible for providing the path to each service going
to/incoming from the MANO architecture [100].
[0050] Referring to FIG. 2 an exemplary block diagram of a computing device
[200] upon which the features of the present disclosure may be implemented, in
accordance with exemplary implementation of the present disclosure is illustrated.
In an implementation, the computing device [200] may implement a method for
handling resource constraints in a network by utilising a system [300]. In another
implementation, the computing device [200] itself implements the method for
handling resource constraints in a network using one or more units configured
within the computing device [200], wherein said one or more units are capable of
implementing the features as disclosed in the present disclosure.
[0051] The computing device [200] may include a bus [202] or other
communication mechanism for communicating information, and a hardware
processor [204] coupled with bus [202] for processing information. The hardware
processor [204] may be, for example, a general-purpose microprocessor. The
computing device [200] may also include a main memory [206], such as a random-
access memory (RAM), or other dynamic storage device, coupled to the bus [202]
for storing information and instructions to be executed by the processor [204]. The
main memory [206] also may be used for storing temporary variables or other
intermediate information during execution of the instructions to be executed by the
processor [204]. Such instructions, when stored in non-transitory storage media
accessible to the processor [204], render the computing device [200] into a special-purpose machine that is customized to perform the operations specified in the
instructions. The computing device [200] further includes a read only memory
(ROM) [208] or other static storage device coupled to the bus [202] for storing static
information and instructions for the processor [204].
[0052] A storage device [210], such as a magnetic disk, optical disk, or solid-state
drive is provided and coupled to the bus [202] for storing information and
instructions. The computing device [200] may be coupled via the bus [202] to a
display [212], such as a cathode ray tube (CRT), Liquid Crystal Display (LCD),
Light Emitting Diode (LED) display, Organic LED (OLED) display, etc. for
displaying information to a computer user. An input device [214], including
alphanumeric and other keys, touch screen input means, etc. may be coupled to the
bus [202] for communicating information and command selections to the processor
20 [204]. Another type of user input device may be a cursor controller [216], such as a
mouse, a trackball, or cursor direction keys, for communicating direction
information and command selections to the processor [204], and for controlling
cursor movement on the display [212]. The input device typically has two degrees
of freedom in two axes, a first axis (e.g., x) and a second axis (e.g., y), that allow
25 the device to specify positions in a plane.
[0053] The computing device [200] may implement the techniques described
herein using customized hard-wired logic, one or more ASICs or FPGAs, firmware
and/or program logic which in combination with the computing device [200] causes
or programs the computing device [200] to be a special-purpose machine.
According to one implementation, the techniques herein are performed by the
computing device [200] in response to the processor [204] executing one or more
sequences of one or more instructions contained in the main memory [206]. Such
instructions may be read into the main memory [206] from another storage medium,
such as the storage device [210]. Execution of the sequences of instructions
contained in the main memory [206] causes the processor [204] to perform the
process steps described herein. In alternative implementations of the present
disclosure, hard-wired circuitry may be used in place of or in combination with
software instructions.
[0054] The computing device [200] also may include a communication interface
[218] coupled to the bus [202]. The communication interface [218] provides a two-way data communication coupling to a network link [220] that is connected to a
local network [222]. For example, the communication interface [218] may be an
integrated services digital network (ISDN) card, cable modem, satellite modem, or
a modem to provide a data communication connection to a corresponding type of
telephone line. As another example, the communication interface [218] may be a
local area network (LAN) card to provide a data communication connection to a
compatible LAN. Wireless links may also be implemented. In any such
implementation, the communication interface [218] sends and receives electrical,
electromagnetic or optical signals that carry digital data streams representing
various types of information.
[0055] The computing device [200] can send messages and receive data, including
program code, through the network(s), the network link [220] and the
communication interface [218]. In the Internet example, a server [230] might
transmit a requested code for an application program through the Internet [228], the
ISP [226], a host [224], the local network [222] and the communication interface
[218]. The received code may be executed by the processor [204] as it is received,
and/or stored in the storage device [210], or other non-volatile storage for later
execution.
[0056] Referring to FIG. 3, an exemplary block diagram of a system [300] for handling
resource constraints in a network, in accordance with exemplary implementation of
the present disclosure, is illustrated. The system [300] comprises at least one transceiver unit [302], at
least one processing unit [304] and at least one detection unit [306]. Also, all of the
components/ units of the system [300] are assumed to be connected to each other
unless otherwise indicated below. As shown in FIG. 3, all units shown within the
system [300] should also be assumed to be connected to each other. Also, in FIG. 3
only a few units are shown, however, the system [300] may comprise multiple such
units or the system [300] may comprise any such numbers of said units, as required
to implement the features of the present disclosure. Further, in an implementation,
the system [300] may reside in a server or the network entity or the system [300]
may be in communication with the network entity to implement the features as
disclosed in the present disclosure.
[0057] The system [300] is configured for handling resource constraints in a
network with the help of the interconnection between the components/units of the
system [300]. Further, FIG. 3 is to be read in conjunction with FIG. 1 which
illustrates an exemplary block diagram representation of a management and
orchestration (MANO) architecture/ platform [100].
[0058] In operation the transceiver unit [302] may receive, at a capacity
management platform (CP) [1090], an instantiation event associated with a network
function (NF) from an Inventory Manager (IM). It is to be noted that the IM
performs the same function as the physical and virtual resource manager [1050] as
described in FIG. 1. As would be understood, the CP [1090] may manage and
monitor the capacity of one or more network functions deployed in the network
environment. Also, the CP [1090] may provide real-time insights related to
utilization of resources by the one or more network function, traffic pattern and load
on the one or more network functions. Further, the instantiation event may refer to
30 the process where a network function may be deployed, initialized, and made
operational in the network.
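As a non-limiting sketch of this step, the following Python snippet shows how a CP-side handler might translate an instantiation event received from the IM into a create task event for the scheduler service; all field names and values are hypothetical assumptions.

def on_instantiation_event(event: dict) -> dict:
    # Translate the IM's instantiation event into a create task event that the
    # scheduler service can act upon for the newly deployed NF.
    return {
        "type": "createTask",
        "nf_id": event["nf_id"],
        "task": {"kind": "QUERY", "frequency": "periodic", "period_s": 60},
    }

if __name__ == "__main__":
    print(on_instantiation_event({"kind": "instantiation", "nf_id": "amf-01"}))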
[0059] Continuing further, the NF comprises at least one of a Virtual Network
Function (VNF), a Virtual Network Function Component (VNFC), a Cloud-Native
Network Function (CNF), and a Cloud-Native Network Function Component
(CNFC). The VNFs are software-based implementations of the network functions that
were earlier implemented on dedicated hardware. Also, the VNF may enable the
virtualization of the network services. Furthermore, the VNFC may be a component
or a unit of the VNF that may perform defined functions and may provide a specific
service. Furthermore, the CNF may be a network function that is designed and
implemented to run inside containers. The containers are packages of software that
contain all of the necessary elements to run in any environment. Moreover, the
CNFC may be a component or a unit of the CNF that may perform defined functions
and may provide a specific service.
[0060] Further, the IM is a component that is responsible for managing and
maintaining the real-time record of all NFs (e.g. the VNFs and the CNFs) that may
be available in the network. The IM may further ensure that one or more virtual
machines (VMs), one or more containers, storage and other such services, required
by the NFs (e.g. the VNFs and the CNFs), may be tracked and allocated to the NFs
requiring the said services. Further, the IM also keeps a track of assigned versus
actual resources consumed by the NF. In situations, when the actual resources
exceed the assigned resources, the IM identifies a breach condition. The number of
assigned resources to be consumed by the NF may be defined by the network
operator’s policy.
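The assigned-versus-actual tracking described above may be illustrated, purely as an assumption-laden sketch, by the following Python snippet, in which a breach is identified whenever the actual consumption exceeds the assigned quota; the resource names and quotas are illustrative only.

def detect_breach(assigned: dict, actual: dict) -> dict:
    # A breach exists for every resource whose actual consumption exceeds the
    # quota assigned under the network operator's policy.
    return {
        resource: {"assigned": assigned.get(resource, 0), "actual": used}
        for resource, used in actual.items()
        if used > assigned.get(resource, 0)
    }

if __name__ == "__main__":
    print(detect_breach(assigned={"cpu": 4, "ram_gb": 16},
                        actual={"cpu": 5, "ram_gb": 12}))  # flags the extra CPU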
[0061] Continuing further, the processing unit [304] may generate, at the CP
[1090], a create task event based on the instantiation event. The create task event
may involve initiation of a specific task or process for the instantiation event. The
instantiation event may be related to instantiating a new task for the NF. Also, the
create task event may involve allocation of resources for execution of task or
process by the NF. Further, the create task event may comprise parameters such as
Task type, Task frequency, Task periodicity, Task counter and Task information. The
Task type may be for example, an API creation, an FTP, an EVENT creation or a
QUERY. The Task frequency can be periodic such as to be done daily, weekly,
monthly or one-time execution as per the requirement of the operations team. The
Task periodicity may define the time period when the task is to be scheduled. The
5 Task counter defines the number of task notifications. The Task information defines
details related to resources such as name, identifier, address and threshold value of
usage. An example of a task may be adding a CPU resource instance.
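Solely for illustration, the parameters listed above may be represented in Python as follows; the concrete field names and example values are assumptions and not a defined data model.

from dataclasses import dataclass, field

@dataclass
class CreateTaskEvent:
    task_type: str         # e.g. "API", "FTP", "EVENT" or "QUERY"
    task_frequency: str    # e.g. "daily", "weekly", "monthly" or "one-time"
    task_periodicity: str  # time period when the task is to be scheduled
    task_counter: int      # number of task notifications
    task_info: dict = field(default_factory=dict)  # resource name, identifier,
                                                   # address, usage threshold

example_event = CreateTaskEvent(
    task_type="EVENT",
    task_frequency="one-time",
    task_periodicity="immediate",
    task_counter=1,
    task_info={"resource": "CPU", "identifier": "nf-amf-01", "threshold": 0.8},
)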
[0062] Continuing further, the create task event comprises one of a creation of the
job and a modification of the job at the scheduler service. As would be understood,
the creation of the job may involve defining and initiating a new job for the NF. The
creation of the job may also involve specifying the parameters related to the job
such as, but not limited to, job name, job type, resources required for the job,
scheduling the job etc. Whereas the modification of the job may involve changing
the parameters related to the job such as, but not limited to, changing the resources
allocated (e.g. increase or decrease the resources allocated), changing the
scheduling information (e.g. changing the time of the job), etc. Further, it is to
be noted that the parameters related to the create task event are implemented on the
job to be created or the job to be modified.
[0063] Continuing further, the transceiver unit [302] may transmit, from the CP
[1090] to a scheduler service, a command based on the create task event. In an
exemplary implementation the scheduler service may be a platform schedulers and
cron jobs (PS) microservice [1100] as described in FIG. 1. The PS microservice
25 instance is a centralised platform which helps to create and schedule jobs on behalf
of other micro services. Also, it is to be noted that a microservice is a small, loosely
coupled distributed service and each microservice is designed to perform a specific
function. Further, each microservice may be developed and deployed
independently. Further, the microservice breaks a service into small and
manageable components of services.
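The disclosure does not fix a transport between the CP and the PS microservice; purely as an illustrative sketch that assumes a hypothetical REST endpoint and uses only the Python standard library, the transmission of a task event may look as follows.

import json
import urllib.request

def send_task_event(event: dict,
                    url: str = "http://ps-microservice.local/tasks") -> int:
    # POST the create/delete task event as JSON to the (assumed) scheduler
    # endpoint and return the HTTP status code.
    request = urllib.request.Request(
        url,
        data=json.dumps(event).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        return response.status

# Example (requires the hypothetical endpoint to be reachable):
# send_task_event({"type": "createTask", "nf_id": "amf-01"})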
[0064] Continuing further, in response to the create task event, the processing unit
[304] performs at the scheduler service, a scale-out operation for the NF. As would
be understood the scale-out operation for the network function may refer to a
process to add or increase the number of active instances and the resources allocated
to the network function, in response to the increased demand and usage of the
resources. Since the create task ends up creating more jobs or modifying existing
jobs, this calls for increased use of resources. Therefore, the processing unit [304]
has to scale-out the number of resources required to execute the jobs for the NF.
[0065] In an exemplary implementation, the scale-out operation in response to the
create task event may involve increasing the number of instances after the tasks
associated with the NF, assigned at the instantiation of the event, are completed. In
an example, if the resources assigned to the NF are not sufficient to handle the
traffic load, then the NF may send a request to the CP [1090], to add more resources.
15 In response to this request, the CP [1090] will send a create task event to the
scheduler service, to add a resource instance, for example to add a CPU instance.
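As a non-limiting sketch of the scale-out path described above, the following snippet shows a resource request from an NF being turned into a create task event; the instance counts, resource type and event fields are assumptions.

def handle_resource_request(nf_id: str, current: int, required: int,
                            send_task_event) -> None:
    # When the NF cannot handle its traffic load with the current instances,
    # ask the scheduler, via a create task event, to add the missing ones
    # (for example, an additional CPU instance).
    if required > current:
        send_task_event({"type": "createTask",
                         "nf_id": nf_id,
                         "task": {"kind": "add_instance",
                                  "resource": "CPU",
                                  "count": required - current}})

if __name__ == "__main__":
    handle_resource_request("amf-01", current=4, required=5,
                            send_task_event=print)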
[0066] Further, as described, the IM keeps a track of assigned and actual resources
consumed by the NF. Based on the create task event, there may be a scenario that
the actual resources now used by the NF exceed the assigned resources to the NF.
For example, as described in the above example, if the create task event related to
the NF is to add a CPU to increase the processing power of the NF, it may happen
that after the CPU is added, the number of actual resources, i.e. CPUs as consumed
by the NF exceeds the assigned number of CPUs for the NF. This results
in a breach scenario.
[0067] To detect the breach scenario and take immediate action, the present
disclosure provides an interface between the IM and the CP [1090]. This interface
operates as the receiver of notifications concerning instantiation events. Its primary
30 function is to initiate the scaling procedure by triggering a "create task" event within
the scheduler service, inclusive of the relevant query, whether it's a newly created
or modified one. Subsequently, the interface remains in an awaiting state for the
scheduler's response regarding the task creation event, which solely contains
information related to breach scenarios. The interface remains in an awaiting state
to receive any information related to the breach condition. The harmonious
collaboration between the CP [1090] and the IM services results in the smooth
5 execution of task creation, modification, and deletion events. These events are then
transmitted to the scheduler service, which ensures that resource constraints are
comprehensively addressed, thereby maintaining the system's optimal operation
even in the presence of breached scenarios.
[0068] The system [300] further comprises a detection unit [306] which is
configured to detect a breach condition associated with the create task event at the
CP [1090]. Thereafter, once the IM has detected a breach condition, the transceiver
unit [302] receives, at the CP [1090], a termination event associated with the NF,
from the IM. Continuing further, the transceiver unit [302] transmits, from the CP
[1090], a delete task event based on the termination event, to the scheduler service.
Further, the processing unit [304] is further configured to halt, at the scheduler
service, a job associated with the create task event based on the delete task event.
This means that the resource instance that was added based on the create task event
and allocated to the NF during the instantiation event may be taken back. For
20 example, the CPU instance that was added based on the request from the NF, will
be deleted and will no longer be available to the NF.
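Purely as an illustrative sketch of this delete-task path, the following snippet shows a termination event being translated into a delete task event and the corresponding job being halted at the scheduler; the class, task identifiers and job metadata are assumptions.

class BreachHandler:
    def __init__(self, scheduler_jobs: dict):
        # task identifier -> job metadata held at the scheduler service
        self.scheduler_jobs = scheduler_jobs

    def on_termination_event(self, nf_id: str) -> dict:
        # CP side: translate the termination event into a delete task event.
        return {"type": "deleteTask", "task_id": f"scale-{nf_id}"}

    def on_delete_task(self, event: dict) -> None:
        # Scheduler side: halt the job tied to the earlier create task event,
        # releasing the resource instance that had been added for the NF.
        self.scheduler_jobs.pop(event["task_id"], None)

if __name__ == "__main__":
    handler = BreachHandler({"scale-amf-01": {"resource": "CPU"}})
    handler.on_delete_task(handler.on_termination_event("amf-01"))
    print(handler.scheduler_jobs)  # {} -- the added CPU instance is taken back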
[0069] Continuing further, in response to the delete task event, the processing unit
[304] is configured to perform at the scheduler service, a scale-in operation for the
NF. In the scale-in operation the resource instances assigned to the NF are decreased
or scaled down to handle the breach condition and keep the network in a stable
condition.
[0070] Referring to FIG. 4, an exemplary flow diagram of a method [400] for
handling resource constraints in a network, in accordance with exemplary
implementation of the present disclosure, is illustrated. In an implementation, the method [400] is
performed by the system [300]. Also, as shown in FIG. 4, the method [400] initiates
at step [402].
[0071] At step [404], the method [400] comprises receiving, by a transceiver unit
[302], at a capacity management platform (CP) [1090], an instantiation event
associated with a network function (NF) from an Inventory Manager (IM). As
would be understood, the CP [1090] may manage and monitor the capacity of one
or more network functions deployed in the network environment. Also, the CP
[1090] may provide real-time insights related to utilization of resources by the one
or more network function, traffic pattern and load on the one or more network
functions. Further, the instantiation event may refer to the process where a network
function may be deployed, initialized, and made operational in the network.
[0072] Continuing further, the NF comprises at least one of a Virtual Network
Function (VNF), a Virtual Network Function Component (VNFC), a Cloud-Native
Network Function (CNF), and a Cloud-Native Network Function Component
(CNFC). The VNFs are software-based implementations of the network functions that
were earlier implemented on dedicated hardware. Also, the VNF may enable the
virtualization of the network services. Furthermore, the VNFC may be a component
or a unit of the VNF that may perform defined functions and may provide a specific
service. Furthermore, the CNF may be a network function that is designed and
implemented to run inside containers. The containers are packages of software that
contain all of the necessary elements to run in any environment. Moreover, the
CNFC may be a component or a unit of the CNF that may perform defined functions
and may provide a specific service.
[0073] Next, at step [406], the method [400] comprises generating, by a processing
unit [304], at the CP [1090], a create task event based on the instantiation event.
The create task event may involve initiation of a specific task or process for the NF
instantiated. Also, the create task event may involve allocation of resources for
execution of task or process by the NF. Further, the create task event may comprise
parameters such as Task type, Task frequency, Task periodicity, Task counter and
Task information. The Task type may be for example, an API creation, an FTP, an
EVENT creation or a QUERY. The Task frequency can be periodic such as done
daily, weekly, monthly or one-time execution as per the requirement of the
operations team. The Task periodicity may define the time period when the task is
to be scheduled. The Task counter defines the number of task notifications. The
Task information defines details related to resources such as name, identifier,
address and threshold value of usage. An example of a task may be adding
a CPU resource instance.
[0074] Continuing further, the create task event comprises one of a creation of the
job and a modification of the job at the scheduler service. As would be understood,
the creation of the job may involve defining and initiating a new job for the NF. The
creation of the job may also involve specifying the parameters related to the job
such as, but not limited to, job name, job type, resources required for the job,
scheduling the job etc. Whereas the modification of the job may involve changing
the parameters related to the job such as, but not limited to, changing the resources
allocated (e.g. increase or decrease the resources allocated), changing the
scheduling information (e.g. changing the time of the job), etc. Further, it is to
be noted that the parameters related to create task event are implemented on the job
to be created or the job to be modified.
[0075] Further, at step [408], the method [400] comprises transmitting, by the
transceiver unit [302], from the CP [1090] to a scheduler service, the create task
event. In an exemplary implementation the scheduler service may be a platform
scheduler and cron jobs (PS) microservice [1100] instance. The PS microservice
instance may be a centralised platform which helps to create and schedule jobs on
behalf of other micro services.
[0076] Continuing further, in response to the create task event, the processing unit
[304] performs at the scheduler service, a scale-out operation for the NF. As would
be understood the scale-out operation for the network function may refer to a
process to add or increase the number of active instances and the resources allocated
to the network function, in response to the increased demand and usage of the
resources. Since the create task ends up creating more jobs or modifying existing
jobs, this calls for increased use of resources. Therefore, the processing unit [304]
has to scale-out the number of resources required to execute the jobs for the NF.
[0077] In an exemplary implementation, the scale-out operation in response to the
create task event may involve increasing the number of instances after the tasks
associated with the NF, assigned at the instantiation of the event, are completed. In
an example, if the resources assigned to the NF are not sufficient to handle the
traffic load, then the NF may send a request to the CP [1090], to add more resources.
In response to this request, the CP [1090] will send a create task event to the
scheduler service, to add a resource instance, for example add a CPU instance.
[0078] Further, as described, the IM keeps a track of assigned and actual resources
consumed by the NF. Based on the create task event, there may be a scenario that
the actual resources now used by the NF exceed the assigned resources to the NF.
For example, as described in the above example, if the create task event related to
the NF is to add a CPU to increase the processing power of the NF, it may happen
that after the CPU is added, the number of actual resources, i.e. CPUs as consumed
by the NF exceeds the assigned number of CPUs for the NF. This results
in a breach scenario.
[0079] To detect the breach scenario and take immediate action, the present
disclosure provides an interface between the IM and the CP [1090]. This interface
operates as the receiver of notifications concerning instantiation events. Its primary
function is to initiate the scaling procedure by triggering a "create task" event within
the scheduler service, inclusive of the relevant query, whether it's a newly created
or modified one. Subsequently, the interface remains in an awaiting state for the
scheduler's response regarding the task creation event, which solely contains
information related to breach scenarios. The interface remains in an awaiting state
to receive any information related to the breach condition. The harmonious
collaboration between CP [1090] and IM services results in the smooth execution
of task creation, modification, and deletion events. These events are then
transmitted to the scheduler service, which ensures that resource constraints are
comprehensively addressed, thereby maintaining the system's optimal operation
5 even in the presence of breached scenarios. The method [400] comprises a detection
unit [306] which is configured to detect a breach condition associated with the
create task event at the CP [1090].
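A non-limiting sketch of how the CP-IM interface may await the scheduler's breach-related response is given below; the queue-based transport, the timeout and the notify_im callback are assumptions for illustration only:

# Illustrative wait loop at the CP-IM interface; transport details are assumptions.
import queue

def await_breach_response(response_queue, notify_im, timeout_s=5.0):
    """After the create task event is triggered, remain in an awaiting state for
    the scheduler's response and forward only breach-related information to the IM."""
    while True:
        try:
            response = response_queue.get(timeout=timeout_s)
        except queue.Empty:
            continue                      # keep awaiting; nothing to report yet
        if response.get("breach"):
            notify_im(response)           # the IM may then emit a termination event
            return response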
[0080] Further, once the IM has detected a breach condition, at step [410], the method [400] comprises receiving, by the transceiver unit [302] at the CP [1090], a termination event associated with the NF, from the IM. Next, at step [412], the method [400] comprises transmitting, by the transceiver unit [302] from the CP [1090], a delete task event based on the termination event, to the scheduler service. Further, the resource instance that was added based on the create task event and allocated to the NF during the instantiation event may be taken back. For example, the CPU instance that was added based on the request from the NF will be deleted and will no longer be available to the NF.
[0081] Furthermore, at step [414], the method [400] comprises halting, by the processing unit [304] at the scheduler service, a job associated with the create task event, based on the delete task event.
[0082] Continuing further, in response to the delete task event, the processing unit [304] is configured to perform, at the scheduler service, a scale-in operation for the NF. In a scale-in operation, the resource instances assigned to the NF are decreased or scaled down to handle the breach condition and keep the network in a stable condition.
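By way of a purely illustrative sketch, the delete task handling of steps [412] to [414] and the scale-in operation may be combined as follows; the helper name release_cpu_instance and the state fields are hypothetical and not prescribed by the disclosure:

# Illustrative delete task handling at the scheduler service; names are assumptions.
def handle_delete_task(delete_task_event, jobs, nf_state, release_cpu_instance):
    """Halt the job associated with the earlier create task event and scale the NF
    back in by releasing the resource instance(s) that caused the breach."""
    job = jobs.pop(delete_task_event["task_id"], None)
    if job is not None:
        job["status"] = "halted"          # halt the job producing the breach
    while nf_state["active_cpu_instances"] > nf_state["assigned_cpu_instances"]:
        release_cpu_instance(delete_task_event["nf_id"])
        nf_state["active_cpu_instances"] -= 1
    return nf_state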
[0083] Thereafter, the method [400] terminates at step [416].
[0084] Referring to FIG. 5, an exemplary block diagram [500] of a system architecture for handling resource constraints in a network, in accordance with an exemplary implementation of the present disclosure, is illustrated. The system architecture [500] comprises an inventory manager (IM) [502], a capacity management platform (CP) [1090] and a platform scheduler and cron jobs (PS) [1100]. Also, all of the components/units of the system architecture [500] are assumed to be connected to each other unless otherwise indicated below. Further, it is to be noted that the IM [502] performs the same function as the IM described with reference to FIG. 3 and FIG. 4.
[0085] The CP microservice, known as the CMP (or CP) [1090] microservice, guarantees the seamless transfer of a dynamic query builder, formed in the design phase, to the PS [1100] microservice. This transfer occurs as a task presented through a "createTask/deleteTask" event upon reception of instantiation/termination events from the IM at the CP [1090] microservice. This process ensures the effective handling of any potential breach scenarios within the system, thereby maintaining the overall system integrity.
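A minimal, non-limiting sketch of how the CP [1090] may wrap the dynamic query into a "createTask" or "deleteTask" event towards the PS [1100], depending on the IM event received, is given below; the event and field names are assumptions for illustration only:

# Illustrative mapping of IM events to PS task events; field names are hypothetical.
def to_ps_event(im_event, dynamic_query):
    """Wrap the query built in the design phase into a createTask or deleteTask
    event for the PS [1100] microservice, based on the IM event received at the CP."""
    if im_event["type"] == "instantiation":
        return {"event": "createTask", "nf_id": im_event["nf_id"], "query": dynamic_query}
    if im_event["type"] == "termination":
        return {"event": "deleteTask", "nf_id": im_event["nf_id"], "query": dynamic_query}
    raise ValueError("unsupported IM event type: " + str(im_event["type"]))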
[0086] The harmonious collaboration between CP [1090] and IM [502] services
results in the smooth execution of task creation, modification, and deletion events.
These events are then transmitted to the PS [1100] microservice, which ensures that
resource constraints are comprehensively addressed, thereby maintaining the
system's optimal operation even in the presence of breached scenarios.
[0087] The system architecture [500] provides dynamic execution and termination
of "createTask" and "deleteTask" events. These events are activated in response to
25 "instantiation" and "termination" events emitted by IM [502], ensuring the system's
robustness in the face of breached scenarios. Additionally, this process subsequently
triggers "scale in" or "scale out" events, adapting to the specific breach scenario
encountered.
[0088] Further, to facilitate communication between the CP [1090] and the IM [502], the system architecture [500] provides a CP-IM interface that operates as the receiver of notifications concerning instantiation events. Its primary function is to initiate the scaling procedure by triggering a "create task" event within the scheduler service, inclusive of the relevant query, whether that query is newly created or modified. Subsequently, the interface remains in an awaiting state for the scheduler's response regarding the task creation event, which solely contains information related to breach scenarios. Whenever a termination event is received from the IM [502], the CP [1090] initiates a "delete task" event towards the PS [1100] to halt the job that was producing breach-related responses. This proactive action serves to maintain the overall integrity of the system.
[0089] Further, referring to FIG. 6, an exemplary process flow [600] for handling resource constraints in a network, in accordance with an exemplary implementation of the present disclosure, is illustrated. The process [600] is performed by the system architecture [500]. The process [600] starts at step [602].
[0090] After step [602], the IM may emit one of an instantiation event and a termination event. The instantiation event refers to a create task event for a Network Function (NF). The create task event may be to create a job related to instantiating a resource instance for the NF. The termination event, on the other hand, may refer to stopping or halting the job associated with the create task event, which may further mean de-instantiating a resource instance for the NF.
[0091] Next, at step [604], the CP [1090] may receive a VNF/VNFC/CNF/CNFC instantiation notification from the IM [502]. The instantiation notification may relate to the instantiation of a resource for either a virtual network function (VNF), a virtual network function component (VNFC), a cloud-native network function (CNF) or a cloud-native network function component (CNFC). The CP [1090] may generate a create task event based on the received instantiation notification.
[0092] Further, at step [606], the CP [1090] may transmit a create task event, based on the received instantiation notification, to the PS [1100]. The PS [1100] may either create the task or a job based on the create task event received from the CP [1090] and assign it to the NF, or, in response, a breach condition may be detected at the CP [1090]. The breach condition indicates that the actual resources instantiated for the NF exceed the assigned or permitted resource instances. The breach condition notification is transmitted by the CP [1090] to the IM [502] through a CP-IM interface.
[0093] Furthermore, at step [608], the CP [1090] may receive a VNF/VNFC/CNF/CNFC termination notification from the IM [502] based on the breach condition notification. The CP [1090] may generate a delete task event based on the received termination notification.
[0094] Next, at step [610], the CP [1090] may transmit the delete task event to the
PS [1100]. The PS [1100] stops or deletes the resource instance that caused the
breach condition. This is done to keep the network in a stable condition.
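An end-to-end, non-limiting sketch of the process flow [600] is given below; the im, cp and ps objects and their method names are hypothetical stand-ins for the IM [502], the CP [1090] and the PS [1100]:

# Illustrative orchestration of the process flow [600]; all APIs are assumptions.
def process_flow(im, cp, ps):
    notification = im.emit_instantiation()             # step [604]: instantiation notification
    create_event = cp.on_instantiation(notification)   # CP builds the create task event
    breach = ps.on_create_task(create_event)           # step [606]: PS creates/assigns the job
    if breach:                                         # breach detected and notified over CP-IM
        cp.notify_breach(im, breach)
        termination = im.emit_termination()            # step [608]: termination notification
        delete_event = cp.on_termination(termination)
        ps.on_delete_task(delete_event)                # step [610]: halt job and scale in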
[0095] Thereafter, the process [600] ends at step [612].
[0096] The present disclosure may further relate to a non-transitory computer readable storage medium storing one or more instructions for handling resource constraints in a network, the instructions including executable code which, when executed by one or more units of a system [300], causes a transceiver unit [302] of the system [300] to receive, at a capacity management platform (CP) [1090], an instantiation event associated with a Network Function (NF) from an Inventory Manager (IM). Further, the executable code, when executed, causes a processing unit [304] of the system [300] to generate, at the CP [1090], a create task event based on the instantiation event. Further, the executable code, when executed, causes the transceiver unit [302] to transmit, from the CP [1090] to a scheduler service, a command based on the create task event. The executable code, when further executed, causes the transceiver unit [302] to receive, at the CP [1090], a termination event associated with the NF, from the IM. Also, the executable code, when executed, causes the transceiver unit [302] to transmit, from the CP [1090], a delete task event based on the termination event, to the scheduler service. Furthermore, the executable code, when executed, causes the processing unit [304] to halt, at the scheduler service, a job associated with the create task event, based on the delete task event.
[0097] As is evident from the above, the present disclosure provides a technically advanced solution for handling resource constraints in a network. More particularly, the present solution provides a harmonious collaboration between capacity management platform (CMP/CP) and inventory manager (IM) services. Further, the present solution smoothens the execution of task creation, modification, and deletion events. Also, the present solution transmits the task creation, modification, and deletion events to the platform scheduler (PS) microservice to ensure that resource constraints are comprehensively addressed. Furthermore, the present solution maintains the system's optimal operation even in the presence of breached scenarios.
[0098] While considerable emphasis has been placed herein on the disclosed implementations, it will be appreciated that many implementations can be made and that many changes can be made to the implementations without departing from the principles of the present disclosure. These and other changes in the implementations of the present disclosure will be apparent to those skilled in the art, whereby it is to be understood that the foregoing descriptive matter is to be regarded as illustrative and non-limiting.
[0099] Further, in accordance with the present disclosure, it is to be acknowledged that the functionality described for the various components/units can be implemented interchangeably. While specific embodiments may disclose a particular functionality of these units for clarity, it is recognized that various configurations and combinations thereof are within the scope of the disclosure. The functionality of specific units as disclosed in the disclosure should not be construed as limiting the scope of the present disclosure. Consequently, alternative arrangements and substitutions of units, provided they achieve the intended functionality described herein, are considered to be encompassed within the scope of the present disclosure.
We Claim:
1. A method [400] for handling resource constraints in a network, the method
comprising:
- receiving, by a transceiver unit [302], at a capacity management
platform (CP), an instantiation event associated with a network
function (NF) from an Inventory Manager (IM);
- generating, by a processing unit [304], at the CP, a create task event
based on the instantiation event;
- transmitting, by the transceiver unit [302], from the CP to a scheduler
service, the create task event;
- receiving, by the transceiver unit [302] at the CP, a termination event
associated with the NF, from the IM;
- transmitting, by the transceiver unit [302] from the CP, a delete task
event based on the termination event, to the scheduler service; and
- halting, by the processing unit [304] at the scheduler service, a job
associated with the create task event, based on the delete task event.
2. The method [400] as claimed in claim 1, wherein the NF comprises at least
one of a Virtual Network Function (VNF), a Virtual Network Function
Component (VNFC), a containerized Network Function (CNF), and a
containerized Network Function Component (CNFC).
3. The method [400] as claimed in claim 1, wherein, in response to the create
task event, the method comprises performing, by the processing unit [304]
at the scheduler service, a scale-out operation for the NF.
4. The method [400] as claimed in claim 1, wherein in response to the delete
task event, the method comprises performing, by the processing unit [304]
at the scheduler service, a scale-in operation for the NF.
5. The method [400] as claimed in claim 1, wherein the create task event
comprises one of a creation of the job and a modification of the job at the
scheduler service.
6. The method as claimed in claim 1, wherein prior to transmitting the delete
task event to the scheduler service, a detection unit [306] detects a breach
condition associated with the create task event at the CP.
7. A system [300] for handling resource constraints in a network, the system
comprising:
- a transceiver unit [302] configured to:
o receive, at a capacity management platform (CP), an
instantiation event associated with a Network Function from an
Inventory Manager (IM);
- a processing unit [304] connected to at least the transceiver unit [302],
the processing unit [304] is configured to:
o generate, at the CP, a create task event based on the instantiation
event;
- the transceiver unit [302] further configured to:
o transmit, from the CP to a scheduler service, the create task
event;
o receive, at the CP, a termination event associated with the NF,
from the IM;
o transmit, from the CP, a delete task event based on the
termination event, to the scheduler service; and
- the processing unit [304] further configured to:
o halt, at the scheduler service, a job associated with the create task
event, based on the delete task event.
8. The system [300] as claimed in claim 7, wherein the NF comprises at least
one of a Virtual Network Function (VNF), a Virtual Network Function
Component (VNFC), a Cloud-Native Network Function (CNF), and a
Cloud-Native Network Function Component (CNFC).
9. The system [300] as claimed in claim 7, wherein, in response to the create
task event, the processing unit [304] is configured to perform at the
scheduler service, a scale-out operation for the NF.
10. The system [300] as claimed in claim 7, wherein, in response to the delete
task event, the processing unit [304] is configured to perform at the
scheduler service, a scale-in operation for the NF.
11. The system [300] as claimed in claim 7, wherein the create task event
comprises one of a creation of the job and a modification of the job at the
scheduler service.
12. The system [300] as claimed in claim 7, wherein prior to transmitting the delete task event to the scheduler service, the system comprises a detection unit [306] configured to detect a breach condition associated with the create task event at the CP.

Documents

Application Documents

# Name Date
1 202321064701-STATEMENT OF UNDERTAKING (FORM 3) [26-09-2023(online)].pdf 2023-09-26
2 202321064701-PROVISIONAL SPECIFICATION [26-09-2023(online)].pdf 2023-09-26
3 202321064701-POWER OF AUTHORITY [26-09-2023(online)].pdf 2023-09-26
4 202321064701-FORM 1 [26-09-2023(online)].pdf 2023-09-26
5 202321064701-FIGURE OF ABSTRACT [26-09-2023(online)].pdf 2023-09-26
6 202321064701-DRAWINGS [26-09-2023(online)].pdf 2023-09-26
7 202321064701-Proof of Right [09-02-2024(online)].pdf 2024-02-09
8 202321064701-FORM-5 [26-09-2024(online)].pdf 2024-09-26
9 202321064701-ENDORSEMENT BY INVENTORS [26-09-2024(online)].pdf 2024-09-26
10 202321064701-DRAWING [26-09-2024(online)].pdf 2024-09-26
11 202321064701-CORRESPONDENCE-OTHERS [26-09-2024(online)].pdf 2024-09-26
12 202321064701-COMPLETE SPECIFICATION [26-09-2024(online)].pdf 2024-09-26
13 202321064701-FORM 3 [08-10-2024(online)].pdf 2024-10-08
14 202321064701-Request Letter-Correspondence [09-10-2024(online)].pdf 2024-10-09
15 202321064701-Power of Attorney [09-10-2024(online)].pdf 2024-10-09
16 202321064701-Form 1 (Submitted on date of filing) [09-10-2024(online)].pdf 2024-10-09
17 202321064701-Covering Letter [09-10-2024(online)].pdf 2024-10-09
18 202321064701-CERTIFIED COPIES TRANSMISSION TO IB [09-10-2024(online)].pdf 2024-10-09
19 Abstract.jpg 2024-11-07
20 202321064701-ORIGINAL UR 6(1A) FORM 1 & 26-070125.pdf 2025-01-14