
Method And System For Managing A Network Function

Abstract: The present disclosure relates to a method and a system for managing a network function. The present disclosure encompasses receiving a resource threshold event associated with at least one network function from an event routing manager (ERM) module [316], wherein the resource threshold event at least comprises resource load information; retrieving, from a database, a predefined scaling policy associated with the at least one network function, wherein the predefined scaling policy comprises at least one of a set of threshold parameters and a set of hysteresis rules; computing a hysteresis evaluation based on the received resource load information and the predefined scaling policy; determining whether the computed hysteresis evaluation breaches the set of threshold parameters; and transmitting a scaling request to a policy execution engine (PEEGN) module to mitigate breach of the computed hysteresis evaluation. [FIG. 4]


Patent Information

Filing Date: 25 September 2023
Publication Number: 14/2025
Publication Type: INA
Invention Field: COMPUTER SCIENCE

Applicants

Jio Platforms Limited
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India.

Inventors

1. Aayush Bhatnagar
Reliance Corporate Park, Thane- Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
2. Ankit Murarka
Reliance Corporate Park, Thane- Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
3. Rizwan Ahmad
Reliance Corporate Park, Thane- Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
4. Kapil Gill
Reliance Corporate Park, Thane- Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
5. Arpit Jain
Reliance Corporate Park, Thane- Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
6. Shashank Bhushan
Reliance Corporate Park, Thane- Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
7. Jugal Kishore
Reliance Corporate Park, Thane- Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
8. Meenakshi Sarohi
Reliance Corporate Park, Thane- Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
9. Kumar Debashish
Reliance Corporate Park, Thane- Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
10. Supriya Kaushik De
Reliance Corporate Park, Thane- Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
11. Gaurav Kumar
Reliance Corporate Park, Thane- Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
12. Kishan Sahu
Reliance Corporate Park, Thane- Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
13. Gaurav Saxena
Reliance Corporate Park, Thane- Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
14. Vinay Gayki
Reliance Corporate Park, Thane- Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
15. Mohit Bhanwria
Reliance Corporate Park, Thane- Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
16. Durgesh Kumar
Reliance Corporate Park, Thane- Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
17. Rahul Kumar
Reliance Corporate Park, Thane- Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.

Specification

FORM 2
THE PATENTS ACT, 1970
(39 OF 1970)
&
THE PATENT RULES, 2003
COMPLETE SPECIFICATION
(See section 10 and rule 13)
“METHOD AND SYSTEM FOR MANAGING A NETWORK
FUNCTION”
We, Jio Platforms Limited, an Indian National, of Office - 101, Saffron, Nr. Centre
Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India.
The following specification particularly describes the invention and the manner in
which it is to be performed.
METHOD AND SYSTEM FOR MANAGING A NETWORK FUNCTION
FIELD OF INVENTION
[0001] The present disclosure generally relates to network performance
management systems. More particularly, embodiments of the present disclosure
relate to methods and systems for managing a network function.
BACKGROUND
[0002] The following description of the related art is intended to provide
background information pertaining to the field of the disclosure. This section may
include certain aspects of the art that may be related to various features of the
present disclosure. However, it should be appreciated that this section is used only
to enhance the understanding of the reader with respect to the present disclosure,
and not as admissions of the prior art.
[0003] Wireless communication technology has rapidly evolved over the past few
decades, with each generation bringing significant improvements and
advancements. The first generation of wireless communication technology was
based on analog technology and offered only voice services. However, with the
advent of the second-generation (2G) technology, digital communication and data
services became possible, and text messaging was introduced. 3G technology
marked the introduction of high-speed internet access, mobile video calling, and
location-based services. The fourth generation (4G) technology revolutionized
wireless communication with faster data speeds, better network coverage, and
improved security. Currently, the fifth generation (5G) technology is being
deployed, promising even faster data speeds, low latency, and the ability to connect
multiple devices simultaneously. With each generation, wireless communication
technology has become more advanced, sophisticated, and capable of delivering
more services to its users.
[0004] Virtual Network Functions (VNFs) and Cloud Network Functions (CNFs)
play a critical role in modern networking infrastructure, enabling dynamic and
scalable network services. To efficiently manage VNFs/ Virtual Network Function
Components (VNFCs) and CNFs/Cloud Network Function Components (CNFCs),
it is essential to have a mechanism for auto-scaling that responds to changes in
resource utilization in real-time. The Network Platform Decision Analytics (NPDA)
micro-service offers an innovative solution by providing auto-scaling capabilities
through seamless interaction with the Capacity Management Platform (CMP)
microservice.
[0005] In traditional networking systems, there is a lack of real-time management
for VNFs/VNFCs and CNFs/CNFCs, making it challenging to respond to resource
utilization changes promptly. The CMP microservice not only tracks the resource
details of these network functions but also serves as the conduit for initiating
communication with the NPDA micro-service. This interaction is crucial for
enabling dynamic auto-scaling.
[0006] Further, over a period of time, various solutions have been developed to
address resource management operations in microservices architecture. However,
there are certain challenges with the existing solutions. For example, the existing
solutions do not perform in real-time.
[0007] Thus, there exists an imperative need in the art to provide a method and
system for managing a network function that addresses the challenges associated
with resource management in microservices architecture, which the present
disclosure aims to address.
SUMMARY
[0008] This section is provided to introduce certain aspects of the present disclosure
in a simplified form that are further described below in the detailed description.
This summary is not intended to identify the key features or the scope of the claimed
subject matter.
[0009] An aspect of the present disclosure may relate to a method for managing a
network function. The method includes receiving, by a transceiver unit at a network
function virtualization platform decision and analytics (NPDA) module, a resource
threshold event associated with at least one network function from an event routing
manager (ERM) module, wherein the resource threshold event at least comprises
resource load information. The method further includes retrieving, by a retrieving
unit at the NPDA module, from a database, a predefined scaling policy associated
with the at least one network function, wherein the predefined scaling policy
comprises at least one of a set of threshold parameters and a set of hysteresis rules.
The method further includes computing, by a processing unit at the NPDA module,
a hysteresis evaluation based on the received resource load information and the
predefined scaling policy. The method further includes determining, by a
determining unit at the NPDA module, whether the computed hysteresis evaluation
breaches the set of threshold parameters. Finally, the method includes transmitting,
by the transceiver unit at the NPDA module, a scaling request to a policy execution
engine (PEEGN) module to mitigate breach of the computed hysteresis evaluation.
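The five-step method described above can be sketched as a minimal decision loop. The module names (NPDA, ERM, PEEGN) follow the disclosure, but every class, function, field, and threshold value in this sketch is an illustrative assumption rather than the actual implementation:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ScalingPolicy:
    # Threshold parameters and a hysteresis rule for one network function
    scale_up_threshold: float    # load fraction above which to scale up
    scale_down_threshold: float  # load fraction below which to scale down
    min_breach_count: int        # consecutive breaches required before acting

@dataclass
class ThresholdEvent:
    nf_id: str   # identifies the VNF/VNFC or CNF/CNFC
    load: float  # resource load information carried by the event

def handle_threshold_event(event: ThresholdEvent,
                           policy_db: dict,
                           breach_counts: dict) -> Optional[dict]:
    """Retrieve the predefined policy, evaluate hysteresis, check for a
    breach, and build a scaling request for the PEEGN module."""
    policy = policy_db[event.nf_id]          # retrieve predefined scaling policy
    if event.load >= policy.scale_up_threshold:
        direction = "scale_up"
    elif event.load <= policy.scale_down_threshold:
        direction = "scale_down"
    else:
        breach_counts[event.nf_id] = 0       # load inside the band: reset
        return None
    # Hysteresis evaluation: require several consecutive breaches
    breach_counts[event.nf_id] = breach_counts.get(event.nf_id, 0) + 1
    if breach_counts[event.nf_id] < policy.min_breach_count:
        return None                          # not yet a confirmed breach
    breach_counts[event.nf_id] = 0
    return {"nf_id": event.nf_id, "action": direction}  # scaling request
```

A first over-threshold sample returns no request here; only a repeated breach produces one, which mirrors the role the hysteresis rules play in preventing reactions to momentary spikes.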
[0010] In an exemplary aspect of the present disclosure, the method further
comprises executing, by the processing unit at the PEEGN module, a scaling action
on the at least one network function, based on the scaling request.
[0011] In an exemplary aspect of the present disclosure, the scaling action
comprises at least one of an auto-scale up, and an auto-scale down of the at least
one network function.
[0012] In an exemplary aspect of the present disclosure, the at least one network
function is selected from a group consisting of Virtual Network Functions (VNFs),
Virtual Network Function Components (VNFCs), Container Network functions
(CNFs), and Container Network Function Components (CNFCs).
[0013] In an exemplary aspect of the present disclosure, the hysteresis evaluation
comprises comparing, by a comparing unit, the resource load information with
historical resource usage data to prevent frequent scaling operations.
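One possible reading of this comparison, sketched under the assumption that the "historical resource usage data" is a sliding window of recent load samples (the window size and all names here are illustrative, not the disclosed implementation):

```python
from collections import deque

def smoothed_breach(history: deque, sample: float,
                    threshold: float, window: int = 5) -> bool:
    """Compare the new load sample against recent history: only the
    windowed average, not a single spike, may signal a breach."""
    history.append(sample)
    while len(history) > window:
        history.popleft()  # keep only the most recent samples
    average = sum(history) / len(history)
    return average > threshold

history = deque()
# A single spike to 0.95 after light load does not breach a 0.8 threshold...
for sample in (0.30, 0.35, 0.95):
    spike_result = smoothed_breach(history, sample, threshold=0.8)
# ...but sustained high load does.
for sample in (0.90, 0.92, 0.95, 0.94, 0.93):
    sustained_result = smoothed_breach(history, sample, threshold=0.8)
```

Averaging over history is only one way to realize the comparison; any scheme that suppresses reactions to transient fluctuations would serve the same purpose of preventing frequent scaling operations.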
[0014] In an exemplary aspect of the present disclosure, the resource threshold
event is received at the ERM module from a capacity monitoring manager (CMM)
microservice.
[0015] Another aspect of the present disclosure may relate to a system for
managing a network function. The system comprises a network function
virtualization platform decision and analytics (NPDA) module. The NPDA module
comprises a transceiver unit configured to receive a resource threshold event
associated with at least one network function from an event routing manager (ERM)
module, wherein the resource threshold event includes resource load information.
The NPDA module further comprises a retrieving unit configured to retrieve, from
a database, a predefined scaling policy associated with the at least one network
function, wherein the predefined scaling policy comprises at least one of a set of
threshold parameters and a set of hysteresis rules. The NPDA module further
comprises a processing unit configured to compute a hysteresis evaluation based on
the received resource load information and the predefined scaling policy. The
NPDA module further comprises a determining unit configured to determine
whether the computed hysteresis evaluation breaches the set of threshold
parameters. The transceiver unit is further configured to transmit a scaling request
to a policy execution engine (PEEGN) module to mitigate breach of the computed
hysteresis evaluation.
[0016] Yet another aspect of the present disclosure may relate to a non-transitory
computer readable storage medium storing instructions for managing a network
function, the instructions include executable code which, when executed by one or
more units of a system, causes a transceiver unit of the system to receive a resource
threshold event associated with at least one network function from an event routing
manager (ERM) module, wherein the resource threshold event includes resource
load information. The instructions when executed further cause a retrieving unit of
the system to retrieve, from a database, a predefined scaling policy associated with
the at least one network function, wherein the predefined scaling policy comprises
at least one of a set of threshold parameters and a set of hysteresis rules. The
instructions when executed further cause a processing unit of the system to
compute a hysteresis evaluation based on the received resource load information
and the predefined scaling policy. The instructions when executed further cause a
determining unit of the system to determine whether the computed hysteresis
evaluation breaches the set of threshold parameters. The instructions when executed
further cause the transceiver unit of the system to transmit a scaling request to a
policy execution engine (PEEGN) module to mitigate breach of the computed
hysteresis evaluation.
OBJECTS OF THE DISCLOSURE
[0017] Some of the objects of the present disclosure, which at least one
embodiment disclosed herein satisfies are listed herein below.
[0018] It is an object of the present disclosure to provide a system and a method for
managing a network function.
[0019] It is an object of the present disclosure to provide a system and a method to
define a trigger point for evaluating a threshold breach based on pre-defined policies
for a VNF/VNFC or CNF/CNFC.
[0020] It is another object of the present disclosure to provide a solution to keep
track of the VNF/VNFC or CNF/CNFC load and inform the NPDA micro service
of the same in real-time.
BRIEF DESCRIPTION OF THE DRAWINGS
[0021] The accompanying drawings, which are incorporated herein, and constitute
a part of this disclosure, illustrate exemplary embodiments of the disclosed methods
and systems in which like reference numerals refer to the same parts throughout the
different drawings. Components in the drawings are not necessarily to scale,
emphasis instead being placed upon clearly illustrating the principles of the present
disclosure. Also, the embodiments shown in the figures are not to be construed as
limiting the disclosure, but the possible variants of the method and system
according to the disclosure are illustrated herein to highlight the advantages of the
disclosure. It will be appreciated by those skilled in the art that disclosure of such
drawings includes disclosure of electrical components or circuitry commonly used
to implement such components.
[0022] FIG. 1 illustrates an exemplary block diagram representation of a
management and orchestration (MANO) architecture [100], in accordance with
exemplary implementation of the present disclosure.
[0023] FIG. 2 illustrates an exemplary block diagram of a computing device upon
which the features of the present disclosure may be implemented in accordance with
exemplary implementation of the present disclosure.
[0024] FIG. 3 illustrates an exemplary block diagram of a system for managing a
network function, in accordance with exemplary implementations of the present
disclosure.
[0025] FIG. 4 illustrates a method flow diagram for managing a network function
in accordance with exemplary implementations of the present disclosure.
[0026] FIG. 5 illustrates a process flow diagram for managing a network function
in accordance with exemplary implementations of the present disclosure.
[0027] The foregoing shall be more apparent from the following more detailed
description of the disclosure.
DETAILED DESCRIPTION
[0028] In the following description, for the purposes of explanation, various
specific details are set forth in order to provide a thorough understanding of
embodiments of the present disclosure. It will be apparent, however, that
embodiments of the present disclosure may be practiced without these specific
details. Several features described hereafter may each be used independently of one
another or with any combination of other features. An individual feature may not
address any of the problems discussed above or might address only some of the
problems discussed above.
[0029] The ensuing description provides exemplary embodiments only, and is not
intended to limit the scope, applicability, or configuration of the disclosure. Rather,
the ensuing description of the exemplary embodiments will provide those skilled in
the art with an enabling description for implementing an exemplary embodiment.
It should be understood that various changes may be made in the function and
arrangement of elements without departing from the spirit and scope of the
disclosure as set forth.
[0030] Specific details are given in the following description to provide a thorough
understanding of the embodiments. However, it will be understood by one of
ordinary skill in the art that the embodiments may be practiced without these
specific details. For example, circuits, systems, processes, and other components
may be shown as components in block diagram form in order not to obscure the
embodiments in unnecessary detail.
[0031] Also, it is noted that individual embodiments may be described as a process
which is depicted as a flowchart, a flow diagram, a data flow diagram, a structure
diagram, or a block diagram. Although a flowchart may describe the operations as
a sequential process, many of the operations may be performed in parallel or
concurrently. In addition, the order of the operations may be re-arranged. A process
is terminated when its operations are completed but could have additional steps not
included in a figure.
[0032] The word “exemplary” and/or “demonstrative” is used herein to mean
serving as an example, instance, or illustration. For the avoidance of doubt, the
subject matter disclosed herein is not limited by such examples. In addition, any
aspect or design described herein as “exemplary” and/or “demonstrative” is not
necessarily to be construed as preferred or advantageous over other aspects or
designs, nor is it meant to preclude equivalent exemplary structures and techniques
known to those of ordinary skill in the art. Furthermore, to the extent that the terms
“includes,” “has,” “contains,” and other similar words are used in either the detailed
description or the claims, such terms are intended to be inclusive—in a manner
similar to the term “comprising” as an open transition word—without precluding
any additional or other elements.
[0033] As used herein, a “processing unit” or “processor” or “operating processor”
includes one or more processors, wherein processor refers to any logic circuitry for
processing instructions. A processor may be a general-purpose processor, a special
purpose processor, a conventional processor, a digital signal processor, a plurality
of microprocessors, one or more microprocessors in association with a Digital
Signal Processing (DSP) core, a controller, a microcontroller, Application Specific
Integrated Circuits, Field Programmable Gate Array circuits, any other type of
integrated circuits, etc. The processor may perform signal coding, data processing,
input/output processing, and/or any other functionality that enables the working of
the system according to the present disclosure. More specifically, the processor or
processing unit is a hardware processor.
[0034] As used herein, “a user equipment”, “a user device”, “a smart-user-device”,
“a smart-device”, “an electronic device”, “a mobile device”, “a handheld device”,
“a wireless communication device”, “a mobile communication device”, “a
communication device” may be any electrical, electronic and/or computing device
or equipment, capable of implementing the features of the present disclosure. The
user equipment/device may include, but is not limited to, a mobile phone, smart
phone, laptop, a general-purpose computer, desktop, personal digital assistant,
tablet computer, wearable device or any other computing device which is capable
of implementing the features of the present disclosure. Also, the user device may
contain at least one input means configured to receive an input from unit(s) which
are required to implement the features of the present disclosure.
[0035] As used herein, “storage unit” or “memory unit” refers to a machine or
computer-readable medium including any mechanism for storing information in a
form readable by a computer or similar machine. For example, a computer-readable
medium includes read-only memory (“ROM”), random access memory (“RAM”),
magnetic disk storage media, optical storage media, flash memory devices or other
types of machine-accessible storage media. The storage unit stores at least the data
that may be required by one or more units of the system to perform their respective
functions.
[0036] As used herein “interface” or “user interface” refers to a shared boundary
across which two or more separate components of a system exchange information
or data. The interface may also be referred to a set of rules or protocols that define
communication or interaction of one or more modules or one or more units with
each other, which also includes the methods, functions, or procedures that may be
called.
[0037] All modules, units, components used herein, unless explicitly excluded
herein, may be software modules or hardware processors, the processors being a
general-purpose processor, a special purpose processor, a conventional processor, a
digital signal processor (DSP), a plurality of microprocessors, one or more
microprocessors in association with a DSP core, a controller, a microcontroller,
Application Specific Integrated Circuits (ASIC), Field Programmable Gate Array
circuits (FPGA), any other type of integrated circuits, etc.
[0038] As used herein, the transceiver unit includes at least one receiver and at least
one transmitter configured respectively for receiving and transmitting data, signals,
information, or a combination thereof between units/components within the system
and/or connected with the system.
[0039] As used herein, a hysteresis evaluation refers to a process used in decision-
making systems where actions are triggered based on certain threshold conditions,
but with a delay or "buffer" to avoid rapid fluctuations between two states.
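As a generic illustration of this definition (a sketch only, not the disclosed implementation), a trigger with distinct raise and clear thresholds does not flip state while the value sits in the buffer zone between them:

```python
class HysteresisTrigger:
    """Two-threshold evaluation: the state changes only when the value
    crosses an outer threshold, so small fluctuations inside the
    buffer zone between the thresholds cause no action."""
    def __init__(self, raise_at: float, clear_at: float):
        assert clear_at < raise_at  # the gap between them is the buffer
        self.raise_at = raise_at
        self.clear_at = clear_at
        self.active = False  # e.g. a "scaled up" state

    def update(self, value: float) -> bool:
        if not self.active and value >= self.raise_at:
            self.active = True
        elif self.active and value <= self.clear_at:
            self.active = False
        return self.active

trigger = HysteresisTrigger(raise_at=0.8, clear_at=0.6)
states = [trigger.update(v) for v in (0.5, 0.85, 0.7, 0.75, 0.55)]
# After triggering at 0.85, the load dips to 0.7 and 0.75 without
# clearing the state; only the drop below 0.6 deactivates it.
```

With a single shared threshold of 0.8, the same sample sequence would toggle the state on every crossing; the two-threshold form is what suppresses that flapping.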
[0040] As discussed in the background section, the current known solutions have
several shortcomings. The present disclosure aims to overcome the above-mentioned
and other existing problems in this field of technology by providing a
method and system for managing a network function.
[0041] FIG. 1 illustrates an exemplary block diagram representation of a
management and orchestration (MANO) architecture [100], in accordance with
exemplary implementation of the present disclosure. The MANO architecture [100]
is developed for managing telecom cloud infrastructure automatically, managing
design or deployment design, managing instantiation of a network node(s) etc. The
MANO architecture [100] deploys the network node(s) in the form of Virtual
Network Function (VNF) and Cloud-native/ Container Network Function (CNF).
The system may comprise one or more components of the MANO architecture. The
MANO architecture [100] is used to auto-instantiate the VNFs into the
corresponding environment of the present disclosure so that it could help in
onboarding other vendor(s) CNFs and VNFs to the platform. In an implementation,
the system comprises a NFV Platform Decision Analytics (NPDA) module [1096]
component.
[0042] As shown in FIG. 1, the MANO architecture [100] comprises a user
interface layer, a network function virtualization (NFV) and software defined
network (SDN) design function module [104]; a platforms foundation services
module [106], a platform core services module [108] and a platform resource
adapters and utilities module [112], wherein all the components are assumed to be
connected to each other in a manner as obvious to the person skilled in the art for
15 implementing features of the present disclosure.
[0043] The NFV and SDN design function module [104] further comprises a VNF
lifecycle manager (compute) [1042]; a VNF catalogue [1044]; a network services
catalogue [1046]; a network slicing and service chaining manager [1048]; a
physical and virtual resource manager [1050] and a CNF lifecycle manager [1052].
The VNF lifecycle manager (compute) [1042] is responsible for deciding on which
server of the communication network the microservice will be instantiated. The
VNF lifecycle manager (compute) [1042] manages the overall flow of incoming/
outgoing requests during interaction with the user. The VNF lifecycle manager
(compute) [1042] is responsible for determining which sequence is to be followed
for executing a process; for example, the sequence for executing processes P1 and
P2 in an AMF network function of a communication network (such as a 5G
network). The VNF catalogue [1044] stores the metadata of all the
VNFs (also CNFs in some cases). The network services catalogue [1046] stores the
information of the services that need to be run. The network slicing and service
chaining manager [1048] manages the slicing (an ordered and connected sequence
of network service/ network functions (NFs)) that must be applied to a specific
networked data packet. The physical and virtual resource manager [1050] stores the
logical and physical inventory of the VNFs. Just like the VNF lifecycle manager
(compute) [1042], the CNF lifecycle manager [1052] is similarly used for the CNFs
lifecycle management.
[0044] The platforms foundation services module [106] further comprises a
microservices elastic load balancer [1062]; an identity & access manager [1064]; a
command line interface (CLI) [1066]; a central logging manager [1068]; and an
event routing manager [1070]. The microservices elastic load balancer [1062] is
used for maintaining the load balancing of the requests for the services. The identity
& access manager [1064] is used for logging purposes. The command line interface
(CLI) [1066] is used to provide commands to execute certain processes which
requires changes during the run time. The central logging manager [1068] is
responsible for keeping the logs of every service. These logs are generated by the
MANO architecture [100]. These logs are used for debugging purposes. The event
routing manager [1070] is responsible for routing the events i.e., the application
programming interface (API) hits to the corresponding service.
[0045] The platforms core service module [108] further comprises an NFV
infrastructure monitoring manager [1082]; an assure manager [1084]; a
performance manager [1086]; the PEEGN module [1088]; a capacity monitoring
manager (CMM) microservice [1090] (alternatively referred to as CP microservice
[1090], and capacity management platform (CMP) microservice [1090]); a release
management (mgmt.) repository (RMR) [1092]; a configuration manager & (GCT)
[1094]; an NFV platform decision analytics (NPDA) [1096]; a platform NoSQL DB
[1098]; a platform schedulers and cron jobs [1100]; a VNF backup & upgrade
manager [1102]; a micro service auditor [1104]; and a platform operations,
administration and maintenance manager [1106]. The NFV infrastructure
monitoring manager [1082] monitors the infrastructure part of the NFs, for example,
any metrics such as CPU utilization by the VNF. The assure manager [1084] is
14
responsible for supervising the alarms the vendor is generating. The performance
manager [1086] is responsible for managing the performance counters. The PEEGN
module [1088] is responsible for managing all the policies. The CMM
microservice [1090] is responsible for sending the request to the PEEGN module
[1088]. The release management (mgmt.) repository (RMR) [1092] is responsible
for managing the releases and the images of all the vendor network nodes. The
configuration manager & (GCT) [1094] manages the configuration and GCT of all
the vendors. The NFV platform decision analytics (NPDA) [1096] helps in deciding
the priority of using the network resources. It is further noted that the PEEGN
module [1088], the configuration manager & (GCT) [1094] and the NPDA [1096]
work together. The platform NoSQL DB [1098] is a database for storing all the
inventory (both physical and logical) as well as the metadata of the VNFs and
CNFs. The platform schedulers and cron jobs [1100] schedule tasks such as, but not
limited to, triggering an event, traversing the network graph, etc. The VNF backup
& upgrade manager [1102] takes backups of the images and binaries of the VNFs and
the CNFs and produces those backups on demand in case of server failure. The
micro service auditor [1104] audits the microservices. For example, in a
hypothetical case where instances not instantiated by the MANO architecture [100]
are using the network resources, the micro service auditor [1104] audits and reports
the same so that resources can be released for services running in the MANO
architecture [100], thereby assuring that services only run on the MANO
architecture [100]. The platform operations, administration, and maintenance
manager [1106] handles newer instances as they are spawned.
[0046] The platform resource adapters and utilities module [112] further comprises
a platform external API adaptor and gateway [1122]; a generic decoder and indexer
(XML, CSV, JSON) [1124]; a docker service adaptor [1126]; an API adapter [1128];
and a NFV gateway [1130]. The platform external API adaptor and gateway [1122]
is responsible for handling the external services (to the MANO architecture [100])
that require the network resources. The generic decoder and indexer (XML, CSV,
JSON) [1124] directly gets the data of the vendor system in the XML, CSV, JSON
format. The docker service adaptor [1126] is the interface provided between the
telecom cloud and the MANO architecture [100] for communication. The API
adapter [1128] is used to connect with the virtual machines (VMs). The NFV
gateway [1130] is responsible for providing the path to each service going
to/incoming from the MANO architecture [100].
[0047] FIG. 2 illustrates an exemplary block diagram of a computing device [200]
upon which the features of the present disclosure may be implemented in
accordance with exemplary implementation of the present disclosure. In an
implementation, the computing device [200] may also implement a method for
managing a network function utilising the system [300]. In another implementation,
the computing device [200] itself implements the method for managing a network
function using one or more units configured within the computing device [200],
wherein said one or more units are capable of implementing the features as
disclosed in the present disclosure.
[0048] The computing device [200] may include a bus [202] or other
communication mechanism for communicating information, and a processor [204]
coupled with bus [202] for processing information. The processor [204] may be, for
example, a general-purpose microprocessor. The computing device [200] may also
include a main memory [206], such as a random-access memory (RAM), or other
dynamic storage device, coupled to the bus [202] for storing information and
instructions to be executed by the processor [204]. The main memory [206] also
may be used for storing temporary variables or other intermediate information
during execution of the instructions to be executed by the processor [204]. Such
instructions, when stored in non-transitory storage media accessible to the processor
[204], render the computing device [200] into a special-purpose machine that is
customized to perform the operations specified in the instructions. The computing
device [200] further includes a read only memory (ROM) [208] or other static
storage device coupled to the bus [202] for storing static information and
instructions for the processor [204].
[0049] A storage device [210], such as a magnetic disk, optical disk, or solid-state
drive is provided and coupled to the bus [202] for storing information and
instructions. The computing device [200] may be coupled via the bus [202] to a
display [212], such as a cathode ray tube (CRT), Liquid Crystal Display (LCD),
Light Emitting Diode (LED) display, Organic LED (OLED) display, etc. for
displaying information to a computer user. An input device [214], including
alphanumeric and other keys, touch screen input means, etc. may be coupled to the
bus [202] for communicating information and command selections to the processor
10 [204]. Another type of user input device may be a cursor controller [216], such as a
mouse, a trackball, or cursor direction keys, for communicating direction
information and command selections to the processor [204], and for controlling
cursor movement on the display [212]. The input device typically has two degrees
of freedom in two axes, a first axis (e.g., x) and a second axis (e.g., y), that allow
15 the device to specify positions in a plane.
[0050] The computing device [200] may implement the techniques described
herein using customized hard-wired logic, one or more ASICs or FPGAs, firmware,
and/or program logic which in combination with the computing device [200] causes
or programs the computing device [200] to be a special-purpose machine.
According to one implementation, the techniques herein are performed by the
computing device [200] in response to the processor [204] executing one or more
sequences of one or more instructions contained in the main memory [206]. Such
instructions may be read into the main memory [206] from another storage medium,
such as the storage device [210]. Execution of the sequences of instructions
contained in the main memory [206] causes the processor [204] to perform the
process steps described herein. In alternative implementations of the present
disclosure, hard-wired circuitry may be used in place of or in combination with
software instructions.
[0051] The computing device [200] also may include a communication interface
[218] coupled to the bus [202]. The communication interface [218] provides a
two-way data communication coupling to a network link [220] that is connected to a
local network [222]. For example, the communication interface [218] may be an
integrated services digital network (ISDN) card, cable modem, satellite modem, or
a modem to provide a data communication connection to a corresponding type of
telephone line. As another example, the communication interface [218] may be a
local area network (LAN) card to provide a data communication connection to a
compatible LAN. Wireless links may also be implemented. In any such
implementation, the communication interface [218] sends and receives electrical,
electromagnetic, or optical signals that carry digital data streams representing
various types of information.
[0052] The computing device [200] can send messages and receive data, including
program code, through the network(s), the network link [220] and the
communication interface [218]. In the Internet example, a server [230] might
transmit a requested code for an application program through the Internet [228], the
ISP [226], the local network [222], a host [224] and the communication interface
[218]. The received code may be executed by the processor [204] as it is received,
and/or stored in the storage device [210], or other non-volatile storage for later
execution.
[0053] The computing device [200] encompasses a wide range of electronic
devices capable of processing data and performing computations. Examples of the
computing device [200] include, but are not limited to, personal computers,
laptops, tablets, smartphones, servers, and embedded systems. The devices may
operate independently or as part of a network and can perform a variety of tasks
such as data storage, retrieval, and analysis. Additionally, the computing device [200]
may include peripheral devices, such as monitors, keyboards, and printers, as well
as integrated components within larger electronic systems, showcasing their
versatility in various technological applications.
[0054] Referring to FIG. 3, an exemplary block diagram of a system [300] for
managing a network function is shown, in accordance with the exemplary
implementations of the present disclosure. The system [300] comprises at least one
network function virtualization platform decision and analytics (NPDA) module
[1096]. The NPDA module [1096] comprises at least one transceiver unit [302], at
least one retrieving unit [304], at least one database [306], at least one processing
unit [308], at least one determining unit [310], at least one policy execution
engine (PEEGN) module [1088], at least one comparing unit [314], and at least one
event routing manager (ERM) module [316]. All of the components/units of the
system [300] are assumed to be connected to each other unless otherwise indicated
below. As shown in the figures, all units shown within the system [300] should also
be assumed to be connected to each other. Also, only a few units are shown in FIG. 3;
however, the system [300] may comprise multiple such units, or any number of said
units, as required to implement the features of the present disclosure. Further, in an
implementation, the system [300] may be present in a user device/user equipment to
implement the features of the present disclosure. The system [300] may be a part of
the user device or may be independent of, but in communication with, the user device
(also referred to herein as a UE). In another implementation, the system [300] may
reside in a server or a network entity. In yet another implementation, the system
[300] may reside partly in the server/network entity and partly in the user device.
[0055] The system [300] is configured for managing a network function, with the
help of the interconnection between the components/units of the system [300].
[0056] The system [300] comprises a network function virtualization platform
decision and analytics (NPDA) module [1096]. The NPDA module [1096] further
comprises a transceiver unit [302] which is configured to receive a resource
threshold event associated with at least one network function from an event routing
manager (ERM) module [316], wherein the resource threshold event includes
resource load information.
[0057] The transceiver unit [302] receives, from the event routing manager (ERM)
module [316], the resource threshold event associated with at least one network
function, which includes resource load information about the load capacity of
various resources, such as, but not limited to, CPU, RAM, storage, etc., in the at
least one network function. The resource threshold event indicates whether one or
more resources have exceeded their load capacity, which further helps the system in
allocating additional resources, thereby maintaining optimal system performance
and preventing potential failures.
[0058] For example, if CPU usage exceeds its specified resource threshold event,
additional virtual machines or containers might be provisioned. Similarly, if usage
drops significantly, resources might be scaled back to avoid over-provisioning.
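The threshold check described above can be sketched as a simple comparison. The event shape below is a minimal illustration, assuming hypothetical field names; it is not the format defined in the disclosure.

```python
# Hypothetical shape of a resource threshold event as forwarded via the ERM
# module [316]; all field names and values are illustrative assumptions.
resource_threshold_event = {
    "network_function": "vnf-firewall-01",
    "resource": "cpu",          # could equally be "ram" or "storage"
    "load_percent": 93.5,       # the resource load information
    "capacity_percent": 90.0,   # configured load capacity for this resource
}

# The event indicates whether the resource exceeded its load capacity, which
# would prompt the system to allocate additional resources.
exceeded = (resource_threshold_event["load_percent"]
            > resource_threshold_event["capacity_percent"])
```

Here, since the reported load (93.5%) is above the configured capacity (90%), the check signals that additional virtual machines or containers might be provisioned.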
[0059] In an exemplary aspect, the network function is selected from a group
consisting of Virtual Network Functions (VNFs), Virtual Network Function
Components (VNFCs), Container Network Functions (CNFs), and Container
Network Function Components (CNFCs).
[0060] In an exemplary aspect, the virtual network function (VNF) refers to a
network function module that operates in virtualized environments such as virtual
machines or containers. This virtualization allows for dynamic scaling and rapid
adaptation to changing network conditions while reducing hardware requirements.
[0061] In an exemplary aspect, the virtual network function component (VNFC)
refers to a sub-component within a virtual network function (VNF) that performs a
specific task or set of tasks related to the overall network function. VNFCs
decompose VNFs into smaller units, each responsible for unique functions, such as
packet inspection, policy enforcement, etc.
[0062] In an exemplary aspect, the containerized network function (CNF) refers to
a network function packaged as a portable container that includes all necessary
configurations. CNFs offer increased portability and scalability compared to
traditional network functions.
[0063] In an exemplary aspect, the Containerized Network Function Component
(CNFC) refers to a subcomponent of a Containerized Network Function (CNF) that
performs a specific task or set of tasks within the broader network function. CNFCs
are deployed in containers and have the same advantages as CNFs, including
efficient resource management.
[0064] In an exemplary aspect, the resource threshold event is received at the ERM
module [316] from a capacity monitoring manager (CMP) microservice [1090].
[0065] In an exemplary aspect, the resource threshold event is received using the
event routing manager (ERM) module [316], which is responsible for routing the
events, i.e., an application programming interface (API) hit, to the CMP
microservice [1090].
[0066] The NPDA module [1096] further comprises a retrieving unit [304]
configured to retrieve, from a database [306], a predefined scaling policy associated
with the at least one network function, wherein the predefined scaling policy
comprises at least one of a set of threshold parameters and a set of hysteresis rules.
[0067] The retrieving unit [304] retrieves the predefined scaling policy associated
with the at least one network function from a database [306]. In an exemplary aspect,
the predefined scaling policy includes specific guidelines that instruct how to scale
network resources in response to changes in load or utilization.
[0068] The predefined scaling policy includes at least one of a set of threshold
parameters, which are specific parameters defined by the network administrator
within a system that initiate certain responses when they are exceeded. In an
exemplary aspect, threshold parameters are used to determine whether to add or
remove resources based on resource load information. For example, the network
administrator sets the threshold parameter at 90% CPU utilization; if CPU
utilization exceeds this set threshold parameter of 90%, then the system [300]
may allocate additional resources, maintaining optimal system performance and
preventing potential failures.
[0069] The predefined scaling policy further includes at least one of a set of
hysteresis rules. The set of hysteresis rules are rules for adjusting resource
allocation parameters in real-time to manage system resources effectively, so as to
maintain system stability and performance.
[0070] The NPDA module [1096] further comprises a processing unit [308] which
is configured to compute a hysteresis evaluation based on the received resource load
information and the predefined scaling policy.
[0071] The processing unit [308] computes the hysteresis evaluation based on the
received resource load information and the predefined scaling policy. In an
exemplary aspect, the hysteresis evaluation is analysed and evaluated based on the
received resource load information, which states how much of the resources is
utilized, and helps in avoiding frequent scaling actions that could put unnecessary
load on the overall performance of the system.
[0072] Furthermore, the processing unit [308] computes the hysteresis evaluation
based on the predefined scaling policy by adjusting resource allocation parameters
in real-time to manage system resources effectively, so as to maintain system
stability and performance.
[0073] In an exemplary aspect, for the hysteresis evaluation the system [300]
further comprises a comparing unit [314] configured to compare the resource load
information with historical resource usage data to prevent frequent scaling
operations.
[0074] In order to compute the hysteresis evaluation, the comparing unit [314]
compares the resource load information with the historical resource usage data to
prevent frequent scaling operations. In an implementation, a trained model is
trained on the historical resource usage data. This comparison, by the comparing
unit [314], determines whether the current load is a temporary load fluctuation or a
significant trend, thereby preventing frequent and unnecessary scaling operations.
By comparing trained historical data with the resource load information, the system
[300] ensures that scaling actions are based on long-term recurring trends and
patterns rather than short-term changes, leading to more stable and efficient
resource management.
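A minimal sketch of such a hysteresis evaluation, assuming a sliding window over recent utilization samples — the function name and window logic are illustrative, not the disclosed implementation:

```python
def hysteresis_evaluation(current_load, history, threshold, sustained_samples=3):
    """Return True only when a threshold breach is a sustained trend.

    current_load      -- latest utilization sample, in percent
    history           -- recent utilization samples, oldest first
    threshold         -- scale-up threshold from the predefined scaling policy
    sustained_samples -- hysteresis rule: how long the breach must persist
    """
    window = (history + [current_load])[-sustained_samples:]
    # A single spike among otherwise normal samples is treated as a temporary
    # fluctuation; only a full run of breaching samples counts as a trend.
    return len(window) == sustained_samples and all(s > threshold for s in window)
```

Under these assumptions, a lone spike such as `hysteresis_evaluation(95, [40, 45], 90)` does not trigger scaling, while a sustained run such as `hysteresis_evaluation(95, [92, 93], 90)` does — which is exactly the "fluctuation versus trend" distinction the comparing unit [314] is described as making.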
[0075] The system [300] further comprises a determining unit [310] configured to
determine whether the computed hysteresis evaluation breaches the set of threshold
parameters.
[0076] The determining unit [310] determines whether the computed hysteresis
evaluation breaches the set of threshold parameters, which are specific parameters
defined by the network administrator within the system [300] that initiate certain
responses when they are exceeded or breached.
[0077] In an exemplary aspect, the set of threshold parameters are breached when
there is an anomaly or fault in the reported load at the NPDA module [1096]. For
example, once the alarm is raised, the NPDA module [1096] fetches the set of alarm
restoration data defined against the provided network function from the database
[306]. For example, an administrator may define the alarm restoration data for
raising an alarm to determine that the network function requires recovery in a
network. When the determination is made that the network function requires
healing, the alarms are raised or triggered. The raising of an alarm signifies that a
particular network function requires healing.
[0078] The transceiver unit [302] is further configured to transmit a scaling request
to the PEEGN module [1088] to mitigate breach of the computed hysteresis
evaluation.
[0079] The transceiver unit [302] transmits the scaling request, in the form of
instructions, to the PEEGN module [1088] in order to mitigate the breach of the
computed hysteresis evaluation by performing auto-scale operations.
[0080] The processing unit [308] is further configured to execute a scaling action
on the at least one network function, based on the scaling request.
[0081] The processing unit [308] executes the scaling action on the at least one
network function, based on the scaling request. In an exemplary aspect, the scaling
action on the at least one network function is performed in order to manage the
network resources when they are exceeded or breached, leading to more stable and
efficient resource management. In an implementation, the request may include the
kind of scaling action that needs to be performed based on the current load of the
network resources, which may further include a scale-in action, scale-out action,
scale-up action, or scale-down action.
[0082] In an exemplary aspect, the scaling action comprises at least one of an
auto-scale up, and an auto-scale down for the at least one network function.
[0083] In an exemplary aspect, the scaling action includes auto-scale up actions,
which automatically allocate more resources, such as CPU, memory, or storage, to
the overloaded resources to meet increased demands. Similarly, the scaling action
includes auto-scale down actions, which automatically reduce the number of
resources when the overall load on the system [300] decreases. The auto scale-up/
auto scale-down actions are significant as they allow the system [300] to optimize
resource utilization and efficiently manage predictable or variable workloads
without the complexity of adding or removing more resources.
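The mapping from a breached evaluation onto an auto-scale action might be sketched as follows; the threshold defaults and action names are illustrative assumptions, not terms defined in the disclosure:

```python
def decide_scaling_action(breached, load_percent,
                          scale_up_threshold=90.0, scale_down_threshold=30.0):
    """Map a hysteresis-evaluation result onto an auto-scale action."""
    if breached and load_percent > scale_up_threshold:
        # Allocate more CPU, memory, or storage to meet increased demand
        return "auto-scale-up"
    if breached and load_percent < scale_down_threshold:
        # Release resources when overall load decreases, avoiding
        # over-provisioning
        return "auto-scale-down"
    # Within bounds, or a transient fluctuation: keep the current allocation
    return "no-action"
```

For instance, `decide_scaling_action(True, 95.0)` yields an auto-scale-up, while `decide_scaling_action(False, 95.0)` yields no action — a spike that the hysteresis evaluation did not confirm is deliberately ignored.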
[0084] Referring to FIG. 4, an exemplary method flow diagram [400] for managing
a network function, in accordance with exemplary implementations of the present
disclosure, is shown. In an implementation, the method [400] is performed by the
system [300]. Further, in an implementation, the system [300] may be present in a
server device to implement the features of the present disclosure. Also, as shown in
FIG. 4, the method [400] starts at step [402].
[0085] At step 404, the method [400] comprises receiving, by a transceiver unit
[302] at a network function virtualization platform decision and analytics (NPDA)
module [1096], a resource threshold event associated with at least one network
function from an event routing manager (ERM) module [316], wherein the resource
threshold event at least comprises resource load information.
[0086] The transceiver unit [302] receives, from an event routing manager (ERM)
module [316], the resource threshold event associated with at least one network
function, which includes resource load information about the load capacity of
various resources, such as, but not limited to, CPU, RAM, storage, etc., in the at
least one network function. The resource threshold event indicates whether one or
more resources have exceeded their load capacity, which further helps the system in
allocating additional resources, thereby maintaining optimal system performance
and preventing potential failures.
[0087] For example, if CPU usage exceeds its specified resource threshold event,
additional virtual machines or containers might be provisioned. Similarly, if usage
drops significantly, resources might be scaled back to avoid over-provisioning.
[0088] In an exemplary aspect, the network function is selected from a group
consisting of Virtual Network Functions (VNFs), Virtual Network Function
Components (VNFCs), Container Network Functions (CNFs), and Container
Network Function Components (CNFCs).
[0089] In an exemplary aspect, the virtual network function (VNF) refers to a
network function module that operates in virtualized environments such as virtual
machines or containers. This virtualization allows for dynamic scaling and rapid
adaptation to changing network conditions while reducing hardware requirements.
[0090] In an exemplary aspect, the virtual network function component (VNFC)
refers to a sub-component within a virtual network function (VNF) that performs a
specific task or set of tasks related to the overall network function. VNFCs
decompose VNFs into smaller units, each responsible for unique functions, such as
packet inspection, policy enforcement, etc.
[0091] In an exemplary aspect, the containerized network function (CNF) refers to
a network function packaged as a portable container that includes all necessary
configurations. CNFs offer increased portability and scalability compared to
traditional network functions.
[0092] In an exemplary aspect, the Containerized Network Function Component
(CNFC) refers to a subcomponent of a Containerized Network Function (CNF) that
performs a specific task or set of tasks within the broader network function. CNFCs
are deployed in containers and have the same advantages as CNFs, including
efficient resource management.
[0093] In an exemplary aspect, the resource threshold event is received at the ERM
module [316] from the CMP microservice [1090].
[0094] In an exemplary aspect, the resource threshold event is received using the
event routing manager (ERM) module [316], which is responsible for routing the
events, i.e., an application programming interface (API) hit, to the CMP
microservice [1090].
[0095] At step 406, the method [400] further comprises retrieving, by a retrieving
unit [304] at the NPDA module [1096], from a database [306], a predefined scaling
policy associated with the at least one network function, wherein the predefined
scaling policy comprises at least one of a set of threshold parameters and a set of
hysteresis rules.
[0096] The retrieving unit [304] retrieves the predefined scaling policy associated
with the at least one network function from a database [306]. In an exemplary aspect,
the predefined scaling policy includes specific guidelines that instruct how to scale
network resources in response to changes in load or utilization.
[0097] The predefined scaling policy includes at least one of a set of threshold
parameters, which are specific parameters defined by the network administrator
within a system that initiate certain responses when they are exceeded. In an
exemplary aspect, threshold parameters are used to determine whether to add or
remove resources based on resource load information. For example, the network
administrator sets the threshold parameter at 90% CPU utilization; if CPU
utilization exceeds this set threshold parameter of 90%, then the system [300]
may allocate additional resources, maintaining optimal system performance and
preventing potential failures.
[0098] The predefined scaling policy further includes at least one of a set of
hysteresis rules. The set of hysteresis rules are rules for adjusting resource
allocation parameters in real-time to manage system resources effectively, so as to
maintain system stability and performance.
[0099] At step 408, the method [400] further comprises computing, by a processing
unit [308] at the NPDA module [1096], a hysteresis evaluation based on the
received resource load information and the predefined scaling policy.
[0100] The processing unit [308] computes the hysteresis evaluation based on the
received resource load information and the predefined scaling policy. In an
exemplary aspect, the hysteresis evaluation is analysed and evaluated based on the
received resource load information, which states how much of the resources is
utilized, and helps in avoiding frequent scaling actions that could put unnecessary
load on the overall performance of the system.
[0101] Furthermore, the processing unit [308] computes the hysteresis evaluation
based on the predefined scaling policy by adjusting resource allocation parameters
in real-time to manage system resources effectively, so as to maintain system
stability and performance.
[0102] In the method [400], computing the hysteresis evaluation further comprises
comparing, by a comparing unit [314], the resource load information with historical
resource usage data to prevent frequent scaling operations.
[0103] In order to compute the hysteresis evaluation, the comparing unit [314]
compares the resource load information with the historical resource usage data to
prevent frequent scaling operations. In an implementation, a trained model is
trained on the historical resource usage data. This comparison, by the comparing
unit [314], determines whether the current load is a temporary load fluctuation or a
significant trend, thereby preventing frequent and unnecessary scaling operations.
By comparing trained historical data with the resource load information, the system
[300] ensures that scaling actions are based on long-term recurring trends and
patterns rather than short-term changes, leading to more stable and efficient
resource management.
[0104] At step 410, the method [400] further comprises determining, by a
determining unit [310] at the NPDA module [1096], whether the computed
hysteresis evaluation breaches the set of threshold parameters.
[0105] The determining unit [310] determines whether the computed hysteresis
evaluation breaches the set of threshold parameters, which are specific parameters
defined by the network administrator within the system [300] that initiate certain
responses when they are exceeded or breached.
[0106] At step 412, the method [400] further comprises transmitting, by the
transceiver unit [302] at the NPDA module [1096], a scaling request to the PEEGN
module [1088] to mitigate breach of the computed hysteresis evaluation.
[0107] The transceiver unit [302] transmits the scaling request, in the form of
instructions, to the PEEGN module [1088] in order to mitigate the breach of the
computed hysteresis evaluation by performing auto-scale operations.
[0108] The method [400] further comprises executing, by the processing unit [308]
at the PEEGN module [1088], a scaling action on the at least one network function,
based on the scaling request.
[0109] The processing unit [308] executes the scaling action on the at least one
network function, based on the scaling request. In an exemplary aspect, the scaling
action on the at least one network function is performed in order to manage the
network resources when they are exceeded or breached, leading to more stable and
efficient resource management. In an implementation, the request may include the
kind of scaling action that needs to be performed based on the current load of the
network resources, which may further include a scale-in action, scale-out action,
scale-up action, or scale-down action.
[0110] In an exemplary aspect, the scaling action comprises at least one of an
auto-scale up, and an auto-scale down for the at least one network function.
[0111] In an exemplary aspect, the scaling action includes auto-scale up actions,
which automatically allocate more resources, such as CPU, memory, or storage, to
the overloaded resources to meet increased demands. Similarly, the scaling action
includes auto-scale down actions, which automatically reduce the number of
resources when the overall load on the system [300] decreases. The auto scale-up/
auto scale-down actions are significant as they allow the system [300] to optimize
resource utilization and efficiently manage predictable or variable workloads
without the complexity of adding or removing more resources.
[0112] Thereafter, at step [414], the method [400] is terminated.
[0113] Referring to FIG. 5, an exemplary process [500] flow diagram for managing
a network function, in accordance with exemplary implementations of the present
disclosure, is shown. The process [500] starts at step [502].
[0114] At step 504, the process [500] comprises transmitting, by the CMP
microservice [1090], a resource threshold event associated with at least one network
function to the event routing manager (ERM) module [316]. In an exemplary
aspect, the network function is selected from a group consisting of Virtual Network
Functions (VNFs), Virtual Network Function Components (VNFCs), Container
Network Functions (CNFs), and Container Network Function Components
(CNFCs).
[0115] At step 506, the process [500] comprises receiving, at the ERM module
[316], a resource threshold event associated with at least one network function,
wherein the resource threshold event includes resource load information. In an
exemplary aspect, the resource threshold event is received using the event routing
manager (ERM) module [316], which is responsible for routing the events, i.e., an
application programming interface (API) hit, to the CMP microservice [1090]. In an
exemplary aspect, the received resource threshold event is further transmitted from
the ERM module [316] to the NPDA module [1096].
[0116] At step 508, the process [500] comprises providing, to the NPDA module
[1096], the resource load details/information raised by the CMP microservice [1090].
In an exemplary aspect, the NPDA module [1096] computes the hysteresis
evaluation based on the received resource load information and the predefined
scaling policy. In an exemplary aspect, the hysteresis evaluation is analysed and
evaluated based on the received resource load information, which states how much
of the resources is utilized, and helps in avoiding frequent scaling actions that could
put unnecessary load on the overall performance of the system.
[0117] At step 510, the process [500] comprises computing, at the NPDA module
[1096], the hysteresis evaluation based on the received resource load
information/details and the predefined scaling policy, and transmitting the same to
the policy evaluation module. In an exemplary aspect, the policy evaluation module
determines whether the computed hysteresis evaluation breaches the set of threshold
parameters, which are specific parameters defined by the network administrator
within the system [300] that initiate certain responses when they are exceeded or
breached. In an exemplary aspect, if the computed hysteresis evaluation does not
breach the set of threshold parameters, the process ends in the next step.
[0118] At step 512, if the computed hysteresis evaluation breaches the set of
threshold parameters, the process [500] comprises performing, at the PEEGN
module [1088], a closed-loop report to the adjacent system regarding scaling
(in/out) or scale-up/scale-down decisions based on the evaluated CNFC/VNFC
policy. In an exemplary aspect, the scaling action includes auto-scale up actions,
which automatically allocate more resources, such as CPU, memory, or storage, to
the overloaded resources to meet increased demands. Similarly, the scaling action
includes auto-scale down actions, which automatically reduce the number of
resources when the overall load on the system [300] decreases. The auto scale-up/
auto scale-down actions are significant as they allow the system [300] to optimize
resource utilization and efficiently manage predictable or variable workloads
without the complexity of adding or removing more resources. In an exemplary
aspect, the scaling request, in the form of instructions, is executed in order to
mitigate the breach of the computed hysteresis evaluation at the PEEGN module
[1088] by performing auto-scale operations. Thereafter, the process [500] ends in
the next step.
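The closed loop of steps 504 through 512 can be summarized in one sketch, assuming simple dictionary payloads; every name below is an illustrative assumption — the actual modules exchange events as described in the process above.

```python
def closed_loop(event, policy):
    """Sketch of the FIG. 5 loop: a CMP event in, a scaling request out (or None).

    event  -- resource load information forwarded via the ERM module [316]
    policy -- predefined scaling policy retrieved from the database [306]
    """
    samples = event["history"] + [event["load"]]
    window = samples[-policy["sustained_samples"]:]
    # Hysteresis evaluation: a breach counts only when it is sustained
    breached = (len(window) == policy["sustained_samples"]
                and all(s > policy["scale_up_threshold"] for s in window))
    if breached:
        # This request would be transmitted to the PEEGN module [1088],
        # which executes the scaling action on the network function.
        return {"network_function": event["nf"], "action": "scale-out"}
    return None  # no breach: the process simply ends


example_policy = {"sustained_samples": 3, "scale_up_threshold": 90.0}
request = closed_loop({"nf": "vnf-1", "load": 95, "history": [92, 94]},
                      example_policy)
```

With a sustained overload (92, 94, 95 against a 90% threshold) the loop emits a scale-out request; if the history were, say, 40 and 50, it would emit nothing and the process would end at step 514.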
[0119] At step 514, the process steps [510] and [512] end.
[0120] The present disclosure further discloses a non-transitory computer readable
storage medium storing instructions for managing a network function, the
instructions including executable code which, when executed by one or more units
of a system, causes a transceiver unit [302] of the system to receive a resource
threshold event associated with at least one network function from an event routing
manager (ERM) module [316], wherein the resource threshold event includes
resource load information. The instructions, when executed, further cause a
retrieving unit [304] to retrieve, from a database [306], a predefined scaling policy
associated with the at least one network function, wherein the predefined scaling
policy comprises at least one of a set of threshold parameters and a set of hysteresis
rules. The instructions, when executed, further cause a processing unit [308] to
compute a hysteresis evaluation based on the received resource load information
and the predefined scaling policy. The instructions, when executed, further cause a
determining unit [310] to determine whether the computed hysteresis evaluation
breaches the set of threshold parameters. The instructions, when executed, further
cause the transceiver unit [302] to transmit a scaling request to the PEEGN module
[1088] to mitigate breach of the computed hysteresis evaluation.
[0121] As is evident from the above, the present disclosure provides a technically
advanced solution for managing a network function that adapts to changing
workloads and resource demands in a microservices architecture, ensuring optimal
system performance while preventing overload.
[0122] While considerable emphasis has been placed herein on the disclosed
implementations, it will be appreciated that many implementations can be made and
that many changes can be made to the implementations without departing from the
principles of the present disclosure. These and other changes in the implementations
of the present disclosure will be apparent to those skilled in the art, whereby it is to
be understood that the foregoing descriptive matter to be implemented is illustrative
and non-limiting.
[0123] Further, in accordance with the present disclosure, it is to be
acknowledged that the functionality described for the various components/units can
be implemented interchangeably. While specific embodiments may disclose a
particular functionality of these units for clarity, it is recognized that various
configurations and combinations thereof are within the scope of the disclosure. The
functionality of specific units as disclosed in the disclosure should not be construed
as limiting the scope of the present disclosure. Consequently, alternative
arrangements and substitutions of units, provided they achieve the intended
functionality described herein, are considered to be encompassed within the scope
of the present disclosure.
We Claim:
1. A method for managing a network function, the method comprising:
- receiving, by a transceiver unit [302] at a network function virtualization
platform decision and analytics (NPDA) module [1096], a resource
threshold event associated with at least one network function from an
event routing manager (ERM) module [316], wherein the resource
threshold event at least comprises resource load information;
- retrieving, by a retrieving unit [304] at the NPDA module [1096], from a
database [306], a predefined scaling policy associated with the at least
one network function, wherein the predefined scaling policy comprises
at least one of a set of threshold parameters and a set of hysteresis rules;
- computing, by a processing unit [308] at the NPDA module [1096], a
hysteresis evaluation based on the resource load information and the
predefined scaling policy;
- determining, by a determining unit [310] at the NPDA module [1096],
whether the computed hysteresis evaluation breaches the set of threshold
parameters; and
- transmitting, by the transceiver unit [302] at the NPDA module [1096],
a scaling request to a policy execution engine (PEEGN) module [1088]
to mitigate the breach of the computed hysteresis evaluation.
2. The method as claimed in claim 1, wherein the method comprises executing,
by the processing unit [308] at the PEEGN module [1088], a scaling action
on the at least one network function, based on the scaling request.
3. The method as claimed in claim 2, wherein the scaling action comprises at
least one of an auto-scale up, and an auto-scale down of the at least one
network function.
4. The method as claimed in claim 1, wherein the at least one network function
is selected from a group consisting of Virtual Network Functions (VNFs),
Virtual Network Function Components (VNFCs), Container Network
Functions (CNFs), and Container Network Function Components (CNFCs).
5. The method as claimed in claim 1, wherein the hysteresis evaluation
comprises comparing, by a comparing unit [314], the resource load
information with historical resource usage data to prevent frequent scaling
operations.
6. The method as claimed in claim 1, wherein the resource threshold event is
received at the ERM module [316] from a capacity monitoring manager
(CMM) microservice [1090].
7. A system for managing a network function, the system comprising:
- a network function virtualization platform decision and analytics
(NPDA) module [1096] comprising:
- a transceiver unit [302] configured to receive a resource
threshold event associated with at least one network function
20 from an event routing manager (ERM) module [316], wherein
the resource threshold event includes resource load information;
- a retrieving unit [304] configured to retrieve, from a database
[306], a predefined scaling policy associated with the at least
one network function, wherein the predefined scaling policy
comprises at least one of a set of threshold parameters and a set
of hysteresis rules;
- a processing unit [308] configured to compute a hysteresis
evaluation based on the resource load information and the
predefined scaling policy;
- a determining unit [310] configured to determine whether the
computed hysteresis evaluation breaches the set of threshold
parameters; and
- the transceiver unit [302] configured to transmit a scaling
request to a policy execution engine (PEEGN) module [1088] to
mitigate the breach of the computed hysteresis evaluation.
8. The system as claimed in claim 7, wherein the processing unit [308] is
configured to execute a scaling action on the at least one network function,
based on the scaling request.
9. The system as claimed in claim 8, wherein the scaling action comprises at
least one of an auto-scale up, and an auto-scale down for the at least one
network function.
10. The system as claimed in claim 7, wherein the at least one network function
is selected from a group consisting of Virtual Network Functions (VNFs),
Virtual Network Function Components (VNFCs), Container Network
Functions (CNFs), and Container Network Function Components (CNFCs).
11. The system as claimed in claim 7, wherein, for the hysteresis evaluation, the
system further comprises a comparing unit [314] configured to compare the
resource load information with historical resource usage data to prevent
frequent scaling operations.
12. The system as claimed in claim 7, wherein the resource threshold event is
received at the ERM module [316] from a capacity monitoring manager (CMM) microservice [1090].
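As a further illustrative, non-limiting sketch of the comparison with historical resource usage data recited in claims 5 and 11 (the class, method, and parameter names below are hypothetical and not drawn from the disclosure), one way to prevent frequent scaling operations is to act only on a sustained breach over a short history window rather than on a single sample:

```python
from collections import deque
from statistics import mean

class HysteresisComparator:
    """Hypothetical sketch of a comparing unit [314]: the current resource
    load is compared against a window of historical resource usage data so
    that a brief spike does not trigger a scaling operation."""

    def __init__(self, window: int = 5):
        self.history = deque(maxlen=window)  # historical resource usage data

    def sustained_breach(self, load: float, threshold: float) -> bool:
        self.history.append(load)
        if len(self.history) < self.history.maxlen:
            return False  # insufficient history: avoid premature scaling
        # Scale only when the recent average, not a single sample,
        # exceeds the threshold -- this damps frequent scaling operations.
        return mean(self.history) > threshold

comparator = HysteresisComparator(window=3)
for sample in (0.95, 0.40, 0.42):  # one isolated spike, then normal load
    spike_only = comparator.sustained_breach(sample, threshold=0.80)
print(spike_only)  # the isolated spike alone does not force a scale-up
```

The averaging window is one simple realization of a hysteresis rule; a production policy could equally use consecutive-breach counters or cooldown timers, which the claims leave open.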

Documents

Application Documents

# Name Date
1 202321064306-STATEMENT OF UNDERTAKING (FORM 3) [25-09-2023(online)].pdf 2023-09-25
2 202321064306-PROVISIONAL SPECIFICATION [25-09-2023(online)].pdf 2023-09-25
3 202321064306-POWER OF AUTHORITY [25-09-2023(online)].pdf 2023-09-25
4 202321064306-FORM 1 [25-09-2023(online)].pdf 2023-09-25
5 202321064306-FIGURE OF ABSTRACT [25-09-2023(online)].pdf 2023-09-25
6 202321064306-DRAWINGS [25-09-2023(online)].pdf 2023-09-25
7 202321064306-Proof of Right [09-02-2024(online)].pdf 2024-02-09
8 202321064306-FORM-5 [25-09-2024(online)].pdf 2024-09-25
9 202321064306-ENDORSEMENT BY INVENTORS [25-09-2024(online)].pdf 2024-09-25
10 202321064306-DRAWING [25-09-2024(online)].pdf 2024-09-25
11 202321064306-CORRESPONDENCE-OTHERS [25-09-2024(online)].pdf 2024-09-25
12 202321064306-COMPLETE SPECIFICATION [25-09-2024(online)].pdf 2024-09-25
13 202321064306-FORM 3 [08-10-2024(online)].pdf 2024-10-08
14 202321064306-Request Letter-Correspondence [09-10-2024(online)].pdf 2024-10-09
15 202321064306-Power of Attorney [09-10-2024(online)].pdf 2024-10-09
16 202321064306-Form 1 (Submitted on date of filing) [09-10-2024(online)].pdf 2024-10-09
17 202321064306-Covering Letter [09-10-2024(online)].pdf 2024-10-09
18 202321064306-CERTIFIED COPIES TRANSMISSION TO IB [09-10-2024(online)].pdf 2024-10-09
19 Abstract.jpg 2024-10-25
20 202321064306-ORIGINAL UR 6(1A) FORM 1 & 26-060125.pdf 2025-01-10