
Method And System For Implementing Corrective Actions During A Resource Threshold Error Event

Abstract: The present disclosure relates to a method and a system for implementing one or more corrective actions during a resource threshold error event. In one example, the method comprises receiving, by a transceiver unit [304] at a Network Function Virtualization (NFV) Platform Decision Analytics (NPDA) module [302], a resource threshold error event for a Network Function (NF). The method further comprises retrieving, by a retrieval unit [306] at the NPDA module [302], a set of data related to historical instances of the resource threshold error events. Then based on the retrieved set of data, the method comprises evaluating, by an evaluation unit [308], a hysteresis for the resource threshold error event. On evaluation of a positive hysteresis for the resource threshold error event, the method further comprises generating, by a generation unit [310], a response message indicating an occurrence of the positive hysteresis. [FIG. 4]


Patent Information

Application #
Filing Date
22 September 2023
Publication Number
14/2025
Publication Type
INA
Invention Field
COMPUTER SCIENCE
Status
Parent Application

Applicants

Jio Platforms Limited
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India.

Inventors

1. Aayush Bhatnagar
Reliance Corporate Park, Thane-Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
2. Ankit Murarka
Reliance Corporate Park, Thane-Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
3. Rizwan Ahmad
Reliance Corporate Park, Thane-Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
4. Kapil Gill
Reliance Corporate Park, Thane-Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
5. Arpit Jain
Reliance Corporate Park, Thane-Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
6. Shashank Bhushan
Reliance Corporate Park, Thane-Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
7. Jugal Kishore
Reliance Corporate Park, Thane-Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
8. Meenakshi Sarohi
Reliance Corporate Park, Thane-Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
9. Kumar Debashish
Reliance Corporate Park, Thane-Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
10. Supriya Kaushik De
Reliance Corporate Park, Thane-Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
11. Gaurav Kumar
Reliance Corporate Park, Thane-Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
12. Kishan Sahu
Reliance Corporate Park, Thane-Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
13. Gaurav Saxena
Reliance Corporate Park, Thane-Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
14. Vinay Gayki
Reliance Corporate Park, Thane-Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
15. Mohit Bhanwria
Reliance Corporate Park, Thane-Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
16. Durgesh Kumar
Reliance Corporate Park, Thane-Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
17. Rahul Kumar
Reliance Corporate Park, Thane-Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.

Specification

FORM 2
THE PATENTS ACT, 1970
(39 OF 1970)
&
THE PATENT RULES, 2003
COMPLETE SPECIFICATION
(See section 10 and rule 13)
“METHOD AND SYSTEM FOR IMPLEMENTING
CORRECTIVE ACTIONS DURING A RESOURCE
THRESHOLD ERROR EVENT”
We, Jio Platforms Limited, an Indian National, of Office - 101, Saffron, Nr.
Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India.
The following specification particularly describes the invention and the manner in
which it is to be performed.
METHOD AND SYSTEM FOR IMPLEMENTING CORRECTIVE
ACTIONS DURING A RESOURCE THRESHOLD ERROR EVENT
FIELD OF INVENTION
[0001] Embodiments of the present disclosure generally relate to management of
operations within a network. More particularly, embodiments of the present
disclosure relate to methods and systems for implementing one or more corrective
actions during a resource threshold error event.
BACKGROUND
[0002] The following description of the related art is intended to provide
background information pertaining to the field of the disclosure. This section may
include certain aspects of the art that may be related to various features of the
present disclosure. However, it should be appreciated that this section is used only
to enhance the understanding of the reader with respect to the present disclosure,
and not as admissions of the prior art.
[0003] Wireless communication technology has rapidly evolved over the past few
decades, with each generation bringing significant improvements and
advancements. The first generation of wireless communication technology was
based on analog technology and offered only voice services. However, with the
advent of the second generation (2G) technology, digital communication and data
services became possible, and text messaging was introduced. The third generation
(3G) technology marked the introduction of high-speed internet access, mobile
video calling, and location-based services. The fourth generation (4G) technology
revolutionized wireless communication with faster data speeds, better network
coverage, and improved security. Currently, the fifth generation (5G) technology is
being deployed, promising even faster data speeds, low latency, and the ability to
connect multiple devices simultaneously. With each generation, wireless
communication technology has become more advanced, sophisticated, and capable
of delivering more services to its users.
[0004] Generally, there may be multiple network functions (NFs) in
telecommunication networks, which may use network resources respectively
allocated to each of them. Based on the allocated network resources, the NF may
perform an operation in the network that may be within the resource capacity of
said NF.
[0005] In cases where the allocated network resources get exhausted due to
overutilization and the NF may need to perform an additional operation, the
network function may be unable to do so and may generate an error. Further, in
cases where an operation that may entail a quantity of resources more than what has
been allocated to the NF is assigned to the NF, the NF may be unable to perform that operation.
[0006] In traditional solutions, human intervention is required for making decisions
related to scaling operations (in/out) or healing operations of the NF or the resources
used for running the NF such as compute, storage, network, slice instances, etc.
Further, the scaling decision is not automatic in terms of notifying its closed loop
systems. Further, in traditional solutions, there is no way to apply the suggested
scale-in / scale-out or healing operations against microservice servers in real-time.
[0007] For example, conventionally, to resolve such a problem, a network
administrator or operator may assess the NF, resources allocated to the NF,
resources required for performing the operation, available resources in the network,
etc. Based on the assessment, the network operator may optimize the network
resource allocation by manually modifying the allocated resources on the NF, or
assigning another NF, or performing a healing operation on said NF.
[0008] This conventional process may be inefficient and cumbersome. This
problem may be further aggravated in cases where the network operator has
performed the network optimization, and an error again comes up. As a result of
this, the network operator may need to repeatedly perform the network resource
optimization, thereby leading to an inefficient, cumbersome, and computationally
expensive task.
[0009] Thus, there exists an imperative need in the art to develop methods and
systems which address the need to provide an efficient solution for notifying
automatic scale in/out requests, for making intelligent decisions in real-time, and for
transmitting automatic scaling or automatic-healing requests to microservice servers,
which the present disclosure aims to address.
SUMMARY
[0010] This section is provided to introduce certain aspects of the present disclosure
in a simplified form that are further described below in the detailed description.
This summary is not intended to identify the key features or the scope of the claimed
subject matter.
[0011] An aspect of the present disclosure may relate to a method for implementing
one or more corrective actions during a resource threshold error event. The method
comprises receiving, by a transceiver unit at a Network Function Virtualization
(NFV) Platform Decision Analytics (NPDA) module, a resource threshold error
event for a Network Function (NF). The method further comprises retrieving, by a
retrieval unit at the NPDA module, a set of data related to historical instances of
resource threshold error events for the NF. Then based on the retrieved set of data,
the method further comprises evaluating, by an evaluation unit, a hysteresis for the
resource threshold error event. Then on evaluation of a positive hysteresis for the
resource threshold error event, the method further comprises generating, by a
generation unit, a response message indicating an occurrence of the positive
hysteresis.
[0012] In an exemplary aspect of the present disclosure, the method further
comprises transmitting, by the transceiver unit, the response message to a user,
wherein the user, based on the received response message, is to implement one or
more corrective actions.
[0013] In another exemplary aspect of the present disclosure, based on the response
message, the method further comprises retrieving, by the retrieval unit at the NPDA
module, a resource threshold policy defined for the NF relating to the resource
threshold error event. Then the method comprises transmitting, by the transceiver
unit to a Policy Execution Engine (PEE), a request for one or more corrective
actions to negate the resource threshold error event. Then the method involves
receiving, by the transceiver unit from the PEE, an indication of an implementation
of the one or more corrective actions by a Virtual Network Function Lifecycle
Manager (VLM). The PEE is to create the one or more corrective actions and
transmit the one or more corrective actions to the VLM, wherein the VLM is to
implement the one or more corrective actions.
[0014] In another exemplary aspect of the present disclosure, the PEE is to transmit
the one or more corrective actions, and a predefined time instance data related to
implementation of the one or more corrective actions, and wherein the VLM is to
implement the one or more corrective actions at the predefined time instance.
[0015] In another exemplary aspect of the present disclosure, the one or more
corrective actions comprise scaling the NF.
[0016] In another exemplary aspect of the present disclosure, the action of scaling
the Network Function is based on at least one of total available resources in the
network, minimum required resources, and a resource capacity of the NF.
[0017] In another exemplary aspect of the present disclosure, the NPDA module
and the PEE are in communication through a NA_PE interface.
[0018] In another exemplary aspect of the present disclosure, the resource threshold
error event corresponds to an error event that occurs upon consumption of resources
by the NF above a predefined threshold.
[0019] In another exemplary aspect of the present disclosure, the resource threshold
error event for the NF is received from a Capacity Monitoring Manager (CMM).
[0020] In another exemplary aspect of the present disclosure, the Network Function
(NF) is selected from a group of NFs comprising virtual network function (VNF),
container network function (CNF), and combinations thereof, wherein
the VNF further comprises one or more VNF components, and the CNF further
comprises one or more CNF components.
[0021] In another exemplary aspect of the present disclosure, the error event is
received by the transceiver unit from an event routing manager (ERM) module.
[0022] Another aspect of the present disclosure may relate to a system for
implementing one or more corrective actions during a resource threshold error
event. The system comprises a Network Function Virtualization (NFV) Platform
Decision Analytics (NPDA) module. The NPDA module comprises a transceiver
unit configured to receive a resource threshold error event for a Network Function
(NF). The system further comprises a retrieval unit connected at least to the
transceiver unit. The retrieval unit is configured to retrieve a set of data related to
historical instances of resource threshold error events for the NF. The system further
comprises an evaluation unit connected at least to the retrieval unit. Based on the
retrieved set of data, the evaluation unit is configured to evaluate a hysteresis for
the resource threshold error event. The system further comprises a generation unit
connected at least to the evaluation unit. On evaluation of a positive hysteresis for
the resource threshold error event, the generation unit is configured to generate a
response message indicating an occurrence of the positive hysteresis.
[0023] Yet another aspect of the present disclosure may relate to a non-transitory
computer readable storage medium storing instructions for implementing one or
more corrective actions during a resource threshold error event. The instructions
include executable code which, when executed by one or more units of a system,
causes a transceiver unit of the system to receive a resource threshold error event
for a Network Function (NF). Further, the instructions include executable code
which, when executed, causes a retrieval unit to retrieve a set of data related to
historical instances of resource threshold error events for the NF. Further, the
instructions include executable code which, when executed, causes an evaluation
unit to evaluate a hysteresis for the resource threshold error event, based on the
retrieved set of data. Further, the instructions include executable code which, when
executed, causes a generation unit to generate a response message indicating an
occurrence of the positive hysteresis, on evaluation of a positive hysteresis for the
resource threshold error event.
OBJECTS OF THE DISCLOSURE
[0024] Some of the objects of the present disclosure, which at least one
embodiment disclosed herein satisfies are listed herein below.
[0025] It is an object of the present disclosure to provide a system and a method for
implementing one or more corrective actions during a resource threshold error
event.
[0026] It is an object of the present disclosure to provide a system and a method for
automatic detection of scaling (In/Out) / healing operations.
[0027] It is another object of the present disclosure to provide a solution that makes
intelligent decisions in real-time through event-driven operation based on the
provisioned policies.
[0028] It is yet another object of the present disclosure to provide a valuable solution
for addressing network issues, improving the overall stability and performance of
the network infrastructure, facilitating efficient scaling/healing processes, and
enabling swift and informed actions.
[0029] An object of the invention is to provide a solution for notifying
automatic scale in/out requests based on NPDA hysteresis threshold policies.
[0030] Another object of the invention is to provide a solution for generating and
storing a set of threshold-based policies associated with one or more network
functions of the network, wherein each threshold-based policy from the set of
threshold-based policies is associated with at least one network function from the
one or more network functions.
[0031] Another object of the invention is to provide a solution that receives at least a
resource detail, and a resource threshold exceed event request triggered by a
microservice, and fetches a threshold-based policy from the set of threshold-based
policies based on at least the resource detail.
[0032] Yet another object of the present invention is to provide a solution that
performs a hysteresis evaluation based on at least the resource detail and the
threshold-based policy associated with the resource detail, and notifies an automatic
scale In/Out request for the one or more network functions of the network based on
the hysteresis evaluation.
[0033] It is an object of the present disclosure to provide a system and a method for
transmitting automatic scaling or automatic-healing requests to microservice servers
by the NPDA server.
[0034] It is another object of the present disclosure to provide a solution that
informs scale-in/scale-out/healing of a microservice server in the event the gating
criteria is true, which usually happens when there is a breach in the reported load at
NPDA server.
[0035] It is yet another object of the present disclosure to provide a solution that
enables tracking of a microservice server load and informing a threshold-based
policy breach decision (scaling or healing) by NPDA server in real-time, thereby
mitigating any network resource failures.
BRIEF DESCRIPTION OF THE DRAWINGS
[0036] The accompanying drawings, which are incorporated herein, and constitute
a part of this disclosure, illustrate exemplary embodiments of the disclosed methods
and systems in which like reference numerals refer to the same parts throughout the
different drawings. Components in the drawings are not necessarily to scale,
emphasis instead being placed upon clearly illustrating the principles of the present
disclosure. Also, the embodiments shown in the figures are not to be construed as
limiting the disclosure, but the possible variants of the method and system
according to the disclosure are illustrated herein to highlight the advantages of the
disclosure. It will be appreciated by those skilled in the art that disclosure of such
drawings includes disclosure of electrical components or circuitry commonly used
to implement such components.
[0037] FIG. 1 illustrates an exemplary block diagram representation of a
management and orchestration (MANO) architecture;
[0038] FIG. 2 illustrates an exemplary block diagram of a computing device upon
which the features of the present disclosure may be implemented in accordance with
exemplary implementation of the present disclosure;
[0039] FIG. 3 illustrates an exemplary block diagram of a system for implementing
one or more corrective actions during a resource threshold error event, in
accordance with exemplary implementations of the present disclosure; and
[0040] FIG. 4 illustrates a method flow diagram for implementing the one or more
corrective actions during the resource threshold error event, in accordance with
exemplary implementations of the present disclosure.
[0041] The foregoing shall be more apparent from the following more detailed
description of the disclosure.
DETAILED DESCRIPTION
[0042] In the following description, for the purposes of explanation, various
specific details are set forth in order to provide a thorough understanding of
embodiments of the present disclosure. It will be apparent, however, that
embodiments of the present disclosure may be practiced without these specific
details. Several features described hereafter may each be used independently of one
another or with any combination of other features. An individual feature may not
address any of the problems discussed above or might address only some of the
problems discussed above.
[0043] The ensuing description provides exemplary embodiments only, and is not
intended to limit the scope, applicability, or configuration of the disclosure. Rather,
the ensuing description of the exemplary embodiments will provide those skilled in
the art with an enabling description for implementing an exemplary embodiment.
It should be understood that various changes may be made in the function and
arrangement of elements without departing from the spirit and scope of the
disclosure as set forth.
[0044] Specific details are given in the following description to provide a thorough
understanding of the embodiments. However, it will be understood by one of
ordinary skill in the art that the embodiments may be practiced without these
specific details. For example, circuits, systems, processes, and other components
may be shown as components in block diagram form in order not to obscure the
embodiments in unnecessary detail.
[0045] It should be noted that the terms "first", "second", "primary", "secondary",
"target" and the like, herein do not denote any order, ranking, quantity, or
importance, but rather are used to distinguish one element from another.
[0046] Also, it is noted that individual embodiments may be described as a process
which is depicted as a flowchart, a flow diagram, a data flow diagram, a structure
diagram, or a block diagram. Although a flowchart may describe the operations as
a sequential process, many of the operations may be performed in parallel or
concurrently. In addition, the order of the operations may be re-arranged. A process
is terminated when its operations are completed but could have additional steps not
included in a figure.
[0047] The word “exemplary” and/or “demonstrative” is used herein to mean
serving as an example, instance, or illustration. For the avoidance of doubt, the
subject matter disclosed herein is not limited by such examples. In addition, any
aspect or design described herein as “exemplary” and/or “demonstrative” is not
necessarily to be construed as preferred or advantageous over other aspects or
designs, nor is it meant to preclude equivalent exemplary structures and techniques
known to those of ordinary skill in the art. Furthermore, to the extent that the terms
“includes,” “has,” “contains,” and other similar words are used in either the detailed
description or the claims, such terms are intended to be inclusive—in a manner
similar to the term “comprising” as an open transition word—without precluding
any additional or other elements.
[0048] As used herein, a “processing unit” or “processor” or “operating processor”
includes one or more processors, wherein processor refers to any logic circuitry for
processing instructions. A processor may be a general-purpose processor, a special
purpose processor, a conventional processor, a digital signal processor, a plurality
of microprocessors, one or more microprocessors in association with a Digital
Signal Processing (DSP) core, a controller, a microcontroller, Application Specific
Integrated Circuits, Field Programmable Gate Array circuits, any other type of
integrated circuits, etc. The processor may perform signal coding, data processing,
input/output processing, and/or any other functionality that enables the working of
the system according to the present disclosure. More specifically, the processor or
processing unit is a hardware processor.
[0049] As used herein, “a user equipment”, “a user device”, “a smart-user-device”,
“a smart-device”, “an electronic device”, “a mobile device”, “a handheld device”,
“a wireless communication device”, “a mobile communication device”, “a
communication device” may be any electrical, electronic and/or computing device
or equipment, capable of implementing the features of the present disclosure. The
user equipment/device may include, but is not limited to, a mobile phone, smart
phone, laptop, a general-purpose computer, desktop, personal digital assistant,
tablet computer, wearable device or any other computing device which is capable
of implementing the features of the present disclosure. Also, the user device may
contain at least one input means configured to receive an input from unit(s) which
are required to implement the features of the present disclosure.
[0050] As used herein, “storage unit” or “memory unit” refers to a machine or
computer-readable medium including any mechanism for storing information in a
form readable by a computer or similar machine. For example, a computer-readable
medium includes read-only memory (“ROM”), random access memory (“RAM”),
magnetic disk storage media, optical storage media, flash memory devices or other
types of machine-accessible storage media. The storage unit stores at least the data
that may be required by one or more units of the system to perform their respective
functions.
[0051] As used herein, “interface” or “user interface” refers to a shared boundary
across which two or more separate components of a system exchange information
or data. The interface may also be referred to as a set of rules or protocols that define
communication or interaction of one or more modules or one or more units with
each other, which also includes the methods, functions, or procedures that may be
called.
[0052] All modules, units, components used herein, unless explicitly excluded
herein, may be software modules or hardware processors, the processors being a
general-purpose processor, a special purpose processor, a conventional processor, a
digital signal processor (DSP), a plurality of microprocessors, one or more
microprocessors in association with a DSP core, a controller, a microcontroller,
Application Specific Integrated Circuits (ASIC), Field Programmable Gate Array
circuits (FPGA), any other type of integrated circuits, etc.
[0053] As used herein, the transceiver unit includes at least one receiver and at least
one transmitter configured respectively for receiving and transmitting data, signals,
information or a combination thereof between units/components within the system
and/or connected with the system.
[0054] As discussed in the background section, the current known solutions have
several shortcomings. The present disclosure aims to overcome the above-mentioned
and other existing problems in this field of technology by providing a
method and a system for implementing one or more corrective actions during a
resource threshold error event.
[0055] FIG. 1 illustrates an exemplary block diagram representation of a
management and orchestration (MANO) architecture/platform [100], in accordance
with exemplary implementation of the present disclosure. The MANO architecture
[100] may be developed for managing telecom cloud infrastructure automatically,
managing design or deployment design, and managing instantiation of network
node(s)/service(s), etc. The MANO architecture [100] deploys the network node(s)
in the form of Virtual Network Function (VNF) and Cloud-native/ Container
Network Function (CNF). The system as provided by the present disclosure may
comprise one or more components of the MANO architecture [100]. The MANO
architecture [100] may be used to automatically instantiate the VNFs into the
corresponding environment of the present disclosure so that it could help in
onboarding other vendor(s) CNFs and VNFs to the platform. In an implementation,
the system may comprise a NFV Platform Decision Analytics (NPDA) [1096]
component.
[0056] As shown in FIG. 1, the MANO architecture [100] comprises a user
interface layer [102], a network function virtualization (NFV) and software defined
network (SDN) design function module [104], a platform foundation services
module [106], a platform core services module [108] and a platform resource
adapters and utilities module [112]. All the components may be assumed to be
connected to each other in a manner as obvious to the person skilled in the art for
implementing features of the present disclosure.
[0057] The NFV and SDN design function module [104] comprises a VNF
lifecycle manager (compute) [1042], a VNF catalogue [1044], a network services
catalogue [1046], a network slicing and service chaining manager [1048], a physical
and virtual resource manager [1050] and a CNF lifecycle manager [1052]. The VNF
lifecycle manager (compute) [1042] may be responsible for deciding on which
server of the communication network the microservice may be instantiated. The
VNF lifecycle manager (compute) [1042] may manage the overall flow of
incoming/ outgoing requests during interaction with the user. The VNF lifecycle
30 manager (compute) [1042] may be responsible for determining which sequence to
be followed for executing the process. For e.g. in an AMF network function of the
15
communication network (such as a 5G network), sequence for execution of
processes P1 and P2 etc. The VNF catalogue [1044] stores the metadata of all the
VNFs (also CNFs in some cases). The network services catalogue [1046] stores
the information of the services that need to be run. The network slicing and service
chaining manager [1048] manages the slicing (an ordered and connected sequence
of network service/ network functions (NFs)) that must be applied to a specific
networked data packet. The physical and virtual resource manager [1050] stores
the logical and physical inventory of the VNFs. Just like the VNF lifecycle manager
(compute) [1042], the CNF lifecycle manager [1052] may be similarly used for the
CNF lifecycle management.
[0058] The platform foundation services module [106] comprises a
microservices elastic load balancer [1062], an identity & access manager [1064], a
command line interface (CLI) [1066], a central logging manager [1068], and an
event routing manager [1070]. The microservices elastic load balancer [1062]
may be used for maintaining the load balancing of the request for the services. The
identity & access manager [1064] may be used for logging purposes. The
command line interface (CLI) [1066] may be used to provide commands to
execute certain processes which require changes during run time. The central
logging manager [1068] may be responsible for keeping the logs of every service.
These logs are generated by the MANO platform [100]. These logs may be used for
debugging purposes. The event routing manager [1070] may be responsible for
routing the events i.e., the application programming interface (API) hits to the
corresponding services.
[0059] The platform core services module [108] comprises an NFV infrastructure
monitoring manager [1082], an assure manager [1084], a performance manager
[1086], a policy execution engine [1088], a capacity monitoring manager [1090], a
release management (mgmt.) repository [1092], a configuration manager & golden
30 configuration manager (GCT) [1094], an NFV platform decision analytics
[1096], a platform NoSQL DB [1098], a platform schedulers and cron jobs [1100],
16
a VNF backup & upgrade manager [1102], a micro service auditor [1104], and a
platform operations, administration and maintenance manager [1106]. The NFV
infrastructure monitoring manager [1082] may monitor the infrastructure part of
the NFs. For e.g., any metrics such as CPU utilization by the VNF. The assure
manager [1084] may be responsible for supervising the alarms the vendor may be
generating. The performance manager [1086] may be responsible for managing
the performance counters. The policy execution engine (PEE) [1088] may be
responsible for managing all the policies. The capacity monitoring manager
(CMM) [1090] may be responsible for sending the request to the PEE [1088]. The
10 release management repository (RMR) [1092] may be responsible for managing
the releases and the images of all of the vendor’s network nodes. The configuration
manager & GCT [1094] manages the configuration and GCT of all the vendors.
The NFV platform decision analytics (NPDA) [1096] helps in deciding the
priority of using the network resources. It is further noted that the policy execution
engine (PEE) [1088], the configuration manager & GCT [1094], and the
NPDA [1096] work together. The platform NoSQL DB [1098] may be a platform
database for storing all the inventory (both physical and logical) as well as the
metadata of the VNFs and CNF. It may be noted that the platform NoSQL DB
[1098] may be just a narrower implementation of the present disclosure, and any
other kind of structure for the database may be implemented for the platform
database such as relational or non-relational database. The platform schedulers
and cron jobs [1100] may schedule tasks such as, but not limited to, triggering of
an event, traversing the network graph, etc. The VNF backup & upgrade manager
[1102] takes backup of the images, binaries of the VNFs and the CNFs and produces
those backups on demand in case of server failure. The microservice auditor
[1104] audits the microservices. For e.g., in a hypothetical case, instances not being
instantiated by the MANO architecture [100] may be using the network resources.
In such case, the microservice auditor [1104] audits and informs the same so that
resources can be released for services running in the MANO architecture [100]. The
audit assures that the services only run on the MANO platform [100]. The platform
operations, administration and maintenance manager [1106] may be used for
newer instances that are spawning.
[0060] The platform resource adapters and utilities module [112] further
comprises a platform external API adaptor and gateway [1122], a generic decoder
and indexer (XML, CSV, JSON) [1124], a docker service adaptor [1126], an
OpenStack API adapter [1128], and a NFV gateway [1130]. The platform external
API adaptor and gateway [1122] may be responsible for handling the external
services (to the MANO platform [100]) that require the network resources. The
generic decoder and indexer (XML, CSV, JSON) [1124] may directly get the data
of the vendor system in the XML, CSV, JSON format. The docker service adaptor
[1126] may be the interface provided between the telecom cloud and the MANO
architecture [100] for communication. The Docker Service Adapter (DSA) is a
microservices-based system designed to deploy and manage Container Network
Functions (CNFs) and their components (CNFCs) across Docker nodes. It offers
REST endpoints for key operations, including uploading container images to a
Docker registry, terminating CNFC instances, and creating Docker volumes and
networks. CNFs, which are network functions packaged as containers, may consist
of multiple CNFCs. The DSA facilitates the deployment, configuration, and
management of these components by interacting with Docker's API, ensuring
proper setup and scalability within a containerized environment. This approach
provides a modular and flexible framework for handling network functions in a
virtualized network setup.
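By way of illustration only, the following Python sketch shows how a client might call such DSA-style REST endpoints; the base URL, endpoint paths, and payload fields are hypothetical assumptions and are not defined by this specification.

    import requests

    DSA_BASE_URL = "http://dsa.example.internal:8080"  # hypothetical DSA address

    def upload_container_image(image_name, registry_url):
        # Register a container image with the Docker registry via the DSA (assumed endpoint).
        payload = {"image": image_name, "registry": registry_url}
        response = requests.post(f"{DSA_BASE_URL}/images/upload", json=payload, timeout=30)
        response.raise_for_status()
        return response.json()

    def terminate_cnfc_instance(cnfc_id):
        # Ask the DSA to terminate a running CNFC instance (assumed endpoint).
        response = requests.delete(f"{DSA_BASE_URL}/cnfc/{cnfc_id}", timeout=30)
        response.raise_for_status()
        return response.status_code

    def create_docker_volume(volume_name):
        # Ask the DSA to create a Docker volume for a CNF (assumed endpoint).
        response = requests.post(f"{DSA_BASE_URL}/volumes", json={"name": volume_name}, timeout=30)
        response.raise_for_status()
        return response.json()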
[0061] The OpenStack API adapter [1128] may be used to connect with the
virtual machines (VMs). The NFV gateway [1130] may be responsible for
providing the path to each service going to/incoming from the MANO architecture
[100].
[0062] FIG. 2 illustrates an exemplary block diagram of a computing device [200]
upon which the features of the present disclosure may be implemented in
accordance with exemplary implementation of the present disclosure. In an
implementation, the computing device [200] may also implement a method for
implementing one or more corrective actions during a resource threshold error event
utilising the system [300]. In another implementation, the computing device [200]
itself implements the method for implementing the one or more corrective actions
during the resource threshold error event using one or more units configured within
the computing device [200], wherein said one or more units are capable of
implementing the features as disclosed in the present disclosure.
[0063] The computing device [200] may include a bus [202] or other
communication mechanism for communicating information, and a hardware
processor [204] coupled with bus [202] for processing information. The hardware
processor [204] may be, for example, a general-purpose microprocessor. The
computing device [200] may also include a main memory [206], such as a random-access memory (RAM), or other dynamic storage device, coupled to the bus [202]
for storing information and instructions to be executed by the processor [204]. The
main memory [206] also may be used for storing temporary variables or other
intermediate information during execution of the instructions to be executed by the
processor [204]. Such instructions, when stored in non-transitory storage media
accessible to the processor [204], render the computing device [200] into a special-purpose machine that is customized to perform the operations specified in the
instructions. The computing device [200] further includes a read only memory
(ROM) [208] or other static storage device coupled to the bus [202] for storing static
information and instructions for the processor [204].
[0064] A storage device [210], such as a magnetic disk, optical disk, or solid-state
drive is provided and coupled to the bus [202] for storing information and
instructions. The computing device [200] may be coupled via the bus [202] to a
display [212], such as a cathode ray tube (CRT), Liquid crystal Display (LCD),
Light Emitting Diode (LED) display, Organic LED (OLED) display, etc. for
displaying information to a computer user. An input device [214], including
alphanumeric and other keys, touch screen input means, etc. may be coupled to the
bus [202] for communicating information and command selections to the processor
[204]. Another type of user input device may be a cursor controller [216], such as a
mouse, a trackball, or cursor direction keys, for communicating direction
information and command selections to the processor [204], and for controlling
cursor movement on the display [212]. The input device typically has two degrees
of freedom in two axes, a first axis (e.g., x) and a second axis (e.g., y), that allow
the device to specify positions in a plane.
[0065] The computing device [200] may implement the techniques described
herein using customized hard-wired logic, one or more ASICs or FPGAs, firmware
and/or program logic which in combination with the computing device [200] causes
or programs the computing device [200] to be a special-purpose machine.
According to one implementation, the techniques herein are performed by the
computing device [200] in response to the processor [204] executing one or more
sequences of one or more instructions contained in the main memory [206]. Such
instructions may be read into the main memory [206] from another storage medium,
such as the storage device [210]. Execution of the sequences of instructions
contained in the main memory [206] causes the processor [204] to perform the
process steps described herein. In alternative implementations of the present
disclosure, hard-wired circuitry may be used in place of or in combination with
software instructions.
[0066] The computing device [200] also may include a communication interface
[218] coupled to the bus [202]. The communication interface [218] provides a two-way data communication coupling to a network link [220] that is connected to a
local network [222]. For example, the communication interface [218] may be an
integrated services digital network (ISDN) card, cable modem, satellite modem, or
a modem to provide a data communication connection to a corresponding type of
telephone line. As another example, the communication interface [218] may be a
local area network (LAN) card to provide a data communication connection to a
compatible LAN. Wireless links may also be implemented. In any such
implementation, the communication interface [218] sends and receives electrical,
electromagnetic or optical signals that carry digital data streams representing
various types of information.
[0067] The computing device [200] can send messages and receive data, including
program code, through the network(s), the network link [220] and the
communication interface [218]. In the Internet example, a server [230] might
transmit a requested code for an application program through the Internet [228], the
ISP [226], the local network [222], a host [224] and the communication interface
[218]. The received code may be executed by the processor [204] as it is received,
and/or stored in the storage device [210], or other non-volatile storage for later
execution.
[0068] Referring to FIG. 3, an exemplary block diagram of a system [300] for
implementing one or more corrective actions during a resource threshold error
event, is shown, in accordance with the exemplary implementations of the present
disclosure. In one example, the system [300] may be implemented as or within a
Network Function Virtualization (NFV) Platform Decision Analytics (NPDA)
module. In another example, as depicted in FIG. 3, the system [300] may include
the NPDA module [302]. The system [300] may also include additional components
in communication with the NPDA module [302], which have not been depicted in
FIG. 3, and would be understood by a person skilled in the art.
[0069] In another example, the system [300] may be in communication with a
Policy Execution Engine (not depicted in FIG. 3). Such PEE may be understood as
PEE [1088], as explained in conjunction with FIG. 1. In cases, where the system
[300] is implemented as or within the NPDA module, the system [300] and the PEE
[1088] may be in communication through a NA_PE interface. The NA_PE interface
may refer to an interface used for exchanging data between the NPDA module and
the PEE [1088] for facilitating the communication.
[0070] The system [300] may be in further communication with other network
entities/components known to a person skilled in the art. Such network
entities/components have not been depicted in FIG. 3 and not explained here for the
sake of brevity.
[0071] As depicted in FIG. 3, in an example, the system [300] may include at least
one transceiver unit [304], at least one retrieval unit [306], at least one evaluation
unit [308], and at least one generation unit [310]. In cases where the system [300]
10 may be implemented as the NPDA module, the aforementioned units may be a part
of the system [300].
[0072] Also, all of the components/ units of the system [300] are assumed to be
connected to each other unless otherwise indicated below. As shown in FIG.3, all
15 units shown within the system [300] should also be assumed to be connected to
each other. Also, in FIG. 3, only a few units are shown, however, the system [300]
may comprise multiple such units or the system [300] may comprise any such
numbers of said units, as required to implement the features of the present
disclosure. Further, in an implementation, the system [300] may be present in a user
device/ user equipment to implement the features of the present disclosure. The
system [300] may be a part of the user device/ or may be independent of but in
communication with the user device (which may also be referred to herein as a UE). In another
implementation, the system [300] may reside in a server or a network entity. In yet
another implementation, the system [300] may reside partly in the server/ network
entity and partly in the user device.
[0073] The system [300] is configured for implementing the one or more corrective
actions during the resource threshold error event, with the help of the
interconnection between the components/units of the system [300].
[0074] As would be understood, the one or more corrective actions may refer to the
measures or service operations that may be used for correcting one or more
problems/issues, such as error event, in order to correctively apply scaling or
healing operations. Further, the error event may refer to a scenario where there
exists an error associated with reaching a performance capacity of a particular
network function.
[0075] In operation, for implementing the one or more corrective actions during the
resource threshold error event, the transceiver unit [304] receives a resource
threshold error event for a Network Function (NF) at a Network Function
Virtualization (NFV) Platform Decision Analytics (NPDA) module [302].
[0076] As would be understood, the resource threshold event for the NF may refer
to a scenario where the NF or its instance (or a processing component) reaches its
limits in terms of resources such as performance capabilities, storage capabilities,
etc. In an exemplary implementation of the present disclosure, the resource
threshold error event corresponds to an error event occurred upon consumption of
resources by the NF above a predefined threshold. In such implementations, the
predefined threshold may refer to the threshold limit indicating the performance
capabilities, storage capabilities, etc.
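As a hedged illustration of this predefined-threshold concept, the Python sketch below emits a resource threshold error event only when the reported consumption of a metric rises above its configured limit; the class and field names are assumptions made for the example.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class ResourceThresholdErrorEvent:
        nf_id: str           # identifier of the Network Function (hypothetical field)
        metric: str          # e.g. "cpu", "memory" or "storage"
        consumption: float   # currently consumed quantity of the resource
        threshold: float     # predefined threshold configured for the metric

    def check_threshold(nf_id: str, metric: str, consumption: float,
                        threshold: float) -> Optional[ResourceThresholdErrorEvent]:
        # An error event is raised only when consumption exceeds the predefined threshold.
        if consumption > threshold:
            return ResourceThresholdErrorEvent(nf_id, metric, consumption, threshold)
        return None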
[0077] In another implementation of the present disclosure, the Network Function
(NF) is selected from a group of NFs comprising virtual network function (VNF),
container network function (CNF), and combinations thereof, wherein the VNF
further comprises one or more VNF components, and the CNF further comprises
one or more CNF components. As used herein, the VNF may refer to software
applications that deliver network functions such as directory services, routers,
firewalls, load balancers, etc. The CNF may be a component or a software service
that fulfils certain network functionalities while adhering to cloud-native design
principles without requiring any hardware or appliance to house it.
[0078] In an exemplary implementation of the present disclosure, the transceiver
unit [304] may receive the resource threshold error event from the event routing
manager (ERM) module [1070]. In another exemplary implementation of the
present disclosure, the resource threshold error event may be received from the
capacity monitoring manager (CMM) [1090] for the NF.
[0079] Continuing further, the retrieval unit [306] retrieves, at the NPDA module
[302], a set of data related to historical instances of resource threshold error events
for the NF. The set of data related to the historical instances of the error events may
be stored by certain components within the system architecture [100] and other
network entities such as the network data analytics function, etc. The set of data
provides the information associated with the occurrence of the resource threshold
error event in the past. The set of data related to historical instances of resource
threshold events for the NF may refer to the occurrences of reaching the resource
threshold events for the particular NF in the past. For example, the set of data may
indicate the performance levels and the threshold levels which may be used to
determine the corrective actions.
[0080] The evaluation unit [308] then evaluates, based on the retrieved set of data,
a hysteresis for the resource threshold error event. The hysteresis for the resource
threshold error event may refer to the probability of occurrence and the actions and
policies that were formed in case of the resource threshold error events occurred in
the past. The hysteresis may indicate a pattern in the occurrence of the error event
and may be used for making decisions based on the past data present in the set of
data. Accordingly, the hysteresis for the resource threshold event may refer to a
pattern in the past occurrences of the resource threshold events for analysis of the
frequency of the resource threshold events.
[0081] The generation unit [310] then generates, on evaluation of a positive
hysteresis for the resource threshold error event, a
response message indicating an occurrence of the positive hysteresis. As would be
understood, the positive evaluation of the hysteresis for the resource threshold
events may refer to an indication of repeated occurrence of resource threshold
events for the particular NF indicating that the particular NF instance is not able to
perform optimally, and requires a corrective action to be performed. The response
is generated in order to be provided as a notification. The notification indicates
that there exists the hysteresis for the error event. The user
is notified about the existence of the hysteresis for the error event. The notification
to the user allows the user to take corrective measures for the error event. The user
may manually perform the one or more corrective actions based on the notification. Also,
the notification enables the user to analyze the hysteresis for the error event and
accordingly analyze the need for taking the corrective measures. It may be noted
that the notification may be sent as a popup message or a graphical user interface
on a user equipment of the user. For sending the notification, various other
alternatives may also be used as may be known in the art and obvious to a person
skilled in the art and shall not be considered to be limited in nature.
[0082] In a further implementation of the present disclosure, the transceiver unit
[304] transmits the response message to a user, such as a network administrator or
a network operator, wherein the user, based on the received response message, is to
implement the one or more corrective actions. The implementation of the one or
more corrective measures may be done manually by the user or may also be
automatically performed.
[0083] In another exemplary implementation of the present disclosure, the retrieval
unit [306] retrieves, based on the response message, at the NPDA module [302], a
resource threshold policy defined for the NF relating to the resource threshold error
event. Then the transceiver unit [304] transmits, to a Policy Execution Engine (PEE)
[1088], a request for one or more corrective actions to negate the resource threshold
error event. The request for the one or more corrective actions may be transmitted
in response to a positive evaluation of the hysteresis for the resource threshold error
event. As would be understood by a person skilled in the art, the request for
corrective action may refer to a request for performing the corrective actions and
may be in form of a command or a message. It may be noted that the request for
corrective action may be sent over the NA_PE interface. Further, the transceiver
unit [304] receives, from the PEE [1088], an indication of an implementation of the
one or more corrective actions by a Virtual Network Function Lifecycle Manager
(VLM) [1042]. The PEE [1088] is responsible to create the one or more corrective
actions and then transmit the one or more corrective actions to the VLM [1042].
The VLM [1042] is to implement the one or more corrective actions. Also, it may
be noted that request for the one or more corrective actions may be in form of a
command or a request message, etc.
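A minimal sketch of this exchange is given below; the policy store, the NA_PE client object, and the message fields are illustrative assumptions, not an interface definition from the specification.

    def request_corrective_actions(nf_id, policy_store, na_pe_client):
        # Retrieve the resource threshold policy defined for the NF (assumed lookup helper).
        policy = policy_store.get_resource_threshold_policy(nf_id)

        # Ask the Policy Execution Engine, over the NA_PE interface, for corrective
        # actions that negate the resource threshold error event (assumed message schema).
        na_pe_client.send({
            "type": "corrective_action_request",
            "nf_id": nf_id,
            "policy_id": policy["id"],
        })

        # The PEE creates the corrective actions and forwards them to the VLM; the NPDA
        # only receives back an indication that the VLM has implemented them.
        indication = na_pe_client.receive()
        return indication.get("status") == "implemented_by_vlm"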
[0084] In another exemplary implementation of the present disclosure, the PEE
[1088] is responsible to transmit the one or more corrective actions and a predefined
time instance data related to implementation of the one or more corrective actions.
It may be noted that the predefined time instance may refer to a period of time that
may be selected for performing the one or more corrective actions. In an example,
the predefined time instance may have a selected time and date for performance of
the one or more actions. The exemplary implementation of the present disclosure
also provides that the VLM [1042] may implement the one or more corrective
actions at the predefined time instance. The implementation of the one or more
corrective actions at the predefined time instance may, for example, be to perform
the scaling in operation at a specific time, during a scheduled maintenance, say on
25th January at 6:00 P.M.
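Purely as an illustrative sketch of acting at a predefined time instance, the Python snippet below waits until the scheduled moment before invoking a corrective action; the scheduling approach and function names are assumptions.

    import time
    from datetime import datetime

    def execute_at(predefined_time: datetime, corrective_action, *args):
        # Wait until the predefined time instance (for example, a scheduled
        # maintenance window) and then apply the corrective action.
        delay = (predefined_time - datetime.now()).total_seconds()
        if delay > 0:
            time.sleep(delay)
        return corrective_action(*args)

    # Hypothetical usage: execute_at(maintenance_window_start, scale_in, "nf-instance-1")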
[0085] In one of the implementations of the present disclosure, the one or more
corrective actions comprise scaling the NF. As would be understood, scaling
the NF may refer to scaling in or scaling out of the resources allocated to a particular
instance of the NF. The scaling in and scaling out may refer to increase or decrease
in the resource allocation of a particular NF instance, in order to manage the
performance requirements of the network function. It may be noted that the
implementation of the present disclosure may allow proactive scale-in/out which
may be automatically scheduled and planned. In another
example, the present disclosure may also be implemented manually, such as on-demand
by a network administrator or a network entity. While scaling, the
availability of the network resources is checked.
[0086] In an exemplary implementation of the present disclosure, the action of
scaling the Network Function is based on at least one of a total available resource
in the network, a minimum required resource, and a resource capacity of the NF.
Due to the limitation of network resources, it is important that the scaling decision is
made based on the available network resources, and the requirement of the
resources based on the capacity of the NF. As would be understood, the total
available network resources may refer to a collective quantum of the resources
available within the network. For example, the total available network resources
may indicate a processing power, a storage capacity, bandwidth, etc. The minimum
required resources may refer to a resource requirement of a particular NF which is
required for keeping the NF operation alive; resources should not be allocated
below such a minimum level. Further, the resource capacity of the NF may refer
to a set configurable limit allocated to an NF indicating a highest level of resources
that may be allocated to a particular NF.
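As one hedged example of such a decision, the Python sketch below clamps a requested allocation between the minimum required resources and the resource capacity of the NF, and grants a scale-out only if the network has that much free resource; the specific rule is an assumption for illustration.

    def decide_allocation(requested, current_allocation, minimum_required,
                          nf_capacity, total_available):
        # Clamp the request between the minimum required resources and the NF capacity.
        target = max(minimum_required, min(requested, nf_capacity))
        delta = target - current_allocation
        # A scale-out is granted only when the network still has that much free resource.
        if delta > 0 and delta > total_available:
            return current_allocation  # insufficient free resources; keep the current allocation
        return target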
[0087] For example, the corrective action may be to increase the quantity of
resources for that particular NF. Further, it may be noted that in a scenario where
the NF is not fully utilizing the allocated resources, then in such case, the quantity
of the resources allocated to the NF may also be reduced for efficient utilization of
resources. The one or more corrective actions in case of the resource threshold event
enable automatic scaling of the network functions. For example, the instance of the
network function may be scaled and its resource allocation increased in order to meet
the requirements, since, due to low resources, such resource threshold error events
may be happening repeatedly.
[0088] Referring to FIG. 4, an exemplary method flow diagram [400] for
implementing one or more corrective actions during an error event, in accordance
with exemplary implementations of the present disclosure is shown. In an
implementation the method [400] is performed by the NPDA module [302]. Further,
in an implementation, the NPDA module [302] may be present in a server device to
implement the features of the present disclosure. Also, as shown in FIG. 4, the
method [400] starts at step [402].
[0089] As would be understood, the one or more corrective actions may refer to the
measures or service operations that may be used for correcting one or more
problems/issues, such as error event, in order to correctively apply scaling or
healing operations. Further, the error event may refer to a scenario where there
exists an error associated with reaching a performance capacity of a particular
network function.
[0090] In operation, for implementing one or more corrective actions during an
error event, the method [400], at step [404], involves receiving, by a transceiver
unit [304] at a Network Function Virtualization (NFV) Platform Decision Analytics
(NPDA) module [302], a resource threshold error event for a Network Function
(NF).
[0091] As would be understood, the resource threshold event for the NF may refer
to a scenario where the NF or its instance (or a processing component) reaches its
limits in terms of resources such as performance capabilities, storage capabilities,
etc. In an exemplary implementation of the present disclosure, the resource
threshold error event corresponds to an error event that occurs upon consumption of
resources by the NF above a predefined threshold. In such implementations, the
predefined threshold may refer to the threshold limit indicating the performance
capabilities, storage capabilities, etc.
[0092] In another implementation of the present disclosure, the Network Function
(NF) is selected from a group of NFs comprising virtual network function (VNF),
container network function (CNF), and combinations thereof, wherein the VNF
further comprises one or more VNF components, and the CNF further comprises
one or more CNF components. As used herein, the VNF may refer to software
applications that deliver network functions such as directory services, routers,
firewalls, load balancers, etc. The CNF may be a component or a software service
that fulfils certain network functionalities while adhering to cloud-native design
principles without requiring any hardware or appliance to house it.
[0093] In an exemplary implementation of the present disclosure, the transceiver unit [304] may receive the resource threshold error event from the event routing manager (ERM) module [1070]. In another exemplary implementation of the present disclosure, the resource threshold error event may be received from the capacity monitoring manager (CMM) [1090] for the NF.
[0094] Continuing further, at step [406], the method [400] comprises retrieving, by a retrieval unit [306] at the NPDA module [302], a set of data related to historical instances of resource threshold error events for the NF. The set of data related to the historical instances of the error events may be stored by certain components within the system architecture [100] and other network entities such as the network data analytics function, etc. The set of data provides the information associated with the occurrence of the error event in the past. The set of data related to historical instances of resource threshold events for the NF may refer to the past occurrences of the resource threshold events for the particular NF. For example, the set of data may indicate the performance levels and the threshold levels which may be used to determine the corrective actions.
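As a non-limiting illustration of this retrieval step, a Python sketch is given below; the local store, the dictionary keys "nf_id" and "timestamp", and the 24-hour look-back window are assumptions standing in for the analytics and storage components mentioned above.

from datetime import datetime, timedelta, timezone

def retrieve_history(store, nf_id, window=timedelta(hours=24)):
    """Return past resource threshold error events for the given NF within a look-back window."""
    cutoff = datetime.now(timezone.utc) - window
    return [e for e in store
            if e["nf_id"] == nf_id and e["timestamp"] >= cutoff]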
[0095] Then based on the retrieved set of data, at step [408], the method [400] comprises evaluating, by an evaluation unit [308], a hysteresis for the resource threshold error event. The hysteresis for the resource threshold error event may refer to the probability of occurrence and the actions and policies that were formed for the resource threshold error events that occurred in the past. The hysteresis may indicate a pattern in the occurrence of the error event and may be used for making decisions based on the past data present in the set of data. Accordingly, the hysteresis for the resource threshold event may refer to a pattern in the past occurrences of the resource threshold events for analysis of the frequency of the resource threshold events.
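Purely as an illustrative sketch, and not as the claimed evaluation logic, one simple way to treat hysteresis as a recurrence pattern is shown below; the recurrence_limit parameter is an assumption and could equally be a provisioned policy value.

def evaluate_hysteresis(history, recurrence_limit=3):
    """Positive hysteresis: the error event recurred at least `recurrence_limit` times in the window."""
    return len(history) >= recurrence_limit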
[0096] Further, on evaluation of a positive hysteresis for the resource threshold error event, then at step [410], the method [400] comprises generating, by a generation unit [310], a response message indicating an occurrence of the positive hysteresis. As would be understood, the positive evaluation of the hysteresis for the resource threshold events may refer to an indication of repeated occurrence of resource threshold events for the particular NF, indicating that the particular NF instance is not able to perform optimally and requires a corrective action to be performed. The response message is generated in order to be provided as a notification to the user that the hysteresis exists for the error event. The notification allows the user to take corrective measures for the error event; the user may manually perform the one or more corrective actions based on the notification. Also, the notification enables the user to analyze the hysteresis for the error event and accordingly analyze the need for taking the corrective measures. It may be noted that the notification may be sent as a popup message or a graphical user interface on a user equipment of the user. For sending the notification, various other alternatives may also be used as may be known in the art and obvious to a person skilled in the art, and shall not be considered to be limiting in nature.
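A non-limiting sketch of the generation of such a response message follows; the JSON field names and the message text are illustrative assumptions rather than a prescribed format.

import json

def generate_response(nf_id, history):
    """Build a notification payload indicating positive hysteresis for the NF."""
    return json.dumps({
        "type": "POSITIVE_HYSTERESIS",
        "nf_id": nf_id,
        "occurrences": len(history),
        "message": "Repeated resource threshold breaches; corrective action may be required.",
    })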
[0097] In a further implementation of the present disclosure, the method [400] comprises transmitting, by the transceiver unit [304], the response message to a user, wherein the user, based on the received response message, is to implement one or more corrective actions. The implementation of the one or more corrective actions may be performed manually by the user or automatically.
[0098] In another exemplary implementation of the present disclosure, based on the response message, the method [400] involves retrieving, by the retrieval unit [306] at the NPDA module [302], a resource threshold policy defined for the NF relating to the resource threshold error event. Then the method comprises transmitting, by the transceiver unit [304] to a Policy Execution Engine (PEE) [1088], a request for one or more corrective actions to negate the resource threshold error event. The request for the one or more corrective actions may be transmitted in response to a positive evaluation of the hysteresis for the resource threshold error event. As would be understood by a person skilled in the art, the request for corrective action may refer to a request for performing the corrective actions and may be in the form of a command or a message. It may be noted that the request for corrective action may be sent over the NA_PE interface. Further, the method [400] then moves to receiving, by the transceiver unit [304] from the PEE [1088], an indication of an implementation of the one or more corrective actions by a Virtual Network Function Lifecycle Manager (VLM) [1042]. The PEE [1088] may be responsible for creating the one or more corrective actions and then transmitting the one or more corrective actions to the VLM [1042]. The VLM [1042] may then implement the one or more corrective actions. Also, it may be noted that the request for the one or more corrective actions may be in the form of a command or a request message, etc.
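The request/acknowledgement exchange described above may be sketched, purely for illustration, as follows; the actual NA_PE interface, PEE and VLM behaviours are not specified here, so send_over_na_pe is a hypothetical placeholder for whatever transport realises that interface.

def request_corrective_action(policy, send_over_na_pe):
    """Send a corrective-action request towards the PEE and return the received indication."""
    request = {
        "action": "CORRECTIVE_ACTION_REQUEST",
        "policy_id": policy.get("policy_id"),
        "nf_id": policy.get("nf_id"),
    }
    # The reply is assumed to indicate implementation of the actions by the VLM,
    # e.g. {"status": "IMPLEMENTED_BY_VLM"}; this is an assumption, not the NA_PE format.
    return send_over_na_pe(request)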
[0099] In another exemplary implementation of the present disclosure, the PEE [1088] is responsible for transmitting the one or more corrective actions and predefined time instance data related to implementation of the one or more corrective actions. It may be noted that the predefined time instance may refer to a period of time that may be selected for performing the one or more corrective actions. In an example, the predefined time instance may have a selected time and date for performance of the one or more actions. In an example, the VLM [1042] is to implement the one or more corrective actions at the predefined time instance. The implementation of the one or more corrective actions at the predefined time instance may, for example, be to perform the scaling-in operation at a specific time, say on 25th January at 6:00 P.M.
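As a minimal, non-limiting sketch of deferring a corrective action to such a predefined time instance, a simple delay loop is shown below; in practice the VLM would own this scheduling, and the function name run_at is an assumption.

import time
from datetime import datetime, timezone

def run_at(when, action):
    """Wait until the predefined time instance, then invoke the corrective action."""
    delay = (when - datetime.now(timezone.utc)).total_seconds()
    if delay > 0:
        time.sleep(delay)
    action()

# Example (hypothetical): perform a scaling operation on 25th January at 18:00 UTC.
# run_at(datetime(2025, 1, 25, 18, 0, tzinfo=timezone.utc),
#        lambda: print("scaling NF instance"))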
[0100] In one of the implementations of the present disclosure, the one or more corrective actions comprise scaling the NFs. As would be understood, scaling the NF may refer to scaling in or scaling out of the resources allocated to a particular instance of the NF. The scaling in and scaling out may refer to an increase or a decrease in the resource allocation of a particular NF instance, in order to manage the performance requirements of the network function. It may be noted that the implementation of the present disclosure may allow proactive scale-in/out which may be automatically scheduled and planned. In another example, the present disclosure may also be implemented manually, such as on-demand by a network administrator or a network entity. While scaling, the availability of the network resources is checked.
[0101] In an exemplary implementation of the present disclosure, the action of scaling the Network Function is based on at least one of a total available resource in the network, a minimum required resource, and a resource capacity of the NF. Due to the limited nature of network resources, it is important that the scaling decision is made based on the available network resources and the requirement of the resources based on the capacity of the NF. As would be understood, the total available network resources may refer to the collective quantum of the resources available within the network. For example, the total available network resources may indicate processing power, storage capacity, bandwidth, etc. The minimum required resources may refer to the resource requirement of a particular NF that is needed to keep the NF operation alive, below which resources should not be allocated. Further, the resource capacity of the NF may refer to a configurable limit set for the NF, indicating the highest level of resources that may be allocated to that particular NF.
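A non-limiting sketch of a scaling decision bounded by these three factors is given below; the function name decide_allocation, the use of abstract resource units, and the example numbers are assumptions for illustration only.

def decide_allocation(requested, total_available, minimum_required, nf_capacity):
    """Bound a proposed allocation by availability, the NF's capacity and its minimum requirement."""
    if total_available < minimum_required:
        raise ValueError("insufficient network resources to keep the NF alive")
    allocation = min(requested, nf_capacity, total_available)
    # Never allocate below the level needed to keep the NF operation alive.
    return max(allocation, minimum_required)

# Example: 12 units requested, 10 available, minimum 2, capacity 16 -> 10 units allocated.
print(decide_allocation(12, 10, 2, 16))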
[0102] For example, the corrective action may be to increase the quantity of resources for that particular NF. Further, it may be noted that in a scenario where the NF is not fully utilizing the allocated resources, the quantity of the resources allocated to the NF may also be reduced for efficient utilization of resources. The one or more corrective actions in case of the resource threshold event enable automatic scaling of the network functions, for example, by scaling the instance of the network function to increase the resource allocation and meet the requirements, since such resource threshold error events may recur repeatedly when resources are low.
[0103] Thereafter, at step [412], the method [400] is terminated.
[0104] The present disclosure further discloses a non-transitory computer readable storage medium storing instructions for implementing one or more corrective actions during a resource threshold error event. The instructions include executable code which, when executed by one or more units of a system [300], causes a transceiver unit [304] of the system [300] to receive a resource threshold error event for a Network Function (NF). Further, the instructions include executable code which, when executed, causes a retrieval unit [306] to retrieve a set of data related to historical instances of resource threshold error events for the NF. Further, the instructions include executable code which, when executed, causes an evaluation unit [308] to evaluate a hysteresis for the resource threshold error event, based on the retrieved set of data. Further, the instructions include executable code which, when executed, causes a generation unit [310] to generate, on evaluation of a positive hysteresis for the resource threshold error event, a response message indicating an occurrence of the positive hysteresis.
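The overall sequence of these stored instructions may be sketched, purely for illustration and without limiting the disclosure, as a single routine loosely mirroring steps [404] to [410]; events are treated as plain dictionaries, and the recurrence rule and field names are assumptions.

import json
from typing import Optional

def npda_handle_event(event, history, recurrence_limit=3) -> Optional[str]:
    """Receive an event, consult its history, evaluate hysteresis and, if positive,
    generate a response message (mirroring steps [404]-[410])."""
    past = [e for e in history if e.get("nf_id") == event["nf_id"]]   # retrieval, step [406]
    if len(past) >= recurrence_limit:                                 # hysteresis, step [408]
        return json.dumps({"type": "POSITIVE_HYSTERESIS",             # response, step [410]
                           "nf_id": event["nf_id"],
                           "occurrences": len(past)})
    return None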
[0105] As is evident from the above, the present disclosure provides a technically advanced solution for implementing one or more corrective actions during the resource threshold error event. The present solution provides a technically advanced solution for automatic detection of scaling (In/Out) / healing operations. The present disclosure enables making intelligent decisions in real-time through event-driven operation based on the provisioned policies. Further, it may be noted that the present disclosure provides monitoring of the error events, analyses the error event data and the policies required for taking corrective actions, and also provides implementation of the corrective actions to be taken. Thus, the present disclosure provides a solution which is able to perform all of these steps, thereby resulting in a closed loop automation. The present disclosure utilises closed loop automation and enables addressing network issues, improving the overall stability and performance of the network infrastructure, facilitating efficient scaling / healing processes, and enabling swift and informed actions.
[0106] Further, the present solution provides a technically advanced solution for notifying an automatic scale in/out request based on NPDA hysteresis threshold policies. The present solution offers a notable technical advantage, manifesting in its capacity to execute intelligent, real-time decisions driven by meticulously provisioned policies and hysteresis evaluation. This attribute sets it apart as a formidable solution for tackling network challenges, ultimately bolstering the stability and performance of the network infrastructure. The ability to facilitate efficient scaling operations (In/Out) empowers swift, well-informed actions, ensuring that network resources are optimally allocated. By seamlessly integrating event-driven operations with predefined policies, this innovation demonstrates its value in the realm of network management, offering a dynamic and responsive approach to network optimization. This, in turn, leads to a marked improvement in overall network resilience and efficiency.
[0107] Also, the present disclosure provides a solution that informs scale-in/scale-out/healing of a microservice server in the event the gating criteria are true, which usually happens when there is a breach in the reported load. The present disclosure provides a solution that acts as a closed loop automation point which, in real time, takes informed decisions related to scaling or healing of a microservice server based on an evaluated threshold-based policy breach decision. The present disclosure provides a solution that enables tracking of a microservice server load and informing a threshold-based policy breach decision (scaling or healing) by the NPDA server in real-time, thereby mitigating any network resource failures.
[0108] While considerable emphasis has been placed herein on the disclosed implementations, it will be appreciated that many implementations can be made and that many changes can be made to the implementations without departing from the principles of the present disclosure. These and other changes in the implementations of the present disclosure will be apparent to those skilled in the art, whereby it is to be understood that the foregoing descriptive matter is to be interpreted as illustrative and non-limiting.
[0109] Further, in accordance with the present disclosure, it is to be acknowledged that the functionality described for the various components/units can be implemented interchangeably. While specific embodiments may disclose a particular functionality of these units for clarity, it is recognized that various configurations and combinations thereof are within the scope of the disclosure. The functionality of specific units as disclosed in the disclosure should not be construed as limiting the scope of the present disclosure. Consequently, alternative arrangements and substitutions of units, provided they achieve the intended functionality described herein, are considered to be encompassed within the scope of the present disclosure.
We Claim:
1. A method for implementing one or more corrective actions during a resource threshold error event, the method comprising:
- receiving, by a transceiver unit [304] at a Network Function Virtualization (NFV) Platform Decision Analytics (NPDA) module [302], a resource threshold error event for a Network Function (NF);
- retrieving, by a retrieval unit [306] at the NPDA module [302], a set of data related to historical instances of resource threshold error events for the NF;
- based on the retrieved set of data, evaluating, by an evaluation unit [308], a hysteresis for the resource threshold error event; and
- on evaluation of a positive hysteresis for the resource threshold error event, generating, by a generation unit [310], a response message indicating an occurrence of the positive hysteresis.
2. The method as claimed in claim 1, further comprising:
- transmitting, by the transceiver unit [304], the response message to a user, wherein the user, based on the received response message, is to implement one or more corrective actions.
3. The method as claimed in claim 1, further comprising:
- based on the response message, retrieving, by the retrieval unit [306] at the NPDA module [302], a resource threshold policy defined for the NF relating to the resource threshold error event;
- transmitting, by the transceiver unit [304] to a Policy Execution Engine (PEE) [1088], a request for one or more corrective actions to negate the resource threshold error event; and
- receiving, by the transceiver unit [304] from the PEE [1088], an indication of an implementation of the one or more corrective actions by a Virtual Network Function Lifecycle Manager (VLM) [1042], wherein the PEE [1088] is to:
  o create the one or more corrective actions; and
  o transmit the one or more corrective actions to the VLM [1042], wherein the VLM [1042] is to implement the one or more corrective actions.
4. The method as claimed in claim 3, wherein the PEE [1088] is to transmit the one or more corrective actions and predefined time instance data related to implementation of the one or more corrective actions, and wherein the VLM [1042] is to implement the one or more corrective actions at the predefined time instance.
5. The method as claimed in claim 2 or 3, wherein the one or more corrective
actions comprises scaling the NFs.
6. The method as claimed in claim 5, wherein the action of scaling the Network Function is based on at least one of a total available resource in the network, a minimum required resource, and a resource capacity of the NF.
7. The method as claimed in claim 3, wherein the NPDA module [302] and the PEE [1088] are in communication through a NA_PE interface.
8. The method as claimed in claim 1, wherein the resource threshold error event corresponds to an error event that occurs upon consumption of resources by the NF above a predefined threshold.
9. The method as claimed in claim 1, wherein the resource threshold error
event for the NF is received from a Capacity Monitoring Manager (CMM) [1090].
10. The method as claimed in claim 1, wherein the Network Function (NF) is selected from a group of NFs comprising virtual network function (VNF), container network function (CNF), and combinations thereof, wherein the VNF further comprises one or more VNF components, and the CNF further comprises one or more CNF components.
11. The method as claimed in claim 1, wherein the error event is received by the transceiver unit from an event routing manager (ERM) module [1070].
12. A system [300] for implementing one or more corrective actions during a resource threshold error event, the system [300] comprising a Network Function Virtualization (NFV) Platform Decision Analytics (NPDA) module [302], wherein the NPDA module [302] comprises:
- a transceiver unit [304] configured to receive a resource threshold error event for a Network Function (NF);
- a retrieval unit [306] connected at least to the transceiver unit [304], the retrieval unit [306] configured to retrieve a set of data related to historical instances of resource threshold error events for the NF;
- an evaluation unit [308] connected at least to the retrieval unit [306], the evaluation unit [308] configured to evaluate, based on the retrieved set of data, a hysteresis for the resource threshold error event; and
- a generation unit [310] connected at least to the evaluation unit [308], the generation unit [310] configured to generate, on evaluation of a positive hysteresis for the resource threshold error event, a response message indicating an occurrence of the positive hysteresis.
13. The system [300] as claimed in claim 12, wherein the transceiver unit [304] is further configured to transmit the response message to a user, wherein the user, based on the received response message, is to implement one or more corrective actions.
14. The system [300] as claimed in claim 12, wherein:
- the retrieval unit [306] is further configured to retrieve, based on the response message, a resource threshold policy defined for the NF relating to the resource threshold error event;
- the transceiver unit [304] is further configured to transmit, to a Policy Execution Engine (PEE) [1088], a request for one or more corrective actions to negate the resource threshold error event; and
- the transceiver unit [304] is further configured to receive, from the PEE [1088], an indication of an implementation of the one or more corrective actions by a Virtual Network Function Lifecycle Manager (VLM) [1042], wherein the PEE [1088] is to:
  o create the one or more corrective actions; and
  o transmit the one or more corrective actions to the VLM [1042], wherein the VLM [1042] is to implement the one or more corrective actions.
15. The system [300] as claimed in claim 14, wherein the PEE [1088] is to transmit the one or more corrective actions and predefined time instance data related to implementation of the one or more corrective actions, and wherein the VLM [1042] is to implement the one or more corrective actions at the predefined time instance.
16. The system [300] as claimed in claim 13 or 14, wherein the one or more
corrective actions comprises scaling the NFs.
17. The system [300] as claimed in claim 16, wherein the action of scaling the Network Function is based on at least one of a total available resource in the network, a minimum required resource, and a resource capacity of the NF.
18. The system [300] as claimed in claim 14, wherein the NPDA module [302] and the PEE [1088] are in communication through a NA_PE interface.
19. The system [300] as claimed in claim 12, wherein the resource threshold error event corresponds to an error event that occurs upon consumption of resources by the NF above a predefined threshold.
20. The system [300] as claimed in claim 12, wherein the resource threshold error event for the NF is received from a Capacity Monitoring Manager (CMM) [1090].
21. The system [300] as claimed in claim 12, wherein the Network Function (NF) is selected from a group of NFs comprising virtual network function (VNF), container network function (CNF), and combinations thereof, wherein the VNF further comprises one or more VNF components, and the CNF further comprises one or more CNF components.
22. The system [300] as claimed in claim 12, wherein the transceiver unit [304] is further configured to receive the error event from an event routing manager (ERM) module [1070].

Documents

Application Documents

# Name Date
1 202321063845-STATEMENT OF UNDERTAKING (FORM 3) [22-09-2023(online)].pdf 2023-09-22
2 202321063845-PROVISIONAL SPECIFICATION [22-09-2023(online)].pdf 2023-09-22
3 202321063845-POWER OF AUTHORITY [22-09-2023(online)].pdf 2023-09-22
4 202321063845-FORM 1 [22-09-2023(online)].pdf 2023-09-22
5 202321063845-FIGURE OF ABSTRACT [22-09-2023(online)].pdf 2023-09-22
6 202321063845-DRAWINGS [22-09-2023(online)].pdf 2023-09-22
7 202321063845-Proof of Right [19-01-2024(online)].pdf 2024-01-19
8 202321063845-FORM-5 [21-09-2024(online)].pdf 2024-09-21
9 202321063845-ENDORSEMENT BY INVENTORS [21-09-2024(online)].pdf 2024-09-21
10 202321063845-DRAWING [21-09-2024(online)].pdf 2024-09-21
11 202321063845-CORRESPONDENCE-OTHERS [21-09-2024(online)].pdf 2024-09-21
12 202321063845-COMPLETE SPECIFICATION [21-09-2024(online)].pdf 2024-09-21
13 202321063845-FORM 3 [07-10-2024(online)].pdf 2024-10-07
14 202321063845-Request Letter-Correspondence [08-10-2024(online)].pdf 2024-10-08
15 202321063845-Power of Attorney [08-10-2024(online)].pdf 2024-10-08
16 202321063845-Form 1 (Submitted on date of filing) [08-10-2024(online)].pdf 2024-10-08
17 202321063845-Covering Letter [08-10-2024(online)].pdf 2024-10-08
18 202321063845-CERTIFIED COPIES TRANSMISSION TO IB [08-10-2024(online)].pdf 2024-10-08
19 Abstract.jpg 2024-10-21
20 202321063845-ORIGINAL UR 6(1A) FORM 1 & 26-030125.pdf 2025-01-07