
Method And System For Automatic Scaling Of One Or More Nodes

Abstract: The present disclosure relates to a method and system for automatic scaling of nodes. The method comprises receiving a request for executing an automatic scaling policy for the nodes and fetching a set of data relating to the nodes. The method comprises sending a request to get current used resources, the allocated resource quota, and available resources and analysing a demand for one or more resources, for automatic scaling, based on the current used resources, the allocated resource quota, the available resources for the nodes, a set of automatic scaling constraints data, and the automatic scaling policy. The method further comprises transmitting a request for one of reserving and unreserving of the one or more resources and triggering the automatic scaling request based on a response on the request for one of the reserving and the unreserving of the one or more resources. FIG. 4


Patent Information

Application #
Filing Date
04 October 2023
Publication Number
20/2025
Publication Type
INA
Invention Field
COMPUTER SCIENCE
Status
Parent Application

Applicants

Jio Platforms Limited
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India.

Inventors

1. Aayush Bhatnagar
Reliance Corporate Park, Thane-Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
2. Ankit Murarka
Reliance Corporate Park, Thane-Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
3. Rizwan Ahmad
Reliance Corporate Park, Thane-Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
4. Kapil Gill
Reliance Corporate Park, Thane-Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
5. Arpit Jain
Reliance Corporate Park, Thane-Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
6. Shashank Bhushan
Reliance Corporate Park, Thane-Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
7. Jugal Kishore
Reliance Corporate Park, Thane-Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
8. Meenakshi Sarohi
Reliance Corporate Park, Thane-Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
9. Kumar Debashish
Reliance Corporate Park, Thane-Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
10. Supriya Kaushik De
Reliance Corporate Park, Thane-Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
11. Gaurav Kumar
Reliance Corporate Park, Thane-Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
12. Kishan Sahu
Reliance Corporate Park, Thane-Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
13. Gaurav Saxena
Reliance Corporate Park, Thane-Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
14. Vinay Gayki
Reliance Corporate Park, Thane-Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
15. Mohit Bhanwria
Reliance Corporate Park, Thane-Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
16. Durgesh Kumar
Reliance Corporate Park, Thane-Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
17. Rahul Kumar
Reliance Corporate Park, Thane-Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.

Specification

FORM 2
THE PATENTS ACT, 1970
(39 OF 1970)
&
THE PATENT RULES, 2003
COMPLETE SPECIFICATION
(See section 10 and rule 13)
“METHOD AND SYSTEM FOR AUTOMATIC SCALING OF
ONE OR MORE NODES”
We, Jio Platforms Limited, an Indian National, of Office - 101, Saffron, Nr.
Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India.
The following specification particularly describes the invention and the manner in
which it is to be performed.
METHOD AND SYSTEM FOR AUTOMATIC SCALING OF ONE OR
MORE NODES
FIELD OF THE DISCLOSURE
[0001] Embodiments of the present disclosure generally relate to the field of network management. More particularly, the present disclosure may relate to a method and system for automatic scaling of one or more nodes.
BACKGROUND
[0002] The following description of related art is intended to provide background information pertaining to the field of the disclosure. This section may include certain aspects of the art that may be related to various features of the present disclosure. However, it should be appreciated that this section should be used only to enhance the understanding of the reader with respect to the present disclosure, and not as admissions of prior art.
[0003] In a communication network, such as a 5G communication network, different microservices perform different services, jobs and tasks in the network. Different microservices have to perform their jobs, based on operational parameters and policies, in such a way that they do not affect the microservices’ own operations and service network operations. However, in the MANO system architecture, during service operations, fulfilling the requirements of policies and operational parameters requires providing sufficient resources for managing the virtual network function (VNF/VNFC) and/or containerized function (CNF/CNFC) components to handle service requests coming into the network. The Policy Execution Engine (PEEGN) provides functionality to support dynamic requirements of resource management and network service orchestration in the virtualized and containerized network. The PEEGN service stores and provides policies for resource security, availability, and scalability of VNFs. It executes automatic scaling and healing functionality of VNFs and automatic scaling of CNFs. For implementing proper resource allocation, there are several challenges, such as excessive provisioning of resources, insufficient provisioning of resources, resource failures, resource mismanagement, performance degradation, conflicts during reservation and allocation of resources, unavailability of the Policy Execution Engine service, and time consumed in reservation and allocation of VNF/VNFC/CNFC/CNF resources and cost increment, which may occur in the network and affect network performance and operational efficiency.
[0004] Hence, in view of these and other existing limitations, there arises an
imperative need to provide an efficient solution to overcome the above-mentioned
and other limitations and to provide a method and a system for automatic scaling of
one or more nodes with automatic scaling constraints.
SUMMARY
[0005] This section is provided to introduce certain aspects of the present disclosure
in a simplified form that are further described below in the detailed description.
This summary is not intended to identify the key features or the scope of the claimed
subject matter.
[0006] An aspect of the present disclosure may relate to a method for automatic
scaling of one or more nodes. The method comprises receiving, by a transceiver
unit at a Policy Execution Engine (PEEGN), from a Network Function
Virtualization Platform Decision and Analytics (NPDA), a request for executing an
automatic scaling policy for the one or more nodes. Further, the method comprises
fetching, by the transceiver unit at the PEEGN, a set of data relating to the one or
more nodes. The method further comprises sending, by the transceiver unit, from
the PEEGN a request to get one or more current used resources by each of the one
or more nodes, the allocated resource quota for each of the one or more nodes, and
one or more available resources for the one or more nodes, to a physical and virtual
inventory manager (PVIM). Further, the method comprises analysing, by a
processing unit at the PEEGN, a demand for one or more resources, for automatic
scaling of the one or more nodes, based on the one or more current used resources
by each of the one or more nodes, the allocated resource quota for each of the one
or more nodes, the one or more available resources for the one or more nodes, a set
of automatic scaling constraints data, and the automatic scaling policy.
Furthermore, the method comprises transmitting, by the transceiver unit from the
PEEGN, to the PVIM, a request for one of reserving and unreserving the one or
more resources. Thereafter, the method comprises triggering, by the transceiver
unit, from the PEEGN, the automatic scaling request to a node manager, based on
a response from the PVIM on the request for one of the reserving and the
unreserving of the one or more resources.
[0007] In an exemplary aspect of the present disclosure, the one or more nodes
comprise at least one of virtual network functions (VNFs), virtual network function
components (VNFCs), container network functions (CNFs), and container network
function components (CNFCs).
[0008] In an exemplary aspect of the present disclosure, fetching, by the transceiver
unit at the PEEGN node, the set of data relating to the one or more nodes comprises
at least one of transmitting, by the transceiver unit, from the PEEGN a request to
one or more node components associated with the one or more nodes to fetch the
set of data related to the one or more nodes and saving, by a storage unit, at the
PEEGN, the set of data related to the one or more nodes in a database.
[0009] In an exemplary aspect of the present disclosure, the set of automatic scaling
constraints data comprises at least one of a total number of CPUs, a virtual memory
size, and a disk size with the one or more nodes.
[0010] In an exemplary aspect of the present disclosure, automatic-scaling of the
one or more nodes comprises at least one of scale-in and scale-out of the one or
more nodes.
[0011] In an exemplary aspect of the present disclosure, the response from the
PVIM on the request for one of the reserving and the unreserving of the one or more
resources comprises one or more tokens for each of the one or more nodes.
[0012] In an exemplary aspect of the present disclosure, the triggering, by the
transceiver unit, from the PEEGN, the automatic scaling request to a node manager
comprises the one or more tokens for each of the one or more nodes.
[0013] In an exemplary aspect of the present disclosure, prior to transmitting, by
the transceiver unit from the PEEGN, to the PVIM, a request for reserving the one
or more resources, the method comprises updating, by the storage unit, at the
PEEGN, the one or more current used resources by each of the one or more nodes,
in the database.
[0014] In an exemplary aspect of the present disclosure, the method further
comprises receiving, by the transceiver unit, at the PEEGN, an acknowledgement
response from the node manager and transmitting, by the transceiver unit, from the
PEEGN, a response to the NPDA, of the automatic scaling of the one or more nodes.
[0015] Another aspect of the present disclosure may relate to a system for
automatic scaling of one or more nodes. The system comprises a transceiver unit
configured to receive at a Policy Execution Engine (PEEGN), from a Network
Function Virtualization Platform Decision and Analytics (NPDA), a request for
executing an automatic scaling policy for the one or more nodes. The transceiver
unit is further configured to fetch at the PEEGN, a set of data relating to the one or
more nodes. Further, the transceiver unit is configured to send from the PEEGN a
request to get one or more current used resources by each of the one or more nodes,
the allocated resource quota for each of the one or more nodes and one or more
available resources for the one or more nodes, to a physical and virtual inventory
manager (PVIM). Further, the system comprises a processing unit, configured to
analyse at the PEEGN, a demand for one or more resources, for automatic scaling
of the one or more nodes, the allocated resource quota for each of the one or more
of the one or more nodes, the allocated resource quota for each of the one or more
nodes, the one or more available resources for the one or more nodes, a set of
automatic scaling constraints data, and the automatic scaling policy. Furthermore,
the transceiver unit is configured to transmit from the PEEGN, to the PVIM, a
request for one of reserving and unreserving the one or more resources. Moreover,
the transceiver unit is configured to trigger from the PEEGN, the automatic scaling
request to a node manager, based on a response from the PVIM on the request for
one of the reserving and the unreserving of the one or more resources.
[0016] Yet another aspect of the present disclosure may relate to a non-transitory
computer readable storage medium storing one or more instructions for automatic
scaling of one or more nodes, the instructions include executable code which, when
executed by one or more units of a system, causes a transceiver unit of the system
to receive at a Policy Execution Engine (PEEGN), from a Network Function
Virtualization Platform Decision and Analytics (NPDA), a request for executing an
automatic scaling policy for the one or more nodes. Further, the executable code
when executed causes the transceiver unit to fetch at the PEEGN, a set of data
relating to the one or more nodes. Further, the executable code when executed
causes the transceiver unit to send from the PEEGN a request to get one or more
current used resources by each of the one or more nodes, the allocated resource
quota for each of the one or more nodes and one or more available resources for the
one or more nodes, to a physical and virtual inventory manager (PVIM). The
executable code when further executed causes a processing unit of the system to
analyse at the PEEGN, a demand for one or more resources, for automatic scaling
of the one or more nodes, based on the one or more current used resources by each
of the one or more nodes, the allocated resource quota for each of the one or more
nodes, the one or more available resources for the one or more nodes, a set of
automatic scaling constraints data, and the automatic scaling policy. Furthermore,
the executable code when executed causes the transceiver unit to transmit from the
PEEGN, to the PVIM, a request for one of reserving and unreserving the one or
more resources. Moreover, the executable code when executed causes the
transceiver unit to trigger from the PEEGN, the automatic scaling request to a node
manager, based on a response from the PVIM on the request for one of the reserving
and the unreserving of the one or more resources.
OBJECT OF THE DISCLOSURE
[0017] Some of the objects of the present disclosure, which at least one
embodiment disclosed herein satisfies, are listed herein below.
[0018] It is an object of the present disclosure to provide a method and a system for
automatic scaling of one or more nodes.
[0019] It is another object of the present disclosure to provide a solution to apply
automatic scale constraints based on policies that are applicable for
VNF/VNFC/CNF/CNFC for automatic scaling of resources.
[0020] It is yet another object of the present disclosure to provide a solution to
apply automatic scale constraints based on affinity, anti-affinity, dependent and
deployment flavor.
[0021] It is yet another object of the present disclosure to provide a solution that
leads to zero data loss policies while VNF/VNFC/CNF/CNFC resources are scaling
up.
[0022] It is yet another object of the present disclosure to provide a solution that
supports event driven scaling.
BRIEF DESCRIPTION OF THE DRAWINGS
[0023] The accompanying drawings, which are incorporated herein, and constitute
a part of this disclosure, illustrate exemplary embodiments of the disclosed methods
and systems in which like reference numerals refer to the same parts throughout the
different drawings. Components in the drawings are not necessarily to scale,
emphasis instead being placed upon clearly illustrating the principles of the present
disclosure. Also, the embodiments shown in the figures are not to be construed as
limiting the disclosure, but the possible variants of the method and system
according to the disclosure are illustrated herein to highlight the advantages of the
disclosure. It will be appreciated by those skilled in the art that disclosure of such
drawings includes disclosure of electrical components or circuitry commonly used
to implement such components.
[0024] FIG. 1 illustrates an exemplary block diagram of a management and
orchestration (MANO) architecture, in accordance with exemplary implementation
of the present disclosure.
[0025] FIG. 2 illustrates an exemplary block diagram of a computing device upon
which the features of the present disclosure may be implemented in accordance with
exemplary implementation of the present disclosure.
[0026] FIG. 3 illustrates an exemplary block diagram of a system for automatic
scaling of one or more nodes, in accordance with exemplary implementations of the
present disclosure.
[0027] FIG. 4 illustrates an exemplary flow diagram of a method for automatic
scaling of one or more nodes, in accordance with exemplary implementations of the
present disclosure.
[0028] FIG. 5 illustrates an exemplary session flow diagram for automatic
scaling of one or more nodes, in accordance with exemplary implementations of the
present disclosure.
[0029] The foregoing shall be more apparent from the following more detailed
description of the disclosure.
DETAILED DESCRIPTION
[0030] In the following description, for the purposes of explanation, various
specific details are set forth in order to provide a thorough understanding of
embodiments of the present disclosure. It will be apparent, however, that
embodiments of the present disclosure may be practiced without these specific
details. Several features described hereafter may each be used independently of one
another or with any combination of other features. An individual feature may not
address any of the problems discussed above or might address only some of the
problems discussed above.
[0031] The ensuing description provides exemplary embodiments only, and is not
intended to limit the scope, applicability, or configuration of the disclosure. Rather,
the ensuing description of the exemplary embodiments will provide those skilled in
the art with an enabling description for implementing an exemplary embodiment.
It should be understood that various changes may be made in the function and
arrangement of elements without departing from the spirit and scope of the
disclosure as set forth.
[0032] Specific details are given in the following description to provide a thorough
understanding of the embodiments. However, it will be understood by one of
ordinary skill in the art that the embodiments may be practiced without these
specific details. For example, circuits, systems, processes, and other components
may be shown as components in block diagram form in order not to obscure the
embodiments in unnecessary detail.
[0033] Also, it is noted that individual embodiments may be described as a process
which is depicted as a flowchart, a flow diagram, a data flow diagram, a structure
diagram, or a block diagram. Although a flowchart may describe the operations as
a sequential process, many of the operations may be performed in parallel or
concurrently. In addition, the order of the operations may be re-arranged. A process
is terminated when its operations are completed but could have additional steps not
included in a figure.
[0034] The word “exemplary” and/or “demonstrative” is used herein to mean
serving as an example, instance, or illustration. For the avoidance of doubt, the
subject matter disclosed herein is not limited by such examples. In addition, any
aspect or design described herein as “exemplary” and/or “demonstrative” is not
necessarily to be construed as preferred or advantageous over other aspects or
designs, nor is it meant to preclude equivalent exemplary structures and techniques
known to those of ordinary skill in the art. Furthermore, to the extent that the terms
“includes,” “has,” “contains,” and other similar words are used in either the detailed
description or the claims, such terms are intended to be inclusive—in a manner
similar to the term “comprising” as an open transition word—without precluding
any additional or other elements.
[0035] As used herein, a “processing unit” or “processor” or “operating processor”
includes one or more processors, wherein processor refers to any logic circuitry for
processing instructions. A processor may be a general-purpose processor, a special
purpose processor, a conventional processor, a digital signal processor, a plurality
of microprocessors, one or more microprocessors in association with a (Digital
Signal Processing) DSP core, a controller, a microcontroller, Application Specific
Integrated Circuits, Field Programmable Gate Array circuits, any other type of
integrated circuits, etc. The processor may perform signal coding, data processing,
input/output processing, and/or any other functionality that enables the working of
the system according to the present disclosure. More specifically, the processor or
processing unit is a hardware processor.
[0036] As used herein, “a user equipment”, “a user device”, “a smart-user-device”,
“a smart-device”, “an electronic device”, “a mobile device”, “a handheld device”,
“a wireless communication device”, “a mobile communication device”, “a
communication device” may be any electrical, electronic and/or computing device
or equipment, capable of implementing the features of the present disclosure. The
user equipment/device may include, but is not limited to, a mobile phone, smart
phone, laptop, a general-purpose computer, desktop, personal digital assistant,
tablet computer, wearable device or any other computing device which is capable
of implementing the features of the present disclosure. Also, the user device may
contain at least one input means configured to receive an input from at least one of
a transceiver unit, a processing unit, a storage unit, a detection unit and any other
such unit(s) which are required to implement the features of the present disclosure.
[0037] As used herein, “storage unit” or “memory unit” refers to a machine or
computer-readable medium including any mechanism for storing information in a
form readable by a computer or similar machine. For example, a computer-readable
medium includes read-only memory (“ROM”), random access memory (“RAM”),
magnetic disk storage media, optical storage media, flash memory devices or other
types of machine-accessible storage media. The storage unit stores at least the data
that may be required by one or more units of the system to perform their respective
functions.
[0038] As used herein “interface” or “user interface” refers to a shared boundary
across which two or more separate components of a system exchange information
or data. The interface may also refer to a set of rules or protocols that define
communication or interaction of one or more modules or one or more units with
each other, which also includes the methods, functions, or procedures that may be
called.
[0039] All modules, units, components used herein, unless explicitly excluded
herein, may be software modules or hardware processors, the processors being a
general-purpose processor, a special purpose processor, a conventional processor, a
digital signal processor (DSP), a plurality of microprocessors, one or more
microprocessors in association with a DSP core, a controller, a microcontroller,
Application Specific Integrated Circuits (ASIC), Field Programmable Gate Array
circuits (FPGA), any other type of integrated circuits, etc.
[0040] As used herein the transceiver unit includes at least one receiver and at least
one transmitter configured respectively for receiving and transmitting data, signals,
information or a combination thereof between units/components within the system
and/or connected with the system.
[0041] As used herein, Physical and Virtual Inventory Manager (PVIM) module
maintains the inventory and its resources. After getting a request to reserve
resources from the PEEGN, the PVIM adds up the resources consumed by a particular
network function as used resources and removes them from the free resources. Further,
the PVIM updates this in the NoSQL database.
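By way of a non-limiting illustration, the bookkeeping described above may be sketched in Python as follows; the class, field and method names are hypothetical stand-ins and do not reflect the actual PVIM interface or its NoSQL schema.

    # Hypothetical sketch of PVIM reservation bookkeeping (illustrative only).
    from dataclasses import dataclass

    @dataclass
    class Inventory:
        free_vcpus: int
        used_vcpus: int

        def reserve(self, vcpus: int) -> bool:
            # Move the requested vCPUs from the free pool to the used pool.
            if vcpus > self.free_vcpus:
                return False  # not enough free resources to reserve
            self.free_vcpus -= vcpus
            self.used_vcpus += vcpus
            # A real PVIM would persist the updated counts to its NoSQL database here.
            return True

    inv = Inventory(free_vcpus=64, used_vcpus=0)
    assert inv.reserve(8) and inv.used_vcpus == 8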
[0042] As used herein, Container Network Function (CNF) Life Cycle Manager
(CNF-LM) may capture the details of vendors, CNFs, and Container Network
Function Components (CNFCs) via create, read, and update APIs exposed by the
service itself. The captured details are stored in a database and can be further used
by the SA service. CNF-LM may create CNF or individual CNFC instances. CNF-LM
may scale-out the CNFs or individual CNFCs.
[0043] As used herein, Policy Execution Engine (PEEGN) provides a network
function virtualisation (NFV) software defined network (SDN) platform
functionality to support dynamic requirements of resource management and
network service orchestration in the virtualized network. Further, the PEEGN is
involved during CNF instantiation flow to check for CNF policy and to reserve
resources required to instantiate CNF at PVIM. The PEEGN supports scaling policy
for CNFC.
[0044] As used herein, Capacity Manager Platform (CMP) creates a task to monitor
the performance metrics data received for that VNF, VNFC and CNFC. Whenever
there is a threshold breach, the CMP sends a trigger to NFV Platform and Decision
Analytics (NPDA).
[0045] As discussed in the background section, the current known solutions have
several shortcomings. For implementing proper resource allocation, there are
some challenges, such as excessive provisioning of resources, insufficient
provisioning of resources, resource failures, resource mismanagement,
performance degradation, conflict while reservation and allocation of resources,
unavailability of Policy Execution Engine Service, time consumed in reservation
and allocation of VNF/VNFC/CNFC/CNF resources and cost increment. The
present disclosure aims to overcome the above-mentioned and other existing
problems in this field of technology by providing a method and system for automatic
scaling of one or more nodes. More particularly, the present disclosure provides a
solution to apply automatic scale constraints based on policies that are applicable
for VNF/VNFC/CNF/CNFC for automatic scaling of resources. Further, the
present disclosure provides a solution to apply automatic scale constraints based on
affinity, anti-affinity, dependent and deployment flavor. Further, the present
disclosure provides a solution that leads to zero data loss policies while
VNF/VNFC/CNF/CNFC resources are scaling up. Furthermore, the present
disclosure provides a solution that supports event driven scaling.
[0046] Hereinafter, exemplary embodiments of the present disclosure will be
described with reference to the accompanying drawings.
[0047] Referring to FIG. 1 an exemplary block diagram representation of a
management and orchestration (MANO) architecture/ platform [100], in
accordance with exemplary implementation of the present disclosure is illustrated.
The MANO architecture [100] may be developed for managing telecom cloud
infrastructure automatically, managing design or deployment design, managing
instantiation of network node(s)/ service(s) etc. The MANO architecture [100]
deploys the network node(s) in the form of Virtual Network Function (VNF) and
Cloud-native/ Container Network Function (CNF). The system as provided by the
present disclosure may comprise one or more components of the MANO
architecture [100]. The MANO architecture [100] may be used to auto-instantiate
the VNFs into the corresponding environment of the present disclosure so that it
could help in onboarding other vendor(s) CNFs and VNFs to the platform.
[0048] As shown in FIG. 1, the MANO architecture [100] comprises a user
interface layer [102], a Network Function Virtualization (NFV) and Software
Defined Network (SDN) Design Function module [104], a Platform Foundation
Services module [106], a Platform Core Services module [108] and a Platform
Resource Adapters and Utilities module [112]. All the components are assumed to
be connected to each other in a manner as obvious to the person skilled in the art
for implementing features of the present disclosure.
[0049] The NFV and SDN design function module [104] comprises a VNF
lifecycle manager (compute) [1042], a VNF catalogue [1044], a network services
catalogue [1046], a network slicing and service chaining manager [1048], a physical
and virtual inventory manager [1050] and a CNF lifecycle manager [1052]. The VNF
lifecycle manager (compute) [1042] may be responsible for deciding on which
server of the communication network the microservice will be instantiated.
The VNF lifecycle manager (compute) [1042] may manage the overall flow of
incoming/ outgoing requests during interaction with the user. The VNF lifecycle
manager (compute) [1042] may be responsible for determining which sequence to
be followed for executing the process. For example, in an AMF network function of the
communication network (such as a 5G network), the sequence for execution of
processes P1 and P2, etc. The VNF catalogue [1044] stores the metadata of all the
VNFs (also CNFs in some cases). The network services catalogue [1046] stores the
information of the services that need to be run. The network slicing and service
chaining manager [1048] manages the slicing (an ordered and connected sequence
of network service/ network functions (NFs)) that must be applied to a specific
networked data packet. The physical and virtual inventory manager (PVIM) [1050]
stores the logical and physical inventory of the VNFs. Just like the VNF lifecycle
manager (compute) [1042], the CNF lifecycle manager [1052] may be used for the
CNFs lifecycle management.
[0050] The platform foundation services module [106] comprises a microservices
elastic load balancer [1062], an identity & access manager [1064], a command line
interface (CLI) [1066], a central logging manager [1068], and an event routing
manager [1070]. The microservices elastic load balancer [1062] may be used for
maintaining the load balancing of the request for the services. The identity and
access manager [1064] may be used for logging purposes. The command line
interface (CLI) [1066] may be used to provide commands to execute certain
processes which require changes during the run time. The central logging manager
[1068] may be responsible for keeping the logs of every service. These logs are
generated by the MANO platform [100]. These logs are used for debugging
purposes. The event routing manager [1070] may be responsible for routing the
events i.e., the application programming interface (API) hits to the corresponding
25 services.
[0051] The platform core services module [108] comprises an NFV infrastructure
monitoring manager [1082], an assure manager [1084], a performance manager
[1086], a policy execution engine [1088], a capacity monitoring manager (CMM)
30 [1090], a release management (mgmt.) repository [1092], a configuration manager
& GCT [1094], an NFV platform decision analytics [1096], a platform NoSQL DB
[1098]; a Platform Schedulers and Cron Jobs (PSC) service [1100], a VNF backup
& Restore manager [1102], a microservice auditor [1104], and a platform
operations, administration and maintenance manager [1106]. The NFV
infrastructure monitoring manager [1082] monitors the infrastructure part of the
NFs. For example, any metrics such as CPU utilization by the VNF. The assure manager
[1084] may be responsible for supervising the alarms the vendor may be generating.
The performance manager [1086] may be responsible for managing the
performance counters. The Policy Execution Engine (PEEGN) [1088] may be
responsible for managing all of the policies. The capacity monitoring manager
(CMM) [1090] may be responsible for sending the request to the PEEGN [1088].
The release management (mgmt.) repository (RMR) [1092] may be responsible for
managing the releases and the images of all of the vendor's network nodes. The
configuration manager & GCT [1094] manages the configuration and GCT of all
the vendors. The NFV Platform Decision Analytics (NPDA) [1096] helps in
deciding the priority of using the network resources. It may be further noted that
the policy execution engine (PEEGN) [1088], the configuration manager & GCT
[1094] and the NPDA [1096] work together. The platform NoSQL DB [1098] may
be a database for storing all the inventory (both physical and logical) as well as the
metadata of the VNFs and CNF. The platform schedulers and cron jobs (PSC)
service [1100] schedules tasks such as, but not limited to, triggering of an event,
traversing the network graph, etc. The VNF backup & restore manager [1102] takes
backup of the images, binaries of the VNFs and the CNFs and produces those
backups on demand in case of server failure. The microservice auditor [1104] audits
the microservices. For example, in a hypothetical case, instances not being instantiated
by the MANO architecture [100] may be using the network resources. In such cases,
the microservice auditor [1104] audits and informs the same so that resources can
be released for services running in the MANO architecture [100]. The audit assures
that the services only run on the MANO platform [100]. The platform operations,
administration and maintenance manager [1106] may be used for newer instances
that are spawning.
[0052] The platform resource adapters and utilities module [112] further comprises
a platform external API adapter and gateway [1122], a generic decoder and indexer
(XML, CSV, JSON) [1124], a service adapter [1126], an API adapter [1128], and a
NFV gateway [1130]. The platform external API adapter and gateway [1122] may
be responsible for handling the external services (to the MANO platform [100]) that
require the network resources. The generic decoder and indexer (XML, CSV,
JSON) [1124] directly gets the data of the vendor system in the XML, CSV, or JSON
format. The service adapter [1126] may be the interface provided between the
telecom cloud and the MANO architecture [100] for communication. The API
adapter [1128] may be used to connect with the virtual machines (VMs). The NFV
gateway [1130] may be responsible for providing the path to each service going
to/incoming from the MANO architecture [100].
[0053] The service adapter (SA) [1126] is a microservices-based system designed
to deploy and manage Container Network Functions (CNFs) and their components
(CNFCs) across nodes. The SA [1126] offers REST endpoints for key operations,
including uploading container images to a registry, terminating CNFC instances,
and creating volumes and networks. CNFs, which are network functions packaged
as containers, may consist of multiple CNFCs. The SA [1126] facilitates the
deployment, configuration, and management of these components by interacting
with APIs, ensuring proper setup and scalability within a containerized environment.
This approach provides a modular and flexible framework for handling network
functions in a virtualized network setup.
[0054] Referring to FIG. 2, an exemplary block diagram of a computing device
[200] (also referred herein as a computer system [200]) upon which the features of
the present disclosure may be implemented in accordance with exemplary
implementation of the present disclosure, is illustrated. In an implementation, the
computing device [200] may also implement a method for performing one or more
corrective actions on one or more Network Functions (NFs) utilising the system. In
another implementation, the computing device [200] itself implements the method
for performing one or more corrective actions on one or more Network Functions
(NFs) using one or more units configured within the computing device [200],
wherein said one or more units are capable of implementing the features as
disclosed in the present disclosure.
[0055] The computing device [200] may include a bus [202] or other
communication mechanism for communicating information, and a hardware
processor [204] coupled with the bus [202] for processing information. The
hardware processor [204] may be, for example, a general-purpose microprocessor.
The computing device [200] may also include a main memory [206], such as a
random-access memory (RAM), or other dynamic storage device, coupled to the
bus [202] for storing information and instructions to be executed by the processor
[204]. The main memory [206] also may be used for storing temporary variables or
other intermediate information during execution of the instructions to be executed
by the processor [204]. Such instructions, when stored in non-transitory storage
media accessible to the processor [204], render the computing device [200] into a
special-purpose machine that is customized to perform the operations specified in
the instructions. The computing device [200] further includes a read only memory
(ROM) [208] or other static storage device coupled to the bus [202] for storing static
information and instructions for the processor [204].
[0056] A storage device [210], such as a magnetic disk, optical disk, or solid-state
drive is provided and coupled to the bus [202] for storing information and
instructions. The computing device [200] may be coupled via the bus [202] to a
display [212], such as a cathode ray tube (CRT), Liquid Crystal Display (LCD),
Light Emitting Diode (LED) display, Organic LED (OLED) display, etc. for
displaying information to a computer user. An input device [214], including
alphanumeric and other keys, touch screen input means, etc. may be coupled to the
bus [202] for communicating information and command selections to the processor
[204]. Another type of user input device may be a cursor controller [216], such as a
mouse, a trackball, or cursor direction keys, for communicating direction
information and command selections to the processor [204], and for controlling
cursor movement on the display [212]. The input device typically has two degrees
of freedom in two axes, a first axis (e.g., x) and a second axis (e.g., y), that allow
the device to specify positions in a plane.
[0057] The computing device [200] may implement the techniques described
herein using customized hard-wired logic, one or more ASICs or FPGAs, firmware
and/or program logic which in combination with the computing device [200] causes
or programs the computing device [200] to be a special-purpose machine.
According to one implementation, the techniques herein are performed by the
computing device [200] in response to the processor [204] executing one or more
sequences of one or more instructions contained in the main memory [206]. Such
instructions may be read into the main memory [206] from another storage medium,
such as the storage device [210]. Execution of the sequences of instructions
contained in the main memory [206] causes the processor [204] to perform the
process steps described herein. In alternative implementations of the present
disclosure, hard-wired circuitry may be used in place of or in combination with
software instructions.
[0058] The computing device [200] also may include a communication interface
[218] coupled to the bus [202]. The communication interface [218] provides a two-way data communication coupling to a network link [220] that is connected to a
local network [222]. For example, the communication interface [218] may be an
integrated services digital network (ISDN) card, cable modem, satellite modem, or
a modem to provide a data communication connection to a corresponding type of
telephone line. As another example, the communication interface [218] may be a
local area network (LAN) card to provide a data communication connection to a
compatible LAN. Wireless links may also be implemented. In any such
implementation, the communication interface [218] sends and receives electrical,
electromagnetic or optical signals that carry digital data streams representing
various types of information.
[0059] The computing device [200] can send messages and receive data, including
program code, through the network(s), the network link [220] and the
communication interface [218]. In the Internet example, a server [230] might
transmit a requested code for an application program through the Internet [228], the
ISP [226], the local network [222], a host [224] and the communication interface
[218]. The received code may be executed by the processor [204] as it is received,
and/or stored in the storage device [210], or other non-volatile storage for later
execution.
[0060] Referring to FIG. 3 an exemplary block diagram of a system for automatic
scaling of one or more nodes, in accordance with exemplary implementations of the
present disclosure is illustrated. The system comprises at least one policy execution
engine (PEEGN) [1088] and at least one database [308]. Further, the PEEGN [1088]
comprises at least one transceiver unit [302], at least one processing unit [304] and
at least one storage unit [306]. Also, all of the components/ units of the system [300]
are assumed to be connected to each other unless otherwise indicated below. Also,
in FIG. 3 only a few units are shown, however, the system [300] may comprise
multiple such units or the system [300] may comprise any such numbers of said
units, as required to implement the features of the present disclosure. In an
implementation, the system [300] may reside in a server or a network entity. In yet
another implementation, the system [300] may reside partly in the server/ network
entity.
[0061] The system [300] is configured for automatic scaling of one or more nodes,
with the help of the interconnection between the components/units of the system
[300].
[0062] In operation, the transceiver unit [302] may receive at the policy execution
engine (PEEGN) [1088], from a Network Function Virtualization Platform
Decision and Analytics (NPDA) [1096], a request for executing an automatic
scaling policy for the one or more nodes. In an implementation, the one or more
nodes may comprise at least one of one or more virtual network functions (VNFs),
one or more virtual network function components (VNFCs), one or more container
network functions (CNFs), and one or more container network function components
(CNFCs). The system [300] provides an intelligent scaling framework which helps
to scale the VNFC/CNFC instances as per the traffic requirements. Whenever a
breach event is detected related to VNF/CNF instances, the NPDA [1096] evaluates
a policy relating to the breach event. It may be noted that the policy and the set of
data related to historical instances of breach event of the VNF/CNF instances may
be retrieved by the NPDA [1096]. Based on the retrieved policy and the set of data,
the NPDA [1096] evaluates a hysteresis for the breach event. Further, the NPDA
[1096] based on the scaling policy of the VNF/VNFC/CNF/CNFC for which the
threshold breach event is detected, executes a hysteresis. If the hysteresis meets the
criteria, the NPDA [1096] requests the PEEGN [1088] to execute a scaling policy.
Further, in an example, the attributes associated with a policy as defined by a user
comprise policyId, policyVersion, instanceId.
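By way of a non-limiting illustration, the hysteresis evaluation described above may be sketched in Python as follows; the window/threshold logic and all identifiers are assumptions for demonstration, not the actual NPDA design.

    # Hypothetical hysteresis check (illustrative only): request scaling only if
    # enough breaches occurred recently, so one transient spike does not trigger it.
    from time import time

    def hysteresis_met(breach_times, window_s=300, min_breaches=3):
        now = time()
        recent = [t for t in breach_times if now - t <= window_s]
        return len(recent) >= min_breaches

    # Policy attributes as listed in the description above.
    policy_request = {"policyId": "pol-42", "policyVersion": "1.0", "instanceId": "vnf-7"}

    if hysteresis_met([time() - 10, time() - 60, time() - 120]):
        print("NPDA -> PEEGN: execute scaling policy", policy_request)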
[0063] As would be understood, the PEEGN [1088] is a system that may create, manage and enforce the policies and rules to regulate the behaviour and the operations of the network function. Further, the PEEGN [1088] may ensure that the network function and its components may function and operate as per the predefined policies and rules. The PEEGN [1088] calculates the required resources for any VNF/CNF, does a quota check, and updates the PVIM [1050] based on affinity/anti-affinity and other policies which are defined at the PEEGN [1088].
[0064] In an implementation, the request received by the transceiver unit [302] at
the PEEGN [1088] may be an INVOKE_POLICY event. It is to be noted that the
events are generated based on the predefined policies and rules. The
INVOKE_POLICY event may be a trigger that may initiate any pre-defined policy
to be executed for the one or more nodes. Considering an example, traffic load on
a node increases and crosses a predefined threshold. The INVOKE_POLICY event
may be triggered, and more resources may be allocated to handle the extra load.
Thereby, ensuring the optimal performance of the node. In an example, the
INVOKE_POLICY event comprises attributes such as: policy Id, VIM Id, VNF Id,
VNF Version, VNF Instance Id, host Id, policy Action (e.g., VNF-scale-out/VNFC-scale-out/healing/manual-VNFC-scale-out). Here, VIM Id refers to the identifier of a
Virtualized Infrastructure Manager (VIM) instance on which the
VNF/VNFC/CNF/CNFC is to be spawned for scale-out or from which the
VNF/VNFC/CNF/CNFC instance needs to be removed for scale-in.
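By way of a non-limiting illustration, an INVOKE_POLICY payload built from the attributes listed above may look as follows in Python; the field spellings and values are hypothetical examples, not a normative schema.

    # Hypothetical INVOKE_POLICY event payload (illustrative only).
    invoke_policy_event = {
        "event": "INVOKE_POLICY",
        "policyId": "pol-42",
        "vimId": "vim-01",                 # VIM on which the node is spawned or removed
        "vnfId": "vnf-amf",
        "vnfVersion": "2.1",
        "vnfInstanceId": "vnf-amf-0007",
        "hostId": "host-12",
        "policyAction": "VNFC-scale-out",  # or VNF-scale-out / healing / manual-VNFC-scale-out
    }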
[0065] Further, as would be understood, the automatic scaling policy may refer to
rules and policies that may automatically and/or dynamically allocate or de-allocate
resources based on the demand to ensure optimal performance of the one or more
nodes. Further, automatic-scaling of the one or more nodes comprises at least one
of scale-in and scale-out of the one or more nodes. As would be understood, the
scale-in may refer to a process to reduce the number of active instances and the
resources allocated to the network function, in response to the decreased demand
and usage of the resources. Whereas the scale-out may refer to the process where
new instances are created to handle the workload on the existing instances as
demand of the resources may increase.
[0066] Continuing further, the transceiver unit [302] may fetch at the PEEGN
[1088], a set of data relating to the one or more nodes. Further, to fetch the set of
data relating to the one or more nodes, the transceiver unit [302] may transmit from
the PEEGN [1088] a request to one or more node components associated with the
one or more nodes to fetch the set of data related to the one or more nodes.
[0067] In an implementation, the request to fetch the set of data related to the one
or more nodes may be a GET_VNF_DETAILS event. The PEEGN [1088] may send
the GET_VNF_DETAILS to the one or more node catalogues to fetch details
related to the one or more nodes and the one or more node components. In an
implementation, the one or more node catalogues are VNF catalogues or CNF
catalogues. Also, the set of data may include, but is not limited to, performance
status, workload, capacity and resource consumption. It is to be noted that the above-mentioned set of data is exemplary and in no manner limits the scope of the present
disclosure. Further, the set of data may include any other data obvious to the person
skilled in the art to implement the solution of the present disclosure. Furthermore, a
storage unit [306] may save at the PEEGN [1088], the set of data related to the one
or more nodes in a database [308]. Further, in an example, the
GET_VNF_DETAILS event is associated with attributes such as VNF Id, VNF
Version, VNF Description, Product Id and VNFC/CNFC data.
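By way of a non-limiting illustration, the GET_VNF_DETAILS exchange described above may be sketched in Python as follows; the catalogue and database objects are stand-ins, not the actual PEEGN interfaces.

    # Hypothetical fetch-and-save of node details (illustrative only).
    CATALOGUE = {  # stand-in for a VNF/CNF catalogue
        "vnf-amf": {"vnfVersion": "2.1", "vnfDescription": "AMF", "productId": "p-9",
                    "vnfcData": [{"vnfcId": "amf-worker", "replicas": 3}]},
    }
    DATABASE = {}  # stand-in for the database [308] at the PEEGN

    def get_vnf_details(vnf_id: str) -> dict:
        details = {"vnfId": vnf_id, **CATALOGUE[vnf_id]}  # fetch from the catalogue
        DATABASE[vnf_id] = details                        # save at the PEEGN
        return details

    print(get_vnf_details("vnf-amf")["vnfcData"])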
[0068] Continuing further, the transceiver unit [302] may send from the PEEGN
[1088] a request to get one or more current used resources by each of the one or
more nodes, the allocated resource quota for each of the one or more nodes and one
or more available resources for the one or more nodes, to a physical and virtual
inventory manager (PVIM) [1050]. The PVIM [1050] maintains the complete
inventory including the physical and virtual resources as well as the VNFs/CNFs
instantiated by the platform. The VNF/CNF inventory is created as and when they
are instantiated. It maintains the total resources and their details which are reserved for
the VNF/CNF during the instantiation. When the PVIM [1050] detects the addition of any
new physical resources added to the VIMs, the added physical resources are
translated to virtual resources and are added to the free resource pool maintained at
the PVIM [1050].
[0069] In an implementation, the PEEGN [1088] may send a
PROVIDE_VIM_AZ_HA_DETAIL to PVIM [1050] to get the one or more current
used resources by each of the one or more nodes, the allocated resource quota for
each of the one or more nodes and the one or more available resources for the one
or more nodes. As would be understood, the one or more current used resources by
each of the one or more nodes may refer to the resources currently used by the one
or more nodes from the allocated resources. Further, the allocated resource quota for
each of the one or more nodes may refer to the predefined limit of the resources
allocated to the one or more nodes. After successful instantiation of a VNF or a
CNF instance, PEEGN [1088] allocates resources for the VNF/CNF components
and asks the inventory to reserve the resources for the same. The resources reserved at
the time of instantiation define the allocated resource quota for each VNF and CNF
instance. Furthermore, the one or more available resources for the one or more
nodes may represent the amount of available resources in the network for the one
or more node components.
[0070] Further, in an implementation, the PEEGN [1088] sends
PROVIDE_VIM_AZ_HA_DETAIL request to PVIM [1050] to provide available
PVIM details against each Availability Zone (AZ) and Host Aggregate (HA) and
the used & free resources in each HA. The Availability zones (AZ) are end user
visible logical abstractions for partitioning of the cloud services. The logical
partition comprises block storage, compute services and network services. The
logical partition requires a particular host to be present in an Availability Zone. In
other words, AZ are isolated or separated data centres located within specific
regions in which cloud services originate and operate. Moreover, AZ refers to a
specific or an isolated location in a data centre or in a cloud environment. The
isolated location ensures that in case of failure of one zone, services in another zone
may remain functional or operational. In an implementation, the Host Aggregate
(HA) refers to an aggregate or group of physical hosts in a virtualised environment.
Further, HA are used to define where specific virtual network functions (VNFs) can
be deployed. HA can be created based on the hardware profile of the physical hosts.
Further, each Availability zone may have an association of multiple host aggregates,
which in turn may have a list of hosts associated with it. In an example, the attributes
of the PROVIDE_VIM_AZ_HA_DETAIL event comprise VIM Id, VNF Id and
VNF Version. Here, VIM Id refers to the identifier of a Virtualized Infrastructure
Manager (VIM) instance on which the VNF/VNFC/CNF/CNFC is to be spawned
for scale-out or from which the VNF/VNFC/CNF/CNFC instance needs to be
removed for scale-in.
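By way of a non-limiting illustration, a PROVIDE_VIM_AZ_HA_DETAIL request and the per-AZ/per-HA response described above may be shaped as follows in Python; the exact field names are hypothetical.

    # Hypothetical request/response shapes (illustrative only).
    request = {"vimId": "vim-01", "vnfId": "vnf-amf", "vnfVersion": "2.1"}

    response = {
        "vimId": "vim-01",
        "availabilityZones": [
            {"az": "az-1",
             "hostAggregates": [
                 {"ha": "ha-compute-a",
                  "used": {"vcpus": 48, "ramMb": 98304, "diskGb": 900},
                  "free": {"vcpus": 16, "ramMb": 32768, "diskGb": 300}},
             ]},
        ],
    }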
[0071] Continuing further, the processing unit [304] may analyse at the PEEGN
[1088], a demand for one or more resources, for automatic scaling of the one or
more nodes, based on the one or more current used resources by each of the one or
more nodes, the allocated resource quota for each of the one or more nodes, the one
or more available resources for the one or more nodes, a set of automatic scaling
constraints data, and the automatic scaling policy. The automatic scale constraints
are the total virtual resources that a particular VNF/VNFC/CNF/CNFC can consume
considering its instantiations as well as automatic scaling requirements. The system
[300] ensures that the VNF/VNFC/CNF/CNFC do not consume more than the
resources specified under any circumstances. This is to ensure that no
VNF/VNFC/CNF/CNFC instance is able to hog the complete network resources.
Further, in an implementation, the set of automatic scaling constraints data
comprises at least one of a total number of CPUs, a virtual memory size, and a disk
size which can be allocated to the one or more nodes to scale.
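By way of a non-limiting illustration, the demand analysis described above may be sketched in Python as follows; the function and its inputs are assumptions for demonstration, not the actual PEEGN logic.

    # Hypothetical demand check (illustrative only): a scale-out is permitted only
    # if the node stays within its automatic scaling constraints and quota, and
    # the VIM has enough free capacity for one more instance of the flavor.
    def scale_out_allowed(used, quota, free, constraints, flavor):
        after = {k: used[k] + flavor[k] for k in flavor}  # usage after scale-out
        within_constraints = all(after[k] <= constraints[k] for k in flavor)
        within_quota = all(after[k] <= quota[k] for k in flavor)
        has_capacity = all(flavor[k] <= free[k] for k in flavor)
        return within_constraints and within_quota and has_capacity

    print(scale_out_allowed(
        used={"vcpus": 12, "ramMb": 24576, "diskGb": 120},
        quota={"vcpus": 32, "ramMb": 65536, "diskGb": 320},
        free={"vcpus": 16, "ramMb": 32768, "diskGb": 300},
        constraints={"vcpus": 24, "ramMb": 49152, "diskGb": 240},
        flavor={"vcpus": 4, "ramMb": 8192, "diskGb": 40}))  # True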
[0072] In an implementation, the processing unit [304], after the analysis at the
PEEGN [1088], may use the automatic scaling policy to scale with the required
resources based on one or more affinities, one or more anti-affinities, all
dependents, and deployment flavors. During the onboarding of VNF/CNF, policy
rules specific to VNF/CNF instantiation, VNFC/CNFC scaling, VNF/CNF
healing, VNFC/CNFC dependencies, VNFC/CNFC affinity/anti-affinity and
VNF/CNF termination are created. These rules are persisted by the MANO platform
as shown in FIG. 1. As would be understood, the one or more affinities, the one or
more anti-affinities, all dependents, and the deployment flavors are essential rules
that may manage the placement of resources in a scalable, fault-tolerant and
optimized network environment. Further, the one or more affinities, the one or more
anti-affinities, and all dependents may determine that one or more resources
may run together, one or more resources may not run together, and one or more
resources are dependent on other one or more resources. The deployment flavor
refers to the compute, memory, and storage capacity required by a VNF/CNF
instance. Therefore, based on the one or more affinities, anti-affinities, all dependents, and the
deployment flavors, the automatic scaling policy may be invoked for the optimized
performance of all the resources or the one or more nodes in the network.
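By way of a non-limiting illustration, an affinity/anti-affinity placement check of the kind described above may be sketched in Python as follows; the rule encoding is an assumption for demonstration, not the persisted MANO rule format.

    # Hypothetical placement check (illustrative only).
    def host_ok(candidate_host, placed, affinity, anti_affinity):
        """placed maps already-placed component names to their hosts."""
        for peer in affinity:       # components that must share the host
            if peer in placed and placed[peer] != candidate_host:
                return False
        for peer in anti_affinity:  # components that must not share the host
            if placed.get(peer) == candidate_host:
                return False
        return True

    placed = {"amf-db": "host-1", "amf-worker": "host-2"}
    print(host_ok("host-1", placed, ["amf-db"], ["amf-worker"]))  # True
    print(host_ok("host-2", placed, ["amf-db"], ["amf-worker"]))  # False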
[0073] Continuing further, the transceiver unit [302] may transmit from the PEEGN
[1088], to the PVIM [1050], a request for one of reserving and unreserving of the
one or more resources. After receiving the response from the PVIM and analysing that
there are enough resources for the VNF/VNFC to scale, based on the
scaling/dependent-VNFC/affinity/anti-affinity policies and the deployment flavor,
the PEEGN [1088] shall trigger the PVIM [1050] to reserve the resources in its
inventory using the RESERVE_RESOURCES_IN_VIM_AZ_HA event.
[0074] As would be understood, the request for reserving the one or more
resources, may be a request to allocate the one or more resources to the one or more
nodes to ensure the performance, and the availability of the one or more nodes.
Whereas the request for unreserving the one or more resources, may be a request to
de-allocate the one or more resources to avoid flapping of the resources or to
migrate the one or more resources from one instance (or site) to another instance.
PVIM [1050] reserves resources or unreserves resources on that VIM which was
selected by PEEGN [1088] and sends response to PEEGN [1088]. Further, the
response from the PVIM [1050] on the request for one of the reserving and the
unreserving of the one or more resources comprises one or more tokens for each of
the one or more nodes. As would be understood, the token is a header that may be
used to validate the identity of the nodes. Furthermore, the response may have a token
that may be validated; for example, tk55f9a1-19a6-adf3-8514-b0b150asdfd0 (UUID) is
an example of a token, which is a universally unique identifier.
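By way of a non-limiting illustration, the reservation exchange described above may be sketched in Python as follows; the payload fields and the stub PVIM are hypothetical, while the token format echoes the UUID-style example given above.

    # Hypothetical RESERVE_RESOURCES_IN_VIM_AZ_HA exchange (illustrative only).
    import uuid

    def pvim_reserve(request: dict) -> dict:
        # Reserve (or unreserve) on the selected VIM/AZ/HA and return one token
        # per node; the token later validates the scaling request.
        token = "tk" + str(uuid.uuid4())
        return {"status": "RESERVED", "tokens": {request["vnfInstanceId"]: token}}

    response = pvim_reserve({
        "event": "RESERVE_RESOURCES_IN_VIM_AZ_HA",
        "vimId": "vim-01", "az": "az-1", "ha": "ha-compute-a",
        "vnfInstanceId": "vnf-amf-0007",
        "resources": {"vcpus": 4, "ramMb": 8192, "diskGb": 40},
    })
    print(response["tokens"])  # e.g. {'vnf-amf-0007': 'tk55f9a1-...'}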
[0075] Continuing further, the transceiver unit [302], may trigger from the PEEGN
[1088], the automatic scaling request to a node manager, based on a response from
the PVIM [1050] on the request for one of the reserving and the unreserving of the
one or more resources.
[0076] In an implementation, the node manager may be a VNF/CNF life cycle
manager (VNF-LM [1042]/CNF-LM [1052]). Further, the VNF-LM [1042] may be
a microservice responsible for lifecycle management of VNF instances. The VNF-LM [1042] may instantiate/terminate/scale resources. Furthermore, the CNF-LM
[1052] may be responsible for creating a CNF or individual CNFC instances. Also,
it may be responsible for healing and scaling out CNFs or individual CNFCs.
[0077] In an implementation, the trigger, by the transceiver unit [302], from the
PEEGN [1088], for the automatic scaling request to the node manager comprises
the one or more tokens for each of the one or more nodes. The PEEGN [1088], upon
successful receipt of the response from the PVIM [1050], may send the one or more
tokens, received from the PVIM [1050], to the node manager (i.e., VNF-LM
[1042]/CNF-LM [1052]) to automatically scale the one or more nodes. Further, the
trigger may be an event, such as
TRIGGER_VNF_SCALING/TRIGGER_VNFC_SCALING event. The trigger
events will comprise the tokens received from the PVIM [1050] in the response.
These tokens are passed to the VNF-LM [1042]/CNF-LM [1052] to validate the
request.
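By way of a non-limiting illustration, the token-validated trigger described above may be sketched in Python as follows; the event fields and the stub node manager are assumptions for demonstration only.

    # Hypothetical TRIGGER_VNFC_SCALING dispatch (illustrative only).
    def node_manager_scale(event: dict, known_tokens: set) -> str:
        # The VNF-LM/CNF-LM validates the PVIM token before acting on the trigger.
        if event["token"] not in known_tokens:
            return "REJECTED: invalid token"
        return "ACK: scaling " + event["vnfInstanceId"] + " (" + event["action"] + ")"

    event = {
        "event": "TRIGGER_VNFC_SCALING",
        "vnfInstanceId": "vnf-amf-0007",
        "action": "scale-out",
        "token": "tk55f9a1-19a6-adf3-8514-b0b150asdfd0",  # token from the PVIM response
    }
    print(node_manager_scale(event, known_tokens={event["token"]}))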
[0078] Continuing further, the transceiver unit [302] may receive at the PEEGN
[1088], an acknowledgement response from the node manager, and the transceiver
unit [302] may transmit from the PEEGN [1088], a response to the NPDA [1096],
of the automatic scaling of the one or more nodes.
[0079] Referring to FIG. 4, an exemplary flow diagram of a method for automatic scaling of one or more nodes, in accordance with exemplary implementations of the present disclosure, is illustrated. In an implementation, the method [400] is performed by the system [300]. Also, as shown in FIG. 4, the method [400] initiates at step [402].
[0080] At step [404], the method [400] comprises receiving, by a transceiver unit [302] at a policy execution engine (PEEGN) [1088], from a Network Function Virtualization platform decision and analytics (NPDA) [1096], a request for executing an automatic scaling policy for the one or more nodes. In an implementation, the one or more nodes may comprise at least one of one or more virtual network functions (VNFs), one or more virtual network function components (VNFCs), one or more container network functions (CNFs), and one or more container network function components (CNFCs). The system [300] provides an intelligent scaling framework which helps to scale the VNFC/CNFC instances as per the traffic requirements. Whenever a breach event is detected relating to VNF/CNF instances, the NPDA [1096] evaluates a policy relating to the breach event. It may be noted that the policy and the set of data related to historical instances of the breach event of the VNF/CNF instances may be retrieved by the NPDA [1096]. Based on the retrieved policy and the set of data, the NPDA [1096] evaluates a hysteresis for the breach event. Further, the NPDA [1096], based on the scaling policy of the VNF/VNFC/CNF/CNFC for which the threshold breach event is detected, executes a hysteresis check. If the hysteresis meets the criteria, the NPDA [1096] requests the PEEGN [1088] to execute a scaling policy.
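By way of a non-limiting illustration, one plausible hysteresis check is sketched below. The disclosure does not specify the exact criteria, so the sliding-window form, the window length, and the breach count are assumptions for illustration only.

def hysteresis_met(breach_timestamps, now, window_seconds=300, min_breaches=3):
    # Count historical breach events inside the sliding window; a single
    # transient spike does not trigger scaling, repeated breaches do.
    recent = [t for t in breach_timestamps if now - t <= window_seconds]
    return len(recent) >= min_breaches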
[0081] As would be understood, the PEEGN [1088] is a system that may create, manage, and enforce the policies and rules to regulate the behaviour and the operations of the network function. Further, the PEEGN [1088] may ensure that the network function and its components may function and operate as per the predefined policies and rules. The PEEGN [1088] calculates the required resources for any VNF/CNF, does a quota check, and updates the PVIM [1050] based on the affinity/anti-affinity and other policies which are defined at the PEEGN [1088].
[0082] In an implementation, the request received by the transceiver unit [302] at the PEEGN [1088] may be an INVOKE_POLICY event. It is to be noted that the events are generated based on the predefined policies and rules. The INVOKE_POLICY event may be a trigger that may initiate any pre-defined policy to be executed for the one or more nodes. Consider an example where the traffic load on a node increases and crosses a predefined threshold. The INVOKE_POLICY event may be triggered, and more resources may be allocated to handle the extra load, thereby ensuring the optimal performance of the node.
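By way of a non-limiting illustration, the threshold-breach trigger may be sketched as follows; the dispatcher and the load metric are assumptions for illustration only.

def maybe_invoke_policy(send_event, node_id, traffic_load, threshold):
    # When the observed load crosses the predefined threshold, the NPDA
    # asks the PEEGN to execute the pre-defined scaling policy.
    if traffic_load > threshold:
        return send_event("PEEGN", {"event": "INVOKE_POLICY", "nodeId": node_id})
    return None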
[0083] Further, as would be understood, the automatic scaling policy may refer to rules and policies that may automatically and/or dynamically allocate or de-allocate resources based on the demand to ensure optimal performance of the one or more nodes. Further, automatic scaling of the one or more nodes comprises at least one of scale-in and scale-out of the one or more nodes. As would be understood, the scale-in may refer to a process to reduce the number of active instances and the resources allocated to the network function, in response to the decreased demand and usage of the resources. Whereas the scale-out may refer to the process where new instances are created to handle the workload on the existing instances as the demand for the resources increases.
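By way of a non-limiting illustration, the choice between scale-in and scale-out may be sketched as follows; the utilisation bounds are assumptions for illustration only, as the disclosure defines the two directions but not specific thresholds.

def scaling_direction(utilisation, low=0.3, high=0.8):
    # scale-out: create new instances to absorb increased demand;
    # scale-in: release instances and resources as demand falls.
    if utilisation > high:
        return "scale-out"
    if utilisation < low:
        return "scale-in"
    return "none"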
[0084] Next, at step [406], the method [400] comprises fetching, by the transceiver
unit [302] at the PEEGN [1088], a set of data relating to the one or more nodes.
Further, to fetch the set of data relating to the one or more nodes, the transceiver
unit [302] may transmit from the PEEGN [1088] a request to one or more node
components associated with the one or more nodes to fetch the set of data related
to the one or more nodes.
[0085] In an implementation, the request to fetch the set of data related to the one or more nodes may be a GET_VNF_DETAILS event. The PEEGN [1088] may send the GET_VNF_DETAILS to the one or more node catalogues to fetch details related to the one or more nodes and the one or more node components. In an implementation, the one or more node catalogues are VNF catalogues or CNF catalogues. Also, the set of data may include, but is not limited to, performance status, workload, capacity, and resource consumption. It is to be noted that the above-mentioned set of data is exemplary and in no manner limits the scope of the present disclosure. Further, the set of data may include any other data obvious to a person skilled in the art to implement the solution of the present disclosure. Furthermore, a storage unit [306] may save at the PEEGN [1088], the set of data related to the one or more nodes in a database [308].
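By way of a non-limiting illustration, fetching the node details and persisting them may be sketched as follows; the catalogue client and the database API are assumptions for illustration only.

def fetch_and_store_node_details(catalogue, database, node_ids):
    for node_id in node_ids:
        # GET_VNF_DETAILS returns, e.g., performance status, workload,
        # capacity, and resource consumption for the node.
        details = catalogue.request("GET_VNF_DETAILS", node_id)
        database.save(node_id, details)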
[0086] Further, at step [408], the method [400] comprises sending, by the
transceiver unit [302], from the PEEGN [1088] a request to get one or more current
used resources by each of the one or more nodes, the allocated resource quota for
each of the one or more nodes, and one or more available resources for the one or
more nodes, to a physical and virtual inventory manager (PVIM) [1050].
[0087] In an implementation, the PEEGN [1088] may send a PROVIDE_VIM_AZ_HA_DETAIL request to the PVIM [1050] to get the one or more current used resources by each of the one or more nodes, the allocated resource quota for each of the one or more nodes, and the one or more available resources for the one or more nodes. As would be understood, the one or more current used resources by each of the one or more nodes may refer to the resources currently used by the one or more nodes from the allocated resources. Further, the allocated resource quota for each of the one or more nodes may refer to the predefined limit of the resources allocated to the one or more nodes. After successful instantiation of a VNF or a CNF instance, the PEEGN [1088] allocates resources for the VNF/CNF components and asks the inventory to reserve the resources for the same. The resources reserved at the time of instantiation define the allocated resource quota for each VNF and CNF instance. Furthermore, the one or more available resources for the one or more nodes may represent the amount of available resources in the network for the one or more node components.
[0088] Further, in an implementation, the PEEGN [1088] sends the PROVIDE_VIM_AZ_HA_DETAIL request to the PVIM [1050] to provide the available PVIM details against each Availability Zone (AZ) and Host Aggregate (HA), and the used and free resources in each HA.
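By way of a non-limiting illustration, one plausible shape for the PROVIDE_VIM_AZ_HA_DETAIL response is sketched below, with used and free resources reported per Availability Zone and Host Aggregate; the field names and values are assumptions for illustration only.

vim_az_ha_detail = {
    "az-1": {
        "ha-compute-a": {"used": {"vcpus": 96, "memoryGb": 350},
                         "free": {"vcpus": 32, "memoryGb": 162}},
        "ha-compute-b": {"used": {"vcpus": 40, "memoryGb": 120},
                         "free": {"vcpus": 88, "memoryGb": 392}},
    },
}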
[0089] Further, at step [410], the method [400] comprises analysing, by a processing unit [304] at the PEEGN [1088], a demand for one or more resources, for automatic scaling of the one or more nodes, based on the one or more current used resources by each of the one or more nodes, the allocated resource quota for each of the one or more nodes, the one or more available resources for the one or more nodes, a set of automatic scaling constraints data, and the automatic scaling policy. The automatic scale constraints are the total virtual resources that a particular VNF/VNFC/CNF/CNFC can consume considering its instantiations as well as its automatic scaling requirements. The system [300] ensures that the VNF/VNFC/CNF/CNFC does not consume more than the specified resources under any circumstances. This is to ensure that no VNF/VNFC/CNF/CNFC instance is able to hog the complete network resources. Further, in an implementation, the set of automatic scaling constraints data comprises at least one of a total number of CPUs, a virtual memory size, and a disk size for the one or more nodes to scale.
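By way of a non-limiting illustration, enforcing the automatic scale constraints may be sketched as follows: a scaling request is admitted only if the instance stays within the ceiling set for it. The field names are assumptions for illustration only; the example ceiling mirrors the values given later in this description (totalNoOfCPU: 120, vMemorySize: 100, diskSize: 23).

def within_scale_constraints(current, requested, constraints):
    # Reject any scale-out that would let this VNF/VNFC/CNF/CNFC instance
    # grow beyond its constraint ceiling and hog network resources.
    for key in ("totalNoOfCPU", "vMemorySize", "diskSize"):
        if current.get(key, 0) + requested.get(key, 0) > constraints[key]:
            return False
    return True

# Example usage with the ceiling from the description:
constraints = {"totalNoOfCPU": 120, "vMemorySize": 100, "diskSize": 23}
ok = within_scale_constraints(
    {"totalNoOfCPU": 100, "vMemorySize": 80, "diskSize": 20},
    {"totalNoOfCPU": 8, "vMemorySize": 16, "diskSize": 2},
    constraints,
)  # ok is True: 108 CPUs, 96 memory, 22 disk all stay within the ceiling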
[0090] In an implementation, the processing unit [304], after the analysis at the PEEGN [1088], may use the automatic scaling policy to scale with the required resources based on one or more affinities, one or more anti-affinities, all dependents, and deployment flavors. During the onboarding of a VNF/CNF, policy rules specific to VNF/CNF instantiation, VNFC/CNFC scaling, VNF/CNF healing, VNFC/CNFC dependencies, VNFC/CNFC affinity/anti-affinity, and VNF/CNF termination are created. These rules are persisted by the platform. As would be understood, the one or more affinities, the one or more anti-affinities, all dependents, and the deployment flavors are essential tools that may manage the placement of resources in a scalable, fault-tolerant, and optimized network environment. Further, the one or more affinities, the one or more anti-affinities, and all dependents may determine that one or more resources may run together, one or more resources may not run together, and one or more resources are dependent on other one or more resources. The deployment flavor refers to the compute, memory, and storage capacity required by a VNF/CNF instance.
Therefore, based on the one or more anti-affinities, all dependents, and the
deployment flavors, the automatic scaling policy may be invoked for the optimized
performance of all the resources or the one or more nodes in the network.
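By way of a non-limiting illustration, an anti-affinity placement filter may be sketched as follows; the inventory representation is an assumption for illustration only.

def candidate_hosts(hosts, placements, anti_affinity_group):
    # placements maps a host name to the set of anti-affinity group names
    # already placed on it; hosts carrying the same group are excluded so
    # that instances which must not run together land on different hosts.
    return [h for h in hosts if anti_affinity_group not in placements.get(h, set())]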
[0091] Further, at step [412], the method [400] comprises transmitting, by the transceiver unit [302] from the PEEGN [1088], to the PVIM [1050], a request for one of reserving and unreserving the one or more resources. As would be understood, the request for reserving the one or more resources may be a request to allocate the one or more resources to the one or more nodes to ensure the performance and the availability of the one or more nodes. Whereas the request for unreserving the one or more resources may be a request to de-allocate the one or more resources to avoid flapping of the resources or to migrate the one or more resources from one instance (or site) to another instance. Further, the response from the PVIM [1050] on the request for one of the reserving and the unreserving of the one or more resources comprises one or more tokens for each of the one or more nodes. As would be understood, the token is a header that may be used to validate the one or more nodes. Furthermore, the response may carry a token that may be validated, for example, tk55f9a1-19a6-adf3-8514-b0b150asdfd0 (a UUID). The trigger events will comprise the tokens received from the PVIM [1050] in the response. These tokens are passed to the VNF-LM [1042]/CNF-LM [1052] to validate the request.
[0092] Furthermore, at step [414], the method [400] comprises triggering, by the
transceiver unit [302], from the PEEGN [1088], the automatic scaling request to a
node manager, based on a response from the PVIM [1050] on the request for one
of the reserving and the unreserving of the one or more resources.
[0093] In an implementation, the node manager may be a VNF/CNF life cycle manager (VNF-LM [1042]/CNF-LM [1052]). Further, the VNF-LM [1042] may be a microservice responsible for lifecycle management of VNF instances. The VNF-LM [1042] may instantiate/terminate/scale resources. Furthermore, the CNF-LM [1052] may be responsible for creating a CNF or individual CNFC instances. Also, it may be responsible for healing and scaling out CNFs or individual CNFCs.
[0094] In an implementation, the trigger, by the transceiver unit [302], from the PEEGN [1088], for the automatic scaling request to the node manager comprises the one or more tokens for each of the one or more nodes. The PEEGN [1088], upon successful receipt of the response from the PVIM [1050], may send the one or more tokens, received from the PVIM [1050], to the node manager (i.e., the VNF-LM [1042]/CNF-LM [1052]) to automatically scale the one or more nodes. Further, the trigger may be an event, such as a TRIGGER_VNF_SCALING/TRIGGER_VNFC_SCALING event.
[0095] Continuing further, the transceiver unit [302] may receive at the PEEGN [1088], an acknowledgement response from the node manager, and the transceiver unit [302] may transmit from the PEEGN [1088], a response to the NPDA [1096], of the automatic scaling of the one or more nodes.
[0096] Thereafter, the method [400] may terminate at step [416].
[0097] Referring to FIG. 5, an exemplary session flow diagram for automatic scaling of one or more nodes, in accordance with exemplary implementations of the present disclosure, is illustrated. In an implementation, the session [500] is performed at the system [300].
[0098] At step 502, the policy execution engine (PEEGN) [1088] may receive a request for reservation and allocation from an analytics component. In an exemplary implementation, the analytics component may be a Network Function Virtualization Platform Decision and Analytics (NPDA) [1096]. Whenever a breach event is detected relating to VNF/CNF instances, the NPDA [1096] evaluates a policy relating to the breach event. It may be noted that the policy and the set of data related to historical instances of the breach event of the VNF/CNF instances may be retrieved by the NPDA [1096]. Based on the retrieved policy and the set of data, the NPDA [1096] evaluates a hysteresis for the breach event. Further, the NPDA [1096], based on the scaling policy of the VNF/VNFC/CNF/CNFC for which the threshold breach event is detected, executes a hysteresis check. If the hysteresis meets the criteria, the NPDA [1096] requests the PEEGN [1088] to execute a scaling policy.
[0099] Next, at step 504, the PEEGN [1088] may handle events for fetching the VNF/CNF details and resource details from the inventory (PVIM [1050]). In an exemplary implementation, the inventory of the VNF/CNF may maintain the details of the VNF/CNF. Further, the details of the VNF/CNF may include, but are not limited to, a VNF/CNF name, a VNF/CNF version, and the like. In an implementation, the request to fetch the set of data related to the one or more nodes may be a GET_VNF_DETAILS event. The PEEGN [1088] may send the GET_VNF_DETAILS to the one or more node catalogues to fetch details related to the one or more nodes and the one or more node components. In an implementation, the one or more node catalogues are VNF catalogues or CNF catalogues. Also, the set of data may include, but is not limited to, performance status, workload, capacity, and resource consumption.
[0100] Further, at step 506, after receiving all the information for the VNF/CNF based on the events, the PEEGN [1088] may save the response in the database [308] for further processing.
[0101] Further, at step 508, the PEEGN [1088] may send a request to the PVIM [1050] for reserving, allocating, or unreserving resources. The PEEGN [1088] may consult the PVIM [1050] to check the current used resources against the total allocated quota. For this, the PEEGN [1088] may send a PROVIDE_VIM_AZ_HA_DETAIL request to the PVIM [1050]. Further, in an implementation, the PEEGN [1088] sends the PROVIDE_VIM_AZ_HA_DETAIL request to the PVIM [1050] to provide the available PVIM details against each Availability Zone (AZ) and Host Aggregate (HA), and the used and free resources in each HA.
[0102] Furthermore, at step 510, upon receipt of the response of the corresponding event, the PEEGN [1088] may allow the logical handling of the automatic scale constraints. The automatic scale constraints may include, in an example, and without limitation, totalNoOfCPU: 120 (in cores), vMemorySize: 100 (GB), and diskSize: 23.
[0103] Thereafter, at step 512, the PEEGN [1088], after processing the scaling request based on the automatic scale constraints, may save the updated details for the VNF/CNF in the database [308] for further processing. Also, the PEEGN [1088] may repeat the entire session until the request of all the events is served.
[0104] Moreover, at step 514, the PEEGN [1088] may receive and generate a tokenizer response for the automatic scaling response. The tokenizer response may comprise one or more tokens. The token is a header that is used to validate the one or more nodes. All the responses will have a token that will be validated. Finally, after receiving the response, the PEEGN [1088] may send the acknowledgement response to the NPDA [1096].
[0105] The present disclosure may further relate to a non-transitory computer readable storage medium storing one or more instructions for automatic scaling of one or more nodes, the instructions including executable code which, when executed by one or more units of a system [300], causes a transceiver unit [302] of the system [300] to receive at a policy execution engine (PEEGN) [1088], from a Network Function Virtualization platform decision and analytics (NPDA) [1096], a request for executing an automatic scaling policy for the one or more nodes. Further, the executable code, when executed, causes the transceiver unit [302] to fetch at the PEEGN [1088], a set of data relating to the one or more nodes. Further, the executable code, when executed, causes the transceiver unit [302] to send from the PEEGN [1088] a request to get one or more current used resources by each of the one or more nodes, the allocated resource quota for each of the one or more nodes, and one or more available resources for the one or more nodes, to a physical and virtual inventory manager (PVIM) [1050]. The executable code, when further executed, causes a processing unit [304] of the system [300] to analyse at the PEEGN [1088], a demand for one or more resources, for automatic scaling of the one or more nodes, based on the one or more current used resources by each of the one or more nodes, the allocated resource quota for each of the one or more nodes, the one or more available resources for the one or more nodes, a set of automatic scaling constraints data, and the automatic scaling policy. Furthermore, the executable code, when executed, causes the transceiver unit [302] to transmit from the PEEGN [1088], to the PVIM [1050], a request for reserving the one or more resources. Moreover, the executable code, when executed, causes the transceiver unit [302] to trigger from the PEEGN [1088], the automatic scaling request to a node manager, based on a response from the PVIM [1050] on the request for reserving the one or more resources.
[0106] As is evident from the above, the present disclosure provides a technically advanced solution for automatic scaling of one or more nodes. More particularly, the present solution applies automatic scale constraints based on policies that are applicable for a VNF/VNFC/CNF/CNFC for automatic scaling of resources. Further, the present solution leads to zero data loss policies while VNF/VNFC/CNF/CNFC resources are scaling up. The present solution also supports event-driven scaling. Additionally, the automatic scale constraints address several critical problems in the MANO architecture. Below are some key problems that are solved by the automatic scale constraints:
• Excessive provisioning of resources.
• Insufficient provisioning of resources.
• Resource failures.
• Resource mismanagement.
• Performance degradation.
• Conflicts during the reservation and allocation of resources.
• Unavailability of the policy execution engine service.
• Time consumed in the reservation and allocation of VNF/VNFC/CNF/CNFC resources.
• Increased cost.
[0107] While considerable emphasis has been placed herein on the disclosed implementations, it will be appreciated that many implementations can be made and that many changes can be made to the implementations without departing from the principles of the present disclosure. These and other changes in the implementations of the present disclosure will be apparent to those skilled in the art, whereby it is to be understood that the foregoing descriptive matter is to be construed as illustrative and non-limiting.
[0108] Further, in accordance with the present disclosure, it is to be acknowledged that the functionality described for the various components/units can be implemented interchangeably. While specific embodiments may disclose a particular functionality of these units for clarity, it is recognized that various configurations and combinations thereof are within the scope of the disclosure. The functionality of specific units as disclosed in the disclosure should not be construed as limiting the scope of the present disclosure. Consequently, alternative arrangements and substitutions of units, provided they achieve the intended functionality described herein, are considered to be encompassed within the scope of the present disclosure.
We Claim:
1. A method for automatic scaling of one or more nodes, the method comprising:
- receiving, by a transceiver unit [302] at a policy execution engine (PEEGN) [1088], from a Network Function Virtualization platform decision and analytics (NPDA) [1096], a request for executing an automatic scaling policy for the one or more nodes;
- fetching, by the transceiver unit [302] at the PEEGN [1088], a set of data relating to the one or more nodes;
- sending, by the transceiver unit [302], from the PEEGN [1088] a request to get one or more current used resources by each of the one or more nodes, the allocated resource quota for each of the one or more nodes, and one or more available resources for the one or more nodes, to a physical and virtual inventory manager (PVIM) [1050];
- analysing, by a processing unit [304] at the PEEGN [1088], a demand for one or more resources, for automatic scaling of the one or more nodes, based on the one or more current used resources by each of the one or more nodes, the allocated resource quota for each of the one or more nodes, the one or more available resources for the one or more nodes, a set of automatic scaling constraints data, and the automatic scaling policy;
- transmitting, by the transceiver unit [302] from the PEEGN [1088], to the PVIM [1050], a request for one of reserving and unreserving of the one or more resources; and
- triggering, by the transceiver unit [302], from the PEEGN [1088], the automatic scaling request to a node manager, based on a response from the PVIM [1050] on the request for one of the reserving and the unreserving of the one or more resources.
2. The method as claimed in claim 1, wherein the one or more nodes comprise at least one of one or more virtual network functions (VNFs), one or more virtual network function components (VNFCs), one or more container network functions (CNFs), and one or more container network function components (CNFCs).
3. The method as claimed in claim 1, wherein fetching, by the transceiver unit [302] at the PEEGN [1088], the set of data relating to the one or more nodes comprises at least one of:
- transmitting, by the transceiver unit [302], from the PEEGN [1088] a request to one or more node catalogues associated with the one or more nodes to fetch the set of data related to the one or more nodes; and
- saving, by a storage unit [306], at the PEEGN [1088], the set of data related to the one or more nodes in a database [308].
4. The method as claimed in claim 1, wherein the set of automatic scaling constraints data comprises at least one of a total number of CPUs, a virtual memory size, and a disk size for the one or more nodes.
5. The method as claimed in claim 1, wherein automatic-scaling of the one or
more nodes comprises at least one of scale-in and scale-out of the one or
more nodes.
6. The method as claimed in claim 1, wherein the response from the PVIM
[1050] on the request for one of the reserving and the unreserving of the one
or more resources comprises one or more tokens for each of the one or more
nodes.
7. The method as claimed in claim 6, wherein the triggering, by the transceiver
unit [302], from the PEEGN [1088], the automatic scaling request to the
node manager comprises the one or more tokens for each of the one or more
nodes.
8. The method as claimed in claims 1 and 3, wherein prior to transmitting, by the transceiver unit [302] from the PEEGN [1088], to the PVIM [1050], a request for one of the reserving and the unreserving of the one or more resources, the method comprises:
- updating, by the storage unit [306], at the PEEGN [1088], the one or more current used resources by each of the one or more nodes, in the database [308].
9. The method as claimed in claim 1, further comprising:
- receiving, by the transceiver unit [302], at the PEEGN [1088], an acknowledgement response from the node manager; and
- transmitting, by the transceiver unit [302], from the PEEGN [1088], a response to the NPDA [1096], of the automatic scaling of the one or more nodes.
10. A system for automatic scaling of one or more nodes, the system comprising:
- a transceiver unit [302] configured to receive at a policy execution engine (PEEGN) [1088], from a Network Function Virtualization platform decision and analytics (NPDA) [1096], a request for executing an automatic scaling policy for the one or more nodes;
- the transceiver unit [302] configured to fetch at the PEEGN [1088], a set of data relating to the one or more nodes;
- the transceiver unit [302] configured to send from the PEEGN [1088] a request to get one or more current used resources by each of the one or more nodes, the allocated resource quota for each of the one or more nodes, and one or more available resources for the one or more nodes, to a physical and virtual inventory manager (PVIM) [1050];
- a processing unit [304] configured to analyse at the PEEGN [1088], a demand for one or more resources, for automatic scaling of the one or more nodes, based on the one or more current used resources by each of the one or more nodes, the allocated resource quota for each of the one or more nodes, the one or more available resources for the one or more nodes, a set of automatic scaling constraints data, and the automatic scaling policy;
- the transceiver unit [302] configured to transmit from the PEEGN [1088], to the PVIM [1050], a request for one of reserving and unreserving of the one or more resources; and
- the transceiver unit [302] configured to trigger from the PEEGN [1088], the automatic scaling request to a node manager, based on a response from the PVIM [1050] on the request for one of reserving and unreserving of the one or more resources.

11. The system as claimed in claim 10, wherein the one or more nodes comprise at least one of one or more virtual network functions (VNFs), one or more virtual network function components (VNFCs), one or more container network functions (CNFs), and one or more container network function components (CNFCs).
12. The system as claimed in claim 10, wherein fetching, by the transceiver unit [302] at the PEEGN [1088], the set of data relating to the one or more nodes comprises:
- the transceiver unit [302] configured to transmit from the PEEGN [1088] a request to one or more node catalogues associated with the one or more nodes to fetch the set of data related to the one or more nodes; and
- a storage unit [306] configured to save at the PEEGN [1088], the set of data related to the one or more nodes in a database [308].
13. The system as claimed in claim 10, wherein the set of automatic scaling constraints data comprises at least one of a total number of CPUs, a virtual memory size, and a disk size for the one or more nodes.
14. The system as claimed in claim 10, wherein automatic-scaling of the one or
more nodes comprises at least one of scale-in and scale-out of the one or
more nodes.
15. The system as claimed in claim 10, wherein the response from the PVIM
[1050] on the request for one of the reserving and the unreserving of the one
or more resources comprises one or more tokens for each of the one or more
nodes.
16. The system as claimed in claim 15, wherein the triggering, by the
transceiver unit [302], from the PEEGN [1088], the automatic scaling
request to the node manager comprises the one or more tokens for each of
the one or more nodes.
17. The system as claimed in claims 10 and 12, wherein prior to transmitting, by the transceiver unit [302] from the PEEGN [1088], to the PVIM [1050], a request for one of the reserving and the unreserving of the one or more resources, the system comprises:
- the storage unit [306], configured to update at the PEEGN [1088], the one or more current used resources by each of the one or more nodes in the database [308].
18. The system as claimed in claim 10, wherein the system further comprises:
- the transceiver unit [302], configured to receive at the PEEGN [1088],
an acknowledgement response from the node manager; and
- the transceiver unit [302], configured to transmit from the PEEGN
[1088], a response to the NPDA [1096], of the automatic scaling of
the one or more nodes.

Documents

Application Documents

# Name Date
1 202321066607-STATEMENT OF UNDERTAKING (FORM 3) [04-10-2023(online)].pdf 2023-10-04
2 202321066607-PROVISIONAL SPECIFICATION [04-10-2023(online)].pdf 2023-10-04
3 202321066607-POWER OF AUTHORITY [04-10-2023(online)].pdf 2023-10-04
4 202321066607-FORM 1 [04-10-2023(online)].pdf 2023-10-04
5 202321066607-FIGURE OF ABSTRACT [04-10-2023(online)].pdf 2023-10-04
6 202321066607-DRAWINGS [04-10-2023(online)].pdf 2023-10-04
7 202321066607-Proof of Right [09-02-2024(online)].pdf 2024-02-09
8 202321066607-FORM-5 [04-10-2024(online)].pdf 2024-10-04
9 202321066607-ENDORSEMENT BY INVENTORS [04-10-2024(online)].pdf 2024-10-04
10 202321066607-DRAWING [04-10-2024(online)].pdf 2024-10-04
11 202321066607-CORRESPONDENCE-OTHERS [04-10-2024(online)].pdf 2024-10-04
12 202321066607-COMPLETE SPECIFICATION [04-10-2024(online)].pdf 2024-10-04
13 202321066607-FORM 3 [08-10-2024(online)].pdf 2024-10-08
14 202321066607-Request Letter-Correspondence [24-10-2024(online)].pdf 2024-10-24
15 202321066607-Power of Attorney [24-10-2024(online)].pdf 2024-10-24
16 202321066607-Form 1 (Submitted on date of filing) [24-10-2024(online)].pdf 2024-10-24
17 202321066607-Covering Letter [24-10-2024(online)].pdf 2024-10-24
18 202321066607-CERTIFIED COPIES TRANSMISSION TO IB [24-10-2024(online)].pdf 2024-10-24
19 Abstract.jpg 2024-12-04
20 202321066607-ORIGINAL UR 6(1A) FORM 1 & 26-030125.pdf 2025-01-07