Abstract: The present disclosure relates to a method and a system for managing one or more services. The present disclosure encompasses: sending, by a processing unit [304] from a policy execution engine (PEEGN) [1088], an event trigger to a virtual network function lifecycle manager (VLM) [1042] for performing one or more actions on a virtual network function (VNF) through an interface [306]; receiving, by the processing unit [304] at the PEEGN [1088], from the VLM [1042], an event acknowledgement as a response after performing the one or more actions on the VNF; storing, by the processing unit [304] at a data storage unit [302], data related to the event trigger and event acknowledgement; and sending, by the processing unit [304] from the PEEGN, a response associated with the event trigger to an NFV Platform Decision Analytics (NPDA) [1096]. [FIG. 4]
FORM 2
THE PATENTS ACT, 1970
(39 OF 1970)
&
THE PATENT RULES, 2003
COMPLETE SPECIFICATION
(See section 10 and rule 13)
“METHOD AND SYSTEM FOR MANAGING ONE OR MORE
SERVICES”
We, Jio Platforms Limited, an Indian National, of Office - 101, Saffron, Nr. Centre
Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India.
The following specification particularly describes the invention and the manner in
which it is to be performed.
METHOD AND SYSTEM FOR MANAGING ONE OR MORE SERVICES
FIELD OF INVENTION
[0001] The present disclosure generally relates to network performance
management systems. More particularly, embodiments of the present disclosure
relate to methods and systems for managing one or more services.
BACKGROUND
[0002] The following description of the related art is intended to provide
background information pertaining to the field of the disclosure. This section may
include certain aspects of the art that may be related to various features of the
present disclosure. However, it should be appreciated that this section is used only
to enhance the understanding of the reader with respect to the present disclosure,
and not as admissions of the prior art.
[0003] In a communication network, such as a 5G communication network, different microservices (hereinafter referred to as “MS”) perform different services, jobs, and tasks in the network. For example, a Policy Execution Engine (PE/PEEGN) provides Network Function Virtualization Software-Defined Network (NFV SDN) platform functionality to support the dynamic requirements of resource management and network service orchestration in the virtualized and containerized network. The PE service also stores and provides policies for the resource, security, availability, and scalability of virtual network functions (VNFs), and executes automatic scaling and healing functionality of the VNF(s) and network service(s).
[0004] Moreover, for achieving instantiate/terminate/scale functionality in a Network Functions Virtualization (NFV) platform, the VNF Lifecycle Manager (VLM) microservice is developed, which is responsible for the lifecycle management of VNF instances. In the existing systems, there is no real-time communication between the VLM and the policy execution engine (PE/PEEGN). The policy execution engine (PE/PEEGN) therefore currently fails to send, in real time, a VNF scaling/healing event to the VLM (VNF Lifecycle Manager), and the PEEGN also currently fails to receive a response from the VNF in real time.
[0005] Thus, there exists an imperative need in the art for an efficient system and method that can overcome the limitations of the existing systems and provide a method and system for managing one or more services that can perform its operations efficiently. The present disclosure provides such a method and system and thus overcomes the limitations of the existing art.
SUMMARY
[0006] This section is provided to introduce certain aspects of the present disclosure
in a simplified form that are further described below in the detailed description.
This summary is not intended to identify the key features or the scope of the claimed
subject matter.
[0007] An aspect of the present disclosure may relate to a method for managing
one or more services. The method includes sending, by a processing unit from a
policy execution engine (PEEGN), an event trigger to a virtual network function
lifecycle manager (VLM) for performing one or more actions on a virtual network
function (VNF) through an interface. The method further includes receiving, by the
processing unit at the PEEGN, from the VLM, an event acknowledgement as a
response after performing the one or more actions on the VNF. The method further
includes storing, by the processing unit at a data storage unit, data related to the
event trigger and event acknowledgment. The method further includes sending, by
the processing unit from the PEEGN, a response associated with the event trigger
to an NFV Platform Decision Analytics (NPDA).
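The four summarized steps (send trigger, receive acknowledgement, store data, forward the response to the NPDA) can be sketched as the following illustrative message flow. All class and method names here are hypothetical assumptions for illustration only and do not appear in the specification; the PE_VN interface is modeled as a plain method call.

```python
# Illustrative sketch of the PEEGN -> VLM -> NPDA event flow summarized above.
# All names are hypothetical; the interface between PEEGN and VLM is a method call.

class VLM:
    """Hypothetical VNF lifecycle manager: performs the action, returns an ack."""
    def handle_event(self, event):
        # ... scaling/healing/termination on the VNF would happen here ...
        return {"event_id": event["event_id"], "status": "ACK"}

class PEEGN:
    def __init__(self, vlm, storage, npda_log):
        self.vlm = vlm            # endpoint reached through the PE_VN interface
        self.storage = storage    # data storage unit
        self.npda_log = npda_log  # responses forwarded to the NPDA

    def trigger(self, vnf_id, action):
        event = {"event_id": 1, "vnf_id": vnf_id, "action": action}
        ack = self.vlm.handle_event(event)  # steps 1-2: send trigger, receive ack
        self.storage.append((event, ack))   # step 3: store trigger and ack
        self.npda_log.append(ack)           # step 4: forward response to NPDA
        return ack

storage, npda = [], []
peegn = PEEGN(VLM(), storage, npda)
result = peegn.trigger("vnf-42", "scale-out")
```

The sketch deliberately keeps the transport abstract; in a real deployment the trigger and acknowledgement would travel over an inter-microservice API rather than an in-process call.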
[0008] In an exemplary aspect of the present disclosure, the one or more services comprise at least: the virtual network function lifecycle manager (VLM), and the PEEGN.
[0009] In an exemplary aspect of the present disclosure, the one or more actions comprise at least one of: scaling of the VNF, healing of the VNF, and termination of the VNF.
[0010] In an exemplary aspect of the present disclosure, the interface is a PE_VN interface, wherein the PE_VN interface is used when the one or more actions on the VNF are being performed.
[0011] In an exemplary aspect of the present disclosure, the healing of the VNF corresponds to restoring a failed VNF based on one or more healing policies.
[0012] In an exemplary aspect of the present disclosure, for the healing of the VNF, the method further comprises: transmitting, by the PEEGN, one or more healing policies to the VLM to one of restart or migrate a VNF instance to a host upon sending the event trigger for healing the VNF; and sending, from the PEEGN, an update instance status event to the VLM for healing.
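The healing aspect above can be sketched as follows. The policy contents, class names, and the `update_instance_status` event shape are hypothetical assumptions for illustration; only the overall flow (PEEGN transmits healing policies to the VLM, which restarts or migrates the instance, followed by a status update event) comes from the specification.

```python
# Illustrative healing flow from paragraph [0012]; all names are hypothetical.

HEALING_POLICIES = [
    {"policy": "restart-in-place", "max_attempts": 1},   # restart the VNF instance
    {"policy": "migrate-to-host", "target": "standby-host"},  # or migrate to a host
]

def heal_vnf(vnf, vlm):
    """PEEGN side: transmit healing policies with the trigger, then send a
    (hypothetical) update-instance-status event for the healing."""
    vlm.apply_healing(vnf, HEALING_POLICIES)
    return {"vnf": vnf, "event": "update_instance_status", "status": "HEALING"}

class FakeVLM:
    """Stand-in VLM that records what the PEEGN sent it."""
    def __init__(self):
        self.applied = []
    def apply_healing(self, vnf, policies):
        self.applied.append((vnf, policies))

vlm = FakeVLM()
status_event = heal_vnf("vnf-7", vlm)
```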
[0013] In an exemplary aspect of the present disclosure, the scaling of the VNF
corresponds to optimizing utilization of one or more resources for the VNF, wherein
the scaling comprises scale-in action and scale-out action for the VNF.
[0014] In an exemplary aspect of the present disclosure, for the termination of the VNF, the method further comprises: sending, from the VLM, a free VNF resource event to the PEEGN to unreserve one or more resources at a physical and virtual inventory manager (PVIM); sending, from the PEEGN, a free allocated resource event to the PVIM for requesting the one or more resources from an allocation pool to a free pool related to the VNF; receiving, at the PEEGN, from the PVIM, an event acknowledgment, after releasing the one or more resources for the VNF; and sending, from the PEEGN, a response back to the VLM, upon receiving the event acknowledgement from the PVIM.
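The termination aspect above can be sketched as the following minimal flow. The resource identifiers, pool representation, and function names are hypothetical assumptions for illustration; only the sequence (VLM's free-VNF-resource event triggers the PEEGN to ask the PVIM to move resources from the allocation pool to the free pool, and the PVIM's acknowledgement is relayed back to the VLM) comes from the specification.

```python
# Illustrative termination flow from paragraph [0014]; all names are hypothetical.

class PVIM:
    """Stand-in physical and virtual inventory manager with two resource pools."""
    def __init__(self):
        self.allocated = {"vnf-9": ["vm-1", "vm-2"]}  # allocation pool
        self.free = []                                # free pool
    def free_allocated_resources(self, vnf_id):
        # move the VNF's resources from the allocation pool to the free pool
        self.free.extend(self.allocated.pop(vnf_id, []))
        return {"vnf_id": vnf_id, "status": "ACK"}    # event acknowledgment

def terminate_vnf(vnf_id, pvim):
    """PEEGN side: triggered by the VLM's free-VNF-resource event."""
    ack = pvim.free_allocated_resources(vnf_id)        # free allocated resource event
    return {"to": "VLM", "vnf_id": vnf_id, "ack": ack}  # response back to the VLM

pvim = PVIM()
response = terminate_vnf("vnf-9", pvim)
```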
[0015] Another aspect of the present disclosure may relate to a system for managing one or more services. The system comprises a data storage unit. The system further comprises a processing unit connected with the data storage unit. The processing unit is configured to send, from a policy execution engine (PEEGN), an event trigger to a virtual network function lifecycle manager (VLM) for performing one or more actions on a virtual network function (VNF) through an interface. The processing unit is further configured to receive, at the PEEGN, from the VLM, an event acknowledgement as a response after performing the one or more actions on the VNF. The processing unit is further configured to store, at the data storage unit, data related to the event trigger and event acknowledgment. The processing unit is further configured to send, from the PEEGN, a response associated with the event trigger to an NFV Platform Decision Analytics (NPDA).
[0016] Yet another aspect of the present disclosure may relate to a non-transitory computer readable storage medium storing instructions for managing one or more services, the instructions comprising executable code which, when executed by one or more units of a system, causes: a processing unit to send, from a policy execution engine (PEEGN), an event trigger to a virtual network function lifecycle manager (VLM) for performing one or more actions on a virtual network function (VNF) through an interface. The executable code, when executed, further causes the processing unit to receive, at the PEEGN, from the VLM, an event acknowledgement as a response after performing the one or more actions on the VNF. The executable code, when executed, further causes the processing unit to store, at a data storage unit, data related to the event trigger and event acknowledgment. The executable code, when executed, further causes the processing unit to send, from the PEEGN, a response associated with the event trigger to an NFV Platform Decision Analytics (NPDA).
OBJECTS OF THE DISCLOSURE
[0017] Some of the objects of the present disclosure, which at least one embodiment disclosed herein satisfies, are listed herein below.
[0018] It is an object of the present disclosure to provide a system and a method that provides an interface (referred to herein as a PE_VN interface) between a VLM (VNF Lifecycle Manager) and a policy execution engine (PE or PEEGN) by which various operations at the PE can be performed.
[0019] It is another object of the present disclosure to provide a system and a method for providing communication between a VLM and a policy execution engine (PE/PEEGN).
[0020] It is another object of the present disclosure to provide a system and a method that can enable the PEEGN to 1) send a VNF scaling/healing event to the VLM (VNF Lifecycle Manager), 2) receive a response to the VNF scaling/healing event accordingly, 3) store the received response, and 4) send the response back to the NFV Platform Decision Analytics (NPDA) in case of healing and scaling.
[0021] It is yet another object of the present disclosure to provide a system and a
method that can provide an interface that can be used during VNF termination to
instruct a physical and virtual inventory manager (PVIM) to free VNF resources.
BRIEF DESCRIPTION OF THE DRAWINGS
[0022] The accompanying drawings, which are incorporated herein, and constitute a part of this disclosure, illustrate exemplary embodiments of the disclosed methods and systems in which like reference numerals refer to the same parts throughout the different drawings. Components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present disclosure. Also, the embodiments shown in the figures are not to be construed as limiting the disclosure, but the possible variants of the method and system according to the disclosure are illustrated herein to highlight the advantages of the disclosure. It will be appreciated by those skilled in the art that disclosure of such drawings includes disclosure of electrical components or circuitry commonly used to implement such components.
[0023] FIG. 1 illustrates an exemplary block diagram representation of a management and orchestration (MANO) architecture, in accordance with exemplary implementations of the present disclosure.
[0024] FIG. 2 illustrates an exemplary block diagram of a computing device upon which the features of the present disclosure may be implemented, in accordance with exemplary implementations of the present disclosure.
[0025] FIG. 3 illustrates an exemplary block diagram of a system for managing one
or more services, in accordance with exemplary implementations of the present
disclosure.
[0026] FIG. 4 illustrates a method flow diagram for managing one or more services
in accordance with exemplary implementations of the present disclosure.
[0027] FIG. 5 illustrates an exemplary block diagram of a system architecture for managing one or more services, in accordance with exemplary implementations of the present disclosure.
[0028] FIG. 6 illustrates an exemplary process flow diagram for managing one or more services for the termination of the VNF.
[0029] FIG. 7 illustrates a process flow diagram for managing one or more services
in accordance with exemplary implementations of the present disclosure.
[0030] The foregoing shall be more apparent from the following more detailed description of the disclosure.
DETAILED DESCRIPTION
[0031] In the following description, for the purposes of explanation, various specific details are set forth in order to provide a thorough understanding of embodiments of the present disclosure. It will be apparent, however, that embodiments of the present disclosure may be practiced without these specific details. Several features described hereafter may each be used independently of one another or with any combination of other features. An individual feature may not address any of the problems discussed above or might address only some of the problems discussed above.
[0032] The ensuing description provides exemplary embodiments only, and is not intended to limit the scope, applicability, or configuration of the disclosure. Rather, the ensuing description of the exemplary embodiments will provide those skilled in the art with an enabling description for implementing an exemplary embodiment. It should be understood that various changes may be made in the function and arrangement of elements without departing from the spirit and scope of the disclosure as set forth.
[0033] Specific details are given in the following description to provide a thorough understanding of the embodiments. However, it will be understood by one of ordinary skill in the art that the embodiments may be practiced without these specific details. For example, circuits, systems, processes, and other components may be shown as components in block diagram form in order not to obscure the embodiments in unnecessary detail.
[0034] Also, it is noted that individual embodiments may be described as a process which is depicted as a flowchart, a flow diagram, a data flow diagram, a structure diagram, or a block diagram. Although a flowchart may describe the operations as a sequential process, many of the operations may be performed in parallel or concurrently. In addition, the order of the operations may be re-arranged. A process is terminated when its operations are completed but could have additional steps not included in a figure.
[0035] The word “exemplary” and/or “demonstrative” is used herein to mean serving as an example, instance, or illustration. For the avoidance of doubt, the subject matter disclosed herein is not limited by such examples. In addition, any aspect or design described herein as “exemplary” and/or “demonstrative” is not necessarily to be construed as preferred or advantageous over other aspects or designs, nor is it meant to preclude equivalent exemplary structures and techniques known to those of ordinary skill in the art. Furthermore, to the extent that the terms “includes,” “has,” “contains,” and other similar words are used in either the detailed description or the claims, such terms are intended to be inclusive—in a manner similar to the term “comprising” as an open transition word—without precluding any additional or other elements.
[0036] As used herein, a “processing unit” or “processor” or “operating processor” includes one or more processors, wherein a processor refers to any logic circuitry for processing instructions. A processor may be a general-purpose processor, a special purpose processor, a conventional processor, a digital signal processor, a plurality of microprocessors, one or more microprocessors in association with a Digital Signal Processing (DSP) core, a controller, a microcontroller, Application Specific Integrated Circuits, Field Programmable Gate Array circuits, any other type of integrated circuits, etc. The processor may perform signal coding, data processing, input/output processing, and/or any other functionality that enables the working of the system according to the present disclosure. More specifically, the processor or processing unit is a hardware processor.
[0037] As used herein, “a user equipment”, “a user device”, “a smart-user-device”, “a smart-device”, “an electronic device”, “a mobile device”, “a handheld device”, “a wireless communication device”, “a mobile communication device”, “a communication device” may be any electrical, electronic and/or computing device or equipment, capable of implementing the features of the present disclosure. The user equipment/device may include, but is not limited to, a mobile phone, smart phone, laptop, a general-purpose computer, desktop, personal digital assistant, tablet computer, wearable device or any other computing device which is capable of implementing the features of the present disclosure. Also, the user device may contain at least one input means configured to receive an input from unit(s) which are required to implement the features of the present disclosure.
[0038] As used herein, “storage unit” or “memory unit” refers to a machine or computer-readable medium including any mechanism for storing information in a form readable by a computer or similar machine. For example, a computer-readable medium includes read-only memory (“ROM”), random access memory (“RAM”), magnetic disk storage media, optical storage media, flash memory devices or other types of machine-accessible storage media. The storage unit stores at least the data that may be required by one or more units of the system to perform their respective functions.
[0039] As used herein, “interface” or “user interface” refers to a shared boundary across which two or more separate components of a system exchange information or data. The interface may also refer to a set of rules or protocols that define the communication or interaction of one or more modules or one or more units with each other, which also includes the methods, functions, or procedures that may be called.
[0040] All modules, units, and components used herein, unless explicitly excluded herein, may be software modules or hardware processors, the processors being a general-purpose processor, a special purpose processor, a conventional processor, a digital signal processor (DSP), a plurality of microprocessors, one or more microprocessors in association with a DSP core, a controller, a microcontroller, Application Specific Integrated Circuits (ASIC), Field Programmable Gate Array circuits (FPGA), any other type of integrated circuits, etc.
[0041] As used herein, the transceiver unit includes at least one receiver and at least one transmitter configured respectively for receiving and transmitting data, signals, information, or a combination thereof between units/components within the system and/or connected with the system.
[0042] As used herein, the virtual network function (VNF) refers to a network function module that operates in virtualized environments such as virtual machines or containers. This virtualization allows for dynamic scaling and rapid adaptation to changing network conditions, reducing hardware requirements.
[0043] As used herein, the virtual network function component (VNFC) refers to a sub-component within a virtual network function (VNF) that performs a specific task or set of tasks related to the overall network function. VNFCs decompose VNFs into smaller units, each responsible for unique functions, such as packet inspection, policy enforcement, etc.
[0044] As used herein, HTTP (Hypertext Transfer Protocol) is the set of rules for transferring files such as text, images, sound, video, and other multimedia files over the web.
[0045] As discussed in the background section, the current known solutions have several shortcomings. The present disclosure aims to overcome the above-mentioned and other existing problems in this field of technology by providing a method and system for providing communication between a virtual network function lifecycle manager (VLM) and a policy execution engine (PEEGN).
[0046] Referring to FIG. 1, an exemplary block diagram representation of a management and orchestration (MANO) architecture [100], in accordance with an exemplary implementation of the present disclosure, is illustrated. The MANO architecture [100] is developed for automatically managing telecom cloud infrastructure, managing design or deployment design, managing instantiation of network node(s), etc. The MANO architecture [100] deploys the network node(s) in the form of Virtual Network Functions (VNFs) and Cloud-native/Container Network Functions (CNFs). The MANO architecture [100] is used to auto-instantiate the VNFs into the corresponding environment of the present disclosure so that it could help in onboarding other vendors' CNFs and VNFs to the platform.
[0047] As shown in FIG. 1, the MANO architecture [100] comprises a user interface layer [102]; a network function virtualization (NFV) and software defined network (SDN) design function module [104]; a platform foundation services module [106]; a platform core services module [108]; and a platform resource adapters and utilities module [112], wherein all the components are assumed to be connected to each other in a manner as obvious to the person skilled in the art for implementing features of the present disclosure.
[0048] The NFV and SDN design function module [104] further comprises a VNF lifecycle manager (compute) [1042]; a VNF catalog [1044]; a network services catalog [1046]; a network slicing and service chaining manager [1048]; a physical and virtual resource manager [1050]; and a CNF lifecycle manager [1052]. The VNF lifecycle manager (compute) [1042] is responsible for determining on which server of the communication network the microservice will be instantiated. The VNF lifecycle manager (compute) [1042] will manage the overall flow of incoming/outgoing requests during interaction with the user. The VNF lifecycle manager (compute) [1042] is responsible for determining which sequence is to be followed for executing the process, for example, in an AMF network function of the communication network (such as a 5G network), the sequence for execution of processes P1 and P2, etc. The VNF catalog [1044] stores the metadata of all the VNFs (also CNFs in some cases). The network services catalog [1046] stores the information of the services that need to be run. The network slicing and service chaining manager [1048] manages the slicing (an ordered and connected sequence of network services/network functions (NFs) that must be applied to a specific networked data packet). The physical and virtual resource manager [1050] stores the logical and physical inventory of the VNFs. In an example, the logical inventory of VNFs includes virtualized instances of network functions like firewalls, load balancers, and routers. In an example, the physical inventory includes hardware such as servers, storage devices, and network equipment that host VNFs. Also, the physical inventory includes software, such as OpenStack, that allows multiple virtual machines to run on a single physical server. Just like the VNF lifecycle manager (compute) [1042], the CNF lifecycle manager [1052] is similarly used for CNF lifecycle management.
[0049] The platform foundation services module [106] further comprises a microservices edge load balancer [1062]; an identity & access manager [1064]; a command line interface (CLI) [1066]; a central logging manager [1068]; and an event routing manager (ERM) [1070] (alternatively referred to as the ERM unit [1070] herein). The microservices edge load balancer [1062] is used for maintaining the load balancing of the requests for the services. The identity & access manager [1064] is used for logging purposes. The command line interface (CLI) [1066] is used to provide commands to execute certain processes which require changes during run time. The central logging manager [1068] is responsible for keeping the logs of every service. The logs are generated by the MANO architecture [100]. The logs are used for debugging purposes. The ERM unit [1070] is responsible for routing the events, i.e., the application programming interface (API) hits, to the corresponding services.
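The routing role of the ERM unit described above can be sketched as a minimal prefix-based dispatcher. The route table and path prefixes below are hypothetical assumptions for illustration; the specification only states that the ERM routes API hits to the corresponding services.

```python
# Illustrative sketch of the ERM unit routing API hits to services.
# The path prefixes and service names are hypothetical.

ROUTES = {
    "/policy": "PEEGN",      # policy-related hits go to the policy execution engine
    "/lifecycle": "VLM",     # lifecycle hits go to the VNF lifecycle manager
    "/inventory": "PVIM",    # inventory hits go to the inventory manager
}

def route_event(path):
    """Return the service an incoming API hit should be delivered to."""
    for prefix, service in ROUTES.items():
        if path.startswith(prefix):
            return service
    return "UNROUTED"
```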
[0050] The platform core services module [108] further comprises an NFV infrastructure monitoring manager [1082]; an assure manager [1084]; a performance manager [1086]; a policy execution engine (PEEGN) [1088]; a capacity monitoring manager (CP) [1090]; a release management (mgmt.) repository [1092]; a configuration manager & Golden Configuration Template (GCT) [1094]; an NFV platform decision analytics [1096]; a platform NoSQL DB [1098]; a platform schedulers and cron jobs (PSC) service [1100]; a VNF backup & upgrade manager [1102]; a microservice auditor [1104]; and a platform operations, administration and maintenance manager [1106]. The NFV infrastructure monitoring manager [1082] monitors the infrastructure part of the NFs, for example, any metrics such as CPU utilization by the VNF. The assure manager [1084] is responsible for supervising the alarms the vendor is generating. The performance manager [1086] is responsible for managing the performance counters. The PEEGN [1088] is responsible for creating and managing all the policies. In an example, the PEEGN [1088] may be responsible for policies such as scaling policies, corrective policies, alarm policies, and resource threshold policies; other examples may be policies related to the instantiation process, the healing process, and the like. The capacity monitoring manager (CP) [1090] is responsible for sending the request to the PEEGN [1088]. The capacity monitoring manager (CP) [1090] is capable of monitoring usage of network resources such as, but not limited to, CPU utilization, RAM utilization, and storage utilization across all the instances of the virtual infrastructure manager (VIM), or simply the NFV infrastructure monitoring manager [1082]. The capacity monitoring manager (CP) [1090] is also capable of monitoring said network resources for each instance of the VNF. The capacity monitoring manager (CP) [1090] is responsible for constantly tracking the network resource utilization. The release management (mgmt.) repository [1092] is responsible for managing the releases and the images of all the vendor network nodes. The configuration manager & GCT [1094] manages the configuration and GCT of all the vendors. The NFV platform decision analytics [1096] helps in deciding the priority of using the network resources. It is further noted that the PEEGN [1088], the configuration manager & GCT [1094], and the NFV platform decision analytics [1096] work together. The platform NoSQL DB [1098] is a database for storing all the inventory (both physical and logical) as well as the metadata of the VNFs and CNFs. The platform schedulers and cron jobs (PSC) service [1100] schedules tasks such as, but not limited to, triggering of an event, traversing the network graph, etc. The VNF backup & upgrade manager [1102] takes backups of the images and binaries of the VNFs and the CNFs and produces those backups on demand in case of server failure. The microservice auditor [1104] audits the microservices. For example, in a hypothetical case where instances are not instantiated by the MANO architecture [100] and yet use the network resources, the microservice auditor [1104] audits and reports the same so that resources can be released for services running in the MANO architecture [100], thereby assuring that the services only run on the MANO architecture [100]. The platform operations, administration, and maintenance manager [1106] is used for newer instances that are spawning.
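The interaction between the CP's resource tracking and the PEEGN's resource threshold policies can be sketched as a simple threshold check. The threshold values and function names are hypothetical assumptions for illustration; the specification does not define concrete thresholds or a decision function.

```python
# Illustrative sketch: the CP reports resource utilization, and the PEEGN maps
# it to a scale action via a resource threshold policy. Thresholds are hypothetical.

SCALING_POLICY = {"cpu_high": 0.80, "cpu_low": 0.20}  # resource threshold policy

def evaluate_scaling(cpu_utilization, policy=SCALING_POLICY):
    """PEEGN side: map a CP utilization report (0.0-1.0) to a scale action."""
    if cpu_utilization >= policy["cpu_high"]:
        return "scale-out"   # add VNF instances to absorb load
    if cpu_utilization <= policy["cpu_low"]:
        return "scale-in"    # remove VNF instances to free resources
    return "no-op"
```

The resulting action would then be carried to the VLM as the event trigger described in the summary, closing the loop between the CP, the PEEGN, and the VLM.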
[0051] The platform resource adapters and utilities module [112] further comprises a platform external API adaptor and gateway [1122]; a generic decoder and indexer (XML, CSV, JSON) [1124]; a docker service adaptor [1126]; an API adapter [1128]; and an NFV gateway [1130]. The platform external API adaptor and gateway [1122] is responsible for handling the external services (to the MANO architecture [100]) that require the network resources. The generic decoder and indexer (XML, CSV, JSON) [1124] directly gets the data of the vendor system in the XML, CSV, JSON format. The docker service adaptor [1126] is the interface provided between the telecom cloud and the MANO architecture [100] for communication. The API adapter [1128] is used to connect with the virtual machines (VMs). The NFV gateway [1130] is responsible for providing the path to each service going to/incoming from the MANO architecture [100].
[0052] FIG. 2 illustrates an exemplary block diagram of a computing device [200] upon which the features of the present disclosure may be implemented, in accordance with exemplary implementations of the present disclosure. In an implementation, the computing device [200] may also implement a method for managing one or more services utilising the system [300]. In another implementation, the computing device [200] itself implements the method for managing one or more services using one or more units configured within the computing device [200], wherein said one or more units are capable of implementing the features as disclosed in the present disclosure.
[0053] The computing device [200] may include a bus [202] or other communication mechanism for communicating information, and a hardware processor [204] coupled with the bus [202] for processing information. The hardware processor [204] may be, for example, a general-purpose microprocessor. The computing device [200] may also include a main memory [206], such as a random-access memory (RAM), or other dynamic storage device, coupled to the bus [202] for storing information and instructions to be executed by the processor [204]. The main memory [206] also may be used for storing temporary variables or other intermediate information during execution of the instructions to be executed by the processor [204]. Such instructions, when stored in non-transitory storage media accessible to the processor [204], render the computing device [200] into a special-purpose machine that is customized to perform the operations specified in the instructions. The computing device [200] further includes a read only memory (ROM) [208] or other static storage device coupled to the bus [202] for storing static information and instructions for the processor [204].
[0054] A storage device [210], such as a magnetic disk, optical disk, or solid-state drive, is provided and coupled to the bus [202] for storing information and instructions. The computing device [200] may be coupled via the bus [202] to a display [212], such as a cathode ray tube (CRT), Liquid Crystal Display (LCD), Light Emitting Diode (LED) display, Organic LED (OLED) display, etc., for displaying information to a computer user. An input device [214], including alphanumeric and other keys, touch screen input means, etc., may be coupled to the bus [202] for communicating information and command selections to the processor [204]. Another type of user input device may be a cursor controller [216], such as a mouse, a trackball, or cursor direction keys, for communicating direction information and command selections to the processor [204], and for controlling cursor movement on the display [212]. The input device typically has two degrees of freedom in two axes, a first axis (e.g., x) and a second axis (e.g., y), that allow the device to specify positions in a plane.
[0055] The computing device [200] may implement the techniques described herein using customized hard-wired logic, one or more ASICs or FPGAs, firmware and/or program logic which, in combination with the computing device [200], causes or programs the computing device [200] to be a special-purpose machine. According to one implementation, the techniques herein are performed by the computing device [200] in response to the processor [204] executing one or more sequences of one or more instructions contained in the main memory [206]. Such instructions may be read into the main memory [206] from another storage medium, such as the storage device [210]. Execution of the sequences of instructions contained in the main memory [206] causes the processor [204] to perform the process steps described herein. In alternative implementations of the present disclosure, hard-wired circuitry may be used in place of or in combination with software instructions.
[0056] The computing device [200] also may include a communication interface [218] coupled to the bus [202]. The communication interface [218] provides a two-way data communication coupling to a network link [220] that is connected to a local network [222]. For example, the communication interface [218] may be an integrated services digital network (ISDN) card, cable modem, satellite modem, or a modem to provide a data communication connection to a corresponding type of telephone line. As another example, the communication interface [218] may be a local area network (LAN) card to provide a data communication connection to a compatible LAN. Wireless links may also be implemented. In any such implementation, the communication interface [218] sends and receives electrical, electromagnetic or optical signals that carry digital data streams representing various types of information.
[0057] The computing device [200] can send messages and receive data, including program code, through the network(s), the network link [220] and the communication interface [218]. In the Internet example, a server [230] might transmit a requested code for an application program through the Internet [228], the ISP [226], the local network [222], a host [224] and the communication interface [218]. The received code may be executed by the processor [204] as it is received, and/or stored in the storage device [210], or other non-volatile storage for later execution.
[0058] The computing device [200] encompasses a wide range of electronic devices capable of processing data and performing computations. Examples of the computing device [200] include, but are not limited to, personal computers, laptops, tablets, smartphones, servers, and embedded systems. The devices may operate independently or as part of a network and can perform a variety of tasks such as data storage, retrieval, and analysis. Additionally, the computing device [200] may include peripheral devices, such as monitors, keyboards, and printers, as well as integrated components within larger electronic systems, showcasing their versatility in various technological applications.
[0059] Referring to FIG. 3, an exemplary block diagram of a system [300] for managing one or more services is shown, in accordance with the exemplary implementations of the present disclosure. The system [300] comprises at least one data storage unit [302], at least one processing unit [304], and at least one interface [306]. All of the components/units of the system [300] are assumed to be connected to each other unless otherwise indicated below. As shown in the figures, all units shown within the system [300] should also be assumed to be connected to each other. Also, in FIG. 3 only a few units are shown; however, the system [300] may comprise multiple such units, or the system [300] may comprise any number of said units, as required to implement the features of the present disclosure. Further, in an implementation, the system [300] may be present in a user device/user equipment to implement the features of the present disclosure. The system [300] may be a part of the user device or may be independent of but in communication with the user device (also referred to herein as a UE). In another implementation, the system [300] may reside in a server or a network entity. In yet another implementation, the system [300] may reside partly in the server/network entity and partly in the user device.
[0060] The system [300] is configured for providing communication between a virtual network function lifecycle manager (VLM) and a policy execution engine (PEEGN), with the help of the interconnection between the components/units of the system [300].
[0061] The system [300] comprises a data storage unit [302]. The system further comprises a processing unit [304] connected with the data storage unit [302]. The processing unit [304] is configured to send, from a policy execution engine (PEEGN) [1088], an event trigger to a virtual network function lifecycle manager (VLM) [1042] for performing one or more actions on a virtual network function (VNF) through an interface [306].
[0062] The processing unit [304] sends, from the PEEGN [1088] to the VLM [1042], the event trigger for performing one or more actions on the VNF. In one example, event triggers are sent when a specific event has occurred, for example when there is high CPU or random-access memory usage associated with the virtual network function (VNF). In an exemplary aspect, when a specific event has occurred, the network administrator inputs the event trigger using the interface for performing one or more actions in order to resolve the issues caused by the event.
[0063] In an exemplary aspect, the one or more services comprise at least: the virtual network function lifecycle manager (VLM) [1042], and the PEEGN [1088].

[0064] As used herein, the VNF lifecycle manager [1042] is responsible for determining on which server of the communication network the microservice will be instantiated. The VNF lifecycle manager [1042] will manage the overall flow of incoming/outgoing requests during interaction with the user.
[0065] As used herein, the PEEGN [1088] is responsible for managing all the
policies associated with VNF.
[0066] In an exemplary aspect, the one or more actions comprise at least one of: scaling of the VNF, healing of the VNF, and termination of the VNF.

[0067] In one example, scaling of the VNF means adjusting the resources allocated to a VNF. The resources may include, but are not limited to, CPU, RAM, storage, etc.

[0068] In an exemplary aspect, the scaling of the VNF corresponds to optimizing utilization of one or more resources for the VNF, wherein the scaling comprises scale-in action and scale-out action for the VNF.

[0069] In an exemplary aspect, the scaling includes a scale-in action for the VNF, which refers to the action of reducing the number of resources, such as CPU, memory, or storage, allocated to the VNF, if there is a decrease in demand or workload.
[0070] Similarly, the scaling includes a scale-out action for the VNF, which refers to the action of increasing the number of resources, such as CPU, memory, or storage, allocated to the VNF, in order to handle increased demand for the VNF.

[0071] In an exemplary aspect, the resources may be associated with the VNF itself. In another example, resources may be associated with virtual network function components (VNFC)/containerized network function (CNF)/containerized network function components (CNFC), and further, for each of them, CPU, memory or storage can be reduced using the scale-in action and enhanced using the scale-out action.
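By way of illustration only, the scale-in and scale-out adjustment of CPU, memory and storage described above may be sketched as follows; the data structure and function names are hypothetical and do not form part of the disclosure:

```python
# Illustrative sketch (hypothetical names): adjusting the resources
# allocated to a VNF via scale-in and scale-out actions.
from dataclasses import dataclass


@dataclass
class VnfResources:
    cpu: int          # vCPU cores
    ram_gb: int       # memory in GB
    storage_gb: int   # storage in GB


def scale_out(res: VnfResources, step: VnfResources) -> VnfResources:
    """Increase allocations to handle increased demand for the VNF."""
    return VnfResources(res.cpu + step.cpu,
                        res.ram_gb + step.ram_gb,
                        res.storage_gb + step.storage_gb)


def scale_in(res: VnfResources, step: VnfResources) -> VnfResources:
    """Reduce allocations when demand or workload decreases."""
    return VnfResources(max(1, res.cpu - step.cpu),
                        max(1, res.ram_gb - step.ram_gb),
                        max(1, res.storage_gb - step.storage_gb))
```

The same adjustment would apply per VNFC/CNF/CNFC in the componentized case.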
[0072] In an exemplary aspect, the healing of the VNF corresponds to restoring of a failed VNF based on one or more healing policies.

[0073] In an exemplary aspect, healing of the VNF means automatically recovering or restoring the virtual network function (VNF) when it encounters issues or failures, in order to restore normal operations quickly and minimize downtime or performance degradation.

[0074] In an exemplary aspect, to provide fault tolerance for any event failure, the interface [306] works in a high-availability mode, and if one policy execution engine (PEEGN) [1088] instance goes down during request processing, then the next available instance will take care of the request.
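The high-availability behaviour described above may be sketched, purely as an assumption about one possible failover scheme, as follows (instance names and the dispatch function are hypothetical):

```python
# Illustrative sketch (assumption): if one PEEGN instance is down during
# request processing, the next available instance takes over the request.
def dispatch(request: str, instances: list) -> str:
    """instances: list of (name, alive) pairs in priority order."""
    for name, alive in instances:
        if alive:
            return f"{name} handled {request}"   # next available instance serves
    raise RuntimeError("no PEEGN instance available")
```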
[0075] In an exemplary aspect, to perform the healing of the VNF, the processing unit [304] is further configured to transmit, by the PEEGN [1088], one or more healing policies to the VLM [1042] to one of: restart or migrate a VNF instance to a host upon sending the event trigger for healing the VNF.
[0076] The processing unit [304] transmits one or more healing policies to the VLM [1042] to either restart or migrate the VNF instance to a host upon sending the event trigger for healing the VNF. In one example, restarting involves starting the failed VNF instance again. In another example, migrating or switching the VNF instance involves shifting the VNF instance from one host to another, e.g., from an overloaded server to a less utilized one, to balance the load and improve the overall performance of the system [300].
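The restart-or-migrate choice described above may be sketched as follows; the policy values, record fields and host-selection rule are hypothetical assumptions, not the disclosed implementation:

```python
# Illustrative sketch (hypothetical schema): heal a failed VNF instance by
# restarting it in place, or by migrating it to the least-utilized host.
def heal(instance: dict, hosts: dict, policy: str = "restart") -> dict:
    """instance: record with 'host' and 'state'; hosts: host -> load (0..1)."""
    if policy == "restart":
        instance["state"] = "running"          # start the failed instance again
    elif policy == "migrate":
        target = min(hosts, key=hosts.get)     # least-utilized host
        instance["host"] = target              # shift instance to balance load
        instance["state"] = "running"
    return instance
```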
[0077] The processing unit [304] is further configured to send, from the PEEGN [1088], an update instance status event to the VLM [1042] for the healing.

[0078] Upon receiving the healing policies from the PEEGN, the processing unit [304] sends the update instance status event for the healing from the PEEGN [1088] to the VLM [1042].
[0079] In an exemplary aspect, for the termination of the VNF, the processing unit [304] is further configured to send, from the VLM [1042], a free VNF resource event to the PEEGN [1088] to unreserve one or more resources at a physical and virtual inventory manager (PVIM) [1050].

[0080] In an exemplary aspect, if a VNF is to be terminated, the processing unit [304] sends, from the VLM [1042] to the PEEGN [1088], the free VNF resource event suggesting that the VNF is not in use and thus there is the need to unreserve the resources that were allocated to it.

[0081] In an exemplary aspect, the processing unit [304] is further configured to send, from the PEEGN [1088], a free allocated resource event to the PVIM [1050] for requesting the one or more resources from an allocation pool to a free pool related to the VNF.
[0082] On receiving the free VNF resource event at the PEEGN [1088], the processing unit [304] further sends, from the PEEGN [1088] to the PVIM [1050], the free allocated resource event for releasing resources that were previously allocated to the VNF to be terminated. In an exemplary aspect, the free allocated resource event indicates that certain resources, such as but not limited to CPU, RAM, storage, etc., that were dedicated to the VNF are no longer needed and need to be moved from an allocation pool, i.e., where resources are currently assigned to active VNF instances, to a free pool, where they are readily available as and when required.
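The allocation-pool to free-pool movement described above may be sketched as follows; the pool representation and function name are hypothetical assumptions for illustration only:

```python
# Illustrative sketch (hypothetical names): on a free-allocated-resource
# event, the PVIM moves the terminated VNF's resources from the allocation
# pool back to the free pool so they are available for other instances.
def free_allocated_resources(vnf_id: str, allocation_pool: dict, free_pool: list) -> bool:
    """Release resources held by vnf_id; returns True as the acknowledgement."""
    resources = allocation_pool.pop(vnf_id, None)
    if resources is None:
        return False                # nothing was reserved for this VNF
    free_pool.extend(resources)     # resources become readily available again
    return True                     # event acknowledgement back to the PEEGN
```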
[0083] In an exemplary aspect, the processing unit [304] is further configured to receive, at the PEEGN [1088], from the PVIM [1050], an event acknowledgment after releasing the one or more resources for the VNF.

[0084] On receiving the free allocated resource event at the PVIM [1050] and releasing the one or more resources for the VNF, the processing unit [304] receives, from the PVIM [1050] at the PEEGN [1088], an event acknowledgement confirming that the resources have been successfully released and are readily available as and when required for other VNF instances.
[0085] In an exemplary aspect, the processing unit [304] is further configured to send, from the PEEGN [1088], a response back to the VLM [1042], upon receiving the event acknowledgement from the PVIM [1050].

[0086] On receiving the event acknowledgement at the PEEGN [1088] from the PVIM [1050], the processing unit [304] sends the response back to the VLM [1042], indicating that the termination process is complete and that the resources have been freed.

[0087] In one example, the termination further involves migrating or switching the VNF instance, i.e., shifting the VNF instance from one host to another, e.g., from an overloaded server to a less utilized one, to balance the load and improve the overall performance of the system [300].
[0088] In an exemplary aspect, the interface is a PE_VN interface, wherein the PE_VN interface is used when the one or more actions on the VNF are being performed.

[0089] In one example, the PE_VN interface acts as a communication bridge between the VLM [1042] and the PEEGN [1088], where one or more actions on the VNF are performed.

[0090] In an exemplary aspect, the PE_VN interface uses an asynchronous, event-based implementation to utilize the interface efficiently, using HTTP-based requests which carry information, such as but not limited to JSON/XML payloads, via HTTP.
[0091] Furthermore, the PEEGN [1088] and the VLM [1042] are communicatively coupled using the PE_VN interface. The PE_VN interface can comprise at least one of HTTP and web-socket based connections. In an embodiment, the PE_VN interface is configured to facilitate exchange of information using a hypertext transfer protocol (HTTP) REST application programming interface (API). In an embodiment, the HTTP REST API is used in conjunction with JSON and/or XML communication media. In another embodiment, the PE_VN interface is configured to facilitate exchange of information by establishing a web-socket connection between the PEEGN [1088] and the VLM [1042]. A web-socket connection may involve establishing persistent connectivity between the PEEGN [1088] and the VLM [1042]. An example of the web-socket based communication includes, without limitation, a transmission control protocol (TCP) connection. In such a connection, information, such as operational status, health, etc. of different components may be exchanged through the interface using a ping-pong based communication.
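The JSON-over-HTTP exchange on the PE_VN interface may be sketched as follows; the event names and field layout are hypothetical assumptions, as the disclosure does not specify the message schema:

```python
# Illustrative sketch (hypothetical field names): an asynchronous,
# event-based exchange over the PE_VN interface, carrying the event
# trigger as a JSON body of an HTTP request and returning an event
# acknowledgement as the JSON response.
import json


def build_event_trigger(action: str, vnf_id: str) -> str:
    """PEEGN side: serialise an event trigger for an HTTP POST to the VLM."""
    return json.dumps({"event": "EVENT_TRIGGER", "action": action, "vnf_id": vnf_id})


def handle_event(body: str) -> str:
    """VLM side: perform the requested action, then acknowledge."""
    event = json.loads(body)
    # ... the one or more actions (scale/heal/terminate) would run here ...
    return json.dumps({"event": "EVENT_ACK", "vnf_id": event["vnf_id"],
                       "status": "success"})
```

An XML payload or a web-socket ping-pong channel could carry the same information, per the embodiments above.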
[0092] The processing unit [304] is further configured to receive, at the PEEGN [1088], from the VLM [1042], an event acknowledgement as a response after performing the one or more actions on the VNF.

[0093] The processing unit [304] receives, at the PEEGN [1088], from the VLM [1042], the event acknowledgement as the response after performing the one or more actions on the VNF. In an exemplary aspect, the acknowledgment serves as feedback that the requested actions on the VNF have been successfully completed.

[0094] The processing unit [304] is further configured to store, at the data storage unit [302], data related to the event trigger and event acknowledgment.

[0095] The processing unit [304] stores data related to the event trigger and event acknowledgment at the data storage unit [302] for troubleshooting any event that may arise in the future, for quick resolution.
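The trigger/acknowledgment record kept for troubleshooting may be sketched as follows; the storage schema and class name are hypothetical, since the disclosure does not define the record format:

```python
# Illustrative sketch (hypothetical schema): persisting each event trigger
# together with its acknowledgement so that past events can be inspected
# for troubleshooting and quick resolution.
import time


class EventStore:
    def __init__(self):
        self._rows = []

    def record(self, trigger: dict, ack: dict) -> None:
        """Store one trigger/acknowledgement pair with a timestamp."""
        self._rows.append({"ts": time.time(), "trigger": trigger, "ack": ack})

    def history(self, vnf_id: str) -> list:
        """Return all stored events for a given VNF, for troubleshooting."""
        return [r for r in self._rows if r["trigger"].get("vnf_id") == vnf_id]
```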
[0096] The processing unit [304] is further configured to send, from the PEEGN [1088], a response associated with the event trigger to a NFV Platform Decision Analytics (NPDA) [1096].

[0097] The processing unit [304] sends the response associated with the event trigger from the PEEGN [1088] to the NPDA [1096] for performing one or more actions, specifically actions related to scaling and healing.

[0098] In an exemplary aspect, all the requests and responses are in JSON format using a REST API.
[0099] Referring to FIG. 4, an exemplary method flow diagram [400] for managing one or more services, in accordance with exemplary implementations of the present disclosure, is shown. In an implementation, the method [400] is performed by the system [300]. Further, in an implementation, the system [300] may be present in a server device to implement the features of the present disclosure. Also, as shown in FIG. 4, the method [400] starts at step [402].

[0100] At step [404], the method [400] comprises sending, by the processing unit [304] from a policy execution engine (PEEGN) [1088], an event trigger to a virtual network function lifecycle manager (VLM) [1042] for performing one or more actions on a virtual network function (VNF) through an interface [306].
[0101] The processing unit [304] sends, from the PEEGN [1088] to the VLM [1042], the event trigger for performing one or more actions on the VNF. In one example, event triggers are sent when a specific event has occurred, for example when there is high CPU or random-access memory usage associated with the virtual network function (VNF). In an exemplary aspect, when a specific event has occurred, the network administrator inputs the event trigger using the interface for performing one or more actions in order to resolve the issues caused by the event.

[0102] In an exemplary aspect, the one or more services comprise at least: the virtual network function lifecycle manager (VLM) [1042], and the PEEGN [1088].

[0103] As used herein, the VNF lifecycle manager [1042] is responsible for determining on which server of the communication network the microservice will be instantiated. The VNF lifecycle manager [1042] will manage the overall flow of incoming/outgoing requests during interaction with the user.

[0104] As used herein, the PEEGN [1088] is responsible for managing all the policies associated with the VNF.
[0105] In an exemplary aspect, the one or more actions comprise at least one of: scaling of the VNF, healing of the VNF, and termination of the VNF.

[0106] In one example, scaling of the VNF means adjusting the resources allocated to the VNF. The resources may include, but are not limited to, CPU, RAM, storage, etc.

[0107] In an exemplary aspect, the scaling of the VNF corresponds to optimizing utilization of one or more resources for the VNF, wherein the scaling comprises scale-in action and scale-out action for the VNF.

[0108] In an exemplary aspect, the scaling includes a scale-in action for the VNF, which refers to the action of reducing the number of resources, such as CPU, memory, or storage, allocated to the VNF, if there is a decrease in demand or workload.
[0109] Similarly, the scaling includes a scale-out action for the VNF, which refers to the action of increasing the number of resources, such as CPU, memory, or storage, allocated to the VNF, in order to handle increased demand for the VNF.

[0110] In an exemplary aspect, the resources may be associated with the VNF itself. In another example, resources may be associated with virtual network function components (VNFC)/containerized network function (CNF)/containerized network function components (CNFC), and further, for each of them, CPU, memory or storage can be reduced using the scale-in action and enhanced using the scale-out action.
[0111] In an exemplary aspect, the healing of the VNF corresponds to restoring of a failed VNF based on one or more healing policies.

[0112] In an exemplary aspect, healing of the VNF means automatically recovering or restoring the virtual network function (VNF) when it encounters issues or failures, in order to restore normal operations quickly and minimize downtime or performance degradation.

[0113] In an exemplary aspect, to provide fault tolerance for any event failure, the interface [306] works in a high-availability mode, and if one policy execution engine (PEEGN) [1088] instance goes down during request processing, then the next available instance will take care of the request.
[0114] In an exemplary aspect, for the healing of the VNF, the method further comprises transmitting, by the PEEGN [1088], one or more healing policies to the VLM [1042] to one of: restart or migrate a VNF instance to a host upon sending the event trigger for healing the VNF.

[0115] The processing unit [304] transmits one or more healing policies to the VLM [1042] to either restart or migrate the VNF instance to a host upon sending the event trigger for healing the VNF. In one example, restarting involves starting the failed VNF instance again. In another example, migrating the VNF instance involves shifting the VNF instance from one host to another, e.g., from an overloaded server to a less utilized one, to balance the load and improve the overall performance of the system [300].
[0116] The method further comprises sending, from the PEEGN, an update instance status event to the VLM for the healing.

[0117] Upon receiving the healing policies from the PEEGN, the processing unit [304] sends the update instance status event for the healing from the PEEGN [1088] to the VLM [1042].

[0118] In an exemplary aspect, for the termination of the VNF, the method [400] further comprises sending, from the VLM [1042], a free VNF resource event to the PEEGN [1088] to unreserve one or more resources at a physical and virtual inventory manager (PVIM) [1050].

[0119] In an exemplary aspect, if a VNF is to be terminated, the processing unit [304] sends, from the VLM [1042] to the PEEGN [1088], the free VNF resource event suggesting that the VNF is not in use and thus there is the need to unreserve the resources that were allocated to it.

[0120] The method [400] further comprises sending, from the PEEGN [1088], a free allocated resource event to the PVIM [1050] for requesting the one or more resources from an allocation pool to a free pool related to the VNF.
[0121] On receiving the free VNF resource event at the PEEGN [1088], the processing unit [304] further sends, from the PEEGN [1088] to the PVIM [1050], the free allocated resource event for releasing resources that were previously allocated to the VNF to be terminated. In an exemplary aspect, the free allocated resource event indicates that certain resources, such as but not limited to CPU, RAM, storage, etc., that were dedicated to the VNF are no longer needed and need to be moved from an allocation pool, i.e., where resources are currently assigned to active VNF instances, to a free pool, where they are readily available as and when required.
[0122] The method [400] further comprises receiving, at the PEEGN [1088], from the PVIM [1050], an event acknowledgment, after releasing the one or more resources for the VNF.

[0123] On receiving the free allocated resource event at the PVIM [1050] and releasing the one or more resources for the VNF, the processing unit [304] receives, from the PVIM [1050] at the PEEGN [1088], an event acknowledgement confirming that the resources have been successfully released and are readily available as and when required for other VNF instances.

[0124] The method [400] further comprises sending, from the PEEGN [1088], a response back to the VLM [1042], upon receiving the event acknowledgement from the PVIM [1050].
[0125] On receiving the event acknowledgement at the PEEGN [1088] from the PVIM [1050], the processing unit [304] sends the response back to the VLM [1042], indicating that the termination process is complete and that the resources have been freed.
[0126] In one example, the termination further involves migrating or switching the VNF instance, i.e., shifting the VNF instance from one host to another, e.g., from an overloaded server to a less utilized one, to balance the load and improve the overall performance of the system [300].
[0128] In an exemplary aspect, the interface is a PE_VN interface. The PE_VN interface is used when the one or more actions on the VNF are being performed.

[0129] In one example, the PE_VN interface acts as a communication bridge between the VLM [1042] and the PEEGN [1088], where one or more actions on the VNF are performed. In addition, the PE_VN interface uses an asynchronous, event-based implementation to utilize the interface efficiently, using HTTP-based requests which carry information, such as but not limited to JSON/XML payloads, via HTTP.
[0130] Furthermore, the PEEGN [1088] and the VLM [1042] are communicatively coupled using the PE_VN interface. The PE_VN interface can comprise at least one of HTTP and web-socket based connections. In an embodiment, the PE_VN interface is configured to facilitate exchange of information using a hypertext transfer protocol (HTTP) REST application programming interface (API). In an embodiment, the HTTP REST API is used in conjunction with JSON and/or XML communication media. In another embodiment, the PE_VN interface is configured to facilitate exchange of information by establishing a web-socket connection between the PEEGN [1088] and the VLM [1042]. A web-socket connection may involve establishing persistent connectivity between the PEEGN [1088] and the VLM [1042]. An example of the web-socket based communication includes, without limitation, a transmission control protocol (TCP) connection. In such a connection, information, such as operational status, health, etc. of different components may be exchanged through the interface using a ping-pong-based communication.
[0131] At step [406], the method [400] further comprises receiving, by the processing unit [304] at the PEEGN [1088], from the VLM [1042], an event acknowledgement as a response after performing the one or more actions on the VNF.

[0132] The processing unit [304] receives, at the PEEGN [1088], from the VLM [1042], the event acknowledgement as the response after performing the one or more actions on the VNF. In an exemplary aspect, the acknowledgment serves as feedback that the requested actions on the VNF have been successfully completed.

[0133] At step [408], the method [400] further comprises storing, by the processing unit [304] at a data storage unit [302], data related to the event trigger and event acknowledgment.
[0134] The processing unit [304] stores data related to the event trigger and event acknowledgment at the data storage unit [302] for troubleshooting any event that may arise in the future, for quick resolution.

[0135] At step [410], the method [400] further comprises sending, by the processing unit [304] from the PEEGN, a response associated with the event trigger to a NFV Platform Decision Analytics (NPDA) [1096].

[0136] The processing unit [304] sends the response associated with the event trigger from the PEEGN [1088] to the NPDA [1096] for performing one or more actions, specifically actions related to scaling and healing.

[0137] In an exemplary aspect, all the requests and responses are in JSON format using a REST API.

[0138] At step [412], the method [400] terminates.
[0139] Referring to FIG. 5, an exemplary block diagram of a system architecture [500] for managing one or more services is shown, in accordance with the exemplary implementations of the present disclosure.

[0140] The system architecture [500] comprises a PEEGN [1088] provided to send an event trigger to a virtual network function lifecycle manager (VLM) [1042] for performing one or more actions on a virtual network function (VNF) through an interface [306]. In an exemplary aspect, the one or more actions comprise at least one of: scaling of the VNF, healing of the VNF, and termination of the VNF.

[0141] The VLM [1042] sends to the PEEGN [1088] an event acknowledgement as a response after performing the one or more actions on the VNF.
[0142] The system architecture [500] further comprises the database (also referred to herein as the data storage unit [302]) provided to store data related to the event trigger and event acknowledgment for troubleshooting any event that may arise in the future, for quick resolution.

[0143] The PEEGN [1088] further sends a response associated with the event trigger back to the VLM [1042], which further sends it to the NFV Platform Decision Analytics (NPDA) [1096] for performing one or more actions, specifically actions related to scaling and healing.

[0144] Referring to FIG. 6, an exemplary process flow diagram [600] for managing one or more services for the termination of the VNF is shown.
[0145] At step S1, the process [600] comprises sending, from the VLM [1042], the free VNF resource event to the PEEGN [1088] to unreserve or free one or more resources at a physical and virtual inventory manager (PVIM) [1050] in case of termination of the VNF.

[0146] At step S2, the process [600] further comprises sending, from the PEEGN [1088], a free allocated resource event to the PVIM [1050] for requesting the one or more resources from an allocation pool to a free pool related to the VNF.

[0147] At step S3, the process [600] further comprises receiving, at the PEEGN [1088], from the PVIM [1050], the event acknowledgment response after releasing the one or more resources for the VNF.

[0148] At step S4, upon receiving the event acknowledgement from the PVIM [1050], the process [600] further comprises sending, from the PEEGN [1088], a response back to the VLM [1042].
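The four-step termination flow of FIG. 6 may be sketched as an ordered message trace; the message names and trace format are hypothetical assumptions used only to illustrate the S1 to S4 sequence:

```python
# Illustrative sketch (hypothetical message names): S1 free-VNF-resource
# event from the VLM, S2 free-allocated-resource event to the PVIM,
# S3 acknowledgement from the PVIM, S4 response back to the VLM.
def terminate_vnf(vnf_id: str, pvim_release) -> list:
    """Return the ordered message trace for the termination flow."""
    trace = [("S1", "VLM->PEEGN", "FREE_VNF_RESOURCE", vnf_id)]
    trace.append(("S2", "PEEGN->PVIM", "FREE_ALLOCATED_RESOURCE", vnf_id))
    released = pvim_release(vnf_id)            # PVIM unreserves the resources
    trace.append(("S3", "PVIM->PEEGN", "EVENT_ACK", released))
    trace.append(("S4", "PEEGN->VLM", "RESPONSE", "terminated"))
    return trace
```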
[0149] Referring to FIG. 7, an exemplary process flow diagram [700] for managing one or more services, in accordance with exemplary implementations of the present disclosure, is shown. Also, as shown in FIG. 7, the process [700] starts at step [702].

[0150] At step [704], the process [700] comprises performing one or more actions, such as a healing and scaling action, on the resources associated with the VNF. In an exemplary aspect, a PE_VN interface is provided, which acts as a communication bridge between the VLM [1042] and the PEEGN [1088], where one or more actions on the VNF are performed.
[0151] At step [706], for this, the process [700] comprises sending, by the PEEGN [1088], an event trigger in the form of TRIGGER_VNF_SCALING/TRIGGER_VNFC_SCALING to the VLM for scaling the VNF/VNFC.

[0152] Furthermore, the process [700] comprises updating the VNF instance by inputting a command in the form of UPDATE_VNF_INSTANCE_STATUS for healing the VNF.

[0153] At step [708], the process [700] comprises sending, by the VLM [1042], an event acknowledgement as a response to the PEEGN [1088] after scaling or healing the VNF.

[0154] At step [710], the process [700] comprises storing, by the PEEGN [1088], the details/data in its database (DB) (the data storage unit [302]) and further sending a scaling response to the NPDA [1096].
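The dispatch of the event triggers named in FIG. 7 may be sketched as follows; the event names TRIGGER_VNF_SCALING, TRIGGER_VNFC_SCALING and UPDATE_VNF_INSTANCE_STATUS appear in the disclosure, while the function, acknowledgement strings and DB representation are hypothetical:

```python
# Illustrative sketch: dispatch a FIG. 7 event trigger, record the
# acknowledgement in the PEEGN's DB, and return the response that would
# be forwarded to the NPDA.
def run_action(event: str, db: list) -> str:
    scaling = {"TRIGGER_VNF_SCALING", "TRIGGER_VNFC_SCALING"}
    if event in scaling:
        ack = "SCALED"            # VLM scales the VNF/VNFC, then acknowledges
    elif event == "UPDATE_VNF_INSTANCE_STATUS":
        ack = "HEALED"            # VLM heals the VNF, then acknowledges
    else:
        raise ValueError(f"unknown event {event}")
    db.append((event, ack))       # PEEGN stores the details in its DB
    return ack                    # response forwarded to the NPDA
```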
[0155] At step [712], the process [700] terminates.
[0156] The present disclosure further discloses a non-transitory computer readable storage medium storing instructions for managing one or more services, the instructions including executable code which, when executed by one or more units of a system, causes a processing unit to send, from a policy execution engine (PEEGN), an event trigger to a virtual network function lifecycle manager (VLM) for performing one or more actions on a virtual network function (VNF) through an interface. The executable code, when executed, further causes the processing unit to receive, at the PEEGN, from the VLM, an event acknowledgement as a response after performing the one or more actions on the VNF. The executable code, when executed, further causes the processing unit to store, at the data storage unit, data related to the event trigger and event acknowledgment. The executable code, when executed, further causes the processing unit to send, from the PEEGN, a response associated with the event trigger to a NFV Platform Decision Analytics (NPDA).
[0157] As is evident from the above, the present disclosure provides a technically advanced solution for managing one or more services. The present invention provides a solution for enabling the PEEGN to send a VNF/VNFC scaling request to the VLM at run time using the PE_VN interface between the PEEGN and the VLM after successfully reserving resources at the PVIM. Furthermore, the present invention provides a solution for enabling the PEEGN to 1) send a VNF healing event to the VLM, and 2) instruct the VLM to perform the required action(s) for the VNF based on a healing policy. Furthermore, the present invention provides a solution for decreasing time consumption at the PEEGN while performing certain operations by providing direct communication with the VLM at run time. Furthermore, the present invention provides a solution for deleting VNF resources at the PVIM side, as instructed by the PEEGN microservice during termination of a VNF instance on cloud infrastructure (VIM).
[0158] Further, in accordance with the present disclosure, it is to be acknowledged that the functionality described for the various components/units can be implemented interchangeably. While specific embodiments may disclose a particular functionality of these units for clarity, it is recognized that various configurations and combinations thereof are within the scope of the disclosure. The functionality of specific units as disclosed in the disclosure should not be construed as limiting the scope of the present disclosure. Consequently, alternative arrangements and substitutions of units, provided they achieve the intended functionality described herein, are considered to be encompassed within the scope of the present disclosure.
[0159] While considerable emphasis has been placed herein on the disclosed
implementations, it will be appreciated that many changes can be made to the
implementations without departing from the principles of the present disclosure.
These and other changes in the implementations of the present disclosure will be
apparent to those skilled in the art, whereby it is to be understood that the foregoing
descriptive matter is illustrative and non-limiting.
We claim:
1. A method for managing one or more services, the method comprising:
- sending, by a processing unit [304] from a policy execution engine
(PEEGN) [1088], an event trigger to a virtual network function lifecycle manager
(VLM) [1042] for performing one or more actions on a virtual network function
(VNF) through an interface [306];
- receiving, by the processing unit [304] at the PEEGN [1088], from the VLM
[1042], an event acknowledgement as a response after performing the one or more
actions on the VNF;
- storing, by the processing unit [304] at a data storage unit [302], data related
to the event trigger and event acknowledgment; and
- sending, by the processing unit [304] from the PEEGN [1088], a response
associated with the event trigger to an NFV Platform Decision Analytics (NPDA)
[1096].
2. The method as claimed in claim 1, wherein the one or more services
comprise at least: the virtual network function lifecycle manager (VLM) [1042],
and the PEEGN [1088].
3. The method as claimed in claim 1, wherein the one or more actions
comprise at least one of: scaling of the VNF, healing of the VNF, and termination
of the VNF.
4. The method as claimed in claim 1, wherein the interface is a PE_VN
interface, wherein the PE_VN interface is used when the one or more actions on the
VNF are being performed.
5. The method as claimed in claim 3, wherein the healing of the VNF
corresponds to restoring of a failed VNF based on one or more healing policies.
6. The method as claimed in claim 5, wherein, for the healing of the VNF, the
method further comprises:
- transmitting, by the PEEGN [1088], one or more healing policies to the
VLM [1042] to one of: restart or migrate a VNF instance to a host upon sending the
event trigger for healing the VNF; and
- sending, from the PEEGN [1088], an update instance status event to the
VLM [1042] for healing.
7. The method as claimed in claim 3, wherein the scaling of the VNF
corresponds to optimizing utilization of one or more resources for the VNF, wherein
the scaling comprises scale-in action and scale-out action for the VNF.
8. The method as claimed in claim 3, wherein for the termination of the VNF,
the method further comprises:
- sending, from the VLM [1042], a free VNF resource event to the PEEGN
[1088] to unreserve one or more resources at a physical and virtual inventory
manager (PVIM) [1050];
- sending, from the PEEGN [1088], a free allocated resource event to the
PVIM [1050] requesting movement of the one or more resources from an allocation
pool to a free pool related to the VNF;
- receiving, at the PEEGN [1088], from the PVIM [1050], an event
acknowledgment, after releasing the one or more resources for the VNF; and
- sending, from the PEEGN [1088], a response back to the VLM [1042], upon
receiving the event acknowledgement from the PVIM [1050].
9. A system for managing one or more services, the system comprising:
a data storage unit [302]; and
a processing unit [304] connected with the data storage unit [302], wherein the
processing unit [304] is configured to:
- send, from a policy execution engine (PEEGN) [1088], an event trigger to
a virtual network function lifecycle manager (VLM) [1042] for performing one or
more actions on a virtual network function (VNF) through an interface;
- receive, at the PEEGN [1088], from the VLM [1042], an event
acknowledgement as a response after performing the one or more actions on the
VNF;
- store, at the data storage unit [302], data related to the event trigger and
event acknowledgment; and
- send, from the PEEGN [1088], a response associated with the event trigger
to an NFV Platform Decision Analytics (NPDA) [1096].
10. The system as claimed in claim 9, wherein the one or more services
comprise at least: the virtual network function lifecycle manager (VLM) [1042],
and the PEEGN [1088].
11. The system as claimed in claim 9, wherein the one or more actions
comprise at least one of: scaling of the VNF, healing of the VNF, and termination
of the VNF.
12. The system as claimed in claim 9, wherein the interface is a PE_VN
interface, wherein the PE_VN interface is used when the one or more actions on the
VNF are being performed.
13. The system as claimed in claim 11, wherein the healing of the VNF
corresponds to restoring of a failed VNF based on one or more healing policies.
14. The system as claimed in claim 11, wherein, to perform the healing of the
VNF, the processing unit [304] is further configured to:
- transmit, by the PEEGN [1088], one or more healing policies to the VLM
[1042] to one of: restart or migrate a VNF instance to a host upon sending the event
trigger for healing the VNF; and
- send, from the PEEGN [1088], an update instance status event to the VLM
[1042] for the healing.
15. The system as claimed in claim 11, wherein the scaling of the VNF
corresponds to optimizing utilization of one or more resources for the VNF, wherein
the scaling comprises scale-in action and scale-out action for the VNF.
16. The system as claimed in claim 11, wherein for the termination of the VNF,
the processing unit is further configured to:
- send, from the VLM [1042], a free VNF resource event to the PEEGN
[1088] to unreserve one or more resources at a physical and virtual inventory
manager (PVIM) [1050];
- send, from the PEEGN [1088], a free allocated resource event to the PVIM
[1050] requesting movement of the one or more resources from an allocation pool
to a free pool related to the VNF;
- receive, at the PEEGN [1088], from the PVIM [1050], an event
acknowledgment, after releasing the one or more resources for the VNF; and
- send, from the PEEGN [1088], a response back to the VLM [1042], upon
receiving the event acknowledgement from the PVIM.
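The resource-release handshake recited in claims 8 and 16 can be sketched as follows. This is illustrative Python under assumed names (Pvim, Peegn, and the event-handler methods are hypothetical stand-ins, not the claimed implementation):

```python
# Illustrative sketch of the VNF termination handshake (claims 8 and 16):
# the VLM raises a free VNF resource event at the PEEGN, the PEEGN asks
# the PVIM to move resources from the allocation pool to the free pool,
# and responds to the VLM once the PVIM acknowledges.
# All names here are hypothetical.

class Pvim:
    """Stand-in for the physical and virtual inventory manager (PVIM)."""
    def __init__(self, allocated):
        self.allocation_pool = list(allocated)
        self.free_pool = []

    def free_allocated_resources(self, resources):
        # Move each requested resource from the allocation pool to the free pool.
        for res in resources:
            self.allocation_pool.remove(res)
            self.free_pool.append(res)
        return "ACK"  # event acknowledgment back to the PEEGN

class Peegn:
    """Stand-in for the policy execution engine (PEEGN)."""
    def __init__(self, pvim):
        self.pvim = pvim

    def on_free_vnf_resource_event(self, resources):
        # 1. Send a free allocated resource event to the PVIM.
        ack = self.pvim.free_allocated_resources(resources)
        # 2. Upon acknowledgement, send a response back to the VLM.
        return "RELEASED" if ack == "ACK" else "FAILED"

pvim = Pvim(allocated=["cpu-1", "mem-1"])
peegn = Peegn(pvim)
# The VLM sends a free VNF resource event to the PEEGN on VNF termination:
status = peegn.on_free_vnf_resource_event(["cpu-1", "mem-1"])
print(status, pvim.free_pool)  # prints "RELEASED ['cpu-1', 'mem-1']"
```

The sketch captures only the ordering of the claimed events; pool bookkeeping, error handling, and message formats would follow the actual PVIM and VLM interfaces.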
| # | Name | Date |
|---|---|---|
| 1 | 202321066602-STATEMENT OF UNDERTAKING (FORM 3) [04-10-2023(online)].pdf | 2023-10-04 |
| 2 | 202321066602-PROVISIONAL SPECIFICATION [04-10-2023(online)].pdf | 2023-10-04 |
| 3 | 202321066602-POWER OF AUTHORITY [04-10-2023(online)].pdf | 2023-10-04 |
| 4 | 202321066602-FORM 1 [04-10-2023(online)].pdf | 2023-10-04 |
| 5 | 202321066602-FIGURE OF ABSTRACT [04-10-2023(online)].pdf | 2023-10-04 |
| 6 | 202321066602-DRAWINGS [04-10-2023(online)].pdf | 2023-10-04 |
| 7 | 202321066602-Proof of Right [09-02-2024(online)].pdf | 2024-02-09 |
| 8 | 202321066602-FORM-5 [04-10-2024(online)].pdf | 2024-10-04 |
| 9 | 202321066602-ENDORSEMENT BY INVENTORS [04-10-2024(online)].pdf | 2024-10-04 |
| 10 | 202321066602-DRAWING [04-10-2024(online)].pdf | 2024-10-04 |
| 11 | 202321066602-CORRESPONDENCE-OTHERS [04-10-2024(online)].pdf | 2024-10-04 |
| 12 | 202321066602-COMPLETE SPECIFICATION [04-10-2024(online)].pdf | 2024-10-04 |
| 13 | 202321066602-FORM 3 [08-10-2024(online)].pdf | 2024-10-08 |
| 14 | 202321066602-Request Letter-Correspondence [24-10-2024(online)].pdf | 2024-10-24 |
| 15 | 202321066602-Power of Attorney [24-10-2024(online)].pdf | 2024-10-24 |
| 16 | 202321066602-Form 1 (Submitted on date of filing) [24-10-2024(online)].pdf | 2024-10-24 |
| 17 | 202321066602-Covering Letter [24-10-2024(online)].pdf | 2024-10-24 |
| 18 | 202321066602-CERTIFIED COPIES TRANSMISSION TO IB [24-10-2024(online)].pdf | 2024-10-24 |
| 19 | Abstract.jpg | 2024-12-04 |
| 20 | 202321066602-ORIGINAL UR 6(1A) FORM 1 & 26-030125.pdf | 2025-01-07 |