
Method And System For Handling Of Event(s) Concerning Policies Related To One Or More Operations

Abstract: The present disclosure relates to a method and a system for handling of event(s) concerning policies related to one or more operations. The method comprises receiving, by a transceiver unit [302] at a policy execution engine (PEEGN) [1088], a request related to one or more events from one or more microservices. The method comprises performing, by a processing unit [304] at the PEEGN [1088], one or more logics. The method comprises operating, by an operations unit [306] at the PEEGN [1088], a physical data, and a logical data for storing a set of call flow details associated with each of the one or more events in a database [308]. The method comprises purging, by a purging unit [310] at the PEEGN [1088], the event information corresponding to each of the one or more events, from the database [308]. [FIG. 4]


Patent Information

Application #
Filing Date
04 October 2023
Publication Number
20/2025
Publication Type
INA
Invention Field
COMPUTER SCIENCE
Status
Parent Application

Applicants

Jio Platforms Limited
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India.

Inventors

1. Aayush Bhatnagar
Reliance Corporate Park, Thane-Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
2. Adityakar
Reliance Corporate Park, Thane-Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
3. Ankit Murarka
Reliance Corporate Park, Thane-Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
4. Yog Vashishth
Reliance Corporate Park, Thane-Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
5. Meenakshi Rani
Reliance Corporate Park, Thane-Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
6. Santosh Kumar Yadav
Reliance Corporate Park, Thane-Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
7. Jugal Kishore
Reliance Corporate Park, Thane-Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
8. Gaurav Saxena
Reliance Corporate Park, Thane-Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.

Specification

FORM 2
THE PATENTS ACT, 1970
(39 OF 1970)
&
THE PATENT RULES, 2003
COMPLETE SPECIFICATION
(See section 10 and rule 13)
“METHOD AND SYSTEM FOR HANDLING OF EVENT(S)
CONCERNING POLICIES RELATED TO ONE OR MORE
OPERATIONS”
We, Jio Platforms Limited, an Indian National, of Office - 101, Saffron, Nr. Centre
Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India.
The following specification particularly describes the invention and the manner in
which it is to be performed.
METHOD AND SYSTEM FOR HANDLING OF EVENT(S)
CONCERNING POLICIES RELATED TO ONE OR MORE
OPERATIONS

FIELD OF INVENTION

[0001] The present disclosure generally relates to network performance
management systems. More particularly, embodiments of the present disclosure
relate to methods and systems for handling of event(s) concerning policies related
to one or more operations.
BACKGROUND

[0002] The following description of the related art is intended to provide
background information pertaining to the field of the disclosure. This section may
include certain aspects of the art that may be related to various features of the
present disclosure. However, it should be appreciated that this section is used only
to enhance the understanding of the reader with respect to the present disclosure,
and not as admissions of the prior art.

[0003] In a communication network such as a 5G communication network, different
microservices perform different services, jobs and tasks in the network. Different
microservices have to perform their jobs based on operational parameters and
policies in such a manner that they do not affect the microservices' own operations
and service network operations. However, in a MANO system architecture, during
service operations, for fulfilling the requirements of policies and operational
parameters, it is required to provide sufficient resources for managing the virtual
network functions (VNF/VNFC) and/or containerized functions (CNF/CNFC)
components to handle service requests coming into the network. There are certain
challenges, such as excessive provisioning of resources, insufficient provisioning
of resources, resource failures, resource mismanagement, performance degradation,
conflicts during reservation and allocation of resources, unavailability of the Policy
Execution Engine Service, excessive time consumption in reservation and
allocation of VNF/VNFC/CNF/CNFC resources, and cost increments, which may
occur in the network and affect the network performance and operational
efficiency.

[0004] Thus, there exists an imperative need in the art to provide an efficient system
and method for handling event(s) concerning policies related to one or more
operations.
SUMMARY

[0005] This section is provided to introduce certain aspects of the present disclosure
in a simplified form that are further described below in the detailed description.
This summary is not intended to identify the key features or the scope of the claimed
subject matter.

[0006] An aspect of the present disclosure may relate to a method for handling of
event(s) concerning policies related to one or more operations. The method includes
receiving, by a transceiver unit at a policy execution engine (PEEGN), a request
related to the one or more events from one or more microservices. The method
further includes performing, by a processing unit at the PEEGN, one or more logics.
The method further includes operating, by an operations unit at the PEEGN, a
physical data, and a logical data for storing a set of call flow details associated with
each of the one or more events in a database. The method includes purging, by a
purging unit at the PEEGN, the event information corresponding to each of the one
or more events, from the database.
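The receive–process–store–purge sequence described above can be sketched in code. All class, method, and field names below are illustrative assumptions; the disclosure does not specify any concrete implementation of the PEEGN units:

```python
import uuid

class PolicyExecutionEngine:
    """Illustrative sketch of the PEEGN event-handling flow (assumed names)."""

    def __init__(self):
        self.database = {}  # stands in for the platform database

    def receive(self, event):
        # Transceiver unit: accept a request related to an event from a
        # microservice and assign it a unique identity.
        event_id = str(uuid.uuid4())
        return event_id, event

    def process(self, event):
        # Processing unit: perform one or more logics (placeholder logic).
        return {"status": "processed", "payload": event}

    def store(self, event_id, call_flow_details):
        # Operations unit: persist call flow details for the event.
        self.database[event_id] = call_flow_details

    def purge(self, event_id):
        # Purging unit: remove the event information from the database.
        self.database.pop(event_id, None)

engine = PolicyExecutionEngine()
eid, ev = engine.receive({"type": "instantiation"})
engine.store(eid, engine.process(ev))
engine.purge(eid)
```

The sketch only mirrors the order of the four method steps; real units would be separate services communicating over the network.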
[0007] In an exemplary aspect of the present disclosure, the method comprises
displaying, by a user interface at the PEEGN, at least one of: an indexed data, a
vector data, and a graphical data from the database, for facilitating at least one of a
create operation, an update operation, a delete operation, and a get operation.
[0008] In an exemplary aspect of the present disclosure, the one or more events
relate to at least one of an instantiation call flow, a scaling call flow, and a healing
call flow.

[0009] In an exemplary aspect of the present disclosure, the physical data is based
on information of a set of resources associated with the virtual functions and
containerized functions.

[0010] In an exemplary aspect of the present disclosure, the logical data is based
on the performance of one or more logics.
[0011] In an exemplary aspect of the present disclosure, prior to the operating, by
the operations unit at the PEEGN, the physical data, and the logical data, the method
comprises generating, by the processing unit at the PEEGN, a unique identity for
the one or more events, wherein the events comprise asynchronous requests.
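Generating a unique identity before the operating step, as described above, lets an asynchronous request be correlated across its call flow. A minimal sketch, assuming a UUID-based identity (the disclosure does not prescribe the identity format):

```python
import uuid

def assign_event_identity(event: dict) -> dict:
    """Attach a unique identity so an asynchronous request can be
    tracked across its call flow (illustrative helper)."""
    tagged = dict(event)
    tagged["event_id"] = str(uuid.uuid4())
    return tagged

# Two asynchronous requests for the same operation get distinct identities.
a = assign_event_identity({"call_flow": "scaling"})
b = assign_event_identity({"call_flow": "scaling"})
```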
[0012] In an exemplary aspect of the present disclosure, the asynchronous requests
are served based on at least one of virtual function policies, and containerized
function policies.

[0013] In an exemplary aspect of the present disclosure, the method comprises
storing, by a storage unit, a set of CRUD operations of one or more policies, the
one or more policies comprising one or more of VNF policies and CNF policies for
being used during logical call flows.
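The storage of CRUD operations over VNF/CNF policies mentioned above can be illustrated with a simple in-memory store. The class, policy names, and fields are assumptions for illustration only; the disclosure does not define a storage schema:

```python
class PolicyStore:
    """Illustrative in-memory store supporting CRUD operations
    on VNF/CNF policies (assumed structure)."""

    def __init__(self):
        self._policies = {}

    def create(self, name, policy):
        self._policies[name] = policy

    def read(self, name):
        return self._policies.get(name)

    def update(self, name, policy):
        if name in self._policies:
            self._policies[name] = policy

    def delete(self, name):
        self._policies.pop(name, None)

store = PolicyStore()
store.create("vnf-scaling", {"max_instances": 4})  # hypothetical policy
store.update("vnf-scaling", {"max_instances": 8})
```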
[0014] Another aspect of the present disclosure may relate to a system for handling
of event(s) concerning policies related to one or more operations. The system
comprises a transceiver unit configured to receive a request related to one or more
events from one or more microservices. The system further comprises a processing
unit connected to at least the transceiver unit. The processing unit is configured to
perform one or more logics. The system further comprises an operations unit
connected to at least the transceiver unit. The operations unit is configured to
operate a physical data, and a logical data for storing a set of call flow details
associated with each of the one or more events in a database. The system further
comprises a purging unit connected to at least the operations unit. The purging unit
is configured to purge the event information corresponding to each of the one or
more events, from the database.
[0015] Yet another aspect of the present disclosure may relate to a non-transitory
computer readable storage medium storing instructions for handling of event(s)
concerning policies related to one or more operations, the instructions including
executable code which, when executed by one or more units of a system, causes a
transceiver unit to receive a request related to one or more events from one or more
microservices. The executable code when executed further causes a processing unit
to perform one or more logics. The executable code when executed further causes
an operations unit to operate a physical data, and a logical data for storing
a set of call flow details associated with each of the one or more events in a database.
The executable code when executed further causes a purging unit to purge the event
information corresponding to each of the one or more events, from the database.
OBJECTS OF THE DISCLOSURE

[0016] Some of the objects of the present disclosure, which at least one
embodiment disclosed herein satisfies, are listed herein below.

[0017] It is an object of the present disclosure to provide a system and a method for
handling of event(s) concerning policies related to one or more operations.

[0018] It is another object of the present disclosure to provide a solution for
Instantiation, Scaling and Healing of virtual functions (VNFs/VNFCs) and
containerized functions (CNFs/CNFCs).
[0019] It is yet another object of the present disclosure to provide a solution for
providing an interface for Instantiation, Scaling, and Healing of virtual functions
(VNFs/VNFCs) and containerized functions (CNFs/CNFCs).

[0020] It is yet another object of the present disclosure to provide a solution for
providing zero data loss policies for instantiation, scaling and healing call flows, as
these flows have many asynchronous calls.

[0021] It is yet another object of the present disclosure to provide a solution to
provide a seamless user experience with a time complexity that enables quick
response to query events.

[0022] It is yet another object of the present disclosure to provide a single interface
to interact with all types of data.
BRIEF DESCRIPTION OF THE DRAWINGS

[0023] The accompanying drawings, which are incorporated herein, and constitute
a part of this disclosure, illustrate exemplary embodiments of the disclosed methods
and systems in which like reference numerals refer to the same parts throughout the
different drawings. Components in the drawings are not necessarily to scale,
emphasis instead being placed upon clearly illustrating the principles of the present
disclosure. Also, the embodiments shown in the figures are not to be construed as
limiting the disclosure, but the possible variants of the method and system
according to the disclosure are illustrated herein to highlight the advantages of the
disclosure. It will be appreciated by those skilled in the art that disclosure of such
drawings includes disclosure of electrical components or circuitry commonly used
to implement such components.

[0024] FIG. 1 illustrates an exemplary block diagram representation of a
management and orchestration (MANO) architecture.
[0025] FIG. 2 illustrates an exemplary block diagram of a computing device upon
which the features of the present disclosure may be implemented in accordance with
exemplary implementation of the present disclosure.

[0026] FIG. 3 illustrates an exemplary block diagram of a system for handling of
event(s) concerning policies related to one or more operations, in accordance with
exemplary implementations of the present disclosure.

[0027] FIG. 4 illustrates a method flow diagram for handling of event(s) concerning
policies related to one or more operations, in accordance with exemplary
implementations of the present disclosure.

[0028] FIG. 5 illustrates an exemplary block diagram of a system architecture for
handling of event(s) concerning policies related to one or more operations, in
accordance with exemplary implementations of the present disclosure.

[0029] FIG. 6 illustrates an exemplary block diagram of a PE_NS interface, in
accordance with exemplary implementations of the present disclosure.

[0030] FIG. 7 illustrates a process flow diagram for handling of event(s) concerning
policies related to one or more operations, in accordance with exemplary
implementations of the present disclosure.

[0031] FIG. 8 illustrates a process flow diagram for implementing PE_NS interface
policies, in accordance with exemplary implementations of the present disclosure.

[0032] FIG. 9 illustrates an exemplary call flow diagram for resource reservation
in a network during an instantiation operation, in accordance with exemplary
implementations of the present disclosure.

[0033] FIG. 10 illustrates an exemplary call flow diagram for resource reservation
in a network during a scaling operation, in accordance with exemplary
implementations of the present disclosure.
[0034] The foregoing shall be more apparent from the following more detailed
description of the disclosure.
DETAILED DESCRIPTION

[0035] In the following description, for the purposes of explanation, various
specific details are set forth in order to provide a thorough understanding of
embodiments of the present disclosure. It will be apparent, however, that
embodiments of the present disclosure may be practiced without these specific
details. Several features described hereafter may each be used independently of one
another or with any combination of other features. An individual feature may not
address any of the problems discussed above or might address only some of the
problems discussed above.
[0036] The ensuing description provides exemplary embodiments only, and is not
intended to limit the scope, applicability, or configuration of the disclosure. Rather,
the ensuing description of the exemplary embodiments will provide those skilled in
the art with an enabling description for implementing an exemplary embodiment.
It should be understood that various changes may be made in the function and
arrangement of elements without departing from the spirit and scope of the
disclosure as set forth.

[0037] Specific details are given in the following description to provide a thorough
understanding of the embodiments. However, it will be understood by one of
ordinary skill in the art that the embodiments may be practiced without these
specific details. For example, circuits, systems, processes, and other components
may be shown as components in block diagram form in order not to obscure the
embodiments in unnecessary detail.
[0038] Also, it is noted that individual embodiments may be described as a process
which is depicted as a flowchart, a flow diagram, a data flow diagram, a structure
diagram, or a block diagram. Although a flowchart may describe the operations as
a sequential process, many of the operations may be performed in parallel or
concurrently. In addition, the order of the operations may be re-arranged. A process
is terminated when its operations are completed but could have additional steps not
included in a figure.
[0039] The word “exemplary” and/or “demonstrative” is used herein to mean
serving as an example, instance, or illustration. For the avoidance of doubt, the
subject matter disclosed herein is not limited by such examples. In addition, any
aspect or design described herein as “exemplary” and/or “demonstrative” is not
necessarily to be construed as preferred or advantageous over other aspects or
designs, nor is it meant to preclude equivalent exemplary structures and techniques
known to those of ordinary skill in the art. Furthermore, to the extent that the terms
“includes,” “has,” “contains,” and other similar words are used in either the detailed
description or the claims, such terms are intended to be inclusive—in a manner
similar to the term “comprising” as an open transition word—without precluding
any additional or other elements.
[0040] As used herein, a “processing unit” or “processor” or “operating processor”
includes one or more processors, wherein processor refers to any logic circuitry for
processing instructions. A processor may be a general-purpose processor, a special
purpose processor, a conventional processor, a digital signal processor, a plurality
of microprocessors, one or more microprocessors in association with a Digital
Signal Processing (DSP) core, a controller, a microcontroller, Application Specific
Integrated Circuits, Field Programmable Gate Array circuits, any other type of
integrated circuits, etc. The processor may perform signal coding, data processing,
input/output processing, and/or any other functionality that enables the working of
the system according to the present disclosure. More specifically, the processor or
processing unit is a hardware processor.
[0041] As used herein, “a user equipment”, “a user device”, “a smart-user-device”,
“a smart-device”, “an electronic device”, “a mobile device”, “a handheld device”,
“a wireless communication device”, “a mobile communication device”, “a
communication device” may be any electrical, electronic and/or computing device
or equipment capable of implementing the features of the present disclosure. The
user equipment/device may include, but is not limited to, a mobile phone, smart
phone, laptop, a general-purpose computer, desktop, personal digital assistant,
tablet computer, wearable device or any other computing device which is capable
of implementing the features of the present disclosure. Also, the user device may
contain at least one input means configured to receive an input from unit(s) which
are required to implement the features of the present disclosure.
[0042] As used herein, “storage unit” or “memory unit” refers to a machine or
computer-readable medium including any mechanism for storing information in a
form readable by a computer or similar machine. For example, a computer-readable
medium includes read-only memory (“ROM”), random access memory (“RAM”),
magnetic disk storage media, optical storage media, flash memory devices or other
types of machine-accessible storage media. The storage unit stores at least the data
that may be required by one or more units of the system to perform their respective
functions.
[0043] As used herein, “interface” or “user interface” refers to a shared boundary
across which two or more separate components of a system exchange information
or data. The interface may also refer to a set of rules or protocols that define
communication or interaction of one or more modules or one or more units with
each other, which also includes the methods, functions, or procedures that may be
called.
[0044] All modules, units, components used herein, unless explicitly excluded
herein, may be software modules or hardware processors, the processors being a
general-purpose processor, a special purpose processor, a conventional processor, a
digital signal processor (DSP), a plurality of microprocessors, one or more
microprocessors in association with a DSP core, a controller, a microcontroller,
Application Specific Integrated Circuits (ASIC), Field Programmable Gate Array
circuits (FPGA), any other type of integrated circuits, etc.
[0045] As used herein, the transceiver unit includes at least one receiver and at least
one transmitter configured respectively for receiving and transmitting data, signals,
information, or a combination thereof between units/components within the system
and/or connected with the system.

[0046] As discussed in the background section, the current known solutions have
several shortcomings. The present disclosure aims to overcome the abovementioned
and other existing problems in this field of technology by providing a
method and system for handling of event(s) concerning policies related to one or
more operations.
[0047] Referring to FIG. 1, an exemplary block diagram representation of a
management and orchestration (MANO) architecture [100], in accordance with
exemplary implementations of the present disclosure, is illustrated. The MANO
architecture [100] is developed for managing telecom cloud infrastructure
automatically, managing design or deployment design, managing instantiation of
network node(s), etc. The MANO architecture [100] deploys the network node(s) in
the form of Virtual Network Function (VNF) and Cloud-native/Container Network
Function (CNF). The MANO architecture [100] is used to auto-instantiate the VNFs
into the corresponding environment of the present disclosure so that it could help
in onboarding other vendors' CNFs and VNFs to the platform.
[0048] As shown in FIG. 1, the MANO architecture [100] comprises a user
interface layer; a network function virtualization (NFV) and software defined
network (SDN) design function module [104]; a platforms foundation services
module [106]; a platform core services module [108]; and a platform resource
adapters and utilities module [112], wherein all the components are assumed to be
connected to each other in a manner as obvious to the person skilled in the art for
implementing features of the present disclosure.
[0049] The NFV and SDN design function module [104] further comprises a VNF
lifecycle manager (compute) [1042]; a VNF catalog [1044]; a network services
catalog [1046]; a network slicing and service chaining manager [1048]; a physical
and virtual resource manager [1050]; and a CNF lifecycle manager [1052]. The VNF
lifecycle manager (compute) [1042] is responsible for determining on which server
of the communication network the microservice will be instantiated. The VNF
lifecycle manager (compute) [1042] will manage the overall flow of incoming/
outgoing requests during interaction with the user. The VNF lifecycle manager
(compute) [1042] is responsible for determining which sequence is to be followed
for executing the processes, e.g., in an AMF network function of the communication
network (such as a 5G network), the sequence for execution of processes P1 and P2,
etc. The VNF catalog [1044] stores the metadata of all the VNFs (also CNFs in
some cases). The network services catalog [1046] stores the information of the
services that need to be run. The network slicing and service chaining manager
[1048] manages the slicing (an ordered and connected sequence of network services/
network functions (NFs) that must be applied to a specific networked data packet).
The physical and virtual resource manager [1050] stores the logical and physical
inventory of the VNFs. Just like the VNF lifecycle manager (compute) [1042], the
CNF lifecycle manager [1052] is similarly used for the CNFs' lifecycle
management.
[0050] The platforms foundation services module [106] further comprises a
microservices edge load balancer [1062]; an identity & access manager [1064]; a
command line interface (CLI) [1066]; a central logging manager [1068]; and an
event routing manager (ERM) [1070] (alternatively referred to as ERM unit [1070]
herein). The microservices edge load balancer [1062] is used for maintaining the
load balancing of the requests for the services. The identity & access manager [1064]
is used for logging purposes. The command line interface (CLI) [1066] is used to
provide commands to execute certain processes which require changes during
run time. The central logging manager [1068] is responsible for keeping the logs of
every service. The logs are generated by the MANO architecture [100]. The logs
are used for debugging purposes. The ERM unit [1070] is responsible for routing
the events, i.e., the application programming interface (API) hits, to the
corresponding services.
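The ERM unit's routing of events to their corresponding services can be pictured as a dispatch table. The event types and handler descriptions below are assumptions for illustration; the disclosure does not enumerate the routed services:

```python
# Minimal sketch of event routing: map an API event type to the
# service that handles it. Handler names are illustrative only.
handlers = {
    "instantiate": lambda ev: f"VNF lifecycle manager handles {ev}",
    "scale":       lambda ev: f"PEEGN scaling policy handles {ev}",
    "heal":        lambda ev: f"PEEGN healing policy handles {ev}",
}

def route_event(event_type: str, payload: str) -> str:
    # Route the incoming API hit to the corresponding service handler.
    handler = handlers.get(event_type)
    if handler is None:
        raise ValueError(f"no service registered for {event_type!r}")
    return handler(payload)
```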
[0051] The platforms core services module [108] further comprises an NFV
infrastructure monitoring manager [1082]; an assure manager [1084]; a
performance manager [1086]; a policy execution engine (PEEGN) [1088]; a
capacity monitoring manager (CP) [1090]; a release management (mgmt.)
repository [1092]; a configuration manager & Golden Configuration Template
(GCT) [1094]; an NFV platform decision analytics [1096]; a platform NoSQL DB
[1098]; a platform schedulers and cron jobs (PSC) service [1100]; a VNF backup
& upgrade manager [1102]; a microservice auditor [1104]; and a platform
operations, administration and maintenance manager [1106]. The NFV
infrastructure monitoring manager [1082] monitors the infrastructure part of the
NFs, e.g., any metrics such as CPU utilization by the VNF. The assure manager
[1084] is responsible for supervising the alarms the vendor is generating. The
performance manager [1086] is responsible for managing the performance counters.
The PEEGN [1088] is responsible for managing all the policies. The capacity
monitoring manager (CP) [1090] is responsible for sending the request to the
PEEGN [1088]. The capacity monitoring manager (CP) [1090] is capable of
monitoring usage of network resources such as, but not limited to, CPU utilization,
RAM utilization and storage utilization across all the instances of the virtual
infrastructure manager (VIM) or simply the NFV infrastructure monitoring
manager [1082]. The capacity monitoring manager (CP) [1090] is also capable of
monitoring said network resources for each instance of the VNF. The capacity
monitoring manager (CP) [1090] is responsible for constantly tracking the network
resource utilization. The release management (mgmt.) repository [1092] is
responsible for managing the releases and the images of all the vendor network
nodes. The configuration manager & GCT [1094] manages the configuration and
GCT of all the vendors. The NFV platform decision analytics [1096] helps in
deciding the priority of using the network resources. It is further noted that the
PEEGN [1088], the configuration manager & GCT [1094] and the NFV platform
decision analytics [1096] work together. The platform NoSQL DB [1098] is a
database for storing all the inventory (both physical and logical) as well as the
metadata of the VNFs and CNFs. The platform schedulers and cron jobs (PSC)
service [1100] schedules tasks such as, but not limited to, triggering of an event,
traversing the network graph, etc. The VNF backup & upgrade manager [1102]
takes backups of the images and binaries of the VNFs and the CNFs and produces
those backups on demand in case of server failure. The microservice auditor [1104]
audits the microservices. For example, in a hypothetical case of instances not being
instantiated by the MANO architecture [100] yet using the network resources, the
microservice auditor [1104] audits and reports the same so that resources can be
released for services running in the MANO architecture [100], thereby assuring the
services only run on the MANO architecture [100]. The platform operations,
administration, and maintenance manager [1106] is used for newer instances that
are spawning.
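The constant utilization tracking attributed to the capacity monitoring manager (CP) [1090] above can be sketched as a simple threshold check. The metric names and the 80% threshold are assumptions for illustration, not values taken from the disclosure:

```python
# Sketch of utilization tracking: flag any resource whose usage crosses
# a threshold, so that a request could then be sent to the PEEGN.
THRESHOLD = 0.80  # assumed value for illustration

def check_utilization(metrics: dict) -> list:
    """Return the names of resources exceeding the threshold."""
    return [name for name, usage in metrics.items() if usage > THRESHOLD]

breached = check_utilization({"cpu": 0.91, "ram": 0.42, "storage": 0.85})
print(breached)  # → ['cpu', 'storage']
```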
[0052] The platform resource adapters and utilities module [112] further comprises
a platform external API adaptor and gateway [1122]; a generic decoder and indexer
(XML, CSV, JSON) [1124]; a docker service adaptor [1126]; an API adapter [1128];
and an NFV gateway [1130]. The platform external API adaptor and gateway [1122]
is responsible for handling the external services (to the MANO architecture [100])
that require the network resources. The generic decoder and indexer (XML, CSV,
JSON) [1124] directly gets the data of the vendor system in the XML, CSV, or JSON
format. The docker service adaptor [1126] is the interface provided between the
telecom cloud and the MANO architecture [100] for communication. The API
adapter [1128] is used to connect with the virtual machines (VMs). The NFV
gateway [1130] is responsible for providing the path to each service going
to/incoming from the MANO architecture [100].
[0053] FIG. 2 illustrates an exemplary block diagram of a computing device [200]
upon which the features of the present disclosure may be implemented in
accordance with exemplary implementation of the present disclosure. In an
implementation, the computing device [200] may also implement a method for
handling of event(s) concerning policies related to one or more operations utilising
the system [300]. In another implementation, the computing device [200] itself
implements the method for handling of event(s) concerning policies related to one
or more operations using one or more units configured within the computing device
[200], wherein said one or more units are capable of implementing the features as
disclosed in the present disclosure.
[0054] The computing device [200] may include a bus [202] or other
communication mechanism for communicating information, and a hardware
processor [204] coupled with the bus [202] for processing information. The hardware
processor [204] may be, for example, a general-purpose microprocessor. The
computing device [200] may also include a main memory [206], such as a random-access
memory (RAM), or other dynamic storage device, coupled to the bus [202]
for storing information and instructions to be executed by the processor [204]. The
main memory [206] also may be used for storing temporary variables or other
intermediate information during execution of the instructions to be executed by the
processor [204]. Such instructions, when stored in non-transitory storage media
accessible to the processor [204], render the computing device [200] into a special-purpose
machine that is customized to perform the operations specified in the
instructions. The computing device [200] further includes a read only memory
(ROM) [208] or other static storage device coupled to the bus [202] for storing static
information and instructions for the processor [204].
[0055] A storage device [210], such as a magnetic disk, optical disk, or solid-state
drive, is provided and coupled to the bus [202] for storing information and
instructions. The computing device [200] may be coupled via the bus [202] to a
display [212], such as a cathode ray tube (CRT), Liquid Crystal Display (LCD),
Light Emitting Diode (LED) display, Organic LED (OLED) display, etc., for
displaying information to a computer user. An input device [214], including
alphanumeric and other keys, touch screen input means, etc., may be coupled to the
bus [202] for communicating information and command selections to the processor
[204]. Another type of user input device may be a cursor controller [216], such as a
mouse, a trackball, or cursor direction keys, for communicating direction
information and command selections to the processor [204], and for controlling
cursor movement on the display [212]. The input device typically has two degrees
of freedom in two axes, a first axis (e.g., x) and a second axis (e.g., y), that allow
the device to specify positions in a plane.
[0056] The computing device [200] may implement the techniques described
herein using customized hard-wired logic, one or more ASICs or FPGAs, firmware,
and/or program logic which in combination with the computing device [200] causes
or programs the computing device [200] to be a special-purpose machine.
According to one implementation, the techniques herein are performed by the
computing device [200] in response to the processor [204] executing one or more
sequences of one or more instructions contained in the main memory [206]. Such
instructions may be read into the main memory [206] from another storage medium,
such as the storage device [210]. Execution of the sequences of instructions
contained in the main memory [206] causes the processor [204] to perform the
process steps described herein. In alternative implementations of the present
disclosure, hard-wired circuitry may be used in place of or in combination with
software instructions.
[0057] The computing device [200] also may include a communication interface [218] coupled to the bus [202]. The communication interface [218] provides a two-way data communication coupling to a network link [220] that is connected to a local network [222]. For example, the communication interface [218] may be an integrated services digital network (ISDN) card, cable modem, satellite modem, or a modem to provide a data communication connection to a corresponding type of telephone line. As another example, the communication interface [218] may be a local area network (LAN) card to provide a data communication connection to a compatible LAN. Wireless links may also be implemented. In any such implementation, the communication interface [218] sends and receives electrical, electromagnetic, or optical signals that carry digital data streams representing various types of information.
[0058] The computing device [200] can send messages and receive data, including program code, through the network(s), the network link [220] and the communication interface [218]. In the Internet example, a server [230] might transmit a requested code for an application program through the Internet [228], the ISP [226], the local network [222], a host [224] and the communication interface [218]. The received code may be executed by the processor [204] as it is received, and/or stored in the storage device [210], or other non-volatile storage for later execution.
[0059] The computing device [200] encompasses a wide range of electronic devices capable of processing data and performing computations. Examples of the computing device [200] include, but are not limited to, personal computers, laptops, tablets, smartphones, servers, and embedded systems. The devices may operate independently or as part of a network and can perform a variety of tasks such as data storage, retrieval, and analysis. Additionally, the computing device [200] may include peripheral devices, such as monitors, keyboards, and printers, as well as integrated components within larger electronic systems, showcasing their versatility in various technological applications.
[0060] Referring to FIG. 3, an exemplary block diagram of a system [300] for handling of event(s) concerning policies related to one or more operations, is shown, in accordance with exemplary implementations of the present disclosure. The system [300] comprises at least one transceiver unit [302], at least one processing unit [304], at least one operations unit [306], at least one database [308], at least one purging unit [310], at least one user interface [312], and at least one storage unit [314]. Also, all of the components/units of the system [300] are assumed to be connected to each other unless otherwise indicated below. As shown in FIG. 3, all units shown within the system [300] should also be assumed to be connected to each other. Also, in FIG. 3 only a few units are shown; however, the system [300] may comprise multiple such units or the system [300] may comprise any such number of said units, as required to implement the features of the present disclosure. Further, in an implementation, the system [300] may be present in a user device/user equipment to implement the features of the present disclosure. The system [300] may be a part of the user device or may be independent of but in communication with the user device (which may also be referred to herein as a UE). In another implementation, the system [300] may reside in a server or a network entity. In yet another implementation, the system [300] may reside partly in the server/network entity and partly in the user device.
[0061] The system [300] is configured for handling of event(s) concerning policies related to one or more operations, with the help of the interconnection between the components/units of the system [300].
[0062] In one implementation, the system [300] is implemented at a policy execution engine (PEEGN) [1088]. The transceiver unit [302] of the system [300] is configured to receive a request related to one or more events from one or more microservices. In an exemplary aspect, the one or more events are related to one or more operations associated with a call flow.
[0063] In an exemplary aspect, the one or more microservices may include, but are not limited to, the PEEGN [1088], containerized network functions (CNFs) and containerized network function components (CNFCs), and virtualized network functions (VNFs) and virtualized network function components (VNFCs).
[0064] In an exemplary aspect, the one or more events relate to at least one of an instantiation call flow, a scaling call flow, and a healing call flow.
[0065] In an example, the one or more events relate to an instantiation call flow. The instantiation call flow herein refers to a call flow in which resources are initially allocated, set up, or created for a specific event within the one or more microservices. In an exemplary aspect, the resources may include, but are not limited to, central processing unit (CPU), random access memory (RAM), and storage.
[0066] In another example, the one or more events relate to a scaling call flow. The scaling call flow relates to adjusting the resources allocated to the one or more microservices. Further, the scaling call flow may include a scale-in call flow and a scale-out call flow.
[0067] In an exemplary aspect, the scale-in call flow refers to the action of reducing the number of resources, such as CPU, memory, or storage, allocated to the one or more microservices, if there is a decrease in demand or workload.
[0068] In an exemplary aspect, the scale-out call flow refers to the action of increasing the number of resources, such as CPU, memory, or storage, allocated to the one or more microservices, in order to handle increased demand for the one or more microservices.
[0069] In an exemplary aspect, the one or more events include a healing call flow. The healing call flow represents automatic recovery or restoration of the operations of the one or more microservices when they encounter issues such as errors or conflicts that may result in failures, in order to restore normal operations quickly.
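The three call-flow types described above (instantiation, scaling, healing) lend themselves to a simple dispatch table. The following is a minimal, illustrative sketch in Python; the handler names, the event dictionary shape, and the `demand_delta` field are assumptions made for illustration, not part of the disclosed PEEGN interface.

```python
# Illustrative sketch only: the handler names and event shape are
# assumptions, not the disclosed PEEGN interface.
def handle_instantiation(event):
    # Initially allocate resources (CPU, RAM, storage) for the event.
    return {"action": "allocate", "resources": event.get("resources", {})}

def handle_scaling(event):
    # Scale-in reduces allocated resources; scale-out increases them.
    direction = "scale_in" if event.get("demand_delta", 0) < 0 else "scale_out"
    return {"action": direction}

def handle_healing(event):
    # Automatically restore normal operation after errors or conflicts.
    return {"action": "restart_failed_components"}

DISPATCH = {
    "instantiation": handle_instantiation,
    "scaling": handle_scaling,
    "healing": handle_healing,
}

def handle_event(event):
    return DISPATCH[event["type"]](event)
```

A real policy engine would select the handler from the configured policies rather than a hard-coded table; the dictionary dispatch here only illustrates that the three call-flow types are handled by distinct logic paths.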
[0070] The system [300] further comprises the processing unit [304] connected to
at least the transceiver unit [302]. The processing unit [304] is configured to
perform one or more logics.
[0071] In one example, the one or more logics may relate to rules or policies defined by network administrators. The one or more logics, when operated, may enable the system [300] to store the set of call flow details associated with each of the one or more events in the database [308]. In an exemplary aspect, the one or more logics are applied when the instantiation call flow, healing call flow, etc., are decided.
[0072] In an exemplary aspect, the one or more logics may include, but are not limited to, virtual network function (VNF) policies, VNF affinity/anti-affinity policies, VNF healing policies, containerized network function (CNF) policies, containerized network function component (CNFC) policies, CNF/CNFC dependency policies, CNF affinity/anti-affinity policies, CNFC affinity/anti-affinity group policies, and compute flavour information.
[0073] In an example, VNF, CNF, and CNFC healing policies refer to policies that facilitate the healing of a process, service, etc.
[0074] In an example, CNF affinity/anti-affinity policies create a relationship between containerized machines and hosts.
[0075] In an example, the information may be stored in the form of VNFId, VNFversion, VNFdescription, productid, and VNFC data.
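The fields named in paragraph [0075] can be grouped into a single record, as in the minimal sketch below; the field names follow the disclosure, while the types and sample values are illustrative assumptions.

```python
from dataclasses import dataclass, field

# Field names follow paragraph [0075] (VNFId, VNFversion, VNFdescription,
# productid, VNFC data); the types and sample values are assumptions.
@dataclass
class VNFRecord:
    vnf_id: str
    vnf_version: str
    vnf_description: str
    product_id: str
    vnfc_data: list = field(default_factory=list)  # per-component details

record = VNFRecord(
    vnf_id="vnf-001",
    vnf_version="1.2.0",
    vnf_description="example VNF",
    product_id="prod-42",
    vnfc_data=[{"vnfc_id": "vnfc-a", "cpu": 2, "ram_gb": 4}],
)
```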
[0076] The system [300] further comprises the operations unit [306] connected to at least the processing unit [304]. The operations unit [306] is configured to operate a physical data, and a logical data for storing a set of call flow details associated with each of the one or more events in the database [308]. In one implementation, the logical data is based on the performance of the one or more logics.
[0077] Upon receiving the request related to the one or more events from the one or more microservices, the operations unit [306] operates the physical data, and the logical data for storing the set of call flow details associated with each of the one or more events in the database [308]. In an exemplary aspect, the physical data enables the system [300] to interpret how virtual and containerized functions interact with the physical components of the system [300].
[0078] In an exemplary aspect, the database [308] stores the aggregated physical data and logical data of the specific events. Further, the database [308] stores information of events which are related to the instantiation call flow, scaling call flow, and healing call flow such that there is zero data loss.
[0079] In an exemplary aspect, all the information is stored in the database [308], which may be an Elasticsearch database (also referred to as ES). In an exemplary aspect, the database [308] may be a NoSQL type database [1098].
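The per-event aggregation of physical and logical data described in paragraph [0078] can be sketched as below. A plain dictionary stands in for the Elasticsearch/NoSQL database [308], and the document shape is an illustrative assumption, not the disclosed schema.

```python
# A plain dict stands in for the Elasticsearch/NoSQL database [308];
# the document shape is an illustrative assumption.
database = {}

def store_event(event_id, physical_data, logical_data, call_flow):
    # Aggregate physical and logical data per event, as in paragraph [0078].
    database[event_id] = {
        "call_flow": call_flow,     # instantiation, scaling, or healing
        "physical": physical_data,  # resource info (CPU, RAM, storage)
        "logical": logical_data,    # output of the applied policies
    }

store_event("evt-1", {"cpu": 4, "ram_gb": 8}, {"policy": "scale_out"}, "scaling")
```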
[0080] In one implementation, the physical data is based on information of a set of resources associated with the virtual functions and containerized functions.
[0081] The physical data may relate to the information of the set of resources (e.g., CPU, RAM, storage, etc.) associated with the virtual functions (e.g., virtual network functions (VNFs), virtual network function components (VNFCs), etc.). Further, the set of resources are associated with the containerized functions (e.g., containerized network functions (CNFs), containerized network function components (CNFCs), etc.).
[0082] As used herein, the container network function (CNF) refers to a network function that acts as a portable container, which includes all necessary configurations. CNFs offer increased portability and scalability compared to traditional network functions.
[0083] As used herein, the container network function component (CNFC) refers to a subcomponent of a container network function (CNF) that performs a specific task or set of tasks within the broader network function. CNFCs are deployed in containers and have the same advantages as CNFs, including efficient resource management.
[0084] As used herein, virtual network functions (VNFs) are virtualized network functions running on standard server hardware in a virtualized environment. This requires software-defined infrastructure that allows multiple virtual networks to be created on top of shared physical infrastructure. Virtual network functions (VNFs) may then be customized to comply with the needs of applications, services, devices, and customers.
[0085] As used herein, virtual network function component (VNFC) refers to the modular building blocks of virtualized network functions (VNFs). VNFCs represent specific functional components that collectively form a software-based network function running on virtualized infrastructure.
[0086] In an exemplary aspect, prior to the operating, by the operations unit [306] at the PEEGN [1088], the physical data, and the logical data, the processing unit [304] is configured to generate a unique identity for the one or more events. The one or more events comprise asynchronous requests. In an exemplary aspect, the asynchronous requests may include Hypertext Transfer Protocol (HTTP) requests with REST APIs, which may use, in a non-limiting example, JSON/XML for carrying information.
[0087] The processing unit [304] generates the unique identity for the one or more events before operating the physical data and the logical data. In an asynchronous request, execution of the one or more events is not dependent on one another, i.e., the one or more events may run simultaneously.
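A minimal sketch of tagging each asynchronous event with a unique identity before its physical and logical data are operated on is shown below; the use of `uuid4` and the event dictionary shape are assumptions for illustration, not the mechanism disclosed for the PEEGN [1088].

```python
import uuid

# Sketch: tag each asynchronous event with a unique identity so that
# parallel database operations for different events never collide.
# uuid4 is an assumed mechanism, not the disclosed one.
def assign_event_id(event):
    event = dict(event)  # copy so the caller's dict is untouched
    event["event_id"] = str(uuid.uuid4())
    return event

a = assign_event_id({"type": "instantiation"})
b = assign_event_id({"type": "instantiation"})
```

Because the events may run simultaneously, the identity must be unique across all in-flight events, which is why a random UUID (rather than, say, a per-process counter) is a reasonable stand-in here.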
[0088] In an exemplary aspect, the asynchronous requests are served based on at least one of virtual function policies and containerized function policies.
[0089] In an exemplary aspect, the asynchronous requests are served/processed according to the established policies related to virtual functions and containerized functions. In an exemplary aspect, the virtual function policies and containerized function policies are predefined rules defined by the network administrator which define how different operations are handled in the system [300].
[0090] In an exemplary aspect, the database [308] stores information related to various events. Since the database [308] operations occur in parallel for multiple events across different contexts (e.g., different microservice types and instances), each event must be uniquely identified.
[0091] The system [300] comprises the purging unit [310] connected to at least the operations unit [306]. The purging unit [310] is configured to purge the event information corresponding to each of the one or more events, from the database [308].
[0092] Once the physical data and the logical data are operated in order to successfully store the set of call flow details associated with each of the one or more events in the database [308], the purging unit [310] automatically purges or deletes the event information corresponding to each of the one or more events from the database [308].
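The store-then-purge behaviour of the purging unit [310] can be sketched as follows; the two-store layout and function names are illustrative assumptions, not the disclosed implementation.

```python
# Sketch of the purge step: once an event's call-flow details are stored
# successfully, its transient event information is removed. The two-store
# layout (event_info vs. call_flow_store) is an illustrative assumption.
event_info = {"evt-1": {"status": "in_progress"}}
call_flow_store = {}

def finalize_event(event_id, details):
    call_flow_store[event_id] = details  # first persist call flow details
    event_info.pop(event_id, None)       # then purge the event information

finalize_event("evt-1", {"call_flow": "healing", "result": "restored"})
```

Note the ordering: the purge happens only after the details are persisted, matching the "once ... successfully store ... the purging unit automatically purges" sequence in paragraph [0092].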
[0093] The system [300] also comprises the user interface (UI) [312] connected to at least the purging unit [310]. The user interface [312] is configured to display at least one of: an indexed data, a vector data, and a graphical data from the database [308], for facilitating at least one of a create operation, an update operation, a delete operation, and a get operation.
[0094] The UI [312] displays at least one of the indexed data, the vector data, and the graphical data from the database [308], so as to enable the network administrator to perform at least one of the create operation, the update operation, the delete operation, and the get operation.
[0095] In an exemplary aspect, the indexed data ensures quick document retrieval without sifting through vast unstructured data. The indexed data arranges data in a specific way to support efficient query execution.
[0096] In an exemplary aspect, the vector data refers to data represented as high-dimensional vector embeddings, capturing semantic meaning and relationships.
[0097] In an exemplary aspect, the graphical data refers to a way of displaying numerical data that helps in analysing and representing quantitative data visually. The graphical data may be a kind of chart where data is plotted as variables across coordinate axes.
[0098] In an exemplary aspect, the network administrator uses the displayed indexed data, vector data, and graphical data to perform certain operations such as a create operation for adding a new record to the database [308], an update operation for updating an existing document or documents in a collection, a delete operation (i.e., the action of removing one or more objects that meet a specified condition), and a get operation (i.e., an operation of retrieving data from the database [308]).
[0099] The system [300] further comprises the storage unit [314] configured to store a set of CRUD operations of the one or more policies. The one or more policies include one or more of VNF policies and CNF policies for being used during logical call flows.
[0100] The storage unit [314] stores the set of CRUD operations of the one or more policies. In an exemplary aspect, CRUD operations refer to create operations, read operations, update operations, and delete operations. The one or more policies comprise one or more of VNF policies and CNF policies that are used during logical call flows.
[0101] For example, when scaling down in the call flow, the system [300] may first reduce the allocated resources, such as CPU, memory, or storage, for the one or more microservices. After executing the scale-in call flow, CRUD operations are performed in the following order: creating a scale-in policy, reading the scale-in policy, updating the existing scale-in policies with new ones, and deleting the outdated scale-in policies. These operations are then stored in the storage unit [314], which can be utilized for future scale-in operations as needed.
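The create/read/update/delete order described above can be sketched as below. The policy-store API (a plain dictionary and four small functions) is an illustrative assumption; only the CRUD ordering follows the example in paragraph [0101].

```python
# Sketch of the CRUD order from paragraph [0101] on scale-in policies:
# create, read, update (with a new policy), then delete the outdated one.
# The policy store and its API are illustrative assumptions.
policies = {}

def create_policy(name, body):
    policies[name] = body

def read_policy(name):
    return policies.get(name)

def update_policy(name, body):
    policies[name] = body

def delete_policy(name):
    policies.pop(name, None)

create_policy("scale_in_v1", {"min_replicas": 1})   # create
assert read_policy("scale_in_v1")["min_replicas"] == 1  # read
update_policy("scale_in_v2", {"min_replicas": 2})   # update with new policy
delete_policy("scale_in_v1")                        # delete outdated policy
```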
[0102] Referring to FIG. 4, an exemplary method flow diagram [400] for handling of event(s) concerning policies related to one or more operations, in accordance with exemplary implementations of the present disclosure is shown. In an implementation, the method [400] is performed by the system [300]. Further, in an implementation, the system [300] may be present in a server device to implement the features of the present disclosure. Also, as shown in FIG. 4, the method [400] starts at step [402].
[0103] At step [404], the method [400] comprises receiving, by the transceiver unit [302] at the policy execution engine (PEEGN) [1088], the request related to the one or more events from the one or more microservices. In an exemplary aspect, the one or more events relate to at least one of an instantiation call flow, a scaling call flow, and a healing call flow.
[0104] In an example, the one or more events relate to an instantiation call flow, which refers to a call flow in which resources are initially allocated, set up, or created for a specific event within the one or more microservices.
[0105] In another example, the one or more events relate to a scaling call flow, which means adjusting the resources allocated to the one or more microservices. For example, the scaling call flow may include a scale-in call flow and a scale-out call flow.
[0106] In yet another example, the one or more events relate to a healing call flow, which means to automatically recover or restore the operations of the one or more microservices when they encounter issues such as errors or conflicts, in order to restore normal operations quickly.
[0107] At step [406], the method [400] comprises performing, by the processing
unit [304] at the PEEGN [1088], the one or more logics.
[0108] Upon receiving the request related to the one or more events, the processing unit [304] performs the one or more logics associated with the one or more events.
[0109] At step [408], the method [400] comprises operating, by the operations unit [306] at the PEEGN [1088], the physical data, and the logical data for storing a set of call flow details associated with each of the one or more events in the database [308].
[0110] Upon receiving the request related to the one or more events from the one or more microservices, the operations unit [306] operates the physical data, and the logical data for storing the set of call flow details associated with each of the one or more events in the database [308]. In one implementation, the logical data is based on the performance of the one or more logics. In one implementation, the physical data is based on information of a set of resources associated with the virtual functions and containerized functions.
[0111] In an implementation, prior to the operating, by the operations unit [306] at the PEEGN [1088], the physical data, and the logical data, the method [400] comprises generating, by the processing unit [304] at the PEEGN [1088], a unique identity for the one or more events. The events comprise asynchronous requests.
[0112] The processing unit [304] generates the unique identity for the one or more events before operating the physical data and the logical data. In an asynchronous request, execution of the one or more events is not dependent on one another, i.e., the one or more events may run simultaneously.
[0113] In an exemplary aspect, the asynchronous requests are served based on at least one of virtual function policies and containerized function policies.
[0114] At step [410], the method [400] comprises purging, by the purging unit [310] at the PEEGN [1088], the event information corresponding to each of the one or more events from the database [308].
[0115] Once the physical data and the logical data are operated in order to successfully store the set of call flow details associated with each of the one or more events in the database [308], the purging unit [310] automatically purges or deletes the event information corresponding to each of the one or more events from the database [308].
[0116] The method [400] further comprises displaying, by the user interface [312] at the PEEGN [1088], at least one of: an indexed data, a vector data, and a graphical data from the database [308], for facilitating at least one of a create operation, an update operation, a delete operation, and a get operation.
[0117] The UI [312] displays at least one of the indexed data, the vector data, and the graphical data from the database [308], so as to enable the network administrator to perform at least one of the create operation, the update operation, the delete operation, and the get operation.
[0118] The method [400] also comprises storing, by the storage unit [314], the set of CRUD operations of the one or more policies. The one or more policies include one or more of VNF policies and CNF policies for being used during logical call flows.
[0119] The storage unit [314] stores the set of CRUD operations of the one or more policies. In an exemplary aspect, CRUD operations refer to create operations, read operations, update operations, and delete operations. The one or more policies comprise one or more of VNF policies and CNF policies that are used during logical call flows.
[0120] Thereafter, the method [400] terminates at step [412].
[0121] Referring to FIG. 5, an exemplary block diagram of a system architecture [500] for handling of event(s) concerning policies related to one or more operations, is shown, in accordance with exemplary implementations of the present disclosure.
[0122] The system architecture [500] comprises a PEEGN cluster [502] which further comprises the PEEGN [1088] and the database [308] (e.g., NoSQL database [1098]). In an exemplary aspect, the PEEGN cluster [502] receives a request related to the one or more events from a microservice [504]. In an exemplary aspect, the one or more events relate to at least one of an instantiation call flow, a scaling call flow, and a healing call flow.
[0123] The PEEGN [1088] sends a response back to the microservice [504] as an acknowledgement.
[0124] The PEEGN [1088] then sends a CRUD event request for performing one or more CRUD operations. In an exemplary aspect, CRUD operations refer to create operations, read operations, update operations, and delete operations.
[0125] Referring to FIG. 6, an exemplary block diagram [600] of the PE_NS interface, in accordance with exemplary implementations of the present disclosure is shown.
[0126] In an exemplary aspect, the PEEGN [1088] and the database [308] (which may also relate to a NoSQL database [1098]) are communicatively coupled via the PE_NS interface [602].
[0127] The PE_NS interface [602] is used to store and operate on all logical data, i.e., data as per the output of the logic. In an exemplary aspect, the PEEGN [1088] performs the logic and stores the result in the database [308] via the PE_NS interface [602].
[0128] Referring to FIG. 7, an exemplary process flow diagram [700] for handling of event(s) concerning policies related to one or more operations, in accordance with exemplary implementations of the present disclosure is shown. Also, as shown in FIG. 7, the process [700] starts at step [702].
[0129] At step [704], the process [700] comprises receiving, at the policy execution engine (PEEGN) [1088], a request related to the one or more events from a microservice.
[0130] At step [706], the process [700] comprises handling the one or more
operations related to the one or more events.
[0131] At step [708], the process [700] comprises storing, in the database [308], the one or more events. In an exemplary aspect, the storing step is performed via the PE_NS interface [602].
[0132] At step [710], the process [700] comprises sending the request to a PVIM [1050].
[0133] At step [712], upon receiving a response/corresponding event, the process [700] comprises handling the one or more operations related to the one or more events. In an exemplary aspect, these steps are repeated until all requests of all events are served successfully.
[0134] At step [714], the process [700] comprises storing, in the database [308], the
one or more events.
[0135] At step [716], the process [700] comprises sending a response to the microservice of the initial event.
[0136] Thereafter, at step [718], the process [700] is terminated.
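The loop of steps [704] to [716] above can be sketched as follows; the function names, the in-memory `store` list standing in for the database [308], and the `send_to_pvim` callback are illustrative assumptions, not the disclosed PVIM [1050] interface.

```python
# Illustrative sketch of the FIG. 7 loop: each event is handled, stored
# via the PE_NS interface (modelled here as a plain list), forwarded to
# the PVIM, and its response stored, until all requests are served.
def process_events(events, send_to_pvim, store):
    for event in events:
        store.append(event)             # step [708]: store the event
        response = send_to_pvim(event)  # step [710]: send request to PVIM
        store.append(response)          # step [714]: store the response
    return "response_sent"              # step [716]: respond to microservice

log = []
result = process_events([{"id": 1}], lambda e: {"ack": e["id"]}, log)
```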
[0137] Referring to FIG. 8, an exemplary process flow diagram [800] for implementing PE_NS interface policies, in accordance with exemplary implementations of the present disclosure is shown. Also, as shown in FIG. 8, the process [800] starts at step [802].
[0138] At step [804], the process [800] comprises receiving a request from the UI [312] regarding virtual function policies and containerized function policies.
[0139] At step [806], the process [800] comprises handling the one or more
operations related to one or more events.
[0140] At step [808], the process [800] comprises storing the information related to the handled one or more operations related to one or more events in the database [308]. In an exemplary aspect, storing in the database [308] is performed via the PE_NS interface [602].
[0141] At step [810], the process [800] comprises sending a response back to the UI [312].
[0142] Thereafter, at step [812], the process [800] is terminated.
[0143] Referring to FIG. 9, an exemplary call flow diagram [900] for resource reservation in a network during an instantiation operation, in accordance with exemplary implementations of the present disclosure is shown.
[0144] At step [902], the flow indicates that a user interacts with the system [300] via the UI [312]. Herein, the UI [312] may initiate an instantiation of a containerized network function (CNF) by sending a CNF instantiation request via the UI [312].
[0145] At step [904], the flow indicates that, post receiving the instantiation request from the UI [312], the CNFLM module [1052] forwards the request to the PEEGN [1088] for further processing. The CNFLM module [1052] sends a reserve CNF resources request to the PEEGN [1088].
[0146] At step [906], the flow indicates that, post receiving the reserve CNF resources request from the CNFLM module [1052], the PEEGN [1088] communicates with the PVIM [1050] to check the availability of resources that are needed for the instantiation of the CNF. Herein, the PVIM [1050] is responsible for managing a repository of the available resources.
[0147] At step [908], the PVIM module [1050] processes the request and verifies that the required one or more resources for said CNF instantiation are available within the repository. Further, if the required resources are available, the PVIM module [1050] sends a confirmation back to the PEEGN [1088] that the required resources are now reserved and are ready to be utilized for the instantiation of the CNF.
[0148] At step [910], post confirmation of the availability of the required resources, the PEEGN [1088] reserves the required resources and simultaneously generates a token (such as a CNF token) to confirm the reserved resources. Further, the PEEGN [1088] sends a message back to the PVIM [1050] to confirm that the CNF token is generated.
[0149] At step [912], the PVIM [1050] acknowledges the confirmation and updates the repository to reflect that the required resources are now reserved for the CNF instantiation.
[0150] At step [914], post reserving the required resources, the PEEGN [1088] sends a confirmation back to the CNFLM [1052] indicating that the required resources are successfully reserved for the instantiation operation.
[0151] At step [916], once the CNFLM [1052] receives the acknowledgment from the PEEGN [1088], the CNFLM [1052] transmits a corresponding notification to the UI [312], confirming that the required resources are now reserved for the CNF instantiation.
[0152] Referring to FIG. 10, an exemplary call flow diagram [1000] for resource reservation in a network during a scaling operation, in accordance with exemplary implementations of the present disclosure is shown.
[0153] At step [1002], the flow indicates that the NPDA [1096] initiates a CNF
policy invocation. Herein, the CNF policy invocation is a scaling request for
adjusting the resources allocated to a CNF based on demand or policy changes.
[0154] At step [1004], the flow indicates that post receiving the CNF policy invocation from the NPDA [1096], the PEEGN [1088] may query the PVIM [1050] to retrieve necessary details regarding the CNF. The PEEGN [1088] sends a “get CNF details” request to the PVIM [1050]. The request may be for retrieving the current state of resources and policies associated with the CNF.
[0155] At step [1006], the flow indicates that the PVIM [1050] processes the request for CNF details and thereafter, the PVIM [1050] sends the requested information to the PEEGN [1088]. Here, the requested information may include information such as available resources, policies, and other details that are required for scaling of resources at the CNF.
[0156] At step [1008], the flow indicates that the PVIM [1050] sends an acknowledgment back to the PEEGN [1088], confirming that the requested CNF details and resources are successfully provided, implying that the PEEGN [1088] has received the data required to proceed with the scaling operation.
[0157] At step [1010], the flow indicates that after processing the received CNF details, the PEEGN [1088] reserves the necessary resources required for the scaling operation.
[0158] Simultaneously, at step [1012], the PEEGN [1088] generates a CNF token to confirm that the necessary resources are successfully reserved, and then the PEEGN [1088] communicates the generated CNF token back to the PVIM [1050].
[0159] At step [1014], the flow indicates that the PVIM [1050] acknowledges the
reservation of the CNF token. Further, the PVIM [1050] may update the associated
repository to reflect that the necessary resources are now reserved for scaling.
[0160] At step [1016], the flow indicates that after the necessary resources are reserved, the PEEGN [1088] sends a command to execute the CNF scaling operation. The command may include at least one of an increasing and a decreasing of the resource allocation based on the invoked CNF policy.
[0161] At step [1018], once the scaling operation is successfully triggered, the CNFLM [1052] sends an acknowledgement to the PEEGN [1088] confirming successful completion of the operation.
[0162] At step [1020], the PEEGN [1088] forwards the acknowledgment to the NPDA [1096], confirming that the CNF scaling operation is executed.
[0163] The present disclosure further discloses a non-transitory computer readable
storage medium storing instructions for handling of event(s) concerning policies
related to the one or more operations, the instructions include executable code
20 which, when executed by one or more units of a system, causes: a transceiver unit
to receive a request related to one or more events from one or more microservices.
The executable code when executed further causes a processing unit to perform one
or more logics. The executable code when executed further causes an operations
unit connected to operate a physical data, and a logical data for storing a set of call
25 flow details associated with each of the one or more events in a database. The
executable code when executed further causes a purging unit to purge the event
information corresponding to each of the one or more events, from the database.
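The receive, process, store, and purge stages described above can be sketched in simplified form. The function names and the string-based "logic" below are illustrative assumptions only, standing in for the claimed units.

```python
# Minimal illustrative sketch of the claimed pipeline; not the disclosed
# implementation. The dict stands in for the database [308].
database = {}

def receive(event_id, payload):
    # Transceiver unit [302]: accept a request from a microservice.
    return {"id": event_id, "payload": payload}

def process(event):
    # Processing unit [304]: perform one or more logics on the event.
    event["result"] = f"handled:{event['payload']}"
    return event

def store(event):
    # Operations unit [306]: store the call flow details in the database.
    database[event["id"]] = event

def purge(event_id):
    # Purging unit [310]: remove the event information from the database.
    database.pop(event_id, None)

evt = process(receive("evt-1", "scale-out"))
store(evt)
purge("evt-1")
```

After `purge` runs, the event information is no longer held in the database, matching the final step of the claimed method.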
[0164] As is evident from the above, the present disclosure provides a technically
advanced solution for handling of event(s) concerning policies related to one or
more operations, including the instantiation, scaling, and healing of virtual
functions (VNFs/VNFCs) and containerized functions (CNFs/CNFCs).
[0165] While considerable emphasis has been placed herein on the disclosed
implementations, it will be appreciated that many implementations can be made and
that many changes can be made to the implementations without departing from the
principles of the present disclosure. These and other changes in the implementations
of the present disclosure will be apparent to those skilled in the art, whereby it is to
be understood that the foregoing descriptive matter is to be interpreted as illustrative
and non-limiting.
[0166] Further, in accordance with the present disclosure, it is to be acknowledged
that the functionality described for the various components/units can be
implemented interchangeably. While specific embodiments may disclose a
particular functionality of these units for clarity, it is recognized that various
configurations and combinations thereof are within the scope of the disclosure. The
functionality of specific units as disclosed in the disclosure should not be construed
as limiting the scope of the present disclosure. Consequently, alternative
arrangements and substitutions of units, provided they achieve the intended
functionality described herein, are considered to be encompassed within the scope
of the present disclosure.
We claim:
1. A method for handling of event(s) concerning policies related to one or
more operations, the method comprising:
receiving, by a transceiver unit [302] at a policy execution engine
(PEEGN) [1088], a request related to one or more events from one or more
microservices;
performing, by a processing unit [304] at the PEEGN [1088], one or
more logics;
operating, by an operations unit [306] at the PEEGN [1088], a
physical data, and a logical data for storing a set of call flow details
associated with each of the one or more events in a database [308]; and
purging, by a purging unit [310] at the PEEGN [1088], the event
information corresponding to each of the one or more events, from the
database [308].
2. The method as claimed in claim 1, comprising displaying, by a user
interface [312] at the PEEGN [1088], at least one of: an indexed data, a vector data,
and a graphical data from the database [308], for facilitating at least one of a create
operation, an update operation, a delete operation, and a get operation.
3. The method as claimed in claim 1, wherein the one or more events relate to
at least one of an instantiation call flow, a scaling call flow, and a healing call flow.
4. The method as claimed in claim 1, wherein the physical data is based on an
information of a set of resources associated with the virtual functions and
containerized functions.
5. The method as claimed in claim 1, wherein the logical data is based on the
performance of the one or more logics.
6. The method as claimed in claim 1, wherein prior to the operating, by the
operations unit [306] at the PEEGN [1088], the physical data, and the logical data,
the method comprises:
- generating, by the processing unit [304] at the PEEGN [1088], a unique
identity for the one or more events, wherein the one or more events comprise
asynchronous requests.
7. The method as claimed in claim 6, wherein the asynchronous requests are
served based on at least one of virtual function policies, and containerized function
policies.
8. The method as claimed in claim 1, comprising:
- storing, by a storage unit [314], a set of CRUD operations of one or more
policies, the one or more policies comprising one or more of VNF policies and CNF
policies for being used during logical call flows.
9. A system [300] for handling of event(s) concerning policies related to one
or more operations, the system [300] comprising:
a transceiver unit [302], at a policy execution engine (PEEGN)
[1088], configured to receive a request related to one or more events from
one or more microservices;
a processing unit [304], at the policy execution engine (PEEGN)
[1088], connected to at least the transceiver unit [302], the processing unit
[304] configured to perform one or more logics;
an operations unit [306], at the policy execution engine (PEEGN)
[1088], connected to at least the processing unit [304], the operations unit
[306] configured to operate a physical data, and a logical data for storing a
set of call flow details associated with each of the one or more events in a
database [308]; and
a purging unit [310], at the policy execution engine (PEEGN)
[1088], connected to at least the operations unit [306], the purging unit [310]
configured to purge the event information corresponding to each of the one
or more events, from the database [308].
10. The system [300] as claimed in claim 9, comprising a user interface [312],
at the policy execution engine (PEEGN) [1088], connected to at least the purging
unit [310], the user interface [312] configured to display at least one of: an indexed
data, a vector data, and a graphical data from the database [308], for facilitating at
least one of a create operation, an update operation, a delete operation, and a get
operation.
11. The system [300] as claimed in claim 9, wherein the one or more events
relate to at least one of an instantiation call flow, a scaling call flow, and a healing
call flow.
12. The system [300] as claimed in claim 9, wherein the physical data is based
on an information of a set of resources associated with the virtual functions and
containerized functions.
13. The system [300] as claimed in claim 9, wherein the logical data is based
on the performance of the one or more logics.
14. The system [300] as claimed in claim 9, wherein prior to the operating, by
the operations unit [306] at the PEEGN, the physical data, and the logical data, the
processing unit [304] is configured to:
- generate a unique identity for the one or more events, wherein the one or
more events comprise asynchronous requests.
15. The system [300] as claimed in claim 14, wherein the asynchronous requests
are served based on at least one of virtual function policies, and containerized
function policies.
16. The system [300] as claimed in claim 9, comprising a storage unit [314]
configured to:
- store a set of CRUD operations of one or more policies, the one or more
policies comprising one or more of VNF policies and CNF policies for being used
during logical call flows.

Documents

Application Documents

# Name Date
1 202321066597-STATEMENT OF UNDERTAKING (FORM 3) [04-10-2023(online)].pdf 2023-10-04
2 202321066597-PROVISIONAL SPECIFICATION [04-10-2023(online)].pdf 2023-10-04
3 202321066597-POWER OF AUTHORITY [04-10-2023(online)].pdf 2023-10-04
4 202321066597-FORM 1 [04-10-2023(online)].pdf 2023-10-04
5 202321066597-FIGURE OF ABSTRACT [04-10-2023(online)].pdf 2023-10-04
6 202321066597-DRAWINGS [04-10-2023(online)].pdf 2023-10-04
7 202321066597-Proof of Right [07-02-2024(online)].pdf 2024-02-07
8 202321066597-FORM-5 [04-10-2024(online)].pdf 2024-10-04
9 202321066597-ENDORSEMENT BY INVENTORS [04-10-2024(online)].pdf 2024-10-04
10 202321066597-DRAWING [04-10-2024(online)].pdf 2024-10-04
11 202321066597-CORRESPONDENCE-OTHERS [04-10-2024(online)].pdf 2024-10-04
12 202321066597-COMPLETE SPECIFICATION [04-10-2024(online)].pdf 2024-10-04
13 202321066597-FORM 3 [08-10-2024(online)].pdf 2024-10-08
14 202321066597-Request Letter-Correspondence [24-10-2024(online)].pdf 2024-10-24
15 202321066597-Power of Attorney [24-10-2024(online)].pdf 2024-10-24
16 202321066597-Form 1 (Submitted on date of filing) [24-10-2024(online)].pdf 2024-10-24
17 202321066597-Covering Letter [24-10-2024(online)].pdf 2024-10-24
18 202321066597-CERTIFIED COPIES TRANSMISSION TO IB [24-10-2024(online)].pdf 2024-10-24
19 Abstract.jpg 2024-12-05
20 202321066597-ORIGINAL UR 6(1A) FORM 1 & 26-060125.pdf 2025-01-10