Abstract: The present disclosure provides a method [400] and a system [350] for resource reservation in a network. The system [350] comprises: a transceiver unit [302] configured to receive, at an execution module [352], from a network node, a request for performing an operation on at least one of a network function (NF), and a network function component (NFC). Further, a retrieval unit [304] is configured to retrieve, at the execution module [352], from a lifecycle manager (LM) module [354], a set of details relating to at least one of the NF, and the NFC. The set of details comprises at least one or more policies relating to performing of the operation. Further, a reservation unit [306] is configured to reserve, at the execution module [352], one or more resources for performing the operation. Further, a processing unit [308] is configured to execute, at the LM module [354], the operation. [FIG. 3B]
FORM 2
THE PATENTS ACT, 1970
(39 OF 1970)
&
THE PATENT RULES, 2003
COMPLETE SPECIFICATION
(See section 10 and rule 13)
“METHOD AND SYSTEM FOR RESOURCE RESERVATION
IN A NETWORK”
We, Jio Platforms Limited, an Indian National, of Office - 101, Saffron, Nr. Centre
Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India.
The following specification particularly describes the invention and the manner in
which it is to be performed.
METHOD AND SYSTEM FOR RESOURCE RESERVATION IN A
NETWORK
FIELD OF THE DISCLOSURE
[0001] Embodiments of the present disclosure generally relate to network
management systems. More particularly, embodiments of the present disclosure
relate to methods and systems for resource reservation in a network.
BACKGROUND
[0002] The following description of the related art is intended to provide
background information pertaining to the field of the disclosure. This section may
include certain aspects of the art that may be related to various features of the
present disclosure. However, it should be appreciated that this section is used only
to enhance the understanding of the reader with respect to the present disclosure,
and not as admissions of the prior art.
[0003] In communication networks such as 5G communication networks,
different microservices perform different services, jobs, and tasks in the network.
Each microservice has to perform its jobs, based on operational parameters and
policies, in such a way that it does not affect its own operation or the service
operations of the network. Cloud-native Network Function (CNF) and Cloud-native
Network Function Component (CNFC) microservices manage how and where the
functions run across clusters in the environment, which helps in service operation
in the network. However, the current traditional methods are not efficient for
managing CNF/CNFC deployments and scaling with efficient resource utilization
in the network.
[0004] Thus, there exists an imperative need in the art to provide an efficient
system and method for handling deployments and scaling of network functions, and
network function components, such as cloud-native network functions, and cloud-native network function components.
OBJECTS OF THE DISCLOSURE
[0005] Some of the objects of the present disclosure, which at least one
embodiment disclosed herein satisfies are listed herein below.
[0006] It is an object of the present disclosure to provide a system and a method
for resource reservation in a network.
[0007] It is yet another object of the present disclosure to provide a solution
for performing fault tolerance in case of an event failure.
SUMMARY
[0008] This section is provided to introduce certain aspects of the present
disclosure in a simplified form that are further described below in the detailed
description. This summary is not intended to identify the key features or the scope
of the claimed subject matter.
[0009] An aspect of the present disclosure may relate to a method for resource
reservation in a network. The method comprises receiving, by a transceiver unit, at
an execution module, from a network node, a request for performing an operation
on at least one of a network function (NF), and a network function component
(NFC). Further, the method comprises retrieving, by a retrieval unit, at the
execution module, from a lifecycle manager (LM) module, a set of details relating
to at least one of the NF, and the NFC. Herein, the set of details comprises at least
one or more policies relating to performing of the operation. Next, the method
comprises reserving, by a reservation unit, at the execution module, one or more
resources for performing the operation. Thereafter, the method comprises
executing, by a processing unit, at the LM module, the operation.
[0010] In an exemplary aspect of the present disclosure, prior to the step of
reserving, at the execution module, the one or more resources, the method
comprises determining, by a determination unit, at the execution module, from an
inventory manager (IM) module, available resources.
[0011] In an exemplary aspect of the present disclosure, the method further
comprises calculating, by the reservation unit, at the execution module, one or more
resources required for performing the operation based at least on the set of details.
Further, in response to the available resources being at least equal to the required
one or more resources, the method comprises the step of reserving, at the execution
module, the one or more resources for performing the operation.
[0012] In an exemplary aspect of the present disclosure, the operation is a
deployment operation, and in response to the operation being the deployment
operation, prior to the step of receiving, at the execution module, the request, the
method comprises receiving, by the transceiver unit, at the LM module, from a user
interface (UI), the request, and further transmitting, by the transceiver unit, from
the LM module to the execution module, the request.
[0013] In an exemplary aspect of the present disclosure, the operation is a
scaling operation, and in response to the operation being the scaling operation, prior
to the step of receiving, at the execution module, the request, the method comprises:
receiving, by the transceiver unit, from a network function platform (NP), the
request.
[0014] In an exemplary aspect of the present disclosure, the method further
comprises transmitting, by the transceiver unit, from the execution module to the
LM module, an acknowledgement indicative of reserving, at the execution module,
the one or more resources for performing the operation.
[0015] In an exemplary aspect of the present disclosure, the method further
comprises transmitting, by the transceiver unit, from the execution module to the
network node, a notification indicative of performing the operation on at least one
of the NF, and the NFC.
[0016] In an exemplary aspect of the present disclosure, the execution module
and the LM module are communicably coupled by an interface, and wherein the
interface is a PE_CM interface.
[0017] In an exemplary aspect of the present disclosure, the execution module
and the LM module are communicably coupled to an operation manager (OM)
module, and the OM module is configured to facilitate communication between
available instances of the execution module and available instances of the LM
module.
[0018] Another aspect of the present disclosure may relate to a system for
resource reservation in a network. The system comprises a transceiver unit
configured to receive, at an execution module, from a network node, a request for
performing an operation on at least one of a network function (NF), and a network
function component (NFC). Further, the system comprises a retrieval unit
configured to retrieve, at the execution module, from a lifecycle manager (LM)
module, a set of details relating to at least one of the NF, and the NFC. Further, the
set of details comprises at least one or more policies relating to performing of the
operation. Further, the system comprises a reservation unit configured to reserve,
at the execution module, one or more resources for performing the operation.
Further, the system comprises a processing unit configured to execute, at the LM
module, the operation.
[0019] Yet another aspect of the present disclosure may relate to a non-transitory computer readable storage medium storing one or more instructions for
resource reservation in a network, the one or more instructions include executable
code which, when executed by one or more units of a system, causes the one or
more units to perform certain functions. The one or more instructions when
executed causes a transceiver unit to receive, at an execution module, from a
network node, a request for performing an operation on at least one of a network
function (NF), and a network function component (NFC). Further, the executable
code which, when executed by one or more units of a system, causes a retrieval unit
to retrieve, at the execution module, from a lifecycle manager (LM) module, a set
of details relating to at least one of the NF, and the NFC. Herein, the set of details
comprises at least one or more policies relating to performing of the operation.
Further, the executable code which, when executed by one or more units of a
system, causes a reservation unit to reserve, at the execution module, one or more
resources for performing the operation. Further, the executable code which, when
executed by one or more units of a system, causes a processing unit to execute, at
the LM module, the operation.
BRIEF DESCRIPTION OF THE DRAWINGS
[0020] The accompanying drawings, which are incorporated herein, and
constitute a part of this disclosure, illustrate exemplary embodiments of the
disclosed methods and systems in which like reference numerals refer to the same
parts throughout the different drawings. Components in the drawings are not
necessarily to scale, emphasis instead being placed upon clearly illustrating the
principles of the present disclosure. Also, the embodiments shown in the figures are
not to be construed as limiting the disclosure, but the possible variants of the method
and system according to the disclosure are illustrated herein to highlight the
advantages of the disclosure. It will be appreciated by those skilled in the art that
disclosure of such drawings includes disclosure of electrical components or
circuitry commonly used to implement such components.
[0021] FIG. 1 illustrates an exemplary block diagram of a management and
orchestration (MANO) architecture.
[0022] FIG. 2 illustrates an exemplary block diagram of a computing device
upon which the features of the present disclosure may be implemented in
accordance with exemplary implementation of the present disclosure.
[0023] FIG. 3A illustrates an exemplary block diagram of a system for resource
reservation in a network, in accordance with exemplary implementations of the
present disclosure.
[0024] FIG. 3B illustrates an exemplary block diagram of a system for resource
reservation in a network, in accordance with exemplary implementations of the
present disclosure.
[0025] FIG. 4 illustrates a method flow diagram for resource reservation in a
network, in accordance with exemplary implementations of the present disclosure.
[0026] FIG. 5 illustrates an exemplary call flow diagram for deploying network
functions in a network, in accordance with exemplary implementations of the
present disclosure.
[0027] FIG. 6 illustrates an exemplary call flow diagram for scaling network
functions in a network, in accordance with exemplary implementations of the
present disclosure.
[0028] FIG. 7 illustrates an exemplary flow diagram for a process for resource
reservation in a network, in accordance with exemplary implementations of the
present disclosure.
[0029] The foregoing shall be more apparent from the following more detailed
description of the disclosure.
DETAILED DESCRIPTION
[0030] In the following description, for the purposes of explanation, various
specific details are set forth in order to provide a thorough understanding of
embodiments of the present disclosure. It will be apparent, however, that
embodiments of the present disclosure may be practiced without these specific
details. Several features described hereafter may each be used independently of one
another or with any combination of other features. An individual feature may not
address any of the problems discussed above or might address only some of the
problems discussed above.
[0031] The ensuing description provides exemplary embodiments only, and is
not intended to limit the scope, applicability, or configuration of the disclosure.
Rather, the ensuing description of the exemplary embodiments will provide those
skilled in the art with an enabling description for implementing an exemplary
embodiment. It should be understood that various changes may be made in the
function and arrangement of elements without departing from the spirit and scope
of the disclosure as set forth.
[0032] Specific details are given in the following description to provide a
thorough understanding of the embodiments. However, it will be understood by one
of ordinary skill in the art that the embodiments may be practiced without these
specific details. For example, circuits, systems, processes, and other components
may be shown as components in block diagram form in order not to obscure the
embodiments in unnecessary detail.
[0033] It should be noted that the terms "first", "second", "primary",
"secondary", "target" and the like, herein do not denote any order, ranking, quantity,
or importance, but rather are used to distinguish one element from another.
[0034] Also, it is noted that individual embodiments may be described as a
process which is depicted as a flowchart, a flow diagram, a data flow diagram, a
structure diagram, or a block diagram. Although a flowchart may describe the
operations as a sequential process, many of the operations may be performed in
parallel or concurrently. In addition, the order of the operations may be re-arranged.
A process is terminated when its operations are completed but could have additional
steps not included in a figure.
[0035] The word “exemplary” and/or “demonstrative” is used herein to mean
serving as an example, instance, or illustration. For the avoidance of doubt, the
subject matter disclosed herein is not limited by such examples. In addition, any
aspect or design described herein as “exemplary” and/or “demonstrative” is not
necessarily to be construed as preferred or advantageous over other aspects or
designs, nor is it meant to preclude equivalent exemplary structures and techniques
known to those of ordinary skill in the art. Furthermore, to the extent that the terms
“includes,” “has,” “contains,” and other similar words are used in either the detailed
description or the claims, such terms are intended to be inclusive—in a manner
similar to the term “comprising” as an open transition word—without precluding
any additional or other elements.
[0036] As used herein, a “processing unit” or “processor” or “operating
processor” includes one or more processors, wherein processor refers to any logic
circuitry for processing instructions. A processor may be a general-purpose
processor, a special purpose processor, a conventional processor, a digital signal
processor, a plurality of microprocessors, one or more microprocessors in
association with a Digital Signal Processing (DSP) core, a controller, a
microcontroller, Application Specific Integrated Circuits, Field Programmable
Gate Array circuits, any other type of integrated circuits, etc. The processor may
perform signal coding, data processing, input/output processing, and/or any other
functionality that enables the working of the system according to the present
disclosure. More specifically, the processor or processing unit is a hardware
processor.
[0037] As used herein, “a user equipment”, “a user device”, “a smart-user-device”, “a smart-device”, “an electronic device”, “a mobile device”, “a handheld
device”, “a wireless communication device”, “a mobile communication device”, “a
communication device” may be any electrical, electronic and/or computing device
or equipment, capable of implementing the features of the present disclosure. The
user equipment/device may include, but is not limited to, a mobile phone, smart
phone, laptop, a general-purpose computer, desktop, personal digital assistant,
tablet computer, wearable device or any other computing device which is capable
of implementing the features of the present disclosure. Also, the user device may
contain at least one input means configured to receive an input from unit(s) which
are required to implement the features of the present disclosure.
[0038] As used herein, “storage unit” or “memory unit” refers to a machine or
computer-readable medium including any mechanism for storing information in a
form readable by a computer or similar machine. For example, a computer-readable
medium includes read-only memory (“ROM”), random access memory (“RAM”),
magnetic disk storage media, optical storage media, flash memory devices or other
types of machine-accessible storage media. The storage unit stores at least the data
that may be required by one or more units of the system to perform their respective
functions.
[0039] As used herein “interface” or “user interface” refers to a shared
boundary across which two or more separate components of a system exchange
information or data. The interface may also refer to a set of rules or protocols that
define communication or interaction of one or more modules or one or more units
with each other, which also includes the methods, functions, or procedures that may
be called.
[0040] All modules, units, components used herein, unless explicitly excluded
herein, may be software modules or hardware processors, the processors being a
general-purpose processor, a special purpose processor, a conventional processor,
a digital signal processor (DSP), a plurality of microprocessors, one or more
microprocessors in association with a DSP core, a controller, a microcontroller,
Application Specific Integrated Circuits (ASIC), Field Programmable Gate Array
circuits (FPGA), any other type of integrated circuits, etc.
[0041] As used herein the transceiver unit includes at least one receiver and at
least one transmitter configured respectively for receiving and transmitting data,
signals, information, or a combination thereof between units/components within the
system and/or connected with the system.
[0042] As discussed in the background section, the current known solutions
have several shortcomings. The present disclosure aims to overcome the
above-mentioned and other existing problems in this field of technology by providing a
method and a system for resource reservation in a network. The present method and
system, upon receiving a request for resource allocation or scaling, further
processes the request, and ensures that all resource reservation, allocation, or
release operations are appropriately handled, with responses tracked and stored in
a database. The present method and system also generates periodic feedback on the
status of execution of the request, without any interruptions or resource conflicts.
[0043] FIG. 1 illustrates an exemplary block diagram representation of a
management and orchestration (MANO) architecture [100], in accordance with
exemplary implementations of the present disclosure. The MANO architecture
[100] is developed for managing telecom cloud infrastructure automatically,
managing design or deployment design, managing instantiation of a network
node(s) etc. The MANO architecture [100] deploys the network node(s) in the form
of Virtual Network Function (VNF) and Cloud-native/ Container Network Function
(CNF). The system may comprise one or more components of the MANO
architecture. The MANO architecture [100] is used to auto-instantiate the VNFs
into the corresponding environment of the present disclosure so that it could help
in onboarding other vendors' CNFs and VNFs to the platform. In an
implementation, the system comprises a NFV Platform Decision Analytics (NPDA)
[1096] component.
[0044] As shown in FIG. 1, the MANO architecture [100] comprises a user
interface layer [102], a network function virtualization (NFV) and software defined
network (SDN) design function module [104], a platforms foundation services
module [106], a platform core services module [108] and a platform resource
adapters and utilities module [112], wherein all the components are assumed to be
connected to each other in a manner as obvious to the person skilled in the art for
implementing features of the present disclosure.
[0045] The NFV and SDN design function module [104] further comprises a
VNF lifecycle manager (compute) [1042], a VNF catalog [1044], a network
services catalog [1046], a network slicing and service chaining manager [1048], a
physical and virtual resource manager [1050], and a CNF lifecycle manager [1052].
The VNF lifecycle manager (compute) [1042] is responsible for determining on
which server of the network the microservice will be instantiated. The VNF
lifecycle manager (compute) [1042] will manage the overall flow of incoming/outgoing requests
during interaction with the user. The VNF lifecycle manager (compute) [1042] is
responsible for determining which sequence is to be followed for executing the
process, for example, the sequence for execution of processes P1 and P2 in an AMF
network function of the network (such as a 5G network). The VNF catalog
[1044] stores the metadata of all the VNFs (also CNFs in some cases). The network
services catalog [1046] stores the information of the services that need to be run.
The network slicing and service chaining manager [1048] manages the slicing (an
ordered and connected sequence of network service/ network functions (NFs)) that
must be applied to a specific networked data packet. The physical and virtual
resource manager [1050] stores the logical and physical inventory of the VNFs. Just
like the VNF lifecycle manager (compute) [1042], the CNF lifecycle manager
[1052] is similarly used for the CNFs lifecycle management.
[0046] The platforms foundation services module [106] further comprises a
microservices elastic load balancer [1062], an identity and access manager [1064],
a command line interface (CLI) [1066], a central logging manager [1068], and an
event routing manager [1070]. The microservices elastic load balancer [1062] is
used for maintaining the load balancing of the requests for the services. The identity
and access manager [1064] is used for logging purposes. The command line
interface (CLI) [1066] is used to provide commands to execute certain processes
which require changes during the run time. The central logging manager [1068] is
responsible for keeping the logs of every service. These logs are generated by the
MANO platform [100]. These logs are used for debugging purposes. The event
routing manager [1070] is responsible for routing the events i.e., the application
programming interface (API) hits to the corresponding services.
[0047] The platforms core services module [108] further comprises NFV
infrastructure monitoring manager [1082], an assure manager [1084], a
performance manager [1086], a policy execution engine [1088], a capacity
monitoring manager [1090], a release management (mgmt.) repository [1092], a
configuration manager and GCT [1094], an NFV platform decision analytics
[1096], a platform NoSQL DB [1098], a platform schedulers and cron jobs [1100],
a VNF backup and upgrade manager [1102], a micro service auditor [1104], and a
platform operations, administration and maintenance manager [1106]. The NFV
infrastructure monitoring manager [1082] monitors the infrastructure part of the
NFs, for example, any metrics such as CPU utilization by the VNF. The assure manager
[1084] is responsible for supervising the alarms the vendor is generating. The
performance manager [1086] is responsible for managing the performance counters.
The policy execution engine (PEEGN) [1088] is responsible for managing all
the policies. The capacity monitoring manager (CPM) [1090] is responsible for
sending the request to the PEEGN [1088]. The release management (mgmt.)
repository (RMR) [1092] is responsible for managing the releases and the images
of all the vendor network node. The configuration manager and GCT [1094]
manages the configuration and GCT of all the vendors. The NFV platform decision
analytics (NPDA) [1096] helps in deciding the priority of using the network
resources. It is further noted that the policy execution engine (PEEGN) [1088], the
configuration manager and GCT [1094] and the (NPDA) [1096] work together. The
platform NoSQL DB [1098] is a platform database for storing all the inventory
(both physical and logical) as well as the metadata of the VNFs and CNF. It may
be noted that the platform NoSQL DB [1098] may be just a narrow implementation
of the present disclosure, and any other kind of structure for the database may be
implemented for the platform database such as relational or non-relational database.
The platform schedulers and cron jobs [1100] schedules the task such as but not
limited to triggering of an event, traversing the network graph etc. The VNF backup
and upgrade manager [1102] takes backup of the images, binaries of the VNFs and
the CNFs and produces those backups on demand in case of server failure. The
micro service auditor [1104] audits the microservices. For example, in a hypothetical
case where instances are instantiated using the network resources without being
instantiated by the MANO architecture [100], the micro service auditor [1104]
audits and reports the same so that resources can be released for services running
in the MANO architecture [100], thereby assuring that the services only run on the
MANO platform [100]. The platform operations, administration, and maintenance
manager [1106] is used for managing newer instances that are spawned.
[0048] The platform resource adapters and utilities module [112] further
comprises a platform external API adapter and gateway [1122], a generic decoder
and indexer (XML, CSV, JSON) [1124], a docker service adapter [1126], an API
adapter [1128], and a NFV gateway [1130]. The platform external API adapter and
gateway [1122] may be responsible for handling the external services (to the
MANO platform [100]) that require the network resources. The generic decoder
and indexer (XML, CSV, JSON) [1124] directly gets the data of the vendor system
in the XML, CSV, JSON format. The docker service adapter [1126] may be the
interface provided between the telecom cloud and the MANO architecture [100] for
communication. The API adapter [1128] may be used to connect with the virtual
machines (VMs). The NFV gateway [1130] may be responsible for providing the
path to each service going to/incoming from the MANO architecture [100].
[0049] The docker service adapter (DSA) [1126] is a microservices-based
system designed to deploy and manage Container Network Functions (CNFs) and
their components (CNFCs) across Docker nodes. The DSA [1126] offers REST
endpoints for key operations, including uploading container images to a Docker
registry, terminating CNFC instances, and creating Docker volumes and networks.
CNFs, which are network functions packaged as containers, may consist of multiple
CNFCs. The DSA [1126] facilitates the deployment, configuration, and
management of these components by interacting with Docker's API, ensuring
proper setup and scalability within a containerized environment. This approach
provides a modular and flexible framework for handling network functions in a
virtualized network setup.
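By way of a non-limiting illustration only, the following Python sketch shows the kind of container operations the DSA [1126] exposes, expressed here directly against the Docker SDK for Python; the image name, container name, network name, volume name, and helper functions are hypothetical and do not reflect the DSA's actual REST endpoints.

```python
# Illustrative sketch only (not the DSA's actual implementation): deploying and
# terminating a hypothetical CNFC container using the Docker SDK for Python.
import docker

client = docker.from_env()  # connect to the local Docker daemon

def deploy_cnfc(image: str, name: str, network: str, volume: str):
    """Create the supporting volume/network and start a CNFC container."""
    client.volumes.create(name=volume)                # persistent storage for the CNFC
    client.networks.create(network, driver="bridge")  # isolated network for the CNF
    return client.containers.run(
        image,
        name=name,
        network=network,
        volumes={volume: {"bind": "/data", "mode": "rw"}},
        detach=True,
    )

def terminate_cnfc(name: str):
    """Stop and remove a CNFC container instance."""
    container = client.containers.get(name)
    container.stop()
    container.remove()

# Example usage (hypothetical image and names):
# deploy_cnfc("registry.example.com/amf-cnfc:1.0", "amf-cnfc-1", "cnf-net", "cnf-vol")
```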
[0050] FIG. 2 illustrates an exemplary block diagram of a computing device
[200] (herein, also referred to as a computer system [200]) upon which one or more
features of the present disclosure may be implemented in accordance with an
exemplary implementation of the present disclosure. The present disclosure can be
implemented on a computing device [200] as shown in FIG. 2. The computing
device [200] implements the present disclosure in accordance with the MANO
architecture (as shown in FIG. 1). In an implementation, the computing device
[200] may also implement a method for resource reservation in a network, utilising
a system, or one or more sub-systems, provided in the network. In another
implementation, the computing device [200] itself implements the method for
resource reservation in a network, using one or more units configured within the
computing device [200], wherein said one or more units are capable of
implementing the features as disclosed in the present disclosure.
[0051] The computing device [200] may include a bus [202] or other
communication mechanism for communicating information, and a hardware
processor [204] coupled with bus [202] for processing information. The hardware
processor [204] may be, for example, a general-purpose microprocessor. The
computing device [200] may also include a main memory [206], such as a random-access memory (RAM), or other dynamic storage device, coupled to the bus [202]
for storing information and instructions to be executed by the processor [204]. The
main memory [206] also may be used for storing temporary variables or other
intermediate information during execution of the instructions to be executed by the
processor [204]. Such instructions, when stored in non-transitory storage media
accessible to the processor [204], render the computing device [200] into a special-purpose machine that is customized to perform the operations specified in the
instructions. The computing device [200] further includes a read only memory
(ROM) [208] or other static storage device coupled to the bus [202] for storing static
information and instructions for the processor [204].
[0052] A storage device [210], such as a magnetic disk, optical disk, or solid-state drive is provided and coupled to the bus [202] for storing information and
instructions. The computing device [200] may be coupled via the bus [202] to a
display [212], such as a cathode ray tube (CRT), Liquid crystal Display (LCD),
Light Emitting Diode (LED) display, Organic LED (OLED) display, etc. for
displaying information to a computer user. An input device [214], including
alphanumeric and other keys, touch screen input means, etc. may be coupled to the
bus [202] for communicating information and command selections to the processor
[204]. Another type of user input device may be a cursor controller [216], such as
a mouse, a trackball, or cursor direction keys, for communicating direction
information and command selections to the processor [204], and for controlling
cursor movement on the display [212]. The input device typically has two degrees
of freedom in two axes, a first axis (e.g., x) and a second axis (e.g., y), that allow
the device to specify positions in a plane.
[0053] The computing device [200] may implement the techniques described
herein using customized hard-wired logic, one or more ASICs or FPGAs, firmware,
and/or program logic which in combination with the computing device [200] causes
or programs the computing device [200] to be a special-purpose machine.
According to one implementation, the techniques herein are performed by the
computing device [200] in response to the processor [204] executing one or more
sequences of one or more instructions contained in the main memory [206]. Such
instructions may be read into the main memory [206] from another storage medium,
such as the storage device [210]. Execution of the sequences of instructions
contained in the main memory [206] causes the processor [204] to perform the
process steps described herein. In alternative implementations of the present
disclosure, hard-wired circuitry may be used in place of or in combination with
25 software instructions.
[0054] The computing device [200] also may include a communication
interface [218] coupled to the bus [202]. The communication interface [218]
provides a two-way data communication coupling to a network link [220] that is
connected to a local network [222]. For example, the communication interface
[218] may be an integrated services digital network (ISDN) card, cable modem,
satellite modem, or a modem to provide a data communication connection to a
corresponding type of telephone line. As another example, the communication
interface [218] may be a local area network (LAN) card to provide a data
communication connection to a compatible LAN. Wireless links may also be
implemented. In any such implementation, the communication interface [218]
sends and receives electrical, electromagnetic, or optical signals that carry digital
data streams representing various types of information.
[0055] The computing device [200] can send messages and receive data,
including program code, through the network(s), the network link [220] and the
communication interface [218]. In the Internet example, a server [230] might
transmit a requested code for an application program through the Internet [228], the
ISP [226], the local network [222], a host [224] and the communication interface
[218]. The received code may be executed by the processor [204] as it is received,
and/or stored in the storage device [210], or other non-volatile storage for later
execution.
[0056] Referring to FIG. 3A, an exemplary block diagram of a processing
system [300], is shown, in accordance with the exemplary implementations of the
present disclosure. The processing system [300] is configured to facilitate resource
reservation in a network. In an implementation, the processing system [300] is a
part of a system [300] for resource reservation in a network. The processing system
[300] comprises at least one transceiver unit [302], at least one retrieval unit [304],
at least one reservation unit [306], at least one processing unit [308], and at least
one determination unit [310]. Also, all of the components/ units of the system [300]
are assumed to be connected to each other unless otherwise indicated below. As
shown in FIG. 3A, all units shown within the processing system [300] should also
be assumed to be connected to each other. Also, in FIG. 3A only a few units are
shown, however, the processing system [300] may comprise multiple such units or
the processing system [300] may comprise any such numbers of said units, as
required to implement the features of the present disclosure.
[0057] Referring to FIG. 3B, an exemplary block diagram of a system [350]
for resource reservation in a network, is shown, in accordance with the exemplary
implementations of the present disclosure. The system [350] comprises the
processing system [300], at least one execution module [352], at least one lifecycle
manager (LM) module [354], at least one operational manager (OM) module [356],
at least one user interface (UI) [358], at least one network function platform (NP)
[360], and at least one inventory manager (IM) module [362]. Also, all of the
components/ units of the system [350] are assumed to be connected to each other
unless otherwise indicated below. As shown in FIG. 3B, all units shown within the
system [350] should also be assumed to be connected to each other. Also, in FIG.
3B, only a few units are shown, however, the system [350] may comprise multiple
such units or the system [350] may comprise any such numbers of said units, as
required to implement the features of the present disclosure.
[0058] Further, in an implementation, the system [350] may be present in a
user device/ user equipment [102] to implement the features of the present
disclosure. The system [350] may be a part of the user device/ or may be
independent of but in communication with the user device (which may also be
referred to herein as a UE). In another implementation, the system [350] may reside in a server or a
network entity. In yet another implementation, the system [350] may reside partly
in the server/ network entity and partly in the user device.
[0059] The system [350] is configured for resource reservation in a network,
with the help of the interconnection between the components/units of the system
[350].
[0060] The system [350] comprises the transceiver unit [302] configured to
receive, at the execution module [352], from a network node, a request for
performing an operation on at least one of a network function (NF), and a network
function component (NFC).
[0061] Herein, the execution module [352] may enforce one or more policies
related to the operations of network functions (NFs) and network function
components (NFCs) that are associated with the network. The execution module
[352] may evaluate any incoming requests received at the execution module [352]
associated with said NFs and NFCs against a set of one or more pre-defined rules,
and may further perform one or more actions at the NFs and NFCs, based on the
evaluation of said request. In an exemplary implementation, the execution module
[352] is a policy execution engine (PEEGN) [1088] as shown in FIG. 1.
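By way of a non-limiting illustration only, the following Python sketch shows one possible way the execution module [352] could evaluate an incoming request against a set of pre-defined rules; the request fields, rule contents, and threshold values are hypothetical.

```python
# Illustrative sketch only: evaluating an incoming NF/NFC request against a set of
# pre-defined rules at the execution module. Rule names and fields are hypothetical.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class OperationRequest:
    target: str         # e.g. "NF" or "NFC"
    operation: str      # e.g. "deployment" or "scaling"
    requested_cpu: int  # vCPUs requested for the operation

Rule = Callable[[OperationRequest], bool]

PREDEFINED_RULES: List[Rule] = [
    lambda r: r.operation in ("deployment", "scaling"),  # only known operations
    lambda r: r.target in ("NF", "NFC"),                 # only known targets
    lambda r: r.requested_cpu <= 64,                     # per-request resource cap
]

def evaluate(request: OperationRequest) -> bool:
    """Return True if every pre-defined rule admits the request."""
    return all(rule(request) for rule in PREDEFINED_RULES)

print(evaluate(OperationRequest("NF", "scaling", requested_cpu=8)))  # True
```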
[0062] Further, the network node is a network entity within the network that
may interact with the execution module [352] to initiate one or more operations on
the NFs and NFCs. In an implementation, the network node may send a request
specifying a type of operation to be performed on the NFs and NFCs. In an aspect,
the NFs and NFCs may be associated with cloud-native network functions (CNFs)
and cloud-native network function components (CNFCs). In another aspect, the
NFs and NFCs may be associated with virtualized network functions (VNFs) and
virtualized network function components (VNFCs).
[0063] Further, the system [350] comprises the retrieval unit [304] configured
to retrieve, at the execution module [352], from a lifecycle manager (LM) module
[354], a set of details relating to at least one of the NF, and the NFC.
[0064] Herein, the LM module [354] is responsible for storing and managing
a set of information associated with the lifecycle of NFs and NFCs. The set of
information may include at least one of a current state of NFs and NFCs, a resource
usage of NFs and NFCs, and operational requirements of NFs and NFCs. In one
aspect, the set of details mentioned above may include at least one of the sets of
information mentioned above. In another aspect, the set of details may include any
other details that may have not been mentioned herein, but would be known to a
person skilled in the art. In an exemplary implementation, the LM module [354] is
a CNF lifecycle manager [1052] as shown in FIG. 1. In another exemplary
implementation, the LM module [354] is a VNF lifecycle manager [1042] as shown
in FIG. 1.
[0065] It is to be noted that the set of details comprises at least one or more
policies relating to performing the operation.
[0066] Further, the one or more policies mentioned herein may comprise a set
of rules that may dictate the execution of the operation at the NFs and NFCs.
[0067] In an example, the one or more policies may include a scaling policy
that may determine the set of rules to scale the NFs and NFCs.
[0068] In another example, the one or more policies may include a security
policy that may be associated with the security of the NFs and NFCs.
[0069] In yet another example, the one or more policies may include a resource
allocation policy that may define the allocation of one or more network resources
at the NFs and NFCs.
[0070] It may be noted that the one or more policies may include any other
policy that is not included herein, but would be known to a person skilled in
the art.
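By way of a non-limiting illustration only, the following Python sketch shows one possible representation of such policies as retrieved with the set of details; all field names, identifiers, and threshold values are hypothetical.

```python
# Illustrative sketch only: one possible representation of the policies retrieved with
# the set of details. Field names and threshold values are hypothetical.
scaling_policy = {
    "type": "scaling",
    "min_instances": 1,
    "max_instances": 5,
    "scale_out_cpu_threshold": 0.80,  # add an instance above 80% CPU
    "scale_in_cpu_threshold": 0.30,   # remove an instance below 30% CPU
}

security_policy = {
    "type": "security",
    "allowed_networks": ["cnf-net"],
    "require_tls": True,
}

resource_allocation_policy = {
    "type": "resource_allocation",
    "cpu_per_instance": 2,            # vCPUs reserved per NFC instance
    "memory_per_instance_mb": 4096,
}

set_of_details = {
    "nf_id": "amf-001",               # hypothetical NF identifier
    "current_state": "RUNNING",
    "policies": [scaling_policy, security_policy, resource_allocation_policy],
}
```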
[0071] Further, the execution module [352] and the LM module [354] are
communicably coupled by an interface, which is a PE_CM interface. Further, the
PE_CM interface allows a flow of information between the execution module [352]
and the LM module [354] that is required for managing the one or more operations
on NFs or NFCs.
[0072] The PE_CM interface may connect the execution module [352] and the
LM module [354]. The PE_CM interface allows for bidirectional communication
between the execution module [352] and the LM module [354]. In an embodiment,
the PE_CM interface is configured to facilitate exchange of information using
hypertext transfer protocol (http) rest application programming interface (API). In
an embodiment, the http rest API is used in conjunction with JSON and/or XML
communication media. In another embodiment, the PE_CM interface is configured
to facilitate exchange of information by establishing a web-socket connection
between the execution module [352] and the LM module [354]. A web-socket
connection may involve establishing a persistent connectivity between the
execution module [352] and the LM module [354]. An example of the web-socket
based communication includes, without limitation, a transmission control protocol
(TCP) connection. In such a connection, information, such as operational status,
health, etc. of different components may be exchanged through the interface using
a ping-pong-based communication.
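By way of a non-limiting illustration only, the following Python sketch shows one possible REST exchange over the PE_CM interface using the requests library; the host name and endpoint path are hypothetical, as the disclosure does not prescribe a concrete URL scheme.

```python
# Illustrative sketch only: one way the PE_CM interface could exchange information
# over an HTTP REST API with a JSON body. The host and endpoint path are hypothetical.
import requests

LM_MODULE_URL = "http://lm-module.example.local:8080"  # hypothetical LM module address

def get_nf_details(nf_id: str) -> dict:
    """Execution module side: retrieve the set of details for an NF from the LM module."""
    response = requests.get(f"{LM_MODULE_URL}/pe_cm/nf/{nf_id}/details", timeout=5)
    response.raise_for_status()
    return response.json()  # e.g. {"current_state": ..., "policies": [...]}

# For the web-socket variant, a persistent TCP-based connection would instead be kept
# open, with periodic ping/pong exchanges reporting operational status and health.
```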
[0073] Further, the execution module [352] and the LM module [354] are
communicably coupled to the operation manager (OM) module [356]. Herein, the
OM module [356] is configured to facilitate communication between available
instances of the execution module [352] and available instances of the LM module
[354]. In one aspect, the OM module [356] is configured to manage communication
between available instances of the execution module [352] and the LM module
[354]. For instance, if a plurality of execution module [352] instances is handling
one or more operations on multiple NFs and NFCs, then the OM module [356] may
ensure that each instance is linked with its corresponding LM module [354]
instance, as required for performing said one or more operations.
[0074] In another aspect, the OM module [356] is configured to handle a load
balancing of the incoming one or more requests, implying that the OM module
[356] distributes the received one or more requests across the plurality of instances
5 of said execution module [352] and LM module [354] in order to prevent any
overload of requests at a specific instance.
[0075] In yet another aspect, in an event of a failure of a specific instance of
the execution module [352] or the LM module [354], the OM module [356] may
re-route the communications to an alternative instance of said execution module
[352] or LM module [354], in order to ensure that the one or more ongoing
operations may continue with minimal disruption.
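By way of a non-limiting illustration only, the following Python sketch shows one possible way the OM module [356] could combine round-robin load balancing with failover re-routing across execution module instances; the instance objects and their handle() method are hypothetical.

```python
# Illustrative sketch only: spreading requests across execution module instances and
# failing over to an alternative instance. Instance objects are hypothetical; the
# disclosure does not prescribe a particular routing algorithm.
import itertools

class OperationManager:
    def __init__(self, execution_instances):
        self._instances = list(execution_instances)
        self._round_robin = itertools.cycle(self._instances)

    def route(self, request):
        """Try instances in round-robin order, skipping failed ones."""
        for _ in range(len(self._instances)):
            instance = next(self._round_robin)
            try:
                return instance.handle(request)   # forward to the paired LM instance
            except ConnectionError:
                continue                          # re-route to an alternative instance
        raise RuntimeError("no execution module instance available")
```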
[0076] Further, in an implementation of the present disclosure, the operation is
a deployment operation. Herein, the deployment operation may involve a process
of setting up one or more new NFs or NFCs in the network, based on at least one of
network demands, service requirements, or the like. It is to be noted that the
deployment operation of one or more NFs or NFCs can also be termed as
instantiation of the one or more NFs or NFCs.
[0077] Further, prior to receiving, at the execution module [352], the request,
the transceiver unit [302] is configured to receive, at the LM module [354], from
the user interface (UI) [358], the request. As mentioned herein, the request is firstly
received at the LM module [354] and is sent by the UI. Herein, the UI [358] can be
a graphical user interface (GUI) or a command-line interface (CLI) or any other
interface that would be known to a person skilled in the art.
[0078] Further, the transceiver unit [302] is configured to transmit, from the
LM module [354] to the execution module [352], the request. As mentioned herein,
as the LM module [354] receives the request from the UI [358], then the transceiver
unit [302] passes the request to the execution module [352] through the PE_CM
interface, for execution of the operation mentioned in said request. In an
implementation, the transceiver unit [302] may send an acknowledgement to the
UI [358] indicative of successful receipt of the request at the LM module [354] or
the execution module [352], as per the status of said request.
[0079] Further, in another implementation of the present disclosure, the
operation is a scaling operation. The scaling operation is referred to as an operation
for adjusting the one or more network resources that are allocated to already
deployed NFs or NFCs, in response to demand or performance requirements
within the network. In an aspect, the scaling operation may involve a vertical
scaling, implying that the operation may involve increasing the one or more
network resources that are available to NFs or NFCs without changing the number
of instances that are already present at the NFs or NFCs. In another aspect, the
scaling operation may involve a horizontal scaling, implying that the operation may
involve an addition of one or more instances of the NFs or NFCs in order to handle
a higher load, distribution of data traffic or workload across other newly added
instances.
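By way of a non-limiting illustration only, the following Python sketch contrasts the vertical and horizontal scaling aspects described above; the thresholds and field names are hypothetical.

```python
# Illustrative sketch only: distinguishing vertical from horizontal scaling when a
# scaling request is processed. Thresholds and field names are hypothetical.
def plan_scaling(current_instances: int, cpu_per_instance: int, load_ratio: float,
                 mode: str) -> dict:
    if mode == "vertical":
        # grow resources of the existing instances; instance count unchanged
        return {"instances": current_instances,
                "cpu_per_instance": cpu_per_instance * 2}
    if mode == "horizontal":
        # add instances to spread the workload; per-instance resources unchanged
        extra = 1 if load_ratio > 0.8 else 0
        return {"instances": current_instances + extra,
                "cpu_per_instance": cpu_per_instance}
    raise ValueError(f"unknown scaling mode: {mode}")

print(plan_scaling(2, 4, load_ratio=0.9, mode="horizontal"))  # {'instances': 3, ...}
```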
[0080] Further, prior to receiving, at the execution module [352], the request,
the transceiver unit [302] is configured to receive, from the network function
platform (NP) [360], the request. As mentioned herein, the request for scaling
operation is primarily received from the NP [360]. Herein, the NP [360] is
responsible for monitoring and managing the scaling of the one or more NFs or
NFCs. In an event where the NP [360] detects an extensive workload at one or more
instances of NFs or NFCs, the NP [360] may generate a scaling request and may
send the scaling request to the execution module [352]. The execution module [352]
may further process the received scaling request, and accordingly perform one or
more operations as mentioned in said scaling request.
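By way of a non-limiting illustration only, the following Python sketch shows one possible way the NP [360] could detect an extensive workload and forward a scaling request to the execution module [352]; the threshold, endpoint, and payload fields are hypothetical.

```python
# Illustrative sketch only: the NP detecting an extensive workload and sending a
# scaling request to the execution module. Threshold and endpoint are hypothetical.
import requests

EXECUTION_MODULE_URL = "http://execution-module.example.local:8080"  # hypothetical
CPU_THRESHOLD = 0.85

def monitor_and_request_scaling(nf_id: str, cpu_usage: float):
    """Generate a scaling request when the observed CPU usage crosses the threshold."""
    if cpu_usage <= CPU_THRESHOLD:
        return None
    scaling_request = {"nf_id": nf_id, "operation": "scaling",
                       "reason": f"cpu_usage={cpu_usage:.2f}"}
    return requests.post(f"{EXECUTION_MODULE_URL}/requests",
                         json=scaling_request, timeout=5)
```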
[0081] The system [350] further comprises the reservation unit [306]
configured to reserve, at the execution module [352], one or more resources for
performing the operation. The reservation unit [306] is further configured to reserve
the one or more network resources that are required for performing the operation
mentioned in the said received request. The reservation unit [306] is configured to
calculate, at the execution module [352], one or more resources required for
performing the operation based at least on the set of details. Herein, the calculation
of the one or more resources is based on the type of operation mentioned in the
request, and on the set of details (described above) mentioned in said request, which
may include at least one of the current state of NFs and NFCs, the resource usage
of NFs and NFCs, the operational requirements of NFs and NFCs, and the one or
more execution policies at the NFs and NFCs.
[0082] The system [350] further comprises the determination unit [310]
configured to, prior to reserving, at the execution module [352], the one or more
resources, determine, at the execution module [352], from the inventory manager
(IM) module [362], available resources. The IM module [362] mentioned herein, is
responsible for tracking and managing the one or more network resources available
in the network. The primary purpose of the IM module [362] is to inform the
execution module [352] of the real-time availability of the one or more network
resources, before allowing the execution module [352] to reserve the one or more
network resources.
[0083] For ease of understanding, the above-mentioned paragraph is explained
through an exemplary event. In the event when a request for a deployment
operation or a scaling operation is received at the execution module [352], the
determination unit [310] may verify the one or more available network resources
via the IM module [362] and may further ensure that the required number of
network resources is available to perform said operation (deployment operation or
scaling operation). The requirement of firstly verifying the availability of one or
more network resources is to avoid any attempt of reserving more network
resources than the currently available network resources, which may further lead
to a failed execution of said operation.
[0084] Further, in response to the available resources being at least equal to the
required one or more resources, the reservation unit [306] is configured to reserve,
at the execution module [352], the one or more resources for performing the
operation. Post verification of the one or more available network resources against
the one or more required resources for the execution of the operation, the reservation
unit [306] may reserve the one or more required resources for the execution of the
operation (deployment operation or scaling operation) mentioned in said request.
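By way of a non-limiting illustration only, the following Python sketch captures the reservation condition described above, in which resources are reserved only when the available resources are at least equal to the required resources; the inventory object and field names are hypothetical.

```python
# Illustrative sketch only: reserving resources only when the available resources
# reported by the IM module are at least equal to the calculated requirement.
# Field names and the inventory object are hypothetical.
def reserve_resources(required: dict, available: dict, inventory) -> bool:
    """Reserve via the inventory manager if every required quantity is available."""
    if all(available.get(k, 0) >= v for k, v in required.items()):
        inventory.reserve(required)   # e.g. decrement cpu/memory in the repository
        return True                   # acknowledgement can now be sent to the LM module
    return False                      # insufficient resources; avoid a failed execution

# Example: required = {"cpu": 4, "memory_mb": 8192},
#          available = {"cpu": 16, "memory_mb": 65536}  -> reservation succeeds
```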
[0085] Further, the transceiver unit [302] is configured to transmit, from the
execution module [352] to the LM module [354], an acknowledgement indicative
of reserving, at the execution module [352], the one or more resources for
performing the operation. Once the reservation unit [306] successfully reserves
the one or more required network resources for the operation, the
transceiver unit [302] transmits the acknowledgement from the execution module
[352] back to the LM module [354] (if the operation is deployment operation) or
the NP [360] (if the operation is scaling operation). Herein, the acknowledgement
is indicative that the one or more required network resources have been successfully
reserved at the execution module [352] and that the operation is further to be
executed. It is to be noted that the acknowledgement may include at least one of a
text, a token, a flag, or any other indicator that would be known to a person skilled
in the art.
[0086] The system [350] further comprises the processing unit [308]
configured to execute, at the LM module [354], the operation. The processing unit
[308] is responsible for executing operations associated with the lifecycle
management of one or more NFs and NFCs.
[0087] In an aspect, where the operation is a deployment operation, the processing
unit [308] manages the allocation of one or more network resources and may further
ensure that all configuration and dependency requirements associated with the one
or more new NFs or NFCs are configured before the deployment of said new
NFs or NFCs.
[0088] In an aspect, where the operation is a scaling operation, the processing unit
[308] ensures that the additional one or more network resources are allocated
to handle the increased load. The processing unit [308] may apply the one or more
policies defined within the LM module [354] to ensure that the scaling operation is
effectively executed.
[0089] The transceiver unit [302] is further configured to transmit, from the
execution module [352] to the network node, a notification indicative of performing
the operation on at least one of the NF, and the NFC. The notification may refer to
a confirmation to the network node that the mentioned operation is successfully
executed, and the new one or more NFs or NFCs are now operational within the
network.
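By way of a non-limiting illustration only, the following Python sketch ties the above units together into a single end-to-end flow; every module object, method name, and field shown is a hypothetical stand-in for the corresponding unit of the system [350].

```python
# Illustrative end-to-end sketch only; all module objects and method names here are
# hypothetical stand-ins for the claimed units of the system.
def handle_request(request, lm_module, im_module, network_node):
    # Retrieval unit: fetch the set of details (state, usage, policies) from the LM module.
    details = lm_module.get_details(request["nf_id"])
    # Reservation unit: calculate what the operation needs from the retrieved details.
    required = {"cpu": details["cpu_per_instance"],
                "memory_mb": details["memory_per_instance_mb"]}
    # Determination unit: ask the IM module what is available right now.
    available = im_module.get_available_resources()
    if not all(available.get(k, 0) >= v for k, v in required.items()):
        return "rejected: insufficient resources"
    im_module.reserve(required)                          # reservation unit reserves resources
    lm_module.acknowledge_reservation(request["nf_id"])  # transceiver unit sends acknowledgement
    lm_module.execute(request)                           # processing unit executes the operation
    network_node.notify("operation performed on " + request["nf_id"])  # notification
    return "completed"
```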
[0090] Referring to FIG. 4, an exemplary method flow diagram [400] for
resource reservation in a network, in accordance with exemplary implementations
of the present disclosure is shown. In an implementation, the method [400] is
performed by the system [350]. Further, in an implementation, the system [350]
may be present in a server device to implement the features of the present
disclosure.
[0091] Also, as shown in FIG. 4, the method [400] initially starts at step [402].
[0092] At step [404], the method comprises receiving, by the transceiver unit
[302], at the execution module [352], from the network node, the request for
performing the operation on at least one of a network function (NF), and a network
function component (NFC).
[0093] At step [406], the method comprises retrieving, by the retrieval unit
[304], at the execution module [352], from the lifecycle manager (LM) module
[354], the set of details relating to at least one of the NF, and the NFC.
[0094] It is to be noted that the set of details comprises at least one or more
policies relating to performing the operation.
[0095] The method [400] further explains that the execution module [352] and
15 the LM module [354] are communicably coupled by the interface, which is the
PE_CM interface.
[0096] The method [400] further explains that the execution module [352] and
the LM module [354] are communicably coupled to the operation manager (OM)
module [356]. Herein, the OM module [356] is configured to facilitate
communication between available instances of the execution module [352] and
available instances of the LM module [354].
[0097] In an embodiment, if the operation is the deployment operation, then
prior to the step of receiving, at the execution module [352], the request, the method
[400] comprises receiving, by the transceiver unit [302], at the LM module [354],
from a user interface (UI) [358], the request, and transmitting, by the transceiver
unit [302], from the LM module [354] to the execution module [352], the request.
[0098] In another embodiment, if the operation is the scaling operation, prior
to the step of receiving, at the execution module [352], the request, the method [400]
comprises receiving, by the transceiver unit [302], from the network function
platform (NP) [360], the request.
[0099] At step [408], the method [400] comprises reserving, by the reservation
unit [306], at the execution module [352], one or more resources for performing the
operation.
[0100] The method [400] further comprises calculating, by the reservation unit
[306], at the execution module, one or more resources required for performing the
operation based at least on the set of details.
[0101] Further, prior to the step of reserving, at the execution module [352],
the one or more resources, the method [400] comprises determining, by the
determination unit [310], at the execution module [352], from the inventory
manager (IM) module [362], available resources.
[0102] In response to the available resources being at least equal to the required
one or more resources, the method [400] comprises the step of reserving, at the
execution module [352], the one or more resources for performing the operation.
[0103] The method [400] further comprises transmitting, by the transceiver
unit [302], from the execution module [352] to the LM module [354], the
acknowledgement indicative of reserving, at the execution module [352], the one
or more resources for performing the operation.
[0104] At step [410], the method [400] comprises executing, by the processing
unit [308], at the LM module [354], the operation.
[0105] The method [400] further comprises transmitting, by the transceiver
unit [302], from the execution module [352] to the network node, the notification
indicative of performing the operation on at least one of the NF, and the NFC.
[0106] Thereafter, at step [412], the method [400] is terminated.
[0107] Referring to FIG. 5, an exemplary flow diagram [500] for resource
reservation in a network during a deployment operation, in accordance with
exemplary implementations of the present disclosure is shown. In an
implementation, the flow [500] is performed by the system [350].
[0108] At step 502, the flow indicates that a user interacts with the system via
the UI [358]. Herein, the UI [358] may initiate a deployment of a cloud-native
function (CNF) by sending a CNF deployment request via the UI [358].
[0109] At step 504, the flow indicates that, post receiving the deployment
request from the UI [358], the LM module [354] forwards the request to the
execution module [352] for further processing. The LM module [354] sends a
reserve CNF resources request to the execution module [352].
[0110] At step 506, the flow indicates that, post receiving the reserve CNF
resources request from the LM module [354], the execution module [352]
communicates with the IM module [362] to check the availability of resources that
are needed for the deployment of the CNF. Herein, the IM module [362] is
responsible for managing a repository of the available resources.
[0111] At step 508, the IM module [362] processes the request, and may verify
that the required one or more resources for said CNF deployment are
available within the repository. Further, if the required resources are available, the
IM module [362] sends a confirmation back to the execution module [352] that the
required resources are now reserved and are ready to be utilized for the deployment
of the CNF.
[0112] At step 510, post confirmation for the availability of the required
resources, the execution module [352] reserves the required resources and
simultaneously generates a token (such as a CNF token) to confirm the reserved
resources. Further, the execution module [352] sends a message back to the IM
module [362] to confirm that the CNF token is generated.
[0113] A token is a mechanism in a communication architecture, such as the 5G
communication architecture, to authenticate an event or a communication received.
The token may be generated by a network function, such as a session management
function (SMF), to indicate that an event (such as reservation of resources) has
occurred at a target network node (such as the execution module [352] and/or the
LM module [354]). The token generated is then confirmed by an authentication
service (such as an authentication server function (AUSF)), which may be
indicative of a confirmation that the event for which the token has been generated
has occurred. The token may then be transmitted or broadcast to target network
nodes and/or services to update information at said target network nodes and/or
services of occurrence of the event.
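By way of illustration only, the following Python sketch shows one possible token mechanism consistent with paragraph [0113]; the token format, the shared secret and the function names (generate_token, confirm_token) are hypothetical assumptions, not part of the present disclosure.

```python
# Illustrative sketch only; the disclosure describes the token generically, so
# the HMAC-based format below is an assumption. A token attests that an event
# occurred at a target node, and a confirming service verifies it before it is
# broadcast to interested services.
import hashlib
import hmac
import json
import time

SHARED_SECRET = b"example-shared-secret"  # placeholder, not from the disclosure


def generate_token(event: str, target_node: str) -> dict:
    """Generate a token attesting that an event occurred at a target node."""
    payload = {"event": event, "target": target_node, "issued_at": int(time.time())}
    body = json.dumps(payload, sort_keys=True).encode()
    signature = hmac.new(SHARED_SECRET, body, hashlib.sha256).hexdigest()
    return {"payload": payload, "signature": signature}


def confirm_token(token: dict) -> bool:
    """Confirm the token, as an authentication service such as an AUSF might."""
    body = json.dumps(token["payload"], sort_keys=True).encode()
    expected = hmac.new(SHARED_SECRET, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, token["signature"])


# Example: a CNF token confirming that resources were reserved at the
# execution module, which may then be broadcast to interested services.
cnf_token = generate_token("resources_reserved", "execution_module")
assert confirm_token(cnf_token)
```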
[0114] At step 512, the IM module [362] acknowledges the confirmation and
updates the repository to reflect that the required resources are now reserved for the
CNF deployment.
[0115] At step 514, post reserving the required resources, the execution module
[352] sends a confirmation back to the LM module [354] indicating that the required
resources are successfully reserved for the deployment operation.
[0116] At step 516, once the LM module [354] receives the acknowledgment
from the execution module [352], the LM module [354] transmits a
notification regarding the same to the UI [358], confirming that the
required resources are now reserved for the CNF deployment.
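By way of illustration only, the following Python sketch retraces the deployment flow of FIG. 5 (steps 502 to 516); all module interfaces and method names shown are hypothetical assumptions, not prescribed by the present disclosure.

```python
# Illustrative sketch only; module interfaces are hypothetical. It retraces the
# deployment flow of FIG. 5: UI -> LM module -> execution module -> IM module,
# with the CNF token and acknowledgements flowing back to the UI.
def deploy_cnf(ui, lm_module, execution_module, im_module, cnf_spec: dict) -> None:
    # Step 502: the UI initiates the CNF deployment request.
    request = ui.build_deployment_request(cnf_spec)

    # Step 504: the LM module forwards a reserve-CNF-resources request.
    reserve_request = lm_module.forward_to_execution(request)

    # Steps 506-508: the execution module asks the IM module whether the
    # resources required for the deployment are available in the repository.
    if not im_module.check_availability(reserve_request["resources"]):
        ui.notify("deployment rejected: insufficient resources")
        return

    # Step 510: the execution module reserves the resources and generates a
    # CNF token confirming the reservation.
    cnf_token = execution_module.reserve_and_tokenize(reserve_request)

    # Step 512: the IM module acknowledges and updates its repository.
    im_module.mark_reserved(cnf_token)

    # Steps 514-516: confirmation goes back to the LM module and on to the UI.
    lm_module.acknowledge_reservation(cnf_token)
    ui.notify("resources reserved for CNF deployment")
```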
[0117] Referring to FIG. 6, an exemplary flow diagram [600] for resource
reservation in a network during a scaling operation, in accordance with exemplary
implementations of the present disclosure is shown. In an implementation, the flow
[600] is performed by the system [350].
[0118] At step 602, the flow indicates that the network function platform (NP)
[360] initiates a CNF policy invocation. Herein, the CNF policy invocation is a scaling
request for adjusting the resources allocated to a CNF based on demand or policy
changes.
[0119] At step 604, the flow indicates that post receiving the CNF policy
invocation from the NP [360], the execution module [352] may query the IM
module [362] to retrieve necessary details regarding the CNF. The execution
module [352] sends a “get CNF details” request to the IM module [362]. The
request may be for retrieving the current state of resources and policies associated
with the CNF.
[0120] At step 606, the flow indicates that the IM module [362] processes the
request for CNF details and thereafter, the IM module [362] sends the requested
information to the execution module [352]. Here, the requested information may
include information such as available resources, policies, and other details that are
required for scaling of resources at the CNF.
[0121] The set of information may include at least one of a current state of
CNFs and CNFCs, a resource usage of CNFs and CNFCs, and operational
requirements of CNFs and CNFCs. In an embodiment, the set of details may
comprise at least one or more policies relating to performing the operation. In an
example, the one or more policies may comprise a set of rules that may dictate the
execution of the one or more policies at the CNFs and CNFCs. In another example,
the one or more policies may include a scaling policy that may determine the set of
rules to scale the CNFs and CNFCs. In another example, the one or more policies
may include a security policy that may be associated with the security of the CNFs
and CNFCs. In yet another example, the one or more policies may include a
resource allocation policy that may define the allocation of one or more network
resources at the CNFs and CNFCs.
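By way of illustration only, the following Python sketch shows one possible representation of the set of details and the policy types enumerated above (scaling, security and resource allocation); the field names and default values are hypothetical assumptions, not part of the present disclosure.

```python
# Illustrative sketch only; the disclosure does not prescribe a policy schema,
# so every field below is an assumption. It groups the policy types named in
# paragraph [0121] together with the current state and resource usage of a CNF.
from dataclasses import dataclass, field
from typing import List


@dataclass
class ScalingPolicy:
    min_replicas: int = 1
    max_replicas: int = 5
    scale_out_cpu_threshold: float = 0.8   # scale out above 80% CPU usage


@dataclass
class SecurityPolicy:
    allowed_namespaces: List[str] = field(default_factory=lambda: ["cnf-prod"])
    require_mutual_tls: bool = True


@dataclass
class ResourceAllocationPolicy:
    cpu_cores_per_cnfc: int = 2
    memory_mb_per_cnfc: int = 1024


@dataclass
class CnfDetails:
    """Set of details returned for a CNF: state, usage and associated policies."""
    current_state: str
    resource_usage: dict
    scaling: ScalingPolicy = field(default_factory=ScalingPolicy)
    security: SecurityPolicy = field(default_factory=SecurityPolicy)
    allocation: ResourceAllocationPolicy = field(default_factory=ResourceAllocationPolicy)
```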
[0122] At step 608, the flow indicates that the IM module [362] sends an
acknowledgment back to the execution module [352], confirming that the requested
CNF details and resources are successfully provided, implying that the execution
module [352] has received the data required to proceed with the scaling operation.
[0123] At step 610, the flow indicates that after processing the received CNF
details, the execution module [352] reserves the necessary resources required for
the scaling operation. Simultaneously, at step 612, the execution module [352]
generates a CNF Token to confirm that the necessary resources are successfully
reserved, and then the execution module [352] communicates the generated CNF
token back to the IM module [362].
[0124] At step 614, the flow indicates that the IM module [362] acknowledges
the reservation of the CNF token. Further, the IM module [362] may update the
associated repository to reflect that the necessary resources are now reserved for
scaling.
[0125] At step 616, the flow indicates that after the necessary resources are
reserved, the execution module [352] sends a command to execute the CNF scaling
operation. The command may include at least one of increasing and decreasing
the resource allocation based on the invoked CNF policy.
[0126] At step 618, once the scaling operation is successfully triggered, the LM
module [354] sends an acknowledgement to the execution module [352] confirming
successful completion of the operation.
[0127] At step 620, the execution module [352] forwards the acknowledgment
to the NP [360], confirming that the CNF scaling operation is executed.
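By way of illustration only, the following Python sketch retraces the scaling flow of FIG. 6 (steps 602 to 620); all module interfaces and method names shown are hypothetical assumptions, not prescribed by the present disclosure.

```python
# Illustrative sketch only; interfaces are hypothetical. It retraces the scaling
# flow of FIG. 6: policy invocation from the NP, retrieval of CNF details from
# the IM module, reservation with a CNF token, and execution of the scaling.
def scale_cnf(np_platform, execution_module, im_module, lm_module, cnf_id: str) -> None:
    # Step 602: the NP invokes a CNF scaling policy.
    policy_invocation = np_platform.invoke_policy(cnf_id)

    # Steps 604-608: the execution module retrieves the CNF details
    # (state, usage, policies) from the IM module.
    details = im_module.get_cnf_details(cnf_id)

    # Steps 610-612: reserve the resources needed for scaling and generate a
    # CNF token confirming the reservation, returned to the IM module.
    cnf_token = execution_module.reserve_for_scaling(details, policy_invocation)

    # Step 614: the IM module acknowledges and updates its repository.
    im_module.mark_reserved(cnf_token)

    # Step 616: the execution module sends the command to execute the scaling
    # operation (increasing or decreasing the resource allocation).
    execution_module.send_scaling_command(lm_module, cnf_id, policy_invocation)

    # Step 618: the LM module acknowledges successful triggering of the operation.
    ack = lm_module.acknowledge_scaling(cnf_id)

    # Step 620: the execution module forwards the acknowledgement to the NP.
    np_platform.receive_acknowledgement(ack)
```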
[0128] Referring to FIG. 7, a descriptive flow chart [700] for resource
reservation in a network, in accordance with exemplary implementations of the
present disclosure is shown. In an implementation, the flow [700] is performed by
the system [350].
[0129] At step 702, the system [350] receives a request from the LM module
[354], where the request may be associated with reserving resources that are
required for performing at least one of a scaling operation and a deployment
operation.
[0130] The request may include a set of details. The set of details may include
at least one of a current state of CNFs and CNFCs, a resource usage of CNFs and
20 CNFCs, and operational requirements of CNFs and CNFCs. In an embodiment, the
set of details may comprise at least one or more policies relating to performing the
operation. In an example, the one or more policies may comprise a set of rules that
may dictate the execution of the one or more policies at the CNFs and CNFCs. In
another example, the one or more policies may include a scaling policy that may
determine the set of rules to scale the CNFs and CNFCs. In another example, the
one or more policies may include a security policy that may be associated with the
security of the CNFs and CNFCs. In yet another example, the one or more policies
may include a resource allocation policy that may define the allocation of one or
more network resources at the CNFs and CNFCs.
[0131] At step 704, once the request is received, the system [350] evaluates
whether the event (resource request) is already logged in the database (also referred
to as a repository in FIG. 5 and FIG. 6). The evaluation performed by the system
[350] is to ensure that the system [350] processes the one or more received requests
based on the timings of events received or based on the priority mentioned in the
received requests.
[0132] At step 706, the received request is also stored in the database for future
reference.
[0133] At step 708, post storing the received request in the database, the system
[350] sends a request to the IM module [362] for inspecting one or more available
resources that are present within the network.
[0134] At step 710, the system [350] may further execute one or more
commands based on the set of details present in the received request. In an aspect,
the request may involve an allocation of resources for scaling operations. In another
aspect, the request may involve allocation of additional resources at the NFs or
NFCs. In yet another aspect, the request may involve un-reservation of resources
that are no longer needed for the current operation. The system [350] may then take
an appropriate action based on the set of details mentioned in the received request.
[0135] At step 712, the result of the execution of the command is further stored
in the database for future reference.
[0136] At step 714, post execution of the command, the system [350] may
generate a response, at the execution module [352], based on whether reservation
of resources for the execution module [352] to perform the operation has occurred.
The response may further be sent to the LM module [354]. The response may
indicate whether the resource reservation and scaling operations were successful.
[0137] Further, the system [350] repeats steps 704 to 710 for all the
incoming requests. In some cases, one or more events may be involved in the
process to reserve resources. Some of the events may include, without limitation,
communication from the LM module [354] and/or the execution module [352] to
other services such as inventory, databases, etc. Some of the one or more events
may occur concurrently or may occur in a predefined sequence. The one or more
events may need to be completed as part of the event handling steps 704 to 710
in order for the resource allocation to be complete. As a result, there may be repetition
of the one or more events occurring between steps 704 to 710 until such time
that all the events are completed. Once completed, the process may move on to
subsequent steps to finally reserve resources.
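By way of illustration only, the following Python sketch retraces the event-handling loop of FIG. 7 (steps 704 to 714); the database and module calls, the queue structure and the dispatch helper are hypothetical assumptions, not prescribed by the present disclosure.

```python
# Illustrative sketch only; calls are hypothetical. It retraces FIG. 7:
# deduplicate/order incoming events, persist them, query the IM module, execute
# the command, persist the result and respond to the LM module, repeating until
# all events involved in the reservation are completed.
def handle_reservation_requests(request_queue, database, im_module, lm_module) -> None:
    while request_queue:
        request = request_queue.pop(0)

        # Step 704: skip events already logged; remaining events are processed
        # in arrival/priority order as given by the queue.
        if database.is_logged(request["event_id"]):
            continue

        # Step 706: store the received request for future reference.
        database.store_request(request)

        # Step 708: inspect the resources currently available in the network.
        available = im_module.get_available_resources()

        # Step 710: execute the command implied by the set of details, e.g.
        # allocate resources for scaling, add resources, or un-reserve them.
        result = execute_command(request, available)

        # Step 712: store the result of the execution.
        database.store_result(request["event_id"], result)

        # Step 714: respond to the LM module with the outcome.
        lm_module.send_response(request["event_id"], result)


def execute_command(request: dict, available: dict) -> dict:
    """Hypothetical command dispatch based on the set of details in the request."""
    action = request.get("action", "reserve")
    return {"action": action, "succeeded": available.get("cpu_cores", 0) > 0}
```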
[0138] The present disclosure further discloses a non-transitory computer
readable storage medium storing one or more instructions for resource reservation
in a network, the one or more instructions include executable code which, when
executed by one or more units of a system [350], causes the one or more units to
perform certain functions. The one or more instructions when executed causes a
transceiver unit [302] to receive, at an execution module [352], from a network
node, a request for performing an operation on at least one of a network function
(NF), and a network function component (NFC). Further, the executable code
which, when executed by one or more units of a system, causes a retrieval unit [304]
to retrieve, at the execution module [352], from a lifecycle manager (LM) module
[354], a set of details relating to at least one of the NF, and the NFC. Herein, the set
of details comprises at least one or more policies relating to performing of the
operation. Further, the executable code which, when executed by one or more units
of a system, causes a reservation unit [306] to reserve, at the execution module
[352], one or more resources for performing the operation. Further, the executable
code which, when executed by one or more units of a system, causes a processing
unit [308] to execute, at the LM module [354], the operation.
[0139] As is evident from the above, the present disclosure provides a
technically advanced solution for resource reservation in a network. The present
solution handles deployment and scaling of containerized functions via the PE_CM
interface between the Policy Execution Engine (PEEGN) and the CNF Lifecycle
Manager (CNFLM). The PEEGN provides support for dynamic
requirements of resource management and network service orchestration in the
network. The PEEGN supports automatic deployment, scaling and healing functionality
of network components and services and provides policies for resource, security,
availability, and scalability. The present method and system provide a solution
which provides reservation of resources of CNFs and individual CNFCs with
respect to tokens provided from inventory. The present method and system provide
a solution which provides proper resource allocation for CNFs and CNFCs, resource
allocation for dependent CNFCs and dependents of CNFCs, and deployment and scaling out
of CNFs and CNFCs. The present method and system provide a solution which
enables auto-sync of inventory and allows zero data loss policies using the PEEGN. The
present method and system provide a solution which enables asynchronous event-based
implementation to utilize the PE_CM interface efficiently.
[0140] While considerable emphasis has been placed herein on the disclosed
implementations, it will be appreciated that many implementations can be made and
that many changes can be made to the implementations without departing from the
principles of the present disclosure. These and other changes in the implementations
of the present disclosure will be apparent to those skilled in the art, whereby it is to
be understood that the foregoing descriptive matter to be implemented is illustrative
and non-limiting.
[0141] Further, in accordance with the present disclosure, it is to be
acknowledged that the functionality described for the various components/units can
be implemented interchangeably. While specific embodiments may disclose a
particular functionality of these units for clarity, it is recognized that various
configurations and combinations thereof are within the scope of the disclosure. The
functionality of specific units as disclosed in the disclosure should not be construed
as limiting the scope of the present disclosure. Consequently, alternative
arrangements and substitutions of units, provided they achieve the intended
functionality described herein, are considered to be encompassed within the scope
of the present disclosure.
We Claim:
1. A method [400] for resource reservation in a network, the method [400]
comprising:
- receiving, by a transceiver unit [302], at an execution module [352],
from a network node, a request for performing an operation on at least
one of a network function (NF), and a network function component
(NFC);
- retrieving, by a retrieval unit [304], at the execution module [352], from
a lifecycle manager (LM) module [354], a set of details relating to at
least one of the NF, and the NFC, wherein the set of details comprises
at least one or more policies relating to performing of the operation;
- reserving, by a reservation unit [306], at the execution module [352],
one or more resources for performing the operation; and
- executing, by a processing unit [308], at the LM module [354], the operation.
2. The method [400] as claimed in claim 1, wherein, prior to the step of
reserving, at the execution module [352], the one or more resources, the
method [400] comprises determining, by a determination unit [310], at the
execution module [352], from an inventory manager (IM) module [362],
available resources.
3. The method [400] as claimed in claim 2, wherein the method [400]
comprises calculating, by the reservation unit [306], at the execution module
[352], one or more resources required for performing the operation based at
least on the set of details, and wherein, in response to the available resources
being at least equal to the required one or more resources,
- the method [400] comprises the step of reserving, at the execution
module [352], the one or more resources for performing the operation.
4. The method [400] as claimed in claim 1, wherein, the operation is a
deployment operation, and in response to the operation being the
deployment operation, prior to the step of receiving, at the execution module
[352], the request, the method [400] comprises:
- receiving, by the transceiver unit [302], at the LM module [354], from
a user interface (UI) [358], the request; and
- transmitting, by the transceiver unit [302], from the LM module [354]
to the execution module [352], the request.
5. The method [400] as claimed in claim 1, wherein, the operation is a scaling
operation, and in response to the operation being the scaling operation, prior
to the step of receiving, at the execution module [352], the request, the
method [400] comprises: receiving, by the transceiver unit [302], from a
network function platform (NP) [360], the request.
6. The method [400] as claimed in claim 1, wherein the method [400]
comprises transmitting, by the transceiver unit [302], from the execution
module [352] to the LM module [354], an acknowledgement indicative of
reserving, at the execution module [352], the one or more resources for
performing the operation.
7. The method [400] as claimed in claim 1, wherein the method [400]
comprises transmitting, by the transceiver unit [302], from the execution
module [352] to the network node, a notification indicative of performing
the operation on at least one of the NF, and the NFC.
8. The method [400] as claimed in claim 1, wherein the execution module
[352] and the LM module [354] are communicably coupled by an interface,
and wherein the interface is a PE_CM interface.
9. The method [400] as claimed in claim 1, wherein the execution module
[352] and the LM module [354] are communicably coupled to an operation
manager (OM) module [356], and wherein the OM module [356] is
configured to facilitate communication between available instances of the
execution module [352] and available instances of the LM module [354].
10. A system [350] for resource reservation in a network, the system [350]
comprising:
- a transceiver unit [302] configured to receive, at an execution module
[352], from a network node, a request for performing an operation on
at least one of a network function (NF), and a network function
component (NFC);
- a retrieval unit [304] configured to retrieve, at the execution module
[352], from a lifecycle manager (LM) module [354], a set of details
relating to at least one of the NF, and the NFC, wherein the set of details
comprises at least one or more policies relating to performing of the
operation;
- a reservation unit [306] configured to reserve, at the execution module
[352], one or more resources for performing the operation; and
- a processing unit [308] configured to execute, at the LM module [354],
the operation.
11. The system [350] as claimed in claim 10, wherein, the system [350]
comprises a determination unit [310] configured to, prior to reserving, at the
execution module [352], the one or more resources, determine, at the
execution module [352], from an inventory manager (IM) module [362],
available resources.
12. The system [350] as claimed in claim 11, wherein the reservation unit [306]
is configured to calculate, at the execution module [352], one or more
resources required for performing the operation based at least on the set of
details, and wherein, in response to the available resources being at least
equal to the required one or more resources,
- the reservation unit [306] is configured to reserve, at the execution
module [352], the one or more resources for performing the operation.
13. The system [350] as claimed in claim 10, wherein the operation is a
deployment operation, and in response to the operation being the
deployment operation, prior to receiving, at the execution module [352], the
request, the transceiver unit [302] is configured to:
- receive, at the LM module [354], from a user interface (UI) [358], the
request; and
- transmit, from the LM module [354] to the execution module [352], the
request.
14. The system [350] as claimed in claim 10, wherein the operation is a scaling
operation, and in response to the operation being the scaling operation, prior
to receiving, at the execution module [352], the request, the transceiver unit
[302] is configured to receive, from a network function platform (NP) [360],
the request.
15. The system [350] as claimed in claim 10, wherein the transceiver unit [302]
is configured to transmit, from the execution module [352] to the LM
module [354], an acknowledgement indicative of reserving, at the execution
module [352], the one or more resources for performing the operation.
16. The system [350] as claimed in claim 10, wherein the transceiver unit [302]
is configured to transmit, from the execution module [352] to the network
node, a notification indicative of performing the operation on at least one of
the NF, and the NFC.
17. The system [350] as claimed in claim 10, wherein the execution module
[352] and the LM module [354] are communicably coupled by an interface,
and wherein the interface is a PE_CM interface.
18. The system [350] as claimed in claim 10, wherein the execution module
[352] and the LM module [354] are communicably coupled to an operation
manager (OM) module [356], and wherein the OM module [356] is
configured to facilitate communication between available instances of the
execution module [352] and available instances of the LM module [354].