
System And Method For Managing Network Function Operation

Abstract: The present disclosure relates to a system (120) and a method (900) for managing a Network Function (NF) operation. The method (900) includes the step of establishing, by a processor (406), an interface (214) with a microservice (306), the interface (214) enabling orchestration of the NF operation. The method (900) includes the step of requesting, by the processor (406), the microservice (306) to execute at least one NF operation via the interface (214) based on a received user request. The method (900) further includes the step of transmitting, by the processor (406), to a microservice (302), an inventory management request to manage inventory pertaining to resources at a database (212) based on a response received from the microservice (306) pertaining to completion of execution of the at least one NF operation. Ref. Fig. 4


Patent Information

Application #
Filing Date
14 September 2023
Publication Number
14/2025
Publication Type
INA
Invention Field
COMPUTER SCIENCE
Status
Email
Parent Application

Applicants

JIO PLATFORMS LIMITED
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, India

Inventors

1. Aayush Bhatnagar
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
2. Sandeep Bisht
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
3. Suman Singh Kanwer
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
4. Nilesh Sanas
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
5. Ankur Mishra
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
6. Lokesh Poonia
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
7. Abhishek Priyadarshi
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
8. Manisha Singh
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
9. Shubham Kumar Naik
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
10. Mohd. Rijvan Khan Mogia
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
11. Nitesh Gour
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
12. Ashish Kumar Pandey
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India

Specification

FORM 2
THE PATENTS ACT, 1970
(39 of 1970)
&
THE PATENTS RULES, 2003

COMPLETE SPECIFICATION
(See section 10 and rule 13)
1. TITLE OF THE INVENTION
SYSTEM AND METHOD FOR MANAGING NETWORK FUNCTION OPERATION
2. APPLICANT(S)
NAME NATIONALITY ADDRESS
JIO PLATFORMS LIMITED INDIAN OFFICE-101, SAFFRON, NR. CENTRE POINT, PANCHWATI 5 RASTA, AMBAWADI, AHMEDABAD 380006, GUJARAT, INDIA
3. PREAMBLE TO THE DESCRIPTION

THE FOLLOWING SPECIFICATION PARTICULARLY DESCRIBES THE NATURE OF THIS INVENTION AND THE MANNER IN WHICH IT IS TO BE PERFORMED.

FIELD OF THE INVENTION
[0001] The present invention generally relates to wireless communication systems, and more particularly relates to a system and method for managing Network Function (NF) operation.
BACKGROUND OF THE INVENTION
[0002] Telecommunications networks rely on agile and flexible network functions that can adapt rapidly to changing demands. Containerization, exemplified by technologies like Docker and Kubernetes, has become a preferred deployment method for network functions due to its ability to provide lightweight, scalable, and portable environments. Container Network Functions (CNFs) are network functions that are encapsulated within containers, making them highly adaptable and suitable for cloud-native and microservices-based architectures.
[0003] CNFs have introduced a paradigm shift in the deployment and management of network functions. Unlike traditional, hardware-based network functions, CNFs can be instantiated, scaled, updated, and decommissioned with great agility. While this flexibility enhances network resource utilization and reduces operational costs, it also introduces significant challenges in terms of CNF life cycle management.
[0004] Managing the life cycle of CNFs deployed within containerized environments involves complex orchestration, monitoring, scaling, and health management tasks. Traditional network management solutions are often ill-suited for the dynamic nature of CNFs within container clusters. Manual intervention and ad-hoc scripting is commonly used for CNF management, resulting in operational inefficiencies, increased downtime, and a higher risk of configuration errors.
[0005] Furthermore, CNFs are typically distributed across multi-cloud, hybrid cloud, or on-premises environments, creating a need for a unified and adaptable CNF life cycle management solution that can span diverse infrastructure types.
[0006] CNF life cycle management conventionally consists of different operations on a CNF/CNFC, such as instantiation, termination, scaling, deletion, and change management. Successful execution of each operation requires a certain set of pre- and post-checks of CNF data. For example, a CNF instantiation request must be sent to a region-specific Docker Swarm Adaptor (DSA), and based on the DSA response the CNF Lifecycle Manager (CNFLM) prepares a request for updating the CNF/CNFC status and resource details at the inventory. Further, issues occur such as resource data mismatches in the inventory or the CNF status being updated incorrectly.
[0007] Hence, there exists a pressing need for an innovative approach to Container Network Function Life Cycle Management that addresses the unique challenges posed by CNFs in containerized environments. The present invention seeks to fulfill this need by offering an efficient, automated, and flexible system and method for managing the life cycle of CNFs within containerized network infrastructures.
BRIEF SUMMARY OF THE INVENTION
[0008] One or more embodiments of the present disclosure provide a system and method for managing a Network Function (NF) operation.
[0009] In one aspect of the present invention, a method for managing a Network Function (NF) operation is provided. The method includes the step of establishing, by a processor, an interface with a microservice, the interface enabling orchestration of the NF operation. The method includes the step of requesting, by the processor, the DSA to execute at least one NF operation via the interface based on a received user request. The method further includes the step of transmitting, by the processor, to a microservice, an inventory management request to manage inventory pertaining to resources at a database based on a response received from the DSA pertaining to completion of execution of the at least one NF operation.
[0010] In one embodiment, the interface is a Swarm Adapter Configuration Management (SA_CM) interface. The SA_CM interface between the processor and the DSA is responsible for orchestrating the NF operation.
[0011] In another embodiment, the DSA is configured to interact with the processor to spawn appropriate instances of Network Functions (NFs).
[0012] In yet another embodiment, when the NF operation commences, a Physical Virtual Inventory Manager (PVIM) is adapted to store information related to the NF operation.
[0013] In yet another embodiment, the at least one NF operation includes at least one of, NF instantiation, NF termination, NF scaling and NF deletion utilizing the interface.
[0014] In yet another embodiment, the NF instantiation includes the steps of: transmitting, by the processor, a request to a Policy Execution Engine (PEEGN) to check availability of at least one NF policy and to reserve resources at the PEEGN; upon determining availability of the NF policy and of the reserved resources at the PEEGN, transmitting, by the processor, a reservation request to the PVIM to reserve resources; and requesting, by the processor, the DSA to instantiate the NF over the interface.
[0015] In yet another embodiment, the NF policy includes at least an NF Initialization (INIT) policy.
[0016] In yet another embodiment, the reserved resources include the resources which are consumed during the NF instantiation. The resources include at least one of a memory, a processor, and a network.
[0017] In yet another embodiment, the NF instantiation status includes information pertaining to the completion or incompletion of the NF instantiation.
[0018] In yet another embodiment, the NF termination includes the steps of: transmitting, by the processor, a NF termination request to the microservice; receiving, by the processor, a response from the microservice subsequent to termination of all running instances of the specific NF; and transmitting, by the processor, an inventory management request to the microservice upon checking the status of all the instances.
[0019] In yet another embodiment, the NF scaling includes the steps of: receiving, by the processor, a NF scaling request from a Policy Execution Engine (PEEGN) to instantiate a NF instance; requesting, by the processor, the microservice to instantiate the NF instance via the interface; requesting, by the processor, an inventory update at the microservice pertaining to resources in use and reserved, based on a NF instantiation response received from the microservice, wherein the response comprises a NF instantiation status; and, based on the NF instantiation response received from the microservice over the interface, transmitting, by the processor, a request to the microservice for inventory management based on the NF instantiation status.
[0020] In yet another embodiment, the processor is configured to enable an async event-based implementation to manage the interface to function in a high availability mode in order to engage a next available CNFLM instance when a current CNFLM instance is down.
[0021] In yet another embodiment, the user requests the NF operation from a user interface module of a User Equipment (UE).
[0022] In yet another embodiment, the async event-based implementation enabled by the CNFLM ensures that one or more long running tasks are simultaneously accommodated while running one or more short running tasks.
[0023] In yet another embodiment, the interface enables orchestration of the NF operation by receiving, instructions from the CNFLM to execute the at least one NF operation, and forwarding, the instructions from the CNFLM to the microservice to execute the at least one NF operation.
[0024] In another aspect of the present invention, a system for managing a Network Function (NF) operation is disclosed. The system includes a connecting module configured to establish an interface with a microservice, the interface enabling orchestration of the NF operation. The system includes a NF operation module configured to request the microservice to execute at least one NF operation via the interface based on a received user request. The system further includes a transceiver configured to transmit, to a microservice, an inventory management request to manage inventory pertaining to resources at a database based on a response received from the microservice pertaining to completion of execution of the at least one NF operation.
[0025] Other features and aspects of this invention will be apparent from the following description and the accompanying drawings. The features and advantages described in this summary and in the following detailed description are not all-inclusive, and particularly, many additional features and advantages will be apparent to one of ordinary skill in the relevant art, in view of the drawings, specification, and claims hereof. Moreover, it should be noted that the language used in the specification has been principally selected for readability and instructional purposes and may not have been selected to delineate or circumscribe the inventive subject matter, resort to the claims being necessary to determine such inventive subject matter.
BRIEF DESCRIPTION OF THE DRAWINGS
[0026] The accompanying drawings, which are incorporated herein, and constitute a part of this disclosure, illustrate exemplary embodiments of the disclosed methods and systems in which like reference numerals refer to the same parts throughout the different drawings. Components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present disclosure. Some drawings may indicate the components using block diagrams and may not represent the internal circuitry of each component. It will be appreciated by those skilled in the art that disclosure of such drawings includes disclosure of electrical components, electronic components or circuitry commonly used to implement such components.
[0027] FIG. 1 is an exemplary block diagram of an environment for managing a Network Function (NF) operation, according to one or more embodiments of the present disclosure;
[0028] FIG. 2 shows a block diagram of an architecture of a NF Life Cycle Manager (NFLM), according to one or more embodiments of the present disclosure;
[0029] FIG. 3 illustrates an architecture of a system for managing NF operation, according to one or more embodiments of the present disclosure;
[0030] FIG. 4 is an exemplary block diagram of the system for managing the NF operation, according to one or more embodiments of the present disclosure;
[0031] FIG. 5 is a schematic representation of a workflow of the system of FIG. 4, according to one or more embodiments of the present disclosure;
[0032] FIG. 6 illustrates an operational flow diagram depicting a process for performing a NF instantiation operation, according to one or more embodiments of the present disclosure;
[0033] FIG. 7 illustrates an operational flow diagram depicting a process for performing a NF termination operation, according to one or more embodiments of the present disclosure;
[0034] FIG. 8 illustrates an operational flow diagram depicting a process for performing a NF scaling operation, according to one or more embodiments of the present disclosure;
[0035] FIG. 9 illustrates a system architecture framework (e.g. Management and Orchestration (MANO) architecture framework) that can be implemented in the system of FIG.4, according to one or more embodiments of the present disclosure; and
[0036] FIG. 10 is a flow diagram illustrating a method for managing the NF operation, according to one or more embodiments of the present disclosure.
[0037] The foregoing shall be more apparent from the following detailed description of the invention.
DETAILED DESCRIPTION OF THE INVENTION
[0038] Some embodiments of the present disclosure, illustrating all its features, will now be discussed in detail. It must also be noted that as used herein and in the appended claims, the singular forms "a", "an" and "the" include plural references unless the context clearly dictates otherwise.
[0039] Various modifications to the embodiment will be readily apparent to those skilled in the art and the generic principles herein may be applied to other embodiments. However, one of ordinary skill in the art will readily recognize that the present disclosure including the definitions listed here below are not intended to be limited to the embodiments illustrated but is to be accorded the widest scope consistent with the principles and features described herein.
[0040] A person of ordinary skill in the art will readily ascertain that the illustrated steps detailed in the figures and here below are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope and spirit of the disclosed embodiments.
[0041] A system and method for managing a Network Function (NF) operation are disclosed. The system and method are characterized by their capability to efficiently update the resource inventory during the NF operation and the NF instantiation process. When a NF deletion flow is executed, all resources used by the CNF are moved to a free pool, thereby optimizing network resources, reducing memory space requirements, and improving processing speed.
[0042] Various embodiments of the invention provide a system and a method for managing a Network Function (NF) operation. The present invention describes a solution for managing the life cycle of NFs within containerized network infrastructures by providing an interface. The interface, also referred to as a Swarm Adapter Configuration Management (SA_CM) interface, is provided between a CNF Life Cycle Manager (CNFLM) and a Policy Execution Engine (PEEGN) to provide smooth execution of CNF operations, and also updates CNF/Cloud-Native Network Function Container (CNFC) status and resource details at an inventory.
[0043] FIG. 1 illustrates an exemplary block diagram of an environment 100 for managing a Network Function (NF) operation, according to one or more embodiments of the present disclosure. The NF is hereinafter referred to as a Container Network Function (CNF). The environment 100 includes a network 105, a User Equipment (UE) 110, a server 115, and a system 120. The UE 110 aids a user to interact with the system 120 for transmitting a user request to a processor 406 (as shown in FIG. 4) to initiate the CNF operation.
[0044] For the purpose of description and explanation, the description will be explained with respect to one or more User Equipments (UEs) 110, or to be more specific with respect to a first UE 110a, a second UE 110b, and a third UE 110c, and should nowhere be construed as limiting the scope of the present disclosure. Each of the at least one UE 110 from the first UE 110a, the second UE 110b, and the third UE 110c is configured to connect to the server 115 via the network 105.
[0045] In an embodiment, each of the first UE 110a, the second UE 110b, and the third UE 110c is any electrical, electronic, or electro-mechanical equipment, or a combination of one or more such devices, including, but not limited to, virtual reality (VR) devices, augmented reality (AR) devices, a laptop, a general-purpose computer, a desktop, a personal digital assistant, a tablet computer, a mainframe computer, or any other computing device.
[0046] In accordance with one aspect of the present invention, each of the first UE 110a, the second UE 110b, and the third UE 110c are configured to facilitate the transmission of a request via the network 105 for the purpose of availing a variety of services. The scope of said services is inclusive of, but not limited to, engaging with the server 115 for the purpose of submitting a request thereto, initiating a process for the reconstruction of data, and subsequently conducting oversight of the data thus reconstructed, all aforementioned activities being conducted over the network 105. This configuration enables a streamlined and efficient interaction between the user equipment and the network resources, thereby enhancing the utility and performance of the network 105 in providing said services.
[0047] The network 105 includes, by way of example but not limitation, one or more of a wireless network, a wired network, an internet, an intranet, a public network, a private network, a packet-switched network, a circuit-switched network, an ad hoc network, an infrastructure network, a Public-Switched Telephone Network (PSTN), a cable network, a cellular network, a satellite network, a fiber optic network, or some combination thereof. The network 105 may include, but is not limited to, a Third Generation (3G), a Fourth Generation (4G), a Fifth Generation (5G), a Sixth Generation (6G), a New Radio (NR), a Narrow Band Internet of Things (NB-IoT), an Open Radio Access Network (O-RAN), and the like.
[0048] The environment 100 includes the server 115 accessible via the network 105. The server 115 may include by way of example but not limitation, one or more of a standalone server, a server blade, a server rack, a bank of servers, a server farm, hardware supporting a part of a cloud service or system, a home server, hardware running a virtualized server, a processor executing code to function as a server, one or more machines performing server-side functionality as described herein, at least a portion of any of the above, some combination thereof. In an embodiment, the entity may include, but is not limited to, a vendor, a network operator, a company, an organization, a university, a lab facility, a business enterprise side, a defense facility side, or any other facility that provides service.
[0049] The environment 100 further includes the system 120 communicably coupled to the server 115 and each of the first UE 110a, the second UE 110b, and the third UE 110c via the network 105. The system 120 is adapted to be embedded within the server 115 or embedded as an individual entity. However, for the purpose of description, the system 120 is described as an integral part of the server 115, without deviating from the scope of the present disclosure. The system 120 is configured to manage the CNF operation.
[0050] The system 120 is further configured to employ a Transmission Control Protocol (TCP) connection to identify any connection loss in the network 105, thereby improving overall efficiency. The TCP connection is a communication standard enabling applications and the system 120 to exchange information over the network 105.
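By way of a non-limiting illustration only, the connection-loss detection described above could rely on operating-system TCP keep-alive probes. The Python sketch below shows one such arrangement; the host, port, and probe timings are arbitrary assumptions, and the TCP_KEEPIDLE/TCP_KEEPINTVL/TCP_KEEPCNT options are Linux-specific.

import socket

def open_monitored_connection(host: str, port: int) -> socket.socket:
    sock = socket.create_connection((host, port), timeout=10)
    # Enable keep-alive so the OS probes the peer and surfaces a broken link
    # as a socket error instead of leaving the connection silently dead.
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
    if hasattr(socket, "TCP_KEEPIDLE"):  # Linux-only tuning of probe timing
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 30)   # idle seconds before first probe
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 10)  # seconds between probes
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 3)     # failed probes before drop
    return sock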
[0051] Operational and construction features of the system 120 will be explained in detail with respect to the following figures.
[0052] FIG. 2 shows a block diagram of an architecture 200 of a CNF Life Cycle Manager (CNFLM) 204, according to one or more embodiments of the present disclosure. The architecture 200 comprises the CNFLM 204 connected to the network 105. A Policy Execution Engine (PEEGN) 206 is connected to the CNFLM 204 via the network 105. The CNFLM 204 and the PEEGN 206 are configured to form the core management in the present invention.
[0053] The terms “interface,” and “SA_CM interface,” as used herein, are used interchangeably, without limiting the scope of the present disclosure.
[0054] The architecture 200 comprises a Swarm Adapter Configuration Management (SA_CM) interface 214 provided between the CNFLM 204 and the PEEGN 206 to provide smooth execution of the CNF operation. The SA_CM interface 214 provides a rich user interface to a user, where the user can initiate CNF requests.
[0055] In an embodiment of the present invention, the CNFLM 204 transmits the request for the CNF operation to the PEEGN 206, via the SA_CM interface 214, for checking availability of a CNF policy and reserving resources. In case the CNF policy is present, the PEEGN 206 sends the CNF operation request to reserve the resources. The reserved resources include the resources which are consumed during the CNF instantiation. The resources include at least one of a memory 402 (as shown in FIG. 4), a processor 406 (as shown in FIG. 4), and the network 105. Further, the CNFLM 204 requests a microservice 306. In an embodiment, the term "microservice" is hereinafter referred to as a "Docker Swarm Adaptor (DSA)" 306 (as shown in FIG. 3), without limiting the scope of the disclosure. The DSA 306 is configured to perform the CNF operation.
[0056] The architecture 200 further comprises an infrastructure module 208. The infrastructure module 208 performs several critical functions that are essential for the successful deployment, management, and operation of the CNFs within a cloud-native environment. The infrastructure module 208 is configured to form a docker infrastructure and a swarm cluster and create the container. The architecture 200 further comprises a user interface layer 202 connected to a core management system, a Network Management System (NMS) module 210 and a database 212. The NMS module 210 performs Fault, Configuration, Accounting, Performance, Security (FCAPS) functions which define network management tasks. The database 212 is a persistent database used for storing all data related to the CNF operation.
[0057] FIG. 3 illustrates an architecture 300 of the system 120 for managing the CNF operation, according to one or more embodiments of the present invention. The architecture 300 of the system 120 comprises the user interface layer 202, the CNFLM 204, the PEEGN 206, a microservice 302, a RMR 304, the Docker Swarm Adapter (DSA) 306 (306a, 306b, …, 306n), and a swarm manager 308. In an embodiment, the microservice 302 is hereinafter referred to as a Physical Virtual Inventory Manager (PVIM) 302, without limiting the scope of the disclosure.
[0058] The CNFLM 204 is configured to capture the details of vendors and instances of Network Functions (NFs). In an embodiment, the instances of NFs include, but are not limited to, CNFs and Cloud-Native Network Function Containers (CNFCs), managed via Create, Read, and Update operations using Application Programming Interfaces (APIs) for interacting with and managing CNF operations. The captured details are stored in the database 212 and further used by the DSA 306 for performing operations on the CNFs. In an embodiment, the CNFLM 204 is responsible for creating a CNF instance or an individual CNFC instance. The CNFLM 204 also scales out CNFs or individual CNFCs. The architecture 300 of the CNFLM 204 comprises the user interface layer 202 for obtaining requests to onboard/instantiate/terminate a CNF instance for the CNFLM 204.
[0059] In an embodiment of the present disclosure, the DSA 306 is configured to interact with the CNFLM 204 to spawn appropriate CNF instances/CNFC instances. The DSA 306 is directly connected to a docker host of the swarm manager 308 to deploy docker images to the docker host which connects to the swarm manager 308. The DSA 306 further creates a Docker Agent Manager (DAM) and adds the docker host as worker nodes (W1, W2, W3, …, Wn) in a call flow. The DSA 306 is deployed based on at least one of, but not limited to, a first region, a second region, and the like. The CNF operations corresponding to each request received are performed regionally. The CNFLM 204 requests the region-related details from the PEEGN 206. An Elastic Load Balancer (ELB) routes the requests to the DSA 306 of the specific region based on the received request.
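As a non-limiting illustration of the region-based routing performed by the ELB, the Python sketch below maps an incoming request to a region-specific DSA endpoint; the region names and endpoint addresses are hypothetical and not part of the specification.

# Hypothetical lookup table of region-specific DSA endpoints.
DSA_ENDPOINTS = {
    "region-1": "http://dsa-region1.example.internal:8080",
    "region-2": "http://dsa-region2.example.internal:8080",
}

def route_to_dsa(request: dict) -> str:
    """Pick the DSA endpoint for the region named in the incoming request."""
    region = request.get("region")
    try:
        return DSA_ENDPOINTS[region]
    except KeyError:
        raise ValueError(f"no DSA deployed for region {region!r}")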
[0060] The PVIM 302 is used to obtain the status of an instantiated CNF/CNFC upon subscription to a CNF-LM Ack event. The PVIM 302 is further used to update an inventory from reserved to in-use. The CNFLM 204 further comprises the PEEGN 206 for supporting a scaling policy for the CNFC. In an embodiment, the PEEGN 206 checks for the CNF policy and reserves resources required to instantiate the CNF at the PVIM 302 during the CNF operation.
[0061] The RIC Message Router (RMR) 304 is a library for peer-to-peer communication. Applications use the library to send and receive messages where the message routing and endpoint selection is based on the message type rather than Domain Name Server (DNS) host name-IP port combinations. The RMR 304 is a component within an O-RAN (Open Radio Access Network) architecture that is responsible for routing messages between various RAN (Radio Access Network) elements. It acts as a communication hub, facilitating the exchange of control messages, data, and information between different RAN functions and entities.
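The message-type-based endpoint selection described for the RMR 304 can be pictured with the simplified Python route table below; the message types and endpoints are invented for illustration only and do not reflect the actual RMR route table format.

# Endpoints are chosen by message type, not by DNS host name/IP:port pairs.
ROUTE_TABLE = {
    "CNF_INSTANTIATE_ACK": ["cnflm-svc:4560"],
    "CNF_TERMINATE_ACK":   ["cnflm-svc:4560"],
    "CNF_SCALE_REQUEST":   ["peegn-svc:4561"],
}

def endpoints_for(message_type: str) -> list[str]:
    """Return the destinations registered for a message type (empty if none)."""
    return ROUTE_TABLE.get(message_type, [])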
[0062] The swarm manager 308 comprises a swarm consisting of multiple docker hosts configured to run in a swarm mode and act as manager nodes and worker nodes (W1, W2, W3, …, Wn). The swarm acts as managers to manage membership and delegation and as workers to run swarm services. A given docker host can be a manager, a worker, or perform both roles. When the swarm service is created, the user of the UE 110 defines its desired state (the number of replicas, the network and storage resources available to it, the ports the service exposes to the outside world, and more). For instance, a web service that serves web pages, APIs, or web applications can be deployed as a Docker Swarm service. This service can be scaled horizontally to handle varying levels of web traffic by adding or removing instances (replicas) of the service. Further, the DAM facilitates the management of the docker services and containers, acting as an intermediary layer that aids in the deployment, scaling, monitoring, and management of docker containers across a cluster of nodes within a docker swarm environment.
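As an illustrative sketch only, the desired-state definition of a swarm service (replica count and published port) might be expressed with the Docker SDK for Python as follows, assuming a Docker engine already running in swarm mode; the image name, service name, replica count, and ports are placeholders, not values from the specification.

import docker

client = docker.from_env()

# Create a replicated service whose desired state is three replicas of a web
# container, publishing container port 80 on swarm port 8080.
service = client.services.create(
    "nginx:alpine",
    name="web",
    mode=docker.types.ServiceMode("replicated", replicas=3),
    endpoint_spec=docker.types.EndpointSpec(ports={8080: 80}),
)
print(service.id)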
[0063] The DAM provides various functionalities such as automating the deployment of containers to the appropriate nodes. The DAM manages the lifecycle of containers across the swarm and monitors the health and status of containers and nodes. The DAM further facilitates the communication and coordination between docker swarm manager nodes and worker nodes and provides a user interface or API for administrators to manage the swarm environment.
[0064] FIG. 4 is an exemplary block diagram of the system 120 for managing the CNF operation, according to one or more embodiments of the present disclosure. The system 120 includes a processor 406, a memory 402, and an I/O interface 404. The processor 406 may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, single board computers, and/or any devices that manipulate signals based on operational instructions. As per the illustrated embodiment, the system 120 includes one processor 406. However, it is to be noted that the system 120 may include multiple processors as per the requirement and without deviating from the scope of the present disclosure.
[0065] The information related to the request pertaining to initiating a CNF operation is provided or stored in the memory 402. Among other capabilities, the processor 406 is configured to fetch and execute computer-readable instructions stored in the memory 402. The memory 402 may be configured to store one or more computer-readable instructions or routines in a non-transitory computer-readable storage medium, which may be fetched and executed to create or share data packets over a network service. The memory 402 may include any non-transitory storage device including, for example, volatile memory such as RAM, or non-volatile memory such as EPROMs, FLASH memory, unalterable memory, and the like.
[0066] The information related to the request pertaining to initiating a CNF operation is rendered on the I/O interface 404. The I/O interface 404 includes a variety of interfaces, for example, interfaces for a Graphical User Interface (GUI), a web user interface, a Command Line Interface (CLI), and the like. The I/O interface 404 facilitates communication of the system 120. In one embodiment, the I/O interface 404 provides a communication pathway for one or more components of the system 120. Examples of such components include, but are not limited to, the UE 110 and the database 212.
[0067] The database 212 is configured to store the request pertaining to initiating the CNF operation which is generated and transmitted by the UE 110. Further, the database 212 provides structured storage, support for complex queries, and enables efficient data retrieval and analysis. The database 212 is, but is not limited to, one of a centralized database, a cloud-based database, a commercial database, an open-source database, a distributed database, an end-user database, a graphical database, a No-Structured Query Language (NoSQL) database, an object-oriented database, a personal database, an in-memory database, a document-based database, a time series database, a wide column database, a key value database, a search database, a cache database, and so forth. The foregoing examples of database types are non-limiting and may not be mutually exclusive, e.g., a database can be both commercial and cloud-based, or both relational and open-source.
[0068] Further, the processor 406, in an embodiment, may be implemented as a combination of hardware and programming (for example, programmable instructions) to implement one or more functionalities of the processor 406. In the examples described herein, such combinations of hardware and programming may be implemented in several different ways. For example, the programming for the processor 406 may be processor-executable instructions stored on a non-transitory machine-readable storage medium and the hardware for processor 406 may comprise a processing resource (for example, one or more processors), to execute such instructions. In the present examples, the memory 402 may store instructions that, when executed by the processing resource, implement the processor 406. In such examples, the system 120 may comprise the memory 402 storing the instructions and the processing resource to execute the instructions, or the memory 402 may be separate but accessible to the system 120 and the processing resource. In other examples, the processor 406 may be implemented by electronic circuitry.
[0069] In order for the system 120 to manage CNF operation, the processor 406 includes a connecting module 408, a CNF operation module 410, an updating module 412, and a transceiver 414 communicably coupled to each other for managing the CNF operation.
[0070] The connecting module 408, the CNF operation module 410, the updating module 412, and the transceiver 414 in an embodiment, may be implemented as a combination of hardware and programming (for example, programmable instructions) to implement one or more functionalities of the processor 406. In the examples described herein, such combinations of hardware and programming may be implemented in several different ways. For example, the programming for the processor 406 may be processor-executable instructions stored on a non-transitory machine-readable storage medium and the hardware for the processor may comprise a processing resource (for example, one or more processors), to execute such instructions. In the present examples, the memory 402 may store instructions that, when executed by the processing resource, implement the processor. In such examples, the system 120 may comprise the memory 402 storing the instructions and the processing resource to execute the instructions, or the memory 402 may be separate but accessible to the system 120 and the processing resource. In other examples, the processor 406 may be implemented by electronic circuitry.
[0071] The connecting module 408 is configured to establish the interface 214 with the DSA 306 to perform the CNF operation, the interface 214 enabling orchestration of the CNF operation. The connecting module 408 is configured with one or more parameters to interact with the DSA 306. The one or more parameters include, but are not limited to, network addresses, authentication credentials, and communication protocols. In an exemplary embodiment, this involves configuring IP addresses and ports for the DSA 306 and specifying the authentication tokens or credentials for secure communication. The interface 214 is at least one of, but not limited to, the SA_CM interface. The interface 214, which connects the processor 406 and the DSA 306, is responsible for orchestrating the CNF operation. In an embodiment, the at least one CNF operation includes at least one of CNF instantiation, CNF termination, CNF scaling, and CNF deletion utilizing the interface 214.
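A minimal Python sketch of such configuration parameters is given below, assuming nothing beyond what the paragraph above lists; the field names and example values are illustrative assumptions, not a claimed data model.

from dataclasses import dataclass

@dataclass(frozen=True)
class DsaConnectionConfig:
    host: str        # network address of the DSA
    port: int        # port the DSA listens on
    protocol: str    # communication protocol, e.g. "https"
    auth_token: str  # credential or token for secure communication

# Hypothetical example values only.
example = DsaConnectionConfig(host="10.0.12.5", port=8443, protocol="https", auth_token="<token>")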
[0072] The CNF instantiation refers to the process of deploying and activating the CNF within a software-defined networking (SDN) or network function virtualization (NFV) environment. The CNF instantiation process involves several steps to deploy the CNF and to make the CNF operational for handling network traffic or providing specific network services. The CNF termination refers to the process of stopping, deactivating, and removing the CNF instance from a cloud or virtualized environment. The CNF termination process is typically initiated when the CNF instance is no longer needed, such as when scaling down due to decreased workload, updating to a new version, or decommissioning the CNF altogether.
[0073] The CNF scaling refers to the process of adjusting the capacity of CNFs to handle varying levels of workload or traffic within the cloud or virtualized environment. The CNF scaling can be performed in two primary ways: vertical scaling (scaling up) and horizontal scaling (scaling out). The CNF deletion refers to the process of removing the CNF instance or service from the cloud or virtualized environment. The CNF deletion process typically involves several steps to ensure that the CNF is properly decommissioned, resources are deallocated, and any associated configurations or data are cleaned up. The DSA 306 is configured to interact with the processor 406 to spawn appropriate CNF instances / CNFC instances.
[0074] The transceiver 414 is configured to receive instructions from the processor 406 to execute the at least one CNF operation. In one embodiment, the transceiver 414 is configured to forward the instructions from the processor 406 to the DSA 306 to execute the at least one CNF operation. In another embodiment, the CNF operation module 410 is configured to request the DSA 306 to execute the at least one CNF operation via the interface 214 based on a received user request. When the CNF operation commences, the PVIM 302 is adapted to store information related to the CNF operation. When the CNF operation begins, the CNF operation module 410 collects relevant data about the operation, such as configuration parameters, resource allocation, and operational status. The CNF operation module 410 transmits the collected data to the PVIM 302 through the user interface 202. The PVIM 302 organizes the data into one or more parameters. In an embodiment, the one or more parameters include, but are not limited to, IP addresses, security policies, and service ports. Upon organizing the data, the PVIM 302 stores the structured data in the database 212, which ensures that the information is not lost even after the operation is completed or if the system restarts.
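As one hypothetical way to picture the PVIM 302 organising and persisting operation data, the Python sketch below stores a structured record (IP addresses, security policies, service ports) in a local SQLite table standing in for the database 212; the schema and field names are assumptions made only for illustration.

import json
import sqlite3
from datetime import datetime, timezone

def persist_operation_record(db_path: str, cnf_id: str, parameters: dict) -> None:
    # Structured record of the operation; parameters might carry, e.g.,
    # {"ip": "...", "security_policies": [...], "service_ports": [...]}.
    record = {
        "cnf_id": cnf_id,
        "recorded_at": datetime.now(timezone.utc).isoformat(),
        "parameters": parameters,
    }
    conn = sqlite3.connect(db_path)
    try:
        with conn:  # commits on success, rolls back on error
            conn.execute(
                "CREATE TABLE IF NOT EXISTS cnf_operations (cnf_id TEXT, payload TEXT)"
            )
            conn.execute(
                "INSERT INTO cnf_operations VALUES (?, ?)",
                (cnf_id, json.dumps(record)),
            )
    finally:
        conn.close()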
[0075] In an embodiment, the at least one CNF operation includes at least one of, CNF instantiation, CNF termination, CNF scaling and CNF deletion utilizing the interface.
[0076] The user interface layer 202 of the UE 110 is configured to initiate a CNF instantiation request to the CNFLM 204 via the transceiver 414. When the CNFLM 204 receives the CNF instantiation request, the CNFLM 204 transmits the CNF instantiation request to the PEEGN 206 for checking availability of the CNF policy. If the CNF policy is available, the PEEGN 206 sends the CNF request to reserve the resources. The CNF policy includes at least the CNF Initialization (INIT) policy. In an embodiment, the reserved resources include the resources which are consumed during the CNF instantiation. The resources include at least one of the memory 402, the processor 406, and the network 105. Upon determining the availability of the at least one CNF policy and the reserved resources at the PEEGN 206, the transceiver 414 transmits the CNF instantiation request from the PEEGN 206 to the PVIM 302 to reserve the resources.
[0077] Accordingly, as per one embodiment, the CNF operation module 410 is configured to request the DSA 306 to instantiate the CNF over the SA_CM interface 214 for creating and initializing the CNF instance for handling network traffic or providing specific network services. The DSA 306 is configured to transmit the CNF instantiation request to the docker host of the swarm manager 308. The docker host of the swarm manager 308 transmits the instantiation status to the DSA 306. The DSA 306 is configured to transmit the acknowledgement of the CNF instantiation to the CNFLM 204. Further, all CNFC instantiation statuses are included in the CNF instantiation response from the DSA 306 to the CNFLM 204 over the SA_CM interface 214. The CNF instantiation status includes information pertaining to the completion or incompletion of the CNF instantiation.
[0078] The CNFLM 204 initiates the request for updating the inventory and sends the request to the PVIM 302 for proper inventory management. When the inventory is updated, the PVIM 302 transmits an acknowledgement to the CNFLM 204. On receipt of the acknowledgement, the CNFLM 204 transfers the CNFC instantiation status to the RMR 304. The RMR 304 is the library for peer-to-peer communication. Applications use the library to send and receive messages or requests where the message routing and endpoint selection is based on the message type rather than DNS host name-IP port combinations. The RMR 304 transmits the acknowledgement subsequent to the updating of the CNFC instantiation status to the CNFLM 204. Furthermore, the CNFLM 204 transmits the acknowledgement of the CNF instantiation status to the user interface layer 202.
[0079] As per the one or more embodiments, the user interface layer 202 of the UE 110 initiates a CNF termination request to the CNFLM 204 via the transceiver 414. When the CNFLM 204 receives the CNF termination request, the CNFLM 204 transmits the CNF termination request to the DSA 306 via the SA_CM interface 214 by the transceiver 414.
[0080] The DSA 306 is configured to transmit the CNF termination request to the docker host of the swarm manager 308. The docker host of the swarm manager 308 terminates all running CNFCs of the CNF and transmits the CNF termination status to the DSA 306. The DSA 306 transmits the acknowledgement of the CNF termination status to the CNFLM 204 via the transceiver 414. Further, the CNF termination response from the DSA 306, generated subsequent to the termination of all running CNFCs of the specific CNF, is transmitted to the CNFLM 204.
[0081] The CNFLM 204 initiates the request for updating the inventory and sends the request to the PVIM 302 for proper inventory management upon checking the status of all the CNFCs. When the inventory is updated, the PVIM 302 transmits an updated inventory acknowledgement to the CNFLM 204. The CNFLM 204 is configured to transfer the CNFC termination status to the RMR 304. The RMR 304 transmits an acknowledgement of the updated CNFC termination status to the CNFLM 204. Furthermore, the CNFLM 204 transmits the acknowledgement of the CNF termination status to the user interface layer 202.
[0082] As per the one or more embodiments, the transceiver 414 is configured to receive a CNFC scaling request from the PEEGN 206 to instantiate a CNFC instance. The CNF operation module 410 is configured to request the DSA 306 to instantiate the CNFC over the SA_CM interface 214. The DSA 306 is configured to transmit the CNFC instantiation request to the docker host of the swarm manager 308. The docker host of the swarm manager 308 receives the instantiation request and transmits the instantiation status to the DSA 306.
[0083] The DSA 306 is configured to transmit the acknowledgement of the CNF instantiation to the CNFLM 204. Further, the CNF instantiation response from the DSA 306 over the SA_CM interface 214 includes all CNFC instantiation statuses. An updating module 412 is configured to request an inventory update at the PVIM 302 pertaining to resources in use and reserved, based on the CNF instantiation response received from the DSA 306. The response includes a CNFC instantiation status. The transceiver 414 is configured to transmit the request to the PVIM 302 for inventory management based on the CNFC instantiation status.
[0084] Further, the system 120 is configured to enable an async event-based implementation to manage the interface 214 to function in a high availability mode in order to engage a next available CNF Life Cycle Manager (CNFLM) instance when a current CNFLM instance is down. The async event-based implementation enabled by the processor 406 ensures that one or more long running tasks are simultaneously accommodated while running one or more short running tasks. By doing so, the system 120 enables the async event-based implementation utilizing the interface 214 efficiently, which improves fault tolerance in case of any event failure, thus improving processing speed and reducing memory space requirements. The interface 214 is available for all the events and supports efficient operation to avoid the requirement of data replication.
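The async, event-based behaviour with fail-over to the next available CNFLM instance can be pictured with the asyncio sketch below; the instance list, event types, and failure handling are illustrative assumptions rather than the claimed design.

import asyncio

CNFLM_INSTANCES = ["cnflm-1", "cnflm-2", "cnflm-3"]  # hypothetical instance pool

async def handle_event(event: dict) -> str:
    last_error = None
    for instance in CNFLM_INSTANCES:
        try:
            # Placeholder for dispatching the event to this CNFLM instance; a
            # real dispatch would raise ConnectionError if the instance is
            # down, which triggers fail-over to the next instance in the pool.
            await asyncio.sleep(0.1)
            return f"{event['type']} handled by {instance}"
        except ConnectionError as exc:
            last_error = exc
    raise RuntimeError("no CNFLM instance available") from last_error

async def main() -> None:
    # A long-running task and a short-running task share one event loop,
    # so neither blocks the other.
    results = await asyncio.gather(
        handle_event({"type": "CNF_INSTANTIATE"}),
        handle_event({"type": "STATUS_QUERY"}),
    )
    print(results)

asyncio.run(main())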
[0085] FIG. 5 is a schematic representation of the system 120 in which the operations of various entities are explained, according to one or more embodiments of the present disclosure. Referring to FIG. 5, the system 120 for managing the CNF operation is described. It is to be noted that the embodiment with respect to FIG. 5 will be explained with respect to the first UE 110a for the purpose of description and illustration and should nowhere be construed as limiting the scope of the present disclosure.
[0086] As mentioned earlier with reference to FIG. 1, in an embodiment, the first UE 110a may encompass electronic apparatuses. These devices include, but are not restricted to, personal computers, laptops, tablets, smartphones, or other devices enabled for web connectivity. The scope of the first UE 110a explicitly extends to a broad spectrum of electronic devices capable of executing computing operations and accessing networked resources, thereby providing users with a versatile range of functionalities for both personal and professional applications. This embodiment acknowledges the evolving nature of electronic devices and their integral role in facilitating access to digital services and platforms. In an embodiment, the first UE 110a can be associated with multiple users. Each user equipment 110 is communicatively coupled with the processor 406 via the network 105.
[0087] The first UE 110a includes one or more primary processors 502 communicably coupled to the processor 406 of the system 120. The one or more primary processors 502 are coupled with a memory unit 504 storing instructions which are executed by the one or more primary processors 502. Execution of the stored instructions by the one or more primary processors 502 enables the first UE 110a to transmit the user request to the processor 406 to initiate the CNF operation.
[0088] Furthermore, the one or more primary processors 502 within the UE 110 are uniquely configured to execute a series of steps as described herein. This configuration underscores the processor’s capability to manage CNF operation. The operational synergy between the primary processors and the additional processors, guided by the executable instructions stored in the memory unit 504, facilitates a seamless initiation of CNF operation. This initiation is critically underpinned by the dynamic resource management capabilities, which includes establishing necessary connections for policy and resource evaluation, determining the availability of CNF policies, reserving resources based on these policies, and ultimately facilitating the instantiation of the CNF.
[0089] Further, the processor 406 of the system 120 is configured to manage the CNF operation. More specifically, the processor 406 of the system 120 is configured to manage the CNF operation initiated from a kernel 506 of at least the first UE 110a.
[0090] The kernel 506 is a core component serving as the primary interface between hardware components of the first UE 110a and the plurality of services at the database 212. The kernel 506 is configured to provide the plurality of services on the first UE 110a to resources available in the network 105. The resources include one or more of a Central Processing Unit (CPU) and memory components such as Random Access Memory (RAM) and Read Only Memory (ROM).
[0091] As mentioned earlier with reference to FIG. 4, the processor 406 of the system 120 is configured to establish the interface 214 with the DSA 306, the interface 214 enabling orchestration of the CNF operation. In the context of Cloud-Native Function (CNF) operations, the inventory encompasses the resources, components, and configurations needed to deploy, manage, and operate the CNF operations within a cloud-native architecture. The inventory includes, but is not limited to, a resource inventory, a component inventory, and an operational inventory. The network resources include virtual networks, load balancers, and network interfaces necessary for CNF communication and traffic management, for example Virtual Private Cloud (VPC) subnets and internal load balancers routing traffic to CNF instances. The processor 406 is further configured to request the DSA 306 to execute at least one CNF operation via the interface 214 based on the received user request, and to transmit to the PVIM 302 the inventory management request to manage inventory pertaining to resources at the database 212 based on a response received from the DSA 306 pertaining to completion of execution of the at least one CNF operation.
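For concreteness, the three inventory categories mentioned above might be grouped as in the Python structure below; every key and value in this sketch is a made-up example, not data from the specification.

# Illustrative grouping of resource, component, and operational inventory.
inventory = {
    "resource_inventory": {
        "compute": {"cpu_cores": 64, "memory_gb": 256},
        "network": {"vpc_subnets": ["10.0.1.0/24"], "load_balancers": ["ilb-cnf-1"]},
    },
    "component_inventory": {
        "cnf_instances": ["amf-cnf-01"],
        "cnfc_instances": ["amf-cnfc-01a", "amf-cnfc-01b"],
    },
    "operational_inventory": {
        "amf-cnf-01": {"status": "INSTANTIATED", "reserved": False, "in_use": True},
    },
}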
[0092] As per the illustrated embodiment, the system 120 includes the processor 406, the memory 402, and the I/O interface 404. The operations and functions of the processor 406, the memory 402, and the I/O interface 404 are already explained in FIG. 4. For the sake of brevity, a similar description related to the working and operation of the system 120 as illustrated in FIG. 4 has been omitted to avoid repetition.
[0093] Further, the processor 406 includes the connecting module 408, the CNF operation module 410, and transceiver 414. The operations and functions of the connecting module 408, the CNF operation module 410, and the transceiver 414 are already explained in FIG. 4. Hence, for the sake of brevity, a similar description related to the working and operation of the system 120 as illustrated in FIG. 4 has been omitted to avoid repetition. The limited description provided for the system 120 in FIG. 5, should be read with the description provided for the system 120 in the FIG. 4 above, and should not be construed as limiting the scope of the present disclosure.
[0094] FIG. 6 illustrates an operational flow diagram depicting a process for performing the CNF instantiation operation, according to one or more embodiments of the present disclosure.
[0095] At step 602, the user interface layer 202 of the UE 110 initiates the CNF instantiation request by the user to the CNFLM 204. The CNF instantiation refers to the process of deploying and activating the CNF within a software-defined networking (SDN) or network function virtualization (NFV) environment. The CNF instantiation process involves several steps to deploy the CNF and to make the CNF operational for handling network traffic or providing specific network services.
[0096] At step 604, when the CNFLM 204 receives the CNF instantiation request, the CNFLM 204 transmits the CNF instantiation request to the PEEGN 206 for checking availability of the CNF policy. The CNF policy includes at least the CNF Initialization (INIT) policy. In case the CNF policy is present, the PEEGN 206 sends the CNF request to reserve the resources. The reserved resources include the resources which are consumed during the CNF instantiation. The resources include at least one of the memory 402, the processor 406, and the network 105. Further, the CNFLM 204 requests the DSA 306 to perform the CNF instantiation operation.
[0097] At step 606, as per one embodiment, the CNFLM 204 requests, via the CNF operation module 410, the DSA 306 to instantiate the CNF over the SA_CM interface 214. The DSA 306 transmits the CNF instantiation request to the docker host of the swarm manager 308. The docker host of the swarm manager 308 transmits the instantiation status to the DSA 306. The docker host of the swarm manager 308 is configured to deploy docker images to the docker host.
[0098] At step 608, the DSA 306 transmits the acknowledgement of the CNF instantiation to the CNFLM 204. Further, all CNFC instantiation statuses are included in the CNF instantiation response from the DSA 306 over the SA_CM interface 214. The CNF instantiation status includes information pertaining to the completion or incompletion of the CNF instantiation.
[0099] At step 610, upon the CNFLM 204 receiving the acknowledgement of the CNF instantiation, the CNFLM 204 initiates the request for updating the inventory and sends the request to the PVIM 302 for proper inventory management. When the inventory is updated, the PVIM 302 transmits an acknowledgement of the updated inventory to the CNFLM 204.
[00100] At step 612, upon updating the inventory, the CNFLM 204 transfers the CNFC instantiation status to the RMR 304. The RMR 304 is the library for peer-to-peer communication. Applications use the library to send and receive messages or requests where the message routing and endpoint selection is based on the message type rather than DNS host name-IP port combinations.
[00101] At step 614, the RMR 304 transmits an acknowledgement of the updated CNFC instantiation status to the CNFLM 204. Furthermore, the CNFLM 204 transmits the acknowledgement of the CNF instantiation status to the user interface layer 202.
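The FIG. 6 sequence (steps 602-614) can be condensed into the following Python sketch, written against hypothetical client objects for the PEEGN 206, PVIM 302, DSA 306, and RMR 304; none of the method names below are taken from the specification.

def instantiate_cnf(cnf_request, peegn, pvim, dsa, rmr):
    # Steps 602-604: check the CNF INIT policy and reserve resources.
    if not peegn.has_init_policy(cnf_request["cnf_type"]):
        return {"status": "REJECTED", "reason": "no CNF INIT policy"}
    peegn.reserve_resources(cnf_request)

    # Step 606: ask the region-specific DSA to instantiate the CNF.
    response = dsa.instantiate(cnf_request)  # includes per-CNFC statuses

    # Steps 608-610: on acknowledgement, update the inventory at the PVIM.
    pvim.update_inventory(cnf_request["cnf_id"], response["cnfc_statuses"])

    # Steps 612-614: publish the CNFC statuses over the RMR and report back.
    rmr.publish("CNF_INSTANTIATE_ACK", response["cnfc_statuses"])
    return {"status": response["status"]}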
[00102] FIG. 7 illustrates an operational flow diagram depicting a process for performing CNF termination operation, according to one or more embodiments of the present disclosure.
[00103] At step 702, the user interface layer 202 of the UE 110 initiates the CNF termination request by the user to the CNFLM 204. The CNF termination refers to the process of stopping, deactivating, and removing the CNF instance from a cloud or virtualized environment. The CNF termination process is typically initiated when the CNF instance is no longer needed, such as when scaling down due to decreased workload, updating to a new version, or decommissioning the CNF altogether.
[00104] At step 704, when the CNFLM 204 receives the CNF termination request, the CNFLM 204, via the CNF operation module 410, transmits the CNF termination request to the DSA 306 to terminate the CNF over the SA_CM interface 214.
[00105] At step 706, the DSA 306 transmits the CNF termination request to the docker host of the swarm manager 308. The docker host of the swarm manager 308 transmits the termination status to the DSA 306. The docker host of the swarm manager 308 is configured to deploy docker images to the docker host.
[00106] At step 708, the DSA 306 transmits the acknowledgement of the CNF termination to the CNFLM 204. Further, all CNFC termination statuses are included in the CNF termination response from the DSA 306 over the SA_CM interface 214.
[00107] At step 710, upon the CNFLM 204 receiving the acknowledgement of the CNF termination status, the CNFLM 204 initiates the request for updating the inventory and sends the request to the PVIM 302 for proper inventory management. When the inventory is updated, the PVIM 302 transmits an acknowledgement of the updated inventory to the CNFLM 204.
[00108] At step 712, upon updating the inventory, the CNFLM 204 transfers the CNFC termination status to the RMR 304. The RMR 304 is the library for peer-to-peer communication. Applications use the library to send and receive messages or requests where the message routing and endpoint selection is based on the message type rather than DNS host name-IP port combinations.
[00109] At step 714, the RMR 304 transmits an acknowledgement of the updated CNFC termination status to the CNFLM 204. Furthermore, the CNFLM 204 transmits the acknowledgement of the CNF termination status to the user interface layer 202 of the UE 110.
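A similarly condensed, hypothetical sketch of the FIG. 7 termination sequence (steps 702-714) is given below, emphasising that inventory resources are released back to the free pool only after the DSA reports that all CNFCs have been terminated; the method names are placeholders, not part of the specification.

def terminate_cnf(cnf_id, dsa, pvim, rmr):
    # Steps 704-708: the DSA terminates all running CNFCs of the CNF and
    # acknowledges the overall termination status.
    response = dsa.terminate(cnf_id)
    if response["status"] == "TERMINATED":
        # Step 710: move the CNF's resources back to the free pool.
        pvim.release_resources(cnf_id)
    # Steps 712-714: publish the CNFC termination statuses and acknowledge.
    rmr.publish("CNF_TERMINATE_ACK", response)
    return response["status"]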
[00110] FIG. 8 illustrates an operational flow diagram depicting a process for performing CNF scaling operation, according to one or more embodiments of the present disclosure.
[00111] At step 802, the PEEGN 206 initiates the CNF scaling request to instantiate a CNFC instance to the CNFLM 204. The CNF scaling refers to the process of adjusting the capacity of CNFs to handle varying levels of workload or traffic within the cloud or virtualized environment. The CNF scaling can be performed in two primary ways: vertical scaling (scaling up) and horizontal scaling (scaling out).
[00112] At step 804, when the CNFLM 204 receives the CNF scaling request, the CNFLM 204 transmits the CNF scaling request to the DSA 306 to instantiate the CNFC over the SA_CM interface 214. The DSA 306 transmits the CNFC instantiation request to the docker host of the swarm manager 308. The docker host of the swarm manager 308 receives the CNFC instantiation request and transmits the CNF instantiation status to the DSA 306.
[00113] At step 806, the DSA 306 transmits the acknowledgement of the CNF instantiation to the CNFLM 204. Further, all CNFC instantiation statuses are included in the CNF instantiation response from the DSA 306 over the SA_CM interface 214.
[00114] At step 808, upon receiving the acknowledgement of the CNF instantiation by the CNFLM 204, the CNFLM 204 initiates the request to update the inventory and sends the request to the PVIM 302 for proper inventory management. When the inventory is updated, the PVIM 302 transmits the acknowledgement of the updated inventory to the CNFLM 204.
[00115] At step 810, the CNFLM 204 transfers the CNFC instantiation status to the RMR 304. The RMR 304 is the library for peer-to-peer communication. Applications use the library to send and receive messages or requests where the message routing and endpoint selection is based on the message type rather than DNS host name-IP port combinations.
[00116] At step 812, the RMR 304 transmits the acknowledgement of the updated CNFC instantiation status to the CNFLM 204. Furthermore, the CNFLM 204 transmits the acknowledgement of the CNF scaling status to the PEEGN 206.
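By way of a non-limiting illustration only, the two scaling modes referred to at step 802 may be contrasted by the following Python sketch; the data class, thresholds and resource units are hypothetical and do not form part of the present disclosure.

# Minimal sketch contrasting vertical scaling (changing the resources of an
# existing CNFC instance) with horizontal scaling (changing the number of
# CNFC instances). Thresholds and units are hypothetical.
from dataclasses import dataclass

@dataclass
class CnfcSpec:
    replicas: int            # horizontal dimension (scale out / in)
    cpu_per_replica: float   # vertical dimension (scale up / down)

def scale(spec: CnfcSpec, cpu_utilisation: float) -> CnfcSpec:
    """Scale out above 80% utilisation, scale up above 60%, otherwise keep."""
    if cpu_utilisation > 0.8:
        return CnfcSpec(spec.replicas + 1, spec.cpu_per_replica)    # horizontal
    if cpu_utilisation > 0.6:
        return CnfcSpec(spec.replicas, spec.cpu_per_replica * 1.5)  # vertical
    return spec

print(scale(CnfcSpec(replicas=2, cpu_per_replica=1.0), 0.85))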
[00117] FIG. 9 illustrates a system architecture framework 900 (e.g., Management and Orchestration (MANO) architecture framework) that can be implemented in the system of FIG.4, according to the one or more embodiments of the present disclosure. The system architecture 900 includes the user interface 202, a Network Functions Virtualization (NFV) and Software-Defined Networking (SDN) design function module 905, a platform foundation service module 910, a platform core service module 915, and a platform resource adapter and utilities module 920.
[00118] The NFV and SDN design function module 905 is crucial for modernizing network infrastructure by enabling virtualized, scalable, and programmable network functions and management systems, particularly within the framework of CNFs. The platform foundation service module 910 refers to the underlying services and infrastructure components that support and enable the deployment, operation, and management of containerized network functions. The platform foundation service module 910 provides the essential capabilities and resources required for the CNF environment to function effectively.
[00119] The platform core service module 915 refers to the fundamental services and components that are essential for the core functionality and operation of containerized network functions. These services are critical for the effective deployment, execution, and management of CNFs, providing the necessary support and infrastructure for their operation. The platform resource adapter and utilities module 920 refers to a set of components and tools designed to manage and adapt various resources and services necessary for the operation of CNFs. The platform resource adapter and utilities module 920 plays a crucial role in integrating CNFs with underlying infrastructure and services, providing the necessary support for efficient operation, resource utilization, and interoperability.
[00120] The NFV and SDN design function module 905 includes a VNF lifecycle manager 905a, a VNF catalog 905b, a network service catalog 905c, a network slicing and service chaining manager 905d, a physical and virtual resource manager 905e, and a CNF lifecycle manager 905f.
[00121] The VNF lifecycle manager 905a is responsible for managing the entire lifecycle of Virtual Network Functions (VNFs). The VNF lifecycle manager 905a ensures that VNFs or CNFs are deployed, configured, monitored, scaled, and eventually decommissioned effectively. The VNF catalog 905b (referred to as a CNF catalog) is a repository or registry that stores information about various containerized network functions and their configurations. The VNF catalog 905b serves as a central reference for managing and deploying CNFs, providing details about their capabilities, requirements, and how they can be used within the network environment. The network service catalog 905c is a comprehensive repository that organizes and manages the information related to network services composed of multiple CNFs or other network functions. The network service catalog 905c serves as a central resource for defining, deploying, and managing these services within a containerized network environment.
[00122] The network slicing and service chaining manager 905d is a crucial component responsible for orchestrating and managing network slicing and service chaining functionalities. These functionalities are essential for efficiently utilizing network resources and delivering tailored network services in a dynamic and scalable manner. The physical and virtual resource manager 905e is a critical component responsible for overseeing and managing both physical and virtual resources required to support the deployment, operation, and scaling of CNFs. The physical and virtual resource manager 905e ensures that the necessary resources are allocated efficiently and effectively to meet the performance, availability, and scalability requirements of containerized network functions.
[00123] Further, the CNF lifecycle manager 905f is a component responsible for overseeing the entire lifecycle of containerized network functions. This includes the management of CNFs from their initial deployment through ongoing operation and maintenance, up to their eventual decommissioning. The CNF lifecycle manager 905f ensures that the CNFs are efficiently deployed, monitored, scaled, updated, and removed, facilitating the smooth operation of network services in a containerized environment.
[00124] The platform foundation service module 910 includes a microservice elastic load balancer 910a, an identity and access manager 910b, a command line interface 910c, a central logging manager 910d and an event routing manager 910e.
[00125] The microservice elastic load balancer 910a is a specific type of load balancer designed to dynamically distribute network traffic across a set of microservices running in a containerized environment. Its primary purpose is to ensure efficient resource utilization, maintain high availability, and improve the performance of network services by evenly distributing incoming traffic among multiple instances of microservices. The identity and access manager 910b is a critical component responsible for managing and securing access to containerized network functions and their resources. The identity and access manager 910b ensures that only authorized users and systems can access specific resources, and it enforces policies related to identity verification, authentication, authorization, and auditing within the CNF ecosystem.
[00126] The central logging manager 910d is a component responsible for aggregating, managing, and analyzing log data from various containerized network functions and associated infrastructure components. This centralized approach to logging ensures that logs are collected from disparate sources, consolidated into a single repository, and made accessible for monitoring, troubleshooting, and auditing purposes. The event routing manager 910e is a component responsible for handling the distribution and routing of events and notifications generated by various parts of the CNF environment. This includes events related to system status, performance metrics, errors, and other operational or application-level events. The event routing manager 910e ensures that these events are efficiently routed to the appropriate consumers, such as monitoring systems, alerting systems, or logging infrastructure, for further processing and action.
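By way of a non-limiting illustration only, the routing of events to registered consumers by a component such as the event routing manager 910e may be sketched in Python as follows; the event classes and consumer callbacks are hypothetical and do not form part of the present disclosure.

# Minimal sketch of event routing: events are dispatched to the consumers
# registered for their event class. All event classes are hypothetical.
from collections import defaultdict
from typing import Callable, Dict, List

_subscribers: Dict[str, List[Callable[[dict], None]]] = defaultdict(list)

def subscribe(event_class: str, consumer: Callable[[dict], None]) -> None:
    _subscribers[event_class].append(consumer)

def publish(event_class: str, event: dict) -> None:
    for consumer in _subscribers[event_class]:
        consumer(event)  # e.g. an alerting system or the central logging manager

subscribe("cnf.fault", lambda e: print("alerting:", e))
subscribe("cnf.fault", lambda e: print("central log:", e))
publish("cnf.fault", {"cnf_id": "cnf-1", "severity": "critical"})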
[00127] The platform core service module 915 includes an NFV infrastructure monitoring manager 915a, an assurance manager 915b, a performance manager 915c, a policy execution engine 915d, a capacity monitoring manager 915e, a release management repository 915f, a configuration manager and GCT 915g, an NFV platform decision analytics unit 915h, a platform NoSQL DB 915i, a platform scheduler and Cron Jobs module 915j, a VNF backup & upgrade manager 915k, a micro service auditor 915l, and a platform operation, administration and maintenance manager 915m.
[00128] The NFV infrastructure monitoring manager 915a monitors the underlying infrastructure of NFV environments, including computing, storage, and network resources. The NFV infrastructure monitoring manager 915a provides real-time visibility into resource health, performance, and utilization. Further, the NFV infrastructure monitoring manager 915a detects and alerts infrastructure issues. Further, the NFV infrastructure monitoring manager 915a integrates with monitoring tools to ensure reliable operation of CNFs.
[00129] The assurance manager 915b manages the quality and reliability of network services by ensuring compliance with service level agreements (SLAs) and operational standards. The performance manager 915c optimizes the performance of CNFs by tracking and analyzing key performance indicators (KPIs). The policy execution engine 915d enforces and applies policies within the CNF environment to manage operations and access. Further, the policy execution engine 915d executes policies related to security, resource allocation, and service quality. Further, the policy execution engine 915d translates policy rules into actionable configurations and enforces compliance across CNFs.
[00130] The capacity monitoring manager 915e monitors and manages the capacity of resources within the CNF environment to ensure optimal usage and avoid resource shortages. The release management repository 915f stores and manages software releases, configurations, and versions of CNFs. Further, the release management repository 915f keeps track of different versions of CNFs.
[00131] The configuration manager and Generic Configuration Tool (GCT) 915g manages the configuration of CNFs and related infrastructure components. The NFV platform decision analytics unit 915h analyzes data from an NFV platform to support decision-making and strategic planning.
[00132] The platform NoSQL database (DB) 915i is used for storing and managing large volumes of unstructured or semi-structured data within the CNF environment. The platform scheduler and Cron Jobs module 915j manages scheduled tasks and periodic operations within the CNF environment. The VNF backup & upgrade manager 915k oversees the backup and upgrade processes for Virtual Network Functions (VNFs) within the CNF environment.
[00133] The micro service auditor 915l monitors and audits microservices to ensure compliance with operational and security standards. The platform operation, administration and maintenance manager 915m manages the overall operation, administration, and maintenance of the CNF platform.
[00134] The platform resource adapter and utilities module 920 includes a platform external API adaptor and gateway 920a, a generic decoder and indexer 920b, a swarm adaptor 920c, an OpenStack API adaptor 920d and an NFV gateway 920e.
[00135] The platform external API adaptor and gateway 920a facilitates communication between the CNF platform and external systems or services by providing an interface for API interactions. The generic decoder and indexer 920b decodes and indexes various types of data and logs within the CNF environment. The swarm adaptor 920c facilitates communication between a swarm cluster and the CNF environment, including container deployment, scaling, and management.
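By way of a non-limiting illustration only, a swarm adaptor in the spirit of 920c may be sketched in Python as follows, assuming the Docker SDK for Python (docker-py) is available; the image and service names are hypothetical, and error handling is omitted for brevity.

# Minimal sketch of a swarm adaptor: deploy, scale and remove a replicated
# swarm service hosting CNFC containers. Names below are hypothetical.
import docker
from docker.types import ServiceMode

def _client() -> docker.DockerClient:
    return docker.from_env()  # talks to the local docker host / swarm manager

def deploy_cnfc(image: str, name: str, replicas: int):
    """Create a replicated service whose tasks are the CNFC containers."""
    return _client().services.create(
        image,
        name=name,
        mode=ServiceMode("replicated", replicas=replicas),
    )

def scale_cnfc(name: str, replicas: int) -> None:
    """Horizontal scaling: change the replica count of an existing service."""
    _client().services.get(name).scale(replicas)

def terminate_cnfc(name: str) -> None:
    """Remove the service, stopping all of its CNFC containers."""
    _client().services.get(name).remove()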
[00136] The OpenStack API adaptor 920d provides an interface for the CNF platform to interact with OpenStack APIs, enabling operations such as provisioning, scaling, and managing virtual resources. The NFV gateway 920e manages and facilitates communication between NFV (Network Functions Virtualization) components and external networks or services.
[00137] FIG. 10 is a flow diagram illustrating a method 1000 for managing the CNF operation, according to one or more embodiments of the present disclosure. For the purpose of description, the method 1000 is described with the embodiments as illustrated in FIG. 4 and should nowhere be construed as limiting the scope of the present disclosure.
[00138] At step 1002, the method 1000 includes the step of establishing, by the connecting module 408, the interface 214 with the DSA 306 to perform the CNF operation, the interface 214 enabling orchestration of the CNF operation. The interface 214 is at least one of, the SA_CM interface. The interface 214 is configured to connect the processor 406 and the DSA 306, and is responsible for orchestrating the CNF operation. In an embodiment, the at least one CNF operation includes at least one of, CNF instantiation, CNF termination, CNF scaling and CNF deletion utilizing the interface. The DSA 306 is configured to interact with the processor 406 to spawn appropriate CNF instances / CNFC instances. When the CNF operation commences, the PVIM 302 is adapted to store information related to the CNF operation.
[00139] At step 1004, the method 1000 includes the step of requesting the DSA 306 to execute at least one CNF operation via the interface 214 based on the received user request. In one embodiment, the transceiver 414 is configured to forward the instructions from the processor 406 to the DSA 306 to execute the at least one CNF operation. In another embodiment, the CNF operation module 410 is configured to request the DSA 306 to execute the at least one CNF operation via the interface 214 based on received user request.
[00140] The user interface layer 202 of the UE 110 is configured to initiate a CNF instantiation request to the CNFLM 204 via the transceiver 414. When the CNFLM 204 receives the CNF instantiation request, the CNFLM 204 transmits the CNF instantiation request to the PEEGN 206 for checking the CNF policy and reserving resources based on the provided CNF details. If the availability of the at least one CNF policy and of the resources to be reserved is determined at the PEEGN 206, the transceiver 414 transmits a reservation request from the PEEGN 206 to the PVIM 302 to reserve the resources.
[00141] Accordingly, as per one embodiment, the CNF operation module 410 is configured to request the DSA 306 to instantiate the CNF over the SA_CM interface 214 for creating and initializing the CNF instance for handling network traffic or providing specific network services. The DSA 306 is configured to transmit the CNF instantiation request to the docker host of the swarm manager 308. The docker host of the swarm manager 308 transmits the instantiation status to the DSA 306. The DSA 306 is configured to transmit the acknowledgement of the CNF instantiation to the CNFLM 204. Further, all CNFC instantiation statuses are included in the CNF instantiation response from the DSA 306 to the CNFLM 204 over the SA_CM interface 214. The CNF instantiation status includes information pertaining to the completion or incompletion of the CNF instantiation.
[00142] At step 1006, the method 1000 further includes the step of transmitting, by the processor, to the PVIM 302, an inventory management request to manage inventory pertaining to resources at the database 212 based on a response received from the DSA 306 pertaining to completion of execution of the at least one CNF operation. The CNFLM 204 initiates the request to update the inventory and sends the request to the PVIM 302 for proper inventory management. When the inventory is updated, the PVIM 302 transmits the acknowledgement of the updated inventory to the CNFLM 204. The CNFLM 204 transfers the CNFC instantiation status to the RMR 304. The RMR 304 transmits the acknowledgement of the updated CNFC instantiation status to the CNFLM 204. Furthermore, the CNFLM 204 transmits the acknowledgement of the CNF instantiation status to the user interface layer 202.
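By way of a non-limiting illustration only, the steps 1002, 1004 and 1006 of the method 1000 may be outlined in Python as follows; the stub clients shown are hypothetical placeholders for the SA_CM interface 214 towards the DSA 306 and the inventory interface towards the PVIM 302, and do not form part of the present disclosure.

# Minimal sketch of steps 1002-1006 of method 1000 on the CNFLM side.
class StubDsa:
    def request(self, operation: str, cnf_id: str) -> dict:
        # hypothetical stand-in for the DSA 306 behind the SA_CM interface 214
        return {"cnf_id": cnf_id, "operation": operation, "status": "COMPLETED"}

class StubPvim:
    def update_inventory(self, cnf_id: str, status: dict) -> None:
        # hypothetical stand-in for the inventory interface towards the PVIM 302
        print("inventory updated for", cnf_id, "->", status["status"])

class CnfLifecycleManager:
    def __init__(self, dsa: StubDsa, pvim: StubPvim) -> None:
        self.dsa = dsa    # step 1002: interface towards the DSA is established
        self.pvim = pvim

    def execute(self, operation: str, cnf_id: str) -> dict:
        response = self.dsa.request(operation, cnf_id)  # step 1004
        self.pvim.update_inventory(cnf_id, response)    # step 1006
        return response

CnfLifecycleManager(StubDsa(), StubPvim()).execute("CNF_INSTANTIATION", "cnf-1")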
[00143] Further, the method 1000 includes the step of performing the CNF termination process. The user interface layer 202 of the UE 110 initiates a CNF termination request to the CNFLM 204 via the transceiver 414. When the CNFLM 204 receives the CNF termination request, the CNFLM 204 transmits the CNF termination request to the DSA 306 via the SA_CM interface 214 through the transceiver 414.
[00144] The DSA 306 is configured to transmit the CNF termination request to the docker host of the swarm manager 308. The docker host of the swarm manager 308 terminates all running CNFCs of the CNF and transmits the CNF termination status to the DSA 306. The DSA 306 transmits the acknowledgement of the CNF termination status to the CNFLM 204 via the transceiver 414. The CNFLM 204 initiates the request to update the inventory and sends the request to the PVIM 302 for proper inventory management upon checking the status of all the CNFCs. When the inventory is updated, the PVIM 302 transmits the acknowledgement of the updated inventory to the CNFLM 204.
[00145] The CNFLM 204 is configured to transfer the CNFC termination status to the RMR 304. The RMR 304 transmits the acknowledgement of the updated CNFC termination status to the CNFLM 204. Furthermore, the CNFLM 204 transmits the acknowledgement of the CNF termination status to the user interface layer 202.
[00146] As per the one or more embodiments, the transceiver 414 is configured to receive a CNFC scaling request from the PEEGN 206 to instantiate a CNFC instance. The CNF operation module 410 is configured to request the DSA 306 to instantiate the CNFC over the SA_CM interface 214. The DSA 306 is configured to transmit the CNFC instantiation request to the docker host of the swarm manager 308. The docker host of the swarm manager 308 receives the CNFC instantiation request and transmits the instantiation status to the DSA 306.
[00147] The DSA 306 is configured to transmit the acknowledgement of the CNF instantiation to the CNFLM 204. Further, the CNF instantiation response from the DSA 306 over the SA_CM interface 214 includes all CNFC instantiation statuses. An updating module 412 is configured to request an update of the inventory at the PVIM 302 pertaining to resources in use and reserved, based on the CNF instantiation response received from the DSA 306. The response includes a CNFC instantiation status. The transceiver 414 is configured to transmit the request to the PVIM 302 for inventory management based on the CNFC instantiation status.
[00148] Further, the processor 406 is configured to enable an async event-based implementation to manage the interface 214 to function in a high availability mode in order to engage a next available CNF Life Cycle Manager (CNFLM) instance when a current CNFLM instance is down. The async event-based implementation enabled by the processor 406 ensures that one or more long running tasks are simultaneously accommodated while running one or more short running tasks. By doing so, the method 1000 enables the async event-based implementation to utilize the interface 214 efficiently, which improves fault tolerance against any event failure, thus improving processing speed and reducing the memory space requirement. The interface 214 is available for all the events and supports efficient operation to avoid the requirement of data replication.
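By way of a non-limiting illustration only, the async event-based behaviour described above may be sketched in Python with asyncio as follows; the instance names, task durations and failure condition are hypothetical and do not form part of the present disclosure.

# Minimal sketch: long and short running tasks are awaited concurrently, and a
# request is retried against the next CNFLM instance when the current one is
# unreachable. All names and durations are hypothetical.
import asyncio

CNFLM_INSTANCES = ["cnflm-1", "cnflm-2"]  # hypothetical high-availability pair

async def long_task(name: str) -> str:
    await asyncio.sleep(2.0)   # e.g. a CNF instantiation
    return f"{name}: done"

async def short_task(name: str) -> str:
    await asyncio.sleep(0.1)   # e.g. a status query
    return f"{name}: done"

async def send_with_failover(event: str) -> str:
    for instance in CNFLM_INSTANCES:
        try:
            # transport call omitted; assume the first instance is down
            if instance == "cnflm-1":
                raise ConnectionError("instance down")
            return f"{event} handled by {instance}"
        except ConnectionError:
            continue  # engage the next available CNFLM instance
    raise RuntimeError("no CNFLM instance available")

async def main() -> None:
    results = await asyncio.gather(
        long_task("instantiate cnf-1"),
        short_task("query cnf-2"),
        send_with_failover("CNF_SCALING"),
    )
    print(results)

asyncio.run(main())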
[00149] The present invention further discloses a non-transitory computer-readable medium having stored thereon computer-readable instructions. The computer-readable instructions are executed by a processor 406. The processor 406 is configured to establish an interface 214 with a Docker Swarm Adaptor (DSA) 306, the interface 214 enabling orchestration of the Container Network Function (CNF) operation. The processor 406 is configured to request the DSA 306 to execute at least one CNF operation via the interface 214 based on the received user request. The processor 406 is further configured to transmit, to a Physical Virtual Inventory Manager (PVIM) 302, an inventory management request to manage inventory pertaining to resources at a database 212 based on a response received from the DSA 306 pertaining to completion of execution of the at least one CNF operation.
[00150] A person of ordinary skill in the art will readily ascertain that the illustrated embodiments and steps in description and drawings (FIG.1-10) are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope and spirit of the disclosed embodiments.
[00151] The present disclosure incorporates the technical advancement of enabling an async event-based implementation to utilize the interface efficiently and to function in a high availability mode in order to engage a next available CNFLM instance when a current CNFLM instance is down, owing to which the present invention improves fault tolerance against any event failure. The interface is highly available for all the events and supports efficient operation to avoid the requirement of data replication.
[00152] The present disclosure offers several advantages by updating the inventory for proper resource management during the CNF and CNFC instantiation process. The resource inventory is updated during partial CNF instantiation; for example, where 3 CNFCs are present in a CNF and, during instantiation, 2 CNFCs get instantiated successfully while 1 CNFC instantiation fails, the CNFLM informs these details to the PVIM so that the resources reserved for the failed CNFC are moved to the free pool. Further, the present disclosure includes the interface, which delivers image deletion and faulty host replacement notifications to the DSA.
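By way of a non-limiting illustration only, the partial-instantiation example described above (two of three CNFCs instantiated successfully) may be sketched in Python as follows; the status values and resource units are hypothetical and do not form part of the present disclosure.

# Minimal sketch: reservations of CNFCs whose instantiation failed are moved
# back to the free pool, while reservations of successful CNFCs stay allocated.
def reconcile_inventory(cnfc_status: dict, reserved: dict, free_pool: int):
    allocated = {}
    for cnfc, status in cnfc_status.items():
        if status == "SUCCESS":
            allocated[cnfc] = reserved[cnfc]
        else:
            free_pool += reserved[cnfc]  # failed CNFC: release the reservation
    return allocated, free_pool

status = {"cnfc-1": "SUCCESS", "cnfc-2": "SUCCESS", "cnfc-3": "FAILED"}
reserved = {"cnfc-1": 2, "cnfc-2": 2, "cnfc-3": 2}  # e.g. vCPUs reserved per CNFC
print(reconcile_inventory(status, reserved, free_pool=10))
# -> ({'cnfc-1': 2, 'cnfc-2': 2}, 12)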
[00153] The present invention offers multiple advantages over the prior art, and the above listed are a few examples to emphasize some of the advantageous features. The listed advantages are to be read in a non-limiting manner.

REFERENCE NUMERALS
[00154] Environment - 100
[00155] Network - 105
[00156] User Equipment - 110
[00157] Server - 115
[00158] System - 120
[00159] User Interface Layer -202
[00160] CNFLM – 204
[00161] PEEGN – 206
[00162] Infrastructure module - 208
[00163] NMS- 210
[00164] Database- 212
[00165] SA_CM interface – 214
[00166] PVIM- 302
[00167] RMR- 304
[00168] DSA- 306
[00169] Swarm Manager- 308
[00170] Memory- 402
[00171] I/O interface- 404
[00172] Processor- 406
[00173] Connecting module- 408
[00174] CNF operation module - 410
[00175] Updating module- 412
[00176] Transceiver- 414
[00177] One or more primary processors – 502
[00178] Memory of user equipment –504
[00179] Kernel-506
[00180] System Architecture framework- 900
[00181] NFV and SDN design function module – 905
[00182] VNF lifecycle manager - 905a
[00183] VNF catalog - 905b
[00184] Network service catalog - 905c
[00185] Network slicing and service chaining manager - 905d
[00186] Physical and virtual resource manager - 905e
[00187] CNF lifecycle manager - 905f
[00188] Platform foundation service module - 910
[00189] Microservice elastic load balancer - 910a
[00190] Identity and access manager - 910b
[00191] Command line interface - 910c
[00192] Central logging manager - 910d
[00193] Event routing manager - 910e
[00194] Platform core service module – 915
[00195] NFV infrastructure monitoring manager - 915a
[00196] Assurance manager - 915b
[00197] Performance manager - 915c
[00198] Policy execution engine - 915d
[00199] Capacity monitoring manager - 915e
[00200] Release management repository - 915f
[00201] Configuration manager and GCT - 915g
[00202] NFV platform decision analytics unit- 915h
[00203] Platform NoSQL DB - 915i
[00204] Platform scheduler and cron Jobs module - 915j
[00205] VNF backup & upgrade manager - 915k
[00206] Micro service auditor - 915l
[00207] Platform operation, administration and maintenance manager - 915m
[00208] Platform resource adapter and utilities module – 920
[00209] Platform External API adaptor and gateway - 920a
[00210] Generic decoder and indexer - 920b
[00211] Swarm adaptor - 920c
[00212] OpenStack API adaptor - 920d
[00213] NFV gateway - 920e.

CLAIMS
We Claim:
1. A method (900) for managing a Network Function (NF) operation, the method (900) comprises the steps of:
establishing (902), by a processor (406), an interface (214) with a microservice (306), the interface (214) enabling orchestration of the NF operation;
requesting (904), by the processor (406), the microservice (306) to execute at least one NF operation via the interface (214) based on a received user request; and
transmitting (906), by the processor (406), to a microservice (302), an inventory management request to manage inventory pertaining to resources at a database (212) based on a response received from the microservice (306) pertaining to completion of execution of the at least one NF operation.

2. The method (900) as claimed in claim 1, wherein the interface (214) is at least one of, a SA_CM interface, wherein the SA_CM interface between the processor and the microservice (306) is responsible for orchestrating the NF operation.

3. The method (900) as claimed in claim 1, wherein the microservice (306) is configured to interact with the processor (406) to spawn appropriate instances of Network Functions (NFs).

4. The method (900) as claimed in claim 1, wherein when the NF operation commences, the microservice (302) is adapted to store information related to the NF operation.

5. The method (900) as claimed in claim 1, wherein the at least one NF operation includes at least one of, NF instantiation, NF termination, NF scaling and NF deletion utilizing the interface (214).

6. The method (900) as claimed in claim 4, wherein the NF instantiation includes the steps of:
transmitting, by the processor (406), a request to a Policy Execution Engine (PEEGN) (206) to check availability of at least one NF policy and reserve resources at the PEEGN (206);
if determined availability of the NF policy and reserve resources at the PEEGN (206), transmitting, by the processor (406), a reservation request to the microservice (302) to reserve resources; and
requesting, by the processor (406), the microservice (306) to instantiate NF over the interface (214).

7. The method (900) as claimed in claim 6, wherein the NF policy includes at least one of, a NF Initialization (INIT) policy.

8. The method (900) as claimed in claim 6, wherein the reserved resources include the resources which are consumed during the NF instantiation, wherein the resources include at least one of, a memory (402), a processor (406), and a network (105).

9. The method (900) as claimed in claim 6, wherein the NF instantiation status includes information pertaining to the completion or incompletion of the NF instantiation.

10. The method (900) as claimed in claim 4, wherein the NF termination includes the steps of:
transmitting, by the processor (406), a NF termination request to the microservice (306);
receiving, by the processor (406), a response from the microservice (306) subsequent to performing a termination of all running instances of the NFs; and
transmitting, by the processor (406), an inventory management request to the microservice (302) upon checking status of all the instances of the NFs.

11. The method (900) as claimed in claim 4, wherein the NF scaling includes the steps of:
receiving, by the processor (406), a NF scaling request from a Policy Execution Engine (PEEGN) (206) to instantiate a NF instance;
requesting, by the processor (406), the microservice (306) to instantiate the NF instance via the interface (214);
requesting, by the processor (406), for updating inventory at the microservice (302) pertaining to resources in use and reserved based on a NF instantiation response received from the microservice (306), wherein the response comprises NF instantiation status; and
based on the NF instantiation response received from the microservice (306) over the interface (214), transmitting, by the processor (406), a request to the microservice (302) for inventory management based on the NF instantiation status.

12. The method (900) as claimed in claim 1, wherein the processor (406) is configured to enable an async event-based implementation to manage the interface to function in a high availability mode in order to engage a next available CNFLM instance when a current CNFLM instance is down.

13. The method (900) as claimed in claim 1, wherein the user requests the NF operation from a user interface layer (202) of a User Equipment (UE) (110).

14. The method (900) as claimed in claim 12, wherein the async event-based implementation enabled by the CNFLM (204) ensures that one or more long running tasks are simultaneously accommodated while running one or more short running tasks.

15. The method (900) as claimed in claim 1, wherein the interface (214) enables orchestration of the NF operation by:
receiving, instructions from the CNFLM (204) to execute the at least one NF operation; and
forwarding, the instructions from the CNFLM (204) to the microservice (306) to execute the at least one NF operation.

16. A system (120) for managing a Network Function (NF) operation, the system (120) comprising:
a connecting module (408) configured to, establish, an interface (214) with a microservice (306), the interface (214) enabling orchestration of the NF operation;
a NF operation module (410) configured to, request, the microservice (306) to execute at least one NF operation via the interface (214) based on a received user request; and
a transceiver (414) configured to, transmit, to a microservice (302), an inventory management request to manage inventory pertaining to resources at a database (212) based on a response received from the microservice (306) pertaining to completion of execution of the at least one NF operation.

17. The system (120) as claimed in claim 16, wherein the microservice (306) is configured to interact with the CNFLM (204) to spawn appropriate instances of Network Functions (NFs).

18. The system (120) as claimed in claim 16, wherein when the NF operation commences, the microservice (302) is adapted to store information related to the NF operation.

19. The system (120) as claimed in claim 16, wherein the at least one NF operation includes at least one of, NF instantiation, NF termination, NF scaling and NF deletion utilizing the interface.

20. The system (120) as claimed in claim 18, wherein the NF instantiation is performed by:
a transceiver (414) configured to, transmit, a request to a Policy Execution Engine (PEEGN) (206) to check availability of at least one NF policy and reserve resources at the PEEGN;
if determined availability of the NF policy and reserve resources at the PEEGN, the transceiver (414) configured to, transmit, a reservation request to the microservice (302) to reserve resources;
a NF operation module (410) configured to, request, the microservice (306) to instantiate NF over the interface (214).

21. The system (120) as claimed in claim 20, wherein the NF policy includes at least one of, a NF Initialization (INIT) policy.

22. The system (120) as claimed in claim 20, wherein the reserved resources include the resources which are consumed during the NF instantiation, wherein the resources include at least one of, a memory (402), a processor (406), and a network (105).

23. The system (120) as claimed in claim 20, wherein the NF instantiation status includes information pertaining to the completion or incompletion of the NF instantiation.

24. The system (120) as claimed in claim 18, wherein the NF termination is performed by:
a transceiver (414) configured to, transmit, a NF termination request to the microservice (306);
the transceiver (414) configured to, receive, a response from the microservice (306) subsequent to performing a termination of all running instances of NFs; and
the transceiver (414) configured to, transmit, an inventory management request to the microservice (302) upon checking status of all the instances of the NFs.

25. The system (120) as claimed in claim 18, wherein the NF scaling is performed by:
a transceiver (414) configured to receive, a NF scaling request from a Policy Execution Engine (PEEGN) (206) to instantiate a NF instance;
a NF operation module (410) configured to, request, the microservice (306) to instantiate the NF instance via the interface (214);
an updating module (412) configured to, request, for updating inventory at the microservice (302) pertaining to resources in use and reserved based on a NF instantiation response received from the microservice (306), wherein the response comprises NF instantiation status; and
based on the NF instantiation response received from the microservice (306) over the interface (214), the transceiver (414) configured to transmit a request to the microservice (302) for inventory management based on the NF instantiation status.

26. The system (120) as claimed in claim 16, wherein the system (120) is configured to enable an async event-based implementation to manage the interface (214) to function in a high availability mode in order to engage a next available CNF Life Cycle Manager (CNFLM) instance when a current CNFLM instance is down.

27. The system (120) as claimed in claim 16, wherein the user requests the NF operation from a user interface layer (202) of a User Equipment (UE) (110).

28. The system (120) as claimed in claim 26, wherein the async event-based implementation enabled by the processor (406) ensures that one or more long running tasks are simultaneously accommodated while running one or more short running tasks.

29. The system (120) as claimed in claim 16, wherein the interface (214) enables orchestration of the NF operation by:
a transceiver (414) configured to, receive, instructions from the processor (406) to execute the at least one NF operation;
the transceiver (414) configured to, forward, the instructions from the processor to the microservice (306) to execute the at least one NF operation.

30. A User Equipment (UE) (110), comprising:
one or more primary processors (502) communicatively coupled to a processor (406), the one or more primary processors (502) coupled with a memory unit (504), wherein said memory unit (504) stores instructions which when executed by the one or more primary processors (502) cause the UE (110) to:
transmit, a user request to the processor (406) pertaining to initiating a Network Function (NF) operation, and
wherein the processor (406) is configured to perform the steps as claimed in claim 1.

Documents

Application Documents

# Name Date
1 202321062049-STATEMENT OF UNDERTAKING (FORM 3) [14-09-2023(online)].pdf 2023-09-14
2 202321062049-PROVISIONAL SPECIFICATION [14-09-2023(online)].pdf 2023-09-14
3 202321062049-POWER OF AUTHORITY [14-09-2023(online)].pdf 2023-09-14
4 202321062049-FORM 1 [14-09-2023(online)].pdf 2023-09-14
5 202321062049-FIGURE OF ABSTRACT [14-09-2023(online)].pdf 2023-09-14
6 202321062049-DRAWINGS [14-09-2023(online)].pdf 2023-09-14
7 202321062049-DECLARATION OF INVENTORSHIP (FORM 5) [14-09-2023(online)].pdf 2023-09-14
8 202321062049-FORM-26 [27-11-2023(online)].pdf 2023-11-27
9 202321062049-Proof of Right [12-02-2024(online)].pdf 2024-02-12
10 202321062049-DRAWING [16-09-2024(online)].pdf 2024-09-16
11 202321062049-COMPLETE SPECIFICATION [16-09-2024(online)].pdf 2024-09-16
12 Abstract.jpg 2024-10-16
13 202321062049-Power of Attorney [24-01-2025(online)].pdf 2025-01-24
14 202321062049-Form 1 (Submitted on date of filing) [24-01-2025(online)].pdf 2025-01-24
15 202321062049-Covering Letter [24-01-2025(online)].pdf 2025-01-24
16 202321062049-CERTIFIED COPIES TRANSMISSION TO IB [24-01-2025(online)].pdf 2025-01-24
17 202321062049-FORM 3 [29-01-2025(online)].pdf 2025-01-29