ABSTRACT
AN INTERFACE BETWEEN NSMP AND MANO ARCHITECTURE IN A VIRTUALIZED NETWORK
The present disclosure provides a method (700) and a system (102) for communicating between a Network Slice Management Platform (NSMP) (210) and a Management and Orchestration (MANO) function (214). The system (102) comprises an Event Routing Manager (ERM) (212). The Event Routing Manager (ERM) (212) is configured to receive a service request from the NSMP (210) over a first interface. Further, the ERM (212) is configured to route the service request to a target microservice of the MANO function (214) over a second interface for request fulfilment. Further, the ERM (212) is configured to receive a response from the target microservice of the MANO function (214) over the second interface, wherein the response is an acknowledgement of the service request. Further, the ERM (212) is configured to forward the response to the NSMP (210) over the first interface. Ref. Fig. 6
FORM 2
THE PATENTS ACT, 1970
(39 of 1970)
&
The Patent Rules, 2003
COMPLETE SPECIFICATION
(See section 10 & rule 13)
1. TITLE OF THE INVENTION
AN INTERFACE BETWEEN NSMP AND MANO ARCHITECTURE IN A VIRTUALIZED NETWORK
2. APPLICANT (S)
NAME NATIONALITY ADDRESS
JIO PLATFORMS LIMITED IN Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi,
Ahmedabad - 380006, Gujarat, India.
3. PREAMBLE TO THE DESCRIPTION
The following specification particularly describes the invention and the manner in which it is to be performed.
RESERVATION OF RIGHTS
[0001] A portion of the disclosure of this patent document contains material which is subject to intellectual property rights such as, but not limited to, copyright, design, trademark, Integrated Circuit (IC) layout design, and/or trade dress protection, belonging to JIO PLATFORMS LIMITED or its affiliates (hereinafter referred to as the owner). The owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure, as it appears in the Patent and Trademark Office patent files or records, but otherwise reserves all rights whatsoever. All rights to such intellectual property are fully reserved by the owner.
FIELD OF THE DISCLOSURE
[0002] The embodiments of the present disclosure generally relate to event routing in wireless communication systems. In particular, the present disclosure relates to providing an interface between a Network Slice Management Platform (NSMP) and a Management and Orchestration (MANO) architecture for resource provision.
DEFINITION
[0003] As used in the present disclosure, the following terms are generally intended to have the meaning as set forth below, except to the extent that the context in which they are used indicates otherwise.
[0004] The expression ‘Network Slice Management Platform (NSMP)’ used hereinafter in the specification refers to a component that is responsible for creating, configuring, monitoring, and managing network slice instances dynamically in a virtualized network environment.
[0005] Management and Orchestration (MANO) refers to a framework that manages and orchestrates virtualized network functions and resources in a Network Functions Virtualization environment.
[0006] Event Routing Manager (ERM) refers to an intermediary component that facilitates communication between the NSMP and MANO functions, routing service requests, and responses for network slice management.
[0007] Network Functions Virtualization (NFV) refers to a network architecture concept that uses virtualization technologies to manage core networking functions via software rather than hardware.
[0008] Virtualized Network Function (VNF) refers to a software implementation of a network function that runs on NFV infrastructure and can be deployed on a virtual machine.
[0009] Network Functions Virtualization Orchestrator (NFVO) refers to a component of the MANO framework responsible for the orchestration and lifecycle management of physical and software resources.
[0010] Virtual Network Function Manager (VNFM) refers to a component of the MANO framework responsible for the lifecycle management of VNF instances. The key functions of the VNFM include VNF Configuration Management, which involves overseeing the configuration parameters of both VNFs and Virtual Network Function Components (VNFCs) to ensure optimal performance. The VNFM also performs VNF Information Management by monitoring changes in VNF-related indicators, providing insights into the operational health of the functions. Additionally, the VNFM plays a vital role in VNF Performance Management (PM) by tracking performance metrics to ensure compliance with service-level agreements. The VNFM is configured to employ VNF Fault Management (FM), which includes identifying, isolating, and resolving faults within VNF instances to maintain service reliability and continuity.
[0011] Virtualized Infrastructure Manager (VIM) refers to a component of the MANO framework responsible for managing the overlay of virtual resources over physical hardware resources. The VIM enables the abstraction and pooling of compute, storage, and networking resources, facilitating the deployment and orchestration of virtual machines and services. By effectively managing these virtual resources, the VIM allows for dynamic allocation, scaling, and optimization based on demand, ensuring efficient utilization of underlying hardware while providing a flexible environment for various applications and services. This layer of management is essential for supporting cloud environments and virtualized infrastructures.
[0012] Containerized Network Function (CNF) refers to a network function designed and implemented for the Cloud Native environment, packaged as a container.
[0013] NFV Infrastructure (NFVI) refers to the totality of hardware and software components that build the environment in which VNFs are deployed, managed, and executed.
[0014] The expression ‘event’ used hereinafter in the specification refers to a specific action that can trigger a network element or a system to take a particular action. In an example, the event may include service request, network traffic, system configuration changes, or security incidents and the like.
BACKGROUND OF THE DISCLOSURE
[0015] The following description of related art is intended to provide background information pertaining to the field of the disclosure. This section may include certain aspects of the art that may be related to various features of the present disclosure. However, it should be appreciated that this section should be used only to enhance the understanding of the reader with respect to the present disclosure, and not as an admission of prior art.
[0016] In telecommunications, NFV (Network Functions Virtualization) involves virtualizing network functions, making them software-based and deployable on standard IT hardware. NFV technology (800) may include key components, as shown in FIG. 8, such as the Virtualized Network Function (VNF, 820), Element Management System (EMS, 820-1), NFV Infrastructure (NFVI, 830), Virtualized Infrastructure Manager (VIM, 840-3), NFV Orchestrator (NFVO, 840-1), VNF Managers (VNFM, 840-2), and Operations and Business Support System (OSS/BSS, 810). The OSS/BSS (810) is an integrated system for telecom operators that covers network and system management, accounting, customer service, and more. The VNF (820) is a virtualized version of network functions corresponding to physical functions in traditional networks and consists of multiple components running on virtual machines. The VNF (820) comprises EMSs (820-1) and a PNF (820-3). The EMS (820-1) manages VNFs (820-2) and various functions like fault management and configuration. Unlike virtualized functions, the PNF (820-3) refers to a traditional, physical network component. The NFVI (830) includes hardware and virtual resources managed by a virtualization layer and provides resources for VNFs. The NFVI (830) comprises a virtual computing resource (830-1), virtual network resource (830-2), virtual storage resource (830-3), virtualization layer (830-4), and hardware resources (830-5) (like computing (830-6), network (830-7), and storage (830-8)).
[0017] On the other hand, the VIM (840-3) manages virtual computing, storage, and network resources. The NFVO (840-1) orchestrates NFV resources and thus creates service topologies. The VNF Managers (VNFM, 840-2) manage VNF instances' lifecycle. Additionally, NFV also involves creating network services using combinations of VNFs (820), often represented by a VNF Forwarding Graph (VNFFG). The NFVO (840-1), VNFM (840-2), and VIM (840-3) together form the NFV Management and Orchestration (MANO) architecture (or MANO function), which is crucial for orchestrating and managing virtualized network functions.
[0018] Network slicing is one key concept in 5G networks that allows the creation of multiple virtual networks (known as slices) on a shared physical infrastructure. Each network slice can be customized to fulfill specific user traffic requirements, such as low latency, high bandwidth, or specific security measures catering to various applications and services. The Network Slice Management Platform (NSMP) is a system or platform designed to manage these network slice instances efficiently. The NSMP provides functionalities to create, configure, monitor, and manage network slice instances dynamically. The NSMP enables service providers to allocate network resources, ensure Quality of Service (QoS), and optimize network performance based on the requirements of different applications or services using network slicing. The NSMP is essential for managing the complexity of modern networks, enabling efficient resource utilization, ensuring diverse application requirements are met, and supporting the rapid deployment of services in a secure and scalable manner.
[0019] The state-of-the-art techniques to connect the NSMP and the MANO function rely on peer-to-peer dedicated connectivity for meeting the requests initiated by several users or the OSS/BSS. However, the state-of-the-art techniques are typically characterized by rigid interfaces that are tailored to specific microservices within the MANO function, leading to a fragmented and inefficient integration process. Therefore, several interfaces are required to provide connectivity according to the network's requirements, yielding a complex and costly solution. As traffic demand continues to surge, driven by the proliferation of IoT devices, enhanced multimedia applications, and growing user expectations, existing interfaces struggle to provide the necessary scalability and flexibility.
[0020] Thus, conventional systems and methods face difficulty in event routing in wireless communication systems. Therefore, there is a need for building an interface configured to provide connectivity between the NSMP and the MANO function and efficiently serve the ever-rising traffic demand.
SUMMARY OF THE DISCLOSURE
[0021] In an exemplary embodiment, a method for communicating between a Network Slice Management Platform (NSMP) and a Management and Orchestration (MANO) function via an Event Routing Manager (ERM) is described. The method includes receiving, by the ERM, a service request from the NSMP over a first interface. The method includes analyzing, by the ERM, the received service request to determine one or more target network services associated with the MANO function for routing the service request. The method includes routing, by the ERM, the service request to the one or more target network services associated with the MANO function over one or more different interfaces for request fulfillment. The method includes receiving, by the ERM, at least one response from each of the one or more target network services associated with the MANO function over the one or more different interfaces, wherein the response is an acknowledgement of the service request. The method includes forwarding, by the ERM, the response to the NSMP over the first interface.
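The receive–analyze–route–respond flow of the method above can be sketched as follows. This is an illustrative sketch only: the function names, the routing table, and the stub microservices are hypothetical assumptions, and the disclosure does not prescribe any particular API or mapping.

```python
# Illustrative sketch of the ERM request flow described in [0021].
# All names and the request-to-service mapping are hypothetical.

def analyze_request(request):
    """Determine the MANO target network services for a service request."""
    routing_table = {
        "provision_resource": ["NFVO"],
        "create_resource": ["NFVO", "VIM"],
        "initialize_resource": ["VNFM"],
    }
    return routing_table.get(request["type"], [])

def handle_service_request(request, mano_services):
    """Receive a request over the first interface, route it to the target
    service(s) over the other interface(s), and collect acknowledgements."""
    targets = analyze_request(request)
    responses = []
    for target in targets:
        service = mano_services[target]
        responses.append(service(request))   # call over an EM_MS-like interface
    return responses                         # forwarded back to the NSMP

# Stub MANO microservices that return positive acknowledgements.
mano = {
    "NFVO": lambda req: {"from": "NFVO", "ack": "positive"},
    "VIM": lambda req: {"from": "VIM", "ack": "positive"},
    "VNFM": lambda req: {"from": "VNFM", "ack": "positive"},
}

acks = handle_service_request({"type": "create_resource"}, mano)
```

In this sketch a single request may fan out to more than one target service, matching the "one or more target network services" language of the method.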
[0022] In some embodiments, the method further comprises determining, by the ERM, whether the received service request is valid by checking if the service request exists within a memory of the ERM. The ERM discards the received service request when the service request does not exist in the memory of the ERM. The service request includes at least one of a provision resource request, a create resource request, and an initialize resource request.
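The validity check described above can be sketched as follows, with a set standing in for the ERM's memory of recognized request types. The set contents and function name are illustrative assumptions.

```python
# Hypothetical sketch of the validity check in [0022]: a request is kept
# only if its type exists in the ERM's memory of supported requests.

VALID_REQUEST_TYPES = {"provision_resource", "create_resource", "initialize_resource"}

def validate_or_discard(request):
    """Return the request if recognized; otherwise discard it (return None)."""
    if request.get("type") in VALID_REQUEST_TYPES:
        return request
    return None

kept = validate_or_discard({"type": "provision_resource"})
dropped = validate_or_discard({"type": "unsupported_action"})
```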
[0023] In some embodiments, the one or more target network services associated with the MANO function are provided by at least one of a Network Functions Virtualization Orchestrator (NFVO), a Virtual Network Function Manager (VNFM), and a Virtualized Infrastructure Manager (VIM).
[0024] In some embodiments, the acknowledgement is one of a positive acknowledgement indicating successful fulfillment of the service request, or a negative acknowledgement indicating failure to fulfill the service request.
[0025] In some embodiments, the method further comprises generating, by the ERM, an update signal indicating a status of the processed service request. The ERM communicates the update signal to the NSMP via the first interface, providing real-time updates on the request status.
[0026] In some embodiments, the first interface and each of the one or more different interfaces are bi-directional interfaces, enabling an asynchronous event-based communication between the NSMP, the ERM, and the MANO function. The first interface is an NSMP_EM interface, and each of the one or more different interfaces is an EM_MS interface.
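The asynchronous, bi-directional character of the NSMP_EM and EM_MS interfaces can be modeled with message queues, as in the sketch below. The queue pairs and coroutine names are illustrative stand-ins for the interfaces; they are not part of the disclosed architecture.

```python
import asyncio

# Sketch of the asynchronous event-based communication in [0026]:
# two pairs of queues stand in for the bi-directional NSMP_EM and
# EM_MS interfaces. All names are illustrative.

async def erm(nsmp_em_in, nsmp_em_out, em_ms_in, em_ms_out):
    request = await nsmp_em_in.get()      # NSMP -> ERM over NSMP_EM
    await em_ms_out.put(request)          # ERM -> MANO over EM_MS
    response = await em_ms_in.get()       # MANO -> ERM over EM_MS
    await nsmp_em_out.put(response)       # ERM -> NSMP over NSMP_EM

async def mano_service(em_ms_out, em_ms_in):
    request = await em_ms_out.get()
    await em_ms_in.put({"ack": "positive", "request": request})

async def main():
    nsmp_em_in, nsmp_em_out = asyncio.Queue(), asyncio.Queue()
    em_ms_in, em_ms_out = asyncio.Queue(), asyncio.Queue()
    await nsmp_em_in.put({"type": "provision_resource"})
    await asyncio.gather(
        erm(nsmp_em_in, nsmp_em_out, em_ms_in, em_ms_out),
        mano_service(em_ms_out, em_ms_in),
    )
    return await nsmp_em_out.get()        # response as seen by the NSMP

response = asyncio.run(main())
```

Because both directions are modeled as separate queues, neither side blocks the other, mirroring the duplex, event-driven behavior described above.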
[0027] In some embodiments, the method further comprises storing, by the ERM, the received service request in a database associated with the MANO function, enabling tracking and management of the service requests. The method further comprises maintaining, by the ERM, a log of the received service requests and their corresponding responses in the database.
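The request-and-response log described above can be sketched with an in-memory store standing in for the database associated with the MANO function. The class and field names are illustrative assumptions.

```python
# Minimal sketch of the request/response log kept by the ERM ([0027]).
# A dict stands in for the database; the schema is hypothetical.

class RequestLog:
    def __init__(self):
        self._entries = {}

    def record_request(self, request_id, request):
        """Store an incoming service request for tracking and management."""
        self._entries[request_id] = {"request": request, "response": None}

    def record_response(self, request_id, response):
        """Attach the corresponding response to a logged request."""
        self._entries[request_id]["response"] = response

    def lookup(self, request_id):
        return self._entries.get(request_id)

log = RequestLog()
log.record_request("req-1", {"type": "create_resource"})
log.record_response("req-1", {"ack": "positive"})
entry = log.lookup("req-1")
```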
[0028] In some embodiments, the method further comprises implementing, by the ERM, a load balancing mechanism to distribute service requests across multiple instances of the MANO function network services.
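One way to realize the load-balancing mechanism above is a round-robin scheduler over service instances, sketched below. Round robin is only one possible policy, chosen here for illustration; the disclosure does not fix a specific algorithm, and the instance names are hypothetical.

```python
import itertools

# Sketch of the load-balancing mechanism in [0028]: round-robin
# distribution of service requests across multiple instances of a
# MANO function network service. Names are illustrative.

def make_round_robin_balancer(instances):
    cycle = itertools.cycle(instances)
    def dispatch(request):
        instance = next(cycle)        # pick the next instance in rotation
        return instance, request
    return dispatch

dispatch = make_round_robin_balancer(["nfvo-1", "nfvo-2", "nfvo-3"])
assignments = [dispatch({"id": i})[0] for i in range(6)]
```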
[0029] Another exemplary embodiment describes a system for communicating between a Network Slice Management Platform (NSMP) and a Management and Orchestration (MANO) function. The system comprises an Event Routing Manager (ERM). The ERM includes a transceiver unit, a memory, and one or more processor(s). The transceiver unit is configured to receive a service request from the NSMP over a first interface. The one or more processor(s) are coupled with the transceiver unit to receive the service request and are further coupled with the memory to execute a set of instructions stored in the memory. The one or more processor(s) are configured to analyze the received service request to determine one or more target network services associated with the MANO function for routing the service request. The one or more processor(s) are configured to route the service request to the one or more determined target network services associated with the MANO function over one or more different interfaces for request fulfillment. The one or more processor(s) are configured to receive at least one response from each of the one or more target network services associated with the MANO function over the one or more different interfaces, wherein the response is an acknowledgement of the service request. The one or more processor(s) are configured to forward the response to the NSMP over the first interface.
[0030] In some embodiments, the system is further configured to generate, by the ERM, an update signal indicating a status of the processed service request. The system, via the ERM, communicates the update signal to the NSMP over the first interface, providing real-time updates on the request status.
[0031] In some embodiments, the system is further configured to store, by the ERM, the received service request in a database associated with the MANO function, enabling tracking and management of service requests. The ERM implements an asynchronous event-based processing model to handle multiple service requests concurrently.
[0032] In some embodiments, the system is further configured to implement, by the ERM, a load-balancing mechanism to distribute service requests across multiple instances of the MANO function network services. The ERM implements security measures to ensure the authenticity and integrity of communications between the NSMP and the MANO function.
[0033] The foregoing general description of the illustrative embodiments and the following detailed description thereof are merely exemplary aspects of the teachings of this disclosure and are not restrictive.
OBJECTIVES OF THE DISCLOSURE
[0034] Some of the objectives of the present disclosure, which at least one embodiment herein satisfies are as listed herein below:
[0035] An objective of the present disclosure is to provide a method and system for efficient communication between a Network Slice Management Platform (NSMP) and a Management and Orchestration (MANO) function via an Event Routing Manager (ERM).
[0036] An objective of the present disclosure is to enable the ERM to receive, route, and forward service requests and responses between the NSMP and the MANO function over dedicated interfaces.
[0037] An objective of the present disclosure is to implement a validation mechanism within the ERM to ensure the integrity of received service requests.
[0038] An objective of the present disclosure is to provide intelligent routing of service requests to appropriate target microservices within the MANO function based on the type of resource allocation required.
[0039] An objective of the present disclosure is to enable real-time status updates and communication between the NSMP and the MANO function through the ERM.
[0040] An objective of the present disclosure is to implement fault-tolerant and high-availability features in the ERM to ensure continuous operation of the network slice management process.
[0041] An objective of the present disclosure is to provide an asynchronous event-based processing model in the ERM to handle multiple service requests concurrently.
[0042] An objective of the present disclosure is to implement load balancing and security measures in the ERM to ensure efficient and secure communication between the NSMP and the MANO function.
BRIEF DESCRIPTION OF DRAWINGS
[0043] The accompanying drawings, which are incorporated herein, and constitute a part of this disclosure, illustrate exemplary embodiments of the disclosed methods and systems in which like reference numerals refer to the same parts throughout the different drawings. Components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present disclosure. Some drawings may indicate the components using block diagrams and may not represent the internal circuitry of each component. It will be appreciated by those skilled in the art that disclosure of such drawings includes the disclosure of electrical components, electronic components or circuitry commonly used to implement such components.
[0044] FIG. 1 illustrates an exemplary network architecture implementing a system for communicating between a Network Slice Management Platform (NSMP) and a Management and Orchestration (MANO) function via an Event Routing Manager (ERM), in accordance with embodiments of the present disclosure.
[0045] FIG. 2 illustrates a block diagram of the system, in accordance with embodiments of the present disclosure.
[0046] FIG. 3 illustrates a MANO framework architecture, in accordance with embodiments of the present disclosure.
[0047] FIG. 4 illustrates a flow chart representing a method of processing the service request between the NSMP and the MANO function via the ERM, in accordance with the present disclosure.
[0048] FIG. 5 illustrates a flow diagram describing service request processing, in accordance with embodiments of the present disclosure.
[0049] FIG. 6 illustrates a block diagram depicting interfaces between the NSMP, the ERM and the MANO function, in accordance with embodiments of the present disclosure.
[0050] FIG. 7 illustrates a method for processing service request between the NSMP and the MANO function via the ERM, in accordance with embodiments of the present disclosure.
[0051] FIG. 8 illustrates a conventional Network Functions Virtualization (NFV) end-to-end architecture that forms a traditional MANO function.
[0052] FIG. 9 illustrates a computer system in which or with which the embodiments of the present disclosure may be implemented.
[0053] The foregoing shall be more apparent from the following more detailed description of the disclosure.
LIST OF REFERENCE NUMERALS
100 – Network architecture
102 – System
106 – Centralized server
108-1, 108-2…108-N – User equipment
110-1, 110-2…110-N – Users
202 – Transceiver Unit
204 – Memory
206 – One or more processor(s)
208 – Plurality of interfaces
210 - Network Slice Management Platform (NSMP)
212 - Event Routing Manager (ERM)
214 - Management and Orchestration (MANO) function
216 – Other module(s)
218 - Database
300 - MANO framework architecture
302 - User interface layer
304 - NFV and SDN design function
306 - Platform foundation service
308 – Platform core service
310 - Administration and maintenance manager
312 – Platform resource adapters and utilities
610-1, 610-2…610-n - Network elements
800 - NFV technology
810 - OSS/BSS
820 - VNF
820-1 - EMS
820-2 - VNFs
820-3 - PNF
830 - NFVI
830-1 - Virtual computing resource
830-2 - Virtual network resource
830-3 - Virtual storage resource
830-4 - Virtualization layer
830-5 - Hardware resources
830-6 - Computing
830-7 - Network
830-8 - Storage
840 - MANO
840-1 - NFVO
840-2 - VNFM
840-3 – VIM
910 – External Storage Device
920 – Bus
930 – Main Memory
940 – Read Only Memory
950 – Mass Storage Device
960 – Communication Port
970 – Processor
DETAILED DESCRIPTION OF THE DISCLOSURE
[0054] In the following description, for the purposes of explanation, various specific details are set forth in order to provide a thorough understanding of embodiments of the present disclosure. It will be apparent, however, that embodiments of the present disclosure may be practiced without these specific details. Several features described hereafter can each be used independently of one another or with any combination of other features. An individual feature may not address all of the problems discussed above or might address only some of the problems discussed above. Some of the problems discussed above might not be fully addressed by any of the features described herein.
[0055] The ensuing description provides exemplary embodiments only, and is not intended to limit the scope, applicability, or configuration of the disclosure. Rather, the ensuing description of the exemplary embodiments will provide those skilled in the art with an enabling description for implementing an exemplary embodiment. It should be understood that various changes may be made in the function and arrangement of elements without departing from the spirit and scope of the disclosure as set forth.
[0056] Specific details are given in the following description to provide a thorough understanding of the embodiments. However, it will be understood by one of ordinary skill in the art that the embodiments may be practiced without these specific details. For example, circuits, systems, networks, processes, and other components may be shown as components in block diagram form in order not to obscure the embodiments in unnecessary detail. In other instances, well-known circuits, processes, algorithms, structures, and techniques may be shown without unnecessary detail in order to avoid obscuring the embodiments.
[0057] Also, it is noted that individual embodiments may be described as a process which is depicted as a flowchart, a flow diagram, a data flow diagram, a structure diagram, or a block diagram. Although a flowchart may describe the operations as a sequential process, many of the operations can be performed in parallel or concurrently. In addition, the order of the operations may be re-arranged. A process is terminated when its operations are completed but could have additional steps not included in a figure. A process may correspond to a method, a function, a procedure, a subroutine, a subprogram, etc. When a process corresponds to a function, its termination can correspond to a return of the function to the calling function or the main function.
[0058] The word “exemplary” and/or “demonstrative” is used herein to mean serving as an example, instance, or illustration. For the avoidance of doubt, the subject matter disclosed herein is not limited by such examples. In addition, any aspect or design described herein as “exemplary” and/or “demonstrative” is not necessarily to be construed as preferred or advantageous over other aspects or designs, nor is it meant to preclude equivalent exemplary structures and techniques known to those of ordinary skill in the art. Furthermore, to the extent that the terms “includes,” “has,” “contains,” and other similar words are used in either the detailed description or the claims, such terms are intended to be inclusive in a manner similar to the term “comprising” as an open transition word without precluding any additional or other elements.
[0059] Reference throughout this specification to “one embodiment” or “an embodiment” or “an instance” or “one instance” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present disclosure. Thus, the appearances of the phrases “in one embodiment” or “in an embodiment” in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
[0060] The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. As used herein, the singular forms “a”, “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. As used herein, the term “and/or” includes any and all combinations of one or more of the associated listed items. As portable electronic devices and wireless technologies continue to improve and grow in popularity, the advancing wireless technologies for data transfer are also expected to evolve and replace the older generations of technologies. In the field of wireless data communications, the dynamic advancement of various generations of cellular technology is also seen. The development, in this respect, has been incremental in the order of second generation (2G), third generation (3G), fourth generation (4G), fifth generation (5G), and now sixth generation (6G), and more such generations are expected to continue in the forthcoming time.
[0061] Radio Access Technology (RAT) refers to the technology used by mobile devices/user equipment (UE) to connect to a cellular network. RAT refers to the specific protocols and standards that govern the way devices communicate with base stations, which are responsible for providing the wireless connection. Further, each RAT has its own set of protocols and standards for communication, which define the frequency bands, modulation techniques, and other parameters used for transmitting and receiving data. Examples of RATs include GSM (Global System for Mobile Communications), CDMA (Code Division Multiple Access), UMTS (Universal Mobile Telecommunications System), LTE (Long-Term Evolution), and 5G. The choice of RAT depends on a variety of factors, including the network infrastructure, the available spectrum, and the mobile device's capabilities. Mobile devices often support multiple RATs, allowing them to connect to different types of networks and provide optimal performance based on the available network resources.
[0062] Network slicing is an advanced network architecture that allows operators to create multiple virtual networks on a shared physical infrastructure, each tailored to specific requirements like bandwidth, latency, security, and service level agreements (SLAs). It leverages virtualization technologies such as Network Functions Virtualization (NFV) and Software-Defined Networking (SDN) to create isolated slices that can be managed independently. Each network slice instance can be tailored to meet specific service requirements, such as ultra-low latency for autonomous vehicles, high bandwidth for video streaming, or massive connectivity for IoT devices. This flexibility enables operators to optimize resource utilization and provide customized services to different vertical industries. However, the implementation of network slicing introduces significant complexity in network management. Each slice instance needs to be created, configured, monitored, and managed independently, while still ensuring optimal performance of the overall network. This is where the Network Slice Management Platform (NSMP) plays a crucial role. The NSMP is responsible for the end-to-end management of network slice instances, from their creation to their termination.
[0063] As wireless technologies advance, there is a need to cope with 5G and 6G requirements and deliver a high level of service to customers. Thus, faster communication between elements of a 5G communication network is becoming more crucial day by day. Business systems are gradually migrating to cloud platforms and starting to provide business services for users through virtual hosts. With a gradually increasing demand from users regarding cost reduction and business configuration flexibility, network functions virtualization (NFV) is extensively employed. NFV replaces network appliance hardware with virtual machines (VMs) or containers on physical hosts, and its use for providing various applications is becoming more and more extensive.
[0064] NFV architecture relies on server virtualization technologies to provide the VMs necessary to host the network functions. Virtualization makes it possible to spin up resources as needed to meet the demands of fluctuating and evolving workloads, while also taking advantage of the cost savings that come with commercial off-the-shelf (COTS) hardware. NFV also includes containers to host networking operations. NFV architecture is built on three components: virtualized network functions (VNFs), the NFV infrastructure (NFVI) and an administrative framework that handles management and orchestration architecture (MANO).
[0065] VNFs are software applications that run in virtual machines (VMs) and carry out specific networking tasks, such as routing or load balancing. An individual VNF can span multiple VMs, and network administrators may couple multiple VNFs to deliver broader network services. The NFVI component provides an underlying structure to host the VMs and run the VNF applications. The infrastructure includes the physical computing, storage, and network resources, as well as a hypervisor-based virtualization layer that abstracts the resources and makes them available to the VNFs. The MANO handles all VNF-related tasks, such as chaining, connectivity, and lifecycle management. The MANO is also responsible for managing, monitoring, and optimizing NFVI hardware and virtual resources. The MANO is focused on specifying operations at each interface. The MANO includes three essential components: a virtualized infrastructure manager (VIM), a VNF manager (VNFM), and an NFV orchestrator (NFVO). It is required that an event (demand for a specific service) raised by the user (or network entity) should be handled by the MANO effectively. It is required that these components are able to communicate with each other in a flexible manner.
[0066] The NSMP interacts with various network infrastructure components, including a MANO (Management and Orchestration) framework. The MANO framework is responsible for the management and orchestration of virtualized network functions and resources. Effective communication between the NSMP and MANO is crucial for successfully implementing network slicing. The NSMP interacts with the MANO framework through various modes, facilitating effective communication and coordination. In an example, the modes include RESTful (Representational State Transfer) APIs (Application Programming Interfaces), service descriptors, endpoints for standardized data exchange, message bus systems, and event-driven protocols that trigger actions based on network events. The service descriptors define network service specifications, enabling the NSMP to communicate requirements to the MANO for resource orchestration. The existing interfaces are often designed for specific microservices within the MANO framework. This specificity leads to a proliferation of interfaces, each catering to a particular function or service. As a result, network operators are forced to manage multiple interfaces, increasing the complexity of network management and the potential for errors. These interfaces face limitations in the context of dynamic and complex 5G or 6G networks. The rigidity of these interfaces often leads to inefficiencies in resource allocation, delays in service deployment, and challenges in scaling network slices.
[0067] The present disclosure aims to overcome the above-mentioned issue and other existing problems in the field of virtualization by promoting interoperability between the NSMP and the MANO function. The present disclosure is configured to provide an interfacing unit (referred to as an Event Routing Manager or ERM henceforth), which is configured to provide communication between the NSMP and the MANO. The present disclosure brings dedicated interfaces for communicating between the NSMP, the ERM and the MANO functions in a virtualized network, namely EM_MS and NSMP_EM. Both the aforementioned interfaces may be bi-directional or duplex in nature. The interface NSMP_EM carries the service request (or event) initiated due to user traffic to the ERM. It is understood that the service request corresponds to network slice provisioning or a request to manage a network slice for the user and the like. The interface EM_MS then forwards the aforesaid service request (or event) to the MANO function for request fulfilment. In an aspect, the network slice refers to a specific allocation of bandwidth or network resources tailored to meet the requirements of a particular service or application. The MANO function then searches for a suitable network service (microservice) dedicated to the network slice provisioning. Depending on the availability or load on the MANO function, the request may or may not be served. Accordingly, a response is generated by the MANO function, which may be a positive/negative acknowledgement corresponding to the success/failure status of request fulfilment. Such response may be carried over the interface EM_MS to convey to the ERM. The ERM may then forward the response to the NSMP over the interface NSMP_EM. The NSMP may then finally convey the response to the user (operator or network service provider) as requested.
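In an illustrative aspect, the request-response flow described above may be sketched as follows. The class names, field names, and message payloads below are illustrative assumptions only and do not correspond to any standardized API; the sketch merely shows a request travelling from the NSMP to the ERM over the NSMP_EM interface, onward to the MANO function over the EM_MS interface, and the acknowledgement returning along the same path.

```python
from dataclasses import dataclass

# Hypothetical message types for the sketch (not a normative schema).
@dataclass
class ServiceRequest:
    request_id: str
    slice_type: str   # e.g. "eMBB", "mMTC", "URLLC"
    operation: str    # e.g. "provision", "terminate"

@dataclass
class Response:
    request_id: str
    success: bool
    detail: str

class ManoFunction:
    """Stands in for the MANO side of the EM_MS interface."""
    def handle(self, req: ServiceRequest) -> Response:
        # A real MANO function would dispatch to an NFVO/VNFM/VIM microservice
        # and serve or reject the request depending on availability and load.
        return Response(req.request_id, True,
                        f"{req.operation} accepted for {req.slice_type} slice")

class EventRoutingManager:
    """Receives over NSMP_EM, forwards over EM_MS, relays the acknowledgement."""
    def __init__(self, mano: ManoFunction):
        self.mano = mano

    def route(self, req: ServiceRequest) -> Response:
        response = self.mano.handle(req)  # EM_MS leg: forward and await reply
        return response                   # NSMP_EM leg: relay back to the NSMP

erm = EventRoutingManager(ManoFunction())
ack = erm.route(ServiceRequest("r-001", "eMBB", "provision"))
# ack.success is True; ack.detail == "provision accepted for eMBB slice"
```

In this sketch the acknowledgement is returned synchronously; in practice both interfaces may carry asynchronous, event-based traffic, as described later in the disclosure.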
[0068] The various embodiments throughout the disclosure will be explained in more detail with reference to FIGS. 1-9.
[0069] FIG. 1 illustrates a network architecture (100) implementing a system (102) for communicating between a Network Slice Management Platform (NSMP) and a Management and Orchestration (MANO) function via an Event Routing Manager (ERM) (212), in accordance with embodiments of the present disclosure.
[0070] In an embodiment, the system (102) is connected to a network 104, which is further connected to one or more computing devices 108-1, 108-2, … 108-N (collectively referred to as a computing device 108, herein) associated with one or more users 110-1, 110-2, … 110-N (collectively referred to as a user (110), herein). The computing device 108 may be a personal computer, laptop, tablet, wristwatch, or any custom-built computing device integrated within a modern diagnostic machine that can connect to a network as an IoT (Internet of Things) device. In an embodiment, the computing device 108 may also be referred to as a user equipment (UE) or a user device. Accordingly, the terms “computing device” and “User Equipment” may be used interchangeably throughout the disclosure. In an aspect, the user (110) is a network operator or a field engineer. Further, the network 104 can be configured with a centralized server 106 that stores compiled data.
[0071] In an embodiment, the system (102) may receive at least one input data from the user (110) via the at least one computing device (108). In an aspect, the user (110) may be configured to initiate a process of allocating the network slice, through an application interface of a mobile application installed in the computing device 108. For example, the user may be a network operator or a service provider who wants to configure the network slice to serve a plurality of subscribers using the network. The mobile application may be configured to communicate with the network analysis server. In some examples, the mobile application may be a software or a mobile application from an application distribution platform. Examples of application distribution platforms include the App Store for iOS provided by Apple, Inc., Play Store for Android OS provided by Google Inc., and such application distribution platforms. In an embodiment, the computing device 108 may transmit the at least one captured data packet over a point-to-point or point-to-multipoint communication channel or network (104) to the system (102).
[0072] In an exemplary embodiment, the network 104 may include, but not be limited to, at least a portion of one or more networks having one or more nodes that transmit, receive, forward, generate, buffer, store, route, switch, process, or a combination thereof, etc. one or more messages, packets, signals, waves, voltage or current levels, some combination thereof, or so forth. In an exemplary embodiment, the network 104 may include, but not be limited to, a wireless network, a wired network, an internet, an intranet, a public network, a private network, a packet-switched network, a circuit-switched network, an ad hoc network, an infrastructure network, a Public-Switched Telephone Network (PSTN), a cable network, a cellular network, a satellite network, a fiber optic network, or some combination thereof.
[0073] Although FIG. 1 shows exemplary components of the network architecture 100, in other embodiments, the network architecture 100 may include fewer components, different components, differently arranged components, or additional functional components than depicted in FIG. 1. Additionally, or alternatively, one or more components of the network architecture 100 may perform functions described as being performed by one or more other components of the network architecture 100.
[0074] FIG. 2 illustrates a block diagram (200) of the system (102), in accordance with an embodiment of the present disclosure.
[0075] The system (102) includes the NSMP (210), the ERM (212), the MANO function (214), and a database (218). In an exemplary embodiment, the system (102) may include an interface(s) (208), and other module(s) (216) having functions that may include but are not limited to receiving data, processing data, testing, storage, and peripheral functions, such as wireless communication unit for remote operation, audio unit for alerts and the like.
[0076] The NSMP is a component used by the network operators to facilitate the efficient management and delivery of diverse services and applications within the network. The NSMP is configured to support the dynamic and flexible nature of modern telecommunications, enabling the creation, customization, isolation, monitoring, and optimization of the network slices to address the varying demands posed by contemporary applications.
[0077] In an example, the network slice may be selected from an Enhanced Mobile Broadband (eMBB) slice, a Massive Machine Type Communication (mMTC) slice, and an Ultra-Reliable Low Latency Communication (URLLC) slice. The eMBB slice provides high data rates and improved capacity for applications requiring substantial bandwidth, such as video streaming and virtual reality. In an exemplary aspect, the eMBB slice requires substantial computational and storage resources to handle bandwidth-intensive applications such as HD video streaming, online gaming, and augmented reality. The mMTC slice supports a vast number of low-power devices with low data rates, usually around 0.1 to 1 Mbps per device, enabling efficient communication for applications like smart sensors and IoT devices that transmit small data packets infrequently. The mMTC slice manages thousands or millions of connections simultaneously. In the mMTC slice, resource allocation emphasizes efficient use, catering to the needs of IoT sensors and devices that prioritize battery life and long-range communication over high data throughput. The URLLC slice is configured for applications that require ultra-low latency, typically below 1 ms, and high reliability. Bandwidth allocations for URLLC may vary, often ranging from 10 to 100 Mbps, but are configured to ensure minimal delay and consistent performance for critical applications such as autonomous driving and remote surgery. The URLLC slice demands dedicated resources to guarantee reliability and quick responsiveness.
[0078] In an aspect, the NSMP is configured to provide monitoring and analytics capabilities to the operators, allowing operators to track the performance of each network slice in real-time. The monitoring and analytics functionality includes measuring key performance indicators (KPIs) such as latency, throughput, and reliability. By using these KPIs, the NSMP may be configured to optimize resource distribution and implement Quality of Service (QoS) policies that prioritize critical applications, ensuring that high-priority traffic receives the necessary resources even during periods of congestion.
[0079] In an operative aspect, the NSMP is configured to enable dynamic scaling of resources. As the user demand fluctuates, such as during peak viewing times for live events, the NSMP may automatically allocate additional bandwidth and computational resources to the relevant network slice. For example, suppose an unexpected surge in HD video streaming occurs during a live sports event. In that case, the NSMP may dynamically increase the bandwidth allocation for the eMBB slice, ensuring that users experience uninterrupted service without buffering. In an aspect, the operator may request the NSMP to manage the number of resources or bandwidth associated with each network slice based on the user demand.
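In an illustrative aspect, the dynamic-scaling decision described above may be sketched as a simple threshold rule: when utilization of the eMBB slice crosses a threshold, additional bandwidth is allocated. The threshold, step size, and function name are illustrative assumptions, not parameters defined by the disclosure.

```python
# Sketch of a dynamic-scaling rule for a network slice. All numeric values
# (80% threshold, 500 Mbps step) are illustrative assumptions.
def scaled_bandwidth_mbps(current_mbps: int, utilization: float,
                          step_mbps: int = 500, threshold: float = 0.8) -> int:
    """Return the new bandwidth allocation for the slice."""
    if utilization > threshold:
        # Demand surge: allocate additional bandwidth to the slice.
        return current_mbps + step_mbps
    # Demand within bounds: keep the current allocation.
    return current_mbps

# Surge during a live sports event: 85% utilization triggers a step up.
peak = scaled_bandwidth_mbps(1000, 0.85)    # -> 1500
steady = scaled_bandwidth_mbps(1000, 0.60)  # -> 1000
```

A production NSMP would base such decisions on the monitored KPIs (latency, throughput, reliability) rather than a single utilization figure; the sketch shows only the shape of the decision.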
[0080] The NSMP (210) oversees the lifecycle of network slices, ensuring they are provisioned, monitored, and optimized according to the specific needs of various services and applications. The NSMP enables operators to allocate resources efficiently, maintain quality of service, and dynamically adjust slice instances in response to changing network demands.
[0081] In an aspect, to manage the network slices, as per the user demand or the request of the operator, the NSMP (210) may be configured to initiate a service request. In an aspect, each network slice may be configured to require a specific bandwidth and/or one or more resources accordingly. In an example, the one or more resources include computing resources (e.g., CPU, memory allocation to VMs/containers), storage resources (e.g., disk space), network resources (e.g., bandwidth, IP addresses, network configuration), management resources (e.g., resources for monitoring, logging, orchestration, etc.). In an example, the service request may include various types of resource-related requests, such as a provision resource request, a create resource request, a terminate resource request, and/or an initialize resource request.
[0082] The provision resource request is generated to allocate essential resources for a designated network slice. The provision resource request typically defines the specific resource requirements, such as bandwidth, computational capacity, and storage, to ensure the optimal operational capability of the slice. For example, when generating a network slice based on user demand, the NSMP may be configured to create the provision resource request that specifies the required resources for the new slice. For example, a mobile gaming company experiences a surge in users during a game release, necessitating the creation of a dedicated gaming slice to handle the increased load. In this scenario, the NSMP may generate the provision resource request detailing the specific resource requirements. The provision resource request may include parameters such as bandwidth (for example, 1 Gbps to support high-speed connections for numerous players), computational capacity (for example, 12 CPU cores and 32 GB of RAM to manage real-time game server operations), and storage (for example, 5 TB for game data and user profiles). The NSMP then submits the provision resource request to the MANO to allocate the necessary resources, ensuring that the new gaming slice is provisioned efficiently and can handle the expected demand.
[0083] The create resource request triggers the instantiation of additional resources serving the network slice. The create resource request involves specifying the slice's attributes, including its operational characteristics, intended use cases, and the necessary configuration of resources to support the desired applications. For example, if a network slice (for example, the gaming slice) experiences increased demand and requires additional resources to meet user needs, the NSMP may be configured to generate the create resource request. In response, the NSMP generates the create resource request that details the new requirements: increased bandwidth of 500 Mbps to accommodate the additional traffic, augmented computational capacity with an additional 16 CPU cores and 32 GB of RAM for enhanced data processing and analytics, and increased storage of 3 TB to handle the influx of operational data. The create resource request is submitted to the MANO, instantiating the additional resources and ensuring the gaming slice can effectively support the growing demand. By automating this process, the NSMP ensures that the slice can scale efficiently and deliver reliable performance for the applications it serves.
[0084] In an example, the NSMP determines that the gaming slice is no longer needed, perhaps due to a decline in user activity following the completion of a gaming event, then the NSMP generates the terminate resource request to outline the resources that need to be terminated and made available for reassignment to other network slices. The terminate resource request specifies the details of the resources to be released, including the bandwidth, computational capacity, and storage that were allocated to the gaming slice. For instance, it might detail the need to terminate 2 Gbps of bandwidth, release 16 CPU cores and 64 GB of RAM that were provisioned for gaming operations, and free up 10 TB of storage previously used for game data and player profiles. Once the terminate resource request is generated, the NSMP coordinates a deallocation process with the MANO, ensuring that the resources are properly released from the Gaming Slice. This may involve shutting down servers, clearing network configurations, and updating the resource management system to reflect the availability of these resources. By doing so, the NSMP facilitates efficient resource utilization, allowing these freed resources to be reassigned to other active network slices, thus optimizing the overall network performance and adaptability to changing demands.
[0085] The initialize resource request facilitates the readiness of a newly created network slice for operational deployment. The initialize resource request includes the configuration of resources, the establishment of connectivity, and validation that the slice is equipped to manage traffic and services in accordance with predefined specifications. For example, when new resources are created for the gaming slice to meet increased user demand, the NSMP may generate the initialize resource request. Once the NSMP identifies the need for additional resources, such as increased bandwidth, computational power, and storage, the NSMP generates the initialize resource request that specifies the necessary steps to ensure the gaming slice is ready for use. The initialize resource request includes configuring the newly allocated resources, establishing connectivity with existing infrastructure, and validating that the slice can effectively manage traffic and services according to predefined specifications. For example, the initialize resource request might detail configuring the new servers with the appropriate gaming software, ensuring that the bandwidth is set up to handle 2 Gbps of data flow, and integrating the additional computational resources with the existing game servers for seamless gameplay.
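In an illustrative aspect, the four request types described above may be represented as structured payloads. The field names below are assumptions made for the sketch (not a normative schema); the numeric values are taken from the gaming-slice examples in the preceding paragraphs.

```python
# Illustrative payloads for the four service-request types. Field names are
# assumptions for this sketch; values follow the gaming-slice examples above.
provision_request = {
    "type": "provision_resource",
    "slice": "gaming",
    "bandwidth_mbps": 1000,   # 1 Gbps for high-speed player connections
    "cpu_cores": 12,
    "ram_gb": 32,
    "storage_tb": 5,          # game data and user profiles
}

create_request = {
    "type": "create_resource",
    "slice": "gaming",
    "bandwidth_mbps": 500,    # additional bandwidth for increased demand
    "cpu_cores": 16,
    "ram_gb": 32,
    "storage_tb": 3,
}

terminate_request = {
    "type": "terminate_resource",
    "slice": "gaming",
    # Resources to be released and made available for reassignment.
    "release": {"bandwidth_mbps": 2000, "cpu_cores": 16,
                "ram_gb": 64, "storage_tb": 10},
}

initialize_request = {
    "type": "initialize_resource",
    "slice": "gaming",
    # Readiness steps before the slice enters operational deployment.
    "steps": ["configure_servers", "establish_connectivity",
              "validate_traffic_handling"],
}
```

Such payloads would be carried from the NSMP (210) to the ERM (212) over the NSMP_EM interface, as described in the following paragraph.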
[0086] The NSMP (210) communicates these service requests to the ERM (212) over a first interface (i.e. NSMP_EM). The NSMP (210) receives responses and status updates related to these requests from the ERM (212), allowing it to manage and monitor the overall network slice creation and allocation process.
[0087] The ERM (212) may include a transceiver unit (202), a memory (204), and one or more processor(s) (206). The transceiver unit (202) is configured to receive the service request from the NSMP (210). In an aspect, the transceiver unit (202) is configured to receive the service request from at least one network entity. The at least one network entity includes, but is not limited to, virtual switches, virtual routers, virtual firewalls, virtual load balancers, virtual private network (VPN) gateways, virtualized network interface cards, virtualized network functions, etc.
[0088] The service request includes a container life cycle management request, a resource allocation and management request, a fault management and recovery request, and a resource optimization and scaling request. For example, the container life cycle management request includes, but is not limited to, a container creation request, container scaling request, container update request, container restart request, container stop request, container removal request, container health request, container configuration request, etc. In an aspect, the resource allocation and management request includes, but is not limited to, request to allocate central processing unit (CPU) or memory to virtual machines (VMs), request to resize the VMs, request to allocate storage to VMs, request to modify storage allocation, request to deallocate resources from VMs, request to monitor resource utilization, request to update resource allocation policies, etc. The fault management and recovery request includes, but is not limited to, request to detect a fault, request to restart a faulty VM, request to reboot a host, request to restore VM from backup, request to update fault tolerance settings, etc. The resource optimization and scaling request includes, but is not limited to, request to scale up or scale down the VM, request to auto scale container service, request to optimize storage allocation, request to adjust network bandwidth allocation, request to provision additional VMs, request to monitor and adjust performance metrics, request to optimize load balancer configuration, etc.
[0089] The memory (204) may be configured to store one or more computer-readable instructions or routines in a non-transitory computer readable storage medium, which may be fetched and executed to create or share data packets over a network service. The memory (204) may comprise any non-transitory storage device including, for example, volatile memory such as Random Access Memory (RAM), or non-volatile memory such as Erasable Programmable Read-Only Memory (EPROM), flash memory, and the like. The one or more processor(s) (206) is configured to initiate a process of allocating the network slice through the application interface of the UE (108). In an embodiment, the application interface is configured to transmit one or more instructions to the one or more processor(s) (206). In an embodiment, the one or more processor(s) (206) may be implemented as one or more microprocessors, microcomputers, microcontrollers, edge or fog microcontrollers, digital signal processors, central processing units, logic circuitries, and/or any devices that process data based on operational instructions. Among other capabilities, the one or more processor(s) (206) may be configured to fetch and execute computer-readable instructions stored in the memory (204) of the system (102).
[0090] In an embodiment, the one or more processor(s) (206) may be implemented as a combination of hardware and programming (for example, programmable instructions) to implement one or more functionalities of the one or more processor(s) (206). In the examples described herein, such combinations of hardware and programming may be implemented in several different ways. For example, the programming for the one or more processor(s) (206) may be processor-executable instructions stored on a non-transitory machine-readable storage medium and the hardware for the one or more processor(s) (206) may comprise a processing resource (for example, one or more processors), to execute such instructions. In the present examples, the machine-readable storage medium may store instructions that, when executed by the processing resource, implement the one or more processor(s) (206). In such examples, the system (102) may comprise the machine-readable storage medium storing the instructions and the processing resource to execute the instructions, or the machine-readable storage medium may be separate but accessible to the system (102) and the processing resource. In other examples, the one or more processor(s) (206) may be implemented by electronic circuitry.
[0091] In an operative embodiment, the ERM (212) may be configured to receive the service request from the NSMP (210) over the first interface. The first interface may be referred to as the NSMP_EM interface. The service request may correspond to network slice management, potentially containing specific parameters defining the desired characteristics of the network slice. The NSMP (210) may generate these service requests based on various factors, such as user demands, network conditions, or predefined policies. The first interface may be a dedicated communication channel designed to facilitate efficient and secure information exchange between the NSMP (210) and the ERM (212).
[0092] Upon receiving the service request, the ERM (212) may route the service request, using the transceiver unit, to a target network service associated with the MANO function (214) over a second interface for request fulfillment. The second interface may be known as the EM_MS interface. The network services refer to the functionalities or capabilities that the network provides to support various applications and user requirements. The network services encompass a wide range of operations, including data transmission, security, network management, and application support. Examples of the network services include voice over IP (VoIP), video conferencing, cloud storage, and internet access. In an implementation, the network service may be implemented as a microservice. Each microservice is designed to perform a specific function and can be developed, deployed, and scaled independently. This approach enhances flexibility and scalability, allowing network operators to rapidly adapt to changing user demands and technological advancements. For example, a video conferencing service may be implemented as a microservice that handles video and audio processing, while separate microservices might manage user authentication, session management, and data storage.
[0093] The ERM may be configured to analyze the received service requests to determine the most appropriate network service within the MANO function (214) to handle the specific requirements of network slice management. In an aspect, the network service may be implemented as a microservice. This analysis may be based on the type of resource allocation required, as specified in the service request. The second interface may be designed to support seamless communication between the ERM (212) and the various network services within the MANO function (214), potentially utilizing standardized protocols to ensure interoperability. The target microservice may be provided by at least one of a Network Functions Virtualization Orchestrator (NFVO), a Virtual Network Function Manager (VNFM), or a Virtualized Infrastructure Manager (VIM). Each of these components plays a specific role in the orchestration and management of virtualized network functions and resources.
[0094] The MANO function (214) is configured to receive the one or more service requests from the ERM (212) over one or more different interfaces. In an aspect, the one or more different interfaces are collectively referred to as the second interface. The MANO function (214) is configured to process the routed service request through the target microservice, which may perform the necessary actions to allocate the requested network slice. Once the processing is complete, the target microservice may generate a response, which may be an acknowledgement of the service request. This response may be transmitted back to the ERM (212) over the second interface. The acknowledgement may provide crucial information about the status of the network slice management, potentially indicating success or failure of the operation. The system (102) may provide different types of acknowledgements based on the outcome of the service request processing. A positive acknowledgement may indicate successful fulfilment of the service request, potentially including details about the allocated network slice or resources. Conversely, a negative acknowledgement may indicate a failure to fulfil the service request, possibly accompanied by information about the reason for the failure.
[0095] Upon receiving the response from the MANO function (214), the ERM (212) may forward this response to the NSMP (210) over the first interface. To enhance the visibility of the request fulfilment process, the ERM (212) may generate an update signal indicating the status of the processed service request. This update signal may be communicated to the NSMP (210) via the first interface, providing real-time updates on the request status. Such real-time updates may enable the NSMP (210) to monitor the progress of network slice management requests, potentially allowing for timely interventions or adjustments if necessary.
[0096] The system (102) may incorporate various mechanisms to enhance the reliability and efficiency of the communication process. For instance, the ERM (212) may be configured to determine whether the received service request is valid by checking if the received service request exists within the memory of the ERM (212). For example, the memory may include a set of service requests to be served by the ERM (212). If the service request is invalid or non-existent in the memory (204), the ERM (212) may discard the received service request.
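In an illustrative aspect, the validity check described above may be sketched as a membership test against the set of recognized request types held in the ERM's memory. The set contents and function name are illustrative assumptions.

```python
# Sketch of the ERM validity check: requests whose type is not found in the
# stored set are discarded. The set contents are illustrative assumptions.
KNOWN_REQUEST_TYPES = {
    "provision_resource",
    "create_resource",
    "terminate_resource",
    "initialize_resource",
}

def accept_or_discard(request: dict) -> bool:
    """Return True if the request is routed onward, False if discarded."""
    return request.get("type") in KNOWN_REQUEST_TYPES

valid = accept_or_discard({"type": "provision_resource"})    # routed onward
invalid = accept_or_discard({"type": "unknown_operation"})   # discarded
```

Rejecting unrecognized requests at the ERM prevents malformed or unsupported events from reaching the MANO microservices.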
[0097] The communication interfaces utilized by the system (102) may possess specific characteristics to support efficient information exchange. Both the first interface and the second interface may be bi-directional interfaces, enabling asynchronous event-based communication between the NSMP (210), the ERM (212), and the MANO function (214).
[0098] To facilitate effective tracking and management of service requests, the ERM (212) may store the received service requests in a database associated with the MANO function (214). This database may serve as a centralized repository of service request information, potentially enabling comprehensive auditing, performance analysis, and troubleshooting.
[0099] The service requests may include user-defined parameters for configuring multiple profiles to cater to different use cases and monitor multiple metrics. This flexibility in request configuration may enable customized network slices for diverse service requirements, potentially allowing the system (102) to address a wide range of network scenarios and user needs. When routing the service request to the target microservice, the ERM (212) may translate the service request into a format compatible with the MANO function (214), ensuring seamless communication between different components of the system (102).
[0100] To provide comprehensive visibility into the request fulfillment process, the ERM (212) may monitor the progress of service request fulfillment and generate periodic status updates to the NSMP (210). These status updates may include information such as the current stage of fulfillment, any encountered issues, and estimated time to completion. Additionally, the ERM (212) may maintain a mapping of service request types to target microservices within the MANO function (214), potentially enabling more efficient and accurate routing of requests.
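In an illustrative aspect, the mapping of service request types to target microservices maintained by the ERM (212) may be sketched as a lookup table. The assignment of request types to NFVO, VNFM, or VIM below is an illustrative assumption, not an allocation mandated by the disclosure.

```python
# Hypothetical mapping of service-request types to MANO target microservices
# (NFVO / VNFM / VIM). The specific assignments are illustrative assumptions.
REQUEST_TO_MICROSERVICE = {
    "provision_resource": "NFVO",
    "create_resource": "VNFM",
    "terminate_resource": "VNFM",
    "initialize_resource": "VIM",
}

def target_for(request_type: str) -> str:
    """Resolve the target microservice for a request type, or fail loudly."""
    try:
        return REQUEST_TO_MICROSERVICE[request_type]
    except KeyError:
        raise ValueError(f"no target microservice for {request_type!r}")

target = target_for("provision_resource")  # -> "NFVO"
```

Keeping the mapping in one table lets routing decisions be updated without changing the forwarding logic itself.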
[0101] The ERM (212) may implement a load-balancing mechanism to distribute service requests across multiple instances of the MANO function (214) microservices to optimize resource utilization and improve overall performance.
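In an illustrative aspect, one simple realization of such a load-balancing mechanism is round-robin distribution across the available microservice instances. The instance names below are invented for the sketch; a production ERM might instead weight instances by reported load.

```python
import itertools

# Round-robin distribution of service requests across MANO microservice
# instances -- one possible load-balancing policy. Instance names are invented.
class RoundRobinBalancer:
    def __init__(self, instances):
        self._cycle = itertools.cycle(instances)

    def next_instance(self):
        """Return the instance that should receive the next service request."""
        return next(self._cycle)

balancer = RoundRobinBalancer(["nfvo-1", "nfvo-2", "nfvo-3"])
picks = [balancer.next_instance() for _ in range(4)]
# picks == ["nfvo-1", "nfvo-2", "nfvo-3", "nfvo-1"]
```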
[0102] In an aspect, the system (102) may be capable of handling multiple concurrent service requests from the NSMP (210). The ERM (212) may implement a prioritization mechanism to manage these concurrent requests effectively. The system (102) may prioritize the received service requests based on predetermined criteria, such as the urgency of the request, the importance of the requesting entity, or the potential impact on network performance. The prioritized service requests may then be routed to appropriate target microservices of the MANO function (214) in order of their assigned priority, potentially ensuring that critical requests are processed in a timely manner.
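In an illustrative aspect, the prioritization mechanism described above may be sketched as a priority queue over concurrent service requests, where a lower number denotes higher urgency. The priority scheme and request identifiers are illustrative assumptions.

```python
import heapq

# Priority queue over concurrent service requests. Lower priority value means
# higher urgency; the numbering scheme itself is an illustrative assumption.
class RequestPrioritizer:
    def __init__(self):
        self._heap = []
        self._counter = 0  # preserves FIFO order among equal priorities

    def submit(self, priority: int, request_id: str):
        heapq.heappush(self._heap, (priority, self._counter, request_id))
        self._counter += 1

    def next_request(self) -> str:
        """Pop the highest-urgency request for routing to its microservice."""
        return heapq.heappop(self._heap)[2]

q = RequestPrioritizer()
q.submit(2, "scale-eMBB")       # routine capacity adjustment
q.submit(0, "restore-URLLC")    # critical: ultra-low-latency slice fault
q.submit(1, "provision-mMTC")
order = [q.next_request() for _ in range(3)]
# order == ["restore-URLLC", "provision-mMTC", "scale-eMBB"]
```

The tie-breaking counter ensures that requests of equal priority are served in arrival order, avoiding starvation among same-priority requests.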
[0103] In an aspect, the system (102) may maintain a log of all service requests and their corresponding responses, providing a historical record of all network slice management activities. This log may be periodically synchronized with a backup system to ensure data redundancy and enable recovery in case of system failures.
[0104] The system (102) may incorporate advanced analytical capabilities to optimize operations and provide valuable insights. The ERM (212) may perform real-time analytics on the service requests and responses to identify patterns and potential issues in the network slice management process. By analyzing trends in service requests, success rates of resource allocations, and performance metrics of allocated network slices, the system (102) may provide valuable information for optimizing the network slice management process. Furthermore, the ERM (212) may generate reports based on these analytics for network administrators, potentially facilitating data-driven decision-making and continuous improvement of network resource allocation strategies.
[0105] The interface(s) (208) is included within the system (102) to serve as a medium for data exchange, configured to facilitate user interaction with the mobile application. The interface(s) (208) may comprise a variety of interfaces, for example, interfaces for data input and output devices (referred to as I/O devices), storage devices, and the like. The interface(s) (208) may facilitate communication to/from the system (102) and may also provide a communication pathway for one or more components of the system (102). Examples of such components include, but are not limited to, the database (218) and a distributed data lake.
[0106] In an embodiment, the database (218) is configured to serve as a centralized repository for storing and retrieving various operational data. The database (218) is designed to interact seamlessly with other components of the system (102), such as the NSMP (210), the ERM (212), and the MANO function (214), to support the system's functionality effectively. The database (218) may store data that may be either stored or generated as a result of functionalities implemented by any of the components of the one or more processor(s) (206). In an embodiment, the database (218) may be separate from the system (102).
[0107] FIG. 3 illustrates a MANO framework architecture (300) corresponding to the MANO function (214), in accordance with the present disclosure.
[0108] As depicted in FIG. 3, the MANO framework architecture (300) may comprise several interconnected modules that work together to enable efficient resource allocation and management. The interconnected modules may include a user interface layer (302), NFV and software-defined networking (SDN) design functions (304), platform foundation services (306), platform core services (308), a platform operation, administration, and maintenance manager (310), and platform resource adapters and utilities (312).
[0109] The user interface layer (302) may serve as a primary point of interaction for the network operators and the network administrators. Through the user interface layer (302), various parameters associated with the ERM (212) may be configured and adjusted based on specific requirements. These parameters may include a number of microservices associated with ERMs, a number of ERMs that may be deployed simultaneously, and other relevant settings.
[0110] The NFV and SDN design functions (304) may work in conjunction with the platform core services (308) to analyze and route the service requests received by the ERM (212). These modules may be responsible for determining the target microservice within the MANO (128) framework based on a type of resource allocation required. The modules may utilize a mapping of service request types to target microservices maintained by the ERM (212) to ensure accurate routing. The NFV and SDN design functions (304) may include a virtualized network function (VNF) lifecycle manager (compute), a specialized component focused on managing the compute resources associated with VNFs throughout their lifecycle. The NFV and SDN design functions (304) may include a VNF catalog, a repository that stores and manages metadata, configurations, and templates for VNFs, facilitating their deployment and lifecycle management. The NFV and SDN design functions (304) may further include a network services catalog, a network slicing and service chaining manager, a physical and virtual resource manager, and a CNF lifecycle manager. The network services catalog serves as a repository for managing and storing detailed information about network services, including their specifications and deployment requirements. The network slicing and service chaining manager is responsible for orchestrating network slices and service chains, ensuring efficient allocation and utilization of network resources tailored to various services. The physical and virtual resource manager oversees both physical and virtual resources, handling their allocation, monitoring, and optimization to ensure seamless operation across the network infrastructure. The CNF lifecycle manager manages the complete lifecycle of CNFs, including onboarding, instantiation, scaling, monitoring, and termination, thereby facilitating the efficient deployment and operation of network functions in a cloud-native environment.
[0111] The platform foundation services (306) may support an asynchronous event-based processing model implemented by the ERM (212), enabling concurrent handling of multiple service requests. They may also facilitate bi-directional communication interfaces used by the ERM (212) to interact with the external systems (e.g., the NSMP (210)) and the MANO function (214) framework microservices. The platform foundation services (306) may include microservices elastic load balancer, identity & access manager, command line interface (CLI), central logging manager, and the ERM. The microservices elastic load balancer ensures that incoming traffic is evenly distributed across multiple microservices, enhancing performance and availability. The identity & access manager handles user identity management and access control, enforcing permissions and roles to secure resources and services. The CLI offers a text-based method for users to interact with the platform, enabling command execution and configuration management. The central logging manager consolidates log data from various system components, providing a unified view for effective monitoring, troubleshooting, and data analysis.
[0112] The platform core services (308) are central to the processing and fulfillment of the service requests (i.e., at least one service request). The platform core services (308) may work together to allocate network slices, manage virtualized network functions, and orchestrate the underlying infrastructure resources based on each of the at least one service request routed by the ERM (212). The platform core services (308) may include NFV infrastructure monitoring manager, assurance manager, performance manager, policy execution engine, capacity monitoring manager, release management repository, configuration manager & golden configuration template (GCT), NFV platform decision analytics platform, Not Only SQL database (NoSQL DB), platform schedulers & jobs, VNF backup & upgrade manager, and microservice auditor. The NFV infrastructure monitoring manager tracks and oversees the health and performance of NFV infrastructure. The assurance manager ensures service quality and compliance with operational standards. The performance manager monitors system performance metrics to optimize efficiency. The policy execution engine enforces and executes policies across the platform. The capacity monitoring manager tracks resource usage and forecasts future needs. The release management repository manages software releases and version control. The configuration manager handles system configurations, ensuring consistency and automation. The GCT provides centralized oversight and management of platform operations. The NFV platform decision analytics platform utilizes data analytics to support decision-making. The NoSQL DB stores unstructured data to support flexible and scalable data management. The platform schedulers and jobs automate and schedule routine tasks and workflows. The VNF backup and upgrade manager oversees the backup and upgrading of VNFs. The microservice auditor ensures the integrity and compliance of microservices across the platform.
[0113] The platform operation, administration, and maintenance manager (310) may oversee operational aspects of the MANO framework architecture (300). The platform operation, administration, and maintenance manager (310) may be responsible for implementing a load-balancing mechanism used by the ERM (212) to distribute the service requests across multiple instances of the MANO (128) framework microservices.
[0114] The platform resource adapters and utilities (312) may provide necessary tools and interfaces for interacting with an underlying network infrastructure, i.e., the NFV architecture. These components may be crucial in translating the service requests into actionable commands for the resources (also referred to as the network resources) allocation and management. The platform resource adapters and utilities (312) may work closely with the platform core services (308) to ensure that allocated resources meet specified requirements. Together, these modules create a cohesive and efficient system for managing resource allocation. The platform resource adapters and utilities (312) may include a platform external API adapter and gateway, a generic decoder and indexer, an orchestration adapter, an API adapter, and an NFV gateway. The platform external API adapter and gateway facilitates seamless integration with external APIs and manages data flow between external systems and the platform. The generic decoder and indexer processes and organizes data from various formats such as XML, comma-separated values (CSV), and JSON, ensuring compatibility and efficient indexing. The orchestration adapter manages interactions with clusters, enabling container orchestration and scaling. The API adapter interfaces with services, allowing integration and management of cloud resources. The NFV gateway acts as a bridge for NFV communications, coordinating between NFV components and other platform elements.
[0115] The ERM (212) may interact with all these modules to facilitate an end-to-end process of the service request handling. The ERM (212) may receive the service requests through the user interface layer (302), utilize the NFV and SDN design functions (304) for the service requests analysis, leverage the platform core services (308) for the service requests fulfillment, and use the platform resource adapters and utilities (312) for actual resources allocation for each of the service requests. The platform operation, administration, and maintenance manager (310) may oversee this entire process, ensuring efficient operation and fault tolerance. This integrated approach may enable the MANO (128) framework to efficiently manage a complex process of resource allocation and management, potentially providing a flexible and responsive system capable of meeting diverse service requirements in a dynamic network environment.
[0116] FIG. 4 illustrates a flow chart representing a method (400) of processing the service request between the NSMP (210) and the MANO function (214) via the ERM (212), in accordance with the present disclosure.
[0117] At step 402, the NSMP (210) may generate the one or more service requests. These one or more service requests may encompass various resource-related actions to manage the network slice effectively, including but not limited to the provision resource request, the create resource request, the terminate resource request, and/or the initialize resource request. After generation, the NSMP (210) may communicate the one or more service requests to the ERM (212) over the first interface, which may be the NSMP_EM interface.
[0118] Step 404 may involve a validation process performed by the ERM (212). Upon receiving the service requests, the ERM (212) may determine whether each received service request (also referred to as an event) is valid. This validation may be conducted by checking if the received service request exists within the memory of the ERM (212). The memory of the ERM (212) may contain a repository of valid request templates or identifiers, serving as a reference for this validation process. At step 406, the ERM (212) may discard the invalid service request if the ERM (212) determines that the received service request does not exist in the memory.
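The validation at steps 404 and 406 may be sketched as a membership check against a repository of known request identifiers. The identifier names below are illustrative assumptions derived from the request types named in this disclosure.

```python
# Hypothetical repository of valid request identifiers held in the ERM's memory
VALID_REQUEST_TYPES = {
    "provision_resource",
    "create_resource",
    "terminate_resource",
    "initialize_resource",
}

def validate(request):
    """Return True if the event exists in the repository; otherwise it is discarded."""
    return request.get("type") in VALID_REQUEST_TYPES

valid = validate({"type": "provision_resource", "slice_id": "slice-42"})
invalid = validate({"type": "unknown_action"})
```

A request failing this check would be dropped at step 406 rather than routed onward.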
[0119] If the received service request is found to be valid, the process may proceed to step 408. Step (408) may involve a detailed analysis of the received service request by the ERM (212). At step (408), the ERM (212) determines the appropriate microservice within the MANO function (214) that should be assigned to handle the request. For instance, if the analyzed service request is identified as a provision service request, a specific microservice configured to process such requests may be assigned.
[0120] At step 408, the ERM (212) forwards the analyzed service requests via the transceiver unit to the specific microservice associated with the MANO function (214). Additionally, the ERM (212) may be configured to transfer an acknowledgement message along with a delivery report back to the NSMP (210), providing confirmation that the request has been successfully routed.
[0121] At step 410, the allocated microservice within the MANO function (214) may transmit at least one event acknowledgement towards the transceiver unit of the ERM (212). This acknowledgement may indicate that the microservice has received and begun processing the service request.
[0122] At step 412, the transceiver unit of the ERM (212) may receive the event acknowledgement from the allocated microservice. This step may complete the communication loop from the MANO function (214) back to the ERM (212). Additionally, this step may involve the ERM (212) communicating an update signal via its input module towards the NSMP (210). This update signal may indicate the status of the processed request, providing real-time information about the progress of the network slice managing process.
[0123] FIG. 5 illustrates a flow diagram (500) describing service request processing, in accordance with the present disclosure. For example, FIG. 5 shows various steps of processing the provision resource request received from the NSMP (210).
[0124] Step (502) includes generating the service requests (for example, the provision resource request) by the NSMP (210). In an aspect, step (502) may further include transmitting the generated provision resource request to the ERM (212).
[0125] Step (504) includes forwarding the received service request by the ERM (212) to the MANO function (214). In an example, the ERM (212) may be configured to analyze the received service requests to allocate the specific microservice associated with the MANO function (214) accordingly. For example, if the ERM (212) analyzes that the received service request is the provision resource request, then the ERM (212) may forward the received request to the NFVO.
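The routing decision at step 504 may be sketched as a lookup in a mapping of service request types to target microservices, following the example in which a provision resource request is forwarded to the NFVO. The other entries are plausible assumptions, not fixed by the disclosure.

```python
# Illustrative routing table maintained by the ERM (assumed entry names)
ROUTING_TABLE = {
    "provision_resource": "NFVO",
    "processed_service": "CNF_MANAGER",
    "infrastructure_orchestration": "VIM",
}

def route(request_type):
    """Return the target microservice for a request type, or raise if unmapped."""
    target = ROUTING_TABLE.get(request_type)
    if target is None:
        raise ValueError(f"no target microservice for {request_type!r}")
    return target

target = route("provision_resource")
```

A table of this shape keeps routing accurate while remaining easy to extend with new request types.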
[0126] In an example, the MANO function (214) may include the VIM, the CNF manager, the NFVO, and a virtual network function manager (VNFM). The NFVO may be configured to perform resource orchestration and network service orchestration. The NFVO may be configured to receive the provision resource request from the NSMP via the ERM (212) and to generate a process service request. The NFVO coordinates resource allocation and oversees the lifecycle of both Virtual Network Functions (VNFs) and the network services. The NFVO is responsible for onboarding VNFs, integrating them into the network, and ensuring proper configuration. It communicates directly with the VIM to manage underlying infrastructure resources effectively. Additionally, the NFVO generates process service requests for other components and enforces policies related to resource usage and service quality. In an aspect, the process service request may include a set of instructions to be followed by the CNF manager. In an aspect, the NFVO is responsible for the onboarding of VNFs and network services that are managed by the same or different VNF Managers. The NFVO may communicate with the VIMs directly, coordinate, authorize, release, and engage the virtual storage and networking resources. The NFVO is also configured to manage the life cycles of network services and VNFs.
[0127] In an aspect, the NFVO may parse the received provision resource request to understand the specific requirements, such as bandwidth, computational capacity, and storage. The NFVO then maps these resource needs to an available infrastructure, checking the current inventory to ensure the requested resources may be provisioned. The NFVO generates an orchestration plan that outlines allocating the resources efficiently. This includes determining which VIM will handle the provisioning. Once the plan is established, the NFVO communicates with the selected VIM, sending detailed instructions for resource allocation.
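The NFVO behaviour in this paragraph — parsing requirements, checking inventory, and selecting a VIM — may be sketched as follows. The inventory model, field names, and VIM identifiers are illustrative assumptions only.

```python
# Hypothetical inventory of available capacity per VIM
INVENTORY = {"vim-1": {"cpu": 64, "memory_gb": 256, "storage_gb": 4096}}

def plan_provisioning(request):
    """Map the request's resource needs to a VIM with sufficient capacity.

    Returns an orchestration plan naming the selected VIM, or None if
    no VIM can satisfy the request.
    """
    needs = request["resources"]  # e.g. {"cpu": 12, "memory_gb": 32}
    for vim, capacity in INVENTORY.items():
        if all(capacity.get(k, 0) >= v for k, v in needs.items()):
            return {"vim": vim, "allocate": needs}
    return None

plan = plan_provisioning({"resources": {"cpu": 12, "memory_gb": 32}})
```

Once such a plan exists, the NFVO would send the detailed allocation instructions to the selected VIM, as described above.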
[0128] Step (506) includes storing the received provision resource request by the NFVO in the database associated with the MANO function (214). Storing the received provision resource request(s) enables tracking of resource allocations and changes over time, which is essential for accountability and troubleshooting. Additionally, maintaining a record allows for historical analysis of usage patterns and demand trends, informing future capacity planning and resource allocation strategies. The stored resource requests also aid in effective resource management, providing quick access to past requests and their requirements.
[0129] Step (508) includes communicating, by the NFVO, the processed service request towards the transceiver unit of the ERM (212). The NFVO generates the processed service request based on the provision service request. The processed service request includes details about the required processing actions, such as resource provisioning or configuration changes. The NFVO validates the provision service request to ensure it meets all the necessary criteria and complies with the network policies. The NFVO prepares the provision service request for execution. The processed service request outlines the specific actions to be taken, such as provisioning new resources or modifying existing configurations. Unique identifiers for the network service or slice are included to provide context for the request. Additionally, the processed request contains configuration parameters that need to be applied, along with compliance information to ensure adherence to network policies and service level agreements (SLAs).
[0130] Optionally, another step (510) may include forwarding the received processed service request by the ERM (212) to the CNF manager directly. The CNF manager may be configured to receive the processed resource request from the ERM and may be configured to analyze the received processed service request.
[0131] Step (512) includes, based upon the analysis, generating, by the CNF manager, an infrastructure orchestration request. The CNF manager may be configured to generate an instruction regarding onboarding, instantiation, or termination of the process or the infrastructure, based on the received processed resource request. In an aspect, the infrastructure orchestration request may include information on the infrastructure to be assigned, created, updated, or deleted. Upon receiving the processed service request, the CNF manager generates specific instructions for managing network functions accordingly. For example, if the processed service request involves onboarding a new containerized network function (CNF) for a mobile gaming application, the CNF manager may generate an infrastructure orchestration request that specifies how to integrate the CNF into the existing network environment, including necessary configurations and dependency checks. If the processed service request requires instantiation, the CNF manager may generate the infrastructure orchestration request for deploying the CNF, ensuring that the CNF is allocated the specified resources, such as 12 CPU cores and 32 GB of RAM, to handle the expected user load. On the other hand, if the processed service request involves terminating a CNF that is no longer needed, the CNF manager may generate the infrastructure orchestration request for safely decommissioning it, ensuring that all resources are released and the network remains stable.
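The branching described at step 512 — onboard, instantiate, or terminate — may be sketched as follows, reusing the 12 CPU cores / 32 GB RAM example from the text. The dictionary shapes and field names are illustrative assumptions.

```python
def build_orchestration_request(processed):
    """Turn a processed service request into an infrastructure orchestration request."""
    action = processed["action"]
    if action == "onboard":
        return {"op": "integrate_cnf", "cnf": processed["cnf"],
                "checks": ["configuration", "dependencies"]}
    if action == "instantiate":
        return {"op": "deploy_cnf", "cnf": processed["cnf"],
                "resources": processed["resources"]}
    if action == "terminate":
        return {"op": "decommission_cnf", "cnf": processed["cnf"],
                "release_resources": True}
    raise ValueError(f"unknown action {action!r}")

# The instantiation example from the text: 12 CPU cores and 32 GB of RAM
req = build_orchestration_request(
    {"action": "instantiate", "cnf": "gaming-cnf",
     "resources": {"cpu": 12, "memory_gb": 32}})
```

The resulting request would then be communicated toward the ERM (212) at step 514.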
[0132] Step (514) includes communicating, by the CNF manager, the infrastructure orchestration request towards the transceiver unit of the ERM (212).
[0133] Step (516) includes forwarding the received infrastructure orchestration request by the ERM (212) to the VIM directly. In an example, the VIM may be responsible for controlling and managing the NFV infrastructure (NFVI) compute, storage, and network resources. The VIM may be configured to receive the infrastructure orchestration request from the ERM, and may be configured to analyze the received infrastructure orchestration request.
[0134] Step (518) may include processing the infrastructure orchestration request by the VIM. In an aspect, the processing begins by analyzing the infrastructure orchestration request to understand the specific resource requirements. The infrastructure orchestration request specifies the required resources, such as CPU, memory, and storage. The VIM then assesses its current resource inventory to determine availability, checking how many resources (CPU cores, RAM, and storage) can be allocated to the new CNF. Once it verifies that the required resources are available, the VIM allocates the resources accordingly, provisioning the necessary virtual machines (VMs) or containers as specified. Following allocation, the VIM applies necessary configurations, including network settings and security policies, to ensure the infrastructure is ready for deployment.
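The VIM processing at step 518 — checking the resource inventory against the request and allocating when sufficient — may be sketched as follows. The inventory model is an illustrative assumption.

```python
class Vim:
    """Illustrative sketch of a VIM's resource inventory and allocation check."""

    def __init__(self, cpu, memory_gb, storage_gb):
        self.free = {"cpu": cpu, "memory_gb": memory_gb, "storage_gb": storage_gb}

    def allocate(self, needs):
        """Deduct the requested resources if available; return success as bool."""
        if any(self.free.get(k, 0) < v for k, v in needs.items()):
            return False  # insufficient inventory; nothing is deducted
        for k, v in needs.items():
            self.free[k] -= v
        return True

vim = Vim(cpu=64, memory_gb=256, storage_gb=4096)
ok = vim.allocate({"cpu": 12, "memory_gb": 32})
```

After such an allocation succeeds, the VIM would apply the necessary configurations and report the outcome via the event acknowledgement of step 520.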
[0135] Step (520) includes generating, by the VIM, an event acknowledgement and communicating the generated event acknowledgement towards the ERM (212).
[0136] Step (522) includes communicating the update signal, via the transceiver unit of the ERM (212), towards the NSMP (210), indicating a status of the processed request. In an example, the update signal may represent a status of the received service request. In an example, the status may be a success, a failure, or a negative acknowledgement.
[0137] FIG. 6 illustrates a block diagram (600) depicting interfaces between the NSMP (210), the ERM (212), and the MANO function (214), in accordance with the present disclosure. Several network elements (610-1, 610-2,…610-n) may be connected to the NSMP (210). Examples of the network elements include an Access and Mobility Management Function (AMF), a Session Management Function (SMF), and a User Plane Function (UPF). The network elements (610-1, 610-2,…610-n) may require customized network slices to fulfill specific requirements, such as low latency, high bandwidth, or specific security measures, catering to various applications and services. The NSMP (210) manages network slices and provides functionalities to create, configure, monitor, and manage network slices dynamically. The first interface (NSMP_EM) between the NSMP (210) and the ERM (212) may be utilized to initiate the service request. The service request may carry requests regarding network slice instances. Further, the second interface (EM_MS) between the ERM (212) and the MANO function (214) may be used to forward the request received on the NSMP_EM interface at the ERM (212) to the specific microservice in the MANO function (214) for request fulfillment. In an aspect, the second interface may consist of one or more different (distinct) interfaces to facilitate communication with the various components of the MANO function (214). For example, a specific interface may exist between the ERM and the VIM, handling requests for resource allocation and management. Additionally, another interface may exist between the ERM and the CNF manager, enabling seamless interaction for onboarding and managing CNFs. Furthermore, an interface may connect the ERM to the NFVO, facilitating orchestration and coordination of network services and resources.
These one or more different interfaces enhance the modularity and flexibility of the system, allowing each component to communicate effectively while serving its distinct role in the overall architecture. This design ensures efficient resource management, orchestration, and operation of network functions across different layers of the network.
[0138] The ERM (212) may then receive an event acknowledgement (success/failure) from the specified microservice of the MANO function (214) via the second interface (EM_MS interface). Further, the ERM (212) may send the event acknowledgement to the NSMP (210) via the first interface (NSMP_EM interface).
[0139] FIG. 7 illustrates an exemplary flow diagram of a method 700 for communicating between the NSMP and the MANO function (214) via the ERM (212) for allocating the network slice, in accordance with embodiments of the present disclosure.
[0140] At step 702, the method 700 includes receiving, by the ERM (212), the service request from the NSMP (210) over a first interface. The service request may correspond to a network slice management. Further, the first interface may be a dedicated communication channel designed to facilitate efficient interaction between the NSMP (210) and the ERM (212). The service request received by the ERM (212) may contain specific parameters defining the requirements for the network slice, such as bandwidth, latency, or security specifications.
[0141] Upon receiving the service request, the ERM (212) may perform a validation check to ensure the integrity and authenticity of the request. This validation process may involve verifying if the received service request exists within the memory of the ERM (212). The memory of the ERM (212) may contain a repository of valid request templates or identifiers, against which incoming requests are checked. If the service request is found to be invalid or non-existent in the ERM's memory, the ERM (212) may discard the received service request. The service request may include various types of resource-related requests, such as a provision resource request, a create resource request, or an initialize resource request.
[0142] At step 704, the method 700 includes analyzing, by the ERM, the received service request to determine one or more target network services (also implemented as microservices) associated with the MANO function (214) for routing the service request.
[0143] At step 706, the method 700 includes routing, by the ERM (212), the service request to the one or more target network services associated with the MANO function (214) over one or more different interfaces for request fulfillment. The routing process may involve a sophisticated analysis of the received service request to determine the most appropriate target microservice within the MANO function (214). This analysis may be based on the type of resource allocation required, as specified in the service request. The ERM (212) may maintain a mapping of service request types to target microservices, enabling efficient and accurate routing of requests. The target microservice of the MANO function (214) may be provided by at least one of a Network Functions Virtualization Orchestrator (NFVO), a Virtual Network Function Manager (VNFM), or a Virtualized Infrastructure Manager (VIM). Each of these microservices may be specialized in handling specific aspects of network slice management and resource allocation. The routing process may also involve translating the service request into a format compatible with the MANO function (214).
[0144] The second interface, over which the ERM (212) routes the service request to the one or more network services of the MANO function (214), may be a bi-directional interface referred to as the EM_MS interface. This interface may enable asynchronous event-based communication between the ERM (212) and the MANO function (214).
[0145] At step 708, the method 700 includes receiving, by the ERM (212), a response from the target microservice associated with the MANO function (214) over the second interface. The response may be an acknowledgement of the service request, indicating whether the request has been successfully fulfilled or not. The acknowledgement may take the form of either a positive acknowledgement indicating successful fulfillment of the service request, or a negative acknowledgement indicating failure to fulfill the service request.
[0146] Upon receiving the response, the ERM (212) may generate an update signal indicating the status of the processed service request. This update signal may provide real-time updates on the request status, allowing for timely monitoring of the network slice allocation process.
[0147] At step 710, the method 700 includes forwarding, by the ERM (212), the response to the NSMP (210) over the first interface. This step completes the communication loop, ensuring that the NSMP (210) is informed of the outcome of its service request. The NSMP (210) is configured to allocate the network slice based on the positive acknowledgement.
[0148] The first interface, over which the response is forwarded to the NSMP (210), may be a bi-directional interface referred to as the NSMP_EM interface. Like the EM_MS interface, this interface may enable asynchronous event-based communication, allowing efficient information exchange between the ERM (212) and the NSMP (210). The bi-directional nature of this interface may facilitate not only the forwarding of responses but also the potential for follow-up queries or additional requests from the NSMP (210) based on the received response.
[0149] The ERM (212) may implement an asynchronous event-based processing model to handle multiple service requests concurrently. This model may allow the ERM (212) to efficiently manage a high volume of service requests without becoming a bottleneck in the system. The asynchronous nature of the processing may enable the ERM (212) to initiate multiple request fulfillment processes simultaneously, improving the overall throughput of the network slice managing system.
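The asynchronous event-based processing model described in this paragraph may be sketched minimally as follows: every request's fulfilment process is initiated before any single one is awaited, so requests progress concurrently. Handler contents and timings are illustrative assumptions.

```python
import asyncio

async def handle(request):
    """Stand-in for routing a request and awaiting the MANO acknowledgement."""
    await asyncio.sleep(0.01)  # simulated round-trip to a MANO microservice
    return f"ack:{request}"

async def process_concurrently(requests):
    # gather() starts all handlers before awaiting, so no request blocks another
    return await asyncio.gather(*(handle(r) for r in requests))

acks = asyncio.run(process_concurrently(["req-1", "req-2", "req-3"]))
```

Because the simulated round-trips overlap, total wall time stays near that of a single request rather than growing linearly, which is the throughput benefit the paragraph describes.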
[0150] To facilitate efficient management and tracking of service requests, the ERM (212) may store the received service requests in a database associated with the MANO function (214). The stored information may be used for various purposes, including auditing, performance analysis, and trend identification.
[0151] To optimize network resource utilization and ensure efficient service request processing, the ERM (212) may implement a load balancing mechanism. This mechanism may distribute service requests across multiple instances of the MANO function (214) microservices. The load balancing mechanism may consider factors such as the current workload of each microservice instance, the type of service request, and the urgency of the request to make intelligent distribution decisions.
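The load-balancing decision described above — choosing among eligible microservice instances by current workload — may be sketched as follows. The instance names, workload model, and eligibility scheme are illustrative assumptions.

```python
def pick_instance(instances, request_type):
    """Among instances serving request_type, pick the least-loaded one.

    instances: list of {"name": str, "types": set, "load": int} dicts.
    Returns the chosen instance name, or None if no instance is eligible.
    """
    eligible = [i for i in instances if request_type in i["types"]]
    if not eligible:
        return None
    return min(eligible, key=lambda i: i["load"])["name"]

instances = [
    {"name": "nfvo-1", "types": {"provision"}, "load": 7},
    {"name": "nfvo-2", "types": {"provision"}, "load": 2},
    {"name": "vim-1", "types": {"orchestrate"}, "load": 0},
]
chosen = pick_instance(instances, "provision")
```

A fuller mechanism could weight the selection by request urgency as well, as the paragraph suggests; a least-loaded rule is the simplest workload-aware baseline.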
[0152] The method 700 may provide a robust and efficient mechanism for managing network slices in complex network environments. By leveraging the capabilities of the ERM (212) as an intelligent intermediary between the NSMP (210) and the MANO function (214), the method may enable streamlined communication, efficient resource allocation, and enhanced visibility into the network slice management process. The various features of the method, such as request validation, intelligent routing, load balancing, and real-time analytics, may contribute to optimizing network performance and resource utilization.
[0153] In conclusion, the method 700 for communicating between the NSMP (210) and the MANO function (214) via the ERM (212) may represent a significant advancement in network slice management technology. By providing a sophisticated mechanism for handling service requests, routing them to appropriate microservices, and managing the entire lifecycle of network slice allocation, the method (700) may contribute to the realization of truly dynamic and efficient network infrastructures capable of meeting the diverse and evolving needs of modern communication systems.
[0154] FIG. 9 illustrates a computer system (900) in which or with which the embodiments of the present disclosure may be implemented.
[0155] As shown in FIG. 9, the computer system (900) may include an external storage device (910), a bus (920), a main memory (930), a read-only memory (940), a mass storage device (950), a communication port(s) (960), and a processor (990). A person skilled in the art will appreciate that the computer system (900) may include more than one processor and communication ports. The processor (990) may include various modules associated with embodiments of the present disclosure. The communication port(s) (960) may be any of an RS-232 port for use with a modem-based dialup connection, a 10/100 Ethernet port, a Gigabit or 10 Gigabit port using copper or fiber, a serial port, a parallel port, or other existing or future ports. The communication port(s) (960) may be chosen depending on a network, such as a Local Area Network (LAN), Wide Area Network (WAN), or any network to which the computer system (900) connects.
[0156] In an embodiment, the main memory (930) may be Random Access Memory (RAM), or any other dynamic storage device commonly known in the art. The read-only memory (940) may be any static storage device(s) such as, but not limited to, a Programmable Read Only Memory (PROM) chip for storing static information, e.g., start-up or Basic Input/Output System (BIOS) instructions for the processor (990). The mass storage device (950) may be any current or future mass storage solution, which can be used to store information and/or instructions. Exemplary mass storage solutions include, but are not limited to, Parallel Advanced Technology Attachment (PATA) or Serial Advanced Technology Attachment (SATA) hard disk drives or solid-state drives (internal or external, e.g., having Universal Serial Bus (USB) and/or Firewire interfaces).
[0157] In an embodiment, the bus (920) may communicatively couple the processor(s) (990) with the other memory, storage, and communication blocks. The bus (920) may be, e.g., a Peripheral Component Interconnect (PCI)/PCI Extended (PCI-X) bus, Small Computer System Interface (SCSI), Universal Serial Bus (USB), or the like, for connecting expansion cards, drives, and other subsystems as well as other buses, such as a front side bus (FSB), which connects the processor (990) to the computer system (900).
[0158] In another embodiment, operator and administrative interfaces, e.g., a display, keyboard, and cursor control device may also be coupled to the bus (920) to support direct operator interaction with the computer system (900). Other operator and administrative interfaces can be provided through network connections connected through the communication port(s) (960). Components described above are meant only to exemplify various possibilities. In no way should the aforementioned exemplary computer system (900) limit the scope of the present disclosure.
[0159] The present disclosure provides technical advancement related to network slice management and orchestration in virtualized networks. This advancement addresses the limitations of existing solutions by introducing an Event Routing Manager (ERM) that acts as an intelligent intermediary between the Network Slice Management Platform (NSMP) and the Management and Orchestration (MANO) function. The disclosure involves a communication method utilizing dedicated interfaces and an asynchronous event-based processing model, which offers significant improvements in the efficiency, scalability, and reliability of network slice management. By implementing intelligent request routing, load balancing, and fault tolerance mechanisms, the disclosed invention enhances the flexibility and responsiveness of network resource management, resulting in optimized network performance, reduced operational complexity, and improved ability to meet diverse service requirements in 5G and beyond networks.
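By way of illustration only, the asynchronous event-based processing model described above may be sketched as follows. This is a minimal, non-limiting sketch, not the claimed implementation: the routing keys, request fields, and microservice names are assumptions introduced solely for this example, with the target services loosely modeled on the NFVO, VNFM, and VIM components named in the disclosure.

```python
import asyncio

# Hypothetical routing table mapping request types to target microservices.
# The keys and service names are illustrative assumptions, not part of the claims.
ROUTING_TABLE = {
    "provision": "nfvo",
    "create": "vnfm",
    "terminate": "vnfm",
    "initialize": "vim",
}

async def mano_microservice(name, request):
    # Stand-in for a MANO microservice: fulfil the request and acknowledge it.
    await asyncio.sleep(0)  # simulate asynchronous fulfilment
    return {"request_id": request["id"], "service": name, "ack": "positive"}

async def erm_handle(request):
    """Receive a service request (first interface), route it to the target
    microservice (second interface), and forward the acknowledgement."""
    target = ROUTING_TABLE.get(request["type"])
    if target is None:
        # Unrecognized request: return a negative acknowledgement.
        return {"request_id": request["id"], "ack": "negative"}
    response = await mano_microservice(target, request)
    return response  # forwarded back to the NSMP over the first interface

async def main():
    requests = [
        {"id": 1, "type": "provision"},
        {"id": 2, "type": "terminate"},
        {"id": 3, "type": "unknown"},
    ]
    # Asynchronous event-based model: requests are handled concurrently.
    return await asyncio.gather(*(erm_handle(r) for r in requests))

responses = asyncio.run(main())
```

In this sketch, concurrent handling via `asyncio.gather` stands in for the event-based processing model, and the positive/negative acknowledgement values mirror the two acknowledgement types recited in the claims.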
[0160] While considerable emphasis has been placed herein on the preferred embodiments, it will be appreciated that many other embodiments can be made and that many changes can be made in the preferred embodiments without departing from the principles of the disclosure. These and other changes in the preferred embodiments of the disclosure will be apparent to those skilled in the art from the disclosure herein, whereby it is to be distinctly understood that the foregoing descriptive matter is to be interpreted merely as illustrative of the disclosure and not as a limitation.
ADVANTAGES OF THE PRESENT DISCLOSURE
[0161] The present disclosure provides a more efficient and reliable communication system between the NSMP and the MANO function through the implementation of an Event Routing Manager (ERM), significantly reducing complexity in network slice management.
[0162] The present disclosure enables asynchronous event-based processing, allowing for concurrent handling of multiple service requests, thereby improving system responsiveness and throughput.
[0163] The present disclosure implements intelligent request routing to specific MANO microservices, ensuring optimal resource allocation and improving overall network performance.
[0164] The present disclosure enhances system reliability through built-in fault tolerance and load balancing mechanisms, ensuring continuous operation even in the event of component failures.
[0165] The present disclosure provides real-time status updates and analytics, enabling proactive network management and optimization of resource allocation.
[0166] The present disclosure improves security in network slice management through implemented security measures that ensure the authenticity and integrity of communications.
[0167] The present disclosure offers greater flexibility in network slice customization, allowing for the accommodation of diverse service requirements and use cases.
[0168] The present disclosure enhances data integrity and recovery capabilities through systematic logging and backup synchronization of service requests and responses.
[0169] The present disclosure improves user experience by providing clear success or failure indications for resource allocation requests, enabling users to make informed decisions about their service needs.
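The load-balancing advantage noted above, wherein the ERM distributes received service requests across multiple instances of a MANO network service, may be illustrated by the following minimal round-robin sketch. The instance names and the round-robin policy are assumptions made for this example only; the disclosure does not limit the load-balancing mechanism to any particular policy.

```python
import itertools

class ErmLoadBalancer:
    """Illustrative round-robin distribution of service requests across
    multiple instances of a MANO network service (instance names assumed)."""

    def __init__(self, instances):
        # Cycle endlessly over the available service instances.
        self._cycle = itertools.cycle(instances)

    def dispatch(self, request):
        # Route the request to the next instance in round-robin order.
        instance = next(self._cycle)
        return {"request": request, "routed_to": instance}

balancer = ErmLoadBalancer(["vnfm-1", "vnfm-2", "vnfm-3"])
routed = [balancer.dispatch(f"req-{i}")["routed_to"] for i in range(6)]
```

Round-robin is shown only because it is the simplest policy to sketch; weighted or health-aware policies would serve the same fault-tolerance goal.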
CLAIMS
We Claim:
1. A method (700) for communicating between a Network Slice Management Platform (NSMP) (210) and a Management and Orchestration (MANO) function (214) via an Event Routing Manager (ERM) (212), the method (700) comprising:
receiving (702), by the ERM (212), a service request from the NSMP (210) over a first interface;
analyzing (704), by the ERM (212), the received service request to determine one or more target network services associated with the MANO function (214) for routing the service request;
routing (706), by the ERM (212), the service request to the one or more target network services associated with the MANO function (214) over one or more different interfaces for request fulfillment;
receiving (708), by the ERM (212), at least one response from each of the one or more target network services associated with the MANO function (214) over the one or more different interfaces, wherein the response is an acknowledgement of the service request; and
forwarding (710), by the ERM (212), the response to the NSMP (210) over the first interface.
2. The method (700) of claim 1, wherein the method (700) further comprises:
determining, by the ERM (212), whether the received service request is valid by checking if the received service request exists within a memory of the ERM (212); and
discarding, by the ERM (212), the received service request when the received service request does not exist in the memory of the ERM (212).
3. The method (700) of claim 1, wherein the service request includes at least one of a provision resource request, a create resource request, a terminate resource request, and an initialize resource request.
4. The method (700) of claim 1, wherein the one or more target network services associated with the MANO function (214) are provided by at least one of a Network Functions Virtualization Orchestrator (NFVO), a Virtual Network Function Manager (VNFM), and a Virtualized Infrastructure Manager (VIM).
5. The method (700) of claim 1, wherein the acknowledgement is at least one of:
a positive acknowledgement indicating a successful fulfillment of the service request, and
a negative acknowledgement indicating a failure to fulfill the service request.
6. The method (700) of claim 1, wherein the method (700) further comprises:
generating, by the ERM (212), an update signal indicating a status of the processed service request; and
communicating, by the ERM (212), the update signal to the NSMP (210) via the first interface, providing real-time updates on the request status.
7. The method (700) of claim 1, wherein the first interface and each of the one or more different interfaces are bi-directional interfaces, enabling an asynchronous event-based communication between the NSMP (210), the ERM (212), and the MANO function (214), wherein the first interface is an NSMP_EM interface, and wherein each of the one or more different interfaces is an EM_MS interface.
8. The method (700) of claim 1, further comprising:
storing, by the ERM (212), the received service request in a database associated with the MANO function (214), enabling tracking and management of the received service request(s); and
maintaining, by the ERM (212), a log of the received service requests and their corresponding responses in the database.
9. The method (700) of claim 1, further comprising:
implementing, by the ERM (212), a load balancing mechanism to distribute the received service requests across multiple instances of the MANO function (214) network services.
10. A system (102) for communicating between a Network Slice Management Platform (NSMP) (210) and a Management and Orchestration (MANO) function (214), the system (102) comprising:
an Event Routing Manager (ERM) (212), the ERM (212) comprising:
a transceiver unit (202) configured to receive a service request from the NSMP (210) over a first interface;
a memory (204); and
one or more processor(s) (206) coupled with the transceiver unit (202) to receive the service request and further coupled with the memory (204) to execute a set of instructions stored in the memory (204), wherein the one or more processor(s) (206) are configured to:
analyze the received service request to determine one or more target network services associated with the MANO function (214) for routing the service request;
route the service request to the one or more determined target network services associated with the MANO function (214) over one or more different interfaces for request fulfillment;
receive at least one response from each of the one or more target network services associated with the MANO function (214) over the one or more different interfaces, wherein the response is an acknowledgement of the service request; and
forward the response to the NSMP (210) over the first interface.
11. The system (102) of claim 10, wherein the service request includes at least one of a provision resource request, a create resource request, a terminate resource request, and an initialize resource request.
12. The system (102) of claim 10, wherein the one or more target network services are provided by at least one of a Network Functions Virtualization Orchestrator (NFVO), a Virtual Network Function Manager (VNFM), and a Virtualized Infrastructure Manager (VIM).
13. The system (102) of claim 10, wherein the acknowledgement is at least one of:
a positive acknowledgement indicating a successful fulfillment of the service request, and
a negative acknowledgement indicating a failure to fulfill the service request.
14. The system (102) of claim 10, wherein the ERM (212) is further configured to:
generate an update signal indicating a status of the processed service request; and
communicate the update signal to the NSMP (210) via the first interface, providing real-time updates on the request status.
15. The system (102) of claim 10, wherein the first interface and each of the one or more different interfaces are bi-directional interfaces, enabling an asynchronous event-based communication between the NSMP (210), the ERM (212), and the MANO function (214), wherein the first interface is an NSMP_EM interface, and wherein each of the one or more different interfaces is an EM_MS interface.
16. The system (102) of claim 10, wherein the ERM (212) is further configured to:
store the received service request in a database associated with the MANO function (214), enabling tracking and management of the received service request(s); and
maintain a log of the received service requests and their corresponding responses in the database.
17. The system (102) of claim 10, wherein the ERM (212) is further configured to:
implement a load balancing mechanism to distribute the received service requests across multiple instances of the MANO function (214) network services.
| # | Name | Date |
|---|---|---|
| 1 | 202321072992-STATEMENT OF UNDERTAKING (FORM 3) [26-10-2023(online)].pdf | 2023-10-26 |
| 2 | 202321072992-PROVISIONAL SPECIFICATION [26-10-2023(online)].pdf | 2023-10-26 |
| 3 | 202321072992-FORM 1 [26-10-2023(online)].pdf | 2023-10-26 |
| 4 | 202321072992-FIGURE OF ABSTRACT [26-10-2023(online)].pdf | 2023-10-26 |
| 5 | 202321072992-DRAWINGS [26-10-2023(online)].pdf | 2023-10-26 |
| 6 | 202321072992-DECLARATION OF INVENTORSHIP (FORM 5) [26-10-2023(online)].pdf | 2023-10-26 |
| 7 | 202321072992-FORM-26 [28-11-2023(online)].pdf | 2023-11-28 |
| 8 | 202321072992-Proof of Right [06-03-2024(online)].pdf | 2024-03-06 |
| 9 | 202321072992-DRAWING [23-10-2024(online)].pdf | 2024-10-23 |
| 10 | 202321072992-COMPLETE SPECIFICATION [23-10-2024(online)].pdf | 2024-10-23 |
| 11 | 202321072992-FORM-5 [25-11-2024(online)].pdf | 2024-11-25 |
| 12 | Abstract.jpg | 2025-01-16 |
| 13 | 202321072992-Power of Attorney [24-01-2025(online)].pdf | 2025-01-24 |
| 14 | 202321072992-Form 1 (Submitted on date of filing) [24-01-2025(online)].pdf | 2025-01-24 |
| 15 | 202321072992-Covering Letter [24-01-2025(online)].pdf | 2025-01-24 |
| 16 | 202321072992-CERTIFIED COPIES TRANSMISSION TO IB [24-01-2025(online)].pdf | 2025-01-24 |
| 17 | 202321072992-FORM 3 [24-02-2025(online)].pdf | 2025-02-24 |