
System And Method For Managing Service Requests In A Network

Abstract: The present disclosure envisages a system (108) and a method (600) for managing a service request in a network (106). The method (600) includes receiving at least one service request from a subscriber support system (SSS) (124) using a first interface. The method (600) includes determining (606), by the processing engine (116), at least one network service associated with the at least one received service request based on the one or more extracted parameters. The method (600) includes transmitting the at least one received service request to the at least one determined network service using a second interface. The method (600) includes processing the at least one service request to perform one or more operations associated with the at least one service request. The method (600) includes transmitting at least one response message towards the SSS (124) using the first interface and the second interface. Ref. Fig. 6


Patent Information

Application #
Filing Date
27 October 2023
Publication Number
18/2025
Publication Type
INA
Invention Field
COMPUTER SCIENCE
Status
Email
Parent Application

Applicants

JIO PLATFORMS LIMITED
OFFICE-101, SAFFRON, NR. CENTRE POINT, PANCHWATI 5 RASTA, AMBAWADI, AHMEDABAD 380006, GUJARAT, INDIA

Inventors

1. Aayush Bhatnagar
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
2. Sumit Thakur
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
3. Pramod Jundre
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
4. Arun Maurya
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
5. Ganmesh Koli
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
6. Somendra Singh
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
7. Kuldeep Singh
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India

Specification

DESC:
FORM 2
THE PATENTS ACT, 1970
(39 of 1970)
&
THE PATENTS RULES, 2003

COMPLETE SPECIFICATION
(See section 10 and rule 13)
1. TITLE OF THE INVENTION
SYSTEM AND METHOD FOR MANAGING SERVICE REQUESTS IN A NETWORK
2. APPLICANT(S)
Name Nationality Address
JIO PLATFORMS LIMITED INDIAN Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India
3. PREAMBLE TO THE DESCRIPTION
The following specification particularly describes the invention and the manner in which it is to be performed.


RESERVATION OF RIGHTS
[0001] A portion of the disclosure of this patent document contains material which is subject to intellectual property rights such as, but not limited to, copyright, design, trademark, Integrated Circuit (IC) layout design, and/or trade dress protection, belonging to JIO PLATFORMS LIMITED or its affiliates (hereinafter referred to as the owner). The owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure, as it appears in the Patent and Trademark Office patent files or records, but otherwise reserves all rights whatsoever. All rights to such intellectual property are fully reserved by the owner.
FIELD OF THE DISCLOSURE
[0002] The present disclosure relates generally to the field of wireless communication networks. More particularly, the present disclosure relates to managing service requests in a network by providing an interface for managing communication between a Subscriber Support System (SSS) and a Management and Orchestration (MANO) framework.
DEFINITION
[0003] As used in the present disclosure, the following terms are generally intended to have the meaning as set forth below, except to the extent that the context in which they are used indicates otherwise.
[0004] The expression ‘Management and Orchestration (MANO)’ framework used hereinafter in the specification refers to a framework that manages and orchestrates virtualized network functions (VNFs) and resources in a network functions virtualization (NFV) environment.
[0005] The expression ‘Subscriber Support System (SSS)’ used hereinafter in the specification refers to a system that allows subscribers or service users to manage their accounts, make payments, update information, upgrade subscription and provision and access resources without the need for direct interaction with customer service.
[0006] The expression ‘Event Routing Manager (ERM)’ used hereinafter in the specification refers to an intermediary component that facilitates communication between the SSS (external system) and the MANO framework, routing service requests and responses of network slice allocations and resources allocation and management.
[0007] The expression ‘Network Functions Virtualization (NFV)’ used hereinafter in the specification refers to a network architecture concept that uses virtualization technologies to manage core networking functions via software rather than hardware.
[0008] The expression ‘Virtualized Network Function (VNF)’ used hereinafter in the specification refers to a software implementation of a network function that runs on an NFV environment and can be deployed on a virtual machine.
[0009] The expression ‘Orchestrator’ used hereinafter in the specification refers to a component of the MANO framework responsible for the orchestration and lifecycle management of physical and software resources.
[0010] The expression ‘Container network function (CNF) manager’ used hereinafter in the specification refers to a component that orchestrates and optimizes network functions deployed within containerized environments.
[0011] The expression ‘Virtualized Infrastructure Manager (VIM)’ used hereinafter in the specification refers to a component of the MANO framework responsible for controlling and managing the NFV infrastructure computation, storage, and resources (i.e., network resources).
[0012] The expression ‘NFV Infrastructure (NFVI)’ used hereinafter in the specification refers to a collection of hardware and software components that build an environment in which VNFs are deployed, managed, and executed.
[0013] The expression ‘NFV orchestrator (NFVO)’ used hereinafter in the specification refers to a component within the NFV architecture responsible for the end-to-end orchestration and management of the VNFs and their associated resources.
[0014] The expression ‘event’ used hereinafter in the specification refers to a specific action that can trigger a network element or a system to take a particular action. In an example, the event may include service requests, network traffic, system configuration changes, security incidents, and the like.
[0015] The expression ‘microservice’ is a software architectural style where an application is composed of small, independently deployable services, each responsible for a specific business function. The microservices communicate over well-defined application programming interfaces (APIs) and can be developed, deployed, and scaled independently.
BACKGROUND
[0016] The following description of related art is intended to provide background information pertaining to the field of the disclosure. This section may include certain aspects of the art that may be related to various features of the present disclosure. However, it should be appreciated that this section is to be used only to enhance the understanding of the reader with respect to the present disclosure, and not as an admission of prior art.
[0017] Wireless communication technology has rapidly evolved over the past few decades. The first generation of wireless communication technology was analog technology that offered only voice services. Further, when the second-generation (2G) technology was introduced, text messaging and data services became possible. The third-generation (3G) technology marked the introduction of high-speed internet access, mobile video calling, and location-based services. The fourth-generation (4G) technology revolutionized the wireless communication with faster data speeds, improved network coverage, and security. Currently, the fifth-generation (5G) technology is being deployed, with even faster data speeds, low latency, and the ability to connect multiple devices simultaneously. The sixth generation (6G) technology promises to build upon these advancements, pushing the boundaries of wireless communication even further. While the 5G technology is still being rolled out globally, research and development into the 6G are rapidly progressing, with the aim of revolutionizing the way we connect and interact with technology.
[0018] With the rapid development of wireless communication technology, there is a corresponding need for development in cloud computing. Business systems are gradually migrating to cloud platforms and are starting to provide business services for users through virtual hosts. With gradually increasing demand from users for cost reduction and improved business configuration flexibility, the use of containers on physical hosts for providing various applications is becoming more extensive.
[0019] A subscriber support system (SSS) is a system that allows subscribers or service users to manage their accounts, make payments, update information, and upgrade subscriptions and provisions. The SSS further enables the subscribers or the service users to access resources without direct interaction with customer service. The subscribers or the service users can request services from the SSS depending on their own or their organization's defined needs.
[0020] Management and Orchestration (MANO) is a framework for managing and orchestrating all network resources in the cloud platform. The MANO framework includes computing, networking, storage, containers, and virtual machine (VM) resources. The MANO is a key element of the European Telecommunications Standards Institute (ETSI) network functions virtualization (NFV) architecture. The MANO framework is required to coordinate network resources for cloud-based applications and the lifecycle management of virtualized network functions (VNFs) and network services.
[0021] The existing interfaces between the SSS and the MANO framework are rigidly designed for specific microservices, leading to the need for multiple interfaces to meet various requirements. This results in a complex and expensive solution.
[0022] Hence, there is a need for an interface that can provide automation and improved scalability between the SSS and the MANO framework.
SUMMARY OF THE DISCLOSURE
[0023] In an exemplary embodiment, the present disclosure relates to a system for managing a service request in a network. The system includes a receiving unit configured to receive at least one service request from subscriber support system (SSS) using a first interface. A processing engine is coupled with the receiving unit to receive the at least one service request and is further coupled with the memory to execute a set of instructions stored in the memory. The processing engine is configured to analyze the at least one received service request to extract one or more parameters from the at least one received service request. The processing engine is configured to determine at least one network service associated with the at least one received service request based on the one or more extracted parameters. The processing engine is configured to transmit the at least one received service request to the at least one determined network service using a second interface. The processing engine is configured to process, by the at least one determined network service, the at least one received service request to perform one or more operations associated with the at least one received service request. The processing engine is configured to transmit, by the at least one determined network service, at least one response message towards the SSS using the first interface and the second interface.
[0024] In an embodiment, the processing engine is configured to transmit the at least one response message indicating a status of the at least one processed service request, wherein the status of the at least one processed service request comprises at least one of a success status and a failure status.
[0025] In an embodiment, the processing engine is configured to analyze, by at least one network service, one or more key performance indicators (KPIs) associated with the at least one network element, compare the analyzed one or more KPIs with one or more threshold values, and provision the one or more operations when the one or more KPIs exceed or fall below the one or more threshold values.
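The KPI comparison described in this embodiment can be sketched as follows. This is an illustrative sketch only: the function name, KPI keys, and threshold values are hypothetical assumptions, not taken from the specification.

```python
# Hypothetical sketch of the KPI-threshold check described above.
# KPI names and threshold values are illustrative, not from the specification.

def provision_needed(kpis, thresholds):
    """Return the names of KPIs that exceed or fall below their bounds."""
    breached = []
    for name, value in kpis.items():
        low, high = thresholds.get(name, (float("-inf"), float("inf")))
        # Provision the one or more operations when a KPI exceeds
        # or falls below its configured threshold values.
        if value < low or value > high:
            breached.append(name)
    return breached

# Example: CPU utilisation above its upper bound triggers provisioning.
kpis = {"cpu_util": 0.93, "latency_ms": 40.0}
thresholds = {"cpu_util": (0.0, 0.85), "latency_ms": (0.0, 100.0)}
print(provision_needed(kpis, thresholds))  # -> ['cpu_util']
```

In practice the thresholds would come from network-service configuration rather than literals; the sketch only illustrates the compare-then-provision decision.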
[0026] In an embodiment, the one or more operations associated with the at least one received service request comprises at least one of a resource provision, a resource creation, a resource initialization and resource termination.
[0027] In an embodiment, the first interface includes a subscriber support system_event routing manager (SSS_EM) interface, and the second interface includes an event routing manager_microservice (EM_MS) interface.
[0028] In an embodiment, the one or more extracted parameters include at least one of a network element identifier (ID) and a request type.
[0029] In an exemplary embodiment, the present disclosure relates to a method for managing a service request in a network. The method includes receiving, by a receiving unit, at least one service request related to at least one network element from a subscriber support system (SSS) using a first interface. The method includes analyzing, by a processing engine, the at least one received service request to extract one or more parameters from the at least one received service request. The method includes determining, by the processing engine, at least one network service associated with the at least one received service request based on the one or more extracted parameters. The method includes transmitting, by the processing engine, the at least one received service request to the at least one determined network service using a second interface. The method includes processing, by the at least one determined network service, the at least one received service request to perform one or more operations associated with the at least one received service request. The method includes transmitting, by the at least one determined network service, at least one response message towards the SSS using the first interface and the second interface.
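The receive–analyze–determine–transmit–process–respond flow summarized above can be sketched as a minimal event-routing component. All class and handler names, the request fields, and the registry mechanism are hypothetical assumptions for illustration; the specification does not prescribe an implementation.

```python
# Minimal, hypothetical sketch of the flow in the summary: receive a service
# request over the first (SSS_EM) interface, extract parameters, determine the
# target network service, forward it over the second (EM_MS) interface, and
# relay a success/failure response back towards the SSS.

SUCCESS, FAILURE = "success", "failure"

class EventRoutingManager:
    def __init__(self):
        # Registry mapping a request type to a network-service handler
        # (stands in for the second, EM_MS interface).
        self._services = {}

    def register(self, request_type, handler):
        self._services[request_type] = handler

    def handle_request(self, request):
        # Analyze the received request to extract one or more parameters.
        element_id = request.get("network_element_id")
        request_type = request.get("request_type")
        # Determine the network service associated with the request.
        handler = self._services.get(request_type)
        if handler is None or element_id is None:
            # Response message indicating a failure status.
            return {"status": FAILURE, "reason": "no matching network service"}
        # The determined service performs the operation (e.g., resource
        # creation) and its result is relayed back towards the SSS.
        result = handler(element_id)
        return {"status": SUCCESS, "result": result}

# Usage: register a hypothetical resource-creation service and route a request.
erm = EventRoutingManager()
erm.register("resource_creation", lambda eid: f"resource created on {eid}")
resp = erm.handle_request({"network_element_id": "NE-42",
                           "request_type": "resource_creation"})
print(resp["status"])  # -> success
```

The registry-based dispatch is one plausible reading of "determining at least one network service based on the extracted parameters"; a real ERM would route over network interfaces rather than in-process callables.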
[0030] The foregoing general description of the illustrative embodiments and the following detailed description thereof are merely exemplary aspects of the teachings of this disclosure and are not restrictive.
OBJECTIVES OF THE DISCLOSURE
[0031] Some of the objectives of the present disclosure, which at least one embodiment herein satisfies, are as follows:
[0032] An objective of the present disclosure is to manage communication between a Subscriber Support System (SSS) (an external system) and a Management and Orchestration (MANO) framework using an Event Routing Manager (ERM).
[0033] Another objective of the present disclosure is to provide advanced automation features and improved scalability to existing MANO frameworks by facilitating new functionalities and capabilities for the existing MANO frameworks using the ERM.
[0034] Another objective of the present disclosure is to promote interoperability between the SSS and the MANO framework, making it easier for the SSS and the MANO framework to work seamlessly together.
[0035] Another objective of the present disclosure is to support the SSS in enhancing their operations and seamlessly integrating with the MANO framework by utilizing interfaces associated with the ERM. The integration of the SSS and the MANO framework enables the SSS to access a wide range of functions supported by the MANO framework.
[0036] Another objective of the present disclosure is to automate additional resource creation processes based on a user request.
[0037] Other objectives and advantages of the present disclosure will be more apparent from the following description, which is not intended to limit the scope of the present disclosure.
BRIEF DESCRIPTION OF THE ACCOMPANYING DRAWING
[0038] The accompanying drawings, which are incorporated herein, and constitute a part of this disclosure, illustrate exemplary embodiments of the disclosed methods and systems in which like reference numerals refer to the same parts throughout the different drawings. Components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present disclosure. Some drawings may indicate the components using block diagrams and may not represent the internal circuitry of each component. It will be appreciated by those skilled in the art that disclosure of such drawings includes disclosure of electrical components, electronic components or circuitry commonly used to implement such components.
[0039] FIG. 1A illustrates an exemplary network architecture for implementing a system for managing a service request in a network, in accordance with an embodiment of the present disclosure.
[0040] FIG. 1B illustrates an exemplary block diagram of the system for managing the service request in the network, in accordance with an embodiment of the present disclosure.
[0041] FIG. 1C illustrates an exemplary system architecture for managing the service request in the network, in accordance with an embodiment of the present disclosure.
[0042] FIG. 2 illustrates an exemplary flow diagram of a method for managing the service request in the network, in accordance with an embodiment of the present disclosure.
[0043] FIG. 3 illustrates another exemplary flow diagram of the method for managing the service request in the network, in accordance with an embodiment of the present disclosure.
[0044] FIG. 4 illustrates an exemplary Management and Orchestration (MANO) framework architecture, in accordance with embodiments of the present disclosure.
[0045] FIG. 5 illustrates a computer system in which or with which the embodiments of the present disclosure may be implemented.
[0046] FIG. 6 illustrates another exemplary flow diagram of the method for managing the service request in the network, in accordance with an embodiment of the present disclosure.
[0047] The foregoing shall be more apparent from the following detailed description of the disclosure.
LIST OF REFERENCE NUMERALS
100A - Network Architecture
102-1, 102-2…102-N – Plurality of Users
104-1, 104-2…104-N – Plurality of User Equipments
106 – Network
108 – System
100B – Block Diagram
110 – Processor(s)
112 - Memory
114 – Plurality of Interfaces
116 – Processing Engine
118 – Receiving unit
120 – Database
100C – System architecture
122 - A Plurality of Network Elements
124 - Subscriber Support System (SSS)
126 - Event Routing Manager (ERM)
128 - Management and Orchestration (MANO)
200, 300, 600 - Flow Diagram
400 - Management and Orchestration (MANO) framework architecture
500 - Computer System
510 - External Storage Device
520 - Bus
530 - Main Memory
540 - Read-Only Memory
550 - Mass Storage Device
560 - Communication Ports
570 – Processor
DETAILED DESCRIPTION
[0048] In the following description, for the purposes of explanation, various specific details are set forth to provide a thorough understanding of embodiments of the present disclosure. It will be apparent, however, that embodiments of the present disclosure may be practiced without these specific details. Several features described hereafter can each be used independently of one another or with any combination of other features. An individual feature may not address any of the problems discussed above or might address only some of the problems discussed above. Some of the problems discussed above might not be fully addressed by any of the features described herein. Example embodiments of the present disclosure are described below, as illustrated in various drawings in which like reference numerals refer to the same parts throughout the different drawings.
[0049] The ensuing description provides exemplary embodiments only, and is not intended to limit the scope, applicability, or configuration of the disclosure. Rather, the ensuing description of the exemplary embodiments will provide those skilled in the art with an enabling description for implementing an exemplary embodiment. It should be understood that various changes may be made in the function and arrangement of elements without departing from the spirit and scope of the disclosure as set forth.
[0050] Specific details are given in the following description to provide a thorough understanding of the embodiments. However, it will be understood by one of ordinary skill in the art that the embodiments may be practiced without these specific details. For example, circuits, systems, networks, processes, and other components may be shown as components in block diagram form in order not to obscure the embodiments in unnecessary detail. In other instances, well-known circuits, processes, algorithms, structures, and techniques may be shown without unnecessary detail to avoid obscuring the embodiments.
[0051] Also, it is noted that individual embodiments may be described as a process that is depicted as a flowchart, a flow diagram, a data flow diagram, a structure diagram, or a block diagram. Although a flowchart may describe the operations as a sequential process, many of the operations can be performed in parallel or concurrently. In addition, the order of the operations may be re-arranged. A process is terminated when its operations are completed but could have additional steps not included in a figure. A process may correspond to a method, a function, a procedure, a subroutine, a subprogram, etc. When a process corresponds to a function, its termination can correspond to a return of the function to the calling function or the main function.
[0052] The word “exemplary” and/or “demonstrative” is used herein to mean serving as an example, instance, or illustration. For the avoidance of doubt, the subject matter disclosed herein is not limited by such examples. In addition, any aspect or design described herein as “exemplary” and/or “demonstrative” is not necessarily to be construed as preferred or advantageous over other aspects or designs, nor is it meant to preclude equivalent exemplary structures and techniques known to those of ordinary skill in the art. Furthermore, to the extent that the terms “includes,” “has,” “contains,” and other similar words are used in either the detailed description or the claims, such terms are intended to be inclusive like the term “comprising” as an open transition word without precluding any additional or other elements.
[0053] Reference throughout this specification to “one embodiment” or “an embodiment” or “an instance” or “one instance” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present disclosure. Thus, the appearances of the phrases “in one embodiment” or “in an embodiment” in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
[0054] The terminology used herein is to describe embodiments only and is not intended to limit the disclosure. As used herein, the singular forms “a”, “an”, and “the” are intended to include the plural forms as well, unless the context indicates otherwise. It will be further understood that the terms “comprises” and/or “comprising”, when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. As used herein, the term “and/or” includes any combinations of one or more of the associated listed items. It should be noted that the terms “mobile device”, “user equipment”, “user device”, “communication device”, “device” and similar terms are used interchangeably for the purpose of describing the invention. These terms are not intended to limit the scope of the invention or imply any specific functionality or limitations on the described embodiments. The use of these terms is solely for convenience and clarity of description. The invention is not limited to any device or equipment, and it should be understood that other equivalent terms or variations thereof may be used interchangeably without departing from the scope of the invention as defined herein.
[0055] As used herein, an “electronic device” or “portable electronic device” or “user device” or “communication device” or “user equipment” or “device” refers to any electrical, electronic, electromechanical, or computing device. The user device can receive and/or transmit one or more parameters, perform functions, communicate with other user devices, and transmit data to the other user devices. The user equipment may have a processor, a display, a memory, a battery, and an input means such as a hard keypad and/or a soft keypad. The user equipment may be capable of operating on any radio access technology including, but not limited to, IP-enabled communication, ZigBee, Bluetooth, Bluetooth Low Energy, Near Field Communication, Z-Wave, Wi-Fi, Wi-Fi Direct, etc. For instance, the user equipment may include, but is not limited to, a mobile phone, a smartphone, virtual reality (VR) devices, augmented reality (AR) devices, a laptop, a general-purpose computer, a desktop, a personal digital assistant, a tablet computer, a mainframe computer, or any other device as may be obvious to a person skilled in the art for implementation of the features of the present disclosure.
[0056] Further, the user device may also comprise a “processor” or “processing unit”, wherein the processor refers to any logic circuitry for processing instructions. The processor may be a general-purpose processor, a special-purpose processor, a conventional processor, a digital signal processor, a plurality of microprocessors, one or more microprocessors in association with a DSP core, a controller, a microcontroller, Application Specific Integrated Circuits, Field Programmable Gate Array circuits, any other type of integrated circuit, etc. The processor may perform signal coding, data processing, input/output processing, and/or any other functionality that enables the working of the system according to the present disclosure. More specifically, the processor is a hardware processor.
[0057] As portable electronic devices and wireless technologies continue to improve and grow in popularity, the advancing wireless technologies for data transfer are also expected to evolve and replace the older generations of technologies. In the field of wireless data communications, the dynamic advancement of various generations of cellular technology is also seen. The development, in this respect, has been incremental in the order of second generation (2G), third generation (3G), fourth generation (4G), and now fifth generation (5G), and more such generations are expected to continue in the forthcoming time.
[0058] Radio Access Technology (RAT) refers to the technology used by mobile devices/ user equipment (UE) to connect to a cellular network. It refers to the specific protocol and standards that govern the way devices communicate with base stations, which are responsible for providing the wireless connection. Further, each RAT has its own set of protocols and standards for communication, which define the frequency bands, modulation techniques, and other parameters used for transmitting and receiving data. Examples of RATs include GSM (Global System for Mobile Communications), CDMA (Code Division Multiple Access), UMTS (Universal Mobile Telecommunications System), LTE (Long-Term Evolution), and 5G. The choice of RAT depends on a variety of factors, including the network infrastructure, the available spectrum, and the mobile device's/device's capabilities. Mobile devices often support multiple RATs, allowing them to connect to several types of networks and provide optimal performance based on the available network resources.
[0059] While considerable emphasis has been placed herein on the components and component parts of the preferred embodiments, it will be appreciated that many embodiments can be made and that many changes can be made in the preferred embodiments without departing from the principles of the disclosure. These and other changes in the preferred embodiment as well as other embodiments of the disclosure will be apparent to those skilled in the art from the disclosure herein, whereby it is to be distinctly understood that the foregoing descriptive matter is to be interpreted merely as illustrative of the disclosure and not as a limitation.
[0060] As wireless technologies advance, there is a need to cope with the 5G requirements and deliver a prominent level of service to the customers. Thus, faster communication between the network elements of a 5G communication network is becoming crucial day by day. For example, the existing interfaces used to connect a subscriber support system (SSS) and a management and orchestration (MANO) framework in a telecommunication network have a rigid design and are limited to a defined network service only. Therefore, several interfaces are required to provide connectivity according to the requirements, yielding a complex and costly solution. Further, modifying the traditional interfaces may lead to unpredictable behavior of the network resource implementing them.
[0061] Hence, there is a need for an interface that can provide enhanced connectivity between the SSS and the MANO, and that can be used to automate resource creation process.
[0062] The present disclosure aims to overcome the above-mentioned and other existing problems in this field of technology by providing a subscriber support system_event routing manager (SSS_EM) interface for the MANO that can provide enhanced connectivity between the SSS and the MANO.
[0063] The various embodiments throughout the disclosure will be explained in more detail with reference to FIG. 1A – FIG. 6.
[0064] FIG. 1A illustrates an exemplary network architecture (100A) for implementing a system (108) for managing a service request in a network (106), in accordance with an embodiment of the present disclosure.
[0065] As illustrated in FIG. 1A, the network architecture (100A) may include one or more user equipments (UEs) (104-1, 104-2…104-N) associated with one or more users (102-1, 102-2…102-N) in an environment. A person of ordinary skill in the art will understand that the one or more users (102-1, 102-2…102-N) may be collectively referred to as the users (102). Similarly, a person of ordinary skill in the art will understand that the one or more UEs (104-1, 104-2…104-N) may be collectively referred to as the UE (104). Although only three UEs (104) are depicted in FIG. 1A, any number of UEs (104) may be included without departing from the scope of the ongoing description.
[0066] In an embodiment, the UE (104) may include smart devices operating in a smart environment, for example, an Internet of Things (IoT) system. In such an embodiment, the UE (104) may include, but are not limited to, smartphones, smart watches, smart sensors (e.g., mechanical, thermal, electrical, magnetic, etc.), networked appliances, networked peripheral devices, networked lighting system, communication devices, networked vehicle accessories, networked vehicular devices, smart accessories, tablets, smart television (TV), computers, smart security system, smart home system, other devices for monitoring or interacting with or for the users (102) and/or entities, or any combination thereof. A person of ordinary skill in the art will appreciate that the UE (104) may include, but not limited to, intelligent, multi-sensing, network-connected devices, which may integrate seamlessly with each other and/or with a central server or a cloud-computing system or any other device that is network-connected.
[0067] Additionally, in some embodiments, the UE (104) may include, but is not limited to, a handheld wireless communication device (e.g., a mobile phone, a smartphone, a phablet device, and so on), a wearable computer device (e.g., a head-mounted display computer device, a head-mounted camera device, a wristwatch computer device, and so on), a Global Positioning System (GPS) device, a laptop computer, a tablet computer, or another type of portable computer, a media playing device, a portable gaming system, and/or any other type of computer device with wireless communication capabilities, and the like. In an embodiment, the UE (104) may include, but is not limited to, any electrical, electronic, or electromechanical equipment, or a combination of one or more of the above devices, such as virtual reality (VR) devices, augmented reality (AR) devices, a laptop, a general-purpose computer, a desktop, a personal digital assistant, a tablet computer, a mainframe computer, or any other computing device, wherein the UE (104) may include one or more in-built or externally coupled accessories including, but not limited to, a visual aid device such as a camera, an audio aid, a microphone, a keyboard, and input devices for receiving input from the user (102) or the entity, such as a touchpad, a touch-enabled screen, an electronic pen, and the like. A person of ordinary skill in the art will appreciate that the UE (104) may not be restricted to the mentioned devices and various other devices may be used.
[0068] Referring to FIG. 1A, the UE (104) may communicate with the system (108) through the network (106) for sending or receiving various types of data. In an embodiment, the network (106) may include at least one of a 5G network, 6G network, or the like. The network (106) may enable the UE (104) to communicate with other devices in the network architecture (100A) and/or with the system (108). The network (106) may include a wireless card or some other transceiver connection to facilitate this communication. In another embodiment, the network (106) may be implemented as, or include any of a variety of different communication technologies such as a wide area network (WAN), a local area network (LAN), a wireless network, a mobile network, a Virtual Private Network (VPN), the Internet, the Public Switched Telephone Network (PSTN), or the like.
[0069] In an embodiment, the network (106) may include, by way of example but not limitation, at least a portion of one or more networks having one or more nodes that transmit, receive, forward, generate, buffer, store, route, switch, process, or a combination thereof, etc. one or more messages, packets, signals, waves, voltage or current levels, some combination thereof, or so forth. The network (106) may also include, by way of example but not limitation, one or more of a radio access network (RAN), a wireless network, a wired network, an internet, an intranet, a public network, a private network, a packet-switched network, a circuit-switched network, an ad hoc network, an infrastructure network, a Public-Switched Telephone Network (PSTN), a cable network, a cellular network, a satellite network, a fiber optic network, or some combination thereof.
[0070] In an embodiment, the UE (104) is communicatively coupled with the network (106). The network (106) may receive a connection request from the UE (104). The network (106) may send an acknowledgment of the connection request to the UE (104). The UE (104) may transmit a plurality of signals in response to the connection request.
[0071] Although FIG. 1A shows exemplary components of the network architecture (100A), in other embodiments, the network architecture (100A) may include fewer components, different components, differently arranged components, or additional functional components than depicted in FIG. 1A. Additionally, or alternatively, one or more components of the network architecture (100A) may perform functions described as being performed by one or more other components of the network architecture (100A).
[0072] FIG. 1B illustrates an exemplary block diagram (100B) of the system (108) for managing the service request in the network (106), in accordance with an embodiment of the present disclosure.
[0073] Referring to FIG. 1B, in an embodiment, the system (108) may include one or more processor(s) (110), a memory (112), a plurality of interface(s) (114), a processing engine (116), a receiving unit (118) and a database (120). The one or more processor(s) (110) may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, logic circuitries, and/or any devices that process data based on operational instructions. Among other capabilities, the one or more processor(s) (110) may be configured to fetch and execute computer-readable instructions stored in the memory (112) of the system (108). The memory (112) may be configured to store one or more computer-readable instructions or routines in a non-transitory computer readable storage medium, which may be fetched and executed to create or share data packets over a network service. The memory (112) may include any non-transitory storage device including, for example, volatile memory such as random-access memory (RAM), or non-volatile memory such as erasable programmable read only memory (EPROM), flash memory, and the like.
[0074] In an embodiment, the interface(s) (114) may include a variety of interfaces, for example, interfaces for data input and output devices (I/O), storage devices, and the like. The interface(s) (114) may facilitate communication through the system (108). The interface(s) (114) may also provide a communication pathway for one or more components of the system (108). Examples of such components include, but are not limited to, a processing engine (116) and a database (120).
[0075] The processing engine (116) may be implemented as a combination of hardware and programming (for example, programmable instructions) to implement one or more functionalities of the processing engine (116). In the examples described herein, such combinations of hardware and programming may be implemented in several different ways. For example, the programming for the processing engine (116) may be processor-executable instructions stored on a non-transitory machine-readable storage medium and the hardware for the processing engine (116) may comprise a processing resource (for example, one or more processors), to execute such instructions. In the present examples, the machine-readable storage medium may store instructions that, when executed by the processing resource, implement the processing engine (116). In such examples, the system may comprise the machine-readable storage medium storing the instructions and the processing resource to execute the instructions, or the machine-readable storage medium may be separate but accessible to the system and the processing resource. In other examples, the processing engine (116) may be implemented by electronic circuitry. In an embodiment, the database (120) includes data that may be either stored or generated as a result of functionalities implemented by any of the components of the processor (110) or the processing engine (116).
[0076] In an embodiment, the receiving unit (118) is configured to receive at least one service request related to at least one network element (122-1, 122-2) from a subscriber support system (SSS) (124) using a first interface. A person of ordinary skill in the art will understand that the at least one network element (122-1, 122-2) may be collectively referred to as the network element (122). In an embodiment, the at least one network element includes a router, a switch, a firewall, a load balancer, a server, etc. The at least one network element (122) is created using containers orchestrated by at least one microservice, with applications deployed on these containers. The SSS (124) continuously monitors the performance and status of the at least one network element (122). The at least one network element (122) can be provisioned on at least one microservice or other internal systems utilizing physical servers for the application. As the at least one network element (122) communicates with the SSS (124), if any issues are detected, the SSS (124) generates a ticket through the appropriate interface, and the at least one microservice is engaged to resolve the issue by provisioning necessary resources or taking corrective actions. When subscribers request new services or modifications (e.g., via a customer portal or customer support), the SSS (124) communicates these requests to the at least one microservice. In response, the at least one microservice may provision or deactivate the necessary network resources, such as switches, routers, bandwidth, or network slices, ensuring that the at least one service request is fulfilled efficiently. In an embodiment, the SSS (124) (an external system) allows subscribers or service users to manage their accounts, make payments, update information, and upgrade subscriptions and provisions. The SSS (124) further enables the subscribers or the service users to access resources without direct interaction with customer service.
The subscribers or the service users can request services from the SSS (124) depending on their own or their organization's defined needs. In an embodiment, the SSS (124) initiates the service request to allocate or manage the necessary resources from a plurality of network elements (122-1, 122-2).
[0077] In an embodiment, the first interface denotes a communication channel or interaction point established between the receiving unit (118) and the SSS (124). In an embodiment, the first interface facilitates receipt and processing of the at least one service request by the receiving unit (118) from the SSS (124). In an embodiment, the first interface encompasses various types of communication protocols and standards, including but not limited to, application programming interfaces (APIs), network sockets, or dedicated communication ports. In an embodiment, the first interface is designed to enable bidirectional communication between the receiving unit (118) and the SSS (124). In an embodiment, the first interface may support the transmission of data packets, messages, or service requests in a structured format, ensuring compatibility with the receiving unit’s processing capabilities. In an embodiment, the receiving unit (118) may employ predefined communication protocols associated with the first interface, for example, hypertext transfer protocol (HTTP)/hypertext transfer protocol secure (HTTPS) for web-based APIs, transmission control protocol (TCP) / internet protocol (IP) for socket-based communications. In an embodiment, upon initiation of the at least one service request from the SSS (124), the first interface establishes a communication session with the receiving unit (118). In an embodiment, the at least one service request is conveyed through the communication channel. In one embodiment, the first interface may be realized as a RESTful API endpoint. The RESTful API endpoint is a specific uniform resource locator (URL) where a RESTful web service receives HTTP requests to perform operations on resources, such as retrieving, creating, updating, or deleting data. 
In an embodiment, the at least one service request includes a request type that specifies the action to be performed, such as configuring a device, retrieving status information, or applying updates. For example, the at least one service request might request a configuration change for a router. In an embodiment, the at least one service request includes a network element identifier (ID) that identifies the network element (122) that the request pertains to, such as the unique ID or internet protocol (IP) address of the router, switch, or firewall. In one embodiment, each service request is transmitted as an HTTP POST or GET request containing relevant parameters and payload. In one embodiment, the first interface includes a subscriber support system_event routing manager (SSS_EM) interface.
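By way of illustration only, the service request described above may be sketched as follows. The field names, values, and helper function below are hypothetical assumptions chosen for demonstration and are not taken from the disclosure.

```python
import json

# A minimal sketch of assembling a service request body as it might be
# conveyed over the first (SSS_EM) interface via an HTTP POST. All field
# names here are illustrative assumptions.
def build_service_request(network_element_id, request_type, payload):
    """Assemble a structured (JSON) service request for an HTTP POST body."""
    return json.dumps({
        "network_element_id": network_element_id,  # e.g., router ID or IP address
        "request_type": request_type,              # action to perform, e.g., "CONFIGURE"
        "payload": payload,                        # operation-specific parameters
    })

# Example: a configuration change request for a router
body = build_service_request("router-42", "CONFIGURE", {"bandwidth_mbps": 100})
```

In practice, such a body could be sent to a RESTful API endpoint of the kind described above; the exact schema would depend on the implementation.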
[0078] In an embodiment, the processing engine (116) is coupled with the receiving unit (118) and the memory (112) and is configured to analyze the at least one received service request to extract one or more parameters from the at least one received service request. The processing engine (116) is configured to determine at least one network service, which may be interchangeably referred to as at least one microservice, associated with the at least one received service request. In an embodiment, the one or more extracted parameters include at least one of the network element identifier (ID) and a request type. In an embodiment, the processing engine (116) extracts the one or more parameters by parsing and analyzing the service request. The extraction allows the processing engine (116) to identify and transmit the at least one received service request to at least one microservice capable of handling the specified at least one network element and the requested operation. During parsing, the processing engine (116) isolates the network element ID and the request type from the service request data. The extraction is performed using predefined rules or formats that specify where these parameters are located within the at least one service request. For example, if the at least one service request is formatted in JavaScript Object Notation (JSON), the processing engine (116) extracts parameters by parsing the JSON object to retrieve values associated with keys like the network element ID and the request type. Once the one or more parameters are extracted, the processing engine (116) uses the extracted parameters to identify the at least one microservice.
For instance, the processing engine (116) may match the network element ID to a microservice designed to manage that specific type of network element, or the processing engine (116) may route the at least one service request based on the operation or action type specified in the at least one service request. The processing engine (116) is configured to analyse, by the at least one network service, one or more key performance indicators (KPIs) associated with the at least one network element (122) to continuously track the performance of the network or application. The one or more KPIs may include metrics such as bandwidth usage, response time, latency, the number of active connections, and Central Processing Unit (CPU), Random Access Memory (RAM), Disk Input/Output (Disk I/O), or Network Input/Output (Network I/O) usage. The processing engine (116) compares the monitored one or more KPIs with one or more threshold values. If the comparison reveals that the analysed one or more KPIs exceed or fall below the one or more threshold values, indicating a performance issue or inefficiency, the processing engine (116) automatically provisions the one or more operations to address the problem. The one or more operations may include allocating additional resources, scaling up the service by spawning more containers, or initiating corrective actions to restore optimal performance without manual intervention. This ensures that the system (108) maintains the desired level of service quality by dynamically responding to changes in network conditions or service demands. Monitoring a KPI parameter, such as CPU usage, is critical for maintaining system performance and preventing resource bottlenecks. For example, if the KPI being monitored is CPU usage, a predefined threshold can be set. In this case, if the CPU usage exceeds 80%, it triggers an automatic action, such as spawning a new resource or scaling up system capacity to handle the increased load.
This approach ensures that the system (108) remains responsive and efficient, preventing potential slowdowns or outages caused by resource exhaustion. Monitoring thresholds like this help maintain optimal performance and resource allocation in real-time. In another example, when the user (102) complains about slow speed or performance, the processing engine (116) compares the request against predefined metrics. Suppose the system can handle up to 1,000 requests per second, but the KPIs indicate that the system is receiving more than 1,000 requests per second. In this case, based on the KPI data, the processing engine (116) automatically provisions additional resources to handle the increased load. The resource scaling may involve spawning more containers and distributing incoming requests across them, ensuring optimal throughput. Whenever a subscriber submits a request or complaint, such as slow download speeds, the processing engine (116) checks the associated KPIs for that service. If the KPIs reveal a performance issue and meet predefined thresholds or rules for scaling, the processing engine (116) automatically takes action by allocating more resources or containers to resolve the complaint. This automated process ensures that service quality is maintained without manual intervention.
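The KPI threshold comparison described above can be sketched as a simple rule check. The metric names, threshold values, and operation labels below are illustrative assumptions, not values specified by the disclosure.

```python
# Illustrative sketch of comparing monitored KPIs against predefined
# thresholds and deriving corrective operations. Thresholds and names
# are assumptions for demonstration only.
CPU_THRESHOLD_PERCENT = 80.0        # e.g., CPU usage above 80% triggers scaling
MAX_REQUESTS_PER_SECOND = 1000      # e.g., system capacity of 1,000 requests/sec

def evaluate_kpis(kpis):
    """Return the list of corrective operations to provision, if any."""
    operations = []
    if kpis.get("cpu_usage_percent", 0) > CPU_THRESHOLD_PERCENT:
        # Spawn a new container to relieve CPU pressure
        operations.append("spawn_container")
    if kpis.get("requests_per_second", 0) > MAX_REQUESTS_PER_SECOND:
        # Distribute incoming requests across additional instances
        operations.append("add_load_balancer_target")
    return operations
```

A monitoring loop would feed live KPI samples into such a check and hand the resulting operations to the provisioning path, so scaling happens without manual intervention.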
[0079] In an embodiment, the processing engine (116) is configured to transmit the at least one received service request to the at least one determined microservice using a second interface. In an embodiment, the second interface comprises an event routing manager_microservice (EM_MS) interface, which may be interchangeably referred to as an event routing manager_Management and Orchestration (MANO) service (EM_MS) interface. The processing engine (116) utilizes a routing table or mapping system that associates the one or more extracted parameters with the at least one determined microservice. In an embodiment, the processing engine (116) uses a service discovery mechanism to locate the determined microservice dynamically. The service discovery mechanism involves querying a service registry to obtain a current address and endpoint of the at least one microservice. The endpoint of the at least one microservice refers to the specific uniform resource locator (URL) or network address where the at least one microservice listens for incoming requests. The processing engine (116) establishes a communication channel using the second interface (EM_MS interface) with the at least one determined microservice using an application programming interface (API) call, messaging queues, or direct network connections. The at least one service request is forwarded through the communication channel to the at least one determined microservice. The at least one service request is transmitted in a structured format, such as JSON or extensible markup language (XML), that the at least one determined microservice can process. The at least one response message is transmitted through the second interface using defined communication protocols, such as HTTP/HTTPS, message queues, or other network protocols. For example, the at least one microservice might send a response with a status code of “200 OK” if the update was successful or “400 Bad Request” if there was an error.
The SSS (124) receives the response and processes it accordingly.
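The parameter-extraction and routing steps above can be sketched as a registry lookup. The registry contents, endpoint URLs, and field names below are hypothetical stand-ins for the service discovery mechanism described in the disclosure.

```python
import json

# Hypothetical service registry mapping a request type to the endpoint of
# the microservice that handles it, standing in for dynamic service
# discovery. URLs and keys are illustrative assumptions.
SERVICE_REGISTRY = {
    "CONFIGURE": "http://config-ms.internal/api/v1/requests",
    "STATUS": "http://status-ms.internal/api/v1/requests",
}

def route_service_request(raw_request):
    """Parse a JSON service request and resolve the target endpoint."""
    request = json.loads(raw_request)                 # parse structured request
    element_id = request["network_element_id"]        # extracted parameter 1
    request_type = request["request_type"]            # extracted parameter 2
    endpoint = SERVICE_REGISTRY[request_type]         # service discovery lookup
    return endpoint, element_id
```

A real implementation would query a live registry rather than a static mapping, and would then forward the request over the EM_MS interface via an API call or message queue.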
[0080] In an embodiment, the at least one determined microservice dynamically manages network resources based on real-time subscriber demand. For instance, if the subscriber requests a higher quality of service (QoS) or an upgrade (e.g., faster bandwidth or enhanced network performance), the SSS (124) communicates this to the at least one determined microservice. The at least one determined microservice then allocates additional resources, such as bandwidth or network slices, to meet the increased demand, ensuring that service levels are maintained or enhanced as needed.
[0081] The SSS (124) tracks service usage of the subscriber, such as bandwidth consumption or time spent on specific services, and communicates this data to the at least one microservice. This information is critical for the at least one microservice to optimize resource allocation, ensuring that network resources are efficiently scaled up or down in line with actual usage. Additionally, the at least one microservice integrates this usage data into the billing system, enabling accurate and real-time charging based on service consumption. When the SSS (124) requests changes to service levels or activation of new services, the at least one microservice orchestrates the scaling of necessary resources. These resources can include network components like switches and routers, or even virtualized infrastructure, such as network slices and bandwidth. By scaling these resources dynamically, the at least one microservice ensures that the network (106) continues to meet subscriber demands while maintaining optimal performance.
[0082] In an embodiment, the at least one determined microservice is configured to process the at least one service request to perform one or more operations associated with the at least one received service request. In an embodiment, the one or more operations associated with the at least one service request include at least one of a resource provision (allocating resources), resource creation (establishing new resources), and resource initialization (preparing resources for use). In an embodiment, the at least one determined microservice executes the one or more operations specified in the at least one service request. The resource provision operation involves allocating and making available network resources required to meet the needs specified in the at least one service request. The resource provision operation includes assigning bandwidth, storage, or compute resources. For example, if the at least one service request specifies additional bandwidth for at least one network element, the at least one microservice adjusts the network configuration to provide the requested bandwidth. The resource creation operation refers to the establishment of new network resources or entities as specified by the at least one service request. The resource creation operation might involve creating new virtual machines, network interfaces, or storage volumes. In an embodiment, upon receiving the at least one service request to create a new resource, the at least one determined microservice initiates the creation process by provisioning the necessary infrastructure, configuring the new resource, and integrating it into the existing network environment. The resource initialization operation involves preparing and configuring newly created resources so they are ready for use. The resource initialization operation includes setting up initial configurations, applying default settings, and ensuring the resource is operational.
The at least one determined microservice initializes the resource by applying initial configurations, performing resource checks, and ensuring that the resource meets the operational requirements defined in the at least one service request. For example, if a new virtual network switch is created based on the one or more parameters of the at least one service request, the at least one determined microservice configures the switch settings and performs tests to verify the functionality of the virtual network switch.
[0083] In an embodiment, the at least one determined microservice is configured to transmit at least one response message to the SSS (124) based on the at least one processed service request using the first interface and the second interface. In an embodiment, the at least one response message is transmitted to the ERM (126) using the second interface (EM_MS interface). In an embodiment, the ERM (126) transmits the at least one response message towards the SSS (124) using the first interface (SSS_EM interface). In an embodiment, the at least one response message indicates a status of the at least one processed service request. In an embodiment, the status of the at least one processed service request includes at least one of a success status and a failure status. In an embodiment, after processing the at least one service request, the at least one determined microservice generates the at least one response message. The at least one response message is generated based on an outcome of the at least one service request processing and includes information about the status of the at least one processed service request. The status provides feedback to the SSS (124) regarding the success or failure of the operations performed.
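The response message carrying the success or failure status described above could be assembled as follows; the field names and status labels are illustrative assumptions, not a format mandated by the disclosure.

```python
# Minimal sketch of building the response message that the microservice
# returns toward the SSS over the EM_MS and SSS_EM interfaces. Field
# names and status strings are hypothetical.
def build_response(request_id, succeeded, detail=""):
    """Assemble a response message reflecting the processing outcome."""
    return {
        "request_id": request_id,                       # correlates to the original request
        "status": "SUCCESS" if succeeded else "FAILURE",  # processing outcome
        "detail": detail,                               # optional human-readable context
    }
```

Such a structure gives the SSS enough context to close a ticket on success or to trigger follow-up handling on failure.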
[0084] FIG. 1C illustrates an exemplary system architecture (100C) for managing the service request in the network (106), in accordance with an embodiment of the present disclosure.
[0085] In an embodiment, the event routing manager (ERM) (126) plays a significant role in event-driven systems where events are required to be delivered from the event sources to the specific event consumers or downstream services. The ERM (126) acts as an intermediary connecting interface between the SSS (124) and a MANO (128). The MANO (128) is the framework for managing and orchestrating network functions and resources in a virtualized network environment, such as in Network Functions Virtualization (NFV). The ERM (126) provides event distribution by efficiently distributing events to multiple consumers or services based on the event name. The ERM (126) helps in decoupling and scaling event-driven systems by allowing the system components to scale independently. Further, the ERM (126) helps in adding multiple event producers and consumers in the event-driven systems without affecting the overall functionality or performance of the system (108). The ERM (126) supports dynamic routing in the event-driven systems by enabling the system to adapt to changing requirements, and provides error handling and retry mechanisms in the event-driven systems. The ERM (126) provides various mechanisms (e.g., a retry mechanism) to manage errors or failures during event routing and to ensure reliable delivery of the events even in the presence of transient failures or network issues. Further, in a distributed system, the ERM (126) incorporates service discovery mechanisms to dynamically locate and route events to the specific/dedicated service instances.
[0086] In an embodiment, the ERM (126) is a combination of an event-based architecture and a publish/subscribe architecture and takes advantage of both architectures. The ERM (126) combines event-based messaging with a publish/subscribe pattern and acts as the router responsible for receiving HTTP requests, extracting event information, and routing events to the relevant services based on the event name. Further, other microservices can also subscribe to specific events and can receive and process them accordingly.
[0087] In an embodiment, the ERM (126) supports acknowledgement reports and delivery reports by sending an acknowledgement report to the publisher upon successfully receiving a published event, and by sending a delivery report to the publisher after the subscriber’s service receives the event. Since the ERM (126) is based on the publish/subscribe and the event-based architecture, the ERM (126) supports the sequential ordering of events as well as parallel events. The ERM (126) can deliver events to consumers based on priority. The ERM (126) stores an event only in case one or more consumer microservices fail to receive the event, thus improving the efficiency of the system (108). Further, the ERM (126) supports a retry mechanism in case of failure. The ERM (126) retries at a configurable interval for a configurable time, after which it archives the events so that the user (102) can trace all failure events easily.
[0088] In an embodiment, the ERM (126) supports the acknowledgement report and the delivery report as follows: when the producer/publisher service sends an event to the ERM (126), the producer/publisher receives an acknowledgment (ACK) signal from the ERM (126). The ACK signal indicates that the ERM (126) has received that event. Further, after successfully delivering the event to the consumer/subscriber microservice, the ERM (126) sends a delivery report to the producer/publisher indicating that it has successfully delivered that event to the consumers/subscribers.
[0089] Similarly, if the event fails to be delivered to the consumers/subscribers for any reason, the ERM (126) informs the producer/publisher microservice about the failure, just as it does about success, by providing a delivery report to the producer/publisher microservice.
[0090] Thus, the ERM (126) provides a centralized and efficient mechanism for event routing and distribution in event-driven architectures. The ERM (126) offers advantages such as decoupling, scalability, dynamic routing, error handling, and visibility, contributing to the overall flexibility, reliability, and maintainability of the system (108).
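The ERM behavior described above (publish/subscribe routing, acknowledgement on receipt, delivery reporting, retries, and archiving of exhausted failures) can be sketched in miniature. Class and attribute names below are illustrative assumptions, not identifiers from the disclosure.

```python
# Minimal sketch of an event routing manager combining publish/subscribe
# routing with ACK/delivery reports, a retry mechanism, and archiving of
# events that exhaust their retries. All names are hypothetical.
class EventRoutingManager:
    def __init__(self, max_retries=3):
        self.subscribers = {}        # event name -> list of handler callables
        self.archive = []            # events whose delivery retries were exhausted
        self.max_retries = max_retries

    def subscribe(self, event_name, handler):
        """Register a consumer/subscriber handler for an event name."""
        self.subscribers.setdefault(event_name, []).append(handler)

    def publish(self, event_name, payload):
        """Route an event to subscribers; return ACK and delivery status."""
        report = {"ack": True, "delivered": False}   # ACK: event was received
        for handler in self.subscribers.get(event_name, []):
            for _ in range(self.max_retries):        # retry on handler failure
                try:
                    handler(payload)
                    report["delivered"] = True
                    break
                except Exception:
                    continue
            else:
                # Retries exhausted: archive so failures remain traceable
                self.archive.append((event_name, payload))
        return report
```

A production ERM would of course use durable queues, priorities, and configurable retry intervals rather than an in-process loop; the sketch only illustrates the reporting and retry flow.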
[0091] As shown in FIG. 1C, the SSS (124) initiates the at least one service request received from the network elements (122), e.g., network element 1 (122-1) and network element 2 (122-2), to allocate or manage the resources. The SSS (124) initiates the at least one service request related to an event (e.g., resource allocation) towards the ERM (126) using the SSS_EM interface (the first interface). The SSS (124) is configured to establish communication with the MANO (128) through the SSS_EM interface. The ERM (126) accepts the at least one service request from the SSS (124) and sends the at least one service request using the EM_MS interface (the second interface) towards the MANO (128), which is configured for processing the at least one service request. The MANO (128) analyzes the at least one service request and provides for the provisioning and initiation of the event related to the requested resource. The MANO (128) forwards the at least one service request to the at least one determined microservice to fulfill the event (e.g., resource creation). When the event is processed successfully by the microservice (e.g., resource creation), the ERM (126) receives at least one acknowledgment signal from the at least one determined microservice. The ERM (126) transmits at least one acknowledgement signal (response signal), using the SSS_EM interface, towards the SSS (124), indicating the status of the at least one service request. Further, once the resources are successfully allocated, the ERM (126) promptly sends a response signal to the SSS (124) to confirm the completion of the resource allocation process. The ERM (126) directs service requests to other external interfaces for further processing and interaction with the SSS (124).
[0092] In an embodiment, the ERM (126) forwards the request to at least one determined microservice on the EM_MS interface. The EM_MS interface allocates the necessary resources by spawning the server or application resource. Further, the EM_MS interface supports dynamic resource allocation by enabling the network (106) to redistribute resources to areas of demand immediately.
[0093] In an embodiment, the present disclosure provides an asynchronous event-based implementation to utilize the SSS_EM interface efficiently. To provide fault tolerance for any event failure, the SSS_EM interface works in high-availability mode: if one ERM instance goes down during resource allocation service request processing, then the next available instance takes over the resource allocation request.
[0094] In an embodiment, the SSS_EM interface provides advanced automation features and improved scalability. The SSS_EM interface promotes interoperability between the SSS (124) and the MANO (128), thus making it easier for them to work seamlessly together. The service operators scale their infrastructure effectively to meet rising demand using the MANO (128) through the SSS_EM interface.
[0095] Further, the SSS_EM interface can automate the process of additional resource creation, healing, or termination depending on the user subscription request.
[0096] In an embodiment, the ERM (126) is primarily used for asynchronous communication between microservices, and thus allows them to publish events and subscribe to events of interest. In an implementation, the ERM (126) is not focused on direct client interaction.
[0097] In an embodiment, the ERM (126) mainly focuses on enabling microservices to scale independently and handle asynchronous events efficiently, which can be crucial for high throughput and resilience.
[0098] In an embodiment, the SSS (124) encompasses functionalities like customer service portals, helpdesk and ticketing systems, and service activation and deactivation features.
[0099] In an embodiment, the SSS (124) is oriented towards supporting subscribers (individual customers or entities) in their interactions with the service provider services with the features like self-service portals, customer care, and issue resolution.
[00100] In an embodiment, the SSS (124) aims to provide subscribers with a convenient and efficient way to manage their services and address any issues they encounter.
[00101] FIG. 2 illustrates an exemplary flow diagram of a method (200) for managing the service request in the network (106) in accordance with an embodiment of the present disclosure.
[00102] At step (202) of the method (200), the ERM (126) accepts the service request related to an event from the SSS (124).
[00103] At step (204) of the method (200), the ERM (126) determines whether the received service request is a valid request or an invalid request. In an embodiment, the ERM (126) determines whether the received service request is valid or invalid through a comprehensive validation process. For example, the validation begins with checking a syntax and format of the service request to ensure proper adherence to expected structures and data types. Following this, the ERM (126) verifies the authenticity of the service request by validating authentication credentials and tokens to ensure authorization. If any of these criteria are not met, the request is deemed invalid, and an error response is generated. Otherwise, the request is considered valid and routed to the defined microservice for processing.
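The two-stage validation described above (a syntax/format check followed by an authenticity check) can be sketched as follows. The field names, the token store, and the error strings are hypothetical, introduced only for illustration.

```python
# Illustrative sketch of step 204: syntax/format validation, then credential
# validation. All names here are assumptions, not taken from the disclosure.

VALID_TOKENS = {"token-abc"}                          # stand-in for a token store
REQUIRED_FIELDS = {"network_element_id", "request_type"}

def validate_service_request(request):
    """Return (is_valid, reason); invalid requests yield an error response."""
    # Stage 1: syntax and format -- expected structure and data types.
    if not isinstance(request, dict) or not REQUIRED_FIELDS <= request.keys():
        return False, "malformed request"
    # Stage 2: authenticity -- authentication credentials / tokens.
    if request.get("auth_token") not in VALID_TOKENS:
        return False, "unauthorized"
    # Both criteria met: route to the determined microservice for processing.
    return True, "routed to microservice"

ok, reason = validate_service_request(
    {"network_element_id": "ne-1", "request_type": "PROVISION",
     "auth_token": "token-abc"})
```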
[00104] At step (206) of the method (200), the ERM (126) is configured to END the process when the received service request is the invalid request.
[00105] At step (208) of the method (200), the ERM (126) sends the received service request to an appropriate microservice (MS) in the MANO (128) for request processing and resource creation when the service request is the valid request.
[00106] At step (210) of the method (200), the MANO (128) provides at least one of the provision, creation, and initialization of the requested resource for the service request.
[00107] At step (212) of the method (200), the ERM (126) receives the response signal from the appropriate microservice from the MANO (128) when the resource is provisioned, created, or initialized successfully by the microservice.
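The FIG. 2 flow (steps 202 through 212) can be condensed into a single dispatch function. The microservice registry and the response shapes below are assumed for the sketch; the disclosure does not prescribe a concrete API.

```python
# Hedged sketch of the FIG. 2 flow: accept a request, decide valid/invalid,
# forward to the appropriate microservice, and return its response signal.

# Hypothetical registry of MANO microservices keyed by request type.
MICROSERVICES = {
    "PROVISION": lambda req: {"resource": "created", "status": "SUCCESS"},
}

def handle_service_request(request):
    # Step 204: determine whether the request is valid or invalid.
    handler = MICROSERVICES.get(request.get("request_type"))
    if handler is None:
        return {"status": "INVALID_REQUEST"}   # Step 206: END the process
    # Steps 208-210: send to the appropriate MS in the MANO for processing
    # and resource provisioning/creation/initialization.
    response = handler(request)
    # Step 212: the ERM receives the response signal from the microservice.
    return response
```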
[00108] FIG. 3 illustrates another exemplary flow diagram of the method (300) for managing the service request in the network (106), in accordance with an embodiment of the present disclosure.
[00109] At step (302) of the method (300), the SSS (124) sends the service request related to an event to the ERM (126) through the SSS_EM interface.
[00110] At step (304) of the method (300), the ERM (126) transmits the received service request to a network functions virtualization orchestrator (NFVO) (130) as a provision service request. The NFVO (130) is a functional element of the MANO (128).
[00111] At step (306) of the method (300), the NFVO (130) stores the received provision service request for generating a processed service request.
[00112] At step (308) of the method (300), the NFVO (130) transmits a processed service request to the ERM (126) in response to the received provision service request. The NFVO (130) generates the processed service request based on the provision service request it received. The processed service request includes details about the required processing actions, such as resource provisioning or configuration changes. The NFVO (130) validates the provision service request to ensure that it meets all required criteria and complies with the network policies, and prepares the provision service request for execution.
[00113] At step (310) of the method (300), the ERM (126) transmits the processed service request to a container network function (CNF) manager (132) of the MANO (128).
[00114] At step (312) of the method (300), the CNF manager (132) performs onboarding, instantiation, and termination functions/operations on the processed service request. The onboarding operation includes loading and preparing the necessary software or configurations required for the new cloud-native function. The instantiation operation includes creating and deploying the cloud-native function instances based on the specifications in the service request. The termination operation includes removing or deactivating cloud-native function instances that are no longer needed.
[00115] At step (314) of the method (300), the CNF manager (132) transmits an infrastructure orchestration request to the ERM (126). The infrastructure orchestration request includes information about the current status of the onboarded, instantiated, or terminated functions and may also involve instructions for managing or adjusting infrastructure resources as needed.
[00116] At step (316) of the method (300), the ERM (126) transmits the infrastructure orchestration request to a virtualized infrastructure manager (VIM) (134) of the MANO (128).
[00117] At step (318) of the method (300), the VIM (134) processes the received infrastructure orchestration request. In an embodiment, the VIM (134) processes the received infrastructure orchestration request by decoding and interpreting the request details, which may involve instructions for managing virtualized resources such as servers, storage, or network elements. The VIM (134) then performs the necessary actions according to the request, such as allocating resources, adjusting configurations, or scaling infrastructure components.
[00118] At step (320) of the method (300), the VIM (134) processes the received infrastructure orchestration request and transmits an acknowledgement (Request Ack) signal to the ERM (126). In an embodiment, once the required operations are complete, the VIM (134) transmits the acknowledgment (Request Ack) signal back to the ERM (126) to confirm that the orchestration request has been successfully processed. The acknowledgment includes status information indicating whether the request was completed as expected or if any issues were encountered.
[00119] At step (322) of the method (300), the ERM (126) sends a service request acknowledgement signal (response signal) to the SSS (124) in response to the received acknowledgement signal from the VIM (134). The ERM (126) having received the confirmation from the acknowledgement signal, responds by sending a corresponding acknowledgment signal to the SSS (124), indicating that the received service request has been successfully addressed and processed.
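The FIG. 3 message sequence (steps 302 through 322) can be sketched as a chain of hops mediated by the ERM. Every class and method name below is illustrative only; the disclosure specifies the sequence of elements, not a programming interface.

```python
# Hedged sketch of the FIG. 3 sequence: the ERM relays the request through
# the NFVO, the CNF manager, and the VIM, then acknowledges back to the SSS.

class Nfvo:
    def process(self, provision_req):          # steps 304-308: store, validate,
        return {"action": "provision",         # and generate a processed request
                "validated": True, "origin": provision_req}

class CnfManager:
    def orchestrate(self, processed_req):      # steps 310-314: onboard/instantiate,
        return {"cnf": "instantiated",         # then raise an infrastructure
                "needs_infra": True}           # orchestration request

class Vim:
    def execute(self, infra_req):              # steps 316-320: act on the request
        return {"ack": True, "status": "SUCCESS"}

def erm_handle(service_request):
    """ERM relay: SSS -> NFVO -> CNF manager -> VIM -> ack back to SSS."""
    processed = Nfvo().process(service_request)
    infra_req = CnfManager().orchestrate(processed)
    ack = Vim().execute(infra_req)
    # Step 322: ERM sends the service request acknowledgement to the SSS.
    return {"to": "SSS", "status": ack["status"]}

response = erm_handle({"event": "provision"})
```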
[00120] FIG. 4 illustrates an exemplary MANO framework architecture (400), in accordance with embodiments of the present disclosure.
[00121] As depicted in FIG. 4, the MANO framework architecture (400) may comprise several interconnected modules that work together to enable efficient resource allocation and management. The interconnected modules may include a user interface layer (402), NFV and software-defined networking (SDN) design functions (404), platform foundation services (406), platform core services (408), a platform operation, administration, and maintenance manager (410), and platform resource adapters and utilities (412).
[00122] The user interface layer (402) may serve as a primary point of interaction for the network operators and the network administrators. Through the user interface layer (402), various parameters associated with the ERM (126) may be configured and adjusted based on specific requirements. These parameters may include a number of microservices associated with ERMs, a number of ERMs that may be deployed simultaneously, and other relevant settings.
[00123] The NFV and SDN design functions (404) may work in conjunction with the platform core services (408) to analyze and route the service requests received by the ERM (126). The modules may be responsible for determining the target microservice within the MANO (128) framework based on a type of resource allocation required. The modules may utilize a mapping of service request types to target microservices maintained by the ERM (126) to ensure accurate routing. The NFV and SDN design functions (404) may include a virtualized network function (VNF) lifecycle manager (compute) that is a specialized component focused on managing the compute resources associated with VNFs throughout their lifecycle. The NFV and SDN design functions (404) may include a VNF catalog that is a repository that stores and manages metadata, configurations, and templates for VNFs, facilitating their deployment and lifecycle management. The NFV and SDN design functions (404) may include a network services catalog, a network slicing and service chaining manager, a physical and virtual resource manager, and a CNF lifecycle manager. The network services catalog serves as a repository for managing and storing detailed information about network services, including their specifications and deployment requirements. The network slicing and service chaining manager is responsible for orchestrating network slices and service chains, ensuring efficient allocation and utilization of network resources tailored to various services. The physical and virtual resource manager oversees both physical and virtual resources, handling their allocation, monitoring, and optimization to ensure seamless operation across the network infrastructure. The CNF lifecycle manager manages the complete lifecycle of the CNF, including onboarding, instantiation, scaling, monitoring, and termination, thereby facilitating the efficient deployment and operation of network functions in a cloud-native environment.
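The mapping of service request types to target microservices mentioned above can be pictured as a simple lookup table. The request-type keys and microservice names below are assumptions chosen to echo the components listed in this paragraph; the disclosure does not define concrete identifiers.

```python
# Illustrative routing table: service-request type -> target microservice.
# Keys and values are hypothetical, loosely mirroring the managers named above.

ROUTING_TABLE = {
    "VNF_SCALE":    "vnf-lifecycle-manager",
    "CNF_ONBOARD":  "cnf-lifecycle-manager",
    "SLICE_CREATE": "network-slicing-manager",
}

def route(request_type):
    """Resolve the target microservice for a request type, or None if unmapped."""
    return ROUTING_TABLE.get(request_type)
```

An unmapped request type resolving to `None` would correspond to the invalid-request branch of the flow in FIG. 2.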
[00124] The platform foundation services (406) may support an asynchronous event-based processing model implemented by the ERM (126), enabling concurrent handling of multiple service requests. They may also facilitate bi-directional communication interfaces (SSS_EM and EM_MS) used by the ERM (126) to interact with the SSS (124) and the MANO (128) framework microservices. The platform foundation services (406) may include microservices elastic load balancer, identity & access manager, command line interface (CLI), central logging manager, and the ERM. The microservices elastic load balancer ensures that incoming traffic is evenly distributed across multiple microservices, enhancing performance and availability. The identity and access manager handles user identity management and access control, enforcing permissions and roles to secure resources and services. The CLI offers a text-based method for users to interact with the platform, enabling command execution and configuration management. The central logging manager consolidates log data from various system components, providing a unified view for effective monitoring, troubleshooting, and data analysis.
[00125] The platform core services (408) are central to the processing and fulfillment of the service requests (i.e., at least one service request). The platform core services (408) may work together to allocate network slices, manage virtualized network functions, and orchestrate the underlying infrastructure resources based on each of the at least one service request routed by the ERM (126). The platform core services (408) may include NFV infrastructure monitoring manager, assurance manager, performance manager, policy execution engine, capacity monitoring manager, release management repository, configuration manager and Golden Configuration Template (GCT), NFV platform decision analytics platform, Not Only SQL database (NoSQL DB), platform schedulers jobs, VNF backup and upgrade manager, and microservice auditor. The NFV infrastructure monitoring manager tracks and oversees the health and performance of NFV infrastructure. The assurance manager ensures service quality and compliance with operational standards. The performance manager monitors system performance metrics to optimize efficiency. The policy execution engine enforces and executes policies across the platform. The capacity monitoring manager tracks resource usage and forecasts future needs. The release management repository manages software releases and version control. The configuration manager handles system configurations, ensuring consistency and automation. The GCT provides centralized oversight and management of platform operations. The NFV platform decision analytics platform utilizes data analytics to support decision-making. The NoSQL DB stores unstructured data to support flexible and scalable data management. The platform schedulers and jobs automate and schedule routine tasks and workflows. The VNF backup and upgrade manager oversees the backup and upgrading of VNFs. The microservice auditor ensures the integrity and compliance of microservices across the platform.
[00126] The platform operation, administration, and maintenance manager (410) may oversee operational aspects of the MANO framework architecture (400). The platform operation, administration, and maintenance manager (410) may be responsible for implementing a load-balancing mechanism used by the ERM (126) to distribute the service requests across multiple instances of the MANO (128) framework microservices.
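One plausible realisation of the load-balancing mechanism described above is round-robin distribution of service requests across microservice instances. This is an assumption for illustration; the disclosure does not specify the balancing algorithm.

```python
# Hedged sketch: round-robin distribution of service requests across multiple
# instances of a MANO framework microservice. Instance names are hypothetical.

import itertools

class RoundRobinBalancer:
    def __init__(self, instances):
        self._cycle = itertools.cycle(instances)   # endless rotation over instances

    def pick(self):
        """Return the next instance in rotation for the incoming request."""
        return next(self._cycle)

lb = RoundRobinBalancer(["ms-a", "ms-b", "ms-c"])
assigned = [lb.pick() for _ in range(4)]   # wraps around after the third pick
```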
[00127] The platform resource adapters and utilities (412) may provide necessary tools and interfaces for interacting with an underlying network infrastructure, i.e., the NFV architecture. These components may be crucial in translating the service requests into actionable commands for the resources (also referred to as the network resources) allocation and management. The platform resource adapters and utilities (412) may work closely with the platform core services (408) to ensure that allocated resources meet specified requirements. Together, these modules create a cohesive and efficient system for managing resource allocation. The platform resource adapters and utilities (412) may include platform external API adapter and gateway, generic decoder and indexer, orchestration adapter, API adapter, and NFV gateway. The platform external API adapter and gateway facilitates seamless integration with external APIs and manages data flow between the SSS and the platform. The generic decoder and indexer processes and organizes data from various formats such as XML, comma-separated values (CSV), and JSON, ensuring compatibility and efficient indexing. The orchestration adapter manages interactions with clusters, enabling container orchestration and scaling. The API adapter interfaces with services, allowing integration and management of cloud resources. The NFV gateway acts as a bridge for NFV communications, coordinating between NFV components and other platform elements.
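The generic decoder and indexer described above accepts XML, CSV, and JSON. A format dispatcher of that kind can be sketched with only the standard library; the function name and return shapes are assumptions, not the disclosed component's API.

```python
# Illustrative sketch of a generic decoder handling the three formats named
# above (XML, CSV, JSON) and normalising them into Python structures.

import csv
import io
import json
import xml.etree.ElementTree as ET

def decode(payload, fmt):
    """Decode a text payload of the given format for downstream indexing."""
    if fmt == "json":
        return json.loads(payload)
    if fmt == "csv":
        # Each row becomes a dict keyed by the header line.
        return list(csv.DictReader(io.StringIO(payload)))
    if fmt == "xml":
        # Flatten one level of child elements into tag -> text.
        root = ET.fromstring(payload)
        return {child.tag: child.text for child in root}
    raise ValueError(f"unsupported format: {fmt}")

rows = decode("id,type\nne-1,PROVISION\n", "csv")
```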
[00128] The ERM (126) may interact with all these modules to facilitate an end-to-end process of a service request handling. The ERM (126) may receive the service requests through the user interface layer (402), utilize the NFV and SDN design functions (404) for the service requests analysis, leverage the platform core services (408) for the service requests fulfillment and use the platform resource adapters and utilities (412) for actual resources allocation for each of the service requests. The platform operations, administration and maintenance manager (410) may oversee this entire process, ensuring efficient operation and fault tolerance. This integrated approach may enable the MANO (128) framework to efficiently manage a complex process of resource allocation and management, potentially providing a flexible and responsive system capable of meeting diverse service requirements in a dynamic network environment.
[00129] FIG. 5 illustrates a computer system (500) in which or with which the embodiments of the present disclosure may be implemented.
[00130] As shown in FIG. 5, the computer system (500) may include an external storage device (510), a bus (520), a main memory (530), a read-only memory (540), a mass storage device (550), communication port(s) (560), and a processor (570). A person skilled in the art will appreciate that the computer system may include more than one processor and communication ports. The processor (570) may include various modules associated with embodiments of the present disclosure. The communication port(s) (560) may be any of an RS-232 port for use with a modem-based dialup connection, a 10/100 Ethernet port, a Gigabit or 10 Gigabit port using copper or fiber, a serial port, a parallel port, or other existing or future ports. The communication port(s) (560) may be chosen depending on a network, such as a Local Area Network (LAN), a Wide Area Network (WAN), or any network to which the computer system connects.
[00131] The main memory (530) may be random access memory (RAM), or any other dynamic storage device commonly known in the art. The read-only memory (540) may be any static storage device(s), e.g., but not limited to, Programmable Read Only Memory (PROM) chips for storing static information, e.g., start-up or Basic Input/Output System (BIOS) instructions for the processor (570). The mass storage device (550) may be any current or future mass storage solution, which can be used to store information and/or instructions. Exemplary mass storage devices (550) include, but are not limited to, Parallel Advanced Technology Attachment (PATA) or Serial Advanced Technology Attachment (SATA) hard disk drives or solid-state drives (internal or external, e.g., having Universal Serial Bus (USB) and/or Firewire interfaces), one or more optical discs, and Redundant Array of Independent Disks (RAID) storage, e.g., an array of disks.
[00132] The bus (520) communicatively couples the processor (570) with the other memory, storage, and communication blocks. The bus (520) may be, e.g., a Peripheral Component Interconnect / Peripheral Component Interconnect Extended bus, Small Computer System Interface (SCSI), Universal Serial Bus (USB), or the like, for connecting expansion cards, drives, and other subsystems, as well as other buses, such as a front side bus (FSB), which connects the processor (570) to the computer system.
[00133] Optionally, operator and administrative interfaces, e.g., a display, keyboard, joystick, and a cursor control device, may also be coupled to the bus (520) to support direct operator interaction with the computer system. Other operator and administrative interfaces can be provided through network connections connected through the communication port(s) (560). Components described above are meant only to exemplify various possibilities. In no way should the aforementioned exemplary computer system limit the scope of the present disclosure.
[00134] FIG. 6 illustrates another exemplary flow diagram of the method (600) for managing the service requests in the network (106), in accordance with an embodiment of the present disclosure.
[00135] At step 602, the method (600) includes receiving, by the receiving unit (118), at least one service request related to the at least one network element (122) from a subscriber support system (SSS) (124) using the first interface. In an embodiment, the first interface is the subscriber support system_event routing manager (SSS_EM) interface.
[00136] At step 604, the method (600) includes analyzing, by a processing engine (116), the at least one received service request to extract one or more parameters from the at least one received service request. In an embodiment, the one or more extracted parameters include at least one of a network element identifier (ID) and a request type.
[00137] At step 606, the method (600) includes determining, by the processing engine (116), at least one microservice associated with the at least one received service request based on the one or more extracted parameters.
[00138] In an embodiment, the processing engine (116) is configured to analyse, by at least one network service, one or more key performance indicators (KPIs) associated with the at least one network element (122) to continuously track the performance of the network or application. The one or more KPIs may include metrics such as bandwidth usage, response time, latency, the number of active connections, and Central Processing Unit (CPU), Random Access Memory (RAM), Disk Input/Output (Disk I/O), or Network Input/Output (Network I/O) usage. The processing engine (116) compares the monitored one or more KPIs with one or more threshold values. If the comparison reveals that the analysed one or more KPIs exceed or fall below the one or more threshold values, indicating a performance issue or inefficiency, the processing engine (116) automatically provisions the one or more operations to address the problem. The one or more operations may include allocating additional resources, scaling up the service by spawning more containers, or initiating corrective actions to restore optimal performance without manual intervention. This ensures that the method (600) maintains the desired level of service quality by dynamically responding to changes in network conditions or service demands. Monitoring a KPI such as CPU usage is critical for maintaining system performance and preventing resource bottlenecks. For example, if the KPI being monitored is CPU usage, a predefined threshold can be set; if the CPU usage exceeds 80%, it triggers an automatic action, such as spawning a new resource or scaling up system capacity to handle the increased load. This approach ensures that the method (600) remains responsive and efficient, preventing potential slowdowns or outages caused by resource exhaustion. Monitoring thresholds in this manner helps maintain optimal performance and resource allocation in real time.
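The KPI-threshold logic above, using the 80% CPU example given in this paragraph, can be sketched as follows. Only the upper-threshold (exceed) case is shown; the threshold values and action names are illustrative assumptions.

```python
# Minimal sketch of threshold-based provisioning: when a monitored KPI
# exceeds its threshold, an automatic scale-up action is provisioned.
# Thresholds and action labels are hypothetical.

THRESHOLDS = {"cpu_usage": 80.0}   # percent, per the example in the text

def evaluate_kpis(kpis):
    """Compare monitored KPIs with thresholds; return provisioned actions."""
    actions = []
    for name, value in kpis.items():
        limit = THRESHOLDS.get(name)
        if limit is not None and value > limit:
            # e.g. spawn a new resource / scale up capacity, no manual step
            actions.append({"kpi": name, "action": "scale_up"})
    return actions

# Usage: CPU at 92.5% breaches the 80% threshold; latency has no threshold set.
actions = evaluate_kpis({"cpu_usage": 92.5, "latency_ms": 12.0})
```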
[00139] At step 608, the method (600) includes transmitting, by the processing engine (116), the at least one received service request to the at least one determined microservice using a second interface. In an embodiment, the second interface includes an event routing manager_microservice (EM_MS) interface.
[00140] At step 610, the method (600) includes processing, by the at least one determined microservice, the at least one service request to perform one or more operations associated with the at least one received service request. In an embodiment, the one or more operations associated with the at least one service request include at least one of a resource provision, a resource creation, and a resource initialization.
[00141] At step 612, the method (600) includes transmitting, by the at least one determined microservice, at least one response message towards the SSS (124) using the first interface and the second interface. In an embodiment, the at least one response message indicates a status of the at least one processed service request. In an embodiment, the status of the at least one processed service request includes at least one of a success status and a failure status.
[00142] The present disclosure provides a technical advancement by providing the subscriber support system_event routing manager (SSS_EM) interface for improved connectivity between the subscriber support system (SSS) and the management and orchestration (MANO). The SSS_EM interface promotes interoperability between the SSS and the MANO, enabling them to work seamlessly together. The SSS_EM interface adds new functionalities and capabilities to the existing MANO framework. The SSS_EM interface provides enhanced connectivity between the SSS and the MANO that can be used for container infrastructure monitoring, management, and analytics. The SSS_EM interface is used by the SSS to initiate a service request to allocate or manage the necessary resources. Thus, the present disclosure allows operators to scale infrastructure effectively in response to forecasted demand. Overall, the present disclosure provides a more agile and efficient approach to infrastructure scaling and resource management, overcoming the limitations of conventional techniques and improving the system's ability to adapt to changing conditions.
[00143] While the foregoing describes various embodiments of the invention, other and further embodiments of the invention may be devised without departing from the basic scope thereof. The scope of the invention is determined by the claims that follow. The invention is not limited to the described embodiments, versions or examples, which are included to enable a person having ordinary skill in the art to make and use the invention when combined with information and knowledge available to the person having ordinary skill in the art.
ADVANTAGES OF THE PRESENT DISCLOSURE
As is evident from the above, the present disclosure described herein above has several technical advantages including:
- promoting interoperability between the subscriber support system (SSS) and the management and orchestration (MANO), enabling seamless operation between them;
- providing an enhancement for the MANO framework;
- adding new functionalities and capabilities to the existing MANO framework;
- providing enhanced connectivity between the SSS and the MANO, which can be used for container infrastructure monitoring, management, and analytics; and
- providing advanced automation features and improved scalability in the network.

CLAIMS
We Claim:
1. A system (108) for managing a service request in a network (106), the system (108) comprising:
a receiving unit (118) configured to receive at least one service request from a subscriber support system (SSS) (124) using a first interface;
a memory (112); and
a processing engine (116) coupled with the receiving unit (118) to receive the at least one service request and is further coupled with the memory (112) to execute a set of instructions stored in the memory (112), the processing engine (116) is configured to:
analyze the at least one received service request to extract one or more parameters from the at least one received service request;
determine at least one network service associated with the at least one received service request based on the one or more extracted parameters;
transmit the at least one received service request to the at least one determined network service using a second interface;
process, by the at least one determined network service, the at least one received service request to perform one or more operations associated with the at least one received service request; and
transmit, by the at least one determined network service, at least one response message towards the SSS (124) using the first interface and the second interface.

2. The system (108) as claimed in claim 1, further configured to:
transmit the at least one response message indicating a status of the at least one processed service request, wherein the status of the at least one processed service request comprises at least one of a success status and a failure status.

3. The system (108) as claimed in claim 1, wherein the processing engine (116) is configured to:
analyse, by at least one network service, one or more key performance indicators (KPIs) associated with the at least one network element;
compare the analysed one or more KPIs with one or more threshold values; and
provision the one or more operations when the one or more KPIs exceed or fall below the one or more threshold values.

4. The system (108) as claimed in claim 1, wherein the one or more operations associated with the at least one received service request comprises at least one of a resource provision, a resource creation, a resource initialization and resource termination.

5. The system (108) as claimed in claim 1, wherein the first interface comprises a subscriber support system_event routing manager (SSS_EM) interface and the second interface comprises an event routing manager_microservice (EM_MS) interface.

6. The system (108) as claimed in claim 1, wherein the one or more extracted parameters comprise at least one of a network element identifier (ID) and a request type.

7. A method (600) for managing a service request in a network (106), the method (600) comprising:
receiving (602), by a receiving unit (118), at least one service request from a subscriber support system (SSS) (124) using a first interface;
analyzing (604), by a processing engine (116), the at least one received service request to extract one or more parameters from the at least one received service request;
determining (606), by the processing engine (116), at least one network service associated with the at least one received service request based on the one or more extracted parameters;
transmitting (608), by the processing engine (116), the at least one received service request to the at least one determined network service using a second interface;
processing (610), by the at least one determined network service, the at least one received service request to perform one or more operations associated with the at least one received service request; and
transmitting (612), by the at least one determined network service, at least one response message towards the SSS (124) using the first interface and the second interface.

8. The method (600) as claimed in claim 7, further comprising:
transmitting the at least one response message indicating a status of the at least one processed service request, wherein the status of the at least one processed service request comprises at least one of a success status and a failure status.

9. The method (600) as claimed in claim 7, further comprising:
analysing, by at least one network service, one or more key performance indicators (KPIs) associated with the at least one network element;
comparing the analysed one or more KPIs with one or more threshold values; and
provisioning the one or more operations when the one or more KPIs exceed or fall below the one or more threshold values.

10. The method (600) as claimed in claim 7, wherein the one or more operations associated with the at least one received service request comprises at least one of a resource provision, a resource creation, a resource initialization and resource termination.

11. The method (600) as claimed in claim 7, wherein the first interface comprises a subscriber support system event routing manager (SSS_EM) interface and the second interface comprises an event routing manager_microservice (EM_MS) interface.

12. The method (600) as claimed in claim 7, wherein the one or more extracted parameters comprise at least one of a network element identifier (ID) and a request type.

Documents

Application Documents

# Name Date
1 202321073397-STATEMENT OF UNDERTAKING (FORM 3) [27-10-2023(online)].pdf 2023-10-27
2 202321073397-PROVISIONAL SPECIFICATION [27-10-2023(online)].pdf 2023-10-27
3 202321073397-FORM 1 [27-10-2023(online)].pdf 2023-10-27
4 202321073397-FIGURE OF ABSTRACT [27-10-2023(online)].pdf 2023-10-27
5 202321073397-DRAWINGS [27-10-2023(online)].pdf 2023-10-27
6 202321073397-DECLARATION OF INVENTORSHIP (FORM 5) [27-10-2023(online)].pdf 2023-10-27
7 202321073397-FORM-26 [28-11-2023(online)].pdf 2023-11-28
8 202321073397-Proof of Right [06-03-2024(online)].pdf 2024-03-06
9 202321073397-DRAWING [25-10-2024(online)].pdf 2024-10-25
10 202321073397-COMPLETE SPECIFICATION [25-10-2024(online)].pdf 2024-10-25
11 202321073397-FORM-5 [25-11-2024(online)].pdf 2024-11-25
12 Abstract.jpg 2025-01-17
13 202321073397-Power of Attorney [24-01-2025(online)].pdf 2025-01-24
14 202321073397-Form 1 (Submitted on date of filing) [24-01-2025(online)].pdf 2025-01-24
15 202321073397-Covering Letter [24-01-2025(online)].pdf 2025-01-24
16 202321073397-CERTIFIED COPIES TRANSMISSION TO IB [24-01-2025(online)].pdf 2025-01-24
17 202321073397-FORM 3 [24-02-2025(online)].pdf 2025-02-24