
Method and System for Management of One or More Network Functions (NFs)

Abstract: The present disclosure relates to a system (108) and a method (500) for management of one or more resources associated with one or more Network Functions (NFs) (218). The system (108) includes a receiving unit (210) configured to receive data corresponding to each of the one or more NFs (218) via a probing unit (220). The system (108) includes a training unit (212) configured to train an AI model utilizing the received data corresponding to each of the one or more NFs (218). The system (108) includes an updating unit (214) configured to update one or more policies based on detection of a deviation on comparison of real-time data generated by the one or more NFs (218) with the one or more identified features. The system (108) includes an allocation unit (216) configured to dynamically allocate one or more resources to each of the one or more NFs (218). Ref. Fig. 2


Patent Information

Application #
Filing Date
07 October 2023
Publication Number
15/2025
Publication Type
INA
Invention Field
COMPUTER SCIENCE
Status
Email
Parent Application

Applicants

JIO PLATFORMS LIMITED
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, India.

Inventors

1. Sanket Kumthekar
Reliance Corporate Park, Thane - Belapur Road
2. Aayush Bhatnagar
Reliance Corporate Park, Thane - Belapur Road
3. Mohit Bhanwria
Reliance Corporate Park, Thane - Belapur Road
4. Durgesh Kumar
Reliance Corporate Park, Thane - Belapur Road
5. Jugal Kishore
Reliance Corporate Park, Thane - Belapur Road
6. Gourav Gurbani
Reliance Corporate Park, Thane - Belapur Road
7. Chandra Ganveer
Reliance Corporate Park, Thane - Belapur Road
8. Sanjana Chaudhary
Reliance Corporate Park, Thane - Belapur Road
9. Shashank Bhushan
Reliance Corporate Park, Thane - Belapur Road
10. Zenith Kumar
Reliance Corporate Park, Thane - Belapur Road
11. Yogesh Kumar
Reliance Corporate Park, Thane - Belapur Road
12. Kishan Sahu
Reliance Corporate Park, Thane - Belapur Road
13. Sajal Soni
Reliance Corporate Park, Thane - Belapur Road
14. Shubham Ingle
Reliance Corporate Park, Thane - Belapur Road
15. Harsh Poddar
Reliance Corporate Park, Thane - Belapur Road
16. Aniket Khade
Reliance Corporate Park, Thane - Belapur Road
17. Kumar Debashish
Reliance Corporate Park, Thane - Belapur Road
18. Manasvi Rajani
Reliance Corporate Park, Thane - Belapur Road
19. Ankit Murarka
Reliance Corporate Park, Thane - Belapur Road
20. Supriya Kaushik De
Reliance Corporate Park, Thane - Belapur Road
21. Avinash Kushwaha
Reliance Corporate Park, Thane - Belapur Road
22. Dharmendra Kumar Vishwakarma
Reliance Corporate Park, Thane - Belapur Road
23. Vinay Gayki
Reliance Corporate Park, Thane - Belapur Road
24. Gaurav Kumar
Reliance Corporate Park, Thane - Belapur Road
25. Niharika Patnam
Reliance Corporate Park, Thane - Belapur Road
26. Sunil Meena
Reliance Corporate Park, Thane - Belapur Road
27. Satish Narayan
Reliance Corporate Park, Thane - Belapur Road
28. Ralph Lobo
Reliance Corporate Park, Thane - Belapur Road
29. Mehul Tilala
Reliance Corporate Park, Thane - Belapur Road
30. Kunal Telgote
Reliance Corporate Park, Thane - Belapur Road
31. Rahul Kumar
Reliance Corporate Park, Thane - Belapur Road
32. Girish Dange
Reliance Corporate Park, Thane - Belapur Road
33. Harshita Garg
Reliance Corporate Park, Thane - Belapur Road

Specification

DESC:
FORM 2
THE PATENTS ACT, 1970
(39 of 1970)
&
THE PATENTS RULES, 2003

COMPLETE SPECIFICATION
(See section 10 and rule 13)
1. TITLE OF THE INVENTION
METHOD AND SYSTEM FOR MANAGEMENT OF ONE OR MORE NETWORK FUNCTIONS (NFS)
2. APPLICANT(S)
NAME NATIONALITY ADDRESS
JIO PLATFORMS LIMITED INDIAN OFFICE-101, SAFFRON, NR. CENTRE POINT, PANCHWATI 5 RASTA, AMBAWADI, AHMEDABAD 380006, GUJARAT, INDIA
3. PREAMBLE TO THE DESCRIPTION

THE FOLLOWING SPECIFICATION PARTICULARLY DESCRIBES THE NATURE OF THIS INVENTION AND THE MANNER IN WHICH IT IS TO BE PERFORMED.

FIELD OF THE INVENTION
[0001] The present invention relates to wireless communication systems, and more particularly to a method and a system for management of one or more resources associated with one or more Network Functions (NFs).
BACKGROUND OF THE INVENTION
[0002] Network Functions (NFs) are important components in a telecommunication network, as they provide well-defined functional behavior such as the Access and Mobility Management Function (AMF), Session Management Function (SMF), User Plane Function (UPF), Policy Control Function (PCF), Unified Data Management (UDM), etc.
[0003] Due to the increasing number of telecom subscribers, the volume of network data is growing at a very fast pace as well, which increases the data-enrichment complexity for each NF within the telecom network. Network engineers may find it complex to manage network data enrichment, since extracting meaningful insights from network data is a time-consuming task owing to the sheer volume of data generated by each NF.
[0004] Further, because of the sheer volume of data being generated by NFs, implementing and adapting policies for NFs based on real-time data and changing network conditions is likewise a time-consuming and complex task, since manual policy updates are not agile enough to address dynamic network requirements.
[0005] Furthermore, NFs require specific resources to function optimally. Inefficient resource allocation leads to suboptimal NF performance, impacting overall network quality and efficiency.
[0006] There is, therefore, a need for efficient mechanisms for managing NFs in a network.
SUMMARY OF THE INVENTION
[0007] One or more embodiments of the present disclosure provide a method and system for management of one or more resources associated with one or more Network Functions (NFs).
[0008] In one aspect of the present invention, the system for management of the one or more resources associated with the one or more Network Functions (NFs) is disclosed. The system includes a receiving unit configured to receive data corresponding to each of the one or more NFs via a probing unit. The system further includes a training unit configured to train an AI model utilizing the received data corresponding to each of the one or more NFs to identify one or more features associated with each of the one or more NFs. The system further includes an updating unit configured to update one or more policies based on detection of a deviation on comparison of real-time data generated by the one or more NFs with the one or more identified features. The system further includes an allocation unit configured to dynamically allocate one or more resources to each of the one or more NFs based on the one or more updated policies.
[0009] In an embodiment, the data corresponds to usage of the one or more NFs, network traffic, Streaming Data Records (SDR), and resource allocation, wherein the received data is enriched.
[0010] In an embodiment, the one or more features of each of the one or more NFs correspond to trends, patterns, anomalies, and correlations of the one or more NFs.
[0011] In an embodiment, dynamically allocating the resources comprises at least scaling up of the resources, scaling down of the resources, and rerouting of the network traffic associated with each of the one or more NFs.
[0012] In another aspect of the present invention, the method for management of the one or more resources associated with the one or more Network Functions (NFs) is disclosed. The method includes the step of receiving data corresponding to each of the one or more NFs via a probing unit. The method further includes the step of training an AI model utilizing the received data corresponding to each of the one or more NFs to identify one or more features associated with each of the one or more NFs. The method further includes the step of updating one or more policies based on detection of a deviation on comparison of real-time data generated by the one or more NFs with the one or more identified features. The method further includes the step of dynamically allocating one or more resources to each of the one or more NFs based on the one or more updated policies.
[0013] In another aspect of the invention, a non-transitory computer-readable medium having stored thereon computer-readable instructions is disclosed. The computer-readable instructions are executed by a processor. The processor is configured to receive data corresponding to each of the one or more NFs via a probing unit. The processor is configured to train an AI model utilizing the received data corresponding to each of the one or more NFs to identify one or more features of each of the one or more NFs. The processor is configured to update one or more policies based on detection of a deviation on comparison of real-time data generated by the one or more NFs with the one or more identified features. The processor is configured to dynamically allocate resources to each of the one or more NFs based on the one or more updated policies.
[0014] Other features and aspects of this invention will be apparent from the following description and the accompanying drawings. The features and advantages described in this summary and in the following detailed description are not all-inclusive, and particularly, many additional features and advantages will be apparent to one of ordinary skill in the relevant art, in view of the drawings, specification, and claims hereof. Moreover, it should be noted that the language used in the specification has been principally selected for readability and instructional purposes and may not have been selected to delineate or circumscribe the inventive subject matter, resort to the claims being necessary to determine such inventive subject matter.
BRIEF DESCRIPTION OF THE DRAWINGS
[0015] The accompanying drawings, which are incorporated herein, and constitute a part of this disclosure, illustrate exemplary embodiments of the disclosed methods and systems in which like reference numerals refer to the same parts throughout the different drawings. Components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present disclosure. Some drawings may indicate the components using block diagrams and may not represent the internal circuitry of each component. It will be appreciated by those skilled in the art that disclosure of such drawings includes disclosure of electrical components, electronic components or circuitry commonly used to implement such components.
[0016] FIG. 1 is an exemplary block diagram of an environment for management of one or more resources associated with one or more network functions (NFs), according to one or more embodiments of the present invention;
[0017] FIG. 2 is an exemplary block diagram of a system for management of the one or more resources associated with the one or more NFs, according to one or more embodiments of the present invention;
[0018] FIG. 3 is an exemplary block diagram of an architecture implemented in the system of the FIG. 2, according to one or more embodiments of the present invention;
[0019] FIG. 4 is a flow diagram for management of the one or more resources associated with the one or more NFs, according to one or more embodiments of the present invention; and
[0020] FIG. 5 is a schematic representation of a method for management of the one or more resources associated with the one or more NFs, according to one or more embodiments of the present invention.
[0021] The foregoing shall be more apparent from the following detailed description of the invention.
DETAILED DESCRIPTION OF THE INVENTION
[0022] Some embodiments of the present disclosure, illustrating all its features, will now be discussed in detail. It must also be noted that as used herein and in the appended claims, the singular forms "a", "an" and "the" include plural references unless the context clearly dictates otherwise.
[0023] Various modifications to the embodiments will be readily apparent to those skilled in the art, and the generic principles herein may be applied to other embodiments. However, one of ordinary skill in the art will readily recognize that the present disclosure, including the definitions listed herein below, is not intended to be limited to the embodiments illustrated but is to be accorded the widest scope consistent with the principles and features described herein.
[0024] A person of ordinary skill in the art will readily ascertain that the illustrated steps detailed in the figures and here below are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope and spirit of the disclosed embodiments.
[0025] In the present invention, the processing system enables automated data enrichment by extracting relevant insights from vast amounts of network data in (near) real time. Further, the processing system enables automated policy implementation and adaptation for NFs based on real-time network conditions and enriched data. Further, sufficient resources are allocated in a timely manner to the NFs, thereby ensuring the NFs have adequate resources to function efficiently. Due to automated data enrichment, automated policy implementation, and automated resource allocation, the NFs function in an optimal manner, which in turn substantially increases the efficiency of network performance.
[0026] FIG. 1 illustrates an exemplary block diagram of an environment 100 for management of one or more resources associated with one or more Network Functions (NFs) 218 (as shown in FIG. 2), according to one or more embodiments of the present disclosure. In this regard, the environment 100 includes a User Equipment (UE) 102, a server 104, a network 106 and a system 108 communicably coupled to each other for management of the one or more resources associated with the one or more NFs 218.
[0027] In an embodiment, the one or more NFs 218 refer to tasks or operations performed within the network 106. Each NF is at least one of a Virtualized Network Function (VNF) or a Cloud-native Network Function (CNF). The one or more NFs 218 include, but are not limited to, the Access and Mobility Management Function (AMF), Session Management Function (SMF), User Plane Function (UPF), and Policy Control Function (PCF). The one or more resources refer to the various computational, network, and storage components that are required to execute and support the one or more NFs 218. The one or more resources include, but are not limited to, Central Processing Unit (CPU) cores, memory, disk storage, cache, and bandwidth.
[0028] As per the illustrated embodiment and for the purpose of description and illustration, the UE 102 includes, but not limited to, a first UE 102a, a second UE 102b, and a third UE 102c, and should nowhere be construed as limiting the scope of the present disclosure. In alternate embodiments, the UE 102 may include a plurality of UEs as per the requirement. For ease of reference, each of the first UE 102a, the second UE 102b, and the third UE 102c, will hereinafter be collectively and individually referred to as the “User Equipment (UE) 102”.
[0029] In an embodiment, the UE 102 is one of, but not limited to, any electrical, electronic, electro-mechanical or an equipment and a combination of one or more of the above devices such as a smartphone, virtual reality (VR) devices, augmented reality (AR) devices, laptop, a general-purpose computer, desktop, personal digital assistant, tablet computer, mainframe computer, or any other computing device.
[0030] The environment 100 includes the server 104 accessible via the network 106. The server 104 may include, by way of example but not limitation, one or more of a standalone server, a server blade, a server rack, a bank of servers, a server farm, hardware supporting a part of a cloud service or system, a home server, hardware running a virtualized server, one or more processors executing code to function as a server, one or more machines performing server-side functionality as described herein, at least a portion of any of the above, or some combination thereof. In an embodiment, the entity may include, but is not limited to, a vendor, a network operator, a company, an organization, a university, a lab facility, a business enterprise, a defense facility, or any other facility that provides service.
[0031] The network 106 includes, by way of example but not limitation, one or more of a wireless network, a wired network, an internet, an intranet, a public network, a private network, a packet-switched network, a circuit-switched network, an ad hoc network, an infrastructure network, a Public-Switched Telephone Network (PSTN), a cable network, a cellular network, a satellite network, a fiber optic network, or some combination thereof. The network 106 may include, but is not limited to, a Third Generation (3G), a Fourth Generation (4G), a Fifth Generation (5G), a Sixth Generation (6G), a New Radio (NR), a Narrow Band Internet of Things (NB-IoT), an Open Radio Access Network (O-RAN), and the like.
[0032] The network 106 may also include, by way of example but not limitation, at least a portion of one or more networks having one or more nodes that transmit, receive, forward, generate, buffer, store, route, switch, process, or a combination thereof, one or more messages, packets, signals, waves, voltage or current levels, some combination thereof, or so forth.
[0033] The environment 100 further includes the system 108 communicably coupled to the server 104 and the UE 102 via the network 106. The system 108 is configured to manage the one or more resources associated with the one or more NFs 218. As per one or more embodiments, the system 108 is adapted to be embedded within the server 104 or embedded as an individual entity.
[0034] Operational and construction features of the system 108 will be explained in detail with respect to the following figures.
[0035] FIG. 2 is an exemplary block diagram of the system 108 for management of the one or more resources associated with the one or more NFs 218, according to one or more embodiments of the present invention.
[0036] As per the illustrated embodiment, the system 108 includes one or more processors 202, a memory 204, a user interface 206, and a database 208. In an embodiment, the system 108 is communicably coupled to one or more network functions (NFs) 218 and a probing unit 220.
[0037] For the purpose of description and explanation, the description will be explained with respect to one processor 202 and should nowhere be construed as limiting the scope of the present disclosure. In alternate embodiments, the system 108 may include more than one processor 202 as per the requirement of the network 106. The one or more processors 202, hereinafter referred to as the processor 202 may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, single board computers, and/or any devices that manipulate signals based on operational instructions.
[0038] As per the illustrated embodiment, the processor 202 is configured to fetch and execute computer-readable instructions stored in the memory 204. The memory 204 may be configured to store one or more computer-readable instructions or routines in a non-transitory computer-readable storage medium, which may be fetched and executed to create or share data packets over a network service. The memory 204 may include any non-transitory storage device including, for example, volatile memory such as RAM, or non-volatile memory such as disk memory, EPROMs, FLASH memory, unalterable memory, and the like.
[0039] In an embodiment, the user interface 206 includes a variety of interfaces, for example, interfaces for a graphical user interface, a web user interface, a Command Line Interface (CLI), and the like. The user interface 206 facilitates communication of the system 108. In one embodiment, the user interface 206 provides a communication pathway for one or more components of the system 108. Examples of such components include, but are not limited to, the UE 102 and the database 208.
The database 208 is one of, but not limited to, a centralized database, a cloud-based database, a commercial database, an open-source database, a distributed database, an end-user database, a graphical database, a Not-only Structured Query Language (NoSQL) database, an object-oriented database, a personal database, an in-memory database, a document-based database, a time series database, a wide column database, a key-value database, a search database, a cache database, and so forth. The foregoing examples of database 208 types are non-limiting and may not be mutually exclusive (e.g., a database can be both commercial and cloud-based, or both relational and open-source).
[0041] In order for the system 108 to manage the one or more resources associated with the one or more NFs 218, the processor 202 includes one or more modules. In one embodiment, the one or more modules include, but are not limited to, a receiving unit 210, a training unit 212, an updating unit 214, and an allocation unit 216 communicably coupled to each other for management of the one or more resources associated with the one or more NFs 218.
[0042] In one embodiment, each of the one or more modules, the receiving unit 210, the training unit 212, the updating unit 214, and the allocation unit 216 can be used in combination or interchangeably for management of the one or more resources associated with the one or more NFs 218.
[0043] The receiving unit 210, the training unit 212, the updating unit 214, and the allocation unit 216, in an embodiment, may be implemented as a combination of hardware and programming (for example, programmable instructions) to implement one or more functionalities of the processor 202. In the examples described herein, such combinations of hardware and programming may be implemented in several different ways. For example, the programming for the processor 202 may be processor-executable instructions stored on a non-transitory machine-readable storage medium, and the hardware for the processor 202 may comprise a processing resource (for example, one or more processors) to execute such instructions. In the present examples, the memory 204 may store instructions that, when executed by the processing resource, implement the functionalities of the processor 202. In such examples, the system 108 may comprise the memory 204 storing the instructions and the processing resource to execute the instructions, or the memory 204 may be separate but accessible to the system 108 and the processing resource. In other examples, the processor 202 may be implemented by electronic circuitry.
[0044] In one embodiment, the receiving unit 210 is configured to receive data corresponding to each of the one or more NFs 218 via the probing unit 220. The probing unit 220 actively or passively monitors network traffic and gathers data from the one or more NFs 218 and other network entities. The gathered data includes relevant information such as usage patterns, performance metrics, traffic details, and resource allocation. The data corresponds to usage of the one or more NFs, network traffic, Streaming Data Records (SDR), and resource allocation. The usage of the one or more NFs 218 includes information regarding how the one or more NFs 218 are being utilized, such as CPU load, memory consumption, and processing capacity. The network usage metrics include, but are not limited to, bandwidth consumption, the number of active sessions, and request counts. The network traffic data includes detailed records of the data packets flowing through the network 106, including packet headers, payloads, source/destination IP addresses, protocols, etc. The SDR includes real-time or near real-time data from streaming applications or services, i.e., information related to continuous streams of data flowing between the one or more NFs 218. The resource allocation data includes data related to scaling operations, such as scale-up or scale-down decisions, and load balancing or rerouting actions taken in response to traffic changes. This data is essential for analyzing network performance, resource utilization, and the overall operation of the one or more NFs 218.
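For illustration only (not part of the specification as filed), the kind of per-NF record the probing unit 220 might gather can be sketched as a simple Python data structure; every field name and value below is a hypothetical example:

```python
from dataclasses import dataclass

@dataclass
class NFProbeRecord:
    """Illustrative record of the data a probing unit might gather per NF."""
    nf_name: str          # e.g. "AMF", "SMF", "UPF", "PCF"
    cpu_load: float       # fraction of allocated CPU in use, 0.0-1.0
    memory_mb: int        # memory consumption
    active_sessions: int  # one of the network usage metrics
    bandwidth_mbps: float # bandwidth consumption
    request_count: int    # request counts over the probe interval

# A hypothetical sample reading for an AMF instance
record = NFProbeRecord("AMF", cpu_load=0.62, memory_mb=2048,
                       active_sessions=15000, bandwidth_mbps=850.0,
                       request_count=42000)
print(record.nf_name, record.cpu_load)
```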
[0045] Upon receiving the data corresponding to each of the one or more NFs 218, the training unit 212 is configured to train an Artificial Intelligence (AI) model. The AI model is trained utilizing the received data corresponding to each of the one or more NFs 218. In an embodiment, the AI model is trained utilizing historical data corresponding to each of the one or more NFs 218. The AI model is trained to identify one or more features associated with each of the one or more NFs 218. The one or more features correspond to trends, patterns, anomalies, and correlations of the one or more NFs 218. Trends refer to long-term directional changes in the performance or usage of each of the one or more NFs 218 over time, for example, increasing CPU or memory usage over time due to growing traffic demands. Patterns refer to regular, recurring behaviors or sequences that occur over specific time intervals in the operation of each of the one or more NFs 218, for example, daily traffic surges at predictable times, such as during business hours when user activity peaks. Anomalies are unexpected deviations from the normal behavior or expected operation of each of the one or more NFs 218, including, but not limited to, potential issues, errors, or unusual conditions that require attention, for example, unexpected CPU or memory usage increases without a corresponding increase in traffic. Correlations refer to the relationships between two or more metrics or behaviors of each of the one or more NFs 218 and help to understand how the one or more NFs impact each other, for example, the correlation between traffic load and CPU usage, i.e., as traffic load increases, the CPU usage of the NF also increases.
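The anomaly feature described above can be approximated, purely for illustration, by a simple z-score check over a metric series; this statistical stand-in is not the trained AI model of the specification, and the threshold and data are hypothetical:

```python
import statistics

def detect_anomalies(series, threshold=2.0):
    """Flag indices whose z-score exceeds `threshold` -- a simple stand-in
    for the anomaly feature the trained AI model would learn."""
    mean = statistics.fmean(series)
    stdev = statistics.stdev(series)
    if stdev == 0:
        return []
    return [i for i, x in enumerate(series)
            if abs(x - mean) / stdev > threshold]

# A CPU-load series with one unexpected spike at index 5
cpu_load = [0.41, 0.43, 0.40, 0.42, 0.44, 0.97, 0.41]
print(detect_anomalies(cpu_load))  # [5]
```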
[0046] Once the AI model is trained to identify the one or more features associated with each of the one or more NFs 218, the trained AI model compares real-time data generated by the one or more NFs 218 with the one or more identified features. Upon comparison, if the AI model detects a deviation in the one or more identified features of the one or more NFs 218, the updating unit 214 is configured to update the one or more policies associated with the one or more NFs 218. The deviations include, but are not limited to, anomalies, changes in trends, and new correlations. The one or more policies refer to a set of predefined rules or guidelines that control how resources (such as CPU, memory, bandwidth, etc.) are allocated and managed for each of the one or more NFs 218. The one or more policies include, but are not limited to, resource scaling policies, load balancing and traffic rerouting policies, Quality of Service (QoS) policies, security and access control policies, energy efficiency policies, data retention and logging policies, and anomaly detection policies. The resource scaling policies automatically scale computational, network, or storage resources up or down based on traffic load or demand. The load balancing policies include policies for rerouting network traffic to optimize performance or to avoid congestion, and balance the network traffic between the one or more NFs 218. The QoS policies prioritize resources for certain types of traffic or certain of the one or more NFs 218 (e.g., low-latency applications like gaming). The security and access control policies enforce secure access to the one or more NFs and data flows, such as encryption or user authentication rules. The energy efficiency policies reduce resource consumption during periods of low demand by scaling down or turning off the one or more NFs 218. The data retention and logging policies manage storage resources by defining how long usage data, traffic logs, and Streaming Data Records (SDR) should be retained. The anomaly detection policies trigger resource adjustments or rerouting based on detected anomalies or deviations from normal operating patterns.
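The deviation-to-policy step described above might be sketched as follows; the mapping, policy structure, and field names are illustrative assumptions, with only the policy category names taken from the text:

```python
# Hypothetical mapping from a detected deviation type to the policy
# category (named as in the text above) that it would cause to be revised.
DEVIATION_TO_POLICY = {
    "anomaly": "anomaly detection policy",
    "trend change": "resource scaling policy",
    "new correlation": "load balancing policy",
}

def update_policies(deviation_type, policies):
    """Mark the policy associated with a detected deviation for revision."""
    name = DEVIATION_TO_POLICY.get(deviation_type)
    if name in policies:
        policies[name]["revised"] = True
    return policies

policies = {"resource scaling policy": {"revised": False, "max_cpu_cores": 8}}
print(update_policies("trend change", policies))
```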
[0047] Upon updating the one or more policies, the allocation unit 216 is configured to dynamically allocate one or more resources to each of the one or more NFs 218. The dynamic allocation of the one or more resources includes at least scaling up of the resources, scaling down of the resources, and rerouting of the network traffic associated with each of the one or more NFs 218. Scaling up refers to increasing the resources allocated to each of the one or more NFs 218 when demand rises, such as during traffic surges or increased workloads, for example, adding CPU power, increasing memory, or allocating more bandwidth. Scaling down refers to reducing the resources allocated to each of the one or more NFs 218 when demand decreases, ensuring that resources are not wasted, for example, reducing CPU cores, reducing memory usage, or lowering bandwidth allocation. Rerouting of the network traffic refers to the redirection of network traffic from one NF to another to manage load distribution or address performance issues, for example, load balancing, avoiding bottlenecks, or handling failures.
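The scale-up and scale-down actions described above can be sketched as a simple threshold rule; the threshold values are assumptions made for illustration, since in the system described the decision logic would come from the updated policies:

```python
def allocation_action(cpu_load, scale_up_at=0.8, scale_down_at=0.3):
    """Choose an allocation action for one NF from its CPU load.
    Thresholds are illustrative, not taken from the specification."""
    if cpu_load > scale_up_at:
        return "scale up"    # e.g. add CPU cores, memory, or bandwidth
    if cpu_load < scale_down_at:
        return "scale down"  # release resources so they are not wasted
    return "steady"          # load is within the normal operating band

print(allocation_action(0.92))  # scale up
print(allocation_action(0.15))  # scale down
```

Rerouting would sit on top of such per-NF decisions, e.g. shifting traffic from an NF that keeps hitting the scale-up threshold toward one that is steadily below it.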
[0048] Therefore, the system 108 provides automated data enrichment and policy-based automation. Further, the system 108 automates the process of data enrichment, extracting relevant insights from large amounts of network data. Further, the system 108 enables automated policy implementation and adaptation for the one or more NFs based on real-time network conditions. Further, the system 108 improves network quality, reduces downtime, and enhances the user or subscriber experience.
[0049] FIG. 3 is an exemplary block diagram of an architecture 300 of the system 108 for management of the one or more resources associated with the one or more NFs 218, according to one or more embodiments of the present invention.
[0050] The architecture 300 includes the probing unit 220, a processing hub 302, and the user interface 206. The processing hub 302 includes a data collection and integration unit 304, a data enrichment unit 306, a model training unit 308, a real-time monitoring unit 310, and a policy automation unit 312. Further, a data lake 314 is communicably coupled to the real-time monitoring unit 310.
[0051] In an embodiment, the data collection and integration unit 304 receives the data corresponding to each of the one or more NFs 218 from the probing unit 220. The data is received from the probing unit 220 via a processing hub-probing unit interface. The data corresponds to usage of the one or more NFs 218, network traffic, Streaming Data Records (SDRs), and resource allocation. The data collection and integration unit 304 performs all integration operations on the received data.
[0052] Upon receiving the data and integrating the received data, the data enrichment unit 306 preprocesses the received data. The preprocessing of the received data includes data cleaning, data normalization, and transformation. The data cleaning includes removing any inconsistencies, errors, or irrelevant data. The data normalization scales the data to a common range. The transformation, performed when required, refers to converting the data into a suitable format for analysis. The transformation is crucial after data cleaning and normalization, as it ensures the data aligns with the requirements of subsequent processing or analysis tasks.
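The cleaning, normalization, and transformation steps above can be sketched minimally. The record shapes and the specific transformation (index-value pairing) are assumptions for illustration; the specification does not fix them.

```python
# Illustrative sketch of the preprocessing pipeline described above.
# Record shapes and the chosen transformation are assumptions.

def clean(records):
    """Data cleaning: drop missing, erroneous, or non-numeric entries."""
    return [r for r in records if isinstance(r, (int, float)) and not isinstance(r, bool)]

def normalize(values):
    """Data normalization: scale values to the common range [0, 1]."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

def transform(values):
    """Transformation: convert to a format suitable for analysis
    (here, index-value pairs usable as simple feature vectors)."""
    return [(i, v) for i, v in enumerate(values)]

raw = [10, None, 20, "bad", 30]          # raw probe data with artifacts
prepared = transform(normalize(clean(raw)))
```

Each stage feeds the next, mirroring the order the paragraph prescribes: clean first, then normalize, then transform when required.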
[0053] Upon enriching the data, the model training unit 308 trains the AI model by using the enriched data. The AI model is trained by utilizing the enriched data corresponding to each of the one or more NFs 218 to identify the one or more features associated with each of the one or more NFs 218. The AI model can also utilize the historical data stored in the data lake 314. The one or more features of each of the one or more NFs corresponds to trends, patterns, anomalies and correlation of the one or more NFs. Further, the enriched data and the model output are stored in the data lake 314.
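As a sketch of the training step, one simple realization is to learn per-metric statistics from the enriched historical data, from which trends and anomalies can later be judged. The statistics-based model is an assumption; the specification does not commit to a particular AI model type.

```python
# Illustrative sketch (assumed model type): learn mean/stdev per metric
# from enriched historical samples stored, e.g., in a data lake.
import statistics

def train(history):
    """Build a per-metric model {metric: {'mean': ..., 'stdev': ...}}."""
    model = {}
    for metric, samples in history.items():
        model[metric] = {
            "mean": statistics.mean(samples),     # central trend of the metric
            "stdev": statistics.pstdev(samples),  # spread, for anomaly bands
        }
    return model

# Hypothetical enriched history for one NF
model = train({"cpu_util": [0.4, 0.5, 0.6], "latency_ms": [10, 12, 14]})
```

A production system would use a richer model (e.g. one capturing correlations between NFs), but the learned statistics are enough to anchor the deviation check described next.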
[0054] Upon training the AI model, the real-time monitoring unit 310 compares the real-time data generated by the one or more NFs with the predictions made by the AI model. Upon comparison, if a deviation is detected in the one or more features of the one or more NFs 218, the real-time monitoring unit 310 generates an alert or notification to the user interface 206.
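The comparison step can be sketched as a band check: real-time values outside the model's expected range raise an alert. The 3-sigma band and the alert dict shape are assumptions, not taken from the specification.

```python
# Illustrative sketch: flag a real-time value that deviates from the
# learned band for its metric. The k-sigma threshold is assumed.

def check(model, metric, value, k=3.0):
    """Return an alert dict if `value` deviates from the learned band, else None."""
    m = model[metric]
    lo = m["mean"] - k * m["stdev"]
    hi = m["mean"] + k * m["stdev"]
    if not (lo <= value <= hi):
        return {"metric": metric, "value": value, "expected": (lo, hi)}
    return None

# Hypothetical trained statistics for one metric
model = {"latency_ms": {"mean": 12.0, "stdev": 2.0}}
alert = check(model, "latency_ms", 30.0)   # deviation -> alert generated
ok = check(model, "latency_ms", 13.0)      # within band -> no alert
```

In the architecture above, a non-None result would be forwarded to the user interface 206 as the alert or notification.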
[0055] Upon monitoring the data, the policy automation unit 312 manages policy evaluation and automation based on the received model output. The policy evaluation evaluates the existing NF policies, compares the existing NF policies with the enriched data, and identifies areas where the policies need to be updated or adjusted. The policy automation scales resources up or down, reroutes traffic, or takes other actions to align NF operations with the policies, updating the policies when required. More specifically, the policy automation unit 312 updates the one or more policies and automatically allocates the one or more resources to each of the one or more NFs based on the one or more updated policies. Subsequently, the model outputs are transmitted to the user interface 206 so that proactive measures can be taken to prevent service disruptions.
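A minimal sketch of the evaluate-then-update loop: when the observed behavior exceeds what an existing policy allows, the policy is updated and an automation action is reported. The policy structure and update rule are assumptions for illustration.

```python
# Illustrative sketch (assumed policy shape): evaluate an NF policy against
# observed data and update it when they disagree.

def evaluate_and_update(policy, observed_peak_cpu):
    """Policy evaluation + automation for one NF.

    If the observed CPU peak exceeds the policy's limit, raise the limit
    (policy update) and return the automation action to take.
    """
    if observed_peak_cpu > policy["max_cpu"]:
        policy["max_cpu"] = observed_peak_cpu   # update the policy
        return "scale_up"                       # align NF operations with policy
    return "no_action"

policy = {"nf": "nf1", "max_cpu": 4}
action = evaluate_and_update(policy, 6)   # enriched data shows a higher peak
```

Real policy automation would cover rerouting and scale-down as well; the sketch shows only the evaluation-update-act cycle the paragraph describes.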
[0056] FIG. 4 is a flow diagram for management of the one or more resources associated with the one or more NFs 218, according to one or more embodiments of the present invention.
[0057] At step 402, the data collection and integration unit 304 receives the data from the probing unit 220 via the processing hub-probing unit interface. The data corresponds to usage of the one or more NFs 218, network traffic, Streaming Data Records (SDRs), and resource allocation. The data collection and integration unit 304 performs all integration operations on the received data.
[0058] At step 404, upon receiving the data and integrating the received data, the data enrichment unit 306 preprocesses the received data. The preprocessing of the received data includes data cleaning, data normalization, and transformation. The data cleaning includes removing any inconsistencies, errors, or irrelevant data. The data normalization scales the data to a common range.
[0059] At step 406, upon preprocessing the received data, the real-time monitoring unit 310 monitors the received data corresponding to one or more NFs 218 and compares it with the trained AI model predictions to identify deviations or anomalies. The AI model is trained by utilizing the enriched data corresponding to each of the one or more NFs 218 to identify the one or more features associated with each of the one or more NFs 218. The AI model can also utilize the historical data stored in the data lake 314.
[0060] At step 408, when the real-time monitoring unit 310 identifies deviations or anomalies, the policy automation unit 312 evaluates the one or more policies and automatically updates them if necessary.
[0061] At step 410, upon evaluating and updating the one or more policies, if the model output is not optimal, the AI model is retrained. In particular, the model output is considered not optimal when a deviation is detected or the current configuration of resources, traffic handling, or policies is not performing as expected. Alternatively, if the model output is optimal, the one or more resources are dynamically allocated to each of the one or more NFs 218. The model output is considered optimal when the real-time data aligns with the expected behavior and the one or more NFs 218 are functioning efficiently.
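The step-410 branch can be reduced to a small decision function: retrain when the output is not optimal, otherwise allocate. The two boolean criteria are simplified stand-ins for the checks described above.

```python
# Illustrative sketch of the step-410 decision: the output is optimal only
# when no deviation was detected and the NFs perform as expected.
# The boolean inputs are simplified assumptions.

def step_410(deviation_detected, performing_as_expected):
    """Decide between retraining the AI model and allocating resources."""
    optimal = (not deviation_detected) and performing_as_expected
    return "allocate_resources" if optimal else "retrain_model"
```

Either outcome feeds back into the flow: retraining returns to the model training unit, while allocation proceeds to the one or more NFs 218.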
[0062] FIG. 5 is a flow diagram of a method 500 for management of the one or more resources associated with the one or more NFs 218, according to one or more embodiments of the present invention. For the purpose of description, the method 500 is described with the embodiments as illustrated in FIG. 2 and should nowhere be construed as limiting the scope of the present disclosure.
[0063] At step 502, the method 500 includes the step of receiving the data corresponding to each of the one or more NFs 218 via the probing unit 220 by the receiving unit 210. The data corresponds to usage of the one or more NFs 218, network traffic, Streaming Data Records (SDRs), and resource allocation. The received data is enriched.
[0064] At step 504, the method 500 includes the step of training the AI model utilizing the received data corresponding to each of the one or more NFs 218 to identify one or more features associated with each of the one or more NFs 218 by the training unit 212. The one or more features of each of the one or more NFs 218 corresponds to trends, patterns, anomalies and correlation of the one or more NFs 218.
[0065] At step 506, the method 500 includes the step of updating the one or more policies based on detection of the deviation on comparison of the real time data generated by the one or more NFs 218 with the one or more identified features by the updating unit 214.
[0066] At step 508, the method 500 includes the step of dynamically allocating the one or more resources to each of the one or more NFs 218 based on the one or more updated policies by the allocation unit 216. The dynamically allocating of the resources includes at least scaling up of the resources, scaling down of the resources, and rerouting of the network traffic associated with each of the one or more NFs 218.
[0067] The present invention further discloses a non-transitory computer-readable medium having stored thereon computer-readable instructions. The computer-readable instructions are executed by the processor 202. The processor 202 is configured to receive the data corresponding to each of the one or more NFs 218 via the probing unit 220. The processor 202 is further configured to train the AI model utilizing the received data corresponding to each of the one or more NFs 218 to identify one or more features of each of the one or more NFs 218. The processor 202 is further configured to update the one or more policies based on detection of the deviation on comparison of the real time data generated by the one or more NFs 218 with the one or more identified features. The processor 202 is further configured to dynamically allocate the resources of each of the one or more NFs 218 based on the one or more updated policies.
[0068] A person of ordinary skill in the art will readily ascertain that the illustrated embodiments and steps in description and drawings (FIG.1-5) are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope and spirit of the disclosed embodiments.
[0069] The present disclosure incorporates technical advancement of optimizing network performance and reducing resource wastage. The present invention allows the system to make informed decisions and adjust policies automatically in response to deviations in expected behavior, ensuring more responsive network management. The present invention minimizes potential downtime, ensuring higher service availability. The present invention ensures that resources are always efficiently utilized, preventing over-provisioning and under-provisioning. The present invention enhances the system's ability to make more accurate decisions regarding resource allocation. The present invention ensures that each NF operates at optimal performance and contributes to better Quality of Service (QoS) and improved overall network efficiency.
[0070] The present invention offers multiple advantages over the prior art and the above listed are a few examples to emphasize on some of the advantageous features. The listed advantages are to be read in a non-limiting manner.

REFERENCE NUMERALS

[0071] Environment- 100
[0072] User Equipment (UE)- 102
[0073] Server- 104
[0074] Network- 106
[0075] System -108
[0076] Processor- 202
[0077] Memory- 204
[0078] User Interface- 206
[0079] Database- 208
[0080] Receiving Unit- 210
[0081] Training Unit- 212
[0082] Updating unit- 214
[0083] Allocation Unit- 216
[0084] One or more network functions- 218
[0085] Probing Unit- 220
[0086] Processing hub- 302
[0087] Data collection and integration unit-304
[0088] Data enrichment unit- 306
[0089] Model training unit- 308
[0090] Real-time monitoring unit- 310
[0091] Policy automation unit- 312
[0092] Data lake- 314
CLAIMS
We Claim:
1. A method (500) for management of one or more resources associated with one or more Network Functions (NFs) (218), the method (500) comprising the steps of:
receiving, by one or more processors (202), data corresponding to each of the one or more NFs (218) via a probing unit (220);
training, by the one or more processors (202), an AI model utilizing the received data corresponding to each of the one or more NFs (218) to identify one or more features associated with each of the one or more NFs (218);
updating, by the one or more processors (202), one or more policies based on detection of a deviation on comparison of a real time data generated by the one or more NFs (218) with the one or more identified features; and
dynamically allocating, by the one or more processors (202), one or more resources to each of the one or more NFs (218) based on the one or more updated policies.

2. The method (500) as claimed in claim 1, wherein the data corresponds to usage of the one or more NFs (218), network traffic, Streaming Data Records (SDRs), and resource allocation, and wherein the received data is enriched.

3. The method (500) as claimed in claim 1, wherein the one or more features of each of the one or more NFs (218) corresponds to trends, patterns, anomalies and correlation of the one or more NFs (218).

4. The method (500) as claimed in claim 1, wherein dynamically allocating the resources comprises at least scaling up of the resources, scaling down of the resources, and rerouting of the network traffic associated with each of the one or more NFs (218).

5. A system (108) for management of one or more resources associated with one or more Network Functions (NFs) (218), the system (108) comprising:
a receiving unit (210) configured to receive data corresponding to each of the one or more NFs (218) via a probing unit (220);
a training unit (212) configured to train an AI model utilizing the received data corresponding to each of the one or more NFs (218) to identify one or more features associated with each of the one or more NFs (218);
an updating unit (214) configured to update one or more policies based on detection of a deviation on comparison of a real time data generated by the one or more NFs (218) with the one or more identified features; and
an allocation unit (216) configured to dynamically allocate one or more resources to each of the one or more NFs (218) based on the one or more updated policies.

6. The system (108) as claimed in claim 5, wherein the data corresponds to usage of the one or more NFs (218), network traffic, Streaming Data Records (SDRs), and resource allocation, and wherein the received data is enriched.

7. The system (108) as claimed in claim 5, wherein the one or more features of each of the one or more NFs (218) corresponds to trends, patterns, anomalies and correlation of the one or more NFs (218).

8. The system (108) as claimed in claim 5, wherein dynamically allocating the resources comprises at least scaling up of the resources, scaling down of the resources, and rerouting of the network traffic associated with each of the one or more NFs (218).

Documents

Application Documents

# Name Date
1 202321067387-STATEMENT OF UNDERTAKING (FORM 3) [07-10-2023(online)].pdf 2023-10-07
2 202321067387-PROVISIONAL SPECIFICATION [07-10-2023(online)].pdf 2023-10-07
3 202321067387-POWER OF AUTHORITY [07-10-2023(online)].pdf 2023-10-07
4 202321067387-FORM 1 [07-10-2023(online)].pdf 2023-10-07
5 202321067387-FIGURE OF ABSTRACT [07-10-2023(online)].pdf 2023-10-07
6 202321067387-DRAWINGS [07-10-2023(online)].pdf 2023-10-07
7 202321067387-DECLARATION OF INVENTORSHIP (FORM 5) [07-10-2023(online)].pdf 2023-10-07
8 202321067387-FORM-26 [27-11-2023(online)].pdf 2023-11-27
9 202321067387-Proof of Right [12-02-2024(online)].pdf 2024-02-12
10 202321067387-DRAWING [01-10-2024(online)].pdf 2024-10-01
11 202321067387-COMPLETE SPECIFICATION [01-10-2024(online)].pdf 2024-10-01
12 Abstract.jpg 2024-11-21
13 202321067387-Power of Attorney [24-01-2025(online)].pdf 2025-01-24
14 202321067387-Form 1 (Submitted on date of filing) [24-01-2025(online)].pdf 2025-01-24
15 202321067387-Covering Letter [24-01-2025(online)].pdf 2025-01-24
16 202321067387-CERTIFIED COPIES TRANSMISSION TO IB [24-01-2025(online)].pdf 2025-01-24
17 202321067387-FORM 3 [31-01-2025(online)].pdf 2025-01-31