
Method And System For Provisioning Policies For One Or More Network Functions

Abstract: The present disclosure relates to a system (108) and a method (500) for provisioning policies for one or more network functions (222). The system (108) includes a receiving unit (210) to receive data from one or more sources within a network (106). The system (108) includes an analyzing unit (212) to analyze the received data with one or more trained models to identify at least one of, patterns, anomalies, and trends pertaining to an operation of the one or more network functions (222). The system (108) includes a categorizing unit (214) to categorize, based on the analysis, one or more policies into groups associated to each of the one or more network functions (222). The system (108) includes a provisioning unit (216) to provision the one or more grouped policies to a network function (222) of the one or more network functions (222) based on the categorization. Ref. Fig. 2


Patent Information

Application #
Filing Date
07 October 2023
Publication Number
15/2025
Publication Type
INA
Invention Field
COMPUTER SCIENCE
Status
Email
Parent Application

Applicants

JIO PLATFORMS LIMITED
OFFICE-101, SAFFRON, NR. CENTRE POINT, PANCHWATI 5 RASTA, AMBAWADI, AHMEDABAD - 380006, GUJARAT, INDIA

Inventors

1. Jugal Kishore
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
2. Aniket Khade
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
3. Shashank Bhushan
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
4. Aayush Bhatnagar
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
5. Dharmendra Kumar Vishwakarma
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
6. Gaurav Kumar
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
7. Supriya Kaushik De
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
8. Shubham Ingle
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
9. Chandra Ganveer
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
10. Ankit Murarka
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
11. Sajal Soni
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
12. Durgesh Kumar
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
13. Sunil Meena
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
14. Sanjana Chaudhary
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
15. Sanket Kumthekar
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
16. Zenith Kumar
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
17. Avinash Kushwaha
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
18. Niharika Patnam
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
19. Manasvi Rajani
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
20. Kumar Debashish
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
21. Yogesh Kumar
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
22. Harsh Poddar
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
23. Gourav Gurbani
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
24. Vinay Gayki
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
25. Mohit Bhanwria
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
26. Kishan Sahu
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
27. Mehul Tilala
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
28. Satish Narayan
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
29. Rahul Kumar
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
30. Harshita Garg
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
31. Kunal Telgote
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
32. Ralph Lobo
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
33. Girish Dange
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.

Specification

DESC:
FORM 2
THE PATENTS ACT, 1970
(39 of 1970)
&
THE PATENTS RULES, 2003

COMPLETE SPECIFICATION
(See section 10 and rule 13)
1. TITLE OF THE INVENTION
METHOD AND SYSTEM FOR PROVISIONING POLICIES FOR ONE OR MORE NETWORK FUNCTIONS
2. APPLICANT(S)
NAME NATIONALITY ADDRESS
JIO PLATFORMS LIMITED INDIAN OFFICE-101, SAFFRON, NR. CENTRE POINT, PANCHWATI 5 RASTA, AMBAWADI, AHMEDABAD 380006, GUJARAT, INDIA
3. PREAMBLE TO THE DESCRIPTION

THE FOLLOWING SPECIFICATION PARTICULARLY DESCRIBES THE NATURE OF THIS INVENTION AND THE MANNER IN WHICH IT IS TO BE PERFORMED.

FIELD OF THE INVENTION
[0001] The present invention relates to a network management system, and more particularly, to a method and a system for provisioning policies for one or more network functions.
BACKGROUND OF THE INVENTION
[0002] At present, network engineers encounter numerous challenges in managing and provisioning policies for network functions (NFs). One of the challenges is manual policy upgradation. Generally, policy provisioning for NFs is a manual and time-consuming process, as it requires engineers to individually upgrade policies for each NF. Such an approach leads to inefficiencies and potential errors. Further, existing systems lack automation capabilities, which makes it difficult to handle policy updates efficiently, especially in dynamic network environments. Another challenge relates to the NF-specific policies required for different NFs. Different NFs often require specific policies tailored to their functionalities and requirements, which makes managing such NF-specific policies complex and error-prone.
[0003] Another challenge relates to scalability issues. As a network expands, the number of NFs increases, which makes policy provisioning difficult. As a result, scalability is hindered and the ability to keep policies up to date is reduced. Hence, there is a need for an improved method and system that can provision policies for network functions.
SUMMARY OF THE INVENTION
[0004] One or more embodiments of the present disclosure provide a method and system for provisioning policies for one or more network functions.
[0005] In one aspect of the present invention, the system for provisioning policies for the one or more network functions is disclosed. The system includes a receiving unit configured to receive data from one or more sources within a network. The system further includes an analyzing unit configured to analyze the received data with one or more trained models to identify at least one of, patterns, anomalies, and trends pertaining to an operation of the one or more network functions. The system further includes a categorizing unit configured to categorize, based on the analysis, one or more policies into groups associated to each of the one or more network functions. The system further includes a provisioning unit configured to provision the one or more grouped policies to a network function of the one or more network functions based on the categorization.
[0006] In an embodiment, the one or more sources include at least one of a file input, a source path, an input stream, Hyper Text Transfer Protocol 2 (HTTP 2), Hadoop Distributed File System (HDFS), and Network Attached Storage (NAS), and the data includes performance data pertaining to the one or more network functions and policy templates.
[0007] In an embodiment, the one or more trained models are trained utilizing historical data corresponding to an operation of the one or more network functions retrieved from a database.
[0008] In an embodiment, the provisioning unit is further configured to set up the one or more grouped policies at the network function and update the one or more grouped policies in response to identification of a deviation in performance of the network function.
[0009] In an embodiment, the system further includes a generating unit configured to generate one or more grouped policies based on the categorization of the one or more policies. In an embodiment, the system further includes a customizing unit configured to customize the one or more grouped policies as per a performance characteristic and requirement of the network function.
[0010] In an embodiment, the generating unit is configured to track performance of the network function subsequent to the provisioning of the one or more policies thereof. The generating unit is further configured to identify one or more deviations in the performance of the network function. The generating unit is further configured to generate at least one update to address the one or more identified deviations.
[0011] In another aspect of the present invention, the method of provisioning policies for the one or more network functions is disclosed. The method includes the step of receiving data from one or more sources within a network. The method further includes the step of analyzing the received data with one or more trained models to identify at least one of, patterns, anomalies, and trends pertaining to an operation of the one or more network functions. The method further includes the step of categorizing based on the analysis, one or more policies into groups associated to each of the one or more network functions. The method further includes the step of provisioning the one or more grouped policies to a network function of the one or more network functions based on the categorization.
[0012] In another aspect of the invention, a non-transitory computer-readable medium having stored thereon computer-readable instructions is disclosed. The computer-readable instructions are executed by a processor. The processor is configured to receive data from one or more sources within a network. The processor is configured to analyze the received data with one or more trained models to identify at least one of, patterns, anomalies, and trends pertaining to an operation of the one or more network functions. The processor is configured to categorize based on the analysis, one or more policies into groups associated to each of the one or more network functions. The processor is configured to provision the one or more grouped policies to a network function of the one or more network functions based on the categorization.
[0013] Other features and aspects of this invention will be apparent from the following description and the accompanying drawings. The features and advantages described in this summary and in the following detailed description are not all-inclusive, and particularly, many additional features and advantages will be apparent to one of ordinary skill in the relevant art, in view of the drawings, specification, and claims hereof. Moreover, it should be noted that the language used in the specification has been principally selected for readability and instructional purposes and may not have been selected to delineate or circumscribe the inventive subject matter, resort to the claims being necessary to determine such inventive subject matter.
BRIEF DESCRIPTION OF THE DRAWINGS
[0014] The accompanying drawings, which are incorporated herein, and constitute a part of this disclosure, illustrate exemplary embodiments of the disclosed methods and systems in which like reference numerals refer to the same parts throughout the different drawings. Components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present disclosure. Some drawings may indicate the components using block diagrams and may not represent the internal circuitry of each component. It will be appreciated by those skilled in the art that disclosure of such drawings includes disclosure of electrical components, electronic components or circuitry commonly used to implement such components.
[0015] FIG. 1 is an exemplary block diagram of an environment for provisioning policies for one or more network functions, according to one or more embodiments of the present invention;
[0016] FIG. 2 is an exemplary block diagram of a system for provisioning policies for the one or more network functions, according to one or more embodiments of the present invention;
[0017] FIG. 3 is an exemplary block diagram of an architecture implemented in the system of the FIG. 2, according to one or more embodiments of the present invention;
[0018] FIG. 4 is a flow diagram for provisioning policies for the one or more network functions, according to one or more embodiments of the present invention; and
[0019] FIG. 5 is a schematic representation of a method of provisioning policies for the one or more network functions, according to one or more embodiments of the present invention.
[0020] The foregoing shall be more apparent from the following detailed description of the invention.
DETAILED DESCRIPTION OF THE INVENTION
[0021] Some embodiments of the present disclosure, illustrating all its features, will now be discussed in detail. It must also be noted that as used herein and in the appended claims, the singular forms "a", "an" and "the" include plural references unless the context clearly dictates otherwise.
[0022] Various modifications to the embodiment will be readily apparent to those skilled in the art and the generic principles herein may be applied to other embodiments. However, one of ordinary skill in the art will readily recognize that the present disclosure including the definitions listed here below are not intended to be limited to the embodiments illustrated but is to be accorded the widest scope consistent with the principles and features described herein.
[0023] A person of ordinary skill in the art will readily ascertain that the illustrated steps detailed in the figures and here below are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope and spirit of the disclosed embodiments.
[0024] The present invention provides systems and methods for provisioning policies for network functions. The present invention includes collecting relevant data from various sources. Examples of relevant data include but are not limited to network functions (NF), NF performance data and policy templates. The NF performance data is analyzed by advanced Artificial Intelligence / Machine Learning algorithms to determine policy upgrade requirements. Further, policies are categorized based on NF-specific requirements and performance trends. Then the updated policy configurations are generated based on the analysis and identified requirements.
[0025] FIG. 1 illustrates an exemplary block diagram of an environment 100 for provisioning policies for one or more network functions 222 (as show in FIG.2), according to one or more embodiments of the present disclosure. In this regard, the environment 100 includes a User Equipment (UE) 102, a server 104, a network 106 and a system 108 communicably coupled to each other for provisioning policies for the one or more network functions 222.
[0026] In an embodiment, the one or more network functions refer to functional components or services within a network infrastructure. The network functions 222 are responsible for handling various tasks such as routing, switching, processing data, security, and ensuring the overall performance and operation of the network 106. The one or more network functions 222 include, but are not limited to, routing, switching, firewalls, load balancing, session management, Quality of Service (QoS), Access and Mobility Management Function (AMF), Session Management Function (SMF), User Plane Function (UPF), network firewall function, and Network Address Translation (NAT) function. In an embodiment, provisioning policies for the one or more network functions 222 refer to the set of rules, guidelines, or configurations that govern how resources, services, or network functions 222 are allocated, managed, and optimized within a network environment. The provisioning policies are applied to ensure that the network functions 222 perform efficiently and meet the desired operational requirements, such as performance, security, and resource allocation. The provisioning policies include, but are not limited to, performance-based policies, security policies, traffic management policies, Service-Level Agreement (SLA) compliance policies, network function lifecycle policies, compliance and regulatory policies, and customization policies.
[0027] As per the illustrated embodiment and for the purpose of description and illustration, the UE 102 includes, but not limited to, a first UE 102a, a second UE 102b, and a third UE 102c, and should nowhere be construed as limiting the scope of the present disclosure. In alternate embodiments, the UE 102 may include a plurality of UEs as per the requirement. For ease of reference, each of the first UE 102a, the second UE 102b, and the third UE 102c, will hereinafter be collectively and individually referred to as the “User Equipment (UE) 102”.
[0028] In an embodiment, the UE 102 is one of, but not limited to, any electrical, electronic, electro-mechanical or an equipment and a combination of one or more of the above devices such as a smartphone, virtual reality (VR) devices, augmented reality (AR) devices, laptop, a general-purpose computer, desktop, personal digital assistant, tablet computer, mainframe computer, or any other computing device.
[0029] The environment 100 includes the server 104 accessible via the network 106. The server 104 may include, by way of example but not limitation, one or more of a standalone server, a server blade, a server rack, a bank of servers, a server farm, hardware supporting a part of a cloud service or system, a home server, hardware running a virtualized server, one or more processors executing code to function as a server, one or more machines performing server-side functionality as described herein, at least a portion of any of the above, or some combination thereof. In an embodiment, the entity may include, but is not limited to, a vendor, a network operator, a company, an organization, a university, a lab facility, a business enterprise side, a defense facility side, or any other facility that provides service.
[0030] The network 106 includes, by way of example but not limitation, one or more of a wireless network, a wired network, an internet, an intranet, a public network, a private network, a packet-switched network, a circuit-switched network, an ad hoc network, an infrastructure network, a Public-Switched Telephone Network (PSTN), a cable network, a cellular network, a satellite network, a fiber optic network, or some combination thereof. The network 106 may include, but is not limited to, a Third Generation (3G), a Fourth Generation (4G), a Fifth Generation (5G), a Sixth Generation (6G), a New Radio (NR), a Narrow Band Internet of Things (NB-IoT), an Open Radio Access Network (O-RAN), and the like.
[0031] The network 106 may also include, by way of example but not limitation, at least a portion of one or more networks having one or more nodes that transmit, receive, forward, generate, buffer, store, route, switch, process, or a combination thereof, etc. one or more messages, packets, signals, waves, voltage or current levels, some combination thereof, or so forth. The network 106 may also include, by way of example but not limitation, one or more of a wireless network, a wired network, an internet, an intranet, a public network, a private network, a packet-switched network, a circuit-switched network, an ad hoc network, an infrastructure network, a Public-Switched Telephone Network (PSTN), a cable network, a cellular network, a satellite network, a fiber optic network, a VOIP or some combination thereof.
[0032] The environment 100 further includes the system 108 communicably coupled to the server 104 and the UE 102 via the network 106. The system 108 is configured to provide provisioning policies for the one or more network functions 222. As per one or more embodiments, the system 108 is adapted to be embedded within the server 104 or embedded as an individual entity.
[0033] Operational and construction features of the system 108 will be explained in detail with respect to the following figures.
[0034] FIG. 2 is an exemplary block diagram of the system 108 for provisioning policies for the one or more network functions 222, according to one or more embodiments of the present invention.
[0035] As per the illustrated embodiment, the system 108 includes one or more processors 202, a memory 204, a user interface 206, and a database 208. For the purpose of description and explanation, the description will be explained with respect to one processor 202 and should nowhere be construed as limiting the scope of the present disclosure. In alternate embodiments, the system 108 may include more than one processor 202 as per the requirement of the network 106. The one or more processors 202, hereinafter referred to as the processor 202 may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, single board computers, and/or any devices that manipulate signals based on operational instructions.
[0036] As per the illustrated embodiment, the processor 202 is configured to fetch and execute computer-readable instructions stored in the memory 204. The memory 204 may be configured to store one or more computer-readable instructions or routines in a non-transitory computer-readable storage medium, which may be fetched and executed to create or share data packets over a network service. The memory 204 may include any non-transitory storage device including, for example, volatile memory such as RAM, or non-volatile memory such as disk memory, EPROMs, FLASH memory, unalterable memory, and the like.
[0037] In an embodiment, the user interface 206 includes a variety of interfaces, for example, interfaces for a graphical user interface, a web user interface, a Command Line Interface (CLI), and the like. The user interface 206 facilitates communication of the system 108. In one embodiment, the user interface 206 provides a communication pathway for one or more components of the system 108. Examples of such components include, but are not limited to, the UE 102 and the database 208.
[0038] The database 208 is one of, but not limited to, a centralized database, a cloud-based database, a commercial database, an open-source database, a distributed database, an end-user database, a graphical database, a Not only Structured Query Language (NoSQL) database, an object-oriented database, a personal database, an in-memory database, a document-based database, a time series database, a wide column database, a key value database, a search database, a cache database, and so forth. The foregoing examples of database types are non-limiting and may not be mutually exclusive; e.g., a database can be both commercial and cloud-based, or both relational and open-source.
[0039] In order for the system 108 to provide provisioning policies for the one or more network functions 222, the processor 202 includes one or more modules. In one embodiment, the one or more modules include, but are not limited to, a receiving unit 210, an analyzing unit 212, a categorizing unit 214, a provisioning unit 216, a generating unit 218, and a customizing unit 220 communicably coupled to each other for provisioning policies for the one or more network functions 222.
[0040] In one embodiment, each of the one or more modules, the receiving unit 210, the analyzing unit 212, the categorizing unit 214, the provisioning unit 216, the generating unit 218, and the customizing unit 220, can be used in combination or interchangeably for provisioning policies for the one or more network functions 222.
[0041] The receiving unit 210, the analyzing unit 212, the categorizing unit 214, the provisioning unit 216, the generating unit 218, and the customizing unit 220, in an embodiment, may be implemented as a combination of hardware and programming (for example, programmable instructions) to implement one or more functionalities of the processor 202. In the examples described herein, such combinations of hardware and programming may be implemented in several different ways. For example, the programming for the processor 202 may be processor-executable instructions stored on a non-transitory machine-readable storage medium and the hardware for the processor 202 may comprise a processing resource (for example, one or more processors) to execute such instructions. In the present examples, the memory 204 may store instructions that, when executed by the processing resource, implement the processor 202. In such examples, the system 108 may comprise the memory 204 storing the instructions and the processing resource to execute the instructions, or the memory 204 may be separate but accessible to the system 108 and the processing resource. In other examples, the processor 202 may be implemented by electronic circuitry.
[0042] In one embodiment, the receiving unit 210 is configured to receive data from one or more sources within the network 106. The data includes performance data pertaining to the one or more network functions 222 and policy templates. The performance data refers to metrics and statistics that describe how the one or more network functions 222 are operating in real time or over a period of time. The performance data provides insights into the behavior, efficiency, and overall health of the one or more network functions 222. The performance data includes, but is not limited to, resource utilization, latency and response time, throughput, error rates and packet loss, session management data, security events, jitter and variation in delay, and availability and uptime. The resource utilization refers to Central Processing Unit (CPU) utilization, memory usage, bandwidth utilization, and storage usage. The latency and response time refer to the time taken by the one or more network functions 222 to process a request or data packet and respond. The throughput refers to the amount of data processed by the one or more network functions 222 within a specific time frame (e.g., packets per second or bytes per second). The error rates and packet loss refer to the percentage of data packets that are lost, dropped, or corrupted by the one or more network functions 222. The session management data refers to the number of active sessions the one or more network functions 222 are handling, session setup time, and session drop rate. The security events refer to the data related to security incidents, such as detected threats, blocked traffic, or failed authentication attempts in security-related network functions like firewalls or intrusion detection systems. The jitter and variation in delay refer to the variability in packet delay as data passes through the one or more network functions 222. The availability and uptime refer to the amount of time the one or more network functions 222 are operational and available to handle traffic without downtime or interruptions.
[0043] The policy templates are predefined sets of rules, configurations, or guidelines that are applied to the one or more network functions 222 to control their behavior, performance, security, and resource allocation. The policy templates include, but are not limited to, resource allocation policy templates, scaling and elasticity policy templates, Quality of Service (QoS) policy templates, security policy templates, traffic shaping and load balancing templates, Service-Level Agreement (SLA) compliance templates, and lifecycle management policy templates. The resource allocation policy template defines the amount of CPU, memory, bandwidth, or storage allocated to the one or more network functions 222. The scaling and elasticity policy templates are the rules for when and how to scale the one or more network functions 222 based on performance thresholds. The QoS policy templates set priorities for different types of traffic to ensure critical services receive the necessary resources. The security policy templates contain firewall rules, encryption standards, or access control lists (ACLs) to secure the one or more network functions 222. The traffic shaping and load balancing templates include the policies that define how traffic should be routed or shaped to ensure optimal performance. The SLA compliance templates ensure the one or more network functions 222 meet certain performance or uptime requirements as part of a contractual agreement. The lifecycle management policy templates include guidelines for the entire lifecycle of the one or more network functions 222, from deployment and configuration to updates and decommissioning.
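By way of a non-limiting illustration, a policy template of the kind described above may be represented as a simple structured object. The sketch below is written in Python purely for readability; the field names and values (for example, "nf_type" and "scale_out_threshold") are hypothetical and are not prescribed by the present disclosure.

    # Illustrative policy templates; all field names and values are hypothetical.
    scaling_policy_template = {
        "template_id": "scaling-default",
        "nf_type": "UPF",                   # network function the template targets
        "rules": {
            "scale_out_threshold": 0.80,    # scale out when CPU utilization exceeds 80%
            "scale_in_threshold": 0.30,     # scale in when CPU utilization drops below 30%
            "min_instances": 2,
            "max_instances": 10,
        },
    }

    qos_policy_template = {
        "template_id": "qos-default",
        "nf_type": "SMF",
        "rules": {
            "traffic_classes": ["voice", "video", "data"],
            "priority": {"voice": 1, "video": 2, "data": 3},
        },
    }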
[0044] In an embodiment, the received data is at least one of alarm data and counter data. The alarm data and the counter data are segregated from different databases 208. The alarm data refers to notifications or alerts generated when certain network conditions or performance thresholds are violated. The counter data refers to quantitative metrics collected over time that reflect the usage and performance of the one or more NFs 222.
[0045] The one or more sources refer to the various entities or components within the network 106 that generate or provide data used by the system for provisioning policies for the one or more network functions 222. The one or more sources include any element within the network infrastructure that collects, monitors, or reports on performance, operational metrics, or configuration data related to the one or more network functions 222 and services. The one or more sources include, but are not limited to, a file input, a source path, an input stream, Hyper Text Transfer Protocol 2 (HTTP 2), Hadoop Distributed File System (HDFS), and Network Attached Storage (NAS). The file input includes the files from local or external systems 108. The source path includes a defined directory or location from which data is pulled. The input stream includes the data which is received as continuous streams, possibly from live feeds or streaming services. The HTTP/2 is a protocol used for transmitting data over the web. The HDFS is a storage system for managing large datasets commonly used in big data applications. The NAS is a storage system that provides data access over the network 106, allowing multiple users to store and retrieve files in a centralized location.
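As a minimal sketch, and assuming JSON-formatted records, the receiving unit 210 may dispatch to a different reader depending on the configured source type. The function names below are illustrative only; readers for HTTP/2, HDFS, or NAS sources would plug in similarly through their respective client libraries.

    import json
    from pathlib import Path

    def read_file_input(path):
        """Read performance records from a single local or external file."""
        with open(path, "r") as handle:
            return json.load(handle)

    def read_source_path(directory):
        """Pull every JSON record found under a defined source path."""
        return [json.loads(p.read_text()) for p in Path(directory).glob("*.json")]

    def read_input_stream(stream):
        """Consume records from a continuous input stream, e.g., a live feed."""
        for line in stream:
            yield json.loads(line)

    def receive(source_type, location):
        """Dispatch to the appropriate reader based on the configured source."""
        readers = {"file": read_file_input, "source_path": read_source_path}
        if source_type not in readers:
            raise ValueError(f"unsupported source type: {source_type}")
        return readers[source_type](location)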
[0046] In an embodiment, real-time data is received via a probing agent 302 from the one or more sources. The probing agent 302 is a component within the network 106 designed to collect real-time data about network traffic, performance metrics, or anomalies. The probing agent 302 monitors the performance of the one or more network functions 222 and provides detailed insights into operational efficiency, latency, throughput, and error rates. The data from the probing agent 302 can include, but is not limited to, packet flows, latency metrics, and traffic volume that are essential for performance analysis and policy adaptation.
[0047] In an embodiment, the probing agent 302 checks a clear code from the one or more network functions 222 received through the one or more data sources. Further, the clear code data are used for analysis. The clear codes help in various flow analyses of the one or more network functions 222. Every clear code value signifies a different functionality. The clear code refers to a specific code or identifier used to represent certain states, statuses, or conditions of a network function. The clear codes are used to provide insights into the operational state of network elements and help identify issues, performance metrics, or events that need to be addressed. Each clear code typically signifies a unique event or operational state, such as successful operation, warnings, errors, or specific anomalies. The clear codes allow for more efficient monitoring, analysis, and troubleshooting of network functions.
[0048] Upon receiving the data, the analyzing unit 212 is configured to analyze the received data with one or more trained models. The one or more trained models include at least one Artificial Intelligence/Machine Learning (AI/ML) model. The AI/ML model is designed to analyze data from the one or more network functions 222 to optimize performance, detect issues, and make decisions about policy provisioning. The one or more trained models are trained utilizing historical data corresponding to an operation of the one or more network functions 222 retrieved from the database 208. The historical data corresponding to the operation of the one or more network functions 222 is retrieved by transmitting queries to the database 208. The queries specify which data is required (e.g., performance metrics, logs, policy information) and often include criteria like time range, specific network functions, or operational parameters. The historical data refers to previously collected and stored information that pertains to the performance, operation, and behavior of the network functions 222 over a period of time. The historical data includes performance metrics, anomalies and deviations, resource utilization trends, traffic patterns, event logs, configuration data, and failure data. The performance metrics refer to the historical data that includes records of key performance indicators (KPIs) of the network functions 222 over time. The anomalies and deviations refer to the historical data that includes logs or records of deviations from normal behavior, such as traffic spikes, resource overloads, unexpected downtime, or security breaches. The resource utilization trends refer to historical data that reflects how network resources (e.g., CPU, memory, storage) were consumed by the one or more network functions 222 over time. The traffic patterns include historical data on the amount and type of traffic handled by the one or more network functions 222 (e.g., voice, video, data) and how this traffic fluctuates over time. The event logs refer to historical logs of events that occurred within the one or more network functions 222, such as system errors, security alerts, or configuration changes. The configuration data refers to past configurations and policies applied to the one or more network functions 222 that are stored as historical data. The failure data refers to records of past failures, outages, or incidents affecting network functions, along with their root causes, resolution steps, and duration.
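The following sketch shows one possible way such a model could be fitted on historical KPI records retrieved from the database 208. It assumes scikit-learn is available and uses hypothetical KPI columns; the present disclosure does not mandate any particular library or algorithm.

    import numpy as np
    from sklearn.ensemble import IsolationForest

    # Hypothetical historical KPI rows retrieved from the database:
    # [cpu_utilization, latency_ms, packet_loss_pct]
    historical_kpis = np.array([
        [0.42, 12.0, 0.01],
        [0.45, 13.5, 0.02],
        [0.40, 11.8, 0.01],
        [0.47, 14.1, 0.03],
    ])

    # Fit an anomaly-detection model on the historical operating behavior.
    model = IsolationForest(contamination=0.05, random_state=0)
    model.fit(historical_kpis)

    # The fitted model can later flag incoming KPI rows that deviate from history:
    # model.predict(new_kpis) returns -1 for anomalous rows and 1 for normal rows.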
[0049] In an embodiment, the received data is analyzed with one or more trained models to identify at least one of patterns, anomalies and trends pertaining to an operation of the one or more network functions 222. The patterns refer to recurring behaviors or regularities in the operation of the one or more network functions 222. The anomalies are deviations from the expected or normal operation of the one or more network functions 222. The anomalies are unusual events that could indicate potential issues, such as performance degradation, security breaches, or resource overload. The trends refer to longer-term directional changes in the performance or behavior of the one or more network functions 222.
[0050] In order to analyze the received data, the one or more trained models compare the received data with the historical data stored in the database 208. The one or more models use various machine learning techniques to identify patterns, anomalies, and trends in the data. The machine learning techniques include, but are not limited to, supervised learning, unsupervised learning, anomaly detection, reinforcement learning, and deep learning.

[0051] Based on analysis of the received data with the one or more trained models, the categorizing unit 214 is configured to categorize one or more policies into groups associated to each of the one or more network functions 222. The one or more policies are predefined rules or guidelines used to manage and control the one or more network functions 222. The one or more policies determine how network resources are allocated, how traffic is handled, and how various network operations are conducted. The one or more policies include, but are not limited to, traffic shaping policies, security policies, resource allocation policies, and scaling policies. In particular, based on the insights from the analysis, the categorizing unit 214 classifies one or more policies into different groups tailored to the operational needs of each network function. For example, if the analysis shows that a load balancer frequently handles high traffic loads, policies related to load distribution and scaling would be grouped and assigned to that network function. The grouping of one or more policies ensures that the right policies are applied to the appropriate one or more network functions 222 based on their operational requirements and performance characteristics.
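A minimal sketch of the categorization step is given below, assuming each policy carries a set of descriptive tags and that the analysis yields, per network function, the policy areas it currently needs; the tag and key names are illustrative assumptions only.

    from collections import defaultdict

    policies = [
        {"name": "scale-out-on-load", "tags": {"scaling"}},
        {"name": "prioritize-voice",  "tags": {"qos"}},
        {"name": "block-untrusted",   "tags": {"security"}},
    ]

    # Hypothetical analysis output: policy areas needed by each network function.
    analysis_result = {"load_balancer_1": {"scaling", "qos"}, "firewall_1": {"security"}}

    grouped_policies = defaultdict(list)
    for nf, needed_areas in analysis_result.items():
        for policy in policies:
            if policy["tags"] & needed_areas:        # policy addresses a need of this NF
                grouped_policies[nf].append(policy["name"])

    # grouped_policies -> {"load_balancer_1": ["scale-out-on-load", "prioritize-voice"],
    #                      "firewall_1": ["block-untrusted"]}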
[0052] Upon categorizing the one or more policies into groups, the provisioning unit 216 is configured to provision the one or more grouped policies to the network function of the one or more network functions 222. The one or more grouped policies refer to a collection of policies that have been organized into categories or groups associated to each of the one or more network functions 222. The one or more grouped policies are sets of related policies that have been organized together based on their intended application to a particular network function. The provisioning of the one or more grouped policies refers to the process of assigning and activating the grouped policies on the relevant network function of the one or more network functions 222. Further, the provisioning unit 216 is configured to set up the one or more grouped policies at the network function.
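Purely as an illustration, provisioning a group of policies to a single network function could be performed over an HTTP management interface, as sketched below; the endpoint path and payload shape are assumptions and are not defined by the present disclosure.

    import json
    import urllib.request

    def provision(nf_address, grouped_policies):
        """Push the grouped policies to a hypothetical management endpoint of the NF."""
        payload = json.dumps({"policies": grouped_policies}).encode("utf-8")
        request = urllib.request.Request(
            url=f"http://{nf_address}/policies",      # hypothetical provisioning endpoint
            data=payload,
            headers={"Content-Type": "application/json"},
            method="PUT",
        )
        with urllib.request.urlopen(request) as response:
            return response.status == 200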
[0053] In an embodiment, the generating unit 218 is configured to generate the one or more grouped policies. The one or more grouped policies refer to a collection of policies that have been organized into categories or groups associated to each of the one or more network functions 222. The one or more grouped policies are sets of related policies that have been organized together based on their intended application to a particular network function.
[0054] In an embodiment, the generating unit 218 is further configured to track the performance of the network function of the one or more network functions 222. The performance of the network function refers to how effectively the network function operates within the network 106, typically measured against predefined metrics or Key Performance Indicators (KPIs). The predefined metrics include, but are not limited to, latency, throughput, reliability, resource utilization, packet loss, error rate, jitter, and service availability. The performance metrics assess whether the network function is meeting its intended objectives in terms of speed, reliability, resource utilization, and overall functionality.
[0055] Upon tracking the performance of the network function of the one or more network functions 222, the generating unit 218 is configured to identify one or more deviations in the performance of the network function. The one or more deviations in the performance of the network function refer to a variance from the optimal behavior of the network function based on the performance metrics. For instance, the deviations include, but are not limited to, increased latency, reduced throughput, high packet loss, excessive resource utilization, increased error rate, and low service availability. The one or more deviations are identified by monitoring performance metrics, comparing against expected behavior, and analyzing with trained models. For example, if a performance metric exceeds a predefined threshold (high CPU usage or increased latency), then a deviation is identified.
[0056] Upon identifying the one or more deviations in the performance of the network function, the generating unit 218 is configured to generate at least one update to address the one or more identified deviations. The at least one update includes, but is not limited to, a policy adjustment, resource reallocation, threshold modification, and security updates. The policy adjustment refers to modifying existing policies that control various aspects of the network function to address deviations such as increased latency or high resource utilization. The resource reallocation refers to adjusting the usage of the network function’s resources, such as CPU, memory, and bandwidth, to meet the requirements. The threshold modification refers to updating the performance thresholds for predefined metrics such as latency, packet loss, etc., to ensure the network function is operating within optimal limits. The security updates refer to applying updated security policies to address potential vulnerabilities or threats that could be affecting the performance of the network function. Further, in response to identification of the deviation in performance of the network function, the provisioning unit 216 is further configured to update the one or more grouped policies at the network function.
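A simple sketch of the deviation check and update generation described above follows, using hypothetical threshold values and update fields; in practice the thresholds would come from the predefined metrics and the policies applied to the network function.

    THRESHOLDS = {"cpu_utilization": 0.85, "latency_ms": 50.0, "packet_loss_pct": 1.0}

    def find_deviations(kpis):
        """Return the metrics whose observed values exceed their predefined thresholds."""
        return {m: v for m, v in kpis.items() if m in THRESHOLDS and v > THRESHOLDS[m]}

    def generate_update(deviations):
        """Map identified deviations to a policy adjustment or resource reallocation."""
        update = {}
        if "cpu_utilization" in deviations:
            update["resource_reallocation"] = {"cpu_limit_increase_pct": 20}
        if "latency_ms" in deviations:
            update["policy_adjustment"] = {"qos_priority": "raise"}
        if "packet_loss_pct" in deviations:
            update["threshold_modification"] = {"packet_loss_alarm_pct": 0.5}
        return update

    # Example: generate_update(find_deviations({"cpu_utilization": 0.93, "latency_ms": 12.0}))
    # -> {"resource_reallocation": {"cpu_limit_increase_pct": 20}}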
[0057] In an embodiment, the customizing unit 220 is configured to customize the one or more grouped policies. The one or more grouped policies are customized as per the performance characteristics and requirements of the one or more network functions 222. The performance characteristics are measurable attributes of the one or more network functions 222 that indicate how well the one or more network functions 222 are performing. The performance characteristics include, but are not limited to, throughput, latency, reliability, resource utilization, scalability, and security measures. The requirements of the one or more network functions 222 refer to the operational needs or conditions that must be met for the one or more network functions 222 to function effectively. The requirements include, but are not limited to, predefined service levels, configuration settings, scalability, security, and QoS.
[0058] Therefore, the system 108 automates policy provisioning by reducing manual effort and human errors. The system 108 ensures optimal network performance by upgrading the network function policies efficiently. The system 108 scales to handle a growing number of NFs and network expansion. The system 108 ensures policies remain effective by real time monitoring. The system 108 enhances the network’s overall quality and performance.
[0059] FIG. 3 is an exemplary block diagram of an architecture 300 of the system 108 for provisioning policies for the one or more network functions 222, according to one or more embodiments of the present invention.
[0060] The architecture 300 includes the probing agent 302, a processing hub 304 and the user interface 206. The processing hub 304 includes a data integrator 306, a data pre-processing unit 308, a model training unit 310, a real-time monitoring unit 312, and a policy management unit 314 communicably coupled to each other. The processing hub 304 includes a data lake 316 communicably coupled to real-time monitoring unit 312.
[0061] The data integrator 306 receives the data from the one or more sources. In an embodiment, the data is received via the probing agent 302. The data is received from the probing agent 302 via a processing hub - probing agent interface. The data from the one or more sources includes, but is not limited to, data from the probing agent 302, a file input, a source path, an input stream, Hypertext Transfer Protocol 2 (HTTP2), Hadoop Distributed File System (HDFS), and Network Attached Storage (NAS). The data integrator 306 performs data integration operations on all the data that is received from the probing agent 302.
[0062] Upon integrating the received data, the data pre-processing unit 308 is configured to preprocess the received data. The preprocessing of the received data involves data cleaning, data normalization, and transformation. The data cleaning refers to removing any inconsistencies, errors, or irrelevant data. The data normalization refers to scaling the data to a common range or format. The transformation refers to changes made to the received data, if required. The preprocessed data is stored at the data lake 316.
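The cleaning and normalization steps can be sketched as follows, assuming pandas is available and that the records arrive as a list of dictionaries with numeric KPI columns; constant-valued columns would need special handling in practice.

    import pandas as pd

    def preprocess(records):
        frame = pd.DataFrame(records)
        frame = frame.dropna()              # data cleaning: drop incomplete rows
        frame = frame.drop_duplicates()     # data cleaning: remove duplicate rows
        numeric = frame.select_dtypes(include="number")
        # data normalization: scale each numeric column to the 0..1 range
        frame[numeric.columns] = (numeric - numeric.min()) / (numeric.max() - numeric.min())
        return frame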
[0063] Upon preprocessing the received data, the model training unit 310 is responsible for training AI/ML models using the preprocessed data. The AI/ML model is trained utilizing historical data corresponding to the operation of the one or more network functions 222. The data corresponding to the operation of the one or more network functions 222 is retrieved from the data lake 316. Further, the model training unit 310 monitors training status, evaluates model output, generates inferences, and handles model retraining if required.
[0064] Thereafter, the real-time monitoring unit 312 compares the real-time data generated by the one or more network functions 222 with the predictions made by the AI/ML model or trained model. Upon comparison, if the real-time monitoring unit 312 identifies issues such as a performance deviation, the real-time monitoring unit 312 generates an alert or notification and transmits it to the user interface 206.
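As a hedged illustration of the comparison performed by the real-time monitoring unit 312, the sketch below flags metrics whose observed values deviate from the model's predictions by more than a tolerance; the tolerance value and the alert format are assumptions, not part of the disclosure.

    def monitor(observed, predicted, tolerance=0.2):
        """Compare real-time KPIs against model predictions and collect alerts."""
        alerts = []
        for metric, actual in observed.items():
            expected = predicted.get(metric)
            if expected and abs(actual - expected) / expected > tolerance:
                alerts.append(f"{metric}: observed {actual}, expected about {expected}")
        return alerts

    # monitor({"latency_ms": 80.0}, {"latency_ms": 20.0})
    # -> ["latency_ms: observed 80.0, expected about 20.0"]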
[0065] Subsequently, the policy management unit 314 manages the entire lifecycle of policy creation, analysis, categorization, and provisioning based on the model outputs. The policies are categorized and provisioned automatically to network functions 222 based on analysis results. In particular, the policy management unit 314 ensures that appropriate policies are provisioned to enhance the network’s performance or fix any anomalies.
[0066] In an embodiment, the user interface 206 displays the model’s outputs, alerts, and notifications. The user interface 206 allows users to take proactive measures based on insights, preventing service disruptions.
[0067] FIG. 4 is a flow diagram for provisioning policies for the one or more network functions 222, according to one or more embodiments of the present invention.
[0068] At step 402, the data is received from the probing agent 302 via the processing hub - probing agent interface. Upon receiving the data via the probing agent 302, the data integrator 306 integrates the received data.
[0069] At step 404, the data integrated by the data integrator 306 is provided to the data pre-processing unit 308, where the data undergoes pre-processing. The pre-processing includes data cleaning, normalization and feature extraction.
[0070] At step 406, upon pre-processing the data, the real-time monitoring of the data is performed, where real-time data generated by the one or more network functions 222 are compared with the predictions made by AI/ML models. The output of the real-time monitoring is provided to the policy management unit 314.
[0071] At step 408, thereafter, the policy management unit 314 performs policy provisioning, automated policy generation, policy analysis and categorization based on the model output.
[0072] At step 410, subsequently, a check is made on the model output. In case the model output is found to be not optimal, the model is retrained. In case the model output is found to be optimal, the policy is updated and adapted to meet the optimal requirements.
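The check at step 410 can be thought of as a simple branch on a model-quality score, as in the sketch below; the acceptance threshold and the callback names are illustrative assumptions.

    ACCEPTABLE_SCORE = 0.9

    def handle_model_output(validation_score, retrain, update_policies):
        """Retrain when the model output is not optimal, otherwise adapt the policies."""
        if validation_score < ACCEPTABLE_SCORE:
            retrain()             # model output not optimal: retrain the model
        else:
            update_policies()     # model output optimal: update and adapt the policies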
[0073] FIG. 5 is a flow diagram of a method 500 for provisioning policies for the one or more network functions 222 according to one or more embodiments of the present invention. For the purpose of description, the method 500 is described with the embodiments as illustrated in FIG. 2 and should nowhere be construed as limiting the scope of the present disclosure.
[0074] At step 502, the method 500 includes the step of receiving the data from the one or more sources within the network 106 by the receiving unit 210. The one or more sources include, but are not limited to, a file input, a source path, an input stream, Hyper Text Transfer Protocol 2 (HTTP 2), Hadoop Distributed File System (HDFS), and Network Attached Storage (NAS). The data includes performance data pertaining to the one or more network functions 222 and policy templates. In an embodiment, the received data is at least one of the alarm data and the counter data.
[0075] At step 504, the method 500 includes the step of analyzing the received data with one or more trained models to identify at least one of, patterns, anomalies, and trends pertaining to the operation of the one or more network functions 222 by the analyzing unit 212. The one or more trained models are trained utilizing historical data corresponding to an operation of the one or more network functions 222 retrieved from the database 208.
[0076] At step 506, the method 500 includes the step of categorizing the one or more policies into groups associated to each of the one or more network functions 222 based on the analysis by the categorizing unit 214.
[0077] At step 508, the method 500 includes the step of provisioning the one or more grouped policies to the network function of the one or more network functions 222 based on the categorization by the provisioning unit 216. In an embodiment, the method 500 includes the step of generating the one or more grouped policies based on the categorization of the one or more policies by the generating unit 218. The generating unit 218 is further configured to track the performance of the network function subsequent to the provisioning of the one or more policies. Upon tracking, the generating unit 218 is configured to identify the one or more deviations in the performance of the network function and generate at least one update to address the one or more identified deviations. In an embodiment, the customizing unit 220 is configured to customize the one or more grouped policies as per a performance characteristic and requirement of the network function.
[0078] The present invention further discloses a non-transitory computer-readable medium having stored thereon computer-readable instructions. The computer-readable instructions are executed by the processor 202. The processor 202 is configured to receive the data from the one or more sources within the network 106. The processor 202 is further configured to analyze the received data with the one or more trained models to identify the at least one of, patterns, anomalies, and trends pertaining to the operation of the one or more network functions 222. The processor 202 is further configured to categorize, based on the analysis, the one or more policies into groups associated to each of the one or more network functions 222. The processor 202 is further configured to provision, the one or more grouped policies to the network function of the one or more network functions 222 based on the categorization.
[0079] A person of ordinary skill in the art will readily ascertain that the illustrated embodiments and steps in description and drawings (FIG.1-5) are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope and spirit of the disclosed embodiments.
[0080] The present disclosure incorporates the technical advancement of automating policy provisioning and reduces manual effort and human errors. The present invention allows seamless data handling and improves the ability to monitor and manage network functions across different data formats. The present invention ensures optimal network performance by upgrading the network function-wise policies efficiently. The present invention scales to handle a growing number of network functions and network expansion. The present invention ensures that the policies remain effective. The present invention adapts to changing network conditions and network function specific requirements. With AI-driven optimization, the present invention enhances the network’s overall quality and performance.
[0081] The present invention offers multiple advantages over the prior art, and the above listed are a few examples to emphasize some of the advantageous features. The listed advantages are to be read in a non-limiting manner.


REFERENCE NUMERALS

[0082] Environment- 100
[0083] User Equipment (UE)- 102
[0084] Server- 104
[0085] Network- 106
[0086] System -108
[0087] Processor- 202
[0088] Memory- 204
[0089] User Interface- 206
[0090] Database- 208
[0091] Receiving Unit- 210
[0092] Analyzing Unit- 212
[0093] Categorizing unit- 214
[0094] Provisioning Unit- 216
[0095] Generating Unit- 218
[0096] Customizing Unit- 220
[0097] Network functions- 222
[0098] Probing agent- 302
[0099] Processing hub-304
[00100] Data integrator- 306
[00101] Data pre-processing unit -308
[00102] Model training unit -310
[00103] Real-time monitoring unit – 312
[00104] Policy management unit-314
[00105] Data lake -316
CLAIMS:
We Claim:
1. A method (500) of provisioning policies for one or more network functions (222), the method (500) comprising the steps of:
receiving, by one or more processors (202), data from one or more sources within a network (106);
analysing, by the one or more processors (202), the received data with one or more trained models to identify at least one of, patterns, anomalies, and trends pertaining to an operation of the one or more network functions (222);
categorizing, by the one or more processors (202), based on the analysis, one or more policies into groups associated to each of the one or more network functions (222); and
provisioning, by the one or more processors (202), the one or more grouped policies to a network function (222) of the one or more network functions (222) based on the categorization.

2. The method (500) as claimed in claim 1, wherein the one or more sources is one of a file input, a source path, an input stream, Hyper Text Transfer Protocol 2 (HTTP 2), Hadoop Distributed File System (HDFS), Network Attached Storage (NAS), and wherein the data includes performance data pertaining to the one or more network functions (222) and policy templates.

3. The method (500) as claimed in claim 1, wherein the one or more trained models is trained utilizing historical data corresponding to an operation of the one or more network functions (222) retrieved from a database (208).

4. The method (500) as claimed in claim 1, wherein the method (500) comprises the step of:
generating, by the one or more processors (202), the one or more grouped policies based on the categorization of the one or more policies; and
customizing, by the one or more processors (202), the one or more grouped policies as per a performance characteristic and requirement of the network function (222).

5. The method (500) as claimed in claim 4, comprising:
tracking, by the one or more processors (202), performance of the network function (222) subsequent to the provisioning of the one or more policies thereof;
identifying, by the one or more processors (202), the one or more deviations in the performance of the network function (222); and
generating, by the one or more processors (202), at least one update to address the one or more identified deviation.

6. A system (108) for provisioning policies for one or more network functions (222), the system (108) comprising:
a receiving unit (210) configured to receive, data from one or more sources within a network (106);
an analyzing unit (212) configured to analyse, the received data with one or more trained models to identify at least one of, patterns, anomalies, and trends pertaining to an operation of the one or more network functions (222);
a categorizing unit (214) configured to categorize, based on the analysis, one or more policies into groups associated to each of the one or more network functions (222); and
a provisioning unit (216) configured to provision, the one or more grouped policies to a network function (222) of the one or more network functions (222) based on the categorization.

7. The system (108) as claimed in claim 6, wherein the one or more sources is one of a file input, a source path, an input stream, Hyper Text Transfer Protocol 2 (HTTP 2), Hadoop Distributed File System (HDFS), Network Attached Storage (NAS), and wherein the data includes performance data pertaining to the one or more network functions (222) and policy templates.

8. The system (108) as claimed in claim 7, wherein the one or more trained models is trained utilizing historical data corresponding to an operation of the one or more network functions (222) retrieved from a database (208).

9. The system (108) as claimed in claim 7, comprising:
a generating unit (218) configured to generate, the one or more grouped policies based on the categorization of the one or more policies; and
a customizing unit (220) configured to customize, the one or more grouped policies as per a performance characteristic and requirement of the network function (222).

10. The system (108) as claimed in claim 9, wherein the generating unit (218) is configured to:
track, performance of the network function (222) subsequent to the provisioning of the one or more policies thereof;
identify, the one or more deviations in the performance of the network function (222); and
generate, at least one update to address the one or more identified deviation.

Documents

Application Documents

# Name Date
1 202321067382-STATEMENT OF UNDERTAKING (FORM 3) [07-10-2023(online)].pdf 2023-10-07
2 202321067382-PROVISIONAL SPECIFICATION [07-10-2023(online)].pdf 2023-10-07
3 202321067382-POWER OF AUTHORITY [07-10-2023(online)].pdf 2023-10-07
4 202321067382-FORM 1 [07-10-2023(online)].pdf 2023-10-07
5 202321067382-FIGURE OF ABSTRACT [07-10-2023(online)].pdf 2023-10-07
6 202321067382-DRAWINGS [07-10-2023(online)].pdf 2023-10-07
7 202321067382-DECLARATION OF INVENTORSHIP (FORM 5) [07-10-2023(online)].pdf 2023-10-07
8 202321067382-FORM-26 [27-11-2023(online)].pdf 2023-11-27
9 202321067382-Proof of Right [12-02-2024(online)].pdf 2024-02-12
10 202321067382-DRAWING [07-10-2024(online)].pdf 2024-10-07
11 202321067382-COMPLETE SPECIFICATION [07-10-2024(online)].pdf 2024-10-07
12 Abstract.jpg 2024-12-20