Method And System Of Management Of Network Functions (N Fs) In A Network

Abstract: The present disclosure relates to a system (108) and a method (500) for management of one or more Network Functions (NFs) (226) in a network (106). The system (108) includes a receiving unit (210) to receive data from one or more data sources (228) in real time. The system (108) includes a selecting unit (212) to select data corresponding to one or more features from the received data. The system (108) includes a training unit (214) to train one or more logic models utilizing the selected data, NF load patterns, and Streaming Data Records (SDR) resource allocation history. The system (108) includes an identifying unit (216) to identify trends and patterns in the NF load pattern and the SDR resource allocation history. The system (108) includes an analyzing unit (218) to analyze the identified trends and patterns to predict a load for each of the one or more NFs (226). Ref. Fig. 2


Patent Information

Application #
Filing Date
07 October 2023
Publication Number
15/2025
Publication Type
INA
Invention Field
COMPUTER SCIENCE
Status
Parent Application

Applicants

JIO PLATFORMS LIMITED
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad 380006, Gujarat, India

Inventors

1. Aayush Bhatnagar
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
2. Ankit Murarka
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
3. Jugal Kishore
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
4. Chandra Ganveer
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
5. Sanjana Chaudhary
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
6. Gourav Gurbani
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
7. Yogesh Kumar
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
8. Avinash Kushwaha
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
9. Dharmendra Kumar Vishwakarma
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
10. Sajal Soni
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
11. Niharika Patnam
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
12. Shubham Ingle
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
13. Harsh Poddar
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
14. Sanket Kumthekar
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
15. Mohit Bhanwria
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
16. Shashank Bhushan
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
17. Vinay Gayki
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
18. Aniket Khade
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
19. Durgesh Kumar
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
20. Zenith Kumar
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
21. Gaurav Kumar
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
22. Manasvi Rajani
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
23. Kishan Sahu
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
24. Sunil Meena
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
25. Supriya Kaushik De
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
26. Kumar Debashish
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
27. Mehul Tilala
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
28. Satish Narayan
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
29. Rahul Kumar
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
30. Harshita Garg
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
31. Kunal Telgote
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
32. Ralph Lobo
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
33. Girish Dange
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India

Specification

DESC:
FORM 2
THE PATENTS ACT, 1970
(39 of 1970)
&
THE PATENTS RULES, 2003

COMPLETE SPECIFICATION
(See section 10 and rule 13)
1. TITLE OF THE INVENTION
METHOD AND SYSTEM OF MANAGEMENT OF NETWORK FUNCTIONS (NFs) IN A NETWORK
2. APPLICANT(S)
NAME NATIONALITY ADDRESS
JIO PLATFORMS LIMITED INDIAN OFFICE-101, SAFFRON, NR. CENTRE POINT, PANCHWATI 5 RASTA, AMBAWADI, AHMEDABAD 380006, GUJARAT, INDIA
3.PREAMBLE TO THE DESCRIPTION

THE FOLLOWING SPECIFICATION PARTICULARLY DESCRIBES THE NATURE OF THIS INVENTION AND THE MANNER IN WHICH IT IS TO BE PERFORMED.

FIELD OF THE INVENTION
[0001] The present invention relates to network management, and more particularly to a method and a system for management of one or more Network Functions (NFs) in a network.
BACKGROUND OF THE INVENTION
[0002] With the increase in the number of users, network service providers have been implementing upgrades to enhance service quality and keep pace with the high demand. With the advancement of technology, there is a demand for telecommunication services to introduce up-to-date features into the scope of provision so as to enhance user experience and implement advanced monitoring mechanisms. Regular data analyses are performed to observe issues beforehand, for which many data collection and assessment practices are implemented in a network.
[0003] A 5G vProbe (virtual probe) is a probing agent implemented to actively collect probing data, preferably Streaming Data Records (SDRs), from network nodes. The network functions generate the SDRs, which are streamed towards the vProbe, where these records are then indexed in the ATOM (adaptive troubleshooting and operations management platform) data lake. There they can be further analyzed, which aids overall network monitoring. Network engineers face the challenge of estimating the upcoming Network Function (NF) SDR load for dynamic resource management; that is, they need to predict the load that specific NFs will place on the network's SDR resources in the near future. Presently, there is no mechanism in place to estimate load statistics in advance, so network engineers have to allocate resources based on manual analysis of live data. Inadequate load estimation can lead to underutilization or overutilization of SDR resources, impacting network performance and resource costs. As telecom networks are dynamic, with varying traffic patterns and NF usage, predicting SDR load accurately in such an environment is challenging. Accurate load estimation is essential to ensure that NFs receive the required SDR resources to maintain the quality of service for telecom customers. There is a need for a system, and a method thereof, to analyze available data for estimation of the upcoming NF SDR load for dynamic resource management.
SUMMARY OF THE INVENTION
[0004] One or more embodiments of the present disclosure provide a method and system for management of one or more Network Functions (NFs) in a network.
[0005] In one aspect of the present invention, the system for management of the one or more Network Functions (NFs) in the network is disclosed. The system includes a receiving unit configured to receive data from one or more data sources in real time. The system further includes a selecting unit configured to select data corresponding to one or more features from the received data. The system further includes a training unit configured to train one or more logic models utilizing the selected data, NF load patterns, and Streaming Data Records (SDR) resource allocation history to identify patterns and relationship between a load of the one or more NFs and allocated resources. The system further includes an identifying unit configured to identify trends and patterns in the NF load pattern and the SDR resource allocation history based on the training of the one or more logic models. The system further includes an analyzing unit configured to analyze the identified trends and patterns to predict a load for each of the one or more NFs.
[0006] In an embodiment, the data sources are at least one of a file input, a source path, an input stream, Hyper Text Transfer Protocol 2 (HTTP 2), Hadoop Distributed File System (HDFS), Network Attached Storage (NAS), and wherein the received data is pre-processed and standardized.
[0007] In an embodiment, the one or more features correspond to historical NF load, network traffic patterns, NF types, geo-location, and time of day.
[0008] In an embodiment, the NF load pattern and the SDR resource allocation history are retrieved from a database.
[0009] In an embodiment, the predicted load corresponds to the load on each of the one or more NFs for a limited time period.
[0010] In an embodiment, the receiving unit is configured to receive real time data pertaining to a NF load and SDR resource usage from the one or more NFs.
[0011] In an embodiment, the system includes a detecting unit configured to detect a deviation between the predicted load and the NF load on receipt of the real time data. The system further includes a triggering unit configured to trigger, an alert in response to detection of the deviation. The system further includes an allocating unit configured to allocate one or more resources based on the detected deviation.
[0012] In another aspect of the present invention, the method of management of the one or more Network Functions (NFs) in the network is disclosed. The method includes the step of receiving data from one or more data sources in real time. The method further includes the step of selecting data corresponding to one or more features from the received data. The method further includes the step of training one or more logic models utilizing the selected data, NF load patterns, and Streaming Data Records (SDRs) resource allocation history to identify patterns and relationship between a load of the one or more NFs and allocated resources. The method further includes the step of identifying trends and patterns in the NF load patterns and the SDRs resource allocation history based on the training of the one or more logic models. The method further includes analyzing the identified trends and patterns to predict the load for each of the one or more NFs.
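As a non-limiting illustration of the recited steps, the pipeline may be sketched as follows. All names, data values, and the trivial "model" (a per-NF-type mean of historical loads) are hypothetical stand-ins, not part of the claimed method:

```python
from statistics import mean

def receive_data(sources):
    """Step 1: gather records from the configured data sources."""
    records = []
    for source in sources:
        records.extend(source)
    return records

def select_features(records, feature_keys):
    """Step 2: keep only the fields corresponding to the chosen features."""
    return [{k: r[k] for k in feature_keys if k in r} for r in records]

def train_model(selected):
    """Steps 3-4: 'train' a trivial logic model -- here, the mean
    historical load per NF type -- standing in for the trained models."""
    per_type = {}
    for r in selected:
        per_type.setdefault(r["nf_type"], []).append(r["load"])
    return {t: mean(v) for t, v in per_type.items()}

def predict_load(model, nf_type):
    """Step 5: predict the load for an NF from the identified trends."""
    return model.get(nf_type, 0.0)

# Hypothetical data sources with per-NF load records.
source_a = [{"nf_type": "AMF", "load": 60}, {"nf_type": "UPF", "load": 90}]
source_b = [{"nf_type": "AMF", "load": 80}]

records = receive_data([source_a, source_b])
selected = select_features(records, ["nf_type", "load"])
model = train_model(selected)
print(predict_load(model, "AMF"))  # mean of 60 and 80 -> 70
```

A production system would replace the per-type mean with the trained AI/ML models described later in the specification.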
[0013] In another aspect of the invention, a non-transitory computer-readable medium having stored thereon computer-readable instructions is disclosed. The computer-readable instructions are executed by a processor. The processor is configured to receive data from one or more data sources in real time. The processor is configured to select data corresponding to one or more features from the received data. The processor is configured to train one or more logic models utilizing the selected data, NF load patterns, and Streaming Data Records (SDR) resource allocation history to identify patterns and relationship between a load of the one or more NFs and allocated resources. The processor is configured to identify trends and patterns in the NF load pattern and the SDR resource allocation history based on the training of the one or more logic models. The processor is configured to analyze the identified trends and patterns to predict a load for each of the one or more NFs.
[0014] Other features and aspects of this invention will be apparent from the following description and the accompanying drawings. The features and advantages described in this summary and in the following detailed description are not all-inclusive, and particularly, many additional features and advantages will be apparent to one of ordinary skill in the relevant art, in view of the drawings, specification, and claims hereof. Moreover, it should be noted that the language used in the specification has been principally selected for readability and instructional purposes and may not have been selected to delineate or circumscribe the inventive subject matter, resort to the claims being necessary to determine such inventive subject matter.
BRIEF DESCRIPTION OF THE DRAWINGS
[0015] The accompanying drawings, which are incorporated herein, and constitute a part of this disclosure, illustrate exemplary embodiments of the disclosed methods and systems in which like reference numerals refer to the same parts throughout the different drawings. Components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present disclosure. Some drawings may indicate the components using block diagrams and may not represent the internal circuitry of each component. It will be appreciated by those skilled in the art that disclosure of such drawings includes disclosure of electrical components, electronic components or circuitry commonly used to implement such components.
[0016] FIG. 1 is an exemplary block diagram of an environment for management of one or more Network Functions (NFs) in a network, according to one or more embodiments of the present invention;
[0017] FIG. 2 is an exemplary block diagram of a system for management of the one or more Network Functions (NFs) in the network, according to one or more embodiments of the present invention;
[0018] FIG. 3 is an exemplary block diagram of an architecture implemented in the system of the FIG. 2, according to one or more embodiments of the present invention;
[0019] FIG. 4 is a flow diagram for management of the one or more Network Functions (NFs) in the network, according to one or more embodiments of the present invention; and
[0020] FIG. 5 is a schematic representation of a method of management of the one or more Network Functions (NFs) in the network, according to one or more embodiments of the present invention.
[0021] The foregoing shall be more apparent from the following detailed description of the invention.
DETAILED DESCRIPTION OF THE INVENTION
[0022] Some embodiments of the present disclosure, illustrating all its features, will now be discussed in detail. It must also be noted that as used herein and in the appended claims, the singular forms "a", "an" and "the" include plural references unless the context clearly dictates otherwise.
[0023] Various modifications to the embodiment will be readily apparent to those skilled in the art and the generic principles herein may be applied to other embodiments. However, one of ordinary skill in the art will readily recognize that the present disclosure including the definitions listed here below are not intended to be limited to the embodiments illustrated but is to be accorded the widest scope consistent with the principles and features described herein.
[0024] A person of ordinary skill in the art will readily ascertain that the illustrated steps detailed in the figures and here below are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope and spirit of the disclosed embodiments.
[0025] The present invention estimates the upcoming load for dynamic resource management. The present invention is further integrated with an advanced machine learning methodology to allocate resources dynamically based on real-time data. For example, if the load estimation for a particular network function (NF) is 100 GB of RAM and in real time the resource required is 70 GB of RAM, then the present system allocates the excess 30 GB to other NFs or nodes which require it. Further, the system is interfaced with a virtual probe to collect various failure data from Streaming Data Records (SDRs). The system is further configured to integrate and apply trained AI/ML models to past and live data, thus forecasting the future load requirement and taking appropriate action based on the estimation.
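The surplus-reallocation example in the preceding paragraph (100 GB predicted, 70 GB actually used, 30 GB redistributed) can be sketched as below. The greedy distribution order and the NF names are illustrative assumptions, not the claimed allocation logic:

```python
def reallocate(predicted_gb, actual_gb, needy_nfs):
    """If an NF uses less than predicted, hand the surplus to NFs that
    reported a shortfall. Greedy first-come-first-served distribution
    is an illustrative choice, not the claimed method."""
    surplus = predicted_gb - actual_gb
    if surplus <= 0:
        return {}, surplus
    grants = {}
    for nf, shortfall in needy_nfs.items():
        grant = min(surplus, shortfall)
        if grant > 0:
            grants[nf] = grant
            surplus -= grant
    return grants, surplus

# The paragraph's example: 100 GB predicted, 70 GB actually required.
grants, left = reallocate(100, 70, {"UPF": 20, "SMF": 15})
print(grants)  # {'UPF': 20, 'SMF': 10}
```

The 30 GB surplus fully covers the hypothetical UPF shortfall and partially covers the SMF shortfall, leaving no surplus.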
[0026] FIG. 1 illustrates an exemplary block diagram of an environment 100 for management of one or more Network Functions (NFs) in a network 106, according to one or more embodiments of the present disclosure. In this regard, the environment 100 includes a User Equipment (UE) 102, a server 104, a network 106 and a system 108 communicably coupled to each other for management of the one or more Network Functions (NFs) 226 in the network 106.
[0027] In an embodiment, the one or more NFs 226 refer to tasks or operations performed within the network 106. The NFs are at least one of Virtualized Network Functions (VNFs) or Cloud-native Network Functions (CNFs). The one or more NFs 226 include, but are not limited to, routers, firewalls, load balancers, Access and Mobility Management Function (AMF), Session Management Function (SMF), User Plane Function (UPF), and Policy Control Function (PCF).
[0028] As per the illustrated embodiment and for the purpose of description and illustration, the UE 102 includes, but not limited to, a first UE 102a, a second UE 102b, and a third UE 102c, and should nowhere be construed as limiting the scope of the present disclosure. In alternate embodiments, the UE 102 may include a plurality of UEs as per the requirement. For ease of reference, each of the first UE 102a, the second UE 102b, and the third UE 102c, will hereinafter be collectively and individually referred to as the “User Equipment (UE) 102”.
[0029] In an embodiment, the UE 102 is one of, but not limited to, any electrical, electronic, or electro-mechanical equipment, or a combination of one or more of such devices, such as a smartphone, virtual reality (VR) device, augmented reality (AR) device, laptop, general-purpose computer, desktop, personal digital assistant, tablet computer, mainframe computer, or any other computing device.
[0030] The environment 100 includes the server 104 accessible via the network 106. The server 104 may include, by way of example but not limitation, one or more of a standalone server, a server blade, a server rack, a bank of servers, a server farm, hardware supporting a part of a cloud service or system, a home server, hardware running a virtualized server, one or more processors executing code to function as a server, one or more machines performing server-side functionality as described herein, at least a portion of any of the above, or some combination thereof. In an embodiment, an entity operating the server 104 may include, but is not limited to, a vendor, a network operator, a company, an organization, a university, a lab facility, a business enterprise, a defense facility, or any other facility that provides services.
[0031] The network 106 includes, by way of example but not limitation, one or more of a wireless network, a wired network, an internet, an intranet, a public network, a private network, a packet-switched network, a circuit-switched network, an ad hoc network, an infrastructure network, a Public-Switched Telephone Network (PSTN), a cable network, a cellular network, a satellite network, a fiber optic network, or some combination thereof. The network 106 may include, but is not limited to, a Third Generation (3G), a Fourth Generation (4G), a Fifth Generation (5G), a Sixth Generation (6G), a New Radio (NR), a Narrow Band Internet of Things (NB-IoT), an Open Radio Access Network (O-RAN), and the like.
[0032] The network 106 may also include, by way of example but not limitation, at least a portion of one or more networks having one or more nodes that transmit, receive, forward, generate, buffer, store, route, switch, process, or a combination thereof, etc. one or more messages, packets, signals, waves, voltage or current levels, some combination thereof, or so forth. The network 106 may also include, by way of example but not limitation, one or more of a wireless network, a wired network, an internet, an intranet, a public network, a private network, a packet-switched network, a circuit-switched network, an ad hoc network, an infrastructure network, a Public-Switched Telephone Network (PSTN), a cable network, a cellular network, a satellite network, a fiber optic network, a VOIP or some combination thereof.
[0033] The environment 100 further includes the system 108 communicably coupled to the server 104 and the UE 102 via the network 106. The system 108 is configured to manage the one or more NFs 226 in the network 106. As per one or more embodiments, the system 108 is adapted to be embedded within the server 104 or embedded as an individual entity.
[0034] Operational and construction features of the system 108 will be explained in detail with respect to the following figures.
[0035] FIG. 2 is an exemplary block diagram of the system 108 for management of the one or more NFs 226 in the network 106, according to one or more embodiments of the present invention.
[0036] As per the illustrated embodiment, the system 108 includes one or more processors 202, a memory 204, a user interface 206, and a database 208.
[0037] For the purpose of description and explanation, the description will be explained with respect to one processor 202 and should nowhere be construed as limiting the scope of the present disclosure. In alternate embodiments, the system 108 may include more than one processor 202 as per the requirement of the network 106. The one or more processors 202, hereinafter referred to as the processor 202 may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, single board computers, and/or any devices that manipulate signals based on operational instructions.
[0038] As per the illustrated embodiment, the processor 202 is configured to fetch and execute computer-readable instructions stored in the memory 204. The memory 204 may be configured to store one or more computer-readable instructions or routines in a non-transitory computer-readable storage medium, which may be fetched and executed to create or share data packets over a network service. The memory 204 may include any non-transitory storage device including, for example, volatile memory such as RAM, or non-volatile memory such as disk memory, EPROMs, FLASH memory, unalterable memory, and the like.
[0039] In an embodiment, the user interface 206 includes a variety of interfaces, for example, interfaces for a graphical user interface, a web user interface, a Command Line Interface (CLI), and the like. The user interface 206 facilitates communication of the system 108. In one embodiment, the user interface 206 provides a communication pathway for one or more components of the system 108. Examples of such components include, but are not limited to, the UE 102 and the database 208.
[0040] The database 208 is one of, but not limited to, a centralized database, a cloud-based database, a commercial database, an open-source database, a distributed database, an end-user database, a graphical database, a No-Structured Query Language (NoSQL) database, an object-oriented database, a personal database, an in-memory database, a document-based database, a time series database, a wide column database, a key value database, a search database, a cache database, and so forth. The foregoing examples of database types are non-limiting and not mutually exclusive (e.g., a database can be both commercial and cloud-based, or both relational and open-source).
[0041] In order for the system 108 to manage the one or more NFs 226 in the network 106, the processor 202 includes one or more modules. In one embodiment, the one or more modules includes, but not limited to, a receiving unit 210, a selecting unit 212, a training unit 214, an identifying unit 216, an analyzing unit 218, a detecting unit 220, a triggering unit 222 and an allocating unit 224 communicably coupled to each other for management of the one or more NFs 226 in the network 106.
[0042] In one embodiment, the one or more modules, including the receiving unit 210, the selecting unit 212, the training unit 214, the identifying unit 216, the analyzing unit 218, the detecting unit 220, the triggering unit 222 and the allocating unit 224, can be used in combination or interchangeably for management of the one or more NFs 226 in the network 106.
[0043] The receiving unit 210, the selecting unit 212, the training unit 214, the identifying unit 216, the analyzing unit 218, the detecting unit 220, the triggering unit 222 and the allocating unit 224 in an embodiment, may be implemented as a combination of hardware and programming (for example, programmable instructions) to implement one or more functionalities of the processor 202. In the examples described herein, such combinations of hardware and programming may be implemented in several different ways. For example, the programming for the processor 202 may be processor-executable instructions stored on a non-transitory machine-readable storage medium and the hardware for the processor may comprise a processing resource (for example, one or more processors), to execute such instructions. In the present examples, the memory 204 may store instructions that, when executed by the processing resource, implement the processor. In such examples, the system 108 may comprise the memory 204 storing the instructions and the processing resource to execute the instructions, or the memory 204 may be separate but accessible to the system 108 and the processing resource. In other examples, the processor 202 may be implemented by electronic circuitry.
[0044] In one embodiment, the receiving unit 210 is configured to receive data from one or more data sources 228 in real time. The data refers to the information or records collected from various sources, in real time or as historical inputs, that are essential for managing and optimizing the one or more NFs 226. The data includes, but is not limited to, real-time data, historical data, feature data, and pre-processed and standardized data. The real-time data includes the information received from live systems such as the current load on the one or more NFs 226, resource usage, or traffic patterns. The real-time data could be in the form of network traffic metrics, service requests, or Streaming Data Records (SDRs) that reflect ongoing operations. The SDR refers to a data structure or log of streaming data related to network traffic, resource usage, or performance metrics. The SDRs contain detailed records of how network resources are allocated and used over time, particularly for services that are delivered in real time (such as streaming media or real-time communications). In an embodiment, the real-time data is received in a continuous manner. Further, the real-time data is collected from various sources such as network function logs, NF usage statistics, historical SDR allocation data, network traffic data, and more. The historical data includes the past records of the one or more NFs 226 load, resource allocation history, network traffic behavior, and SDR resource usage over time. The historical data is crucial for identifying patterns and trends, and for training the logic models used to predict future loads. The feature data is a particular data endpoint corresponding to relevant characteristics such as NF types, geo-location, time of day, and network traffic patterns. The pre-processed and standardized data includes the data from the one or more data sources that undergoes preparation to ensure it is consistently clean and ready for analysis or model training.
[0045] The one or more data sources 228 are at least one of a file input, a source path, an input stream, Hyper Text Transfer Protocol 2 (HTTP 2), Hadoop Distributed File System (HDFS), and Network Attached Storage (NAS). The file input includes files from local or external systems. The source path is a defined directory or location from which data is pulled. The input stream includes data which is received as continuous streams, possibly from live feeds or streaming services. HTTP/2 is a protocol used for transmitting data over the web. HDFS is a storage system for managing large datasets, commonly used in big data applications. NAS is a storage system that provides data access over the network 106, allowing multiple users to store and retrieve files in a centralized location. In an embodiment, the received data is pre-processed and standardized. The pre-processing of the data includes cleaning and transforming the raw data into a usable format. More specifically, the pre-processing of the data includes, but is not limited to, data cleaning, normalization, filtering, and aggregation. The cleaning of data includes removing or correcting errors such as incomplete, duplicate, or irrelevant data. The normalization includes transforming the data into a consistent format, especially when data comes from different sources. The normalization includes converting different date formats, adjusting time zones, or unifying different units of measurement. The filtering and aggregation include extracting relevant data features (e.g., NF load patterns, resource usage) from the bulk data and summarizing it as needed. The standardization involves ensuring that data from different sources is uniform and comparable. The standardization includes, but is not limited to, consistent scaling, data encoding, and format consistency. The consistent scaling ensures that all features (such as NF load, resource allocation history) are on a similar scale or range.
The data encoding converts the categorical data (like NF types, geo-locations) into a numerical format that can be used by machine learning models. The format consistency unifies data formats (e.g., converting all timestamps to a single format). In an embodiment, the receiving unit 210 is configured to receive real-time data pertaining to NF load and SDR resource usage from the one or more NFs 226. The NF load refers to the amount of traffic or processing demand that each of the one or more NFs 226 is handling at any given time. The NF load represents utilization or workload of the one or more NFs 226 based on the network conditions. The NF load could be measured through various parameters, such as traffic volume, processing requests, and resource utilization. The traffic volume is the amount of data passing through the one or more NFs 226. The processing requests are the number of requests or transactions the one or more NFs 226 processes. The processing requests include, but are not limited to, routing, switching, security functions, etc. The resource utilization is the usage of Central Processing Unit (CPU), memory, or bandwidth resources by the one or more NFs 226 to handle the traffic. The SDR resource usage refers to the consumption of network resources based on the data streams passing through the system 108. The SDRs are logs or records that capture key metrics about the resource usage tied to streaming data. The SDR resource usage includes, but is not limited to, bandwidth usage, CPU/memory utilization, session management, and historical resource allocation.
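The pre-processing and standardization steps described above (cleaning, format consistency, consistent scaling, data encoding) may be sketched as follows. The record layout and field names are hypothetical; min-max scaling and integer label encoding are illustrative choices among many:

```python
from datetime import datetime, timezone

def preprocess(records):
    """Clean, standardize, scale, and encode raw NF records (illustrative)."""
    cleaned, seen = [], set()
    for r in records:
        # Cleaning: drop incomplete or duplicate records.
        if r.get("nf_type") is None or r.get("load") is None:
            continue
        key = (r["nf_type"], r["timestamp"])
        if key in seen:
            continue
        seen.add(key)
        cleaned.append(dict(r))
    # Format consistency: one timestamp format (ISO 8601, UTC).
    for r in cleaned:
        r["timestamp"] = datetime.fromtimestamp(
            r["timestamp"], tz=timezone.utc).isoformat()
    # Consistent scaling: min-max scale the load to [0, 1].
    loads = [r["load"] for r in cleaned]
    lo, hi = min(loads), max(loads)
    for r in cleaned:
        r["load"] = (r["load"] - lo) / (hi - lo) if hi > lo else 0.0
    # Data encoding: map categorical NF types to integers.
    codes = {t: i for i, t in enumerate(sorted({r["nf_type"] for r in cleaned}))}
    for r in cleaned:
        r["nf_code"] = codes[r["nf_type"]]
    return cleaned

raw = [
    {"nf_type": "AMF", "load": 40, "timestamp": 0},
    {"nf_type": "AMF", "load": 40, "timestamp": 0},    # duplicate
    {"nf_type": "UPF", "load": 80, "timestamp": 60},
    {"nf_type": None,  "load": 10, "timestamp": 120},  # incomplete
]
print(preprocess(raw))
```

The duplicate and the incomplete records are dropped, loads are rescaled to [0, 1], and the categorical NF types become numeric codes usable by the logic models.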
[0046] Upon receiving the data from the one or more data sources 228, the selecting unit 212 selects the data corresponding to one or more features from the received data. The one or more features correspond to historical NF load, network traffic patterns, NF types, geo-location, and time of day. The historical NF load refers to past records of how much traffic or demand each of the one or more NFs 226 handled over a certain period. The historical NF load helps in identifying patterns in network usage and assists in predicting future loads based on past performance. The network traffic pattern represents the flow of data across the network 106, including the type, volume, and direction of traffic. The NF types include one or more NFs 226 performing various tasks such as routing, firewalling, and load balancing. The geo-location data helps in tailoring predictions and resource allocation to specific areas. The time of day refers to the network usage depending on the time of day (e.g., more traffic during business hours).
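The feature selection described above may be sketched minimally as follows; the dictionary keys are illustrative assumptions standing in for however the features are actually keyed in the received data.

```python
# Feature fields named in the description; keys are illustrative.
FEATURE_KEYS = ("historical_nf_load", "traffic_pattern", "nf_type",
                "geo_location", "time_of_day")

def select_features(record):
    """Return only the fields of a record that correspond to the
    one or more features, discarding everything else."""
    return {k: record[k] for k in FEATURE_KEYS if k in record}
```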
[0047] Upon selecting the one or more features from the received data, the training unit 214 is configured to train one or more logic models. The one or more logic models are mathematical or computational models that learn from historical and real-time data to make predictions or decisions. The one or more logic models are trained by utilizing the selected data, NF load patterns, and SDR resource allocation history. The selected data includes the one or more features such as the historical NF load, the network traffic patterns, the NF types, the geo-location, and the time of day. The NF load patterns refer to past behaviors in how the one or more NFs 226 handle different types of loads over time. The SDR resource allocation history refers to historical records of how resources were distributed to the one or more NFs 226 to handle streaming data. The one or more logic models are trained to identify patterns and relationships between a load of the one or more NFs 226 and allocated resources. The identifying of patterns includes recognizing recurring trends or behaviors in the NF load and resource allocation. The relationship between the load of the one or more NFs 226 and allocated resources refers to how the demand or workload placed on each of the one or more NFs 226 influences the allocation of resources, like bandwidth, CPU, or memory, for each of the one or more NFs 226 to perform efficiently. The load refers to the amount of network traffic, data processing, or service requests that each of the one or more NFs 226 can handle. The allocated resources are the computational or network resources that are provided to each of the one or more NFs 226 to process the load.
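As a deliberately simple, non-limiting stand-in for the logic models, the relationship between NF load and allocated resources can be learned by ordinary least squares; the linear form is an assumption for illustration, not the claimed training method.

```python
def train_load_resource_model(loads, resources):
    """Fit resources ~ a * load + b by ordinary least squares.

    A minimal sketch of learning the relationship between the load
    placed on an NF and the resources allocated to serve that load.
    """
    n = len(loads)
    mean_x = sum(loads) / n
    mean_y = sum(resources) / n
    # Slope = covariance(load, resources) / variance(load).
    sxx = sum((x - mean_x) ** 2 for x in loads)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(loads, resources))
    a = sxy / sxx
    b = mean_y - a * mean_x
    return a, b
```

A trained pair `(a, b)` can then be used to estimate the resources needed for a forecast load as `a * load + b`.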
[0048] Upon training the one or more logic models, the identifying unit 216 is configured to identify trends and patterns in the NF load pattern and the SDR resource allocation history. The trends represent the direction or trajectory of data over time, showing how specific aspects of NF load or resource allocation change. The trends include, but are not limited to, increase or decrease in load, seasonal or time-based trends, and resource allocation shifts. The patterns refer to consistent, recognizable sequences or correlations in the data that occur under specific conditions. The patterns include, but are not limited to, repeated load behavior, correlation between load and resource allocation, and geographical or network segment-specific patterns. The NF load pattern refers to the historical behavior of the NF load over time, including the peaks, troughs, and average load levels experienced by each of the one or more NFs 226. The SDR resource allocation history refers to the historical records of how resources were allocated in response to the NF load. The NF load pattern and the SDR resource allocation history are retrieved from the database 208. The SDR resource allocation history is retrieved from the database 208 by accessing historical data related to how resources have been allocated in response to the NF load.
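The identification of an increasing or decreasing load trend may be illustrated by comparing windowed averages of a load series; the window size and the 5% tolerance are illustrative assumptions.

```python
def identify_trend(series, window=3, tolerance=0.05):
    """Classify a load series as 'increasing', 'decreasing', or 'stable'
    by comparing the mean of the first and last windows (illustrative)."""
    head = sum(series[:window]) / window
    tail = sum(series[-window:]) / window
    if tail > head * (1 + tolerance):
        return "increasing"
    if tail < head * (1 - tolerance):
        return "decreasing"
    return "stable"
```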
[0049] Upon identifying the trends and patterns in the NF load pattern and the SDR resource allocation history, the analyzing unit 218 is configured to analyze the identified trends and patterns to predict the load for each of the one or more NFs 226. The predicted load corresponds to the load on each of the one or more NFs 226 for a limited time period. The predicted load refers to the forecasting of how much traffic or processing demand each of the one or more NFs 226 will experience within the limited time period. The prediction is based on, but not limited to, past load trends, resource allocation history, and time-sensitive patterns. The limited time period refers to a defined duration during which the load on each of the one or more NFs 226 is predicted. The defined duration refers to a specific timeframe. For example, the defined duration includes, but is not limited to, short-term forecasting (such as 5 minutes, 30 minutes, etc.), medium-term forecasting (such as 4 hours, 12 hours, 24 hours, etc.), and long-term forecasting (such as 1 week, 2 weeks, etc.).
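A minimal sketch of predicting the load over a limited time period is linear extrapolation of the recent load history; the linear-trend assumption is for illustration only and is not the claimed forecasting technique.

```python
def predict_load(history, horizon):
    """Extrapolate NF load over a limited horizon by fitting a straight
    line to the load history (illustrative linear-trend sketch)."""
    n = len(history)
    xs = range(n)
    mean_x = (n - 1) / 2
    mean_y = sum(history) / n
    sxx = sum((x - mean_x) ** 2 for x in xs)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, history))
    a = sxy / sxx              # trend slope per time step
    b = mean_y - a * mean_x    # intercept
    # Forecast the next `horizon` time steps beyond the history.
    return [a * (n + k) + b for k in range(horizon)]
```

For a short-term forecast, `history` might hold per-minute samples and `horizon` a few steps; the same shape applies to medium- and long-term windows with coarser samples.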
[0050] Upon predicting the load, the detecting unit 220 is configured to detect a deviation between the predicted load and the NF load on receipt of the real time data. The deviation refers to the difference or disparity between the predicted load on the each of the one or more NFs 226 and the actual NF load measured in real time. The deviation is quantified as the difference between the predicted load and the actual load.
[0051] In response to detection of the deviation, the triggering unit 222 is configured to trigger an alert. The alert is a notification or signal generated in response to the detection of the deviation between the predicted load and the actual load on each of the one or more NFs 226. The alerts are at least one of threshold-based alerts, warning alerts, and critical alerts. The alerts include, but are not limited to, details of the deviation, the affected NF, a timestamp, and recommended actions. For example, the alert is generated when the actual load on an NF exceeds a pre-defined threshold, indicating a potential overloading issue. In particular, the alert message indicates: “Load threshold exceeded for NF. Actual load: 85%, Predicted load: 60%. Timestamp: 2024-10-04 14:35”.
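The deviation detection and threshold-based alert described above may be sketched as follows, producing a message in the style of the example in the description; the 10% default threshold is an illustrative assumption.

```python
def check_deviation(predicted, actual, threshold=0.10, timestamp=""):
    """Compare predicted and actual NF load (as fractions of capacity).

    Returns an alert string when the absolute deviation exceeds the
    threshold, otherwise None.
    """
    deviation = actual - predicted
    if abs(deviation) <= threshold:
        return None
    return ("Load threshold exceeded for NF. "
            f"Actual load: {actual:.0%}, Predicted load: {predicted:.0%}. "
            f"Timestamp: {timestamp}")
```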
[0052] Subsequently, the allocating unit 224 is configured to allocate one or more resources based on the detected deviation. The one or more resources that can be allocated include, but are not limited to, computational resources, network resources, and storage resources. Therefore, the system 108 ensures that SDR resources are optimally utilized, reducing waste and operational costs. The system 108 enhances the Quality of Service (QoS) by maintaining consistent QoS for telecom customers even during peak demand periods. The system 108 can significantly reduce operational costs by avoiding over-allocation of resources. The system 108 can scale to handle a large number of the one or more NFs 226 and adapt to changing network conditions. The enhanced performance of the system 108 leads to faster processing and more accurate outcomes. The system 108 leads to improved network performance.
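One possible allocation policy, sketched below for illustration, scales resource units in proportion to the detected deviation: scaling up eagerly when the actual load exceeds the prediction, and releasing units conservatively when it falls short. The unit capacity and the asymmetric rounding are assumptions; the disclosure does not fix a particular policy.

```python
import math

def allocate_resources(current_units, predicted, actual, unit_capacity=0.10):
    """Adjust the number of allocated resource units based on the
    deviation between actual and predicted load (illustrative policy)."""
    deviation = actual - predicted
    if deviation > 0:
        # Under-provisioned: round up so capacity covers the shortfall.
        change = math.ceil(deviation / unit_capacity)
    else:
        # Over-provisioned: round down so we never release too much.
        change = -math.floor(-deviation / unit_capacity)
    # Never scale below a single unit.
    return max(1, current_units + change)
```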
[0053] FIG. 3 is an exemplary block diagram of an architecture 300 of the system 108 for management of the one or more NFs 226 in the network 106, according to one or more embodiments of the present invention.
[0054] The architecture 300 includes a probing unit 302 and a processing hub 304. The processing hub 304 includes an integrator 306, a model training unit 308, a real-time monitoring unit 310, a load prediction unit 312, and a dynamic resource allocation unit 314. The load prediction unit 312 is communicably coupled to a data lake 316. Further, the user interface 206 is communicably coupled to the processing hub 304.
[0055] In an embodiment, the integrator 306 receives the real-time data from the probing unit 302. The probing unit 302 is at least one of the data sources. The probing unit 302 is responsible for collecting real-time data from the network 106. The probing unit 302 acts as a monitoring tool that captures relevant metrics such as traffic patterns, resource utilization, and other key performance indicators related to the one or more NFs 226. More specifically, the probing unit 302 continuously monitors the network environment, collecting metrics such as load levels, traffic patterns, and key performance indicators (KPIs) related to the one or more NFs 226. The collection of data occurs in real time to capture the current state of the network 106. The real-time data from the probing unit 302 is received via a processing hub-probing unit interface. Upon receiving the real-time data from the probing unit 302, the integrator 306 integrates the received data. In an embodiment, the data is pre-processed and standardized.
[0056] Upon integrating the received data, the integrator 306 transmits the received data to the model training unit 308. The model training unit 308 is responsible for training the one or more logic models based on the real-time data and NF load patterns. The one or more logic models are trained to identify patterns and relationships between the NF load and the allocated resources. The one or more logic models identify relationships between the NF load and the allocated resources by using historical data and the SDR allocation data. The functions of the model training unit 308 include, but are not limited to, training status, model output, inference, and model retraining.
[0057] Upon training the one or more logic models, the real-time monitoring unit 310 monitors the trends and the patterns from the trained models by comparing the predictions made by the trained models with the real-time data. The real-time monitoring unit 310 monitors the trends and the patterns to detect the deviations. If deviations are detected, the alert or the notification is transmitted to the user interface 206.
[0058] Subsequently, the load prediction unit 312 predicts the NF load based on the historical data, the real-time inputs, and the one or more logic models. More specifically, the load prediction unit 312 determines the upcoming load for each of the one or more NFs 226 from the output of the trained models. Upon predicting the load, if additional resources are required, the dynamic resource allocation unit 314 allocates the one or more resources dynamically based on the model output. The dynamic resource allocation unit 314 is responsible for automatically assigning resources based on the load predictions or detected deviations in real time. In an embodiment, the data lake 316 stores both the historical and the real-time data related to NF load patterns and SDR resource allocation history.
[0059] FIG. 4 is a flow diagram for management of the one or more NFs 226 in the network 106, according to one or more embodiments of the present invention.
[0060] At step 402, the real-time data is received from the probing unit 302 via the processing hub-probing unit interface. The data is received from one or more sources. The one or more sources include, but are not limited to, network function logs, NF usage statistics, historical SDR allocation data, and network traffic data. Upon receiving the data, the received data is integrated by the integrator 306. In an embodiment, the received data is pre-processed and standardized.
[0061] At step 404, upon integrating the received data, the model training unit 308 trains the one or more logic models. The one or more logic models are trained based on the real-time data and NF load patterns. The one or more logic models are trained to identify patterns and relationships between the NF load and the allocated resources. The one or more logic models identify relationships between the NF load and the allocated resources by using historical data and the SDR allocation data. The functions of the model training unit 308 include, but are not limited to, training status, model output, inference, and model retraining. In an embodiment, various machine learning algorithms are employed to train the one or more logic models. The one or more logic models include, but are not limited to, regression models, time series forecasting models, or more advanced techniques like neural networks, depending on the complexity of the problem and the available data.
[0062] At step 406, upon training the one or more logic models, the load prediction unit 312 predicts the load by analyzing the historical data. More specifically, the load prediction unit 312 predicts the upcoming load for each of the one or more NFs 226. The predictions are made for a specific time frame, typically covering the immediate future (e.g., the next hour, day, or week).
[0063] At step 408, subsequently, the real-time monitoring unit 310 monitors the trends and the patterns from the trained models by comparing the predictions made by the trained models with the real-time data. Upon monitoring, the real-time monitoring unit 310 detects if any deviations are present between the predicted load and the real-time data. If the load of the one or more NFs 226 exceeds or falls below the predicted value, the alert is triggered.
[0064] At step 410, upon predicting the load, the model output is checked to determine whether a deviation is present between the predicted load and the real-time data. If the deviation is not detected, the one or more logic models are retrained. Alternatively, if the deviation is detected, the one or more resources are dynamically allocated based on the predicted load.
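The step-410 decision may be sketched as a single control step that either allocates resources or retrains the models; the callback functions are illustrative stand-ins for the dynamic resource allocation unit 314 and the model training unit 308 of FIG. 4.

```python
def control_step(predicted, actual, threshold, allocate, retrain):
    """One pass of the step-410 decision: allocate resources when a
    deviation is detected, otherwise retrain the logic models.

    `allocate` and `retrain` are callbacks standing in for the
    corresponding units of the architecture (illustrative)."""
    if abs(actual - predicted) > threshold:
        # Deviation detected: dynamically allocate for the predicted load.
        return allocate(predicted)
    # No deviation: keep the models current by retraining.
    return retrain()
```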
[0065] FIG. 5 is a flow diagram of a method 500 for management of the one or more NFs 226 in the network 106, according to one or more embodiments of the present invention. For the purpose of description, the method 500 is described with the embodiments as illustrated in FIG. 2 and should nowhere be construed as limiting the scope of the present disclosure.
[0066] At step 502, the method 500 includes the step of receiving the data from the one or more data sources 228 in real time by the receiving unit 210. The data sources are at least one of a file input, a source path, an input stream, Hyper Text Transfer Protocol 2 (HTTP 2), Hadoop Distributed File System (HDFS), Network Attached Storage (NAS). In an embodiment, the received data is pre-processed and standardized.
[0067] At step 504, the method 500 includes the step of selecting the data corresponding to the one or more features from the received data by the selecting unit 212. The one or more features correspond to historical NF load, network traffic patterns, NF types, geo-location, and time of day.
[0068] At step 506, the method 500 includes the step of training the one or more logic models utilizing the selected data, NF load patterns, and Streaming Data Records (SDRs) resource allocation history by the training unit 214. The one or more logic models are trained to identify patterns and relationship between a load of the one or more NFs 226 and allocated resources.
[0069] At step 508, the method 500 includes the step of identifying the trends and patterns in the NF load patterns and the SDRs resource allocation history based on the training of the one or more logic models by the identifying unit 216. The NF load pattern and the SDR resource allocation history are retrieved from the database 208.
[0070] At step 510, the method 500 includes the step of analyzing the identified trends and patterns to predict the load for each of the one or more NFs 226 by the analyzing unit 218. The predicted load corresponds to the load on each of the one or more NFs 226 for a limited time period. In an embodiment, the deviation between the predicted load and the NF load is detected by the detecting unit 220. Upon detection, the alert is triggered in response to detection of the deviation by the triggering unit 222. Subsequently, the one or more resources are allocated based on the detected deviation by the allocating unit 224.
[0071] The present invention further discloses a non-transitory computer-readable medium having stored thereon computer-readable instructions. The computer-readable instructions are executed by the processor 202. The processor 202 is configured to receive data from the one or more data sources 228 in real time. The processor 202 is further configured to select the data corresponding to the one or more features from the received data. The processor 202 is further configured to train the one or more logic models utilizing the selected data, NF load patterns, and Streaming Data Records (SDR) resource allocation history to identify the patterns and relationship between the load of the one or more NFs 226 and allocated resources. The processor 202 is further configured to identify the trends and patterns in the NF load pattern and the SDR resource allocation history based on the training of the one or more logic models. The processor 202 is further configured to analyze the identified trends and patterns to predict a load for each of the one or more NFs 226.
[0072] A person of ordinary skill in the art will readily ascertain that the illustrated embodiments and steps in description and drawings (FIG.1-5) are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope and spirit of the disclosed embodiments.
[0073] The present disclosure incorporates a technical advancement that enhances the overall network performance by predicting future loads and optimizing the allocation of resources. The present invention enables proactive network management, allowing administrators to preemptively allocate resources or take corrective actions. The present invention maintains the network stability and performance by minimizing service disruptions. By accurately predicting NF load and dynamically allocating resources, the present invention ensures that SDR resources are optimally utilized, reducing waste and operational costs. The present invention enhances the Quality of Service (QoS) by maintaining consistent QoS for telecom customers, even during peak demand periods. The present invention can significantly reduce the operational costs of service providers by avoiding over-allocation of resources. The present invention can scale to handle a large number of NFs and adapt to changing network conditions. The present invention provides improved network performance.
[0074] The present invention offers multiple advantages over the prior art and the above listed are a few examples to emphasize on some of the advantageous features. The listed advantages are to be read in a non-limiting manner.


REFERENCE NUMERALS

[0075] Environment- 100
[0076] User Equipment (UE)- 102
[0077] Server- 104
[0078] Network- 106
[0079] System -108
[0080] Processor- 202
[0081] Memory- 204
[0082] User Interface- 206
[0083] Database- 208
[0084] Receiving Unit- 210
[0085] Selecting Unit- 212
[0086] Training unit- 214
[0087] Identifying Unit- 216
[0088] Analyzing Unit- 218
[0089] Detecting Unit- 220
[0090] Triggering Unit- 222
[0091] Allocating unit- 224
[0092] One or more NFs- 226
[0093] One or more data sources- 228
[0094] Probing unit-302
[0095] Integrator- 306
[0096] Model training unit- 308
[0097] Real-time monitoring unit- 310
[0098] Load prediction unit- 312
[0099] Dynamic resource allocation unit- 314
[00100] Data lake- 316
CLAIMS:
We Claim:
1. A method (500) of management of one or more Network Functions (NFs) (226) in a network (106), the method comprising the steps of:
receiving, by one or more processors (202), data from one or more data sources (228) in real time;
selecting, by the one or more processors (202), data corresponding to one or more features from the received data;
training, by the one or more processors (202), one or more logic models utilizing the selected data, NF load patterns, and Streaming Data Records (SDRs) resource allocation history to identify patterns and relationship between a load of the one or more NFs (226) and allocated resources;
identifying, by the one or more processors (202), trends and patterns in the NF load patterns and the SDRs resource allocation history based on the training of the one or more logic models; and
analysing, by the one or more processors (202), the identified trends and patterns to predict the load for each of the one or more NFs (226).

2. The method (500) as claimed in claim 1, wherein the one or more data sources (228) are at least one of a file input, a source path, an input stream, Hyper Text Transfer Protocol 2 (HTTP 2), Hadoop Distributed File System (HDFS), and Network Attached Storage (NAS), and wherein the received data is pre-processed and standardized.

3. The method (500) as claimed in claim 1, wherein the one or more features correspond to historical NF load, network traffic patterns, NF types, geo-location, and time of day.

4. The method (500) as claimed in claim 1, wherein the NF load pattern and the SDR resource allocation history is retrieved from a database (208).

5. The method (500) as claimed in claim 1, wherein the predicted load corresponds to the load on each of the one or more NFs (226) for a limited time period.

6. The method (500) as claimed in claim 1, comprising the steps of:
detecting, by the one or more processors (202), a deviation between the predicted load and the NF load;
triggering, by the one or more processors (202), an alert in response to detection of the deviation; and
allocating, by the one or more processors (202), one or more resources based on the detected deviation.

7. A system (108) for management of one or more Network Functions (NFs) (226) in a network (106), the system (108) comprises:
a receiving unit (210) configured to receive, data from one or more data sources (228) in real time;
a selecting unit (212) configured to select, data corresponding to one or more features from the received data;
a training unit (214) configured to train, one or more logic models utilizing the selected data, NF load patterns, and Streaming Data Records (SDR) resource allocation history to identify patterns and relationship between a load of the one or more NFs (226) and allocated resources;
an identifying unit (216) configured to identify, trends and patterns in the NF load pattern and the SDR resource allocation history based on the training of the one or more logic models; and
an analysing unit (218) configured to analyse, the identified trends and patterns to predict a load for each of the one or more NFs (226).

8. The system (108) as claimed in claim 7, wherein the one or more data sources (228) are at least one of a file input, a source path, an input stream, Hyper Text Transfer Protocol 2 (HTTP 2), Hadoop Distributed File System (HDFS), and Network Attached Storage (NAS), and wherein the received data is pre-processed and standardized.

9. The system (108) as claimed in claim 7, wherein the one or more features correspond to historical NF load, network traffic patterns, NF types, geo-location, and time of day.

10. The system (108) as claimed in claim 7, wherein the NF load pattern and the SDR resource allocation history is retrieved from a database (208).

11. The system (108) as claimed in claim 7, wherein the predicted load corresponds to the load on each of the one or more NFs (226) for a limited time period.

12. The system (108) as claimed in claim 7, wherein the receiving unit (210) is configured to receive, real-time data pertaining to an NF load and SDR resource usage from the one or more NFs (226).

13. The system (108) as claimed in claim 12, wherein the system (108) comprises:
a detecting unit (220) configured to detect, a deviation between the predicted load and the NF load on receipt of the real-time data;
a triggering unit (222) configured to trigger, an alert in response to detection of the deviation; and
an allocating unit (224) configured to allocate, one or more resources based on the detected deviation.
