
Method And System For Forecasting Resource Requirements In A Network

Abstract: The present disclosure relates to a system (120) and a method (400) for forecasting resource requirements in a network (105). The method includes the step of collecting, by one or more processors (205), data pertaining to network metrics from at least one of a plurality of network functions in the network (105). The method includes the step of identifying, by the one or more processors (205), utilizing a trained model, a relationship and patterns between the collected network metrics and one or more resources utilized in the network (105). The method includes the step of predicting, by the one or more processors (205), utilizing the trained model, future requirements of the one or more resources based on the identified relationship and patterns. Ref. Fig. 2


Patent Information

Application #
Filing Date
15 July 2023
Publication Number
42/2024
Publication Type
INA
Invention Field
COMPUTER SCIENCE
Status
Parent Application

Applicants

JIO PLATFORMS LIMITED
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi,

Inventors

1. Vitap Pandey
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi,
2. Kalikivayi Srinath
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi,
3. Aayush Bhatnagar
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi,
4. Supriya De
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi,
5. Chandra Kumar Ganveer
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi,
6. Kumar Debashish
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi,
7. Tilala Mehul
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi,
8. Sunil Meena
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi,
9. Rahul Verma
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi,
10. Gourav Gurbani
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi,
11. Sanjana Chaudhary
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi,
12. Jugal Kishore Kolariya
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi,
13. Ankit Murarka
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi,
14. Gaurav Kumar
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi,
15. Kishan Sahu
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi,

Specification

DESC:
FORM 2
THE PATENTS ACT, 1970
(39 of 1970)
&
THE PATENTS RULES, 2003

COMPLETE SPECIFICATION
(See section 10 and rule 13)
1. TITLE OF THE INVENTION
METHOD AND SYSTEM FOR FORECASTING RESOURCE REQUIREMENTS IN A NETWORK
2. APPLICANT(S)
NAME NATIONALITY ADDRESS
JIO PLATFORMS LIMITED INDIAN OFFICE-101, SAFFRON, NR. CENTRE POINT, PANCHWATI 5 RASTA, AMBAWADI, AHMEDABAD 380006, GUJARAT, INDIA
3. PREAMBLE TO THE DESCRIPTION

THE FOLLOWING SPECIFICATION PARTICULARLY DESCRIBES THE NATURE OF THIS INVENTION AND THE MANNER IN WHICH IT IS TO BE PERFORMED.

FIELD OF THE INVENTION
[0001] The present invention generally relates to wireless communication networks, and more particularly relates to a method and system for forecasting resource requirements in a wireless communication network.
BACKGROUND OF THE INVENTION
[0002] In cloud-native environments, the demand for network resources can vary greatly depending on the workload. This is because cloud-native environments are made up of a large number of microservices, which are small, independent services that communicate with each other. The workload of a microservice can change frequently, depending on the number of users that are using it, the type of data that they are processing, and the other microservices that it is communicating with.
[0003] This variability in workload can lead to problems, such as resource underutilization and overprovisioning. For instance, if the demand for network resources is lower than the capacity of the network, then some of the resources will be underutilized. Alternatively, if the demand for network resources is higher than the capacity of the network, then the network will be overloaded.
[0004] Both resource underutilization and overprovisioning can have negative consequences for performance and cost. Resource underutilization can lead to wasted resources and increased costs. Overprovisioning can lead to performance problems, such as latency and jitter.
[0005] Thus, it is important to be able to forecast future demand for network resources.
SUMMARY OF THE INVENTION
[0006] One or more embodiments of the present disclosure provide a method and a system for forecasting resource requirements in a network.
[0007] In one aspect of the present invention, the method for forecasting resource requirements in the network is disclosed. The method includes the step of collecting, by one or more processors, data pertaining to network metrics from at least one of a plurality of network functions in the network. The method includes the step of identifying, by the one or more processors, utilizing a trained model, a relationship and patterns between the collected network metrics and one or more resources utilized in the network. The method includes the step of predicting, by the one or more processors, utilizing the trained model, future requirements of the one or more resources based on the identified relationship and patterns.
[0008] In one embodiment, the data pertaining to the network metrics includes at least one of, Central Processing Unit (CPU) utilization, memory usage, and network traffic.
[0009] In another embodiment, the trained model is an Artificial Intelligence/Machine Learning (AI/ML) model.
[0010] In yet another embodiment, the trained model is trained utilizing at least one of, historical data and historical patterns pertaining to the network metrics.
[0011] In yet another embodiment, the trained model learns trends/patterns related to the network metrics.
[0012] In yet another embodiment, the one or more resources include at least one of, CPU resources, memory resources, and network bandwidth.
[0013] In yet another embodiment, the step of identifying, utilizing the trained model, relationship and patterns between the collected network metrics and the one or more resources, includes the steps of performing, by the one or more processors, at least one of, a trend analysis and a pattern analysis to identify the relationship and the patterns between the collected network metrics and the one or more resources.
[0014] In yet another embodiment, the pattern analysis pertains to analyzing the hysteresis pattern of the network metrics utilizing the trained model.
[0015] In yet another embodiment, the step of predicting, utilizing the trained model, future requirements of the one or more resources based on the identified relationship and patterns, includes the step of defining, by the one or more processors, one or more thresholds pertaining to the network metrics in order to predict the one or more resource requirements based on the identified relationship and the patterns between the collected network metrics and the one or more resources utilized in the network. Further, the method includes the step of determining, by the one or more processors, whether the network metrics are within the defined thresholds or exceed the defined thresholds. Further, in response to determining, by the one or more processors, a deviation in at least one of the network metrics, related to exceeding the defined thresholds, based on the comparison of the network metrics with the defined thresholds, the method includes the step of inferring, by the one or more processors, future requirements of the one or more resources in the network.
[0016] In yet another embodiment, the method includes triggering one or more actions, wherein the one or more actions include at least one of, scaling/adding new resources to the network.
[0017] In another aspect of the present invention, the system for forecasting resource requirements in a network is disclosed. The system includes an agent manager configured to collect data pertaining to network metrics from at least one of a plurality of network functions in the network. The system includes an identification unit, configured to identify, utilizing a trained model, a relationship and patterns between the collected network metrics and one or more resources utilized in the network. The system includes a forecasting engine configured to predict, utilizing the trained model, future requirements of the one or more resources based on the identified relationship and patterns.
[0018] In yet another aspect of the present invention, a non-transitory computer-readable medium having stored thereon computer-readable instructions that, when executed by a processor, is disclosed. The processor is configured to collect data pertaining to network metrics from at least one of a plurality of network functions in the network. The processor is configured to identify, utilizing a trained model, a relationship and patterns between the collected network metrics and one or more resources utilized in the network. The processor is configured to predict, utilizing the trained model, future requirements of the one or more resources based on the identified relationship and patterns.
[0019] Other features and aspects of this invention will be apparent from the following description and the accompanying drawings. The features and advantages described in this summary and in the following detailed description are not all-inclusive, and particularly, many additional features and advantages will be apparent to one of ordinary skill in the relevant art, in view of the drawings, specification, and claims hereof. Moreover, it should be noted that the language used in the specification has been principally selected for readability and instructional purposes and may not have been selected to delineate or circumscribe the inventive subject matter, resort to the claims being necessary to determine such inventive subject matter.
BRIEF DESCRIPTION OF THE DRAWINGS
[0020] The accompanying drawings, which are incorporated herein, and constitute a part of this disclosure, illustrate exemplary embodiments of the disclosed methods and systems in which like reference numerals refer to the same parts throughout the different drawings. Components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present disclosure. Some drawings may indicate the components using block diagrams and may not represent the internal circuitry of each component. It will be appreciated by those skilled in the art that disclosure of such drawings includes disclosure of electrical components, electronic components or circuitry commonly used to implement such components.
[0021] FIG. 1 is an exemplary block diagram of an environment for forecasting resource requirements in a network, according to one or more embodiments of the present disclosure;
[0022] FIG. 2 is an exemplary block diagram of a system for forecasting resource requirements in the network, according to one or more embodiments of the present disclosure;
[0023] FIG. 3 is a sequence flow diagram illustrating the system for forecasting resource requirements in the network, according to one or more embodiments of the present disclosure;
[0024] FIG. 4 is a flow diagram illustrating a method for forecasting resource requirements in the network, according to one or more embodiments of the present disclosure;
[0025] FIG. 5 is a flow diagram illustrating a method for identifying a relationship and patterns between the collected network metrics and one or more resources utilized in the network, according to one or more embodiments of the present disclosure; and
[0026] FIG. 6 is a flow diagram illustrating a method for predicting future requirements of the one or more resources based on the identified relationship and patterns, according to one or more embodiments of the present disclosure.
[0027] The foregoing shall be more apparent from the following detailed description of the invention.
DETAILED DESCRIPTION OF THE INVENTION
[0028] Some embodiments of the present disclosure, illustrating all its features, will now be discussed in detail. It must also be noted that as used herein and in the appended claims, the singular forms "a", "an" and "the" include plural references unless the context clearly dictates otherwise.
[0029] Various modifications to the embodiment will be readily apparent to those skilled in the art, and the generic principles herein may be applied to other embodiments. However, one of ordinary skill in the art will readily recognize that the present disclosure, including the definitions listed herein below, is not intended to be limited to the embodiments illustrated but is to be accorded the widest scope consistent with the principles and features described herein.
[0030] A person of ordinary skill in the art will readily ascertain that the illustrated steps detailed in the figures and here below are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope and spirit of the disclosed embodiments.
[0031] FIG. 1 illustrates an exemplary block diagram of an environment 100 for forecasting resource requirements in a network 105, according to one or more embodiments of the present disclosure. The environment 100 includes the network 105, a User Equipment (UE) 110, a server 115, and a system 120. The UE 110 aids a user to interact with the system 120 for forecasting resource requirements in the network 105. The resource refers to the various components and capacities necessary for network operation, performance, and management. In an embodiment, the resource includes, but is not limited to, the server 115, a container, a Network Function (NF), and the like. The server 115 refers to a physical or virtual machine that provides computational power, storage, and network management capabilities. The container includes all the necessary code, runtime, system tools, libraries, and settings to run an application. The NF is a functional building block within a network infrastructure that performs specific tasks, such as routing, firewalling, load balancing, or intrusion detection.
[0032] As per the illustrated embodiment and for the purpose of description and explanation, the description will be explained with respect to the UE 110, or to be more specific will be explained with respect to a first UE 110a, a second UE 110b, and a third UE 110c, and should nowhere be construed as limiting the scope of the present disclosure. Each of the first UE 110a, the second UE 110b, and the third UE 110c connected to the network 105, will hereinafter be collectively and individually referred to as the “User Equipment (UE) 110”.
[0033] In an embodiment, the UE 110 is one of, but not limited to, any electrical, electronic, electro-mechanical or an equipment and a combination of one or more of the above devices such as virtual reality (VR) devices, augmented reality (AR) devices, laptop, a general-purpose computer, desktop, personal digital assistant, tablet computer, mainframe computer, or any other computing device.
[0034] The network 105 includes, by way of example but not limitation, one or more of a wireless network, a wired network, an internet, an intranet, a public network, a private network, a packet-switched network, a circuit-switched network, an ad hoc network, an infrastructure network, a Public-Switched Telephone Network (PSTN), a cable network, a cellular network, a satellite network, a fiber optic network, or some combination thereof. The network 105 may include, but is not limited to, a Third Generation (3G), a Fourth Generation (4G), a Fifth Generation (5G), a Sixth Generation (6G), a New Radio (NR), a Narrow Band Internet of Things (NB-IoT), an Open Radio Access Network (O-RAN), and the like.
[0035] The network 105 may also include, by way of example but not limitation, at least a portion of one or more networks having one or more nodes that transmit, receive, forward, generate, buffer, store, route, switch, process, or a combination thereof, etc. one or more messages, packets, signals, waves, voltage or current levels, some combination thereof, or so forth. The network 105 may also include, by way of example but not limitation, one or more of a wireless network, a wired network, an internet, an intranet, a public network, a private network, a packet-switched network, a circuit-switched network, an ad hoc network, an infrastructure network, a Public-Switched Telephone Network (PSTN), a cable network, a cellular network, a satellite network, a fiber optic network, a VOIP or some combination thereof.
[0036] The environment 100 includes the server 115 accessible via the network 105. The server 115 may include by way of example but not limitation, one or more of a standalone server, a server blade, a server rack, a bank of servers, a server farm, hardware supporting a part of a cloud service or system, a home server, hardware running a virtualized server, one or more processors executing code to function as a server, one or more machines performing server-side functionality as described herein, at least a portion of any of the above, some combination thereof. In an embodiment, the entity may include, but is not limited to, a vendor, a network operator, a company, an organization, a university, a lab facility, a business enterprise side, a defense facility side, or any other facility that provides service.
[0037] The environment 100 further includes the system 120 communicably coupled to the server 115 and the UE 110 via the network 105. The system 120 is adapted to be embedded within the server 115 or is embedded as the individual entity.
[0038] Operational and construction features of the system 120 will be explained in detail with respect to the following figures.
[0039] FIG. 2 illustrates an exemplary block diagram of the system 120 for forecasting resource requirements in the network 105, according to one or more embodiments of the present disclosure. The system 120 includes one or more processors 205, a memory 210, and a distributed data lake 230. The one or more processors 205, hereinafter referred to as the processor 205, may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, single board computers, and/or any devices that manipulate signals based on operational instructions. As per the illustrated embodiment, the system 120 includes one processor 205. However, it is to be noted that the system 120 may include more than one processor 205 as per the requirement of the network 105, without deviating from the scope of the present disclosure.
[0040] Among other capabilities, the processor 205 is configured to fetch and execute computer-readable instructions stored in the memory 210. The memory 210 may be configured to store one or more computer-readable instructions or routines in a non-transitory computer-readable storage medium, which may be fetched and executed to create or share data packets over a network service. The memory 210 may include any non-transitory storage device including, for example, volatile memory such as RAM, or non-volatile memory such as EPROMs, FLASH memory, unalterable memory, and the like.
[0041] The distributed data lake 230 is a data repository providing storage and computing for structured and unstructured data, such as for machine learning, streaming, or data science. The distributed data lake 230 allows the user and/or an organization to ingest and manage large volumes of data in an aggregated storage solution for business intelligence or data products. The distributed data lake 230 may be implemented and utilize different technologies.
[0042] Further, the processor 205, in an embodiment, may be implemented as a combination of hardware and programming (for example, programmable instructions) to implement one or more functionalities of the processor 205. In the examples described herein, such combinations of hardware and programming may be implemented in several different ways. For example, the programming for the processor 205 may be processor-executable instructions stored on a non-transitory machine-readable storage medium and the hardware for processor 205 may comprise a processing resource (for example, one or more processors), to execute such instructions. In the present examples, the memory 210 may store instructions that, when executed by the processing resource, implement the processor 205. In such examples, the system 120 may comprise the memory 210 storing the instructions and the processing resource to execute the instructions, or the memory 210 may be separate but accessible to the system 120 and the processing resource. In other examples, the processor 205 may be implemented by electronic circuitry.
[0043] In order for the system 120 to forecast resource requirements in the network 105, the processor 205 includes an Agent Manager (AM) 215, an identification unit 220, and a forecasting engine 225 communicably coupled to each other for forecasting resource requirements in the network 105.
[0044] The AM 215, the identification unit 220, and the forecasting engine 225, in an embodiment, may be implemented as a combination of hardware and programming (for example, programmable instructions) to implement one or more functionalities of the processor 205. In the examples described herein, such combinations of hardware and programming may be implemented in several different ways. For example, the programming for the processor 205 may be processor-executable instructions stored on a non-transitory machine-readable storage medium and the hardware for the processor may comprise a processing resource (for example, one or more processors), to execute such instructions. In the present examples, the memory 210 may store instructions that, when executed by the processing resource, implement the processor. In such examples, the system 120 may comprise the memory 210 storing the instructions and the processing resource to execute the instructions, or the memory 210 may be separate but accessible to the system 120 and the processing resource. In other examples, the processor 205 may be implemented by electronic circuitry.
[0045] The AM 215 is configured to collect data pertaining to the network metrics from at least one of a plurality of Network Functions (NFs) in the network 105. The AM 215 interacts with the plurality of NFs on a southbound interface. The plurality of NFs performs one or more tasks. In an embodiment, the one or more tasks can range from basic data forwarding and routing to more complex operations such as authorization, policy management, charging, subscriber management, mobility, roaming, security enforcement, load balancing, or data analytics. In an embodiment, the data pertaining to the network metrics includes at least one of, Central Processing Unit (CPU) utilization, memory usage, and network traffic. In an embodiment, the network metrics are collected for containers, and the containers are hosted on the server 115. The AM 215 is hosted on the first and second hosts 310, 315. In another embodiment, the AM 215 is configured to collect container, Docker, image, volume, and daemon-type service statistics, along with Kubernetes statistics.
[0046] A mapping of the first and the second hosts 310, 315 and the containers is completed based on the network metrics, which ensures that the network metrics of each individual container running on a specific host server are captured effectively. The AM 215 is configured to transmit the collected data pertaining to the network metrics from the at least one of the plurality of NFs to the identification unit 220.
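The per-NF collection flow described above can be sketched as follows. This is a minimal illustrative sketch, not the disclosed implementation: the `AgentManager` class, the `MetricSample` fields, and the NF name `"amf-1"` are hypothetical names chosen for illustration.

```python
import time
from dataclasses import dataclass

@dataclass
class MetricSample:
    """One observation of the metrics named in the disclosure."""
    timestamp: float
    cpu_utilization: float   # percent
    memory_usage: float      # MiB
    network_traffic: float   # Mbps

class AgentManager:
    """Collects metric samples per network function and hands them onward."""
    def __init__(self) -> None:
        self.samples: dict[str, list[MetricSample]] = {}

    def collect(self, nf_name: str, cpu: float, mem: float, traffic: float) -> None:
        # Record one sample against the reporting network function.
        sample = MetricSample(time.time(), cpu, mem, traffic)
        self.samples.setdefault(nf_name, []).append(sample)

    def export(self, nf_name: str) -> list[MetricSample]:
        # Transmit the collected samples to the identification unit.
        return self.samples.get(nf_name, [])

agent = AgentManager()
agent.collect("amf-1", cpu=62.5, mem=1024.0, traffic=300.0)
agent.collect("amf-1", cpu=71.0, mem=1100.0, traffic=340.0)
print(len(agent.export("amf-1")))  # 2
```

In practice, the per-container statistics mentioned in the disclosure would be pulled from the container runtime or orchestrator rather than passed in by hand.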
[0047] The identification unit 220 is configured to identify a relationship and patterns between the collected network metrics and one or more resources utilizing a trained model. The relationship refers to the direct connections or dependencies between the collected network metrics and the one or more resources. Consider, for example, that the server 115 may require additional storage capacity as it accumulates more customer data over time, leading to a relationship between data growth and storage resource utilization. The patterns refer to recurring trends or behaviors observed in the data over time, such as periodic spikes in bandwidth usage during specific hours. The patterns include, but are not limited to, traffic patterns, usage patterns, resource utilization patterns, and the like. Consider, for example, a virtual machine hosting a web application that shows a pattern of higher memory consumption during daytime hours when user traffic is at its peak, and lower consumption during nighttime hours with reduced user activity.
[0048] The trained model may collect the historical data and insights about one or more resources from previous analyses. The historical data helps the trained model to recognize the patterns, correlations, and anomalies related to resource utilization and the network metrics. In an embodiment, if the demand for network resources is lower than the capacity of the network, then the one or more resources will be underutilized. The forecasting engine 225 can take one or more actions to address underutilization of network resources based on its predictions and analysis of the network metrics. In an example, the forecasting engine 225 can generate alerts or notifications when it detects potential underutilization or inefficient resource usage patterns. The forecasting engine 225 is configured to allow the network administrators to take proactive actions to address the issues and optimize resource utilization.
[0049] As per one embodiment, the trained model is an Artificial Intelligence/Machine Learning (AI/ML) model. The AI/ML model is configured to run on the data to determine anomalies or trigger forecasting for the network metrics. The AI/ML model is responsible for running AI/ML techniques on the network metrics that are stored in the distributed data lake 230. The AI/ML model is used to identify any anomalies in the network metrics or to forecast future trends. The ML model utilizes a variety of ML techniques, such as supervised learning, unsupervised learning, and reinforcement learning.
[0050] In one embodiment, supervised learning is a type of machine learning algorithm which is trained on a labeled dataset, that is, each training example is paired with an output label. The supervised learning algorithm learns to map inputs to the correct output. In one embodiment, unsupervised learning is a type of machine learning algorithm which is trained on data without any labels. The unsupervised learning algorithm tries to learn the underlying structure or distribution in the data in order to discover patterns or groupings. In one embodiment, reinforcement learning is a type of machine learning where an agent learns to make decisions by performing actions in an environment to maximize cumulative reward. The agent receives feedback in the form of rewards or penalties based on the actions it takes, and it learns a policy that maps states of the environment to the best actions.
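As an illustration of the supervised case, a least-squares line can be fitted to labeled (traffic, CPU) samples and then used to predict CPU utilization for an unseen traffic level. This is a hedged sketch: the disclosure does not name a particular algorithm, the `fit_line` helper is hypothetical, and the sample values are invented.

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of y = slope * x + intercept."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
            sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Labeled historical samples: network traffic (Mbps) -> CPU utilization (%).
traffic = [100.0, 200.0, 300.0, 400.0]
cpu =     [25.0,  45.0,  65.0,  85.0]

slope, intercept = fit_line(traffic, cpu)

def predict(x: float) -> float:
    return slope * x + intercept

print(round(predict(500.0), 1))  # 105.0
```

A real deployment would likely use a library estimator and far richer features; the point is only that the labeled (input, output) pairing is what makes this supervised.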
[0051] The trained model is trained utilizing at least one of, historical data and historical patterns pertaining to the network metrics to identify and learn trends and patterns within the network 105. The historical data is used to analyze past network performance and identify trends or patterns. In an embodiment, the trained model learns the trends/patterns related to the network metrics. The trained model is configured to analyze the trends over time, such as gradual increases in bandwidth usage or recurring patterns of downtime, which aids in understanding the long-term behavior of the network 105. The historical patterns refer to recurring behaviors or trends observed in the historical data over a period of time. The historical patterns can include regular fluctuations, anomalies, and trends that provide insights into how the network 105 typically operates. The historical patterns might encompass traffic volumes, latency variations, error rates, and other performance indicators that have been recorded and analyzed over time to predict future behaviors and detect the anomalies.
[0052] The trained model is configured to recognize the patterns in the network metrics. For instance, the patterns may include periodic increases in error rates that correspond with specific events, or correlations between high packet loss rates and certain network configurations. For example, the data may show that CPU utilization is correlated with network traffic, such that an increase in network traffic also increases CPU utilization. The pattern is used to define a threshold for CPU utilization. If the CPU utilization exceeds the defined threshold, then a new CPU resource may be provisioned. The trained model is configured for predicting and optimizing the allocation of network resources based on the learned trends. In an embodiment, the one or more resources managed by the trained model include at least one of, CPU resources, memory resources, and network bandwidth.
[0053] The identification unit 220 is configured to perform at least one of, a trend analysis and a pattern analysis to identify the relationship and the patterns between the collected network metrics and the one or more resources. The trend analysis aids in predicting the future behavior of the network metrics, which is useful for optimizing resource allocation and ensuring efficient network performance. Consider, for example, that by analyzing historical data of CPU usage, the trend analysis reveals that CPU usage spikes every Monday morning as employees start their workweek. Knowing this pattern, the identification unit 220 is configured to proactively allocate one or more CPU resources during that time to handle the increased load.
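The Monday-morning example can be sketched as a simple trend analysis over per-weekday averages. The `history` values below are invented for illustration and do not come from the disclosure.

```python
from statistics import mean

# Hourly CPU readings (%) keyed by weekday index (0 = Monday);
# values are purely illustrative.
history = {
    0: [85, 88, 90],  # Monday-morning spike
    1: [55, 57, 60],
    2: [54, 56, 58],
    3: [53, 55, 57],
    4: [52, 54, 56],
}

# Average utilization per weekday, then pick the day with the highest load.
averages = {day: mean(vals) for day, vals in history.items()}
peak_day = max(averages, key=averages.get)
print(peak_day)  # 0 -> pre-allocate extra CPU ahead of Monday mornings
```

A production trend analysis would use far longer windows and seasonal decomposition, but the grouping-and-compare step shown here is the core idea.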
[0054] In one embodiment, the pattern analysis pertains to analyzing the hysteresis pattern of the network metrics utilizing the trained model. In an example, by performing the pattern analysis on network traffic data, the identification unit 220 is configured to identify that a specific combination of user activities (e.g., file downloads, video streaming, and cloud-based application usage) consistently leads to network congestion during certain periods. The pattern analysis reveals that when the video streaming activity increases beyond defined thresholds, it significantly impacts the performance of other services such as file downloads and cloud-based applications.
[0055] Upon performing the at least one of, the trend analysis and the pattern analysis, the forecasting engine 225 is configured to predict future requirements of the one or more resources based on the identified relationship and patterns utilizing the trained model. The results of the trained model are sent to the forecasting engine 225. The forecasting engine 225 is configured to define one or more thresholds pertaining to the network metrics in order to predict one or more resource requirements. The one or more thresholds are utilized to predict future resource requirements based on the identified relationship and the patterns between the collected network metrics and the one or more resources utilized in the network 105.
[0056] Further, the forecasting engine 225 is configured to determine whether the network metrics are within the defined thresholds or exceed the defined thresholds. In an embodiment, if the network metrics are within the defined thresholds, the forecasting engine 225 maintains the current resource allocation and continues to monitor the network metrics. If the network metrics exceed the defined thresholds, the forecasting engine 225 triggers alerts and predicts the need for additional resources. The forecasting engine 225 is configured to determine a deviation in at least one of the network metrics that exceeds the defined thresholds, based on the comparison of the network metrics with the defined thresholds. Based on the determined deviation, the forecasting engine 225 is configured to infer the future requirements of the one or more resources in the network 105.
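Purely as an illustrative sketch (the metric names, threshold values, and return shape are assumptions, not part of the disclosure), the within-threshold/exceeds-threshold decision described above could look like:

```python
def evaluate_metrics(metrics, thresholds):
    """Compare collected metrics against defined thresholds.

    Returns (action, deviations): action is "maintain" when every metric is
    within its threshold (keep current allocation, keep monitoring), and
    "alert" otherwise; deviations maps each exceeding metric to the amount
    by which it exceeds its threshold.
    """
    deviations = {
        name: value - thresholds[name]
        for name, value in metrics.items()
        if name in thresholds and value > thresholds[name]
    }
    return ("alert" if deviations else "maintain"), deviations

thresholds = {"cpu": 80.0, "memory": 75.0, "traffic": 90.0}

print(evaluate_metrics({"cpu": 70.0, "memory": 60.0}, thresholds))
# -> ('maintain', {})
print(evaluate_metrics({"cpu": 92.0, "memory": 60.0}, thresholds))
# -> ('alert', {'cpu': 12.0})
```

The per-metric deviation is what a downstream component could use to size the additional resources it requests.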
[0057] Upon exceeding the defined threshold, the forecasting engine 225 is configured to trigger one or more actions based on the determined future requirements of the one or more resources in the network 105. In an embodiment, the one or more actions includes at least one of, scaling/adding new resources to the network 105. In another embodiment, the one or more actions includes at least one of, scaling/adding a new instance of the plurality of NFs. Adding the new instance involves provisioning, configuring, and launching a new virtual machine (VM), container, or application instance to run a specific workload or service. In yet another embodiment, the one or more actions involves provisioning more network bandwidth, increasing CPU or memory resources, optimizing storage, or making other necessary network adjustments to prevent performance issues. By implementing the one or more thresholds and continuously monitoring the network metrics, the system 120 ensures that the network 105 operates efficiently and predicts future requirements of the one or more resources utilizing the trained model, thus improving the processing speed of the processor 205, avoiding network congestion, and reducing the requirement of memory space.
[0058] FIG. 3 is a sequence flow diagram illustrating the operation of the system 120 for forecasting resource requirements in the network 105, according to one or more embodiments of the present disclosure.
[0059] In an example, the system 120 includes an infrastructure manager 305 that is a central component of the system 120. The infrastructure manager 305 interacts with a Graphical User Interface (GUI)/dashboard on the southbound interface and the AM 215 on the northbound interface via a Hypertext Transfer Protocol (HTTP) interface. The infrastructure manager 305 allocates host Internet Protocol (IP) addresses to the AM 215 and manages the provisioning and scaling of the new instance of the plurality of NFs. The network instance is a virtual or logical representation of the network 105 within a physical network infrastructure. The network instance allows for the segmentation and isolation of one or more resources, configurations, and traffic, providing enhanced security, scalability, and flexibility.
[0060] In an example, the system 120 includes the AM 215, included in the first host 310 and the second host 315, that interacts with the NFs on the southbound interface. The AM 215 receives the network metrics, such as network traffic, CPU utilization, and memory usage, from the at least one of the plurality of NFs. The AM 215 is configured to transmit the network metrics to a metric ingestion layer 320. In an embodiment, the metric ingestion layer 320 is referred to as a broker topic, which is a messaging system that distributes the network metrics to other components of the system 120.
[0061] The AM 215 is responsible for collecting the network metrics from the at least one of the plurality of NFs. The AM 215 uses a variety of methods to collect the network metrics, such as polling, sampling, and event-based collection. The AM 215 is configured to transmit the collected network metrics to the broker in a format that is easy for the other components of the system 120 to understand.
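The polling variant of the collection described above can be sketched as follows; a minimal illustration only, where the NF names, the callable-per-NF shape, and the use of JSON as the "easy to understand" broker format are assumptions:

```python
import json
import time

def poll_metrics(network_functions, publish):
    """Poll each NF's metric source and publish records in a uniform shape.

    `network_functions` maps an NF name to a zero-argument callable returning
    that NF's raw metrics dict; `publish` stands in for the broker client.
    """
    for nf_name, read_metrics in network_functions.items():
        record = {
            "nf": nf_name,
            "timestamp": time.time(),
            "metrics": read_metrics(),
        }
        publish(json.dumps(record))  # uniform JSON format for the broker

# Toy NFs returning static metrics; a capture list stands in for the broker.
published = []
nfs = {
    "amf-1": lambda: {"cpu": 42.0, "memory": 55.0},
    "smf-1": lambda: {"cpu": 61.0, "memory": 48.0},
}
poll_metrics(nfs, published.append)
print([json.loads(m)["nf"] for m in published])  # -> ['amf-1', 'smf-1']
```

Sampling or event-based collection would differ only in when `poll_metrics` is invoked, not in the record shape it publishes.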
[0062] Further, the infrastructure manager 305 is responsible for managing the provisioning and scaling of the new instance of the plurality of NFs. The infrastructure manager 305 is configured to receive one or more requests from the AM 215 for new network instances. The infrastructure manager 305 is configured to allocate the host IPs to the AM 215 and provision the network instance. The infrastructure manager 305 is also configured to monitor the performance of the network instance and scale the network instance up or down as needed.
[0063] The infrastructure manager 305 interacts with the GUI/dashboard on the southbound interface, which allows a user to view and manage the system 120. In an embodiment, the user includes at least one of, a network operator. The infrastructure manager 305 also interacts with the AM 215 on the northbound interface, which allows the AM 215 to communicate with the other components of the system 120.
[0064] In an example, the system 120 further includes the metric ingestion layer 320 that consumes the network metrics from the broker topics and creates a Comma-Separated Values (CSV) file for the same. The CSV file is processed by an infrastructure enrichment layer 325. The metric ingestion layer 320 is responsible for consuming the network metrics from the broker topics, which are the channels through which the AM 215 transmits the network metrics to the other components of the system 120. The metric ingestion layer 320 creates the CSV file for the network metrics, which is easy for the infrastructure enrichment layer 325 to process. The metric ingestion layer 320 also performs some data cleansing, such as removing duplicate records and correcting typos, which ensures that the network metrics sent to the infrastructure enrichment layer 325 are accurate and consistent.
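A minimal sketch of the CSV creation with duplicate-record cleansing described above; the column names and record shape are illustrative assumptions, not part of the disclosure:

```python
import csv
import io

def metrics_to_csv(records):
    """Write metric records to CSV text, dropping exact duplicate rows."""
    seen = set()
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["nf", "metric", "value"])
    for rec in records:
        row = (rec["nf"], rec["metric"], rec["value"])
        if row in seen:
            continue  # data cleansing: skip duplicate records
        seen.add(row)
        writer.writerow(row)
    return buf.getvalue()

records = [
    {"nf": "amf-1", "metric": "cpu", "value": 42.0},
    {"nf": "amf-1", "metric": "cpu", "value": 42.0},  # duplicate, dropped
    {"nf": "smf-1", "metric": "memory", "value": 55.0},
]
print(metrics_to_csv(records))  # header plus two unique rows
```

In a deployment, the resulting text would be written to a file for the enrichment layer to pull rather than printed.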
[0065] Upon transmitting the network metrics from the metric ingestion layer 320, the infrastructure enrichment layer 325 is configured to fetch/pull the CSV files created by the metric ingestion layer 320. The CSV files are pushed to an infrastructure normalizer 330 for processing. The infrastructure enrichment layer 325 is responsible for enriching the network metrics that are received from the metric ingestion layer 320. The infrastructure enrichment layer 325 adds additional information to the network metrics, such as timestamps and metadata. The infrastructure enrichment layer 325 also performs some basic data analysis on the data, such as identifying trends and anomalies. The information is used by the other components of the system 120 to make better decisions about network resource provisioning and scaling of the new instance of the plurality of NFs.
[0066] Upon receiving the CSV files from the infrastructure enrichment layer 325, the infrastructure normalizer 330, which is a data normalization platform, intelligently processes the network metrics, filters them, and stores the reduced network metrics into the distributed data lake 230. The infrastructure normalizer 330 is responsible for normalizing the network metrics that are received from the infrastructure enrichment layer 325. The infrastructure normalizer 330 is configured to convert the data into a standard format and remove any outliers or anomalies. The normalized data is then stored in the distributed data lake 230, which is a repository for storing large amounts of data. The infrastructure normalizer 330 also performs some basic data mining on the data, such as identifying patterns and correlations. The information is used by the other components of the system 120 to make better decisions about network resource provisioning and scaling of the new instance of the plurality of NFs.
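One common way to realize the "standard format plus outlier removal" step is a z-score filter followed by min-max rescaling; this sketch, including the z-cutoff value, is an assumption for illustration and not the disclosed implementation:

```python
import statistics

def normalize(values, z_cutoff=2.0):
    """Drop outliers beyond `z_cutoff` standard deviations from the mean,
    then rescale the remaining values to the [0, 1] range (min-max)."""
    mean = statistics.mean(values)
    stdev = statistics.pstdev(values)
    kept = [v for v in values
            if stdev == 0 or abs(v - mean) / stdev <= z_cutoff]
    lo, hi = min(kept), max(kept)
    span = hi - lo or 1.0  # avoid division by zero on constant data
    return [(v - lo) / span for v in kept]

raw = [40.0, 42.0, 41.0, 43.0, 500.0]  # 500.0 is a spurious outlier
print(normalize(raw, z_cutoff=1.5))  # outlier filtered before rescaling
```

With the outlier removed, the remaining readings spread cleanly across [0, 1] instead of being crushed toward zero by the spike.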
[0067] Upon processing the network metrics from the infrastructure normalizer 330, the AI/ML model 335 runs on the network metrics to find any anomalies or trigger forecasting for the network metrics. The AI/ML model 335 is configured to transmit the results to the forecasting engine 225, a reporting & alarm engine 340, and an anomaly detection engine 345. The AI/ML model 335 is responsible for running AI/ML techniques on the network metrics that are stored in the distributed data lake 230. The AI/ML techniques are used to identify any anomalies in the network metrics or to forecast future trends. The results of the AI/ML techniques are sent to the forecasting engine 225, a reporting & alarm engine 340, and an anomaly detection engine 345.
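The disclosure does not specify the AI/ML technique; as one hedged stand-in, a rolling-mean model can both forecast the next value and flag anomalies, illustrating the dual role described above (the window size, tolerance, and traffic series are assumptions):

```python
def forecast_and_flag(series, window=3, tolerance=0.25):
    """Rolling-mean forecast of the next value, flagging points whose value
    deviates from their local forecast by more than `tolerance` (fractional
    error). Returns (next_forecast, anomalous_indices)."""
    anomalies = []
    for i in range(window, len(series)):
        predicted = sum(series[i - window:i]) / window
        if predicted and abs(series[i] - predicted) / predicted > tolerance:
            anomalies.append(i)
    next_forecast = sum(series[-window:]) / window
    return next_forecast, anomalies

traffic = [100, 102, 98, 101, 180, 99, 100]  # index 4 is an anomalous burst
forecast, flagged = forecast_and_flag(traffic)
print(flagged)  # -> [4]
```

The forecast would feed the forecasting engine, while the flagged indices would feed the anomaly detection engine.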
[0068] Upon receiving the results of the AI/ML techniques, the system 120 further includes the forecasting engine 225 that receives a request from the AI/ML model 335 to take pre-emptive action using the defined threshold. The forecasting engine 225 has the capability to trigger network expansion based on data trends from the AI/ML techniques. The forecasting engine 225 is responsible for taking the pre-emptive action based on the results of the AI/ML techniques. The forecasting engine 225 can take one or more actions such as provisioning new network instances or scaling up existing network instances. The forecasting engine 225 uses the network metrics from the AI/ML model 335 to forecast future demand for network resources. The forecasting engine 225 compares the forecast to the current demand and takes one or more actions if the forecast indicates that the demand will exceed the current capacity.
[0069] Further, the system 120 includes the reporting and alarm engine 340 that receives the request from the AI/ML model 335 to generate alarms based on the defined threshold. The reporting & alarm engine 340 has the capability to trigger network expansion in closed-loop automation. The reporting & alarm engine 340 is responsible for generating the alarm based on the results of the AI/ML techniques. The reporting & alarm engine 340 is configured to generate alarms such as "network congestion" or "network outage." The reporting & alarm engine 340 also sends reports to the user about the system's performance. The reports include information such as the current demand for the one or more resources, the forecast for future demand, and the actions that have been taken by the forecasting engine 225.
[0070] Further, the system 120 includes the anomaly detection engine 345 that receives an anomaly request from the AI/ML model 335 to take action in a closed loop. The anomaly detection engine 345 has the capability to trigger network expansion in closed-loop automation. The anomaly detection engine 345 is responsible for detecting anomalies in the network metrics. The anomaly detection engine 345 can detect anomalies such as "spikes in network traffic" or "sudden drops in CPU utilization." The anomaly detection engine 345 sends reports to the user about the anomalies that have been detected. The reports include information such as the type of anomaly, the time at which it occurred, and the impact of the anomaly on the system 120.
[0071] FIG. 4 is a flow diagram illustrating a method for forecasting resource requirements in the network, according to one or more embodiments of the present disclosure. For the purpose of description, the method 400 is described with the embodiments as illustrated in FIG. 2 and should nowhere be construed as limiting the scope of the present disclosure.
[0072] At step 405, the method 400 includes the step of collecting the data pertaining to the network metrics from the at least one of the plurality of NFs in the network 105 by the AM 215. The AM 215 interacts with the plurality of NFs on the southbound interface. In an embodiment, the data pertaining to the network metrics includes at least one of, the CPU utilization, the memory usage, and the network traffic. The AM 215 is configured to transmit the collected data pertaining to the network metrics from at least one of the plurality of NFs to the identification unit 220.
[0073] At step 410, the method 400 includes the step of identifying utilizing the trained model, the relationship and patterns between the collected network metrics and the one or more resources utilized in the network 105. In an embodiment, the trained model is at least one of an Artificial Intelligence/Machine Learning (AI/ML) model. The AI/ML model is configured to run on data to determine anomalies or trigger forecasting for the network metrics. The AI/ML model is responsible for running ML techniques on the data that is stored in the distributed data lake 230. The AI/ML model is used to identify any anomalies in the data or to forecast future trends.
[0074] The trained model is trained utilizing at least one of, historical data and historical patterns pertaining to the network metrics to identify and learn trends and patterns within the network 105. In an embodiment, the trained model learns the trends/the patterns related to the network metrics. The trained model is configured for predicting and optimizing the allocation of network resources based on the learned trends. In an embodiment, the one or more resources managed by the trained model includes at least one of, CPU resources, memory resources, and a network bandwidth.
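As an illustrative sketch of training on historical data to learn a trend (the least-squares fit, the CPU series, and the function names are assumptions; the disclosure does not limit the model to linear regression):

```python
def fit_linear_trend(history):
    """Ordinary least-squares fit of y = a + b*t over t = 0..n-1."""
    n = len(history)
    t_mean = (n - 1) / 2
    y_mean = sum(history) / n
    num = sum((t - t_mean) * (y - y_mean) for t, y in enumerate(history))
    den = sum((t - t_mean) ** 2 for t in range(n))
    b = num / den           # learned slope (trend)
    a = y_mean - b * t_mean  # learned intercept
    return a, b

def predict_next(history):
    """Extrapolate the learned trend one step ahead."""
    a, b = fit_linear_trend(history)
    return a + b * len(history)

cpu_history = [50.0, 52.0, 54.0, 56.0, 58.0]  # steadily rising utilization
print(predict_next(cpu_history))  # -> 60.0
```

A predicted 60.0% utilization next interval is the kind of learned-trend output the model would hand to the forecasting engine for resource allocation.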
[0075] At step 415, the method 400 includes the step of predicting future requirements of the one or more resources based on the identified relationship and patterns utilizing the trained model by the forecasting engine 225. The forecasting engine 225 is configured to define one or more thresholds pertaining to the network metrics in order to predict one or more resource requirements. The one or more thresholds are utilized to predict future resource requirements based on the identified relationship and the patterns between the collected network metrics and the one or more resources utilized in the network 105. By implementing the one or more thresholds and continuously monitoring the network metrics, the method 400 ensures that the network 105 operates efficiently and predicts future requirements of the one or more resources utilizing the trained model, thus improving the processing speed of the processor 205, avoiding network congestion, and reducing the requirement of memory space.
[0076] FIG. 5 is a flow diagram illustrating the method 500 for identifying a relationship and patterns between the collected network metrics and the one or more resources utilized in the network 105, according to one or more embodiments of the present disclosure.
[0077] At step 505, the method 410 includes the step of performing, at least one of, the trend analysis and the pattern analysis to identify the relationship and the patterns between the collected network metrics and the one or more resources by the one or more processors 205. The trend analysis helps in predicting the future behavior of the network metrics, which can be useful for optimizing resource allocation and ensuring efficient network performance. Consider, for example, that by analyzing historical CPU usage data, the trend analysis reveals that CPU usage spikes every Monday morning as employees start their workweek. Knowing this pattern, the identification unit 220 is configured to proactively allocate one or more CPU resources during this time to handle the increased load.
[0078] In an embodiment, the pattern analysis pertains to analyzing the hysteresis pattern of the network metrics utilizing the trained model. The results of the AI/ML model are sent to the forecasting engine 225. By performing the pattern analysis on network traffic data, the identification unit 220 identifies that a specific combination of user activities (e.g., file downloads, video streaming, and cloud-based application usage) consistently leads to network congestion during certain periods. The pattern reveals that when the video streaming activity increases beyond defined thresholds, it significantly impacts the performance of other services like file downloads and cloud-based applications.
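Hysteresis here can be read as a two-threshold latch: congestion is declared above an upper threshold and cleared only below a lower one. A minimal sketch, in which the threshold values and the streaming-load series are assumptions for illustration:

```python
def hysteresis_states(values, upper, lower):
    """Classify each sample as congested (True) or clear (False) with
    hysteresis: congestion latches on above `upper` and releases only once
    the value drops below `lower`, avoiding flapping around one threshold."""
    congested = False
    states = []
    for v in values:
        if not congested and v > upper:
            congested = True
        elif congested and v < lower:
            congested = False
        states.append(congested)
    return states

streaming_load = [40, 85, 75, 72, 55, 40]
print(hysteresis_states(streaming_load, upper=80, lower=60))
# -> [False, True, True, True, False, False]
```

Note that the samples at 75 and 72 stay classified as congested even though they are below the upper threshold: that latched behavior is the hysteresis pattern.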
[0079] FIG. 6 is a flow diagram illustrating the method 600 for predicting future requirements of the one or more resources based on the identified relationship and patterns, according to one or more embodiments of the present disclosure.
[0080] At step 605, the method 415 includes the step of defining one or more thresholds pertaining to the network metrics in order to predict one or more resource requirements by the forecasting engine 225. The one or more thresholds are utilized to predict future resource requirements based on the identified relationship and the patterns between the collected network metrics and the one or more resources utilized in the network 105.
[0081] At step 610, the method 415 includes the step of determining whether the network metrics are within the defined thresholds or exceed the defined thresholds by the forecasting engine 225. In an embodiment, if the network metrics are within the defined thresholds, the forecasting engine 225 maintains the current resource allocation and continues to monitor the network metrics. If the network metrics exceed the defined thresholds, the forecasting engine 225 triggers alerts and predicts the need for additional resources.
[0082] At step 615, the method 415 includes the step of determining the deviation in at least one of the network metrics related to exceeding the defined threshold by the forecasting engine 225. Future requirements of the one or more resources in the network 105 are inferred based on the comparison of the network metrics with the defined thresholds.
[0083] At step 620, the method 415 includes the step of triggering one or more actions based on the determined future requirements of the one or more resources in the network 105 by the forecasting engine 225. In an embodiment, the one or more actions includes at least one of, scaling/adding the new instance of the plurality of NFs to the network 105. In another embodiment, the one or more actions involves provisioning more network bandwidth, increasing CPU or memory resources, optimizing storage, or making other necessary network adjustments to prevent performance issues.
[0084] The present invention discloses a non-transitory computer-readable medium having stored thereon computer-readable instructions. The computer-readable instructions are executed by a processor 205. The processor 205 is configured to collect data pertaining to a network metrics from at least one of a plurality of network functions in the network 105. The processor 205 is configured to identify utilizing a trained model, a relationship and patterns between the collected network metrics and one or more resources utilized in the network 105. The processor 205 is configured to predict, utilizing the trained model, future requirements of the one or more resources based on the identified relationship and patterns.
[0085] A person of ordinary skill in the art will readily ascertain that the illustrated embodiments and steps in description and drawings (FIG.1-6) are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope and spirit of the disclosed embodiments.
[0086] The present disclosure incorporates technical advancement for predicting future requirements of the one or more resources based on the identified relationship and patterns utilizing the trained model. The trained model learns the trends/patterns related to the network metrics. The present invention implements the one or more thresholds and continuously monitors the network metrics, which ensures that the network operates efficiently and predicts future requirements of the one or more resources utilizing the trained model, thus improving the processing speed of the processor, avoiding network congestion, and reducing the requirement of memory space.
[0087] The disclosed system and method for forecasting resource requirements in the network offer several notable advantages, including:
[0088] Improved accuracy in forecasting network resource requirements: The system uses AI/ML techniques to learn from the historical data and identify the patterns that can be used to predict future demand. This allows the system to make more accurate predictions than traditional forecasting methods, which rely on historical data alone.
[0089] Reduced resource underutilization and overprovisioning: The system can help to prevent resource underutilization and overprovisioning. This can lead to improved performance, reduced costs, and increased scalability.
[0090] Improved performance and cost savings: The system can help to improve the performance of network resources by ensuring that the right amount of resources is provisioned. This can lead to reduced costs, as organizations will not be overprovisioning resources.
[0091] Increased scalability and flexibility: The system is scalable and can be used in a variety of environments. This makes it a valuable tool for organizations that need to forecast future demand for the network resources.
[0092] The present invention offers multiple advantages over the prior art and the above listed are a few examples to emphasize on some of the advantageous features. The listed advantages are to be read in a non-limiting manner.

REFERENCE NUMERALS
[0093] Environment – 100;
[0094] Network – 105;
[0095] UE- 110;
[0096] Server – 115;
[0097] System – 120;
[0098] Processor -205;
[0099] Memory – 210;
[00100] Agent Manager– 215;
[00101] Identification unit– 220;
[00102] Forecasting engine– 225;
[00103] Distributed data lake– 230;
[00104] Infrastructure manager – 305;
[00105] First host-310;
[00106] Second host-315;
[00107] Metric ingestion layer-320;
[00108] Infrastructure enrichment layer-325;
[00109] Infrastructure normalizer-330;
[00110] AI/ML model-335;
[00111] Reporting and alarm engine- 340;
[00112] Anomaly detection engine-345.

CLAIMS
We Claim:
1. A method (400) for forecasting resource requirements in a network (105), the method (400) comprising the steps of:
collecting (405), by one or more processors (205), data pertaining to a network metrics from at least one of a plurality of network functions in the network (105);
identifying (410), by the one or more processors (205), utilizing a trained model, a relationship and patterns between the collected network metrics and one or more resources utilized in the network (105); and
predicting (415), by the one or more processors (205), utilizing the trained model, future requirements of the one or more resources based on the identified relationship and patterns.

2. The method (400) as claimed in claim 1, wherein the data pertaining to the network metrics includes at least one of, a Central Processing Unit (CPU) utilization, a memory usage, and a network traffic.

3. The method (400) as claimed in claim 1, wherein the trained model is at least one of, an Artificial Intelligence/Machine Learning (AI/ML) model.

4. The method (400) as claimed in claim 1, wherein the trained model is trained utilizing at least one of, historical data and historical patterns pertaining to the network metrics.

5. The method (400) as claimed in claim 1, wherein the trained model learns trends/patterns related to the network metrics.

6. The method (400) as claimed in claim 1, wherein the one or more resources includes, at least one of, CPU resources, memory resources, and a network bandwidth.

7. The method (400) as claimed in claim 1, wherein the step of identifying (410), utilizing the trained model, relationship and patterns between the collected network metrics and the one or more resources, includes the steps of:
performing (505), by the one or more processors (205), at least one of, a trend analysis and a pattern analysis to identify the relationship and the patterns between the collected network metrics and the one or more resources.

8. The method (400) as claimed in claim 7, wherein the pattern analysis pertains to analyzing the hysteresis pattern of the network metrics utilizing the trained model.

9. The method (400) as claimed in claim 1, wherein the step of predicting (415), utilizing the trained model, future requirements of the one or more resources based on the identified relationship and patterns, includes the steps of:
defining (605), by the one or more processors (205), one or more thresholds pertaining to the network metrics in order to predict the one or more resource requirements based on the identified relationship and the patterns between the collected network metrics and the one or more resources utilized in the network (105);
determining (610), by the one or more processors (205), whether the network metrics are within the defined thresholds or exceed the defined thresholds; and
in response to determining (615), by the one or more processors (205), a deviation in at least one of, the network metrics related to exceeding the defined thresholds based on the comparison of the network metrics with the defined thresholds, inferring by the one or more processors (205), future requirements of the one or more resources in the network (105).

10. The method (400) as claimed in claim 1, wherein the step of predicting (415), utilizing the trained model, future requirements of the one or more resources based on the identified relationship and patterns, further includes the step of:
triggering (620), by the one or more processors (205), one or more actions based on the determined future requirements of the one or more resources in the network (105).

11. The method (400) as claimed in claim 10, wherein the one or more actions includes at least one of, scaling/adding new resources to the network (105).

12. A system (120) for forecasting resource requirements in a network (105), the system (120) comprises:
an agent manager (215), configured to, collect, data pertaining to a network metrics from at least one of a plurality of network functions in the network (105);
an identification unit (220), configured to, identify, utilizing a trained model, a relationship and patterns between the collected network metrics and one or more resources utilized in the network (105); and
a forecasting engine (225), configured to, predict, utilizing the trained model, future requirements of the one or more resources based on the identified relationship and patterns.

13. The system (120) as claimed in claim 12, wherein the data pertaining to the network metrics includes at least one of, a Central Processing Unit (CPU) utilization, a memory usage, and a network traffic.

14. The system (120) as claimed in claim 12, wherein the trained model is at least one of, an Artificial Intelligence/Machine Learning (AI/ML) model.

15. The system (120) as claimed in claim 12, wherein the trained model is trained utilizing at least one of, historical data and historical patterns pertaining to the network metrics.

16. The system (120) as claimed in claim 12, wherein the trained model learns trends/patterns related to the network metrics.

17. The system (120) as claimed in claim 12, wherein the one or more resources includes, at least one of, CPU resources, memory resources, and a network bandwidth.

18. The system (120) as claimed in claim 12, wherein the identification unit (220) identifies, utilizing the trained model, the relationship and the patterns between the collected network metrics and the one or more resources, by:
performing, at least one of, a trend analysis and a pattern analysis to identify the relationship and the patterns between the collected network metrics and the one or more resources.

19. The system (120) as claimed in claim 18, wherein the pattern analysis pertains to analyzing the hysteresis pattern of the network metrics utilizing the trained model.

20. The system (120) as claimed in claim 12, wherein the forecasting engine (225) predicts, utilizing the trained model, future requirements of the one or more resources based on the identified relationship and the patterns, by:
defining, one or more thresholds pertaining to the network metrics in order to predict one or more resource requirements based on the identified relationship and patterns between the collected network metrics and the one or more resources utilized in the network (105);
determining, whether the network metrics are within the defined thresholds or exceed the defined thresholds; and
in response to determining, a deviation in at least one of, the network metrics related to exceeding the defined threshold based on the comparison of the network metrics with the defined thresholds, inferring, future requirements of the one or more resources in the network (105).

21. The system (120) as claimed in claim 12, wherein the forecasting engine (225), predicting utilizing the trained model the future requirements of the one or more resources based on the identified relationship and patterns, is further configured to:
trigger, one or more actions based on the determined future requirements of the one or more resources in the network (105).

22. The system (120) as claimed in claim 21, wherein the one or more actions includes at least one of, scaling/adding new resources to the network (105).
