Abstract: METHOD AND SYSTEM FOR PREDICTING ONE OR MORE EVENTS. The present disclosure relates to a system (108) and a method (500) for predicting one or more events. The system (108) includes a transceiver (210) to receive a request from one or more sources (224) to predict the one or more events. The system (108) includes a selecting unit (212) to select one or more features based on the received request. The system (108) includes a configuring unit (214) to configure one or more hyperparameters. The system (108) further includes a fetching unit (218) to fetch a trained model from a storage unit (216). The system (108) includes a feeding unit (220) to feed the trained model with at least one of the received performance data, the selected one or more features and the configured one or more hyperparameters. The system (108) includes a predicting unit (222) to predict, utilizing the trained model, the one or more events. Ref. Fig. 2
FORM 2
THE PATENTS ACT, 1970
(39 of 1970)
&
THE PATENTS RULES, 2003
COMPLETE SPECIFICATION
(See section 10 and rule 13)
1. TITLE OF THE INVENTION
METHOD AND SYSTEM FOR PREDICTING ONE OR MORE EVENTS
2. APPLICANT(S)
NAME NATIONALITY ADDRESS
JIO PLATFORMS LIMITED INDIAN OFFICE-101, SAFFRON, NR. CENTRE POINT, PANCHWATI 5 RASTA, AMBAWADI, AHMEDABAD 380006, GUJARAT, INDIA
3. PREAMBLE TO THE DESCRIPTION
THE FOLLOWING SPECIFICATION PARTICULARLY DESCRIBES THE NATURE OF THIS INVENTION AND THE MANNER IN WHICH IT IS TO BE PERFORMED.
FIELD OF THE INVENTION
[0001] The present invention relates to network management, and more particularly, to a method and a system for predicting one or more events.
BACKGROUND OF THE INVENTION
[0002] With the increase in the number of users, network service provisions have to be upgraded to accommodate the additional users and to enhance service quality so as to keep pace with such high demand. Many factors need to be taken care of when considering the quality of a network. To maintain the health of a network, various parameters have to be monitored regularly, such as the performance of network elements and network functions. Network functions play a vital role in improving the quality of a network by managing traffic, delegating node allocation, managing the performance of routing devices, and so on. The network functions in a network generate an immense amount of performance data, including Key Performance Indicators (KPIs) and counters that are computed to provide valuable information regarding system status, health, performance and security. The Network Function (NF) performance data includes computed values of various KPIs and counters for each NF. Without predicted performance KPI data for the near future, forthcoming issues such as network congestion, equipment failures, and the like are identified only after the end user complains about ineffective network service, and the resolution is further time-consuming and tedious. An Integration Performance Management (IPM) module, which is responsible for real-time KPI computation, consistently computes various KPI and counter values on a periodic or on-demand basis. The IPM may require an on-the-fly prediction of a KPI value for the purpose of upgrading network monitoring and management policies in order to keep up with enhanced technology.
[0003] There is a need to introduce a system, and a method thereof, for quickly and accurately predicting KPI data for future periods, which may provide the IPM with the required data sooner and aid in addressing network issues proactively.
SUMMARY OF THE INVENTION
[0004] One or more embodiments of the present disclosure provide a method and system for predicting one or more events.
[0005] In one aspect of the present invention, the system for predicting the one or more events is disclosed. The system includes a transceiver configured to receive a request from one or more sources. The system further includes a selecting unit configured to select one or more features based on the received request. The system further includes a fetching unit configured to fetch a trained model from a storage unit, based on the received request. The system further includes a feeding unit configured to feed the trained model with at least one of the received performance data, the selected one or more features and the configured one or more hyperparameters. The system further includes a predicting unit configured to predict, utilizing the trained model, the one or more events.
[0006] In an embodiment, the request includes at least one of performance data and details of a trained model to be utilized. In an embodiment, the performance data is at least one of network function performance data, which includes at least one of Key Performance Indicators (KPIs) and counters.
[0007] In an embodiment, the request is at least one of an Application Programming Interface (API) request.
[0008] In an embodiment, the one or more sources include at least one of an Integration Performance Management (IPM) module.
[0009] In an embodiment, the one or more features pertain to network KPIs and counters of network function performance data. In an embodiment, the system comprises a configuring unit (214) configured to configure one or more hyperparameters.
[0010] In an embodiment, the fetching unit fetches the trained model from a storage unit by extracting a name of the trained model to be utilized based on the request, and fetching the trained model from the storage unit based on the extracted name of the trained model.
[0011] In an embodiment, predicting the one or more events pertains to predicting future performance data including at least one of future KPIs and counters.
[0012] In an embodiment, the trained model is pre-trained and stored in the storage unit.
[0013] In an embodiment, the transceiver is further configured to transmit a response pertaining to the predicted one or more events to the one or more sources.
[0014] In another aspect of the present invention, the method for predicting the one or more events is disclosed. The method includes the step of receiving a request from one or more sources. The method further includes the step of selecting one or more features based on the received request. The method further includes the step of fetching a trained model from a storage unit, based on the received request. The method further includes the step of feeding the trained model with at least one of the received performance data, the selected one or more features and the configured one or more hyperparameters. The method further includes the step of predicting, utilizing the trained model, the one or more events.
[0015] In another aspect of the invention, a non-transitory computer-readable medium having stored thereon computer-readable instructions is disclosed. The computer-readable instructions are executed by a processor. The processor is configured to receive a request from one or more sources. The processor is configured to select one or more features based on the received request. The processor is configured to fetch a trained model from a storage unit, based on the received request. The processor is configured to feed the trained model with at least one of the received performance data, the selected one or more features and the configured one or more hyperparameters. The processor is configured to predict, utilizing the trained model, the one or more events.
[0016] In another aspect of the invention, a User Equipment (UE) is disclosed. The UE includes one or more primary processors communicatively coupled to one or more processors, the one or more primary processors being coupled with a memory. The one or more primary processors cause the UE to transmit a plurality of dashboards to the one or more processors.
[0017] Other features and aspects of this invention will be apparent from the following description and the accompanying drawings. The features and advantages described in this summary and in the following detailed description are not all-inclusive, and particularly, many additional features and advantages will be apparent to one of ordinary skill in the relevant art, in view of the drawings, specification, and claims hereof. Moreover, it should be noted that the language used in the specification has been principally selected for readability and instructional purposes and may not have been selected to delineate or circumscribe the inventive subject matter, resort to the claims being necessary to determine such inventive subject matter.
BRIEF DESCRIPTION OF THE DRAWINGS
[0018] The accompanying drawings, which are incorporated herein, and constitute a part of this disclosure, illustrate exemplary embodiments of the disclosed methods and systems in which like reference numerals refer to the same parts throughout the different drawings. Components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present disclosure. Some drawings may indicate the components using block diagrams and may not represent the internal circuitry of each component. It will be appreciated by those skilled in the art that disclosure of such drawings includes disclosure of electrical components, electronic components or circuitry commonly used to implement such components.
[0019] FIG. 1 is an exemplary block diagram of an environment for predicting one or more events, according to one or more embodiments of the present invention;
[0020] FIG. 2 is an exemplary block diagram of a system for predicting the one or more events, according to one or more embodiments of the present invention;
[0021] FIG. 3 is an exemplary block diagram of an architecture implemented in the system of the FIG. 2, according to one or more embodiments of the present invention;
[0022] FIG. 4 is a flow diagram for predicting the one or more events, according to one or more embodiments of the present invention; and
[0023] FIG. 5 is a schematic representation of a method of predicting the one or more events, according to one or more embodiments of the present invention.
[0024] The foregoing shall be more apparent from the following detailed description of the invention.
DETAILED DESCRIPTION OF THE INVENTION
[0025] Some embodiments of the present disclosure, illustrating all its features, will now be discussed in detail. It must also be noted that as used herein and in the appended claims, the singular forms "a", "an" and "the" include plural references unless the context clearly dictates otherwise.
[0026] Various modifications to the embodiments will be readily apparent to those skilled in the art, and the generic principles herein may be applied to other embodiments. However, one of ordinary skill in the art will readily recognize that the present disclosure, including the definitions listed herein below, is not intended to be limited to the embodiments illustrated, but is to be accorded the widest scope consistent with the principles and features described herein.
[0027] A person of ordinary skill in the art will readily ascertain that the illustrated steps detailed in the figures and here below are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope and spirit of the disclosed embodiments.
[0028] The present invention introduces a system interacting with the Integration Performance Management (IPM) by means of an interface. The system is configured to leverage API-based integration of already trained Artificial Intelligence/Machine Learning (AI/ML) based prediction models with real-time Network Function (NF) performance data, which facilitates on-the-fly prediction and analysis of KPI values for a future period, thus proactively addressing anticipated network issues. A unique aspect of the present invention is the Application Programming Interface (API) based integration with the IPM, which allows the use of readily trained AI/ML models to obtain on-demand predictions of future NF performance data.
[0029] FIG. 1 illustrates an exemplary block diagram of an environment 100 for predicting one or more events, according to one or more embodiments of the present disclosure. In this regard, the environment 100 includes a User Equipment (UE) 102, a server 104, a network 106 and a system 108 communicably coupled to each other for predicting the one or more events.
[0030] In an embodiment, the one or more events refer to predicted occurrences or outcomes related to network performance. The one or more events include, but are not limited to, performance fluctuations, network anomalies, capacity overloads, service-level breaches, and predictive maintenance events.
[0031] As per the illustrated embodiment and for the purpose of description and illustration, the UE 102 includes, but is not limited to, a first UE 102a, a second UE 102b, and a third UE 102c, which should nowhere be construed as limiting the scope of the present disclosure. In alternate embodiments, the UE 102 may include a plurality of UEs as per the requirement. For ease of reference, each of the first UE 102a, the second UE 102b, and the third UE 102c will hereinafter be collectively and individually referred to as the “User Equipment (UE) 102”.
[0032] In an embodiment, the UE 102 is, but is not limited to, any electrical, electronic, or electro-mechanical equipment, or a combination of one or more such devices, such as a smartphone, virtual reality (VR) devices, augmented reality (AR) devices, a laptop, a general-purpose computer, a desktop, a personal digital assistant, a tablet computer, a mainframe computer, or any other computing device.
[0033] The environment 100 includes the server 104 accessible via the network 106. The server 104 may include, by way of example but not limitation, one or more of a standalone server, a server blade, a server rack, a bank of servers, a server farm, hardware supporting a part of a cloud service or system, a home server, hardware running a virtualized server, one or more processors executing code to function as a server, one or more machines performing server-side functionality as described herein, at least a portion of any of the above, or some combination thereof. In an embodiment, the server 104 is associated with an entity that may include, but is not limited to, a vendor, a network operator, a company, an organization, a university, a lab facility, a business enterprise, a defense facility, or any other facility that provides a service.
[0034] The network 106 includes, by way of example but not limitation, one or more of a wireless network, a wired network, an internet, an intranet, a public network, a private network, a packet-switched network, a circuit-switched network, an ad hoc network, an infrastructure network, a Public-Switched Telephone Network (PSTN), a cable network, a cellular network, a satellite network, a fiber optic network, or some combination thereof. The network 106 may include, but is not limited to, a Third Generation (3G), a Fourth Generation (4G), a Fifth Generation (5G), a Sixth Generation (6G), a New Radio (NR), a Narrow Band Internet of Things (NB-IoT), an Open Radio Access Network (O-RAN), and the like.
[0035] The network 106 may also include, by way of example but not limitation, at least a portion of one or more networks having one or more nodes that transmit, receive, forward, generate, buffer, store, route, switch, process, or a combination thereof, one or more messages, packets, signals, waves, voltage or current levels, some combination thereof, or so forth, and may further include a Voice over Internet Protocol (VoIP) network.
[0036] The environment 100 further includes the system 108 communicably coupled to the server 104 and the UE 102 via the network 106. The system 108 is configured to predict the one or more events. As per one or more embodiments, the system 108 is adapted to be embedded within the server 104 or deployed as an individual entity.
[0037] Operational and construction features of the system 108 will be explained in detail with respect to the following figures.
[0038] FIG. 2 is an exemplary block diagram of the system 108 for predicting the one or more events, according to one or more embodiments of the present invention.
[0039] As per the illustrated embodiment, the system 108 includes one or more processors 202, a memory 204, a user interface 206, and a database 208. In an embodiment, the system 108 is communicably coupled with one or more sources 224.
[0040] For the purpose of description and explanation, the description will be explained with respect to one processor 202 and should nowhere be construed as limiting the scope of the present disclosure. In alternate embodiments, the system 108 may include more than one processor 202 as per the requirement of the network 106. The one or more processors 202, hereinafter referred to as the processor 202, may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, single board computers, and/or any devices that manipulate signals based on operational instructions.
[0041] As per the illustrated embodiment, the processor 202 is configured to fetch and execute computer-readable instructions stored in the memory 204. The memory 204 may be configured to store one or more computer-readable instructions or routines in a non-transitory computer-readable storage medium, which may be fetched and executed to create or share data packets over a network service. The memory 204 may include any non-transitory storage device including, for example, volatile memory such as RAM, or non-volatile memory such as disk memory, EPROMs, FLASH memory, unalterable memory, and the like.
[0042] In an embodiment, the user interface 206 includes a variety of interfaces, for example, interfaces for a graphical user interface, a web user interface, a Command Line Interface (CLI), and the like. The user interface 206 facilitates communication of the system 108. In one embodiment, the user interface 206 provides a communication pathway for one or more components of the system 108. Examples of such components include, but are not limited to, the UE 102 and the database 208.
[0043] The database 208 is one of, but not limited to, a centralized database, a cloud-based database, a commercial database, an open-source database, a distributed database, an end-user database, a graphical database, a No-Structured Query Language (NoSQL) database, an object-oriented database, a personal database, an in-memory database, a document-based database, a time series database, a wide column database, a key value database, a search database, a cache database, and so forth. The foregoing examples of database types are non-limiting and may not be mutually exclusive; e.g., a database can be both commercial and cloud-based, or both relational and open-source.
[0044] In order for the system 108 to predict the one or more events, the processor 202 includes one or more modules. In one embodiment, the one or more modules include, but are not limited to, a transceiver 210, a selecting unit 212, a configuring unit 214, a storage unit 216, a fetching unit 218, a feeding unit 220, and a predicting unit 222 communicably coupled to each other for predicting the one or more events.
[0045] In one embodiment, each of the one or more modules, the transceiver 210, the selecting unit 212, the configuring unit 214, the storage unit 216, the fetching unit 218, the feeding unit 220, and the predicting unit 222 can be used in combination or interchangeably for predicting the one or more events.
[0046] The transceiver 210, the selecting unit 212, the configuring unit 214, the storage unit 216, the fetching unit 218, the feeding unit 220, and the predicting unit 222 in an embodiment, may be implemented as a combination of hardware and programming (for example, programmable instructions) to implement one or more functionalities of the processor 202. In the examples described herein, such combinations of hardware and programming may be implemented in several different ways. For example, the programming for the processor 202 may be processor-executable instructions stored on a non-transitory machine-readable storage medium and the hardware for the processor may comprise a processing resource (for example, one or more processors), to execute such instructions. In the present examples, the memory 204 may store instructions that, when executed by the processing resource, implement the processor. In such examples, the system 108 may comprise the memory 204 storing the instructions and the processing resource to execute the instructions, or the memory 204 may be separate but accessible to the system 108 and the processing resource. In other examples, the processor 202 may be implemented by electronic circuitry.
[0047] In one embodiment, the transceiver 210 is configured to receive the request from the one or more sources 224. The request is at least one of an Application Programming Interface (API) request. The request includes at least one of performance data and details of a trained model to be utilized. The trained model refers to a machine learning model that has been developed and optimized to perform a task, such as predicting events or outcomes, based on previously learned data. The details of the trained model refer to information about the pre-trained model that is utilized to carry out the prediction.
[0048] The performance data refers to measurable metrics that provide insights into the operational state and efficiency of Network Functions (NFs). The performance data is at least one of network function performance data, which includes at least one of Key Performance Indicators (KPIs) and counters. The KPIs refer to quantifiable values that reflect how effectively the NFs are performing, for example, latency, throughput, packet loss and error rates. The counters refer to numerical values that count occurrences or activities within the network 106, such as the number of successful or failed connections, data packets transmitted or received, or resource usage.
[0049] The one or more sources 224 refer to the entities or components that initiate the request for event prediction. The one or more sources 224 include at least one of an Integration Performance Management (IPM) 302 module. The IPM 302 is responsible for monitoring and managing network performance. The roles of the IPM 302 include, but are not limited to, monitoring network performance, analyzing performance trends, triggering alerts and responses, and interfacing with predicting systems. The components of the IPM 302 include, but are not limited to, a data collection engine, an analytics engine, an alert and notification system, an integration interface, and a storage unit. The IPM 302 plays a central role in managing network performance by monitoring real-time data, analyzing it for trends, sending performance data to predictive systems, and triggering alerts when necessary. The IPM 302 ensures that the network operates optimally and helps the system 108 in predicting future performance events. In an embodiment, the IPM 302 and the system 108 communicate through an API medium. Further, the communication can be based on Hypertext Transfer Protocol (HTTP) requests which use JavaScript Object Notation (JSON)/Extensible Markup Language (XML) for carrying information.
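By way of a non-limiting illustration, a minimal sketch of such an HTTP/JSON API request follows; the endpoint URL and payload field names are assumptions for illustration, as the disclosure specifies only that the request carries performance data and details of the trained model.

```python
import requests  # third-party HTTP client

# Hypothetical payload schema for the IPM -> system API request.
payload = {
    "model_details": {"name": "kpi_forecasting_model"},
    "performance_data": {
        "nf_id": "AMF-01",
        "kpis": {"latency_ms": 42.0, "throughput_mbps": 880.5},
        "counters": {"connection_attempts": 200, "connection_successes": 196},
    },
    "prediction_horizon_minutes": 15,
}
# Hypothetical endpoint; the system would answer with the predicted values.
response = requests.post("https://prediction-system.example/api/v1/predict",
                         json=payload, timeout=30)
print(response.json())
```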
[0050] Upon receiving the request from the one or more sources 224, the selecting unit 212 is configured to select the one or more features based on the received request. The received request includes, but is not limited to, details of the trained model. The details of the trained model include, but are not limited to, names of the trained model. The one or more features pertain to KPIs and counters of network function performance data. The one or more features refer to data attributes or variables selected from the network function performance data that the system 108 uses to make predictions. The data attributes are the measurable variables in a dataset that are used to make predictions. The KPIs refer to metrics that measure the performance of NFs, such as latency, throughput, packet loss and jitter. The counters of network function performance data refer to numerical metrics that count occurrences of network activities, such as connection attempts, data packet counts and resource utilization. The one or more features include, but are not limited to, time-based data, geographical data, past performance metrics, previous failure or degradation incidents, user activity patterns, subscription plans, customer churn indicators, Quality of Service (QoS) policies, network slicing parameters, and virtualization resource usage. In an embodiment, the request includes predefined criteria or parameters that guide the feature selection. Thus, the selection process is automated and driven by the data within the request, such as specific KPIs, counters, or performance metrics. In particular, the selecting unit 212 parses the request, identifies the relevant parameters or model details, and automatically selects the features needed for the prediction. In another embodiment, the user is allowed to manually select the one or more features via the user interface 206.
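By way of a non-limiting illustration, the following sketch shows one way the selecting unit 212 might parse a request and select feature columns; the request key "features" is a hypothetical schema choice.

```python
import pandas as pd

def select_features(request: dict, performance_df: pd.DataFrame) -> pd.DataFrame:
    """Keep only the feature columns named (or implied) by the request."""
    requested = request.get("features")  # hypothetical request key
    if requested:
        # Automated selection driven by the data within the request.
        columns = [c for c in requested if c in performance_df.columns]
    else:
        # Default: use every available KPI/counter attribute.
        columns = list(performance_df.columns)
    return performance_df[columns]
```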
[0051] Upon selecting the one or more features, the configuring unit 214 configures one or more hyperparameters. The one or more hyperparameters refer to configuration settings used to control the behavior and performance of the machine learning model during the prediction process. The one or more hyperparameters are set before the training or prediction process and are not learned from the data, but are crucial in optimizing how the model works. The one or more hyperparameters include, but are not limited to, a learning rate, a number of layers or units, a batch size, and regularization parameters. In particular, based on the selected features and the nature of the request, the configuring unit 214 sets or adjusts the one or more hyperparameters for the machine learning model. The configuration of the one or more hyperparameters includes, but is not limited to, setting values such as the learning rate, batch size, or regularization parameters to best fit the prediction. The one or more hyperparameters are tuned to optimize how the machine learning model processes the data and generates predictions.
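By way of a non-limiting illustration, a sketch of hyperparameter configuration follows; the default values and request key are assumptions, while the parameter names (learning rate, batch size, layers, regularization) come from the paragraph above.

```python
# Hypothetical defaults; the named hyperparameters follow the paragraph above.
DEFAULT_HYPERPARAMETERS = {
    "learning_rate": 1e-3,
    "batch_size": 64,
    "num_layers": 2,
    "l2_regularization": 1e-4,
}

def configure_hyperparameters(request: dict) -> dict:
    """Start from defaults and apply any overrides carried in the request."""
    config = dict(DEFAULT_HYPERPARAMETERS)
    config.update(request.get("hyperparameters", {}))  # hypothetical key
    return config
```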
[0052] Upon configuring the one or more hyperparameters, the fetching unit 218 is configured to fetch the trained model from the storage unit 216 based on the received request. Further, the fetching unit 218 fetches the trained model from the storage unit 216 by extracting at least one of a name of the trained model to be utilized based on the request. For example, the name of the trained model is at least one of a network performance predictor, a KPI forecasting model, an anomaly detection model, a traffic load predictor, a resource utilization model, and a failure prediction model. Based on the extracted name of the trained model, the fetching unit 218 is configured to fetch the trained model from the storage unit 216.
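By way of a non-limiting illustration, a sketch of fetching a trained model by its extracted name follows, assuming the models are serialized with joblib under a hypothetical storage path.

```python
from pathlib import Path

import joblib  # common serialization for scikit-learn style models

MODEL_STORE = Path("/var/models")  # hypothetical storage-unit location

def fetch_trained_model(request: dict):
    """Extract the model name from the request and load it from storage."""
    name = request["model_details"]["name"]  # e.g. "kpi_forecasting_model"
    model_path = MODEL_STORE / f"{name}.joblib"
    if not model_path.exists():
        raise FileNotFoundError(f"No trained model stored under '{name}'")
    return joblib.load(model_path)
```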
[0053] Upon fetching the trained model from the storage unit 216, the feeding unit 220 is configured to feed the trained model. The trained model is fed with at least one of the received performance data, the selected one or more features and the configured one or more hyperparameters. The received performance data includes real-time or historical performance metrics such as the KPIs and counters, which are essential for making predictions. The selected one or more features are the data attributes identified from the performance data that are relevant to the prediction. The configured one or more hyperparameters include, but are not limited to, the learning rate, batch size or regularization parameters. The configured one or more hyperparameters control the behavior of the trained model during the prediction. In an embodiment, the trained model is fed with at least one of a checkpoint and a train-test split.
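By way of a non-limiting illustration, the train-test split mentioned above may be realized as follows on synthetic stand-in data; disabling shuffling preserves time order, which matters for time-series KPI data.

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Synthetic stand-in feature rows and target KPI values.
X = np.arange(100, dtype=float).reshape(-1, 1)
y = 0.5 * X.ravel() + np.random.default_rng(0).normal(size=100)

# shuffle=False keeps chronological order for time-series KPIs.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, shuffle=False)
print(len(X_train), len(X_test))  # 80 20
```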
[0054] Upon feeding the trained model, the predicting unit 222 is configured to predict the one or more events by utilizing the trained model. The predicting of the one or more events pertains to predicting future performance data including at least one of future KPIs and counters. The future performance data includes anticipated changes in network performance metrics based on current trends and historical patterns. The future KPIs include predicted key performance indicators such as expected latency, throughput or packet loss at future time intervals. The future counters include predicted numerical counters such as the expected number of connection attempts or data packets transmitted in the near future. In an embodiment, the user can provide a time interval for future predictions through a Graphical User Interface (GUI), a Command Line Interface (CLI) or the request. In an embodiment, the predicted one or more events are displayed on the GUI.
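By way of a non-limiting illustration, a naive recursive forecast over the requested horizon may look as follows; the single-lag feature layout and the scikit-learn style predict() interface are assumptions for illustration.

```python
import numpy as np

def predict_future_kpis(model, last_value: float, horizon_steps: int = 4) -> list:
    """Predict one KPI value per future interval by feeding each
    prediction back in as the next input (naive recursive forecast)."""
    forecasts = []
    value = last_value
    for _ in range(horizon_steps):
        # Assumes a fitted model with a single lagged-KPI input feature.
        value = float(model.predict(np.array([[value]]))[0])
        forecasts.append(value)
    return forecasts
```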
[0055] Upon predicting the one or more events, the transceiver 210 is further configured to transmit a response pertaining to the predicted one or more events to the one or more sources 224. The response includes, but is not limited to, the predicted events, future KPIs, future counters and timeframes. In an embodiment, the response includes, but is not limited to, predicted KPI and counter values pertaining to the one or more events. Further, the response is transmitted to the one or more sources 224 via the API.
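By way of a non-limiting illustration, the response may be serialized as JSON with one timeframe per predicted interval; the field names below are hypothetical.

```python
import json
from datetime import datetime, timedelta, timezone

def build_response(forecasts: list, interval_minutes: int = 15) -> str:
    """Package predicted KPI values with their future timeframes."""
    start = datetime.now(timezone.utc)
    return json.dumps({
        "predicted_events": [
            {
                "timeframe": (start + timedelta(minutes=i * interval_minutes)).isoformat(),
                "predicted_kpis": {"latency_ms": kpi},  # hypothetical KPI field
            }
            for i, kpi in enumerate(forecasts, start=1)
        ]
    })

print(build_response([41.8, 43.2, 47.9]))
```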
[0056] Therefore, the system 108 can quickly predict the network function parameters. Further, the system 108 detects security threats and fraudulent activities within the telecommunication network early. The system 108 assists in redefining policies for better monitoring, management and security. The system 108 helps in rerouting traffic based on predicted network congestion and performance KPIs. The system 108 provides dynamic resource allocation for better operation.
[0057] FIG. 3 is an exemplary block diagram of an architecture 300 of the system 108 for predicting the one or more events according to one or more embodiments of the present invention.
[0058] The architecture 300 includes an IPM 302 and a processing hub 304. The processing hub 304 includes a data integrator 306, a data pre-processor 308, a model training unit 310, and the predicting unit 222. The model training unit 310 is communicably coupled to a data lake 314.
[0059] The IPM 302 transmits the request to predict the one or more events. The request includes at least one of the performance data and details of the trained model. The performance data is at least one of the network function performance data which includes at least one of the Key Performance Indicators (KPIs) and the counters. The request is at least one of the API requests. The IPM 302 is responsible for real time KPI computation.
[0060] Upon receiving the request from the IPM 302, the data integrator 306 collects the performance data and the details of the trained model and transmits them to the processing hub 304. Subsequently, the performance data and the details of the trained model are transmitted to the data pre-processor 308. The data pre-processor 308 pre-processes the collected data. The collected data is pre-processed by cleaning and normalizing the collected data. Further, the pre-processed data is stored at the data lake 314.
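By way of a non-limiting illustration, the cleaning and normalization step may be sketched as follows; dropping missing rows and min-max scaling are one common reading, as the disclosure does not fix the exact rules.

```python
import pandas as pd

def preprocess(raw: pd.DataFrame) -> pd.DataFrame:
    """Clean and normalize collected performance data."""
    cleaned = raw.dropna().drop_duplicates()      # cleaning
    numeric = cleaned.select_dtypes("number")     # keep measurable attributes
    span = (numeric.max() - numeric.min()).replace(0, 1)
    return (numeric - numeric.min()) / span       # min-max normalization
```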
[0061] Upon pre-processing the collected data, the model training unit 310 fetches the trained model from the data lake 314 based on the received request from the IPM 302. In an embodiment, the trained model is pre-trained and stored in the data lake 314. Further, the model training unit 310 feeds the trained model with at least one of the received performance data, the selected one or more features and the configured one or more hyperparameters.
[0062] Subsequently, the predicting unit 222 predicts the one or more events by utilizing the trained model. In an embodiment, the one or more events are predicted for a future time interval based on the received request from the IPM 302. Predicting the one or more events pertains to predicting future performance data including at least one of the future KPIs and the counters. Upon predicting the one or more events, the predicting unit 222 transmits the response pertaining to the predicted one or more events to the IPM 302.
[0063] FIG. 4 is a flow diagram for predicting the one or more events according to one or more embodiments of the present invention.
[0064] At step 402, the API request is received from the IPM 302 to predict the one or more events. The API request includes at least one of the performance data and details of the trained model. The performance data is at least one of the network function performance data which includes at least one of the Key Performance Indicators (KPIs) and the counters.
[0065] At step 404, upon receiving the API request from the IPM 302, the received performance data and details of the trained model are integrated. Upon integrating the received performance data and details of the trained model, the received performance data and details of the trained model are preprocessed. The preprocessing of the received performance data and details of the trained model includes data cleaning and normalization.
[0066] At step 406, subsequently, the trained models are fetched from the data lake 314 based on the received API request from the IPM 302. In an embodiment, the trained model is pre-trained and stored in the data lake 314.
[0067] At step 408, upon fetching the trained model from the data lake 314, the received performance data, the selected one or more features and the configured one or more hyperparameters are fed into the fetched trained model.
[0068] At step 410, subsequently, the future KPIs and counters are predicted by utilizing the trained model.
[0069] At step 412, upon predicting the future KPIs and counters, the response pertaining to the predicted future KPIs and counters is transmitted to the IPM 302.
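By way of a non-limiting illustration, the following endpoint sketch wires steps 402 through 412 together, reusing the hypothetical helper sketches given earlier (select_features, preprocess, fetch_trained_model, configure_hyperparameters, predict_future_kpis); neither the names nor the Flask framework choice are mandated by the present disclosure.

```python
import pandas as pd
from flask import Flask, jsonify, request as http_request

app = Flask(__name__)

@app.post("/api/v1/predict")  # hypothetical route
def handle_prediction_request():
    body = http_request.get_json()                     # step 402: receive request
    raw = pd.DataFrame(body["performance_data"]["kpis"], index=[0])
    features = select_features(body, preprocess(raw))  # step 404: integrate, preprocess
    model = fetch_trained_model(body)                  # step 406: fetch trained model
    hyperparameters = configure_hyperparameters(body)  # would parameterize the model
                                                       # in a fuller implementation
    forecasts = predict_future_kpis(                   # steps 408-410: feed and predict
        model, float(features.iloc[-1, 0]))
    return jsonify({"predicted_kpis": forecasts})      # step 412: transmit response
```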
[0070] FIG. 5 is a flow diagram of a method 500 for predicting the one or more events according to one or more embodiments of the present invention. For the purpose of description, the method 500 is described with the embodiments as illustrated in FIG. 2 and should nowhere be construed as limiting the scope of the present disclosure.
[0071] At step 502, the method 500 includes the step of receiving the request from the one or more sources 224 to predict the one or more events by the transceiver 210. The request includes at least one of the performance data and details of a trained model to be utilized. The performance data is at least one of the network function performance data which includes at least one of the KPIs and the counters. The request is at least one of the API requests. The one or more sources 224 include at least one of the IPM 302 module. In an embodiment, the trained model is pre-trained and stored in the storage unit 216.
[0072] At step 504, the method 500 includes the step of selecting the one or more features based on the received request by the selecting unit 212. The one or more features pertain to KPIs and counters of network function performance data. Further, the method 500 includes the step of configuring the one or more hyperparameters by the configuring unit 214.
[0073] At step 506, the method 500 includes the step of fetching the trained model from the storage unit 216, based on the received request, by the fetching unit 218. The fetching unit 218 fetches the trained model from the storage unit 216 by extracting at least one of a name of the trained model. Based on the extracted name of the trained model, the fetching unit 218 is configured to fetch the trained model from the storage unit 216.
[0074] At step 508, the method 500 includes the step of feeding the trained model with at least one of the received performance data, the selected one or more features and the configured one or more hyperparameters by the feeding unit 220.
[0075] At step 510, the method 500 includes the step of predicting the one or more events by utilizing the trained model. The predicting of the one or more events pertains to predicting future performance data including at least one of the future KPIs and the counters. Upon predicting the one or more events, the transceiver 210 is configured to transmit the response pertaining to the predicted one or more events to the one or more sources 224.
[0076] The present invention further discloses a non-transitory computer-readable medium having stored thereon computer-readable instructions. The computer-readable instructions are executed by the processor 202. The processor 202 is configured to receive the request from the one or more sources 224 to predict the one or more events. The request includes at least one of performance data and details of the trained model to be utilized. The processor 202 is further configured to select the one or more features based on the received request. The processor 202 is further configured to configure the one or more hyperparameters. The processor 202 is further configured to fetch the trained model from a storage unit, based on the received request. The processor 202 is further configured to feed the trained model with at least one of the received performance data, the selected one or more features and the configured one or more hyperparameters. The processor 202 is further configured to predict, utilizing the trained model, the one or more events.
[0077] A person of ordinary skill in the art will readily ascertain that the illustrated embodiments and steps in description and drawings (FIG.1-5) are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope and spirit of the disclosed embodiments.
[0078] The present disclosure provides the technical advancement of quickly predicting network function parameters. Further, the present invention enables early detection of security threats and fraudulent activities within the telecommunication network. Further, the present invention assists in redefining policies for better monitoring, management and security. Further, the present invention helps in rerouting traffic based on predicted network congestion and performance KPIs. The present invention provides dynamic resource allocation for better operation. The present invention improves the accuracy of predictions and allows better resource allocation. The present invention ensures that the model can adapt to varying conditions and performance metrics, potentially leading to more relevant and timely predictions. The present invention easily integrates into existing network management frameworks, facilitating improved data-driven decision-making. The present invention contributes to better resource utilization and network optimization, ultimately leading to cost savings and improved service quality.
[0079] The present invention offers multiple advantages over the prior art and the above listed are a few examples to emphasize on some of the advantageous features. The listed advantages are to be read in a non-limiting manner.
REFERENCE NUMERALS
[0080] Environment- 100
[0081] User Equipment (UE)- 102
[0082] Server- 104
[0083] Network- 106
[0084] System -108
[0085] Processor- 202
[0086] Memory- 204
[0087] User Interface- 206
[0088] Database- 208
[0089] Transceiver - 210
[0090] Selecting Unit- 212
[0091] Configuring unit- 214
[0092] Storage Unit- 216
[0093] Fetching Unit- 218
[0094] Feeding Unit- 220
[0095] Predicting Unit- 222
[0096] One or more sources- 224
[0097] IPM- 302
[0098] Processing hub- 304
[0099] Data integrator- 306
[00100] Data pre-processor- 308
[00101] Model training unit- 310
[00102] Data lake- 314
CLAIMS:
We Claim:
1. A method (500) for predicting one or more events, the method (500) comprising the steps of:
receiving, by one or more processors (202), a request from one or more sources (224);
selecting, by the one or more processors (202), one or more features based on the received request;
fetching, by the one or more processors (202), the trained model from a storage unit (216), based on the received request;
feeding, by the one or more processors (202), the trained model with at least one of, the received performance data, the selected one or more features and the configured one or more hyperparameters; and
predicting, by the one or more processors (202), utilizing the trained model, the one or more events.
2. The method (500) as claimed in claim 1, wherein the request includes at least one of, performance data and details of a trained model to be utilized.
3. The method (500) as claimed in claim 1, wherein the performance data is at least one of network function performance data which includes at least one of, Key Performance Indicators (KPIs) and counters.
4. The method (500) as claimed in claim 1, wherein the request is at least one of, an Application Programming Interface (API) request.
5. The method (500) as claimed in claim 1, wherein the one or more sources (224) include at least one of, an Integration Performance Management (IPM) (302) module.
6. The method (500) as claimed in claim 1, wherein the one or more features pertain to KPIs and counters of network function performance data.
7. The method (500) as claimed in claim 1, wherein the method (500) comprises the step of configuring, by the one or more processors (202), one or more hyperparameters.
8. The method (500) as claimed in claim 1, wherein the step of, fetching, the trained model from a storage unit (216), based on the received request, includes the steps of:
extracting, by the one or more processors (202), at least one of, name of the trained model to be utilized based on the request; and
fetching, by the one or more processors (202), the trained model from the storage unit (216) based on the extracted name of the trained model.
9. The method (500) as claimed in claim 1, wherein predicting the one or more events pertains to predicting future performance data including at least one of, future KPIs and counters.
10. The method (500) as claimed in claim 1, wherein the trained model is pre-trained and stored in the storage unit (216).
11. The method (500) as claimed in claim 1, wherein the method further comprises the step of:
transmitting, by the one or more processors (202), a response pertaining to the predicted one or more events to the one or more sources (224).
12. A system (108) for predicting one or more events, the system (108) comprising:
a transceiver (210), configured to, receive, a request from one or more sources (224);
a selecting unit (212), configured to, select, one or more features based on the received request;
a fetching unit (218), configured to, fetch, the trained model from a storage unit (216), based on the received request;
a feeding unit (220), configured to, feed, the trained model with at least one of, the received performance data, the selected one or more features and the configured one or more hyperparameters; and
a predicting unit (222), configured to, predict, utilizing the trained model, the one or more events.
13. The system (108) as claimed in claim 12, wherein the request includes at least one of, performance data and details of a trained model to be utilized.
14. The system (108) as claimed in claim 12, wherein the performance data is at least one of network function performance data which includes at least one of, Key Performance Indicators (KPIs) and counters.
15. The system (108) as claimed in claim 12, wherein the request is at least one of, an Application Programming Interface (API) request.
16. The system (108) as claimed in claim 12, wherein the one or more sources (224) include at least one of, an Integration Performance Management (IPM) (302) module.
17. The system (108) as claimed in claim 12, wherein the one or more features pertain to KPIs and counters of network function performance data.
18. The system (108) as claimed in claim 12, wherein the system (108) comprises a configuring unit (214) configured to configure one or more hyperparameters.
19. The system (108) as claimed in claim 12, wherein the fetching unit (218) fetches the trained model from the storage unit (216), by:
extracting, at least one of, name of the trained model to be utilized based on the request; and
fetching, the trained model from the storage unit (216) based on the extracted name of the trained model.
20. The system (108) as claimed in claim 12, wherein predicting the one or more events pertains to predicting future performance data including at least one of, future KPIs and counters.
21. The system (108) as claimed in claim 12, wherein the trained model is pre-trained and stored in the storage unit (216).
22. The system (108) as claimed in claim 12, wherein the transceiver (210) is further configured to transmit a response pertaining to the predicted one or more events to the one or more sources (224).