Abstract: SYSTEM AND METHOD FOR FORECASTING ONE OR MORE ALERTS. The present invention relates to a system (108) and a method (600) for forecasting one or more alerts. The method (600) includes the step of retrieving historic data pertaining to one or more alerts associated with performance metrics from one or more data sources (110). Further, computing one or more features including at least one of, one or more thresholds, a time of peak of the performance metrics and a time of dip of the performance metrics. Furthermore, training an Artificial Intelligence/ Machine Learning (AI/ML) model (220) with at least one of, the computed one or more features. Thereafter, forecasting, utilizing the trained AI/ML model (220), one or more alerts associated with the performance metrics. Ref. Fig. 2
DESC:
FORM 2
THE PATENTS ACT, 1970
(39 of 1970)
&
THE PATENTS RULES, 2003
COMPLETE SPECIFICATION
(See section 10 and rule 13)
1. TITLE OF THE INVENTION
SYSTEM AND METHOD FOR FORECASTING ONE OR MORE ALERTS
2. APPLICANT(S)
NAME NATIONALITY ADDRESS
JIO PLATFORMS LIMITED INDIAN OFFICE-101, SAFFRON, NR. CENTRE POINT, PANCHWATI 5 RASTA, AMBAWADI, AHMEDABAD 380006, GUJARAT, INDIA
3. PREAMBLE TO THE DESCRIPTION
THE FOLLOWING SPECIFICATION PARTICULARLY DESCRIBES THE NATURE OF THIS INVENTION AND THE MANNER IN WHICH IT IS TO BE PERFORMED.
FIELD OF THE INVENTION
[0001] The present invention relates to the field of wireless communication systems, and more particularly to a method and a system for forecasting one or more alerts.
BACKGROUND OF THE INVENTION
[0002] With an increase in the number of users, network service provisions have to be upgraded to accommodate the additional users and to enhance service quality so as to keep pace with such high demand. There are many factors to be considered when assessing the quality of a network. Maintaining the health of a network requires regular monitoring of various parameters, such as the performance of various network elements and network functions. Network functions play a vital role in improving the quality of a network by managing traffic, delegating node allocation, managing the performance of routing devices, and the like. A network function is associated with micro-services executing several tasks in parallel. The data generated by the network services is vast, and analysis of such data is essential for enhancing user experience and improving service quality. The network functions in a network generate an immense amount of performance data, including Key Performance Indicators (KPIs) and counters. Over time, the quality of network services may degrade, or, due to external factors such as adverse weather, the counter values may reach their maximum or minimum value, i.e., a threshold. When values of the KPIs or counters reach the threshold, there may be a service impact.
[0003] Breach alerts for Key Performance Indicators (KPIs) and counters are sent by an Integrated Performance Management (IPM) service to notify network operators and administrators when certain predefined thresholds are exceeded. These alerts are critical for maintaining the network's health and ensuring Quality of Service (QoS). However, addressing the issues in real time, after being notified of the breach, consumes a significant amount of time, which may further impact customer experience. Therefore, it is imperative that a possible breach alert be sent prior to the actual breach. There is a need for a system and a method thereof to analyze the data and predict the breach beforehand, so that prior maintenance or repair is feasible.
[0004] In the contemporary network maintenance and management approach, steps are taken to resolve an issue only after the problem occurs, which is accompanied by service disruption and downtime. Usual network management is primarily reactive, where network quality issues are addressed only after they are reported by users or when they have resulted in noticeable service disruptions. This reactive approach negatively impacts user satisfaction and service reliability. It also causes inefficient allocation of resources, thereby adding unnecessary operational cost.
[0005] There is a requirement for a system and a method thereof to monitor the KPIs and counter values, predict their change pattern with accuracy over a configurable time interval, and notify operators or concerned personnel about a possible threshold breach so as to prompt a proactive response.
SUMMARY OF THE INVENTION
[0006] One or more embodiments of the present disclosure provides a method and a system for forecasting one or more alerts.
[0007] In one aspect of the present invention, the method for forecasting one or more alerts is disclosed. The method includes the step of retrieving, by the one or more processors, from one or more data sources, historic data pertaining to one or more alerts associated with performance metrics. The method further includes the step of computing, by the one or more processors, at least one of, one or more features. The method further includes the step of training, by the one or more processors, an Artificial Intelligence/ Machine Learning (AI/ML) model with at least one of, the computed one or more features. The method further includes the step of forecasting, by the one or more processors, utilizing the trained AI/ML model, one or more alerts associated with the performance metrics.
[0008] In another embodiment, the one or more alerts associated with the performance metrics are raised when predefined thresholds are breached by the performance metrics.
[0009] In yet another embodiment, the performance metrics is at least one of, Key Performance Indicators (KPIs) and counters.
[0010] In yet another embodiment, the one or more data sources include at least one of, an Integrated Performance Management (IPM) module, and network configurations.
[0011] In yet another embodiment, the step of, retrieving, from one or more data sources, historic data pertaining to one or more alerts associated with the performance metrics, further includes the step of preprocessing, by the one or more processors, the historic data.
[0012] In yet another embodiment, the one or more features includes at least one of, one or more thresholds, time of peak of the performance metrics and time of dip of the performance metrics.
[0013] In yet another embodiment, the one or more processors, computes the one or more features based on at least one of, data visualization and plotting techniques.
[0014] In yet another embodiment, the step of, training, the AI/ML model with at least one of, the computed one or more features includes the steps of, identifying, by the one or more processors, trends/patterns associated with the time of peak of performance metrics and the time of dip of the performance metrics, the trends/patterns are related to the predefined thresholds breached by the performance metrics and enabling, by the one or more processors, the AI/ML model to learn the identified trends/patterns associated with the time of peak of the performance metrics and the time of dip of the performance metrics.
[0015] In yet another embodiment, the step of, forecasting, utilizing the trained AI/ML model, one or more alerts associated with the performance metrics, includes the step of forecasting, by the one or more processors, utilizing the trained AI/ML model, the one or more alerts based on the learnt, at least one of, the trends/patterns associated with the time of peak of the performance metrics and the time of dip of the performance metrics.
[0016] In yet another embodiment, the forecasting of the one or more alerts is performed by the one or more processors based on at least one of, a given future data range or one or more new data sources.
[0017] In yet another embodiment, the step of, forecasting, utilizing the trained AI/ML model, one or more alerts associated with the performance metrics, further includes the step of transmitting, by the one or more processors, the forecasted one or more alerts to a user via a user interface.
[0018] In yet another embodiment, upon training the AI/ML model, the method includes the steps of, storing, by the one or more processors, the trained AI/ML model in a data lake by providing a distinct name to the trained AI/ML model and creating, by the one or more processors, a searchable catalogue in the data lake to allow third parties to use the trained AI/ML model by providing the distinct name in the searchable catalogue for different training and use cases.
[0019] In another aspect of the present invention, the system for forecasting one or more alerts is disclosed. The system includes a retrieving unit, configured to, retrieve, from one or more data sources, historic data pertaining to one or more alerts associated with performance metrics. The system further includes a computing unit, configured to, compute, one or more features. The system further includes a training unit, configured to, train, an Artificial Intelligence/ Machine Learning (AI/ML) model with at least one of, the computed one or more features. The system further includes a forecasting engine, configured to, forecast, utilizing the trained AI/ML model, one or more alerts associated with the performance metrics.
[0020] In yet another aspect of the present invention, a non-transitory computer-readable medium is disclosed, having stored thereon computer-readable instructions that, when executed by a processor, configure the processor as follows. The processor is configured to retrieve, from one or more data sources, historic data pertaining to one or more alerts associated with performance metrics. The processor is further configured to compute, one or more features. The processor is further configured to train, an Artificial Intelligence/ Machine Learning (AI/ML) model with at least one of, the computed one or more features. The processor is further configured to forecast, utilizing the trained AI/ML model, one or more alerts associated with the performance metrics.
[0021] Other features and aspects of this invention will be apparent from the following description and the accompanying drawings. The features and advantages described in this summary and in the following detailed description are not all-inclusive, and particularly, many additional features and advantages will be apparent to one of ordinary skill in the relevant art, in view of the drawings, specification, and claims hereof. Moreover, it should be noted that the language used in the specification has been principally selected for readability and instructional purposes and may not have been selected to delineate or circumscribe the inventive subject matter, resort to the claims being necessary to determine such inventive subject matter.
BRIEF DESCRIPTION OF THE DRAWINGS
[0022] The accompanying drawings, which are incorporated herein, and constitute a part of this disclosure, illustrate exemplary embodiments of the disclosed methods and systems in which like reference numerals refer to the same parts throughout the different drawings. Components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present disclosure. Some drawings may indicate the components using block diagrams and may not represent the internal circuitry of each component. It will be appreciated by those skilled in the art that disclosure of such drawings includes disclosure of electrical components, electronic components or circuitry commonly used to implement such components.
[0023] FIG. 1 is an exemplary block diagram of an environment for forecasting one or more alerts, according to one or more embodiments of the present invention;
[0024] FIG. 2 is an exemplary block diagram of a system for forecasting one or more alerts, according to one or more embodiments of the present invention;
[0025] FIG. 3 is an exemplary architecture of the system of FIG. 2, according to one or more embodiments of the present invention;
[0026] FIG. 4 is an exemplary architecture for forecasting one or more alerts, according to one or more embodiments of the present disclosure;
[0027] FIG. 5 is an exemplary signal flow diagram illustrating the flow for forecasting one or more alerts, according to one or more embodiments of the present disclosure; and
[0028] FIG. 6 is a flow diagram of a method for forecasting one or more alerts, according to one or more embodiments of the present invention.
[0029] The foregoing shall be more apparent from the following detailed description of the invention.
DETAILED DESCRIPTION OF THE INVENTION
[0030] Some embodiments of the present disclosure, illustrating all its features, will now be discussed in detail. It must also be noted that as used herein and in the appended claims, the singular forms "a", "an" and "the" include plural references unless the context clearly dictates otherwise.
[0031] Various modifications to the embodiment will be readily apparent to those skilled in the art and the generic principles herein may be applied to other embodiments. However, one of ordinary skill in the art will readily recognize that the present disclosure including the definitions listed here below are not intended to be limited to the embodiments illustrated but is to be accorded the widest scope consistent with the principles and features described herein.
[0032] A person of ordinary skill in the art will readily ascertain that the illustrated steps detailed in the figures and here below are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope and spirit of the disclosed embodiments.
[0033] Various embodiments of the present invention provide a system and a method for forecasting one or more alerts. The most unique aspect of the present invention is the ability to predict potential network issues and take proactive measures in maintaining network health and minimizing service disruptions that may be anticipated in future. The invention employs advanced Artificial Intelligence/Machine Learning (AI/ML) model to forecast the one or more alerts before network issues occur and transmit notification to the user or operator for optimal measures.
[0034] Referring to FIG. 1, FIG. 1 illustrates an exemplary block diagram of an environment 100 for forecasting one or more alerts, according to one or more embodiments of the present invention. The environment 100 includes a User Equipment (UE) 102, a server 104, a network 106, a system 108, and one or more data sources 110. In one embodiment, the one or more alerts refer to notifications or warnings that are triggered when performance metrics vary beyond the range of predefined thresholds or fall below acceptable levels. The one or more alerts also indicate that the performance of the system 108 is impacted. The present invention provides the system 108 and a method for forecasting the one or more alerts for performance metrics such as Key Performance Indicators (KPIs) and counters at future time intervals. The forecasted one or more alerts are possible alerts which indicate that the performance metrics would reach the predefined thresholds and cause performance degradation in the network 106. In other words, the system provides a prior notification of the one or more alerts to the user, so that the user can resolve the issues proactively.
[0035] For the purpose of description and explanation, the description will be explained with respect to one or more User Equipments (UEs) 102, or to be more specific, with respect to a first UE 102a, a second UE 102b, and a third UE 102c, and should nowhere be construed as limiting the scope of the present disclosure. Each of the at least one UE 102, namely the first UE 102a, the second UE 102b, and the third UE 102c, is configured to connect to the server 104 via the network 106.
[0036] In an embodiment, each of the first UE 102a, the second UE 102b, and the third UE 102c is one of, but not limited to, any electrical, electronic, or electro-mechanical equipment, or a combination of one or more of the above devices, such as smartphones, Virtual Reality (VR) devices, Augmented Reality (AR) devices, a laptop, a general-purpose computer, a desktop, a personal digital assistant, a tablet computer, a mainframe computer, or any other computing device.
[0037] The network 106 includes, by way of example but not limitation, one or more of a wireless network, a wired network, an internet, an intranet, a public network, a private network, a packet-switched network, a circuit-switched network, an ad hoc network, an infrastructure network, a Public-Switched Telephone Network (PSTN), a cable network, a cellular network, a satellite network, a fiber optic network, or some combination thereof. The network 106 may include, but is not limited to, a Third Generation (3G), a Fourth Generation (4G), a Fifth Generation (5G), a Sixth Generation (6G), a New Radio (NR), a Narrow Band Internet of Things (NB-IoT), an Open Radio Access Network (O-RAN), and the like.
[0038] The network 106 may also include, by way of example but not limitation, at least a portion of one or more networks having one or more nodes that transmit, receive, forward, generate, buffer, store, route, switch, process, or a combination thereof, etc. one or more messages, packets, signals, waves, voltage or current levels, some combination thereof, or so forth.
[0039] The environment 100 includes the server 104 accessible via the network 106. The server 104 may include by way of example but not limitation, one or more of a standalone server, a server blade, a server rack, a bank of servers, a server farm, hardware supporting a part of a cloud service or system, a home server, hardware running a virtualized server, a processor executing code to function as a server, one or more machines performing server-side functionality as described herein, at least a portion of any of the above, some combination thereof. In an embodiment, the entity may include, but is not limited to, a vendor, a network operator, a company, an organization, a university, a lab facility, a business enterprise side, a defense facility side, or any other facility that provides service.
[0040] The environment 100 further includes the one or more data sources 110. In one embodiment, the data sources are origins from which the data is collected and utilized for at least one of, but not limited to, analysis, research, and decision-making. In one embodiment, the one or more data sources 110 are at least one of, but not limited to, network functions, network elements, network configurations and an Integrated Performance Management (IPM) module. In particular, the one or more data sources 110 are associated with the sources included within the network 106 and outside the network 106.
[0041] In one embodiment, the IPM module typically refers to a component within the system 108, or is embedded as an individual entity, that focuses on monitoring, analyzing, and optimizing the performance of integrated applications and services. In particular, the IPM module generates reports or maintains records associated with the performance metrics.
[0042] The environment 100 further includes the system 108 communicably coupled to the server 104, the UE 102, and the one or more data sources 110 via the network 106. The system 108 is adapted to be embedded within the server 104 or is embedded as an individual entity.
[0043] Operational and construction features of the system 108 will be explained in detail with respect to the following figures.
[0044] FIG. 2 is an exemplary block diagram of the system 108 for forecasting the one or more alerts, according to one or more embodiments of the present invention.
[0045] As per the illustrated and preferred embodiment, the system 108 for forecasting the one or more alerts, includes one or more processors 202, a memory 204, a data lake 206 and an Artificial Intelligence/ Machine Learning (AI/ML) model 220. The one or more processors 202, hereinafter referred to as the processor 202, may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, single board computers, and/or any devices that manipulate signals based on operational instructions. However, it is to be noted that the system 108 may include multiple processors as per the requirement and without deviating from the scope of the present disclosure. Among other capabilities, the processor 202 is configured to fetch and execute computer-readable instructions stored in the memory 204.
[0046] As per the illustrated embodiment, the processor 202 is configured to fetch and execute computer-readable instructions stored in the memory 204 as the memory 204 is communicably connected to the processor 202. The memory 204 is configured to store one or more computer-readable instructions or routines in a non-transitory computer-readable storage medium, which may be fetched and executed for forecasting the one or more alerts. The memory 204 may include any non-transitory storage device including, for example, volatile memory such as RAM, or non-volatile memory such as disk memory, EPROMs, FLASH memory, unalterable memory, and the like.
[0047] The system 108 further includes the data lake 206. As per the illustrated embodiment, the data lake 206 is configured to store data retrieved from the one or more data sources 110. The data lake 206 is one of, but not limited to, a centralized database, a cloud-based database, a commercial database, an open-source database, a distributed database, an end-user database, a graphical database, a No-Structured Query Language (NoSQL) database, an object-oriented database, a personal database, an in-memory database, a document-based database, a time series database, a wide column database, a key value database, a search database, a cache database, and so forth. The foregoing examples of the data lake 206 types are non-limiting and may not be mutually exclusive; e.g., the database can be both commercial and cloud-based, or both relational and open-source.
[0048] As per the illustrated embodiment, the system 108 includes the AI/ML model 220. In an alternate embodiment, the system 108 includes a plurality of AI/ML models 220. The AI/ML model 220 is a machine learning model that performs tasks such as recognizing patterns, forecasting one or more alerts, making predictions, and solving problems, and thereby enhances decision-making and provides insights across various fields. For example, the AI/ML model 220 facilitates solving real-world problems without extensive manual intervention.
[0049] As per the illustrated embodiment, the system 108 includes the processor 202 for forecasting one or more alerts. The processor 202 includes a retrieving unit 208, a computing unit 210, a configuring unit 212, a training unit 214, and a forecasting engine 216. The processor 202 is communicably coupled to the one or more components of the system 108 such as the memory 204, the data lake 206 and the model 220. In an embodiment, operations and functionalities of the retrieving unit 208, the computing unit 210, the configuring unit 212, the training unit 214, the forecasting engine 216 and the one or more components of the system 108 can be used in combination or interchangeably.
[0050] In one embodiment, initially the retrieving unit 208 of the processor 202 is configured to retrieve historic data from the one or more data sources 110. Herein, the retrieved data pertains to one or more alerts associated with performance metrics. In one embodiment, the one or more alerts associated with the performance metrics are raised when predefined thresholds are breached by the performance metrics. The predefined thresholds are set by at least one of, the AI/ML model based on historical data of the performance metrics. Herein, the retrieving unit 208 retrieves historic data from the one or more data sources 110 which are present within the network 106 and outside the network 106. In one embodiment, the one or more data sources 110 are at least one of, but not limited to, the IPM module, and network configurations. In one embodiment, the one or more data sources 110 transmit data to the system 108 periodically. In an alternate embodiment, the historic data is retrieved from the one or more data sources 110 based on demand of the system 108.
[0051] In one embodiment, the retrieving unit 208 retrieves historic data from the one or more data sources 110 via an interface. For example, the interface is at least one of, but not limited to, an IPM interface. In particular, the IPM interface is the medium through which the system 108 and the one or more data sources 110, such as the IPM module, communicate. In one embodiment, the interface includes at least one of, but not limited to, one or more APIs which are used for retrieving data from the one or more data sources 110.
[0052] The one or more APIs are sets of rules and protocols that allow different entities to communicate with each other. The one or more APIs define the methods and data formats that entities can use to request and exchange information, enabling integration and functionality across various platforms. In particular, the APIs are essential for integrating different systems, accessing services, and extending functionality.
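The specification does not mandate a particular retrieval mechanism or payload format for the API-based retrieval described above. As a minimal illustrative sketch (the JSON schema, field names, and the `parse_ipm_response` helper are all assumptions, not part of any defined IPM interface), an API response might be deserialized into alert records as follows:

```python
import json

def parse_ipm_response(payload):
    """Deserialize a hypothetical IPM API JSON payload into alert records.

    The payload shape and field names below are assumptions for
    illustration; an actual IPM response schema may differ.
    """
    body = json.loads(payload)
    records = []
    for item in body.get("records", []):
        records.append({
            "timestamp": item["timestamp"],
            "metric": item["metric"],
            "value": float(item["value"]),
        })
    return records

# Hypothetical sample response.
sample = ('{"records": [{"timestamp": "2024-01-01T10:00:00",'
          ' "metric": "latency_ms", "value": "150"}]}')
alerts = parse_ipm_response(sample)
```

Normalizing the payload into a uniform record shape at the boundary keeps the downstream preprocessing and feature computation independent of any one data source's format.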
[0053] In one embodiment, the performance metrics are at least one of, but not limited to, Key Performance Indicators (KPIs) and counters. The KPIs and counters are essential tools for measuring and evaluating the performance of various systems, including the network 106. The KPIs are quantifiable metrics that assess the efficiency, effectiveness, and overall performance of the system 108. For example, the KPIs are at least one of, but not limited to, a reliability, an efficiency, a throughput, a latency, a network availability and a packet loss. In one embodiment, the counters are specific metrics that track the occurrence of particular events over time. For example, a counter measures the total amount of data transmitted over the network 106 during a specific period.
[0054] In one embodiment, upon retrieving the historic data from the one or more data sources 110, the retrieving unit 208 is configured to integrate the historic data retrieved from the one or more data sources 110. Herein, integrating data involves combining data retrieved from the one or more data sources 110 to provide a unified view or to enable comprehensive analysis. The processes of integrating data are essential for gaining insights, improving decision-making, and ensuring consistency across the system 108.
[0055] Upon retrieving the historic data from the one or more data sources 110, the retrieving unit 208 is further configured to preprocess the historic data received from the one or more data sources 110. In particular, the retrieving unit 208 is configured to preprocess the retrieved historic data to ensure the consistency and quality of the data within the system 108. The retrieving unit 208 performs at least one of, but not limited to, data normalization, data definition and data cleaning procedures.
[0056] While preprocessing, the retrieving unit 208 performs at least one of, but not limited to, reorganizing the data, removing the redundant data, formatting the data, removing null values from the data, handling missing values. The main goal of the preprocessing is to achieve a standardized data format across the entire system 108. The preprocessing eliminates duplicate data and inconsistencies which reduces manual efforts. The retrieving unit 208 ensures that the preprocessed data is stored appropriately in at least one of, the data lake 206 for subsequent retrieval and analysis.
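The preprocessing steps above (removing duplicates, dropping null or missing values, and standardizing the data format) can be sketched in Python; the record layout with `timestamp`/`metric`/`value` keys is an illustrative assumption, not from the specification:

```python
def preprocess(records):
    """Clean retrieved records: drop duplicates and null values, then sort.

    A sketch of the preprocessing described above; the record layout
    is an illustrative assumption.
    """
    seen = set()
    cleaned = []
    for rec in records:
        key = (rec.get("timestamp"), rec.get("metric"))
        # Skip exact duplicates and records with missing fields.
        if key in seen or None in key or rec.get("value") is None:
            continue
        seen.add(key)
        cleaned.append(rec)
    # Standardize ordering so downstream steps see a consistent format.
    cleaned.sort(key=lambda r: r["timestamp"])
    return cleaned

raw = [
    {"timestamp": "t2", "metric": "latency_ms", "value": 150.0},
    {"timestamp": "t1", "metric": "latency_ms", "value": 50.0},
    {"timestamp": "t2", "metric": "latency_ms", "value": 150.0},  # duplicate
    {"timestamp": "t3", "metric": "latency_ms", "value": None},   # null value
]
cleaned = preprocess(raw)
```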
[0057] Upon preprocessing the historic data, the computing unit 210 of the processor 202 is configured to compute at least one of, but not limited to, one or more features. Herein the one or more features includes at least one of, but not limited to, one or more thresholds, a time of peak of the performance metrics and a time of dip of the performance metrics. In other words, the computing unit 210 calculates at least one of, but not limited to, the one or more thresholds, the time of peak and the time of dip related to the performance metrics. In one embodiment, the computing unit 210 calculates the one or more thresholds for the performance metrics to define acceptable performance limits and to identify when performance metrics are deviating from the computed one or more thresholds.
[0058] In one embodiment, the computing unit 210 calculates the one or more thresholds for the performance metrics based on historical data associated with the performance metrics. Herein, the computed one or more thresholds are set to identify that the performance metrics are approaching a maximum limit of the performance metrics. The maximum limit is a limit or a predefined acceptable range which may be reached by the performance metrics. When the maximum limit is breached, one or more potential issues take place. For example, consider a performance metric (KPI) such as latency with a maximum limit of 160 milliseconds (ms). Based on this maximum limit, the computing unit 210 computes the one or more thresholds, such as 130 ms.
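One hedged way to realize this threshold computation is to take a fraction of the hard limit, floored at a typical observed value. The 0.8 fraction and the median floor are illustrative choices, not prescribed by the specification, and yield a warning level comparable to (though not identical with) the 130 ms example above:

```python
def compute_threshold(max_limit, historic_values, fraction=0.8):
    """Derive a warning threshold below a hard maximum limit.

    The 0.8 fraction and the median floor are illustrative: the
    threshold is a fraction of the hard limit, but never below the
    typical (median) observed value, so alerts stay meaningful.
    """
    historic = sorted(historic_values)
    median = historic[len(historic) // 2]
    return max(max_limit * fraction, median)

# Latency with a 160 ms hard limit, as in the example above.
threshold = compute_threshold(160, [40, 50, 60])
```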
[0059] In one embodiment, calculating the time of peak of the performance metrics involves identifying the specific time period during which the performance metrics reach their maximum value. For example, let us assume that every day from 10 AM to 12 PM a greater number of requests arrive at a network function, so that the latency of the network function increases from 50 ms to 150 ms. The period from 10 AM to 12 PM is therefore considered the specific time period during which the performance metrics reach their maximum value.
[0060] Similarly, calculating the time of dip of the performance metrics involves identifying the specific time period when the performance metrics reach their minimum value. For example, let us assume that every day from 1 PM to 3 PM fewer requests arrive at the network function, so that the latency of the network function decreases from 50 ms to 40 ms. The period from 1 PM to 3 PM is therefore considered the specific time period during which the performance metrics reach their minimum value.
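The peak/dip computation described in the two preceding paragraphs can be sketched by bucketing samples per hour of day and comparing hourly means; the `(hour, value)` sample layout is an assumed representation:

```python
from collections import defaultdict

def peak_and_dip_hours(samples):
    """Return the hours of day with the highest and lowest mean value.

    `samples` is a list of (hour, value) pairs; the layout is an
    illustrative assumption for the peak/dip computation above.
    """
    buckets = defaultdict(list)
    for hour, value in samples:
        buckets[hour].append(value)
    means = {h: sum(vals) / len(vals) for h, vals in buckets.items()}
    peak = max(means, key=means.get)
    dip = min(means, key=means.get)
    return peak, dip

# Synthetic day: latency climbs around 10-11 AM and dips around 1-2 PM.
samples = [(9, 60), (10, 150), (11, 140), (13, 40), (14, 42), (15, 55)]
peak_hour, dip_hour = peak_and_dip_hours(samples)
```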
[0061] Herein, the computing unit 210 computes at least one of, but not limited to, the one or more thresholds, the time of peak and the time of dip associated with the performance metrics based on at least one of, but not limited to, data visualization and plotting techniques. The data visualization and plotting techniques are a powerful way to communicate information clearly and effectively through graphical representations.
[0062] In one embodiment, the data visualization and plotting techniques are crucial for conveying complex information in a digestible format, such as charts and graphs. By selecting the appropriate visualization method, the computing unit 210 represents the computed one or more features, including at least one of, but not limited to, the one or more thresholds, the time of peak and the time of dip associated with the performance metrics, in a simpler form which is easily accessible by the system 108 for training the model 220. Hereafter, the computed one or more features associated with the performance metrics are referred to as computed data, and the terms can be used interchangeably without limiting the scope of the invention.
[0063] Upon computing, the configuring unit 212 of the processor 202 configures one or more hyperparameters of the model 220 in order to train the AI/ML model 220 using the computed data. In one embodiment, the configuring unit 212 configures the one or more hyperparameters of the AI/ML model 220 based on historical data related to the performance metrics. Herein, the one or more hyperparameters of the AI/ML model 220 includes at least one of, but not limited to, a learning rate, a batch size, and a number of epochs. Subsequent to configuring the one or more hyperparameters of the AI/ML model 220, the configuring unit 212 infers that the model 220 is ready for training.
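By way of a non-limiting sketch, the hyperparameter configuration described above may be illustrated as follows. The data-size heuristics used to pick the learning rate and batch size are assumptions made solely for this illustration:

```python
def configure_hyperparameters(num_samples):
    """Select hyperparameters for the AI/ML model from the size of the
    historical data set.

    The heuristics below (large data sets get a smaller learning rate and
    a larger batch size) are illustrative assumptions, not the claimed
    configuration logic.
    """
    return {
        "learning_rate": 0.001 if num_samples > 10_000 else 0.1,
        "batch_size": 64 if num_samples > 10_000 else 16,
        "epochs": 50,  # number of passes over the full training data set
    }

print(configure_hyperparameters(50_000))
```

The configuring unit would then hand this configuration to the training unit, signalling that the model is ready for training.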
[0064] Upon configuring the one or more hyperparameters, the training unit 214 of the processor 202 is configured to train the AI/ML model 220 with at least one of, but not limited to, the preprocessed data and the computed data. In one embodiment, the training unit 214 retrieves the preprocessed data from the data lake 206 for training the AI/ML model 220.
[0065] In an alternate embodiment, the system 108 includes a plurality of AI/ML models 220 from which the training unit 214 selects an appropriate AI/ML model 220 for training. Thereafter, the selected AI/ML model 220 is trained using the preprocessed data, and the computed data.
[0066] In one embodiment, for training the AI/ML model 220, the training unit 214 splits the preprocessed data and the computed data into at least one of, but not limited to, training data and testing data. Further, the training unit 214 feeds the training data to the AI/ML model 220. Based on the fed training data, the AI/ML model 220 learns one or more trends/patterns in the fed training data. Subsequent to training, the trained AI/ML model 220 is fed with the testing data in order to evaluate performance of the trained AI/ML model 220.
[0067] In one embodiment, when the trained AI/ML model 220 generates an output based on the testing data, the training unit 214 evaluates the performance of the trained AI/ML model 220. In one embodiment, validation metrics such as at least one of, but not limited to, accuracy and error are checked to evaluate the performance of the trained AI/ML model 220. In one embodiment, the output generated by the trained AI/ML model 220 is again fed back to the trained AI/ML model 220 by the training unit 214, so that based on the generated output, the trained AI/ML model 220 is trained again. In particular, after generating the output, the model 220 keeps on training and updating itself in order to achieve better output.
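As a non-limiting illustration of the train/test split and the accuracy check described in the preceding paragraphs, consider the following sketch. The 80/20 split fraction, the record layout, and the toy threshold-based predictor stand in for the trained AI/ML model and are assumptions made for this sketch only:

```python
def split_data(records, train_fraction=0.8):
    """Split the preprocessed/computed records into training and testing data."""
    cut = int(len(records) * train_fraction)
    return records[:cut], records[cut:]

def accuracy(predict, testing_data):
    """Fraction of held-out samples whose prediction matches the recorded
    label (e.g. whether a threshold breach actually occurred)."""
    correct = sum(1 for features, label in testing_data if predict(features) == label)
    return correct / len(testing_data)

# Toy records: (latency_ms, breach_occurred) pairs.
records = [(100, False), (140, True), (90, False), (150, True), (120, False),
           (135, True), (80, False), (145, True), (110, False), (138, True)]
train, held_out = split_data(records)
predict = lambda latency: latency > 130   # toy stand-in for the trained model
print(len(train), len(held_out), accuracy(predict, held_out))  # 8 2 1.0
```

When the measured accuracy is unsatisfactory, the training unit may reconfigure the hyperparameters and retrain, as described above.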
[0068] In one embodiment, while training, the training unit 214 is further configured to track the progress of the training and store the trained AI/ML model 220 in the data lake 206 by providing a distinct name to the trained AI/ML model 220. For example, the training unit 214 may provide the name model A to the trained AI/ML model 220. In one embodiment, the training unit 214 creates a searchable catalogue in the data lake 206 to allow third parties to use the trained AI/ML model 220 based on providing the training name in the searchable catalogue for different training and use cases. For example, the training unit 214 may provide the training name such as forecasting. So, when the third parties want to use the trained AI/ML model 220, the third parties may search forecasting in the searchable catalogue and then may choose the AI/ML model A for different training and use cases.
[0069] In one embodiment, based on the performance evaluation of the trained AI/ML model 220, the training unit 214 may again configure the one or more hyperparameters of the trained AI/ML model 220 to optimize the performance of the trained AI/ML model 220. In one embodiment, when the performance of the trained AI/ML model 220 is optimized, then the trained model 220 is inferred as the optimal AI/ML model 220 which can be used for further analysis.
[0070] In one embodiment, based on training, the trained AI/ML model 220 identifies at least one of, but not limited to, trends/patterns associated with the time of peak of the performance metrics and the time of dip of the performance metrics by applying one or more logics. Herein, the trends/patterns are related to the predefined thresholds breached by the performance metrics. In one embodiment, the one or more logics may include at least one of, but not limited to, k-means clustering, hierarchical clustering, Principal Component Analysis (PCA), Independent Component Analysis (ICA), deep learning logics such as Artificial Neural Networks (ANNs), Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), Long Short-Term Memory Networks (LSTMs), Generative Adversarial Networks (GANs), Q-Learning, Deep Q-Networks (DQN), Reinforcement Learning logics, etc. Further, the training unit 214 enables the trained AI/ML model 220 to learn the identified trends/patterns associated with the time of peak of the performance metrics and the time of dip of the performance metrics.
[0071] Upon training the AI/ML model 220, the forecasting engine 216 of the processor 202 is configured to forecast one or more alerts utilizing the trained AI/ML model 220. In one embodiment, based on the learnt at least one of, the trends/patterns associated with the time of peak of the performance metrics and the time of dip of the performance metrics, the forecasting engine 216 forecasts the one or more alerts. In particular, the forecasting engine 216 utilizes the trained AI/ML model 220 to identify current trends/patterns of the performance metrics.
[0072] Further, the forecasting engine 216 compares the identified trends/patterns of the performance metrics with the learnt at least one of, the trends/patterns associated with the time of peak of the performance metrics and the time of dip of the performance metrics. Based on the comparison, if the forecasting engine 216 determines that the performance metrics deviate from the learnt trends/patterns associated with the performance metrics, then the forecasting engine 216 infers the deviation of the performance metrics as a breach of the predefined thresholds based on which the one or more alerts are forecasted.
[0073] In an alternate embodiment, the forecasting engine 216 utilizes the trained AI/ML model 220 to forecast the one or more alerts based on at least one of, but not limited to, a given future date range or one or more new data sources 110. In one embodiment, if the trained AI/ML model 220 performance is optimal, the new data is provided to the AI/ML model 220 by at least one of, but not limited to, the user. In particular, the user provides the future date range so that the possible one or more alerts within the provided future date range are forecasted by the forecasting engine 216. In one embodiment, the user selects the one or more new data sources 110 so that the possible one or more alerts related to the one or more new data sources 110 are forecasted by the forecasting engine 216.
[0074] In one embodiment, in order to forecast the one or more alerts, the forecasting engine 216 utilizes the trained AI/ML model 220 to identify current performance metrics which are continuously received at the system 108 from the one or more data sources 110. Further, the forecasting engine 216 compares the identified performance metrics with the computed one or more thresholds which are set by the trained AI/ML model 220. Based on the comparison, if the forecasting engine 216 determines that the performance metrics deviate from the computed one or more thresholds, then the forecasting engine 216 infers the deviation as the forecasted one or more alerts.
[0075] For example, let us consider the performance metrics such as the latency with the maximum limit as 160 ms. Based on the maximum limit, let us assume that the one or more thresholds is 130 ms. So, when the one or more thresholds (130 ms) is breached by the performance metrics, then the forecasting engine 216 infers the breached performance metrics as the forecasted one or more alerts. In particular, the one or more alerts indicate that the performance metrics are reaching towards the maximum limit.
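The threshold comparison of the example above may be sketched, purely for illustration, as follows. The alert record layout is an assumption made for this sketch:

```python
MAX_LIMIT_MS = 160    # maximum limit of the latency performance metric
THRESHOLD_MS = 130    # computed threshold derived from the maximum limit

def forecast_alerts(latency_series, threshold=THRESHOLD_MS):
    """Return one alert for every sample that breaches the computed
    threshold, indicating the metric is approaching its maximum limit."""
    return [
        {"sample": i, "latency_ms": value,
         "message": "latency approaching maximum limit"}
        for i, value in enumerate(latency_series)
        if value > threshold
    ]

readings = [110, 125, 135, 128, 140]
print([alert["latency_ms"] for alert in forecast_alerts(readings)])  # [135, 140]
```

The forecasted alerts would then be transmitted to the user via the UE, as described below.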
[0076] Upon forecasting the one or more alerts, the forecasting engine 216 is further configured to transmit the forecasted one or more alerts to the user via the UE 102. In particular, the users are notified of the forecasted one or more alerts in real time. In one embodiment, the user is notified on the UI 306 of the UE 102. Further, based on the forecasted one or more alerts, the user performs one or more actions to resolve the forecasted one or more alerts. Herein, the one or more actions include at least one of, but not limited to, troubleshooting techniques and Root Cause Analysis (RCA) to resolve the forecasted one or more alerts without impacting the performance of the system 108 in the network 106.
[0077] The retrieving unit 208, the computing unit 210, the configuring unit 212, the training unit 214, the forecasting engine 216 in an exemplary embodiment, are implemented as a combination of hardware and programming (for example, programmable instructions) to implement one or more functionalities of the processor 202. In the examples described herein, such combinations of hardware and programming may be implemented in several different ways. For example, the programming for the processor 202 may be processor-executable instructions stored on a non-transitory machine-readable storage medium and the hardware for the processor may comprise a processing resource (for example, one or more processors), to execute such instructions. In the present examples, the memory 204 may store instructions that, when executed by the processing resource, implement the processor 202. In such examples, the system 108 may comprise the memory 204 storing the instructions and the processing resource to execute the instructions, or the memory 204 may be separate but accessible to the system 108 and the processing resource. In other examples, the processor 202 may be implemented by electronic circuitry.
[0078] FIG. 3 illustrates an exemplary architecture for the system 108, according to one or more embodiments of the present invention. More specifically, FIG. 3 illustrates the system 108 for forecasting one or more alerts. It is to be noted that the embodiment with respect to FIG. 3 will be explained with respect to the UE 102 for the purpose of description and illustration and should nowhere be construed as limited to the scope of the present disclosure.
[0079] FIG. 3 shows communication between the UE 102, the system 108, and the one or more data sources 110. For the purpose of description of the exemplary embodiment as illustrated in FIG. 3, the UE 102 uses a network protocol connection to communicate with the system 108, and the one or more data sources 110. In an embodiment, the network protocol connection is the establishment and management of communication between the UE 102, the system 108, and the one or more data sources 110 over the network 106 (as shown in FIG. 1) using a specific protocol or set of protocols. The network protocol connection includes, but is not limited to, Session Initiation Protocol (SIP), System Information Block (SIB) protocol, Transmission Control Protocol (TCP), User Datagram Protocol (UDP), File Transfer Protocol (FTP), Hypertext Transfer Protocol (HTTP), Simple Network Management Protocol (SNMP), Internet Control Message Protocol (ICMP), Hypertext Transfer Protocol Secure (HTTPS) and Terminal Network (TELNET).
[0080] In an embodiment, the UE 102 includes a primary processor 302, and a memory 304 and a User Interface (UI) 306. In alternate embodiments, the UE 102 may include more than one primary processor 302 as per the requirement of the network 106. The primary processor 302, may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, single board computers, and/or any devices that manipulate signals based on operational instructions.
[0081] In an embodiment, the primary processor 302 is configured to fetch and execute computer-readable instructions stored in the memory 304. The memory 304 may be configured to store one or more computer-readable instructions or routines in a non-transitory computer-readable storage medium, which may be fetched and executed for forecasting the one or more alerts. The memory 304 may include any non-transitory storage device including, for example, volatile memory such as RAM, or non-volatile memory such as disk memory, EPROMs, FLASH memory, unalterable memory, and the like.
[0082] In an embodiment, the User Interface (UI) 306 includes a variety of interfaces, for example, a graphical user interface, a web user interface, a Command Line Interface (CLI), and the like. The UI 306 of the UE 102 allows the user to transmit data to the system 108 for training the AI/ML model 220. Herein, the UE 102 acts as at least one data source 110. In one embodiment, the user receives at least one of, but not limited to, the forecasted one or more alerts on the UI 306 from the system 108. In one embodiment, the user may be at least one of, but not limited to, a network operator.
[0083] As mentioned earlier in FIG.2, the system 108 includes the processors 202, the memory 204 and the data lake 206, for forecasting one or more alerts, which are already explained in FIG. 2. For the sake of brevity, a similar description related to the working and operation of the system 108 as illustrated in FIG. 2 has been omitted to avoid repetition.
[0084] Further, as mentioned earlier the processor 202 includes the retrieving unit 208, the computing unit 210, the configuring unit 212, the training unit 214, and the forecasting engine 216 which are already explained in FIG. 2. Hence, for the sake of brevity, a similar description related to the working and operation of the system 108 as illustrated in FIG. 2 has been omitted to avoid repetition. The limited description provided for the system 108 in FIG. 3, should be read with the description provided for the system 108 in the FIG. 2 above, and should not be construed as limiting the scope of the present disclosure.
[0085] FIG. 4 is an exemplary architecture 400 of the system 108 for forecasting the one or more alerts, according to one or more embodiments of the present disclosure.
[0086] The architecture 400 includes the one or more data sources 110, a data integrator 402, a data pre-processing unit 404, a model training unit 406, a prediction unit 408, the data lake 206 and the UI 306 communicably coupled to each other via the network 106.
[0087] In one embodiment, the one or more data sources 110 are various origins from which the historic data pertaining to one or more alerts of performance metrics is collected, and this data is used for at least one of, analysis, AI/ML model 220 training, or other purposes. In one embodiment, the Integration Performance Management (IPM) interface is at least one data source 110. In an alternate embodiment, the IPM interface collects historic data from the one or more data sources 110 which are present within the network 106 and outside the network 106. The IPM interface periodically collects the historic data from the one or more data sources 110.
[0088] In one embodiment, the data integrator 402 collects historic data from at least one of, the IPM interface and the one or more data sources 110. In one embodiment, the data integrator 402 integrates the historic data received from the one or more data sources 110 within the network 106 and the one or more data sources 110 outside the network 106. Herein, integrating data involves combining data from the one or more data sources 110 to provide a unified view or to enable comprehensive analysis.
[0089] In one embodiment, the data pre-processing unit 404 preprocesses the historic data received from the one or more data sources 110. For example, the data undergoes preprocessing to ensure data consistency within the system 108. In particular, the preprocessing involves tasks like data cleaning, normalization, removing unwanted data like outliers, duplicate records and handling missing values.
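A minimal, non-limiting sketch of the preprocessing tasks listed above (handling missing values, removing duplicates and outliers, and normalization) is given below. The specific 3-sigma outlier rule and min-max normalization are assumptions chosen for illustration:

```python
def preprocess(values):
    """Clean a series of metric values for model training.

    Steps sketched: drop missing values, remove duplicate records,
    discard simple 3-sigma outliers, then min-max normalise to [0, 1].
    """
    cleaned = [v for v in values if v is not None]      # handle missing values
    cleaned = list(dict.fromkeys(cleaned))              # remove duplicate records
    mean = sum(cleaned) / len(cleaned)
    std = (sum((v - mean) ** 2 for v in cleaned) / len(cleaned)) ** 0.5
    cleaned = [v for v in cleaned if abs(v - mean) <= 3 * std]  # drop outliers
    lo, hi = min(cleaned), max(cleaned)
    return [(v - lo) / (hi - lo) for v in cleaned]      # normalise to [0, 1]

print(preprocess([50, 50, None, 40, 150]))  # values scaled into [0, 1]
```

The cleaned series would then be stored in the data lake for the model training unit to consume.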
[0090] In one embodiment, the data lake 206 includes a structured collection of preprocessed data that is managed and organized in a way that allows the system 108 easy access, retrieval, and manipulation. The data lake 206 is used to store, manage, and retrieve large amounts of information efficiently.
[0091] In one embodiment, the model training unit 406 trains the AI/ML model 220 using the preprocessed data stored in the data lake 206. Due to training, the trained AI/ML model 220 is used for various purposes such as forecasting one or more threshold breach alerts, identifying the one or more anomalies, pattern recognition and issue detection etc.
[0092] In one embodiment, the prediction unit 408 forecasts the one or more alerts using the AI/ML model 220 trained by the model training unit 406. By leveraging historical data pertaining to the one or more alerts of the performance metrics and advanced AI/ML model 220 training techniques, the prediction unit 408 aims to provide timely insights that empower decision-makers to take proactive actions.
[0093] FIG. 5 is a signal flow diagram illustrating the flow for forecasting the one or more alerts, according to one or more embodiments of the present disclosure.
[0094] At step 502, the system 108 retrieves historic data pertaining to the one or more alerts associated with the performance metrics from at least one of, the one or more data sources 110 present within the network 106 and the one or more data sources 110 present outside the network 106. In one embodiment, the system 108 transmits at least one of, but not limited to, an HTTP request to the one or more data sources 110 to retrieve the historic data. In one embodiment, a connection is established between the system 108 and the one or more data sources 110 before retrieving the historic data. Further, the historic data is preprocessed. Herein, the system 108 makes the preprocessed data ready for training the AI/ML model 220.
[0095] At step 504, the system 108 computes at least one of, but not limited to the one or more features. Herein, the system 108 makes the computed data ready for training the AI/ML model 220.
[0096] At step 506, the system 108 configures the one or more hyperparameters of the AI/ML model 220. In one embodiment, the user configures the one or more hyperparameters of the AI/ML model 220 via the UI 306 of the UE 102. In an alternate embodiment, the system 108 configures the one or more hyperparameters of the AI/ML model 220 based on historical data. Herein, the system 108 makes the one or more hyperparameters ready for training the AI/ML model 220.
[0097] At step 508, the system 108 trains the AI/ML model 220 based on at least one of, the preprocessed data and the computed data. In particular, the AI/ML model 220 is trained to teach the AI/ML model 220 to make predictions or forecast one or more alerts.
[0098] At step 510, the system 108 forecasts the one or more alerts using the trained model 220. The forecasted one or more alerts are provided on the UI 306 of the UE 102.
[0099] FIG. 6 is a flow diagram of a method 600 for forecasting the one or more alerts, according to one or more embodiments of the present invention. For the purpose of description, the method 600 is described with the embodiments as illustrated in FIG. 2 and should nowhere be construed as limiting the scope of the present disclosure.
[00100] At step 602, the method 600 includes the step of retrieving historic data pertaining to the one or more alerts associated with the performance metrics from the one or more data sources 110. In one embodiment, the retrieving unit 208 retrieves the data from the one or more data sources 110. In particular, the retrieving unit 208 utilizes the one or more APIs for retrieving the data from the one or more data sources 110. For example, the retrieving unit 208 retrieves data from at least one of, but not limited to, the IPM and the network functions. Herein, the data is at least one of, but not limited to, historical data related to the one or more alerts of each KPI and counter in the network 106.
[00101] Further, the historic data received from the one or more data sources 110 is combined by the retrieving unit 208. Thereafter, the integrated historic data is preprocessed by the retrieving unit 208 to ensure data consistency and quality within the system 108.
[00102] At step 604, the method 600 includes the step of computing, at least one of, one or more features. In one embodiment, the computing unit 210 selects relevant features from the retrieved historic data such as at least one of, but not limited to, one or more thresholds, the time of peak of the performance metrics and the time of dip of the performance metrics. For example, the computing unit 210 selects at least one of, but not limited to, minimum and maximum thresholds which are reached by the performance metrics, the time at which the performance metrics has reached the minimum and maximum thresholds, and a particular day/week/month at which the performance metrics has reached the minimum and maximum thresholds.
[00103] Further, the computing unit 210 computes the selected one or more features. For example, let us assume that the performance metrics of the network function, such as a response time, has a maximum limit of 200 ms. Based on the maximum limit, the computing unit 210 computes the one or more thresholds, such as 150 ms. The time at which the performance metrics is at the maximum limit (200 ms) or near the maximum limit is referred to as the time of peak of the performance metrics. The time at which the performance metrics is low is referred to as the time of dip of the performance metrics.
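By way of a non-limiting sketch, the derivation of a threshold from the maximum limit in the example above may be illustrated as follows. The 75% fraction is an assumption chosen so that a 200 ms maximum yields the 150 ms threshold of the example:

```python
def compute_threshold(max_limit_ms, fraction=0.75):
    """Compute an alerting threshold as a fraction of the maximum limit.

    The 0.75 fraction is an illustrative assumption; with a 200 ms
    maximum response time it yields the 150 ms threshold of the example.
    """
    return max_limit_ms * fraction

print(compute_threshold(200))  # 150.0
```

In an embodiment, the trained AI/ML model may later tune this computed threshold based on the learnt trends/patterns.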
[00104] Further, the configuring unit 212 configures the one or more hyperparameters of the AI/ML model 220. For example, the configuring unit 212 configures the learning rate such as 0.1 or 0.001. The configuring unit 212 sets the number of epochs, such as 50, which indicates how many times the AI/ML model 220 will work through the entire training dataset.
[00105] At step 606, the method 600 includes the step of training the AI/ML model 220 with at least one of, the computed one or more features. In one embodiment, the training unit 214 trains the AI/ML model 220 based on the at least one of, the historic data which is preprocessed data and the computed data. Based on training, the trained AI/ML model 220 identifies and learns at least one of, but not limited to, trends/patterns associated with the time of peak of performance metrics and the time of dip of the performance metrics.
[00106] At step 608, the method 600 includes the step of forecasting the one or more alerts associated with the performance metrics utilizing the trained AI/ML model 220. In particular, based on the learnt trends/patterns, the trained AI/ML model 220 tunes the computed data such as the one or more thresholds. Thereafter, the current performance metrics are identified and compared with the computed one or more thresholds which are tuned by the trained AI/ML model 220. On comparing, if the forecasting engine 216 determines that the performance metrics deviate from the computed one or more thresholds, then the forecasting engine 216 infers the deviation as the one or more alerts.
[00107] For example, let us assume that the performance metrics (KPI) of the network function, such as the response time, is 160 ms and the computed one or more thresholds is 150 ms. Based on the comparison, when the response time exceeds the computed one or more thresholds, the breach indicates that the performance metrics are reaching towards the maximum limit of the response time. In other words, when the response time exceeds the computed one or more thresholds, the one or more alerts are forecasted.
[00108] Based on the forecasted one or more alerts, the forecasting engine 216 provides notifications pertaining to the forecasted one or more alerts to the user so that the user can perform the one or more actions to resolve the forecasted one or more alerts. Advantageously, due to a proactive approach of forecasting the one or more alerts, the user addresses the issues before impacting customers, leads to reduction in the service disruptions and enhances network performance.
[00109] In yet another aspect of the present invention, a non-transitory computer-readable medium has stored thereon computer-readable instructions that, when executed by a processor 202, configure the processor 202 as follows. The processor 202 is configured to retrieve, from one or more data sources 110, historic data pertaining to one or more alerts associated with performance metrics. The processor 202 is further configured to compute at least one of, one or more features. The processor 202 is further configured to train the AI/ML model 220 with at least one of, the computed one or more features. The processor 202 is further configured to forecast, utilizing the trained AI/ML model 220, one or more alerts associated with the performance metrics.
[00110] A person of ordinary skill in the art will readily ascertain that the illustrated embodiments and steps in description and drawings (FIG.1-6) are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope and spirit of the disclosed embodiments.
[00111] The present disclosure provides technical advancements of proactive issue resolution. The invention proactively predicts the threshold breach for multiple performance metrics in the network and prompts the network operators immediately so that corrective actions may be taken before the performance metrics lead to service disruptions or impact customer satisfaction. The automation relieves network operators from performing manual tasks related to monitoring threshold breach patterns/trends, improving operational efficiency. The invention improves network service quality which results in higher customer satisfaction and retention.
[00112] The present invention offers multiple advantages over the prior art, and the above listed are a few examples to emphasize some of the advantageous features. The listed advantages are to be read in a non-limiting manner.
REFERENCE NUMERALS
[00113] Environment - 100;
[00114] User Equipment (UE) - 102;
[00115] Server - 104;
[00116] Network- 106;
[00117] System -108;
[00118] One or more data sources – 110;
[00119] Processor - 202;
[00120] Memory - 204;
[00121] Data lake – 206;
[00122] Retrieving unit – 208;
[00123] Computing unit – 210;
[00124] Configuring unit – 212;
[00125] Training unit – 214;
[00126] Forecasting engine – 216;
[00127] AI/ML Model – 220;
[00128] Primary Processor – 302;
[00129] Memory – 304;
[00130] User Interface (UI) – 306;
[00131] Data integrator– 402;
[00132] Data Preprocessing unit – 404;
[00133] Model training unit – 406;
[00134] Prediction unit – 408.
CLAIMS
We Claim:
1. A method (600) for forecasting one or more alerts, the method (600) comprising the steps of:
retrieving, by the one or more processors (202), from one or more data sources (110), historic data pertaining to one or more alerts associated with performance metrics;
computing, by the one or more processors (202), one or more features;
training, by the one or more processors (202), an Artificial Intelligence/ Machine Learning (AI/ML) model (220) with at least one of, the computed one or more features; and
forecasting, by the one or more processors (202), utilizing the trained AI/ML model (220), one or more alerts associated with the performance metrics.
2. The method (600) as claimed in claim 1, wherein the one or more alerts associated with the performance metrics are raised when predefined thresholds are breached by the performance metrics.
3. The method (600) as claimed in claim 1, wherein the performance metrics is at least one of, Key Performance Indicators (KPIs) and counters.
4. The method (600) as claimed in claim 1, wherein the one or more data sources (110) include at least one of, an Integration Performance Management (IPM) module, and network configurations.
5. The method (600) as claimed in claim 1, wherein the step of, retrieving, from one or more data sources (110), historic data pertaining to one or more alerts associated with the performance metrics, further includes the step of:
preprocessing, by the one or more processors (202), the historic data.
6. The method (600) as claimed in claim 1, wherein the one or more features includes at least one of, one or more thresholds, time of peak of the performance metrics and time of dip of the performance metrics.
7. The method (600) as claimed in claim 1, wherein the one or more processors (202), computes the one or more features based on at least one of, data visualization and plotting techniques.
8. The method (600) as claimed in claim 1, wherein the step of, training, the AI/ML model (220) with at least one of, the computed one or more features, includes the steps of:
identifying, by the one or more processors (202), trends/patterns associated with the time of peak of performance metrics and the time of dip of the performance metrics, wherein the trends/patterns are related to the predefined thresholds breached by the performance metrics; and
enabling, by the one or more processors (202), the AI/ML model (220) to learn the identified trends/patterns associated with the time of peak of the performance metrics and the time of dip of the performance metrics.
9. The method (600) as claimed in claim 1, wherein the step of, forecasting, utilizing the trained AI/ML model (220), one or more alerts associated with the performance metrics, includes the step of:
forecasting, by the one or more processors (202), utilizing the trained AI/ML model (220), the one or more alerts based on the learnt, at least one of, the trends/patterns associated with the time of peak of performance metrics and the time of dip of the performance metrics.
10. The method (600) as claimed in claim 9, wherein the forecasting of the one or more alerts is performed by the one or more processors (202) based on at least one of, a given future date range or one or more new data sources.
11. The method (600) as claimed in claim 1, wherein the step of, forecasting, utilizing the trained AI/ML model (220), one or more alerts associated with the performance metrics, further includes the step of:
transmitting, by the one or more processors (202), the forecasted one or more alerts to a user via a user interface (306).
12. The method (600) as claimed in claim 1, wherein upon training the AI/ML model (220), the method (600) includes the steps of:
storing, by the one or more processors (202), the trained AI/ML model (220) in a data lake by providing a distinct name to the trained AI/ML model (220); and
creating, by the one or more processors (202), a searchable catalogue in the data lake (206) to allow third parties to use the trained AI/ML model (220) based on providing the training name in the searchable catalogue for different training and use cases.
13. A system (108) for forecasting one or more alerts, the system (108) comprising:
a retrieving unit (208), configured to, retrieve, from one or more data sources (110), historic data pertaining to one or more alerts associated with performance metrics;
a computing unit (210), configured to, compute, one or more features;
a training unit (214), configured to, train, an Artificial Intelligence/ Machine Learning (AI/ML) model (220) with at least one of, the computed one or more features; and
a forecasting engine (216), configured to, forecast, utilizing the trained AI/ML model (220), one or more alerts associated with the performance metrics.
14. The system (108) as claimed in claim 13, wherein the one or more alerts associated with the performance metrics are raised when predefined thresholds are breached by the performance metrics.
15. The system (108) as claimed in claim 13, wherein the performance metrics include at least one of, Key Performance Indicators (KPIs) and counters.
16. The system (108) as claimed in claim 13, wherein the one or more data sources (110) include at least one of, an Integration Performance Management (IPM) module, and network configurations.
17. The system (108) as claimed in claim 13, wherein the retrieving unit (208), is further configured to:
preprocess, the historic data.
18. The system (108) as claimed in claim 13, wherein the one or more features include at least one of, one or more thresholds, time of peak of the performance metrics and time of dip of the performance metrics.
19. The system (108) as claimed in claim 13, wherein the computing unit (210), computes the one or more features based on at least one of, data visualization and plotting techniques.
20. The system (108) as claimed in claim 13, wherein the training unit (214) trains the AI/ML model (220) by:
identifying, trends/patterns associated with the time of peak of performance metrics and the time of dip of the performance metrics, wherein the trends/patterns are related to the predefined thresholds breached by the performance metrics; and
enabling, the AI/ML model (220) to learn the identified trends/patterns associated with the time of peak of the performance metrics and the time of dip of the performance metrics.
21. The system (108) as claimed in claim 13, wherein the forecasting engine (216) forecasts, utilizing the trained AI/ML model (220), the one or more alerts associated with the performance metrics, by:
forecasting, utilizing the trained AI/ML model (220), the one or more alerts based on the learnt, at least one of, the trends/patterns associated with the time of peak of performance metrics and the time of dip of the performance metrics.
22. The system (108) as claimed in claim 21, wherein the forecasting of the one or more alerts is performed by the forecasting engine (216) based on at least one of, a given future data range or one or more new data sources (110).
23. The system (108) as claimed in claim 13, wherein the forecasting engine (216) is further configured to:
transmit, the forecasted one or more alerts to a user via a user interface (306).
24. The system (108) as claimed in claim 13, wherein upon training the AI/ML model (220), the training unit (214) is further configured to:
store, the trained AI/ML model (220) in a data lake (206) by providing a distinct name to the trained AI/ML model (220); and
create, a searchable catalogue in the data lake (206) to allow third parties to use the trained AI/ML model (220) based on providing the distinct name in the searchable catalogue for different training and use cases.
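For illustration only (not part of the claims), the claimed pipeline of computing features from historic performance metrics and forecasting threshold-breach alerts can be sketched as follows. The feature set (a threshold, time of peak, time of dip), the hourly sample format, and the naive periodic forecaster standing in for the trained AI/ML model (220) are all assumptions of this sketch, not details from the specification.

```python
import statistics

def compute_features(history):
    """Compute an illustrative feature set from (hour, value) metric samples:
    a breach threshold, plus the hours of peak and dip of the metric."""
    values = [v for _, v in history]
    # Hypothetical threshold: two standard deviations above the mean.
    threshold = statistics.mean(values) + 2 * statistics.pstdev(values)
    peak_hour = max(history, key=lambda s: s[1])[0]
    dip_hour = min(history, key=lambda s: s[1])[0]
    return {"threshold": threshold, "peak_hour": peak_hour, "dip_hour": dip_hour}

def forecast_alerts(features, future_hours):
    """Forecast alert hours over a given future data range: a naive stand-in
    for the trained model that flags hours matching the learnt peak time,
    when breaches historically occurred."""
    return [h for h in future_hours if h % 24 == features["peak_hour"] % 24]
```

A usage example under these assumptions: with a flat history except for a spike at hour 18, the learnt peak hour is 18, and forecasting over the next day flags hour 42 (the same time of day).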
| # | Name | Date |
|---|---|---|
| 1 | 202321068026-STATEMENT OF UNDERTAKING (FORM 3) [10-10-2023(online)].pdf | 2023-10-10 |
| 2 | 202321068026-PROVISIONAL SPECIFICATION [10-10-2023(online)].pdf | 2023-10-10 |
| 3 | 202321068026-FORM 1 [10-10-2023(online)].pdf | 2023-10-10 |
| 4 | 202321068026-FIGURE OF ABSTRACT [10-10-2023(online)].pdf | 2023-10-10 |
| 5 | 202321068026-DRAWINGS [10-10-2023(online)].pdf | 2023-10-10 |
| 6 | 202321068026-DECLARATION OF INVENTORSHIP (FORM 5) [10-10-2023(online)].pdf | 2023-10-10 |
| 7 | 202321068026-FORM-26 [27-11-2023(online)].pdf | 2023-11-27 |
| 8 | 202321068026-Proof of Right [12-02-2024(online)].pdf | 2024-02-12 |
| 9 | 202321068026-DRAWING [08-10-2024(online)].pdf | 2024-10-08 |
| 10 | 202321068026-COMPLETE SPECIFICATION [08-10-2024(online)].pdf | 2024-10-08 |
| 11 | Abstract.jpg | 2025-01-03 |
| 12 | 202321068026-Power of Attorney [24-01-2025(online)].pdf | 2025-01-24 |
| 13 | 202321068026-Form 1 (Submitted on date of filing) [24-01-2025(online)].pdf | 2025-01-24 |
| 14 | 202321068026-Covering Letter [24-01-2025(online)].pdf | 2025-01-24 |
| 15 | 202321068026-CERTIFIED COPIES TRANSMISSION TO IB [24-01-2025(online)].pdf | 2025-01-24 |
| 16 | 202321068026-FORM 3 [29-01-2025(online)].pdf | 2025-01-29 |