
System And Method For Predicting A Failure Event In A Network

Abstract: The present invention relates to a system (120) and a method (500) for monitoring the performance of a network (105) in real time and predicting a failure event therein. The system (120) includes a receiving unit (220) configured to receive data from a data source, a selecting unit (230) configured to select one or more features of the received data, a training unit (235) configured to train a model utilizing the received data corresponding to the selected one or more features, an evaluating unit (240) configured to evaluate validation metrics of a trained model based on a comparison of the trained model and a previously trained model, and a generating unit (245) configured to generate one or more predictions pertaining to the failure event in the network in real time. Ref. Fig. 2


Patent Information

Application #
202321067376
Filing Date
07 October 2023
Publication Number
15/2025
Publication Type
INA
Invention Field
COMPUTER SCIENCE
Status
Parent Application

Applicants

JIO PLATFORMS LIMITED
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad 380006, Gujarat, India

Inventors

1. Aayush Bhatnagar
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
2. Ankit Murarka
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
3. Jugal Kishore
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
4. Chandra Ganveer
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
5. Sanjana Chaudhary
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
6. Gourav Gurbani
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
7. Yogesh Kumar
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
8. Avinash Kushwaha
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
9. Dharmendra Kumar Vishwakarma
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
10. Sajal Soni
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
11. Niharika Patnam
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
12. Shubham Ingle
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
13. Harsh Poddar
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
14. Sanket Kumthekar
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
15. Mohit Bhanwria
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
16. Shashank Bhushan
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
17. Vinay Gayki
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
18. Aniket Khade
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
19. Durgesh Kumar
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
20. Zenith Kumar
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
21. Gaurav Kumar
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
22. Manasvi Rajani
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
23. Kishan Sahu
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
24. Sunil Meena
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
25. Supriya Kaushik De
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
26. Kumar Debashish
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
27. Mehul Tilala
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
28. Satish Narayan
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
29. Rahul Kumar
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
30. Harshita Garg
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
31. Kunal Telgote
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
32. Ralph Lobo
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
33. Girish Dange
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India

Specification

FORM 2
THE PATENTS ACT, 1970
(39 of 1970)
&
THE PATENTS RULES, 2003

COMPLETE SPECIFICATION
(See section 10 and rule 13)
1. TITLE OF THE INVENTION
SYSTEM AND METHOD FOR PREDICTING A FAILURE EVENT IN A NETWORK
2. APPLICANT(S)
NAME: JIO PLATFORMS LIMITED
NATIONALITY: INDIAN
ADDRESS: OFFICE-101, SAFFRON, NR. CENTRE POINT, PANCHWATI 5 RASTA, AMBAWADI, AHMEDABAD 380006, GUJARAT, INDIA
3. PREAMBLE TO THE DESCRIPTION

THE FOLLOWING SPECIFICATION PARTICULARLY DESCRIBES THE NATURE OF THIS INVENTION AND THE MANNER IN WHICH IT IS TO BE PERFORMED.

FIELD OF THE INVENTION
[0001] The present invention relates to the field of wireless communication networks, more particularly to a system and a method for predicting a failure event in a network.
BACKGROUND OF THE INVENTION
[0002] With the increase in the number of users, network service providers have been implementing upgrades to enhance service quality so as to keep pace with such high demand. With the advancement of technology, there is a demand for telecommunication services to incorporate up-to-date features into the scope of provision so as to enhance user experience and implement advanced monitoring mechanisms. Data is regularly analyzed to observe issues beforehand, for which many data collection and assessment practices are implemented in a network.
[0003] A probing agent is implemented in a network to actively collect probing data, preferably Streaming Data Records (SDRs), from one or more network nodes. The one or more network nodes generate the SDR, including the clear codes for failed events at a procedure level, whenever any error scenario occurs or is experienced by a Network Function (NF) or the network node. The clear codes are predefined codes that indicate the status of various network components or operations. Once the SDRs are generated by the NF, they are streamed towards the probing agent, where the records are finally indexed in an Adaptive Troubleshooting and Operations Management platform (ATOM) data lake. The SDRs are then analyzed, which aids in overall network monitoring, troubleshooting, and root cause analysis.
[0004] Traditional systems do not have a specific mechanism to analyze the historical failure data records in particular, coming from the different applications mentioned above, and are thus unable to perform appropriate Root Cause Analysis (RCA). This lack of failure analysis prevents the system from taking corrective actions in the network, leading to service disruption and customer dissatisfaction.
[0005] There is, therefore, a requirement for a system and method for proper analysis of past data regarding failure clear codes from various network performance indicators, such as KPIs, alarms, or counters, to predict forthcoming failures so that proactive measures can be taken.
SUMMARY OF THE INVENTION
[0006] One or more embodiments of the present disclosure provide a method and a system for predicting a failure event in a network.
[0007] In one aspect of the present invention, the method for predicting the failure event in the network is disclosed. The method includes the step of receiving, by one or more processors, data from a data source, wherein the data source is at least one of a probing unit. The method includes the step of selecting, by the one or more processors, one or more features of the received data. The one or more features correspond to at least a time of day or a day of week or a day of month and performance data pertaining to the failure event. The method includes the step of training, by the one or more processors, a model utilizing the received data corresponding to the selected one or more features. The method includes the step of evaluating, by the one or more processors, validation metrics of a trained model based on a comparison of the trained model and a previously trained model. The method includes the step of generating, by the one or more processors, one or more predictions pertaining to the failure event in the network in real time.
[0008] In one embodiment, the data corresponds to at least one of performance data of a network and Streaming Data Records (SDRs) of the network. The performance data is at least one of alarms, counters, clear codes, and Key Performance Indicators (KPIs).
[0009] In another embodiment, on receiving the data, the method includes the step of performing, by the one or more processors, operations pertaining to data definition, normalization, and cleaning on the received data to ensure consistency of the data.
[0010] In yet another embodiment, the validation metrics are at least one of an accuracy and an error of the trained model over the previously trained model.
[0011] In yet another embodiment, the method includes the step of transmitting, by the one or more processors, a notification pertaining to the one or more predictions to a User Equipment (UE) on generation of the one or more predictions.
[0012] In another aspect of the present invention, the system for predicting a failure event in the network is disclosed. The system includes a receiving unit configured to receive data from a data source. The data source is at least one of a probing unit. The system includes a selecting unit configured to select one or more features of the received data. The one or more features correspond to at least a time of day or a day of week or a day of month and performance data pertaining to the failure event. The system includes a training unit configured to train a model utilizing the received data corresponding to the selected one or more features. The system includes an evaluating unit configured to evaluate validation metrics of a trained model based on a comparison of the trained model and a previously trained model. The system includes a generating unit configured to generate one or more predictions pertaining to the failure event in the network in real time.
[0013] In another aspect of the embodiment, a non-transitory computer-readable medium having stored thereon computer-readable instructions executable by a processor is disclosed. The processor is configured to receive data from a data source. The data source is at least one of a probing unit. The processor is configured to select one or more features of the received data. The one or more features correspond to at least a time of day or a day of week or a day of month and performance data pertaining to the failure event. The processor is configured to train a model utilizing the received data corresponding to the selected one or more features. The processor is configured to evaluate validation metrics of a trained model based on a comparison of the trained model and a previously trained model. The processor is configured to generate one or more predictions pertaining to the failure event in the network in real time.
[0014] Other features and aspects of this invention will be apparent from the following description and the accompanying drawings. The features and advantages described in this summary and in the following detailed description are not all-inclusive, and particularly, many additional features and advantages will be apparent to one of ordinary skill in the relevant art, in view of the drawings, specification, and claims hereof. Moreover, it should be noted that the language used in the specification has been principally selected for readability and instructional purposes and may not have been selected to delineate or circumscribe the inventive subject matter, resort to the claims being necessary to determine such inventive subject matter.
BRIEF DESCRIPTION OF THE DRAWINGS
[0015] The accompanying drawings, which are incorporated herein, and constitute a part of this disclosure, illustrate exemplary embodiments of the disclosed methods and systems in which like reference numerals refer to the same parts throughout the different drawings. Components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present disclosure. Some drawings may indicate the components using block diagrams and may not represent the internal circuitry of each component. It will be appreciated by those skilled in the art that disclosure of such drawings includes disclosure of electrical components, electronic components or circuitry commonly used to implement such components.
[0016] FIG. 1 is an exemplary block diagram of an environment for predicting a failure event in a network, according to one or more embodiments of the present disclosure;
[0017] FIG. 2 is an exemplary block diagram of a system for predicting the failure event in the network, according to the one or more embodiments of the present disclosure;
[0018] FIG. 3 is a block diagram of an architecture that can be implemented in the system of FIG.2, according to the one or more embodiments of the present disclosure;
[0019] FIG. 4 is a flow chart illustrating a method for predicting the failure event in the network, according to the one or more embodiments of the present disclosure; and
[0020] FIG. 5 is a flow diagram illustrating the method for predicting the failure event in the network, according to the one or more embodiments of the present disclosure.
[0021] The foregoing shall be more apparent from the following detailed description of the invention.
DETAILED DESCRIPTION OF THE INVENTION
[0022] Some embodiments of the present disclosure, illustrating all its features, will now be discussed in detail. It must also be noted that as used herein and in the appended claims, the singular forms "a", "an" and "the" include plural references unless the context clearly dictates otherwise.
[0023] Various modifications to the embodiment will be readily apparent to those skilled in the art and the generic principles herein may be applied to other embodiments. However, one of ordinary skill in the art will readily recognize that the present disclosure including the definitions listed here below are not intended to be limited to the embodiments illustrated but is to be accorded the widest scope consistent with the principles and features described herein.
[0024] A person of ordinary skill in the art will readily ascertain that the illustrated steps detailed in the figures and here below are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope and spirit of the disclosed embodiments.
[0025] Embodiments of the present invention provide a system and a method for prediction of a failure event in a network. The present system and method are further configured to predict a future disruptive situation prior to its actual occurrence, in the required format, to achieve proactive problem solving. The system is interfaced with a probing unit which collects failure data from Streaming Data Records (SDRs) or clear codes. The system is also configured to integrate and apply trained AI/ML models to the failure data, thus forecasting future events and enabling appropriate action based on upcoming abnormalities in clear code data.
[0026] Referring to FIG. 1, FIG. 1 illustrates an exemplary block diagram of an environment 100 for predicting a failure event in a network 105, according to one or more embodiments of the present invention. The environment 100 includes the network 105, a User Equipment (UE) 110, a server 115, and a system 120. The UE 110 aids a user to interact with the system 120 for predicting the failure event in the network 105. In an embodiment, the user is at least one of a network operator and a service provider. Predicting the failure event in the network 105 involves identifying potential disruptions or performance degradations before they occur, allowing for proactive measures to mitigate impact. In an embodiment, the failure event in the network 105 includes, but is not limited to, a UE registration/deregistration failure, a Protocol Data Unit (PDU) session establishment failure, and an access network release failure.
[0027] For the purpose of description and explanation, the description will be explained with respect to the UE 110, or to be more specific with respect to a first UE 110a, a second UE 110b, and a third UE 110c, and should nowhere be construed as limiting the scope of the present disclosure. Each UE 110 from among the first UE 110a, the second UE 110b, and the third UE 110c is configured to connect to the server 115 via the network 105. In an embodiment, each of the first UE 110a, the second UE 110b, and the third UE 110c is one of, but not limited to, any electrical, electronic, or electro-mechanical equipment, or a combination of one or more such devices, such as a smartphone, a virtual reality (VR) device, an augmented reality (AR) device, a laptop, a general-purpose computer, a desktop, a personal digital assistant, a tablet computer, a mainframe computer, or any other computing device.
[0028] The network 105 includes, by way of example but not limitation, one or more of a wireless network, a wired network, an internet, an intranet, a public network, a private network, a packet-switched network, a circuit-switched network, an ad hoc network, an infrastructure network, a Public-Switched Telephone Network (PSTN), a cable network, a cellular network, a satellite network, a fiber optic network, or some combination thereof. The network 105 may include, but is not limited to, a Third Generation (3G), a Fourth Generation (4G), a Fifth Generation (5G), a Sixth Generation (6G), a New Radio (NR), a Narrow Band Internet of Things (NB-IoT), an Open Radio Access Network (O-RAN), and the like.
[0029] The server 115 may include, by way of example but not limitation, one or more of a standalone server, a server blade, a server rack, a bank of servers, a server farm, hardware supporting a part of a cloud service or system, a home server, hardware running a virtualized server, one or more processors executing code to function as a server, one or more machines performing server-side functionality as described herein, at least a portion of any of the above, or some combination thereof. In an embodiment, the entity may include, but is not limited to, a vendor, a network operator, a company, an organization, a university, a lab facility, a business enterprise, a defense facility, or any other facility that provides content.
[0030] The environment 100 further includes the system 120 communicably coupled to the server 115 and each of the first UE 110a, the second UE 110b, and the third UE 110c via the network 105. The system 120 is configured for predicting the failure event in the network 105. The system 120 is adapted to be embedded within the server 115 or to be embedded as an individual entity, as per multiple embodiments of the present invention.
[0031] Operational and construction features of the system 120 will be explained in detail with respect to the following figures.
[0032] FIG. 2 is an exemplary block diagram of a system 120 for predicting the failure event in the network 105, according to one or more embodiments of the present disclosure.
[0033] The system 120 includes a processor 205, a memory 210, a user interface 215, and a storage unit 255. For the purpose of description and explanation, the description will be explained with respect to one or more processors 205, or to be more specific with respect to the processor 205, and should nowhere be construed as limiting the scope of the present disclosure. The one or more processors 205, hereinafter referred to as the processor 205, may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, single board computers, and/or any devices that manipulate signals based on operational instructions.
[0034] As per the illustrated embodiment, the processor 205 is configured to fetch and execute computer-readable instructions stored in the memory 210. The memory 210 may be configured to store one or more computer-readable instructions or routines in a non-transitory computer-readable storage medium, which may be fetched and executed to create or share data packets over a network service. The memory 210 may include any non-transitory storage device including, for example, volatile memory such as RAM, or non-volatile memory such as EPROM, flash memory, and the like.
[0035] The User Interface (UI) 215 includes a variety of interfaces, for example, interfaces for a Graphical User Interface (GUI), a web user interface, a Command Line Interface (CLI), and the like. The user interface 215 facilitates communication of the system 120. In one embodiment, the user interface 215 provides a communication pathway for one or more components of the system 120. Examples of the one or more components include, but are not limited to, the UE 110 and the storage unit 255. The terms “storage unit”, “database”, and “data lake” are used interchangeably hereinafter, without limiting the scope of the disclosure.
[0036] The storage unit 255 is one of, but not limited to, a centralized database, a cloud-based database, a commercial database, an open-source database, a distributed database, an end-user database, a graphical database, a Non-Structured Query Language (NoSQL) database, an object-oriented database, a personal database, an in-memory database, a document-based database, a time series database, a wide column database, a key value database, a search database, a cache database, and so forth. The foregoing examples of storage unit 255 types are non-limiting and may not be mutually exclusive (e.g., a database can be both commercial and cloud-based, or both relational and open-source).
[0037] Further, the processor 205, in an embodiment, may be implemented as a combination of hardware and programming (for example, programmable instructions) to implement one or more functionalities of the processor 205. In the examples described herein, such combinations of hardware and programming may be implemented in several different ways. For example, the programming for the processor 205 may be processor-executable instructions stored on a non-transitory machine-readable storage medium and the hardware for processor 205 may comprise a processing resource (for example, one or more processors), to execute such instructions. In the present examples, the memory 210 may store instructions that, when executed by the processing resource, implement the processor 205. In such examples, the system 120 may comprise the memory 210 storing the instructions and the processing resource to execute the instructions, or the memory 210 may be separate but accessible to the system 120 and the processing resource. In other examples, the processor 205 may be implemented by electronic circuitry.
[0038] In order for the system 120 to predict the failure event in the network 105 in real time, the processor 205 includes a receiving unit 220, a performing unit 225, a selecting unit 230, a training unit 235, an evaluating unit 240, a generating unit 245, and a transmitting unit 250 communicably coupled to each other. In an embodiment, operations and functionalities of the receiving unit 220, the performing unit 225, the selecting unit 230, the training unit 235, the evaluating unit 240, the generating unit 245, and the transmitting unit 250 can be used in combination or interchangeably.
[0039] The receiving unit 220 is configured to receive data from a data source. In an embodiment, the data source is at least one of a probing unit 305 (as shown in FIG. 3). The data corresponds to at least one of performance data of the network 105 and Streaming Data Records (SDRs) of the network 105. The probing unit 305 collects the data, which reflects the current state of the network 105. In another embodiment, the data source includes, but is not limited to, one or more network nodes, an Integrated Performance Management (IPM), a Fault Management System (FMS), and a Network Management System (NMS). The one or more network nodes generate and transmit the performance data of the network 105. The performance data is at least one of alarms 305a (as shown in FIG. 3), counters 305b (as shown in FIG. 3), clear codes 305c (as shown in FIG. 3), and Key Performance Indicators (KPIs) 305d (as shown in FIG. 3).
[0040] The SDR captures real-time data about network activities or transactions, such as session information, packet details, or logs from one or more network functions (e.g., user sessions, calls, data transfers). The SDR includes, but is not limited to, the clear codes 305c. Each of the clear codes 305c indicates the failure event in the network 105, including but not limited to the UE registration/deregistration failure, the PDU session establishment failure, and the access network release failure. The SDRs are used for monitoring the behavior and performance of network elements and services in real time.
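By way of a purely illustrative, non-limiting sketch, a single SDR carrying a clear code may resemble the following record. The specification does not fix a concrete SDR schema, so every field name and value below is a hypothetical assumption.

```python
# Hypothetical sketch of one Streaming Data Record (SDR); the actual SDR
# schema is not defined in this specification.
sdr_record = {
    "timestamp": "2024-10-06T09:15:23Z",   # when the event occurred
    "network_function": "AMF",             # NF that generated the record
    "procedure": "UE_REGISTRATION",        # procedure-level context
    "status": "FAILURE",                   # outcome of the procedure
    "clear_code": 5003,                    # predefined code indicating the failure event
    "cell_id": "NR-CELL-0042",             # affected network element
}
# Such records are streamed to the probing unit 305 and indexed in the
# data lake 255 for monitoring, troubleshooting, and root cause analysis.
```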
[0041] Upon receiving the data from the data source, the performing unit 225 is configured to perform operations pertaining to data definition, normalization, and cleaning of the received data to ensure consistency of the streaming data. The data definition refers to structuring the data by identifying and labeling data elements to ensure that the data adheres to a specific format or schema. The normalization of the data refers to classifying the data to eliminate redundancy and ensure efficiency, often by standardizing values and ensuring that the standardized values within the data are correct. The cleaning of the received streaming data refers to removing or correcting errors, inconsistencies, and inaccuracies within the data, such as handling missing values, duplicates, or outliers.
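As a non-limiting illustration of these operations, the sketch below performs data definition, cleaning, and normalization over a batch of received records, assuming a pandas DataFrame with hypothetical column names.

```python
import pandas as pd

def preprocess(raw: pd.DataFrame) -> pd.DataFrame:
    df = raw.copy()
    # Data definition: coerce elements to an assumed schema so the data
    # adheres to a specific format.
    df["timestamp"] = pd.to_datetime(df["timestamp"], utc=True)
    df["clear_code"] = df["clear_code"].astype("Int64")
    df["failure_count"] = df["failure_count"].astype(float)

    # Cleaning: remove duplicates and handle missing values.
    df = df.drop_duplicates().dropna(subset=["clear_code"])
    df["failure_count"] = df["failure_count"].fillna(0.0)

    # Normalization: standardize the count values to the [0, 1] range.
    lo, hi = df["failure_count"].min(), df["failure_count"].max()
    if hi > lo:
        df["failure_count"] = (df["failure_count"] - lo) / (hi - lo)
    return df
```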
[0042] Upon performing the operations by the performing unit 225, the selecting unit 230 is configured to select one or more features of the received data. In an embodiment, the one or more features correspond to at least a time of day or a day of week or a day of month and the performance data pertaining to the failure event. Selecting the one or more features refers to separating or arranging the data. In an embodiment, each of the one or more features includes the received data associated with at least the time of day or the day of week or the day of month. In an exemplary embodiment for the time of day, the data is divided based on specific times, such as morning hours (6 AM to 12 PM) or evening hours (6 PM to 12 AM), which aids in understanding patterns in network usage or performance at different times of the day. In an exemplary embodiment for the day of the week, the data is separated based on the day (e.g., Monday, Tuesday), which is useful for identifying trends that might occur on specific days, such as higher network traffic on weekdays versus weekends. In an exemplary embodiment for the day of the month, the data is segregated based on particular days within a month (e.g., the 1st, the 15th, or the end of the month), which aids in identifying recurring monthly patterns, like increased usage or network demand.
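A minimal, non-limiting sketch of deriving such time-based features is shown below; the column names and helper function are assumptions for illustration only.

```python
import pandas as pd

def add_time_features(df: pd.DataFrame) -> pd.DataFrame:
    ts = pd.to_datetime(df["timestamp"], utc=True)
    df["hour_of_day"] = ts.dt.hour        # e.g., morning = 6-12, evening = 18-24
    df["day_of_week"] = ts.dt.dayofweek   # Monday = 0 ... Sunday = 6
    df["day_of_month"] = ts.dt.day        # e.g., the 1st, the 15th, month end
    df["is_weekend"] = df["day_of_week"] >= 5   # weekday vs. weekend trends
    return df
```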
[0043] By categorizing the data into distinct groups based on the one or more features, the system 120 can easily identify trends or anomalies. By selecting the one or more features based on the time of day or the day of the week, the user analyzes peak and off-peak times. In an exemplary embodiment, higher traffic might be observed during business hours, allowing the network 105 to adjust resources accordingly. Trend analysis refers to using the feature-based data for predictive analysis, which helps in forecasting future network usage and planning upgrades or expansions based on recurring patterns. By examining the data for specific time frames (e.g., if traffic is unusually high during a normally quiet time), the system 120 can flag potential network issues, such as security breaches or abnormal user behavior.
[0044] Upon selecting the one or more features of the received data, the training unit 235 is configured to train a model utilizing the received data corresponding to the selected one or more features. In an embodiment, the model includes, but is not limited to, an Artificial Intelligence/Machine Learning (AI/ML) model. Using the cleaned and normalized data, the training unit 235 applies machine learning algorithms (e.g., decision trees, neural networks) to learn patterns and relationships that precede failure events. The AI/ML model is used to identify the failure event or to forecast future trends. The AI/ML model utilizes a variety of ML techniques, such as supervised learning, unsupervised learning, and reinforcement learning.
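Under the assumptions above, a purely illustrative training step using a decision-tree-based learner from scikit-learn might look as follows; the feature list and the binary failure label are hypothetical, not prescribed by this specification.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

FEATURES = ["hour_of_day", "day_of_week", "day_of_month", "failure_count"]

def train_model(df):
    X, y = df[FEATURES], df["failure_within_30_min"]   # hypothetical label
    # Hold out the most recent data for validation (no shuffling, since
    # the records form a time series).
    X_train, X_val, y_train, y_val = train_test_split(
        X, y, test_size=0.2, shuffle=False)
    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(X_train, y_train)
    return model, X_val, y_val
```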
[0045] In one embodiment, supervised learning is a type of machine learning algorithm that is trained on a labeled dataset, in which each training example is paired with an output label; the supervised learning algorithm learns to map inputs to the correct output. In one embodiment, unsupervised learning is a type of machine learning algorithm that is trained on data without any labels; the unsupervised learning algorithm tries to learn the underlying structure or distribution of the data in order to discover patterns or groupings. In one embodiment, reinforcement learning is a type of machine learning where an agent learns to make decisions by performing actions in an environment to maximize cumulative reward; the agent receives feedback in the form of rewards or penalties based on the actions it takes, and it learns a policy that maps states of the environment to the best actions.
[0046] In an embodiment, the trained model learns the trends/patterns associated with the failure event. The trained AI/ML model is configured to analyze the trends over time, such as gradual increases in bandwidth usage or recurring patterns of downtime, which aids in understanding the long-term behavior of the network 105. Upon training the model utilizing the received data corresponding to the selected one or more features, the evaluating unit 240 is configured to evaluate validation metrics of the trained model based on a comparison of the trained model and a previously trained model.
[0047] In an embodiment, the validation metrics are at least one of an accuracy and an error of the trained model over the previously trained model. The validation metrics are used to quantify the performance of the model. The validation metrics provide insights into how well the model is functioning and how the trained model compares to the previously trained model. The accuracy measures the proportion of correct predictions made by the trained model compared to the total number of predictions. Higher accuracy indicates better performance. The error refers to the difference between the model’s predictions and the actual outcomes. The error can be measured in various ways, such as mean squared error (MSE) or mean absolute error (MAE).
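In their standard forms, these validation metrics may be written as follows, where y_i denotes the actual outcome, ŷ_i the model's prediction, and n the number of validation samples:

```latex
\mathrm{Accuracy} = \frac{\text{number of correct predictions}}{n}, \qquad
\mathrm{MSE} = \frac{1}{n} \sum_{i=1}^{n} \left( y_i - \hat{y}_i \right)^2, \qquad
\mathrm{MAE} = \frac{1}{n} \sum_{i=1}^{n} \left| y_i - \hat{y}_i \right|
```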
[0048] The evaluation involves comparing the trained model against the previously trained model to determine improvements or changes in predictive performance. By comparing the trained model with the previously trained model, the evaluating unit 240 determines whether the trained model provides better accuracy or fewer errors, which helps in understanding whether the changes or improvements made in the training process have led to a more effective model. In an exemplary embodiment, the previously trained model provides an accuracy of 85% and an error rate of 10%. After training the model on the data, the evaluating unit 240 determines that the trained model provides an accuracy of 88% and an error rate of 8%, which indicates an improvement in performance, as the trained model is more accurate and provides a lower error rate than the previously trained model.
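A non-limiting sketch of this comparison, reusing the held-out validation split from the training sketch above, is given below; the promotion rule is an illustrative assumption rather than the specified behavior.

```python
from sklearn.metrics import accuracy_score, mean_absolute_error

def is_improvement(new_model, prev_model, X_val, y_val) -> bool:
    new_pred = new_model.predict(X_val)
    prev_pred = prev_model.predict(X_val)
    new_acc = accuracy_score(y_val, new_pred)
    prev_acc = accuracy_score(y_val, prev_pred)
    new_err = mean_absolute_error(y_val, new_pred)
    prev_err = mean_absolute_error(y_val, prev_pred)
    # Mirrors the example above: e.g., 88% accuracy / 8% error improves on
    # 85% accuracy / 10% error from the previously trained model.
    return new_acc >= prev_acc and new_err <= prev_err
```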
[0049] Upon evaluating the validation metrics based on the comparison of the trained model and the previously trained model, the generating unit 245 is configured to generate one or more predictions pertaining to the failure event in the network 105 in real time. In an exemplary embodiment, there is a 75% probability of UE registration failure in a first region within the next 30 minutes due to increased signaling load and a drop in signal strength; users in the first region may experience issues connecting to the network 105, leading to a spike in customer support tickets and potential churn. In another exemplary embodiment, there is a 70% probability of PDU session establishment failure for new UEs in a second region within the next hour, primarily due to insufficient bearer resources and ongoing network congestion (load at 90% capacity).
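Continuing the same illustrative sketch, real-time scoring that yields probability-based predictions such as the examples above might be expressed as follows; the region column, event name, threshold, and prediction horizon are all hypothetical assumptions.

```python
def predict_failures(model, live, threshold=0.70):
    # Probability of the positive (failure) class for each live sample.
    proba = model.predict_proba(live[FEATURES])[:, 1]
    alerts = []
    for region, p in zip(live["region"], proba):
        if p >= threshold:   # e.g., a 75% UE-registration-failure probability
            alerts.append({
                "region": region,
                "event": "UE_REGISTRATION_FAILURE",
                "probability": round(float(p), 2),
                "horizon_minutes": 30,
            })
    return alerts
```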
[0050] Upon generating the one or more predictions pertaining to the failure event in the network 105, the transmitting unit 250 is configured to transmit a notification pertaining to the one or more predictions to the UE 110. In an exemplary embodiment, the notification includes, but is not limited to, a predictions summary, impact information, and recommended actions. The predictions summary is a brief description of the predicted failure event. The impact information is an explanation of how the prediction might affect the user. The recommended actions are suggestions for the users, such as switching to a different network or waiting for network conditions to improve. The transmitting unit 250 is triggered to send the notification to the UE 110 via a Short Message Service (SMS).
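A minimal sketch of composing such a notification from a generated alert follows, with the three parts described above; the wording and the helper function are illustrative assumptions.

```python
def build_notification(alert: dict) -> str:
    # Predictions summary: brief description of the predicted failure event.
    summary = (f"{int(alert['probability'] * 100)}% chance of {alert['event']} "
               f"in {alert['region']} within {alert['horizon_minutes']} min.")
    # Impact information: how the prediction might affect the user.
    impact = "You may experience issues connecting to the network."
    # Recommended actions: suggestions for the user.
    action = "Consider switching to a different network or retrying later."
    return " ".join([summary, impact, action])

# The transmitting unit 250 would then deliver this text to the UE 110
# over SMS through the operator's messaging gateway (assumed here).
```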
[0051] FIG. 3 is a block diagram of an architecture 300 that can be implemented in the system of FIG.2, according to one or more embodiments of the present disclosure. The architecture 300 of the system 120 includes the data lake 255, the probing unit 305, a data collector and integrator module 310, a data pre-processing module 315, a model training unit 320, a future failure computation unit 325, and a probing unit UI 330. The probing unit 305 includes, but is not limited to, the alarms 305a, the counters 305b, the clear codes 305c, and the KPIs 305d.
[0052] The architecture 300 of the system 120 is configured to interact with the probing unit 305. The probing unit 305 transmits the data to the data collector and integrator module 310. In an embodiment, the data includes, but is not limited to, the performance data of the network 105 and the SDRs of the network 105. In an embodiment, the data source includes, but is not limited to, one or more network nodes/functions. The SDRs are used for monitoring the behavior and performance of network elements and services in real time. The alarms monitor real-time conditions and trigger alerts when thresholds are breached. The counters track specific events or metrics over time, providing insights into network performance. In an exemplary embodiment, registration counters count the number of UE 110 registration attempts and successes, and session establishment counters track the number of successful and failed PDU session establishments. The data can help identify the trends and the patterns related to the failure events in the network 105.
[0053] In an embodiment, the KPIs measure the effectiveness and efficiency of the network 105. In an exemplary embodiment, the registration success rate includes the percentage of successful UE registrations and the percentage of successful PDU session establishments. Regular monitoring of KPIs helps the users understand overall network health and detect early signs of degradation. The probing unit 305 combines the alarms 305a, the counters 305b, the clear codes 305c, and the KPIs 305d to provide a comprehensive view of network performance. By utilizing these components, the user can effectively manage network operations and quickly respond to issues. The probing unit 305 transmits the data to the data collector and integrator module 310.
[0054] The data collector and integrator module 310 collects the data from the probing unit 305 and transmits the collected data to the data pre-processing module 315. The data pre-processing module 315 receives the collected data and cleans and normalizes it based on the one or more features. Upon normalizing the collected data, the data pre-processing module 315 transmits the normalized data to the model training unit 320.
[0055] The model training unit 320 is configured to train the model utilizing the normalized data. In an embodiment, the model includes, but is not limited to, an Artificial Intelligence/Machine Learning (AI/ML) model to analyze the trends and the patterns and make predictions based on the data. The model training unit 320 is further configured to access and store the previously trained model in the data lake 255.
[0056] The future failure computation unit 325 is configured to evaluate the validation metrics based on a comparison of the trained model and the previously trained model to predict the failure event in the network 105. The validation metrics are used to quantify the performance of the model. The validation metrics provide insights into how well the model is functioning and how the trained model compares to the previously trained model. The future failure computation unit 325 is configured to generate the one or more predictions pertaining to the failure event in the network 105 in real time.
[0057] Upon evaluating the validation metrics based on the comparison of the trained model and the previously trained model, the probing unit UI 330 displays dashboards and graphical representations about the notifications, the reports and the one or more predictions pertaining to the failure event in the network 105 in real time to the user.
[0058] FIG. 4 is a flow chart illustrating a method for predicting the failure event in the network 105 in real time, according to one or more embodiments of the present disclosure.
[0059] At 405, the probing unit 305 provides the data. In an embodiment, the data includes, but is not limited to, the performance data of the network 105 and the SDRs of the network 105. In an embodiment, the data source includes, but is not limited to, one or more network nodes/functions. The alarms monitor real-time conditions and trigger alerts when thresholds are breached. The counters track specific events or metrics over time, providing insights into network performance. The probing unit 305 combines the alarms 305a, the counters 305b, the clear codes 305c, and the KPIs 305d to provide a comprehensive view of the network performance. By utilizing these components, the user can effectively manage network operations and quickly respond to issues. The probing unit 305 transmits the data to the data collector and integrator module 310.
[0060] At 410, the data collector and integrator module 310 collects the data from the probing unit 305 and transmits the collected data to the data pre-processing module 315. The data pre-processing module 315 receives the collected data and cleans and normalizes it based on the one or more features. Upon normalizing the collected data, the data pre-processing module 315 transmits the normalized data to the model training unit 320.
[0061] At 415, the model training unit 320 is configured to train the model utilizing the normalized data. In an embodiment, the model includes, but is not limited to, an Artificial Intelligence/Machine Learning (AI/ML) model to analyze the trends and the patterns and make predictions based on the data. The model training unit 320 is further configured to access and store the previously trained model in the data lake 255.
[0062] At 420, the future failure computation unit 325 is configured to evaluate the validation metrics based on the comparison of the trained model and the previously trained model to predict the failure event in the network 105. The validation metrics are used to quantify the performance of the model. The validation metrics provide insights into how well the model is functioning and how the trained model compares to the previously trained model. The future failure computation unit 325 is configured to generate the one or more predictions pertaining to the failure event in the network 105 in real time.
[0063] At 425, upon evaluating the validation metrics based on the comparison of the trained model and the previously trained model, the probing unit UI 330 displays the dashboards and the graphical representations about the notifications, the reports and the one or more predictions pertaining to the failure event in the network 105 in real time to the user.
[0064] FIG. 5 is a flow diagram illustrating the method 500 for predicting the failure event in the network 105 in real time, according to one or more embodiments of the present disclosure.
[0065] At step 505, the method 500 includes the step of receiving the data from the data source by the receiving unit 220. In an embodiment, the data source is at least one of the probing unit 305. The data corresponds to at least one of performance data of the network 105 and SDRs of the network 105. The probing unit 305 collects the data, which ensures the current state of the network 105. In another embodiment, the data source includes, but is not limited to, one or more network nodes, the IPM, the FMS, and the NMS. The one or more network nodes generate and transmit the performance data of the network 105. The performance data is at least one of the alarms 305a, the counters 305b, the clear codes 305c, and the KPIs 305d.
[0066] Upon receiving the data from the data source, the performing unit 225 is configured to perform the operations pertaining to data definition, normalization, and cleaning of the received data to ensure consistency of the streaming data. The data definition refers to structuring the data by identifying and labeling data elements to ensure that the data adheres to the specific format or the schema. The normalization of the data refers to classifying the data to eliminate redundancy and ensure efficiency, often by standardizing values and ensuring that the standardized values within the data are correct. The cleaning of the received streaming data refers to removing or correcting errors, inconsistencies, and inaccuracies within the data, such as handling missing values, duplicates, or outliers.
[0067] At step 510, the method 500 includes the step of selecting the one or more features of the received data by the selecting unit 230. In an embodiment, the one or more features correspond to at least the time of day or the day of week or the day of month and the performance data pertaining to the failure event. The one or more features refer to separating or arranging the data. In an embodiment, each of the one or more features includes the received data associated with at least the time of day or the day of week or the day of month.
[0068] By categorizing the data into distinct groups based on the one or more features, the system 120 can easily identify trends or anomalies. By selecting the one or more features based on the time of day or the day of the week, the user analyzes peak and off-peak times. Trend analysis refers to using the feature-based data for predictive analysis, which helps in forecasting future network usage and planning upgrades or expansions based on recurring patterns. By examining the data for specific time frames (e.g., if traffic is unusually high during a normally quiet time), the system 120 can flag potential network issues, such as security breaches or abnormal user behavior.
[0069] At step 515, the method 500 includes the step of training the model utilizing the received data corresponding to the selected one or more features by the training unit 235. In an embodiment, the model includes, but is not limited to, an Artificial Intelligence/Machine Learning (AI/ML) model. Using the cleaned and normalized data, the training unit 235 applies machine learning algorithms (e.g., decision trees, neural networks) to learn the patterns and relationships that precede failure events. The AI/ML model is used to identify the failure event or to forecast future trends. The AI/ML model utilizes a variety of ML techniques, such as supervised learning, unsupervised learning, and reinforcement learning.
[0070] At step 520, the method 500 includes the step of evaluating the validation metrics of the trained model based on the comparison of the trained model and the previously trained model by the evaluating unit 240. In an embodiment, the validation metrics are at least one of the accuracy and the error of the trained model over the previously trained model. The validation metrics are used to quantify the performance of the model. The validation metrics provide insights into how well the model is functioning and how the trained model compares to the previously trained model.
[0071] At step 525, the method 500 includes the step of generating the one or more predictions pertaining to the failure event in the network 105 in real time by the generating unit 245. In an exemplary embodiment, there is a 75% probability of UE registration failure in a first region within the next 30 minutes due to increased signaling load and a drop in signal strength; users in the first region may experience issues connecting to the network 105, leading to a spike in customer support tickets and potential churn. In another exemplary embodiment, there is a 70% probability of PDU session establishment failure for new UEs in a second region within the next hour, primarily due to insufficient bearer resources and ongoing network congestion (load at 90% capacity).
[0072] Upon generating the one or more predictions pertaining to the failure event in the network 105, the transmitting unit 250 is configured to transmit the notification pertaining to the one or more predictions to the UE 110. In an exemplary embodiment, the notification includes, but is not limited to, a predictions summary, impact information, and recommended actions. The predictions summary is a brief description of the predicted failure event. The impact information is an explanation of how the prediction might affect the user. The recommended actions are suggestions for the users, such as switching to a different network or waiting for network conditions to improve. The transmitting unit 250 is triggered to send the notification to the UE 110 via the Short Message Service (SMS).
[0073] In another aspect of the embodiment, a non-transitory computer-readable medium having stored thereon computer-readable instructions executable by a processor 205 is disclosed. The processor 205 is configured to receive data from a data source. The data source is at least one of a probing unit 305. The processor 205 is configured to select one or more features of the received data. The one or more features correspond to at least a time of day or a day of week or a day of month and performance data pertaining to the failure event. The processor 205 is configured to train a model utilizing the received data corresponding to the selected one or more features. The processor 205 is configured to evaluate validation metrics of a trained model based on a comparison of the trained model and a previously trained model. The processor 205 is configured to generate one or more predictions pertaining to the failure event in the network in real time.
[0074] A person of ordinary skill in the art will readily ascertain that the illustrated embodiments and steps in description and drawings (FIGS.1-5) are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope and spirit of the disclosed embodiments.
[0075] The present disclosure provides a technical advancement for predicting the failure event in the network. The present system and method are configured to predict a future disruptive situation prior to its actual occurrence, in the required format, to achieve proactive problem solving. The system is interfaced with the probing unit, which collects the data from Streaming Data Records (SDRs) or clear codes. The system is also configured to integrate and apply the trained AI/ML models to the failure data, thus forecasting future events and enabling appropriate action based on upcoming abnormalities in the clear code data.
[0076] The present disclosure offers multiple advantages: accurate future failure detection; enhanced system performance leading to faster processing and more accurate outcomes; proactive resolution of anticipated issues in network call flows or procedures, such as UE registration/deregistration failure, PDU session establishment failure, and access network release failure; reliable root cause analysis; improved network performance; and savings in cost and resources.
[0077] The present invention offers multiple advantages over the prior art and the above listed are a few examples to emphasize on some of the advantageous features. The listed advantages are to be read in a non-limiting manner.

REFERENCE NUMERALS

[0078] Environment - 100
[0079] Network - 105
[0080] User equipment - 110
[0081] Server - 115
[0082] System - 120
[0083] Processor - 205
[0084] Memory - 210
[0085] User interface - 215
[0086] Receiving unit - 220
[0087] Performing unit - 225
[0088] Selecting unit - 230
[0089] Training unit - 235
[0090] Evaluating unit - 240
[0091] Generating unit - 245
[0092] Transmitting unit - 250
[0093] Storage unit - 255
[0094] Architecture - 300
[0095] Probing unit - 305
[0096] Alarms - 305a
[0097] Counters - 305b
[0098] SDR/clear codes - 305c
[0099] KPIs - 305d
[00100] Data collector and integrator module - 310
[00101] Data pre-processing module - 315
[00102] Model training unit - 320
[00103] Future failure computation unit - 325
[00104] Probing unit UI - 330
CLAIMS
We Claim:
1. A method (500) of prediction of a failure event in a network (105), the method (500) comprising the steps of:
receiving, by one or more processors (205), data from a data source, wherein the data source is at least one of a probing unit (305);
selecting, by the one or more processors (205), one or more features of the received data, wherein the one or more features correspond to at least a time of day or a day of week or a day of month and performance data pertaining to the failure event;
training, by the one or more processors (205), a model utilizing the received data corresponding to the selected one or more features;
evaluating, by the one or more processors (205), validation metrics of a trained model based on a comparison of the trained model and a previously trained model; and
generating, by the one or more processors (205), one or more predictions pertaining to the failure event in the network (105) in real time.

2. The method (500) as claimed in claim 1, wherein the data corresponds to at least one of performance data of a network and Streaming Data Records (SDR) of the network, wherein the performance data is at least one of alarms (305a), counters (305b), clear codes (305c), and Key Performance Indicators (KPIs) (305d).

3. The method (500) as claimed in claim 1, wherein on receiving the data, the method comprises the step of performing, by the one or more processors, operations pertaining to data definition, normalization, and cleaning on the received data to ensure consistency of the data.

4. The method (500) as claimed in claim 1, wherein the validation metrics are at least one of an accuracy and an error of the trained model over the previously trained model.

5. The method (500) as claimed in claim 1, wherein the method (500) comprises the step of transmitting, by the one or more processors (205), a notification pertaining to the one or more predictions to a User Equipment (UE) (110) on generation of the one or more predictions.

6. A system (120) for predicting a failure event in a network (105), the system (120) comprising:
a receiving unit (220) configured to receive data from a data source, wherein the data source is at least one of a probing unit (305);
a selecting unit (230) configured to select one or more features of the received data, wherein the one or more features correspond to at least a time of day or a day of week or a day of month and performance data pertaining to the failure event;
a training unit (235) configured to train a model utilizing the received data corresponding to the selected one or more features;
an evaluating unit (240) configured to evaluate validation metrics of a trained model based on a comparison of the trained model and a previously trained model; and
a generating unit (245) configured to generate one or more predictions pertaining to the failure event in the network (105) in real time.

7. The system (120) as claimed in claim 6, wherein the data corresponds to at least one of performance data of a network and Streaming Data Records (SDR) of the network, wherein the performance data is at least one of alarms (305a), counters (305b), clear codes (305c), and Key Performance Indicators (KPIs) (305d).

8. The system (120) as claimed in claim 6, wherein on receiving the data, the system (120) comprises a performing unit (225) configured to perform operations pertaining to data definition, normalization, and cleaning on the received data to ensure consistency of the data.

9. The system (120) as claimed in claim 6, wherein the validation metrics are at least one of an accuracy and an error of the trained model over the previously trained model.

10. The system (120) as claimed in claim 6, comprising a transmitting unit (250) configured to transmit a notification pertaining to the one or more predictions to a User Equipment (UE) (110) on generation of the one or more predictions.

Documents

Application Documents

# Name Date
1 202321067376-STATEMENT OF UNDERTAKING (FORM 3) [07-10-2023(online)].pdf 2023-10-07
2 202321067376-PROVISIONAL SPECIFICATION [07-10-2023(online)].pdf 2023-10-07
3 202321067376-POWER OF AUTHORITY [07-10-2023(online)].pdf 2023-10-07
4 202321067376-FORM 1 [07-10-2023(online)].pdf 2023-10-07
5 202321067376-FIGURE OF ABSTRACT [07-10-2023(online)].pdf 2023-10-07
6 202321067376-DRAWINGS [07-10-2023(online)].pdf 2023-10-07
7 202321067376-DECLARATION OF INVENTORSHIP (FORM 5) [07-10-2023(online)].pdf 2023-10-07
8 202321067376-FORM-26 [27-11-2023(online)].pdf 2023-11-27
9 202321067376-Proof of Right [12-02-2024(online)].pdf 2024-02-12
10 202321067376-DRAWING [06-10-2024(online)].pdf 2024-10-06
11 202321067376-COMPLETE SPECIFICATION [06-10-2024(online)].pdf 2024-10-06
12 Abstract.jpg 2024-12-07
13 202321067376-Power of Attorney [24-01-2025(online)].pdf 2025-01-24
14 202321067376-Form 1 (Submitted on date of filing) [24-01-2025(online)].pdf 2025-01-24
15 202321067376-Covering Letter [24-01-2025(online)].pdf 2025-01-24
16 202321067376-CERTIFIED COPIES TRANSMISSION TO IB [24-01-2025(online)].pdf 2025-01-24
17 202321067376-FORM 3 [31-01-2025(online)].pdf 2025-01-31