
System And Method For Monitoring Performance Of A Network In Real Time

Abstract: The present invention relates to a system (120) and a method (500) for monitoring performance of a network (105) in real time. The system (120) includes a receiving unit (220) configured to receive data from a data source, a segregating unit (230) to segregate the data based on multiple sets of features, a training unit (235) to train a model utilizing at least one of the segregated data, an evaluating unit (240) to evaluate validation metrics based on a comparison of the trained model and a previously trained model, and a notification unit (245) to utilize the trained model to generate notifications, reports, and predictions pertaining to the performance of the network in real time if the validation metrics of the trained model are greater than a predefined threshold. Ref. Fig. 2


Patent Information

Application #
Filing Date
07 October 2023
Publication Number
20/2025
Publication Type
INA
Invention Field
COMPUTER SCIENCE
Status
Email
Parent Application

Applicants

JIO PLATFORMS LIMITED
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, India

Inventors

1. Aayush Bhatnagar
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
2. Ankit Murarka
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
3. Jugal Kishore
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
4. Chandra Ganveer
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
5. Sanjana Chaudhary
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
6. Gourav Gurbani
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
7. Yogesh Kumar
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
8. Avinash Kushwaha
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
9. Dharmendra Kumar Vishwakarma
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
10. Sajal Soni
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
11. Niharika Patnam
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
12. Shubham Ingle
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
13. Harsh Poddar
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
14. Sanket Kumthekar
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
15. Mohit Bhanwria
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
16. Shashank Bhushan
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
17. Vinay Gayki
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
18. Aniket Khade
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
19. Durgesh Kumar
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
20. Zenith Kumar
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
21. Gaurav Kumar
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
22. Manasvi Rajani
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
23. Kishan Sahu
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
24. Sunil meena
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
25. Supriya Kaushik De
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
26. Kumar Debashish
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
27. Mehul Tilala
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
28. Satish Narayan
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
29. Rahul Kumar
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
30. Harshita Garg
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
31. Kunal Telgote
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
32. Ralph Lobo
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
33. Girish Dange
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India

Specification

DESC:
FORM 2
THE PATENTS ACT, 1970
(39 of 1970)
&
THE PATENTS RULES, 2003

COMPLETE SPECIFICATION
(See section 10 and rule 13)
1. TITLE OF THE INVENTION
SYSTEM AND METHOD FOR MONITORING PERFORMANCE OF A NETWORK IN REAL TIME
2. APPLICANT(S)
NAME NATIONALITY ADDRESS
JIO PLATFORMS LIMITED INDIAN OFFICE-101, SAFFRON, NR. CENTRE POINT, PANCHWATI 5 RASTA, AMBAWADI, AHMEDABAD 380006, GUJARAT, INDIA
3.PREAMBLE TO THE DESCRIPTION

THE FOLLOWING SPECIFICATION PARTICULARLY DESCRIBES THE NATURE OF THIS INVENTION AND THE MANNER IN WHICH IT IS TO BE PERFORMED.

FIELD OF THE INVENTION
[0001] The present invention relates to the field of wireless communication networks, more particularly to a system and a method for monitoring performance of a network in real time.
BACKGROUND OF THE INVENTION
[0002] With the increase in the number of users, network service providers have been implementing upgrades to enhance service quality so as to keep pace with such high demand. With the advancement of technology, there is a demand for telecommunication services to introduce up-to-date features so as to enhance user experience and implement advanced monitoring mechanisms. Regular data analyses are performed to observe issues beforehand, for which many data collection and assessment practices are implemented in a network.
[0003] A probing agent is implemented in a network to actively collect probing data, preferably Streaming Data Records (SDRs), from one or more network nodes. The one or more network nodes generate the SDR, including the clear codes for failed events at a procedure level, whenever any error scenario occurs or is experienced by a Network Function (NF) or the network node. Once the SDRs are generated by the NF, they are streamed towards the probing agent, where the records are finally indexed in an Adaptive Troubleshooting and Operations Management platform (ATOM) data lake. Furthermore, the SDRs are analyzed, which aids in overall network monitoring, troubleshooting, and root cause analysis.
[0004] The problem in the current network architecture is that any service disruption scenario can be identified only after graphically visualizing the real-time streaming data and then taking appropriate action based on abnormalities in the clear codes. Such delayed issue detection leads to prolonged service disruption and customer dissatisfaction. There is no available mechanism to predict and identify prolonged service disruption by analyzing real-time data.
[0005] There is a requirement for a system and a method to analyze the real-time data, make predictions, and proactively take the required action to solve the issue.
SUMMARY OF THE INVENTION
[0006] One or more embodiments of the present disclosure provide a method and a system for monitoring performance of a network in real time.
[0007] In one aspect of the present invention, the method for monitoring performance of the network in real time is disclosed. The method includes the step of receiving, by one or more processors, streaming data from a data source, wherein the data source is at least one of a probing unit. The method includes the step of segregating, by the one or more processors, the streaming data based on multiple sets of features. The method includes the step of training, by the one or more processors, a model utilizing at least one of the segregated data based on a selection of at least one of the segregated data. The method includes the step of evaluating, by the one or more processors, validation metrics based on a comparison of the trained model and a previously trained model. The method includes the step of utilizing, by the one or more processors, the trained model to generate notifications, reports and predictions pertaining to the performance of the network in real time, if the validation metrics are greater than a predefined threshold.
[0008] In one embodiment, on receiving the streaming data, the method includes the step of performing, by the one or more processors, operations pertaining to data definition, normalization, and cleaning on the received streaming data to ensure consistency of the streaming data.
[0009] In another embodiment, each of the multiple sets of features comprises the received streaming data associated with at least a time of day or a day of week or a day of month.
[0010] In yet another embodiment, the predefined threshold corresponds to validation metrics of the previously trained model.
[0011] In yet another embodiment, the validation metrics are at least one of an accuracy and an error of the trained model over the previously trained model.
[0012] In another aspect of the present invention, the system for monitoring performance of the network in real time is disclosed. The system includes a receiving unit configured to receive data from a data source, wherein the data source is at least one of a probing unit. The system includes a segregating unit configured to segregate the data based on multiple sets of features. The system includes a training unit configured to train a model utilizing at least one of the segregated data based on a selection of at least one of the segregated data. The system includes an evaluating unit configured to evaluate validation metrics based on a comparison of the trained model and a previously trained model. The system includes a notification unit configured to utilize the trained model to generate notifications, reports, and predictions pertaining to the performance of the network in real time, if the validation metrics are greater than a predefined threshold.
[0013] In another aspect of the embodiment, a non-transitory computer-readable medium having stored thereon computer-readable instructions that, when executed by a processor, configure the processor to perform the method, is disclosed. The processor is configured to receive streaming data from a data source, wherein the data source is at least one of a probing unit. The processor is configured to segregate the streaming data based on multiple sets of features. The processor is configured to train a model utilizing at least one of the segregated data based on a selection of at least one of the segregated data. The processor is configured to evaluate validation metrics based on a comparison of the trained model and a previously trained model. The processor is configured to utilize the trained model to generate notifications, reports, and predictions pertaining to the performance of the network in real time, if the validation metrics are greater than a predefined threshold.
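The five claimed steps (receive, segregate, train, evaluate, deploy) can be sketched end to end as follows. This is a minimal illustration only: the record fields and the trivial mean-based predictor are assumptions standing in for the unspecified AI/ML model.

```python
# Minimal pipeline sketch of the five claimed steps; field names and the
# mean-based "model" are illustrative assumptions only.
from statistics import mean

def receive(stream):                      # step 1: receive streaming data
    return list(stream)

def segregate(records):                   # step 2: split by a feature (hour bucket)
    peak = [r for r in records if 9 <= r["hour"] < 18]
    off_peak = [r for r in records if not (9 <= r["hour"] < 18)]
    return {"peak": peak, "off_peak": off_peak}

def train(segment):                       # step 3: "train" a trivial mean predictor
    return mean(r["throughput"] for r in segment)

def evaluate(model, previous, actuals):   # step 4: compare error vs. previous model
    err = mean(abs(model - a) for a in actuals)
    prev_err = mean(abs(previous - a) for a in actuals)
    return err, prev_err

stream = [{"hour": 10, "throughput": 80}, {"hour": 11, "throughput": 90},
          {"hour": 2, "throughput": 20}]
segments = segregate(receive(stream))
model = train(segments["peak"])
err, prev_err = evaluate(model, previous=50.0, actuals=[85, 88])
deploy = err < prev_err                   # step 5: promote only if it beats the old model
```

The deployment gate mirrors claim language: the new model is used for notifications and predictions only when its validation metrics beat the previously trained model.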
[0014] Other features and aspects of this invention will be apparent from the following description and the accompanying drawings. The features and advantages described in this summary and in the following detailed description are not all-inclusive, and particularly, many additional features and advantages will be apparent to one of ordinary skill in the relevant art, in view of the drawings, specification, and claims hereof. Moreover, it should be noted that the language used in the specification has been principally selected for readability and instructional purposes and may not have been selected to delineate or circumscribe the inventive subject matter, resort to the claims being necessary to determine such inventive subject matter.
BRIEF DESCRIPTION OF THE DRAWINGS
[0015] The accompanying drawings, which are incorporated herein, and constitute a part of this disclosure, illustrate exemplary embodiments of the disclosed methods and systems in which like reference numerals refer to the same parts throughout the different drawings. Components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present disclosure. Some drawings may indicate the components using block diagrams and may not represent the internal circuitry of each component. It will be appreciated by those skilled in the art that disclosure of such drawings includes disclosure of electrical components, electronic components or circuitry commonly used to implement such components.
[0016] FIG. 1 is an exemplary block diagram of an environment for monitoring performance of a network in real time, according to one or more embodiments of the present disclosure;
[0017] FIG. 2 is an exemplary block diagram of a system for monitoring the performance of the network in real time, according to the one or more embodiments of the present disclosure;
[0018] FIG. 3 is a block diagram of an architecture that can be implemented in the system of FIG.2, according to the one or more embodiments of the present disclosure;
[0019] FIG. 4 is a flow chart illustrating a method for monitoring the performance of the network in real time, according to the one or more embodiments of the present disclosure; and
[0020] FIG. 5 is a flow diagram illustrating the method for monitoring the performance of the network in real time, according to the one or more embodiments of the present disclosure.
[0021] The foregoing shall be more apparent from the following detailed description of the invention.
DETAILED DESCRIPTION OF THE INVENTION
[0022] Some embodiments of the present disclosure, illustrating all its features, will now be discussed in detail. It must also be noted that as used herein and in the appended claims, the singular forms "a", "an" and "the" include plural references unless the context clearly dictates otherwise.
[0023] Various modifications to the embodiment will be readily apparent to those skilled in the art and the generic principles herein may be applied to other embodiments. However, one of ordinary skill in the art will readily recognize that the present disclosure including the definitions listed here below are not intended to be limited to the embodiments illustrated but is to be accorded the widest scope consistent with the principles and features described herein.
[0024] A person of ordinary skill in the art will readily ascertain that the illustrated steps detailed in the figures and here below are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope and spirit of the disclosed embodiments.
[0025] Embodiments of the present invention provide a system and a method for monitoring performance of a network in real time. The present invention is configured to predict a service disruption scenario prior to displaying it as graphical representations, dashboards, or any other required format to a network operator, enabling problems to be solved proactively. To achieve accurate prediction of any disruption from real-time data obtained from the network, the system is interfaced with a probing unit, such as a vProbe (virtual probe), which collects data, such as Streaming Data Records (SDRs), from one or more network nodes. The system is also configured to integrate and apply trained AI/ML models to the real-time data to make predictions and promptly take the required action to solve the issue.
[0026] Referring to FIG. 1, FIG. 1 illustrates an exemplary block diagram of an environment 100 for monitoring performance of a network 105 in real time, according to one or more embodiments of the present invention. The environment 100 includes the network 105, a User Equipment (UE) 110, a server 115, and a system 120. The UE 110 aids a user to interact with the system 120 for monitoring the performance of the network in real time. In an embodiment, the user is at least one of a network operator and a service provider. Monitoring the performance of the network in real time refers to the continuous observation and analysis of one or more network parameters to ensure optimal functioning and early detection of issues. In an embodiment, the one or more network parameters include, but are not limited to, bandwidth usage, latency, throughput, packet loss, and error rates.
[0027] For the purpose of description and explanation, the description will be explained with respect to the UE 110, or to be more specific with respect to a first UE 110a, a second UE 110b, and a third UE 110c, and should nowhere be construed as limiting the scope of the present disclosure. Each of the first UE 110a, the second UE 110b, and the third UE 110c is configured to connect to the server 115 via the network 105. In an embodiment, each of the first UE 110a, the second UE 110b, and the third UE 110c is one of, but not limited to, any electrical, electronic, or electro-mechanical equipment, or a combination of one or more such devices, such as a smartphone, a virtual reality (VR) device, an augmented reality (AR) device, a laptop, a general-purpose computer, a desktop, a personal digital assistant, a tablet computer, a mainframe computer, or any other computing device.
[0028] The network 105 includes, by way of example but not limitation, one or more of a wireless network, a wired network, an internet, an intranet, a public network, a private network, a packet-switched network, a circuit-switched network, an ad hoc network, an infrastructure network, a Public-Switched Telephone Network (PSTN), a cable network, a cellular network, a satellite network, a fiber optic network, or some combination thereof. The network 105 may include, but is not limited to, a Third Generation (3G), a Fourth Generation (4G), a Fifth Generation (5G), a Sixth Generation (6G), a New Radio (NR), a Narrow Band Internet of Things (NB-IoT), an Open Radio Access Network (O-RAN), and the like.
[0029] The server 115 may include, by way of example but not limitation, one or more of a standalone server, a server blade, a server rack, a bank of servers, a server farm, hardware supporting a part of a cloud service or system, a home server, hardware running a virtualized server, one or more processors executing code to function as a server, one or more machines performing server-side functionality as described herein, at least a portion of any of the above, or some combination thereof. In an embodiment, the server 115 is associated with an entity, which may include, but is not limited to, a vendor, a network operator, a company, an organization, a university, a lab facility, a business enterprise, a defense facility, or any other facility that provides content.
[0030] The environment 100 further includes the system 120 communicably coupled to the server 115 and each of the first UE 110a, the second UE 110b, and the third UE 110c via the network 105. The system 120 is configured for monitoring the performance of the network 105 in real time. The system 120 is adapted to be embedded within the server 115 or is embedded as the individual entity, as per multiple embodiments of the present invention.
[0031] Operational and construction features of the system 120 will be explained in detail with respect to the following figures.
[0032] FIG. 2 is an exemplary block diagram of a system 120 for monitoring the performance of the network 105 in real time, according to one or more embodiments of the present disclosure.
[0033] The system 120 includes a processor 205, a memory 210, a user interface 215, and a storage unit 250. For the purpose of description and explanation, the description will be explained with respect to one or more processors 205, or to be more specific with respect to the processor 205, and should nowhere be construed as limiting the scope of the present disclosure. The one or more processors 205, hereinafter referred to as the processor 205, may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, single board computers, and/or any devices that manipulate signals based on operational instructions.
[0034] As per the illustrated embodiment, the processor 205 is configured to fetch and execute computer-readable instructions stored in the memory 210. The memory 210 may be configured to store one or more computer-readable instructions or routines in a non-transitory computer-readable storage medium, which may be fetched and executed to create or share data packets over a network service. The memory 210 may include any non-transitory storage device including, for example, volatile memory such as RAM, or non-volatile memory such as EPROM, flash memory, and the like.
[0035] The User Interface (UI) 215 includes a variety of interfaces, for example, interfaces for a Graphical User Interface (GUI), a web user interface, a Command Line Interface (CLI), and the like. The user interface 215 facilitates communication of the system 120. In one embodiment, the user interface 215 provides a communication pathway for one or more components of the system 120. Examples of the one or more components include, but are not limited to, the UE 110 and the storage unit 250. The terms “storage unit”, “database”, and “data lake” are used interchangeably hereinafter, without limiting the scope of the disclosure.
[0036] In an embodiment, the UI 215 is also referred to as an Integrated Performance Management (IPM) interface. The IPM interface 215 removes the boundaries associated with previous performance and management frameworks. The system is configured to interact with the IPM interface 215 in the network 105 via an Application Programming Interface (API) as a medium of communication and to perform the communication process using different formats such as JavaScript Object Notation (JSON), Python, or any other compatible format.
[0037] The storage unit 250 is one of, but not limited to, a centralized database, a cloud-based database, a commercial database, an open-source database, a distributed database, an end-user database, a graphical database, a NoSQL (Not only Structured Query Language) database, an object-oriented database, a personal database, an in-memory database, a document-based database, a time series database, a wide column database, a key value database, a search database, a cache database, and so forth. The foregoing examples of storage unit 250 types are non-limiting and are not necessarily mutually exclusive (e.g., a database can be both commercial and cloud-based, or both relational and open-source).
[0038] Further, the processor 205, in an embodiment, may be implemented as a combination of hardware and programming (for example, programmable instructions) to implement one or more functionalities of the processor 205. In the examples described herein, such combinations of hardware and programming may be implemented in several different ways. For example, the programming for the processor 205 may be processor-executable instructions stored on a non-transitory machine-readable storage medium and the hardware for processor 205 may comprise a processing resource (for example, one or more processors), to execute such instructions. In the present examples, the memory 210 may store instructions that, when executed by the processing resource, implement the processor 205. In such examples, the system 120 may comprise the memory 210 storing the instructions and the processing resource to execute the instructions, or the memory 210 may be separate but accessible to the system 120 and the processing resource. In other examples, the processor 205 may be implemented by electronic circuitry.
[0039] In order for the system 120 to monitor the performance of the network 105 in real time, the processor 205 includes a receiving unit 220, a performing unit 225, a segregating unit 230, a training unit 235, an evaluating unit 240, and a notification unit 245 communicably coupled to each other. In an embodiment, operations and functionalities of the receiving unit 220, the performing unit 225, the segregating unit 230, the training unit 235, the evaluating unit 240, and the notification unit 245 can be used in combination or interchangeably.
[0040] The receiving unit 220 is configured to receive streaming data from a data source. In an embodiment, the data source is at least one of a probing unit 305 (as shown in FIG.3). In another embodiment, the data source includes, but is not limited to, network nodes. The streaming data corresponds to at least one of performance data of the network 105 and Streaming Data Records (SDR) of the network 105. The SDR captures real-time data about network activities or transactions, such as session information, packet details, or logs from one or more network functions (e.g., user sessions, calls, data transfers). The SDR includes, but is not limited to, clear codes. Each of the clear codes indicates a failure scenario such as service disruption, in the network 105. The SDRs are used for monitoring the behavior and performance of network elements and services in real time.
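For illustration, one SDR-like record might look as follows; every field name here, including the clear-code convention, is a hypothetical assumption rather than the actual SDR schema.

```python
# Hypothetical shape of one SDR-like record; field names and the clear-code
# convention are assumptions for illustration, not the actual SDR format.
sdr = {
    "timestamp": "2024-01-15T09:30:00Z",
    "network_function": "AMF",       # node that emitted the record
    "procedure": "registration",     # procedure-level event being reported
    "clear_code": "CC_4001",         # a non-success clear code flags a failure
    "session_id": "abc-123",
}

# A record whose clear code differs from the (assumed) success code marks a
# failure scenario such as a service disruption.
is_failure = sdr["clear_code"] != "CC_0000"
```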
[0041] Upon receiving the streaming data from the data source, the performing unit 225 is configured to perform operations pertaining to data definition, normalization, and cleaning on the received streaming data to ensure consistency of the streaming data. Data definition refers to structuring the data by identifying and labeling data elements to ensure that the data adheres to a specific format or schema. Normalization refers to classifying the data to eliminate redundancy and ensure efficiency, often by standardizing values and ensuring that the standardized values within the data are correct. Cleaning the received streaming data refers to removing or correcting errors, inconsistencies, and inaccuracies within the data, such as handling missing values, duplicates, or outliers.
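A minimal sketch of these three operations on a batch of hypothetical SDR-like records follows; the field names and the fill-in value are assumptions. Duplicates are dropped, missing values are filled, and types are coerced to a fixed schema.

```python
# Sketch of data definition, normalization, and cleaning on hypothetical
# SDR-like records; field names and the "UNKNOWN" fill value are assumptions.
RAW = [
    {"node": "nf-1", "clear_code": "CC_200", "latency_ms": "12.5"},
    {"node": "nf-1", "clear_code": "CC_200", "latency_ms": "12.5"},  # duplicate
    {"node": "nf-2", "clear_code": None, "latency_ms": "9.0"},       # missing value
]

def clean(records):
    seen, out = set(), []
    for r in records:
        key = tuple(sorted((k, str(v)) for k, v in r.items()))
        if key in seen:                 # cleaning: drop exact duplicates
            continue
        seen.add(key)
        out.append({
            "node": r["node"],
            "clear_code": r["clear_code"] or "UNKNOWN",  # cleaning: fill missing values
            "latency_ms": float(r["latency_ms"]),        # definition: enforce schema types
        })
    return out

cleaned = clean(RAW)
```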
[0042] Upon performing the operations by the performing unit 225, the segregating unit 230 is configured to segregate the streaming data based on multiple sets of features, i.e., to separate or arrange the streaming data into groups. In an embodiment, each of the multiple sets of features includes the received streaming data associated with at least a time of day, a day of week, or a day of month. In an exemplary embodiment for the time of day, the data is divided based on specific times, such as morning hours (6 AM to 12 PM) or evening hours (6 PM to 12 AM), which aids in understanding patterns in network usage or performance at different times of the day. In an exemplary embodiment for the day of the week, the streaming data is separated based on the day (e.g., Monday, Tuesday), which is useful for identifying trends that might occur on specific days, such as higher network traffic on weekdays versus weekends. In an exemplary embodiment for the day of the month, the data is segregated based on particular days within a month (e.g., the 1st, 15th, or end of the month), which aids in identifying recurring monthly patterns, like increased usage or network demand.
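The bucketing described above can be sketched as follows; the morning/evening boundaries follow the exemplary embodiments, while the record layout is an assumption.

```python
# Sketch of segregating timestamped measurements by time-of-day, day-of-week,
# and day-of-month features, as described in the exemplary embodiments.
from collections import defaultdict
from datetime import datetime

def feature_key(ts):
    if 6 <= ts.hour < 12:        # morning hours (6 AM to 12 PM)
        tod = "morning"
    elif 18 <= ts.hour < 24:     # evening hours (6 PM to 12 AM)
        tod = "evening"
    else:
        tod = "other"
    return tod, ts.strftime("%A"), ts.day

def segregate(records):
    groups = defaultdict(list)
    for ts, value in records:
        groups[feature_key(ts)].append(value)
    return dict(groups)

records = [
    (datetime(2024, 1, 1, 7), 40),    # Monday morning
    (datetime(2024, 1, 1, 19), 90),   # Monday evening
    (datetime(2024, 1, 8, 8), 45),    # the following Monday morning
]
groups = segregate(records)
```

Each group can then feed a separate model, e.g. a weekday-morning model trained only on weekday-morning measurements.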
[0043] By categorizing the streaming data into distinct groups based on the multiple sets of features, the system 120 can easily identify trends or anomalies. By segregating data based on the time of day or day of the week, the user analyzes peak and off-peak times. In an exemplary embodiment, higher traffic might be observed during business hours, allowing the network 105 to adjust resources accordingly. For trend analysis, the streaming data segregated based on the multiple sets of features is used for predictive analysis, which helps in forecasting future network usage and planning upgrades or expansions based on recurring patterns. By examining the streaming data for specific time frames (e.g., if traffic is unusually high during a normally quiet time), the system 120 can flag potential network issues, such as security breaches or abnormal user behavior.
[0044] Upon segregating the streaming data based on the multiple sets of features, the training unit 235 is configured to train a model utilizing at least one of the segregated data based on a selection of at least one of the segregated data. In an embodiment, the model includes, but is not limited to, an Artificial Intelligence/Machine Learning (AI/ML) model that analyzes the trends and the patterns and makes predictions based on the data. By training the model on the selection of the at least one of the segregated data, the model recognizes patterns or makes predictions specific to different conditions. In an exemplary embodiment, the segregating unit 230 categorizes the streaming data into different time periods (e.g., peak hours, off-peak hours). The training unit 235 selects the streaming data from peak hours to train a model that predicts network congestion during busy times. Conversely, the streaming data from off-peak hours is used to train a model for identifying low-traffic patterns or optimizing resource allocation during quieter periods. The model is thus trained based on the multiple sets of features of the segregated data.
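As a sketch of per-segment training, the following fits a least-squares line to a peak-hour segment. The specification leaves the AI/ML model open, so the linear model and the sample numbers here are illustrative assumptions.

```python
# Sketch of training one model per data segment; a least-squares line stands
# in for the unspecified AI/ML model, and the sample data is illustrative.
def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx        # (slope, intercept)

# Hypothetical peak-hour segment: load grows through the morning peak.
peak_hours = [9, 10, 11, 12]
peak_load  = [70, 80, 90, 100]

slope, intercept = fit_line(peak_hours, peak_load)
predict = lambda hour: slope * hour + intercept   # congestion predictor for peak hours
```

An off-peak segment would get its own `fit_line` call, yielding a separate predictor tuned to quiet-period behaviour.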
[0045] Upon training the model utilizing at least one of the segregated data, the evaluating unit 240 is configured to evaluate validation metrics based on a comparison of the trained model and a previously trained model. In an embodiment, the validation metrics are at least one of an accuracy and an error of the trained model over the previously trained model. The validation metrics are used to quantify the performance of the model. The validation metrics provide insights into how well the model is functioning and how the trained model compares to the previously trained model. The accuracy measures the proportion of correct predictions made by the trained model compared to the total number of predictions. Higher accuracy indicates better performance. The error refers to the difference between the model’s predictions and the actual outcomes. The error can be measured in various ways, such as mean squared error (MSE) or mean absolute error (MAE).
[0046] The evaluation involves comparing the trained model against the previously trained model to determine improvements or changes in performance of the network 105. By comparing the trained model with the previously trained model, the evaluating unit 240 determines whether the trained model provides better accuracy or fewer errors, which helps in understanding whether the changes or improvements made in the training process have led to a more effective model. In an exemplary embodiment, the previously trained model provides an accuracy of 85% and an error rate of 10%. After generating the trained model with the streamed data, the evaluating unit 240 determines that the trained model provides an accuracy of 88% and an error rate of 8%, which indicates an improvement in performance, as the trained model is more accurate and provides a lower error rate compared to the previously trained model.
[0047] Upon evaluating the validation metrics based on the comparison of the trained model and the previously trained model, the notification unit 245 is configured to utilize the trained model to generate notifications, reports and predictions pertaining to the performance of the network 105 in real time if the validation metrics of the trained model are greater than a predefined threshold. In an embodiment, the predefined threshold is defined by the user. The predefined threshold corresponds to the validation metrics of the previously trained model. If the validation metrics indicate that the performance of the trained model surpasses the predefined threshold, the notification unit 245 utilizes the trained model to generate outputs related to the performance of the network 105 in real time.
[0048] The notifications refer to alerts or messages that provide information about the network’s status or any significant changes. In an example, if the trained model detects a potential issue or anomaly, the notification unit 245 transmits a notification to the user via the UI 215. The reports refer to detailed documents or summaries that outline the network’s performance metrics, trends, and insights derived from the trained model. The reports help in understanding overall performance and making data-driven decisions. The predictions are forecasts about future network conditions or performance based on the model’s analysis. In an exemplary embodiment, the predefined threshold for accuracy is set at 85%. If the trained model achieves an accuracy of 88%, the notification unit 245 utilizes the trained model to produce real-time alerts if network anomalies are detected, generate detailed performance reports, and make predictions about future network traffic or potential problems.
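The promotion check described above, in which the newly trained model is used only if its validation metrics beat the predefined threshold, may be sketched non-limitingly as follows; the metric names and figures reuse the exemplary 85%/10% and 88%/8% values:

```python
# Hypothetical sketch: the trained model is promoted for real-time
# notifications only when its accuracy exceeds, and its error falls
# below, the predefined threshold (here, the previous model's metrics).

def should_promote(new_metrics, threshold):
    """Return True when the new model beats the threshold on both metrics."""
    return (new_metrics["accuracy"] > threshold["accuracy"]
            and new_metrics["error"] < threshold["error"])

threshold = {"accuracy": 0.85, "error": 0.10}   # previous model / user-defined
trained   = {"accuracy": 0.88, "error": 0.08}   # exemplary trained model

promote = should_promote(trained, threshold)  # True: use trained model for alerts
```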
[0049] If the trained model performs better than the predefined threshold, the notification unit 245 leverages the trained model to create and distribute the notifications, the reports, and the predictions about the performance of the network 105 in real time. By receiving the notifications from the notification unit 245, the system 120 proactively solves any anticipated issues in network call flows or procedures, such as UE registration, initialization and configuration of the probing unit 305, packet filtering, traffic sampling, data exporting, and reporting, which improves the performance of the network 105 in real time, improves the processing speed of the processor 205, and reduces the requirement of memory space.
[0050] FIG. 3 is a block diagram of an architecture 300 that can be implemented in the system of FIG.2, according to one or more embodiments of the present disclosure. The architecture 300 of the system 120 includes the IPM interface 215, the data lake 250, the probing unit 305, a data collector and integrator module 310, a data pre-processing module 315, a model training unit 320, a real time prediction unit 325, and a probing unit UI 330.
[0051] The architecture 300 of the system 120 is configured to interact with the probing unit 305. The probing unit 305 collects the data from the data source. In an embodiment, the data includes, but is not limited to, the Streaming Data Records (SDRs). In an embodiment, the data source includes, but is not limited to, one or more network nodes/functions. The SDRs are used for monitoring the behavior and performance of network elements and services in real time. The probing unit 305 transmits the data to the data collector and integrator module 310 via the IPM interface 215.
[0052] The data collector and integrator module 310 collects the data from the probing unit 305 and transmits the collected data to the data pre-processing module 315. The data pre-processing module 315 receives the collected data and cleans and normalizes the collected data based on the multiple sets of features. Upon normalizing the collected data, the data pre-processing module 315 transmits the normalized data to the model training unit 320.
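A non-limiting sketch of the cleaning and normalization performed by the data pre-processing module 315 is given below; the field name and the min-max normalization scheme are illustrative assumptions, not taken from the specification:

```python
# Illustrative pre-processing: drop records with missing values, then
# min-max normalize a numeric feature to the [0, 1] range. The field
# name "latency" is a hypothetical stand-in for an SDR feature.

def clean(records, required=("latency",)):
    """Keep only records in which every required field is present."""
    return [r for r in records if all(r.get(k) is not None for k in required)]

def normalize(records, field="latency"):
    """Min-max normalize the given field across all records."""
    values = [r[field] for r in records]
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0  # avoid division by zero on constant data
    return [{**r, field: (r[field] - lo) / span} for r in records]

raw = [{"latency": 20.0}, {"latency": None}, {"latency": 120.0}]
prepared = normalize(clean(raw))  # data handed to the model training unit
```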
[0053] The model training unit 320 is configured to train the model utilizing the normalized data. In an embodiment, the model includes, but is not limited to, an Artificial Intelligence/Machine Learning (AI/ML) model that analyzes the trends and the patterns and makes predictions based on the data. The model training unit 320 is further configured to access information from, and store the previously trained model in, the data lake 250.
[0054] The real time prediction unit 325 is configured to evaluate the validation metrics based on a comparison of the trained model and the previously trained model to predict the performance of the network 105 in real time. The validation metrics are used to quantify the performance of the model. The validation metrics provide insights into how well the model is functioning and how the trained model compares to the previously trained model.
[0055] Upon evaluating the validation metrics based on the comparison of the trained model and the previously trained model, the probing unit UI 330 displays dashboards and graphical representations about the notifications, the reports and the predictions pertaining to the performance of the network 105 in real time to the user.
[0056] FIG. 4 is a flow chart illustrating a method for monitoring the performance of the network 105 in real time, according to one or more embodiments of the present disclosure.
[0057] At 405, the probing unit 305 transmits the data to the data collector and integrator module 310. In an embodiment, the data includes, but is not limited to, the Streaming Data Records (SDRs). In an embodiment, the data source includes, but is not limited to, one or more network nodes/functions. The SDRs are used for monitoring the behavior and performance of network elements and services in real time. Upon collecting the data from the data source, the probing unit 305 transmits the data to the data collector and integrator module 310 via the IPM interface 215.
[0058] At 410, the data collector and integrator module 310 is configured to receive the data from the probing unit 305 based on the multiple sets of features. The data pre-processing module 315 receives the collected data and processes the collected data. The processing of the collected data refers to cleaning and normalization of the data based on the multiple sets of features. Upon normalizing the data, the data pre-processing module 315 transmits the normalized data to the model training unit 320 for training the model.
[0059] At 415, the model training unit 320 is configured to train the model by utilizing the normalized data. In an embodiment, the model includes, but is not limited to, the AI/ML model that analyzes the trends and the patterns and makes one or more predictions based on the normalized data.
[0060] At 420, upon training the model, the data lake 250 is configured to store the trained model. The model training unit 320 is further configured to access information and also store the previously trained model in the data lake 250.
[0061] At 425, the real time prediction unit 325 evaluates the validation metrics based on the comparison of the trained model and the previously trained model to predict the performance of the network 105 in real time. The validation metrics are used to quantify the performance of the trained model. The validation metrics provide insights into how well the trained model is functioning and how the trained model compares to the previously trained model.
[0062] Upon evaluating the validation metrics based on the comparison of the trained model and the previously trained model, the probing unit UI 330 displays the dashboards and graphical representations about the notifications, the reports and the predictions pertaining to the performance of the network 105 in real time to the user.
[0063] FIG. 5 is a flow diagram illustrating the method 500 for monitoring the performance of the network 105 in real time, according to one or more embodiments of the present disclosure.
[0064] At step 505, the method 500 includes the step of receiving the streaming data from the data source by the receiving unit 220. In an embodiment, the data source is at least one of a probing unit 305. In another embodiment, the data source includes, but is not limited to, network nodes. The SDRs include, but are not limited to, clear codes. Each of the clear codes indicates a failure scenario, such as a service disruption, in the network 105. The SDRs are used for monitoring the behavior and performance of network elements and services in real time.
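The clear codes carried in the SDRs may be interpreted, purely for illustration, as a mapping from code to failure scenario; the numeric codes and scenario names below are hypothetical, as the specification names only service disruption as an example:

```python
# Hypothetical mapping of SDR "clear codes" to the failure scenarios
# they indicate; the codes and labels are invented for illustration.
CLEAR_CODES = {
    101: "service disruption",
    102: "registration failure",
    103: "handover failure",
}

def describe(sdr):
    """Return the failure scenario indicated by an SDR's clear code."""
    return CLEAR_CODES.get(sdr.get("clear_code"), "unknown")

scenario = describe({"clear_code": 101})  # "service disruption"
```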
[0065] At step 510, the method 500 includes the step of segregating the streaming data based on the multiple sets of features by the segregating unit 230. Segregating based on the multiple sets of features refers to separating or arranging the streaming data into groups. In an embodiment, each of the multiple sets of features includes the received streaming data associated with at least a time of day or a day of week or a day of month.
[0066] By categorizing the streaming data into distinct groups based on the multiple sets of features, the system 120 can easily identify trends or anomalies. By segregating the data based on the time of day or the day of the week, the user analyzes peak and off-peak times.
[0067] At step 515, the method 500 includes the step of training the model utilizing at least one of the segregated data based on the selection of at least one of the segregated data by the training unit 235. In an embodiment, the model includes, but is not limited to, the Artificial Intelligence/Machine Learning (AI/ML) model that analyzes the trends and the patterns and makes predictions based on the data. By training the model on the selection of the at least one of the segregated data, the model recognizes patterns or makes predictions specific to different conditions.
[0068] At step 520, the method 500 includes the step of evaluating the validation metrics based on the comparison of the trained model and the previously trained model by the evaluating unit 240. In an embodiment, the validation metrics are at least one of an accuracy and an error of the trained model over the previously trained model. The validation metrics are used to quantify the performance of the model. The validation metrics provide insights into how well the model is functioning and how the trained model compares to the previously trained model.
[0069] The evaluation involves comparing the trained model against the previously trained model to determine improvements or changes in performance of the network 105. By comparing the trained model with the previously trained model, the evaluating unit 240 determines whether the trained model provides better accuracy or has fewer errors, which helps in understanding if the changes or improvements made in the training process have led to a more effective model.
[0070] At step 525, the method 500 includes the step of utilizing the trained model to generate notifications, reports and predictions pertaining to the performance of the network 105 in real time by the notification unit 245 if the validation metrics of the trained model are greater than the predefined threshold. In an embodiment, the predefined threshold is defined by the user. The predefined threshold corresponds to the validation metrics of the previously trained model. If the validation metrics indicate that the performance of the trained model surpasses the predefined threshold, the notification unit 245 utilizes the trained model to generate outputs related to the performance of the network 105 in real time.
[0071] If the trained model performs better than the predefined threshold, the notification unit 245 leverages the trained model to create and distribute the notifications, the reports, and the predictions about the performance of the network 105 in real time. By receiving the notifications from the notification unit 245, the system 120 proactively solves any anticipated issues in network call flows or procedures, such as UE registration, initialization and configuration of the probing unit 305, packet capture, packet filtering, traffic sampling, data exporting, and reporting, which improves the performance of the network 105 in real time, improves the processing speed of the processor 205, and reduces the requirement of memory space.
[0072] In another aspect of the embodiment, a non-transitory computer-readable medium having stored thereon computer-readable instructions that, when executed by a processor 205, cause the processor 205 to perform operations is disclosed. The processor 205 is configured to receive streaming data from a data source, wherein the data source is at least one of a probing unit 305. The processor 205 is configured to segregate the streaming data based on multiple sets of features. The processor 205 is configured to train a model utilizing at least one of the segregated data based on a selection of at least one of the segregated data. The processor 205 is configured to evaluate validation metrics based on a comparison of the trained model and a previously trained model. The processor 205 is configured to utilize the trained model to generate notifications, reports and predictions pertaining to the performance of the network in real time, if the validation metrics are greater than a predefined threshold.
[0073] A person of ordinary skill in the art will readily ascertain that the illustrated embodiments and steps in description and drawings (FIGS.1-5) are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope and spirit of the disclosed embodiments.
[0074] The present disclosure provides a technical advancement for real-time data reporting and analytics for better network monitoring. The present disclosure is configured to predict a future disruptive situation prior to its actual occurrence, in a required graphical, dash-boarding, or any other required format, to achieve proactive problem solving and accurate prediction of any disruption from real-time data obtained from the network. The present disclosure displays the graphical representations or dash-boarding of the error via the probing unit UI. The present disclosure is also configured to integrate and apply the trained AI/ML models to the real-time data to make predictions and promptly take the required action to solve the issue.
[0075] The present disclosure offers multiple advantages: efficient and simplified data integration that speeds up access to diverse datasets, improved analysis and decision-making, and improved system performance that leads to faster processing and more accurate outcomes. By receiving the notifications from the notification unit, the system proactively solves any anticipated issues in network call flows or procedures, such as UE registration, initialization and configuration of the probing unit 305, packet capture, packet filtering, traffic sampling, data exporting, and reporting, which improves the performance of the network in real time, improves the processing speed of the processor, and reduces the requirement of memory space.
[0076] The present invention offers multiple advantages over the prior art and the above listed are a few examples to emphasize on some of the advantageous features. The listed advantages are to be read in a non-limiting manner.

REFERENCE NUMERALS

[0077] Environment - 100
[0078] Network - 105
[0079] User equipment - 110
[0080] Server - 115
[0081] System - 120
[0082] Processor - 205
[0083] Memory - 210
[0084] User interface - 215
[0085] Receiving unit - 220
[0086] Performing unit - 225
[0087] Segregating unit - 230
[0088] Training unit - 235
[0089] Evaluating unit - 240
[0090] Notification unit - 245
[0091] Storage unit - 250
[0092] Architecture - 300
[0093] Probing unit - 305
[0094] Data collector and integrator module - 310
[0095] Data pre-processing module - 315
[0096] Model training unit - 320
[0097] Real time prediction unit - 325
[0098] Probing unit UI - 330
CLAIMS
We Claim:
1. A method (500) of monitoring performance of a network (105) in real time, the method (500) comprising the steps of:
receiving, by one or more processors (205), streaming data from a data source, wherein the data source is at least one of a probing unit (305);
segregating, by the one or more processors (205), the streaming data based on multiple sets of features;
training, by the one or more processors (205), a model utilizing at least one of the segregated data based on a selection of at least one of the segregated data;
evaluating, by the one or more processors (205), validation metrics based on a comparison of the trained model and a previously trained model; and
utilizing, by the one or more processors (205), the trained model to generate notifications, reports and predictions pertaining to the performance of the network (105) in real time, if the validation metrics of the trained model are greater than a predefined threshold.

2. The method (500) as claimed in claim 1, wherein on receiving the streaming data, the method (500) comprises the step of performing, by the one or more processors (205), operations pertaining to data definition, normalization, and cleaning on the received streaming data to ensure consistency of the streaming data.

3. The method (500) as claimed in claim 1, wherein each of the multiple sets of features comprises the received streaming data associated with at least a time of day or a day of week or a day of month.

4. The method (500) as claimed in claim 1, wherein the predefined threshold corresponds to the validation metrics of the previously trained model.

5. The method (500) as claimed in claim 1, wherein the validation metrics are at least one of an accuracy and an error of the trained model over the previously trained model.

6. A system (120) of monitoring performance of a network (105) in real time, the system (120) comprising:
a receiving unit (220) configured to receive, streaming data from a data source, wherein the data source is at least one of a probing unit (305);
a segregating unit (230) configured to segregate, the streaming data based on multiple sets of features;
a training unit (235) configured to train, a model utilizing at least one of the segregated data based on a selection of at least one of the segregated data;
an evaluating unit (240) configured to evaluate, validation metrics based on a comparison of the trained model and a previously trained model; and
a notification unit (245) configured to utilize, the trained model to generate notifications, reports and predictions pertaining to the performance of the network in real time, if the validation metrics of the trained model are greater than a predefined threshold.

7. The system (120) as claimed in claim 6, the system comprising a performing unit (225) configured to perform, operations pertaining to data definition, normalization, and cleaning on the received streaming data to ensure consistency of the streaming data on receiving the streaming data from the data source.

8. The system (120) as claimed in claim 6, wherein each of the multiple sets of features comprises the received streaming data associated with at least a time of day or a day of week or a day of month.

9. The system (120) as claimed in claim 6, wherein the predefined threshold corresponds to validation metrics of the previously trained model.

10. The system (120) as claimed in claim 6, wherein the validation metrics are at least one of an accuracy and an error of the trained model over the previously trained model.

Documents

Application Documents

# Name Date
1 202321067388-STATEMENT OF UNDERTAKING (FORM 3) [07-10-2023(online)].pdf 2023-10-07
2 202321067388-PROVISIONAL SPECIFICATION [07-10-2023(online)].pdf 2023-10-07
3 202321067388-POWER OF AUTHORITY [07-10-2023(online)].pdf 2023-10-07
4 202321067388-FORM 1 [07-10-2023(online)].pdf 2023-10-07
5 202321067388-FIGURE OF ABSTRACT [07-10-2023(online)].pdf 2023-10-07
6 202321067388-DRAWINGS [07-10-2023(online)].pdf 2023-10-07
7 202321067388-DECLARATION OF INVENTORSHIP (FORM 5) [07-10-2023(online)].pdf 2023-10-07
8 202321067388-FORM-26 [27-11-2023(online)].pdf 2023-11-27
9 202321067388-Proof of Right [12-02-2024(online)].pdf 2024-02-12
10 202321067388-DRAWING [06-10-2024(online)].pdf 2024-10-06
11 202321067388-COMPLETE SPECIFICATION [06-10-2024(online)].pdf 2024-10-06
12 Abstract.jpg 2024-12-07
13 202321067388-Power of Attorney [24-01-2025(online)].pdf 2025-01-24
14 202321067388-Form 1 (Submitted on date of filing) [24-01-2025(online)].pdf 2025-01-24
15 202321067388-Covering Letter [24-01-2025(online)].pdf 2025-01-24
16 202321067388-CERTIFIED COPIES TRANSMISSION TO IB [24-01-2025(online)].pdf 2025-01-24
17 202321067388-FORM 3 [28-01-2025(online)].pdf 2025-01-28