
System And Method For Predicting One Or More Anomalies Of At Least One User Equipment

Abstract: A system (120) and method (400) for predicting one or more anomalies associated with one or more UEs (110) are disclosed. The system (120) includes a receiving unit (220) configured to receive a first set of data corresponding to each of the one or more UEs (110) from one or more data sources. The system (120) includes a training unit (225) configured to train a model utilizing the received data to identify trends in the received data. The system (120) includes a predicting unit (230) configured to predict the one or more anomalies based on the identified trends in the received data. The system (120) optimizes resource allocation, minimizing operational costs and resource wastage. Ref. Fig. 2


Patent Information

Application #
202321071952
Filing Date
20 October 2023
Publication Number
17/2025
Publication Type
INA
Invention Field
COMPUTER SCIENCE

Applicants

JIO PLATFORMS LIMITED
OFFICE-101, SAFFRON, NR. CENTRE POINT, PANCHWATI 5 RASTA, AMBAWADI, AHMEDABAD 380006, GUJARAT, INDIA

Inventors

1. Aayush Bhatnagar
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
2. Ankit Murarka
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
3. Jugal Kishore
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
4. Chandra Ganveer
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
5. Sanjana Chaudhary
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
6. Gourav Gurbani
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
7. Yogesh Kumar
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
8. Avinash Kushwaha
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
9. Dharmendra Kumar Vishwakarma
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
10. Sajal Soni
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
11. Niharika Patnam
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
12. Shubham Ingle
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
13. Harsh Poddar
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
14. Sanket Kumthekar
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
15. Mohit Bhanwria
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
16. Shashank Bhushan
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
17. Vinay Gayki
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
18. Aniket Khade
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
19. Durgesh Kumar
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
20. Zenith Kumar
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
21. Gaurav Kumar
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
22. Manasvi Rajani
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
23. Kishan Sahu
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
24. Sunil Meena
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
25. Supriya Kaushik De
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
26. Kumar Debashish
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
27. Mehul Tilala
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
28. Satish Narayan
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
29. Rahul Kumar
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
30. Harshita Garg
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
31. Kunal Telgote
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
32. Ralph Lobo
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India
33. Girish Dange
Reliance Corporate Park, Thane - Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India

Specification

FORM 2
THE PATENTS ACT, 1970
(39 of 1970)
&
THE PATENTS RULES, 2003

COMPLETE SPECIFICATION
(See section 10 and rule 13)
1. TITLE OF THE INVENTION
SYSTEM AND METHOD FOR PREDICTING ONE OR MORE ANOMALIES OF AT LEAST ONE USER EQUIPMENT
2. APPLICANT(S)
NAME: JIO PLATFORMS LIMITED
NATIONALITY: INDIAN
ADDRESS: OFFICE-101, SAFFRON, NR. CENTRE POINT, PANCHWATI 5 RASTA, AMBAWADI, AHMEDABAD 380006, GUJARAT, INDIA
3. PREAMBLE TO THE DESCRIPTION

THE FOLLOWING SPECIFICATION PARTICULARLY DESCRIBES THE NATURE OF THIS INVENTION AND THE MANNER IN WHICH IT IS TO BE PERFORMED.

FIELD OF THE INVENTION
[0001] The present invention relates to the field of wireless communication networks, and more particularly to a system and a method for predicting one or more anomalies associated with one or more User Equipment (UEs).
BACKGROUND OF THE INVENTION
[0002] Network engineers encounter a significant challenge in predicting subscriber-wise failures for future dates in the telecom sector. This problem arises due to the complex and dynamic nature of telecom networks, where numerous factors can contribute to service degradation and failures. Telecom networks generate vast amounts of data, including call records, network logs, and performance metrics, and managing and processing this data to predict subscriber-wise failures accurately is a daunting task. Predicting failures accurately at the individual subscriber level requires a granular understanding of network performance, user behavior, and historical patterns, and traditional methods often struggle to provide accurate subscriber-wise predictions.
[0003] Furthermore, the telecom networks are subject to frequent fluctuations in network conditions, traffic patterns, and user demands. Predicting subscriber-wise failures while considering these dynamic factors is a significant challenge. Service providers need to allocate one or more resources efficiently to address potential failures proactively. Inaccurate predictions can lead to resource wastage or service disruptions.
[0004] Hence, there exists a need for an improved method and system that enables prediction of subscriber-wise failures in real-time by combining a model with network performance data and subscriber behavior.

SUMMARY OF THE INVENTION
[0005] One or more embodiments of the present disclosure provide a method and a system for predicting one or more anomalies associated with one or more User Equipment (UEs).
[0006] In one aspect of the present invention, the method for predicting the one or more anomalies associated with one or more UEs is disclosed. The method includes the step of receiving, by one or more processors, a first set of data corresponding to each of the one or more UEs from one or more data sources. The method includes the step of training, by the one or more processors, a model utilizing the received data to identify trends in the received data. The method includes the step of predicting, by the one or more processors, the one or more anomalies based on the identified trends in the received data.
[0007] In one embodiment, the method further includes the step of receiving, by the one or more processors, a second set of data corresponding to each of the one or more UEs from the one or more data sources in real time. The method further includes the step of comparing, by the one or more processors, the second set of data with the one or more predicted anomalies. The method further includes the step of identifying, by the one or more processors, one or more deviations based on a comparison of the second set of data with the one or more predicted anomalies. The method further includes the step of initiating, by the one or more processors, one or more actions in response to identification of the one or more deviations.
[0008] In another embodiment, the first set of data pertains to at least one of Call Detail Records (CDR), historical network performance data, historical trends and subscriber behavior data.
[0009] In yet another embodiment, the second set of data pertains to at least one of real time network data and real time subscriber behavior data.
[0010] In yet another embodiment, the one or more data sources include at least a file input, a source path, an input stream, a Hyper Text Transfer Protocol 2 (HTTP 2), a Distributed File System (DFS), and a Network Attached Storage (NAS).
[0011] In yet another embodiment, the one or more actions correspond to at least one of transmitting an alert to network engineers and allocating one or more resources to address the one or more predicted anomalies.
[0012] In another aspect of the present invention, the system for predicting one or more anomalies associated with one or more UEs is disclosed. The system includes a receiving unit configured to receive a first set of data corresponding to each of the one or more UEs from one or more data sources. The system includes a training unit configured to train a model utilizing the received data to identify trends in the received data. The system includes a predicting unit configured to predict the one or more anomalies based on the identified trends in the received data.
[0013] In another aspect of the embodiment, a non-transitory computer-readable medium having computer-readable instructions stored thereon that are executable by a processor is disclosed. The processor is configured to receive a first set of data corresponding to each of the one or more UEs from one or more data sources. The processor is configured to train a model utilizing the received data to identify trends in the received data. The processor is configured to predict the one or more anomalies based on the identified trends in the received data.
[0014] Other features and aspects of this invention will be apparent from the following description and the accompanying drawings. The features and advantages described in this summary and in the following detailed description are not all-inclusive, and particularly, many additional features and advantages will be apparent to one of ordinary skill in the relevant art, in view of the drawings, specification, and claims hereof. Moreover, it should be noted that the language used in the specification has been principally selected for readability and instructional purposes and may not have been selected to delineate or circumscribe the inventive subject matter, resort to the claims being necessary to determine such inventive subject matter.
BRIEF DESCRIPTION OF THE DRAWINGS
[0015] The accompanying drawings, which are incorporated herein, and constitute a part of this disclosure, illustrate exemplary embodiments of the disclosed methods and systems in which like reference numerals refer to the same parts throughout the different drawings. Components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present disclosure. Some drawings may indicate the components using block diagrams and may not represent the internal circuitry of each component. It will be appreciated by those skilled in the art that disclosure of such drawings includes disclosure of electrical components, electronic components or circuitry commonly used to implement such components.
[0016] FIG. 1 is an exemplary block diagram of an environment for predicting one or more anomalies associated with one or more User Equipment (UEs), according to one or more embodiments of the present disclosure;
[0017] FIG. 2 is an exemplary block diagram of a system for predicting one or more anomalies associated with the one or more UEs, according to the one or more embodiments of the present disclosure;
[0018] FIG. 3 is a block diagram of an architecture that can be implemented in the system of FIG.2, according to the one or more embodiments of the present disclosure;
[0019] FIG. 4 is a flow chart illustrating a method for predicting one or more anomalies associated with the one or more UEs, according to the one or more embodiments of the present disclosure; and
[0020] FIG. 5 is a flow diagram illustrating the method for predicting one or more anomalies associated with the one or more UEs, according to the one or more embodiments of the present disclosure.
[0021] The foregoing shall be more apparent from the following detailed description of the invention.
DETAILED DESCRIPTION OF THE INVENTION
[0022] Some embodiments of the present disclosure, illustrating all its features, will now be discussed in detail. It must also be noted that as used herein and in the appended claims, the singular forms "a", "an" and "the" include plural references unless the context clearly dictates otherwise.
[0023] Various modifications to the embodiment will be readily apparent to those skilled in the art and the generic principles herein may be applied to other embodiments. However, one of ordinary skill in the art will readily recognize that the present disclosure including the definitions listed here below are not intended to be limited to the embodiments illustrated but is to be accorded the widest scope consistent with the principles and features described herein.
[0024] A person of ordinary skill in the art will readily ascertain that the illustrated steps detailed in the figures and here below are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope and spirit of the disclosed embodiments.
[0025] Referring to FIG. 1, FIG. 1 illustrates an exemplary block diagram of an environment 100 for predicting one or more anomalies associated with one or more User Equipment (UEs) 110, according to one or more embodiments of the present invention. The environment 100 includes a network 105, the one or more UEs 110, a server 115, and a system 120. The terms "UE" and "one or more UEs" are used interchangeably hereinafter without limiting the scope of the disclosure. The UE 110 aids a user in interacting with the system 120 for predicting the one or more anomalies associated with the one or more UEs 110. In an embodiment, the user is at least one of a network engineer and a service provider. Predicting the one or more anomalies associated with the one or more UEs refers to the process of identifying unusual patterns or behaviors in the performance or usage of the UEs 110 that deviate from their expected operational norms. The prediction of the one or more anomalies involves analyzing various metrics and data points related to the UEs 110 to detect potential issues that could affect their functionality or the overall network performance.
[0026] For the purpose of description and explanation, the description will be explained with respect to the one or more UEs 110, or, to be more specific, with respect to a first UE 110a, a second UE 110b, and a third UE 110c, and should nowhere be construed as limiting the scope of the present disclosure. Each UE 110 from among the first UE 110a, the second UE 110b, and the third UE 110c is configured to connect to the server 115 via the network 105. In an embodiment, each of the first UE 110a, the second UE 110b, and the third UE 110c is one of, but not limited to, any electrical, electronic, or electro-mechanical equipment, or a combination of one or more such devices, such as a smartphone, a virtual reality (VR) device, an augmented reality (AR) device, a laptop, a general-purpose computer, a desktop, a personal digital assistant, a tablet computer, a mainframe computer, or any other computing device.
[0027] The network 105 includes, by way of example but not limitation, one or more of a wireless network, a wired network, an internet, an intranet, a public network, a private network, a packet-switched network, a circuit-switched network, an ad hoc network, an infrastructure network, a Public-Switched Telephone Network (PSTN), a cable network, a cellular network, a satellite network, a fiber optic network, or some combination thereof. The network 105 may include, but is not limited to, a Third Generation (3G), a Fourth Generation (4G), a Fifth Generation (5G), a Sixth Generation (6G), a New Radio (NR), a Narrow Band Internet of Things (NB-IoT), an Open Radio Access Network (O-RAN), and the like.
[0028] The server 115 may include, by way of example but not limitation, one or more of a standalone server, a server blade, a server rack, a bank of servers, a server farm, hardware supporting a part of a cloud service or system, a home server, hardware running a virtualized server, one or more processors executing code to function as a server, one or more machines performing server-side functionality as described herein, at least a portion of any of the above, or some combination thereof. In an embodiment, the server 115 is associated with an entity that may include, but is not limited to, a vendor, a network operator, a company, an organization, a university, a lab facility, a business enterprise, a defense facility, or any other facility that provides content.
[0029] The environment 100 further includes the system 120 communicably coupled to the server 115 and to each of the first UE 110a, the second UE 110b, and the third UE 110c via the network 105. The system 120 is configured for predicting the one or more anomalies associated with the one or more UEs 110. The system 120 is adapted to be embedded within the server 115 or to be deployed as an individual entity, as per multiple embodiments of the present invention.
[0030] Operational and construction features of the system 120 will be explained in detail with respect to the following figures.
[0031] FIG. 2 is an exemplary block diagram of the system 120 for predicting the one or more anomalies associated with the one or more UEs 110, according to one or more embodiments of the present disclosure.
[0032] The system 120 includes a processor 205, a memory 210, a user interface 215, and a database 250. For the purpose of description and explanation, the description will be explained with respect to one or more processors 205, or, to be more specific, with respect to the processor 205, and should nowhere be construed as limiting the scope of the present disclosure. The one or more processors 205, hereinafter referred to as the processor 205, may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, single board computers, and/or any devices that manipulate signals based on operational instructions.
[0033] As per the illustrated embodiment, the processor 205 is configured to fetch and execute computer-readable instructions stored in the memory 210. The memory 210 may be configured to store one or more computer-readable instructions or routines in a non-transitory computer-readable storage medium, which may be fetched and executed to create or share data packets over a network service. The memory 210 may include any non-transitory storage device including, for example, volatile memory such as RAM, or non-volatile memory such as EPROM, flash memory, and the like.
[0034] The User Interface (UI) 215 includes a variety of interfaces, for example, interfaces for a Graphical User Interface (GUI), a web user interface, a Command Line Interface (CLI), and the like. The user interface 215 facilitates communication of the system 120. In one embodiment, the user interface 215 provides a communication pathway for one or more components of the system 120. Examples of the one or more components include, but are not limited to, the UE 110, and the database 250.
[0035] The database 250 is one of, but not limited to, a centralized database, a cloud-based database, a commercial database, an open-source database, a distributed database, an end-user database, a graphical database, a Not only Structured Query Language (NoSQL) database, an object-oriented database, a personal database, an in-memory database, a document-based database, a time series database, a wide column database, a key value database, a search database, a cache database, and so forth. The foregoing examples of database types are non-limiting and may not be mutually exclusive; e.g., a database can be both commercial and cloud-based, or both relational and open-source.
[0036] Further, the processor 205, in an embodiment, may be implemented as a combination of hardware and programming (for example, programmable instructions) to implement one or more functionalities of the processor 205. In the examples described herein, such combinations of hardware and programming may be implemented in several different ways. For example, the programming for the processor 205 may be processor-executable instructions stored on a non-transitory machine-readable storage medium and the hardware for processor 205 may comprise a processing resource (for example, one or more processors), to execute such instructions. In the present examples, the memory 210 may store instructions that, when executed by the processing resource, implement the processor 205. In such examples, the system 120 may comprise the memory 210 storing the instructions and the processing resource to execute the instructions, or the memory 210 may be separate but accessible to the system 120 and the processing resource. In other examples, the processor 205 may be implemented by electronic circuitry.
[0037] In order for the system 120 to predict the one or more anomalies associated with the one or more User Equipment (UEs) 110, the processor 205 includes a receiving unit 220, a training unit 225, a predicting unit 230, a comparing unit 235, an identifying unit 240, and an initiating unit 245 communicably coupled to each other. In an embodiment, operations and functionalities of the receiving unit 220, the training unit 225, the predicting unit 230, the comparing unit 235, the identifying unit 240, and the initiating unit 245 can be used in combination or interchangeably.
[0038] The receiving unit 220 is configured to receive a first set of data corresponding to each of the one or more UEs 110 from one or more data sources. In an embodiment, the first set of data pertains to at least one of Call Detail Records (CDRs), historical network performance data, historical trends, and subscriber behavior data. The CDRs are structured data entries that capture information about each call, call duration, call quality, or communication event that occurs within the network 105. The historical network performance data refers to the accumulated records and metrics that reflect the operational performance over time. The historical network performance data provides insights into how the network 105 has functioned in the past, enabling operators to analyze trends, identify issues, and make informed decisions for the future. The historical network performance data includes bandwidth utilization, data throughput, latency, packet loss, and jitter. The historical trends refer to patterns or changes observed in data over a specific period of time. The historical trends provide insights into how particular metrics or behaviors evolve, allowing for analysis and forecasting based on past performance. The subscriber behavior data refers to the collection and analysis of information about how the subscribers interact with the network 105. The subscriber behavior data provides insights into individual and group usage patterns, preferences, and behaviors, which are utilized for optimizing network performance, enhancing user experience, and informing business strategies.
[0039] The receiving unit 220 receives the first set of data from the one or more data sources. In an embodiment, the one or more data sources include at least a file input, a source path, an input stream, a Hyper Text Transfer Protocol 2 (HTTP 2), a Distributed File System (DFS), and a Network Attached Storage (NAS). The file input refers to reading data from files stored locally or on the server 115. The files can be in different formats, including, but not limited to, Comma Separated Values (CSV), JavaScript Object Notation (JSON), eXtensible Markup Language (XML), or text files. In an exemplary embodiment, the data is stored in a CSV file, and the receiving unit 220 fetches the data for processing. The receiving unit 220 receives the first set of data from the file and loads the first set of data into the memory 210 for further processing.
[0040] The source path typically refers to the directory or network location where the data files are stored. The receiving unit 220 fetches the data by following the provided file path. In an exemplary embodiment for the source path, the system 120 stores images in a specific directory. The receiving unit 220 navigates to a designated source path and retrieves all files that match the required criteria (e.g., .jpg images). The input stream refers to continuous data that is read in real-time from a stream of data (e.g., data being transmitted over the network 105 or generated by sensors). In an exemplary embodiment, the data is being received from an Application Programming Interface (API) or a live data stream, and the receiving unit 220 fetches the continuous data in real-time. The HTTP 2 is a protocol used for communication over the web, which improves upon HTTP/1.1 by offering multiplexing and better performance for handling multiple requests. In an exemplary embodiment, the receiving unit 220 receives the data from the web server using the HTTP 2. The receiving unit 220 uses the HTTP 2 to fetch the data from remote web servers or APIs.
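For illustration only, and not forming part of the claimed subject matter, the following Python sketch shows one way a receiving unit might fetch per-UE data over HTTP 2. It assumes the httpx library installed with HTTP/2 support; the endpoint URL and the shape of the returned data are hypothetical.

    # Illustrative sketch only: fetching per-UE records over HTTP 2.
    # Assumes httpx is installed with HTTP/2 support (pip install "httpx[http2]").
    # The endpoint URL and the returned JSON structure are hypothetical.
    import httpx

    def fetch_first_set_of_data(endpoint: str) -> list:
        # http2=True negotiates HTTP/2 where the server supports it,
        # enabling the multiplexing described above.
        with httpx.Client(http2=True, timeout=30.0) as client:
            response = client.get(endpoint)
            response.raise_for_status()
            return response.json()

    records = fetch_first_set_of_data("https://example.invalid/api/ue-metrics")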
[0041] The DFS is a distributed file system used to store large datasets across multiple machines. The DFS is commonly used in big data environments to store and retrieve large amounts of data. The receiving unit 220 connects to the DFS to receive the file for processing. The NAS is a dedicated file storage system that provides Local Area Network (LAN) access to the data. The NAS allows multiple users or systems to access the data from a centralized storage device. The receiving unit 220 fetches the data from a NAS device over the network 105. In an exemplary embodiment, if the data is stored on the NAS, the receiving unit 220 fetches the data via network protocols. Upon receiving the first set of data from the one or more data sources, the received data is stored in a data frame for further processing.
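As a minimal, non-limiting sketch of loading a file input into a data frame for further processing, the fragment below assumes the pandas library and hypothetical CDR-style column names that are not part of this specification.

    # Illustrative sketch only: reading a first set of data from a CSV
    # file input into a data frame. Column names are assumptions.
    import pandas as pd

    def load_first_set_of_data(csv_path: str) -> pd.DataFrame:
        return pd.read_csv(
            csv_path,
            parse_dates=["timestamp"],          # assumed timestamp column
            dtype={"subscriber_id": "string"},  # assumed subscriber identifier
        )

    first_set = load_first_set_of_data("cdr_history.csv")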
[0042] Upon receiving the first set of data from the one or more data sources, the receiving unit 220 further performs data pre-processing on the received data. The data pre-processing includes data cleaning, data normalization, and data transformation. The data cleaning is the process of identifying and correcting inaccuracies, inconsistencies, and errors in a dataset to improve its quality and reliability for analysis. The data normalization is the process of organizing and structuring data to reduce redundancy and improve data integrity within the database 250 or the dataset. The data normalization involves transforming data into a standard format, making the data consistent and easier to analyze. The data transformation is the process of converting the data from one format or structure into another to make it suitable for analysis, integration, or storage. The data transformation process is essential in data preparation, allowing organizations to clean, standardize, and manipulate the data to meet specific analytical or operational requirements.
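A condensed, purely illustrative sketch of the cleaning, normalization, and transformation steps described above is given below; it assumes pandas and the same hypothetical columns as the previous sketch and does not represent the claimed implementation.

    # Illustrative sketch only: data cleaning, normalization, and
    # transformation on the assumed columns of the first set of data.
    import pandas as pd

    def preprocess(df: pd.DataFrame) -> pd.DataFrame:
        # Data cleaning: remove duplicates and rows with missing metrics.
        df = df.drop_duplicates().dropna(subset=["latency_ms", "data_usage_gb"])
        # Data normalization: scale numeric metrics to a common 0-1 range.
        for col in ["latency_ms", "data_usage_gb"]:
            span = df[col].max() - df[col].min()
            df[col + "_norm"] = (df[col] - df[col].min()) / span if span else 0.0
        # Data transformation: derive features suited to trend analysis.
        df["day_of_week"] = df["timestamp"].dt.day_name()
        return df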
[0043] Upon pre-processing the received data, the training unit 225 is configured to train a model utilizing the first set of data to identify trends in the first set of data received. In an embodiment, the model includes, but is not limited to, an Artificial Intelligence/Machine Learning (AI/ML) model. The model is trained using a specific data set, referred to as the first set of data. The dataset typically contains historical records, which include various features related to network performance, subscriber behavior, and dynamic conditions. During training, the model analyzes the first set of data to uncover patterns and relationships.
[0044] By examining historical performance, the model can establish trends and identify conditions that typically precede failures, such as recurring spikes in latency or unusual error rates. The model analyzes individual subscriber behavior, such as call duration, frequency of use, data consumption, and types of services used (e.g., Voice over Internet Protocol (VoIP), streaming). Identifying changes in the subscriber's behavior, such as a sudden increase in data usage or frequent disconnections, helps the model to predict the one or more anomalies. The model also recognizes recurring patterns in the data, such as seasonal usage fluctuations, peak traffic times, or performance degradation indicators.
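The specification does not mandate any particular AI/ML model. Purely as one hedged illustration, the sketch below fits a scikit-learn IsolationForest to historical latency and data-usage features so that departures from the learned trends can later be scored; the choice of model and of feature columns is an assumption made for this example only.

    # Illustrative sketch only: training one possible AI/ML model on the
    # first set of data. IsolationForest is an example choice, not the
    # model prescribed by this specification.
    import pandas as pd
    from sklearn.ensemble import IsolationForest

    FEATURES = ["latency_ms", "data_usage_gb"]  # assumed feature columns

    def train_model(first_set: pd.DataFrame) -> IsolationForest:
        model = IsolationForest(contamination=0.01, random_state=42)
        model.fit(first_set[FEATURES])
        return model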
[0045] Upon training the model utilizing the first set of data and identifying the trends in the first set of data received, the predicting unit 230 is configured to predict the one or more anomalies based on the identified trends in the first set of data received. If the model detects an unexpected increase in latency that exceeds one or more thresholds, the predicting unit 230 flags this as one or more anomalies, indicating a potential service issue. In an exemplary embodiment, during the training phase, the model learns that high latency (e.g., above 200 ms) often correlates with increased packet loss, that heavy data usage (e.g., over 10 GB per day) typically occurs during weekends, and that service disruptions frequently follow a combination of high latency and heavy data usage.
[0046] After training, the model identifies the trends, such as normal behavior and seasonal patterns. The normal behavior refers to latency remaining below 150 ms during weekdays, with typical data usage around 5 GB per day. The seasonal patterns refer to increased call volume and data usage during holidays, with a corresponding rise in latency. The first set of data for the current week is fed into the model. In an example, the latency for Monday is at 160 ms, with data usage at 4 GB; the latency for Tuesday spikes to 250 ms, with data usage at 15 GB; and the latency for Wednesday returns to 170 ms, with data usage at 5 GB. On Tuesday, the model observes that the latency of 250 ms exceeds the established one or more thresholds of 200 ms and that the data usage of 15 GB is significantly higher than the typical daily usage of 5-10 GB. Owing to this, the model flags Tuesday's data as the one or more anomalies, indicating potential service issues due to high latency and unusual data usage.
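The numeric example above can be restated as a short worked check; the 200 ms latency threshold and the 10 GB daily usage figure are the ones already given in the preceding paragraphs.

    # Worked restatement of the Monday-to-Wednesday example above, using
    # the thresholds stated in the text (200 ms latency, 10 GB per day).
    LATENCY_THRESHOLD_MS = 200
    DAILY_USAGE_THRESHOLD_GB = 10

    week = {
        "Monday":    {"latency_ms": 160, "data_usage_gb": 4},
        "Tuesday":   {"latency_ms": 250, "data_usage_gb": 15},
        "Wednesday": {"latency_ms": 170, "data_usage_gb": 5},
    }

    for day, metrics in week.items():
        anomalous = (metrics["latency_ms"] > LATENCY_THRESHOLD_MS
                     or metrics["data_usage_gb"] > DAILY_USAGE_THRESHOLD_GB)
        print(day, "anomaly" if anomalous else "normal")
    # Only Tuesday is flagged, matching the description above.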
[0047] In an embodiment, the receiving unit 220 is configured to receive the second set of data corresponding to each of the one or more UEs 110 from the one or more data sources in real time. The second set of data refers to a new stream of information that is received in real time, specifically concerning the ongoing performance and activities of UEs 110 in the network 105. In an embodiment, the second set of data pertains to at least one of real time network data and real time subscriber behavior data. The real time network data provides live metrics on network performance, such as bandwidth usage, latency, packet loss, and error rates. The real time subscriber behavior data tracks individual user activities, such as call patterns, data consumption, app usage, and service interactions. The receiving unit 220 is configured for continuous monitoring, enabling it to collect and process data without delays, which is essential for identifying issues as they arise.
[0048] Upon receiving the second set of data corresponding to each of the one or more UEs 110 from the one or more data sources in real time, the comparing unit 235 is configured to compare the second set of data with the one or more predicted anomalies. The comparing unit 235 is configured to analyze the second set of data against the previously predicted one or more anomalies to assess network performance and subscriber behavior. The predicted one or more anomalies are deviations from established norms identified by the model during the training phase. The comparing unit 235 systematically evaluates the second set of data against the predicted one or more anomalies. The comparing unit 235 checks whether the current network metrics align with the conditions under which the one or more anomalies were predicted. In an example, if the predicted one or more anomalies suggest latency exceeding 250 ms due to high traffic, the comparing unit 235 looks for the current latency levels in the second set of data. If the comparison indicates that the second set of data reflects conditions matching the predicted one or more anomalies, the comparing unit 235 confirms the anomaly.
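One possible, non-limiting way to express the comparison of the real-time second set of data with previously predicted anomalies is sketched below; representing a predicted anomaly as a metric name and a threshold is an assumption made for illustration.

    # Illustrative sketch only: comparing a real-time second set of data
    # against predicted anomaly conditions represented as thresholds.
    from dataclasses import dataclass

    @dataclass
    class PredictedAnomaly:
        metric: str        # e.g. "latency_ms" (assumed metric name)
        threshold: float   # value beyond which the anomaly was predicted

    def confirm_anomalies(second_set: dict, predicted: list) -> list:
        confirmed = []
        for anomaly in predicted:
            observed = second_set.get(anomaly.metric)
            if observed is not None and observed > anomaly.threshold:
                confirmed.append((anomaly, observed))
        return confirmed

    confirmed = confirm_anomalies(
        {"latency_ms": 260.0, "data_usage_gb": 6.0},
        [PredictedAnomaly("latency_ms", 250.0)],
    )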
[0049] Upon comparing the second set of data with the one or more predicted anomalies, the identifying unit 240 is configured to identify one or more deviations. The one or more deviations are identified based on a comparison of the second set of data with the one or more predicted anomalies. The identifying unit 240 analyzes the results of the comparison between the second set of data and the predicted one or more anomalies, identifying any significant deviations that may indicate issues affecting network performance or subscriber experience. The identifying unit 240 evaluates the outcomes of the comparison to pinpoint specific deviations. The identifying unit 240 may categorize the one or more deviations as, for example, critical deviations and moderate deviations. The critical deviations refer to severe anomalies that require immediate action (e.g., latency far exceeding expected limits). The moderate deviations refer to issues that may not require urgent intervention but should be monitored (e.g., slight increases in data usage that are above normal but manageable). The identifying unit 240 analyzes the current network conditions and subscriber behavior patterns.
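The categorization of deviations into critical and moderate might, for illustration only, be written as a simple rule on how far an observed value exceeds its threshold; the 25 percent margin below is an assumption, not a figure from this specification.

    # Illustrative sketch only: categorizing a deviation as critical or
    # moderate. The 1.25x margin is an assumption for this example.
    def categorize_deviation(observed: float, threshold: float) -> str:
        if observed > threshold * 1.25:
            return "critical"   # severe anomaly requiring immediate action
        if observed > threshold:
            return "moderate"   # above normal, to be monitored
        return "none"

    print(categorize_deviation(260.0, 200.0))  # critical
    print(categorize_deviation(210.0, 200.0))  # moderate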
[0050] Upon identifying the one or more deviations based on the comparison of the second set of data with the one or more predicted anomalies, the initiating unit 245 is configured to initiate one or more actions in response to identification of the one or more deviations. In an embodiment, the one or more actions correspond to at least one of transmitting an alert to network engineers and allocating one or more resources to address the one or more predicted anomalies. The network engineers receive insights into the nature and severity of the anomalies. Based on the insights, the initiating unit 245 allocates one or more resources proactively to address issues before they impact the subscribers. The resource allocation may involve rerouting network traffic, optimizing server loads, or other corrective actions. The initiating unit 245 may trigger alerts to the network engineers, prompting them to investigate further and take necessary one or more actions. In an exemplary embodiment, if the identifying unit 240 detects that the latency has exceeded a critical threshold, the alert may suggest scaling up network resources or investigating potential network bottlenecks.
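A hedged sketch of the two actions named above, transmitting an alert and allocating resources, is given below; the notification and allocation functions are placeholders standing in for whatever operational tooling a particular deployment actually uses.

    # Illustrative sketch only: initiating actions in response to an
    # identified deviation. Both functions are placeholders, not real APIs.
    def transmit_alert(message: str) -> None:
        print(f"ALERT to network engineers: {message}")

    def allocate_resources(action: str) -> None:
        print(f"Resource allocation requested: {action}")

    def initiate_actions(deviation: str, metric: str, observed: float) -> None:
        transmit_alert(f"{metric} deviation ({deviation}): observed {observed}")
        if deviation == "critical":
            allocate_resources("reroute traffic / scale up capacity")

    initiate_actions("critical", "latency_ms", 260.0)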
[0051] FIG. 3 is a block diagram of an architecture 300 that can be implemented in the system of FIG.2, according to one or more embodiments of the present disclosure. The architecture 300 of the system 120 includes the UI 215, a probing unit 305, a probing interface 310, a data integrating unit 315, a data pre-processing unit 320, a model training unit 325, a real time monitoring unit 330, a data lake 335 and a proactive resource allocation unit 340.
[0052] The architecture 300 of the system 120 is configured to interact with the probing unit 305. The probing unit 305 collects the first and second sets of data from the one or more data sources. In an embodiment, the one or more data sources include at least the file input, the source path, the input stream, the HTTP 2, the DFS, and the NAS. The probing unit 305 transmits the data to the data integrating unit 315 via the probing interface 310.
[0053] The data integrating unit 315 performs data integration operations on the received data from the probing interface 310 and transmits the integrated data to the data pre-processing unit 320. The data pre-processing unit 320 performs data cleaning, data normalization, and data transformation. The data cleaning is the process of identifying and correcting inaccuracies, inconsistencies, and errors in a dataset to improve its quality and reliability for analysis. The data normalization is the process of organizing and structuring data to reduce redundancy and improve data integrity within the data lake 335 or the dataset. The data normalization involves transforming data into a standard format, making the data consistent and easier to analyze. The data transformation is the process of converting the data from one format or structure into another to make it suitable for analysis, integration, or storage. The data transformation process is essential in data preparation, allowing organizations to clean, standardize, and manipulate the data to meet specific analytical or operational requirements.
[0054] Upon pre-processing the data, the model training unit 325 is configured to train the model utilizing the received data. In an embodiment, the model includes, but is not limited to, an Artificial Intelligence/Machine Learning (AI/ML) model that analyzes the trends and the patterns and makes predictions of the one or more anomalies based on the data. The model training unit 325 is further configured to store the pre-processed data and an output of the model in the data lake 335.
[0055] The real time monitoring unit 330 is configured to monitor the second set of data from the one or more data sources in real time. The real time monitoring unit 330 is configured to compare the second set of data with the predicted one or more anomalies by the machine learning model. Upon comparing, the one or more deviations are identified based on the comparison of the second set of data with the one or more predicted anomalies. When the one or more deviations are identified, the system 120 initiates the one or more actions in response to identification of the one or more deviations.
[0056] The proactive resource allocation unit 340 is configured to initiate one or more actions in response to identification of the one or more deviations. In an embodiment, the one or more actions correspond to at least one of transmitting an alert to network engineers and allocating one or more resources to address the one or more predicted anomalies. The proactive resource allocation unit 340 allocates one or more resources proactively to address issues before they impact the subscribers. The resource allocation may involve rerouting network traffic, optimizing server loads, or other corrective actions. The proactive resource allocation unit 340 triggers the alerts to the network engineers, prompting them to investigate further and take necessary one or more actions.
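Read end to end, the architecture 300 describes a pipeline from probing through to proactive allocation. The self-contained sketch below wires hypothetical stand-ins for those units together purely to show the direction of data flow; none of the function names, features, or values come from this specification.

    # Illustrative end-to-end sketch of the FIG. 3 data flow: integration ->
    # pre-processing -> training -> real-time monitoring -> allocation.
    # Every function and value here is a hypothetical stand-in.
    import pandas as pd
    from sklearn.ensemble import IsolationForest

    FEATURES = ["latency_ms", "data_usage_gb"]

    def integrate(batches: list) -> pd.DataFrame:             # data integrating unit 315
        return pd.concat(batches, ignore_index=True)

    def preprocess(df: pd.DataFrame) -> pd.DataFrame:         # data pre-processing unit 320
        return df.drop_duplicates().dropna(subset=FEATURES)

    def train(df: pd.DataFrame) -> IsolationForest:           # model training unit 325
        return IsolationForest(random_state=42).fit(df[FEATURES])

    def monitor(model: IsolationForest, sample: dict) -> bool:  # real time monitoring unit 330
        frame = pd.DataFrame([sample])[FEATURES]
        return model.predict(frame)[0] == -1                  # -1 indicates an anomaly

    def allocate(sample: dict) -> None:                       # proactive resource allocation unit 340
        print("Proactive action for:", sample)

    batches = [pd.DataFrame({"latency_ms": [120, 130, 140], "data_usage_gb": [4, 5, 6]})]
    model = train(preprocess(integrate(batches)))
    if monitor(model, {"latency_ms": 400, "data_usage_gb": 30}):
        allocate({"latency_ms": 400, "data_usage_gb": 30})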
[0057] The UI 215 displays the predicted one or more anomalies to take the one or more actions in response to identification of the one or more deviations to the user by using the one or more UEs 110.
[0058] FIG. 4 is a flow chart illustrating a method for predicting the one or more anomalies associated with the one or more UEs 110, according to one or more embodiments of the present disclosure.
[0059] At step 405, the method 400 includes the step of receiving the first set of data corresponding to each of the one or more UEs 110 from the one or more data sources by the receiving unit 220. In an embodiment, the first set of data pertains to at least one of Call Detail Records (CDRs), historical network performance data, historical trends, and subscriber behavior data. The CDRs are structured data entries that capture information about each call, call duration, call quality, or communication event that occurs within the network 105. The historical network performance data refers to the accumulated records and metrics that reflect the operational performance over time. The historical network performance data provides insights into how the network 105 has functioned in the past, enabling operators to analyze trends, identify issues, and make informed decisions for the future. The historical network performance data includes bandwidth utilization, data throughput, latency, packet loss, and jitter. The historical trends refer to patterns or changes observed in data over a specific period of time. The subscriber behavior data refers to the collection and analysis of information about how the subscribers interact with the network 105. The subscriber behavior data provides insights into individual and group usage patterns, preferences, and behaviors, which are utilized for optimizing network performance, enhancing user experience, and informing business strategies.
[0060] The receiving unit 220 receives the first set of data from the one or more data sources. In an embodiment, the one or more data sources include at least a file input, a source path, an input stream, a Hyper Text Transfer Protocol 2 (HTTP 2), a Distributed File System (DFS), and a Network Attached Storage (NAS). The file input refers to reading data from files stored locally or on the server 115. The files can be in different formats, including, but not limited to, Comma Separated Values (CSV), JavaScript Object Notation (JSON), eXtensible Markup Language (XML), or text files. In an exemplary embodiment, the data is stored in a CSV file, and the receiving unit 220 fetches the data for processing. The receiving unit 220 receives the first set of data from the file and loads the first set of data into the memory 210 for further processing.
[0061] The source path typically refers to the directory or network location where the data files are stored. The receiving unit 220 fetches the data by following the provided file path. In an exemplary embodiment for the source path, the system 120 stores images in a specific directory. The receiving unit 220 navigates to a designated source path and retrieves all files that match the required criteria (e.g., .jpg images). The input stream refers to continuous data that is read in real-time from a stream of data (e.g., data being transmitted over the network 105 or generated by sensors). In an exemplary embodiment, the data is being received from an Application Programming Interface (API) or a live data stream, and the receiving unit 220 fetches the continuous data in real-time. The HTTP 2 is a protocol used for communication over the web, which improves upon HTTP/1.1 by offering multiplexing and better performance for handling multiple requests. In an exemplary embodiment, the receiving unit 220 receives the data from the web server using the HTTP 2. The receiving unit 220 uses the HTTP 2 to fetch the data from remote web servers or APIs.
[0062] The DFS is the distributed file system used to store large datasets across multiple machines. The DFS is commonly used in big data environments to store and retrieve large amounts of data. The receiving unit 220 connects to the DFS to receive the file for processing. The NAS is a dedicated file storage system that provides Local Area Network (LAN) access to the data. The NAS allows multiple users or systems to access the data from a centralized storage device. The receiving unit 220 fetches the data from a NAS device over the network 105. In an exemplary embodiment, if the data is stored on the NAS, the receiving unit 220 fetches the data via network protocols. Upon receiving the first set of data from the one or more data sources, the received data is stored in a data frame for further processing.
[0063] At step 410, the method 400 includes the step of training the model utilizing the first set of data to identify trends in the first set of data received by the training unit 225. In an embodiment, the model includes, but is not limited to, an Artificial Intelligence/Machine Learning (AI/ML) model. The model is trained using a specific data set, referred to as the first set of data. The dataset typically contains historical records, which include various features related to network performance, subscriber behavior, and dynamic conditions. During training, the model analyzes the first set of data to uncover patterns and relationships.
[0064] By examining historical performance, the model can establish trends and identify conditions that typically precede failures, such as recurring spikes in latency or unusual error rates. The model analyzes individual subscriber behavior, such as call duration, frequency of use, data consumption, and types of services used (e.g., Voice over Internet Protocol (VoIP), streaming). Identifying changes in the subscriber's behavior, such as a sudden increase in data usage or frequent disconnections, helps the model to predict the one or more anomalies. The model also recognizes recurring patterns in the data, such as seasonal usage fluctuations, peak traffic times, or performance degradation indicators.
[0065] At step 415, the method 400 includes the step of predicting the one or more anomalies based on the identified trends in the first set of data received by the predicting unit 230. If the model detects an unexpected increase in latency that exceeds one or more thresholds, the predicting unit 230 flags this as one or more anomalies, indicating a potential service issue. In an exemplary embodiment, during the training phase, the model learns that high latency (e.g., above 200 ms) often correlates with increased packet loss, that heavy data usage (e.g., over 10 GB per day) typically occurs during weekends, and that service disruptions frequently follow a combination of high latency and heavy data usage. After training, the model identifies the trends, such as normal behavior and seasonal patterns. The normal behavior refers to latency remaining below 150 ms during weekdays, with typical data usage around 5 GB per day. The seasonal patterns refer to increased call volume and data usage during holidays, with a corresponding rise in latency. The first set of data for the current week is fed into the model. In an example, the latency for Monday is at 160 ms, with data usage at 4 GB; the latency for Tuesday spikes to 250 ms, with data usage at 15 GB; and the latency for Wednesday returns to 170 ms, with data usage at 5 GB. On Tuesday, the model observes that the latency of 250 ms exceeds the established one or more thresholds of 200 ms and that the data usage of 15 GB is significantly higher than the typical daily usage of 5-10 GB. Owing to this, the model flags Tuesday's data as the one or more anomalies, indicating potential service issues due to high latency and unusual data usage.
[0066] FIG. 5 is a flow diagram illustrating the method 500 for predicting the one or more anomalies associated with the one or more UEs 110, according to one or more embodiments of the present disclosure.
[0067] At step 505, the method 500 includes the step of receiving the first and second sets of data from the one or more data sources by the probing unit 305 via the probing interface 310. In an embodiment, the one or more data sources include at least the file input, the source path, the input stream, the HTTP 2, the DFS, and the NAS. The probing unit 305 transmits the data to the data integrating unit 315 via the probing interface 310.
[0068] At step 510, the method 500 includes the step of performing data integration operations on the received data from the probing interface 310 and transmitting the integrated data to the data pre-processing unit 320. At step 515, the method 500 includes the step of pre-processing the data. At step 520, the method 500 includes the step of performing data cleaning, data normalization, and data transformation by the data pre-processing unit 320. The data cleaning is the process of identifying and correcting inaccuracies, inconsistencies, and errors in a dataset to improve its quality and reliability for analysis. The data normalization is the process of organizing and structuring data to reduce redundancy and improve data integrity within the data lake 335 or the dataset. The data normalization involves transforming data into a standard format, making the data consistent and easier to analyze. The data transformation is the process of converting the data from one format or structure into another to make it suitable for analysis, integration, or storage. The data transformation process is essential in data preparation, allowing organizations to clean, standardize, and manipulate the data to meet specific analytical or operational requirements.
[0069] Upon training the model utilizing the first set of data and identifying the trends in the first set of data received, the predicting unit 230 is configured to predict the one or more anomalies based on the identified trends in the first set of data received. If the model detects an unexpected increase in latency that exceeds one or more thresholds, the predicting unit 230 flags this as one or more anomalies, indicating the potential service issue.
[0070] At step 525, the method 500 includes the step of monitoring the second set of data from the one or more data sources in real time by the real time monitoring unit 330. The real time monitoring unit 330 is configured to compare the second set of data with the predicted one or more anomalies by the machine learning model. At step 530, the method 500 includes the step of identifying the one or more deviations based on the comparison of the second set of data with the one or more predicted anomalies. When the one or more deviations are identified, the system 120 initiates the one or more actions in response to identification of the one or more deviations.
[0071] At step 535, the method 500 includes the step of training the model utilizing the received data by the model training unit 325. In an embodiment, the model includes, but is not limited to, an Artificial Intelligence/Machine Learning (AI/ML) model that analyzes the trends and the patterns and makes predictions of the one or more anomalies based on the data. The model training unit 325 is further configured to store the pre-processed data and an output of the model in the data lake 335. A determination is then made as to whether the output of the model is optimal.
[0072] At step 540, the method 500 includes the step of retraining the model, by the model training unit 325, if the output of the model is not optimal. The model training unit 325 collects more relevant or diverse data to improve model training. The current model's performance is analyzed using metrics such as accuracy, precision, recall, or F1 score to identify specific weaknesses. The hyperparameters are adjusted to improve model performance, and the model is retrained on the updated dataset, incorporating any improvements from the previous steps. The retrained model is validated using cross-validation techniques to ensure that it generalizes well to unseen data. After retraining, the model's performance is monitored on a test set and in production to ensure it meets the desired standards. If the performance is still not satisfactory, the process is repeated, making further adjustments as necessary.
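The retraining loop of paragraph [0072] can be sketched, again purely for illustration, as a cross-validated scoring step followed by a retraining step when a target score is not met; the 0.90 F1 target, the logistic regression classifier, and the synthetic labeled data are all assumptions made for this example.

    # Illustrative sketch only: deciding whether the model output is
    # optimal and retraining otherwise. The 0.90 F1 target, the classifier,
    # and the synthetic labeled data are assumptions for this example.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    def model_is_optimal(X, y, target_f1: float = 0.90) -> bool:
        # Cross-validated F1 score, one of the metrics named above.
        scores = cross_val_score(LogisticRegression(max_iter=1000), X, y,
                                 cv=5, scoring="f1")
        return scores.mean() >= target_f1

    def retrain(X, y) -> LogisticRegression:
        # Retrain on the updated dataset, e.g. after collecting more data
        # and adjusting hyperparameters (here, the regularization strength C).
        return LogisticRegression(max_iter=1000, C=0.5).fit(X, y)

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 2))
    y = (X[:, 0] + X[:, 1] > 1.5).astype(int)   # synthetic "failure" labels
    if not model_is_optimal(X, y):
        model = retrain(X, y)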
[0073] At step 545, the method 500 includes the step of allocating the one or more resources if the output of the model is optimal. The initiating unit 245 is configured to initiate the one or more actions in response to identification of the one or more deviations by the proactive resource allocation unit 340. In an embodiment, the one or more actions correspond to at least one of transmitting the alert to network engineers and allocating one or more resources to address the one or more predicted anomalies. The proactive resource allocation unit 340 allocates one or more resources proactively to address issues before they impact the subscribers. The resource allocation may involve rerouting network traffic, optimizing server loads, or other corrective actions. The proactive resource allocation unit 340 triggers the alerts to the network engineers, prompting them to investigate further and take necessary one or more actions.
[0074] In another aspect of the embodiment, a non-transitory computer-readable medium having computer-readable instructions stored thereon that are executable by a processor 205 is disclosed. The processor 205 is configured to receive a first set of data corresponding to each of the one or more UEs from one or more data sources. The processor 205 is configured to train a model utilizing the received data to identify trends in the received data. The processor 205 is configured to predict the one or more anomalies based on the identified trends in the received data.
[0075] A person of ordinary skill in the art will readily ascertain that the illustrated embodiments and steps in description and drawings (FIGS.1-5) are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope and spirit of the disclosed embodiments.
[0076] The present disclosure provides technical advancement for predicting, by the one or more processors, the one or more anomalies based on the identified trends in the received data. The AI/ML-based approach provides highly accurate subscriber-wise failure predictions, improving the quality of service. Network engineers can proactively address potential failures, reducing downtime and improving subscriber satisfaction. The system optimizes resource allocation, minimizing operational costs and resource wastage.
[0077] The present invention offers multiple advantages over the prior art and the above listed are a few examples to emphasize on some of the advantageous features. The listed advantages are to be read in a non-limiting manner.

REFERENCE NUMERALS

[0078] Environment - 100
[0079] Network - 105
[0080] User equipment - 110
[0081] Server - 115
[0082] System - 120
[0083] Processor - 205
[0084] Memory - 210
[0085] User interface - 215
[0086] Receiving unit - 220
[0087] Training unit - 225
[0088] Predicting unit - 230
[0089] Comparing unit - 235
[0090] Identifying unit - 240
[0091] Initiating unit - 245
[0092] Database - 250
[0093] Architecture - 300
[0094] Probing unit - 305
[0095] Probing interface - 310
[0096] Data integrating unit - 315
[0097] Data pre-processing unit - 320
[0098] Model training unit - 325
[0099] Real time monitoring unit - 330
[00100] Data lake - 335
[00101] Proactive resource allocation unit - 340
CLAIMS
We Claim:
1. A method (400) for predicting one or more anomalies associated with one or more User Equipment (UEs) (110), the method (400) comprising the steps of:
receiving, by one or more processors (205), a first set of data corresponding to each of the one or more UEs from one or more data sources;
training, by the one or more processors (205), a model utilizing the received data to identify trends in the received data; and
predicting, by the one or more processors (205), the one or more anomalies based on the identified trends in the received data.

2. The method (400) as claimed in claim 1, comprising the steps of:
receiving, by the one or more processors (205), a second set of data corresponding to each of the one or more UEs (110) from the one or more data sources in real time;
comparing, by the one or more processors (205), the second set of data with the one or more predicted anomalies;
identifying, by the one or more processors (205), one or more deviations based on a comparison of the second set of data with the one or more predicted anomalies; and
initiating, by the one or more processors (205), one or more actions in response to identification of the one or more deviations.

3. The method (400) as claimed in claim 1, wherein the first set of data pertains to at least one of Call Detail Records (CDR), historical network performance data, historical trends and subscriber behavior data.

4. The method (400) as claimed in claim 2, wherein the second set of data pertains to at least one of real time network data and real time subscriber behavior data.

5. The method (400) as claimed in claim 1, wherein the one or more data sources include at least, a file input, a source path, an input stream, a Hyper Text Transfer Protocol 2 (HTTP 2), a Distributed File System (DFS), and a Network Attached Storage (NAS).

6. The method (400) as claimed in claim 2, wherein the one or more actions correspond to at least one of transmitting an alert to network engineers and allocating one or more resources to address the one or more predicted anomalies.

7. A system (120) for predicting one or more anomalies associated with one or more User Equipment (UEs) (110), the system (120) comprising:
a receiving unit (220) configured to receive, a first set of data corresponding to each of the one or more UEs (110) from one or more data sources;
a training unit (225) configured to train, a model utilizing the received data to identify trends in the received data; and
a predicting unit (230) configured to predict, the one or more anomalies based on the identified trends in the received data.

8. The system (120) as claimed in claim 7, wherein the system (120) comprises:
the receiving unit (220) configured to receive, a second set of data corresponding to each of the one or more UEs (110) from the one or more data sources in real time;
a comparing unit (235) configured to compare, the second set of data with the one or more predicted anomalies;
an identifying unit (240) configured to identify, one or more deviations based on a comparison of the second set of data with the one or more predicted anomalies; and
an initiating unit (245) configured to initiate, one or more actions in response to identification of the one or more deviations.

9. The system (120) as claimed in claim 8, wherein the first set of data pertains to at least one of Call Detail Records (CDR), historical network performance data, historical trends and subscriber behavior data.

10. The system (120) as claimed in claim 9, wherein the second set of data pertains to at least one of real time network data and real time subscriber behavior data.

11. The system (120) as claimed in claim 8, wherein the one or more data sources include at least, a file input, a source path, an input stream, a Hyper Text Transfer Protocol 2 (HTTP 2), a Distributed File System (DFS), and a Network Attached Storage (NAS).

12. The system (120) as claimed in claim 9, wherein the one or more actions correspond to at least one of transmitting an alert to network engineers and allocating one or more resources to address the one or more predicted anomalies.

Documents

Application Documents

# Name Date
1 202321071952-STATEMENT OF UNDERTAKING (FORM 3) [20-10-2023(online)].pdf 2023-10-20
2 202321071952-PROVISIONAL SPECIFICATION [20-10-2023(online)].pdf 2023-10-20
3 202321071952-FORM 1 [20-10-2023(online)].pdf 2023-10-20
4 202321071952-FIGURE OF ABSTRACT [20-10-2023(online)].pdf 2023-10-20
5 202321071952-DRAWINGS [20-10-2023(online)].pdf 2023-10-20
6 202321071952-DECLARATION OF INVENTORSHIP (FORM 5) [20-10-2023(online)].pdf 2023-10-20
7 202321071952-FORM-26 [27-11-2023(online)].pdf 2023-11-27
8 202321071952-Proof of Right [12-02-2024(online)].pdf 2024-02-12
9 202321071952-DRAWING [19-10-2024(online)].pdf 2024-10-19
10 202321071952-COMPLETE SPECIFICATION [19-10-2024(online)].pdf 2024-10-19
11 Abstract.jpg 2025-01-11
12 202321071952-Power of Attorney [24-01-2025(online)].pdf 2025-01-24
13 202321071952-Form 1 (Submitted on date of filing) [24-01-2025(online)].pdf 2025-01-24
14 202321071952-Covering Letter [24-01-2025(online)].pdf 2025-01-24
15 202321071952-CERTIFIED COPIES TRANSMISSION TO IB [24-01-2025(online)].pdf 2025-01-24
16 202321071952-FORM 3 [31-01-2025(online)].pdf 2025-01-31