
System And Method For Predicting And Preventing Call Drop Offs

Abstract: The present disclosure relates to a system (108) and a method (400) for predicting and preventing call drop-offs. The system (108) receives one or more input parameter values from a set of base stations communicatively connected via a network. Each base station includes one or more operational units that enable telecommunication between two or more user equipment (UE). The input parameter values may include, but not be limited to, call detail records (CDR), one or more performance metric values, and the like. The system predicts the one or more operational units that are susceptible to failure and transmits a set of signals to a monitoring unit that provides audio-visual indications of the operational units susceptible to failure. The system may trigger execution of one or more auto-corrective processor-executable instructions for resolving and preventing failures to the susceptible operational units. Fig. 4


Patent Information

Application #
Filing Date
13 July 2023
Publication Number
03/2025
Publication Type
INA
Invention Field
COMMUNICATION
Status
Parent Application

Applicants

JIO PLATFORMS LIMITED
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India.

Inventors

1. BHATNAGAR, Aayush
Tower-7, 15B, Beverly Park, Sector-14 Koper Khairane, Navi Mumbai - 400701, Maharashtra, India.
2. MURARKA, Ankit
W-16, F-1603, Lodha Amara, Kolshet Road, Thane West - 400607, Maharashtra, India.
3. KOLARIYA, Jugal Kishore
C 302, Mediterranea CHS Ltd, Casa Rio, Palava, Dombivli - 421204, Maharashtra, India.
4. KUMAR, Gaurav
1617, Gali No. 1A, Lajjapuri, Ramleela Ground, Hapur - 245101, Uttar Pradesh, India.
5. SAHU, Kishan
Ajay Villa, Gali No. 2, Ambedkar Colony, Bikaner - 334003, Rajasthan, India.
6. VERMA, Rahul
A-154, Shradha Puri Phase-2, Kanker Khera, Meerut - 250001, Uttar Pradesh, India
7. MEENA, Sunil
D-29/1, Chitresh Nagar, Borkhera, District - Kota - 324001, Rajasthan, India.
8. GURBANI, Gourav
I-1601, Casa Adriana, Downtown, Palava Phase 2, Dombivli - 421204 Maharashtra, India.
9. CHAUDHARY, Sanjana
Jawaharlal Road, Muzaffarpur - 842001, Bihar, India.
10. GANVEER, Chandra Kumar
Village - Gotulmunda, Post - Narratola, Dist. - Balod - 491228, Chhattisgarh, India.
11. DE, Supriya
G2202, Sheth Avalon, Near Jupiter Hospital Majiwada, Thane West - 400601, Maharashtra, India
12. KUMAR, Debashish
Bhairaav Goldcrest Residency, E-1304, Sector 11, Ghansoli, Navi Mumbai - 400701, Maharashtra, India.
13. TILALA, Mehul
64/11, Manekshaw Marg, Manekshaw Enclave, Delhi Cantonment, New Delhi - 110010, India.

Specification

FORM 2
THE PATENTS ACT, 1970
THE PATENTS RULES, 2003
COMPLETE SPECIFICATION
APPLICANT
JIO PLATFORMS LIMITED
of Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India; Nationality: India
The following specification particularly describes
the invention and the manner in which
it is to be performed

RESERVATION OF RIGHTS
[0001] A portion of the disclosure of this patent document contains material,
which is subject to intellectual property rights such as, but not limited to, copyright, design, trademark, Integrated Circuit (IC) layout design, and/or trade dress protection, belonging to Jio Platforms Limited (JPL) or its affiliates (hereinafter referred to as owner). The owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure, as it appears in the Patent and Trademark Office patent files or records, but otherwise reserves all rights whatsoever. All rights to such intellectual property are fully reserved by the owner.
FIELD OF DISCLOSURE
[0002] The embodiments of the present disclosure generally relate to
communication networks. In particular, the present disclosure relates to a system and method for predicting and preventing call drop-offs.
DEFINITION
[0003] As used in the present disclosure, the following terms are generally
intended to have the meaning as set forth below, except to the extent that the context in which they are used indicates otherwise.
[0004] The expression ‘Anomaly’ used hereinafter in the specification
refers to, in the context of data analysis and machine learning, an observation or event that significantly deviates from an expected behaviour within a dataset. Anomalies are typically rare occurrences compared to the majority of data points, making their detection important for various applications such as fault detection in system and network security.
BACKGROUND OF DISCLOSURE
[0005] The following description of related art is intended to provide background information pertaining to the field of the disclosure. This section may include certain aspects of the art that may be related to various features of the present disclosure. However, it should be appreciated that this section should be used only to enhance the understanding of the reader with respect to the present disclosure, and not as admissions of prior art.
[0006] Call drop-offs occur when user equipment, such as mobile phones, are unable to connect or latch on to base stations, such as cell towers. Drop-offs may occur due to exceeding load capabilities, performance degradations over time, damage to components of base stations such as Remote Radio Units (RRU), etc. Base stations may not respond to user equipment, causing inconvenience and distress to the users. Telecom regulators also require base stations to maintain minimum thresholds of performance metrics such as call success rates (CSR) and impose fines for failing to meet the thresholds. Hence, it is important to prevent base stations from dropping calls.
[0007] When such abnormal behaviour is detected, manual inspections may
be performed to identify the cause and rectify the same, thereby leading to increased downtimes and costs. Furthermore, such abnormal behaviours are difficult to detect in real-time, and hence difficult to resolve until performance is degraded.
[0008] There is, therefore, a need in the art to provide a method and a system
that can overcome the shortcomings of the existing prior arts.
SUMMARY
[0009] The present disclosure discloses a system for detecting at least one anomaly associated with one or more operational units connected with at least one base station in a network. The system includes one or more processors, and a memory coupled to the one or more processors. The memory includes computer-implemented instructions to configure the one or more processors to receive, by a data acquisition engine, one or more input parameter values from at least one base station. The one or more processors are configured to input, by an Artificial Intelligence (AI) engine, the one or more received input parameter values to at least one machine learning (ML) model to detect the at least one anomaly associated with the one or more operational units based on the one or more received input parameter values. The one or more processors are configured to predict the at least one state of the one or more operational units based on the at least one detected anomaly.
[0010] In an embodiment, the system is configured to generate at least one
signal representing the at least one predicted state of the one or more operational units.
[0011] In an embodiment, the one or more input parameter values are
associated with a log pertaining to a customer device.
[0012] In an embodiment, the one or more input parameter values include
caller ID, recipient ID, call initiation timestamp, session length, call quality metrics, and call termination reason.
[0013] In an embodiment, the at least one state is a susceptible to failure
state.
[0014] In an embodiment, the system includes a display unit configured to
receive the at least one generated signal. The display unit is further configured to represent the at least one detected anomaly or at least one predicted state of the one or more operational units.
[0015] In an embodiment, the system includes an interfacing unit
configured to receive a user input representing a machine learning (ML) model selection selected from a plurality of ML models.
[0016] In an embodiment, the system includes a monitoring unit for
generating an alert or a trigger upon determination of the susceptible to failure state.
[0017] In an embodiment, the one or more processors are configured to
capture a call data record (CDR) including the log associated with the customer device in real-time and analyse the captured call data records using the AI engine for determining one or more conditions causing a drop-off in a customer device registration based on the analysis.

[0018] In an embodiment, the one or more conditions include technical
issues associated with the one or more operational units, registration interface complexity, and security vulnerabilities.
[0019] In an embodiment, the monitoring unit is configured to trace an
origin of the determined one or more conditions causing the drop-off, perform, based on the traced origin, a root cause analysis of the determined one or more conditions causing the drop-off, and generate a visual representation of the one or more call quality metrics indicative of the detected anomaly. The root cause analysis is performed based on one or more audio-visual indications generated by the AI engine based on the analysed captured call data records.
[0020] In an embodiment, the one or more processors are configured to
perform an exploratory data analysis on the one or more input parameter values and select one or more optimal input parameter values required for inputting to the AI engine.
[0021] In an embodiment, the at least one machine learning (ML) model is
a regression-based ML model, a classification-based ML model, a clustering-based ML model, a dimensionality reduction-based ML model, and a reinforcement learning-based ML model.
[0022] The present disclosure discloses a method for detecting at least one
anomaly associated with one or more operational units connected with at least one base station in a network. The method includes receiving, by a data acquisition engine, one or more input parameter values from at least one base station. The method includes inputting, by an Artificial Intelligence (AI) engine, the one or more received input parameter values to at least one machine learning (ML) model to detect the at least one anomaly associated with one or more operational units based on the one or more received input parameter values. The method includes predicting, by the AI engine, the at least one state of the one or more operational units based on the at least one detected anomaly.
[0023] In an embodiment, the method further includes generating at least
one signal representing the at least one predicted state of the one or more operational units.
[0024] In an embodiment, the method further includes receiving, by a
display unit, the at least one generated signal and representing the at least one
detected anomaly or at least one predicted state of the one or more operational units.
[0025] In an embodiment, the method further includes receiving, by an
interfacing unit, a user input representing a machine learning (ML) model selection selected from a plurality of ML models.
[0026] In an embodiment, the method further includes generating, by a
monitoring unit, an alert or a trigger upon determination of the susceptible to failure state.
[0027] In an embodiment, the method further includes capturing a call data record (CDR) including the log associated with the customer device in real-time and analysing the captured call data records using the AI engine for determining one or more conditions causing a drop-off in a customer device registration based on the analysis.
[0028] In an embodiment, the method further includes tracing, by the
monitoring unit, an origin of the determined one or more conditions causing the drop-off, performing, by the monitoring unit, based on the traced origin, a root cause analysis of the determined one or more conditions causing the drop-off, and generating, by the monitoring unit, a visual representation of the one or more call quality metrics indicative of the detected anomaly. The root cause analysis may be performed based on one or more audio-visual indications generated by the AI engine based on the analysed captured call data records.
[0029] In an embodiment, the method further includes performing, by the
one or more processors, an exploratory data analysis on the one or more input parameter values, and selecting, by the one or more processors, one or more optimal input parameter values required for inputting to the AI engine.

[0030] In an exemplary embodiment, the present disclosure discloses a user
equipment configured to detect at least one anomaly associated with one or more operational units connected with at least one base station in a network. The user equipment includes a processor, and a computer-readable storage medium storing programming instructions for execution by the processor. Under the programming instructions, the processor is configured to receive one or more input parameter values from at least one base station. Under the programming instructions, the processor is configured to input the one or more received input parameter values to at least one machine learning (ML) model to detect the at least one anomaly associated with one or more operational units based on the one or more input parameter values. Under the programming instructions, the processor is configured to predict the at least one state of the one or more operational units based on the at least one detected anomaly.
OBJECTS OF THE PRESENT DISCLOSURE
[0031] Some of the objects of the present disclosure, which at least one embodiment herein satisfies, are as listed herein below.
[0032] An object of the present disclosure is to provide a system and a
method for predicting and preventing call drop-offs.
[0033] Another object of the present disclosure is to provide a system and a
method to forecast failures and trigger resolution of the predicted failures.
[0034] Another object of the present disclosure is to provide a system and a
method that dynamically selects and retrains machine learning models used for predicting call drop-off and one or more operational units susceptible to causing said call drop-offs.
[0035] Another object of the present disclosure is to provide a system and a
method for preventative maintenance of base stations.
BRIEF DESCRIPTION OF DRAWINGS

[0036] The accompanying drawings, which are incorporated herein, and
constitute a part of this disclosure, illustrate exemplary embodiments of the disclosed methods and systems in which like reference numerals refer to the same parts throughout the different drawings. Components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present disclosure. Some drawings may indicate the components using block diagrams and may not represent the internal circuitry of each component. It will be appreciated by those skilled in the art that disclosure of such drawings includes the disclosure of electrical components, electronic components or circuitry commonly used to implement such components.
[0037] FIG. 1 illustrates an exemplary network architecture for
implementing a system for detecting at least one anomaly associated with one or more operational units connected with at least one base station in a network, in accordance with embodiments of the present disclosure.
[0038] FIG. 2 illustrates a block diagram of the system, in accordance with
embodiments of the present disclosure.
[0039] FIG. 3 illustrates a diagram showing interaction between various
components and the one or more processor(s), in accordance with embodiments of the present disclosure.
[0040] FIG. 4 illustrates a method for detecting at least one anomaly
associated with one or more operational units connected with at least one base station in a network, in accordance with embodiments of the present disclosure.
[0041] FIG. 5 illustrates an exemplary computer system in which or with
which embodiments of the present disclosure may be implemented.
[0042] The foregoing shall be more apparent from the following more
detailed description of the disclosure.

LIST OF REFERENCE NUMERALS
100 – Network architecture
102-1, 102-2…102-N – Users
104-1, 104-2…104-N – User Equipments
106 – Network
108 – System
110 – Monitoring unit
112 – At least one base station
113-1, 113-2 – One or more operational units
200 – Block diagram
204 – Memory
206 – Interfacing unit
208 – One or more processor(s)
210 – Database
212 – Data acquisition engine
214 – Artificial Intelligence (AI) engine
216 – Triggering Engine
218 – Other Units
220 – Display unit
500 – Computer system
510 – External Storage Device
520 – Bus
530 – Main Memory
540 – Read Only Memory
550 – Mass Storage Device
560 – Communication Port
570 – Processor
DETAILED DESCRIPTION OF DISCLOSURE
[0043] In the following description, for the purposes of explanation, various
specific details are set forth in order to provide a thorough understanding of embodiments of the present disclosure. It will be apparent, however, that embodiments of the present disclosure may be practiced without these specific details. Several features described hereafter can each be used independently of one another or with any combination of other features. An individual feature may not address all of the problems discussed above or might address only some of the problems discussed above. Some of the problems discussed above might not be fully addressed by any of the features described herein.
[0044] The ensuing description provides exemplary embodiments only, and
is not intended to limit the scope, applicability, or configuration of the disclosure. Rather, the ensuing description of the exemplary embodiments will provide those skilled in the art with an enabling description for implementing an exemplary embodiment. It should be understood that various changes may be made in the function and arrangement of elements without departing from the spirit and scope of the disclosure as set forth.
[0045] Specific details are given in the following description to provide a
thorough understanding of the embodiments. However, it will be understood by one of ordinary skill in the art that the embodiments may be practiced without these specific details. For example, circuits, systems, networks, processes, and other components may be shown as components in block diagram form in order not to obscure the embodiments in unnecessary detail. In other instances, well-known circuits, processes, algorithms, structures, and techniques may be shown without unnecessary detail in order to avoid obscuring the embodiments.
[0046] Also, it is noted that individual embodiments may be described as a
process which is depicted as a flowchart, a flow diagram, a data flow diagram, a structure diagram, or a block diagram. Although a flowchart may describe the operations as a sequential process, many of the operations can be performed in parallel or concurrently. In addition, the order of the operations may be re-arranged. A process is terminated when its operations are completed but could have additional
steps not included in a figure. A process may correspond to a method, a function, a procedure, a subroutine, a subprogram, etc. When a process corresponds to a function, its termination can correspond to a return of the function to the calling function or the main function.
[0047] The word “exemplary” and/or “demonstrative” is used herein to
mean serving as an example, instance, or illustration. For the avoidance of doubt, the subject matter disclosed herein is not limited by such examples. In addition, any aspect or design described herein as “exemplary” and/or “demonstrative” is not necessarily to be construed as preferred or advantageous over other aspects or
designs, nor is it meant to preclude equivalent exemplary structures and techniques known to those of ordinary skill in the art. Furthermore, to the extent that the terms “includes,” “has,” “contains,” and other similar words are used in either the detailed description or the claims, such terms are intended to be inclusive in a manner similar to the term “comprising” as an open transition word without precluding any additional or other elements.
[0048] Reference throughout this specification to “one embodiment” or “an
embodiment” or “an instance” or “one instance” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present disclosure. Thus, the appearances of the phrases “in one embodiment” or “in an embodiment” in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
[0049] The terminology used herein is for the purpose of describing
particular embodiments only and is not intended to be limiting of the disclosure. As used herein, the singular forms “a”, “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations,
elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. As used herein, the term “and/or” includes any and all combinations of one or more of the associated listed items.
[0050] When a user's mobile phone is unable to connect to the nearest cell tower, it is called a device registration drop-off, leading to a decrease in the call success rate (CSR). If the CSR falls below a certain threshold, heavy penalties are imposed on the telecom operators. Device registration drop-off not only affects network quality and incurs penalties but also degrades user experience. To address this issue, the present disclosure discloses a system that captures, in real-time call records, the logs of user devices attempting to connect and applies machine learning algorithms to detect the reasons for the drop-off. This analysis allows the network operations team to promptly resolve the issue.
[0051] In an aspect, the present disclosure relates to a system and a method
for predicting and preventing call drop-offs generated within a predetermined time interval. The system receives one or more input parameter values from a set of base stations communicatively connected via a network. Each base station in the set of base stations includes one or more operational units that enable telecommunication between two or more user equipment (UE). The input parameter values may include, but not be limited to, call detail records (CDR), one or more performance metric values, and the like. The system predicts that one or more operational units of the base stations are susceptible to failure and cause call drop-offs. The system transmits a set of signals to a monitoring unit that provides audio-visual indications of the operational units susceptible to failure. The system may trigger the execution of one or more auto-corrective processor-executable instructions for resolving and preventing failures to the susceptible operational units.
[0052] The various embodiments throughout the disclosure will be
explained in more detail with reference to FIGS. 1-5.
[0053] FIG. 1 illustrates an exemplary network architecture for
implementing a system (108) for detecting at least one anomaly associated with one or more operational units connected with at least one base station in a network, in accordance with embodiments of the present disclosure.
[0054] Referring to FIG. 1, the network architecture (100) may include one
or more computing devices or user equipments (104-1, 104-2…104-N) associated with one or more users (102-1, 102-2…102-N) in an environment. A person of ordinary skill in the art will understand that one or more users (102-1, 102-2…102-N) may be individually referred to as the user (102) and collectively referred to as the users (102). Similarly, a person of ordinary skill in the art will understand that one or more user equipments (104-1, 104-2…104-N) may be individually referred to as the user equipment (104) and collectively referred to as the user equipment (104). A person of ordinary skill in the art will appreciate that the terms “computing device(s)” and “user equipment” may be used interchangeably throughout the disclosure. Although three user equipments (104) are depicted in FIG. 1, any number of the user equipments (104) may be included without departing from the scope of the ongoing description.
[0055] In an embodiment, the user equipment (104) may include smart
devices operating in a smart environment, for example, an Internet of Things (IoT) system. In such an embodiment, the user equipment (104) may include, but is not
20 limited to, smartphones, smart watches, smart sensors (e.g., mechanical, thermal, electrical, magnetic, etc.), networked appliances, networked peripheral devices, networked lighting system, communication devices, networked vehicle accessories, networked vehicular devices, smart accessories, tablets, smart television (TV), computers, smart security system, smart home system, other devices for monitoring
25 or interacting with or for the users (102) and/or entities, or any combination thereof. A person of ordinary skill in the art will appreciate that the user equipment (104) may include, but is not limited to, intelligent, multi-sensing, network-connected devices, that can integrate seamlessly with each other and/or with a central server or a cloud-computing system or any other device that is network-connected.

[0056] In an embodiment, the user equipment (104) may include, but is not
limited to, a handheld wireless communication device (e.g., a mobile phone, a smart phone, a phablet device, and so on), a wearable computer device (e.g., a head-mounted display computer device, a head-mounted camera device, a wristwatch computer device, and so on), a Global Positioning System (GPS) device, a laptop computer, a tablet computer, or another type of portable computer, a media playing device, a portable gaming system, and/or any other type of computer device with wireless communication capabilities, and the like. In an embodiment, the user equipment (104) may include, but is not limited to, any electrical, electronic, electro-mechanical equipment, or a combination of one or more of the above devices, such as virtual reality (VR) devices, augmented reality (AR) devices, a laptop, a general-purpose computer, a desktop, a personal digital assistant, a tablet computer, a mainframe computer, or any other computing device, wherein the user equipment (104) may include one or more in-built or externally coupled accessories including, but not limited to, a visual aid device such as a camera, an audio aid, a microphone, a keyboard, and input devices for receiving input from the user (102) or the entity, such as a touchpad, a touch-enabled screen, an electronic pen, and the like. A person of ordinary skill in the art will appreciate that the user equipment (104) may not be restricted to the mentioned devices and various other devices may be used. The architecture may include a monitoring unit (110) having a user interface that provides audio-visual indications to the user based on a set of signals transmitted by the system (108). In an embodiment, the monitoring unit (110) may be implemented on a UE (104) and may be used by operators of the system (108).
[0057] In an embodiment, the network (106) may include at least one of a
25 Fifth Generation (5G) network, 6G network, or the like. The network (106) may enable the user equipment (104) to communicate with other devices in the network architecture (100) and/or with the system (108). The network (106) may include a wireless card or some other transceiver connection to facilitate this communication. In another embodiment, the network (106) may be implemented as, or include any 30 of a variety of different communication technologies such as a wide area network
(WAN), a local area network (LAN), a wireless network, a mobile network, a Virtual Private Network (VPN), the Internet, the Public Switched Telephone Network (PSTN), or the like. In an embodiment, the network (106) may include one or more base stations (112), as depicted in FIG. 1, to facilitate communication between one or more UEs (104).
[0058] Base stations (112) serve as the essential infrastructure nodes in
wireless communication networks, facilitating seamless connectivity between the user and the broader network infrastructure. The base stations (112) are equipped with antennas and sophisticated transceiver equipment, enabling them to transmit
and receive radio signals over the air interface. Base stations (112) provide radio coverage within their designated cell areas, ensuring that users can make voice calls, send messages, and access data services with reliability and efficiency. Base stations manage critical functions such as handover management, resource allocation, and interference mitigation, optimizing network performance and ensuring a consistent user experience.
[0059] The base station (112) may have coverage defined to be a
predetermined geographic area based on the distance over which a signal may be transmitted. The base station (112) may include, but not be limited to, a wireless access point, evolved NodeB (eNodeB), 5G node or next generation NodeB (gNB),
wireless point, transmission/reception point (TRP), and the like. In an embodiment, the base station (112) may include one or more operational units (113-1, 113-2) (as shown in FIG. 3) that enable telecommunication between two or more UEs (104). In an embodiment, the operational units (113-1, 113-2) encompass a diverse range of components crucial for the functioning of the network. These may include, but are not limited to, transceivers responsible for transmitting and receiving signals, baseband units (BBUs) handling signal processing tasks, remote radio units (RRUs) amplifying and relaying signals, antennas for transmitting and receiving electromagnetic waves, mobile switching centres (MSCs) coordinating call routing and mobility management, and radio network control units overseeing network operations and resource allocation. In an embodiment, the one or more operational
units (113-1, 113-2) may include, but not be limited to, a plurality of network function units such as Access and Mobility Management Function (AMF) unit, Session Management Function (SMF) unit, Network Exposure Function (NEF) units, or any custom-built functions executing one or more processor-executable instructions, but not limited thereto.
[0060] In accordance with embodiments of the present disclosure, the
system (108) may be designed and configured to predict failures causing call registration drop-offs in base stations by leveraging various data sources and machine learning techniques to anticipate potential issues before they occur. In an embodiment, the system (108) may be configured to raise alerts to prevent the predicted failures.
[0061] In a summarized aspect, the system may be configured to employ the
following steps:
. Data Collection: Collect data from all base stations in the network. The collected data includes information such as signal strength, traffic volume, and connection status. In an aspect, the system may be configured to gather data from operational units connected to the base stations. The gathered data may include device health metrics, usage patterns, and communication logs.
. Preprocessing: The system may be configured to handle missing values, outliers, and noise in the collected data. In an aspect, the system may be configured to normalize or standardize the data to ensure uniformity and comparability across different features.
. Feature Engineering: The system may be configured to identify features relevant to anomaly detection, such as sudden changes in traffic patterns, unusual device behaviour, or deviations from expected performance metrics. In addition, the system may be configured to reduce the dimensionality of the feature space if needed to improve computational efficiency.
. Anomaly Detection: The system may be configured to utilize statistical techniques (e.g., Z-score, deviation from the mean, or distribution-based methods) to identify anomalies in the data. In an aspect, the system may be configured to train supervised or unsupervised machine learning models (e.g., Isolation Forest, One-Class SVM, Autoencoders) to detect anomalies based on the labelled or unlabelled data; an illustrative sketch of this step follows the list below. The isolation forest is an unsupervised learning algorithm used for anomaly detection. It isolates anomalies by randomly selecting features and then randomly selecting a split value between the maximum and minimum values of the selected feature. The one-class support vector machine (SVM) is a learning algorithm used for anomaly detection in which the model is trained on 'normal' instances only (in a one-class classification setting). It learns a decision boundary that separates the normal data points from the outliers. The autoencoders are a type of neural network used for unsupervised learning tasks, including anomaly detection and dimensionality reduction. They aim to learn a compressed representation (encoding) of the input data, and then reconstruct the data from this representation.
. Alerting and Visualization: The system may be configured to generate alerts whenever anomalies are detected, indicating the affected operational units and the nature of the anomaly.
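By way of illustration only, the data collection, preprocessing, anomaly detection, and alerting steps listed above may be sketched as follows. This is a minimal sketch assuming scikit-learn's IsolationForest and hypothetical per-unit metrics (signal strength, traffic volume, dropped-call ratio); the feature set, contamination rate, and alerting channel are assumptions and are not prescribed by the disclosure.

```python
# Minimal sketch of the collect -> preprocess -> detect -> alert pipeline above.
# Feature names, the contamination rate, and the print-based alert are assumptions.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.ensemble import IsolationForest

# Hypothetical per-operational-unit readings:
# [signal_strength_dBm, traffic_volume_mbps, dropped_call_ratio]
readings = np.array([
    [-70.0, 120.0, 0.01],
    [-72.0, 115.0, 0.02],
    [-71.0, 118.0, 0.01],
    [-95.0, 310.0, 0.22],   # unusually weak signal under heavy load
])

# Preprocessing: standardize so the features are comparable.
scaled = StandardScaler().fit_transform(readings)

# Anomaly detection: Isolation Forest isolates outliers via random splits.
detector = IsolationForest(contamination=0.25, random_state=42)
labels = detector.fit_predict(scaled)        # -1 = anomaly, 1 = normal

# Alerting: flag operational units whose readings look anomalous.
for unit_index, label in enumerate(labels):
    if label == -1:
        print(f"ALERT: operational unit {unit_index} flagged as susceptible to failure")
```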
[0062] FIG. 2 illustrates a block diagram (200) of the system (108), in
accordance with embodiments of the present disclosure.
[0063] The system includes one or more processors (208), and a memory
(204) coupled to the one or more processors (208).
[0064] The memory (204) includes computer-implemented instructions to configure the one or more processors to perform a method for detecting at least one anomaly associated with the one or more operational units connected with at least one base station. The memory (204) may be configured to store one or more computer-readable instructions or routines in a non-transitory computer readable storage medium, which may be fetched and executed to create or share data packets over a network service. The memory (204) may comprise any non-transitory storage device including, for example, volatile memory such as Random Access Memory (RAM), or non-volatile memory such as Erasable Programmable Read-Only Memory (EPROM), flash memory, and the like.
[0065] The one or more processor(s) (208) may be implemented as one or more microprocessors, microcomputers, microcontrollers, edge or fog microcontrollers, digital signal processors, central processing units, logic circuitries, and/or any devices that process data based on operational instructions. Among other capabilities, the one or more processor(s) (208) may be configured to fetch and execute computer-readable instructions stored in a memory (204) of the system (108).
15 fetch and execute computer-readable instructions stored in a memory (204) of the system (108).
[0066] Referring to FIG. 2, the system (108) may include an interfacing unit
(206). The interfacing unit (206) may include a variety of interfaces, for example, interfaces for data input and output devices, referred to as I/O devices, storage
devices, and the like. The interfacing unit (206) is configured to receive a user input representing a machine learning (ML) model selection selected from a plurality of ML models. The interfacing unit (206) may facilitate communication to/from the system (108). The interfacing unit (206) may also provide a communication pathway for one or more components of the system (108). Examples of such components include, but are not limited to, one or more processor(s) (208) and a database (210).
[0067] In an embodiment, the one or more processor(s) (208) (processing
unit) may be implemented as a combination of hardware and programming (for example, programmable instructions) to implement one or more functionalities of
the one or more processor(s) (208). In the examples described herein, such combinations of hardware and programming may be implemented in several different ways. For example, the programming for the one or more processor(s) (208) may be processor-executable instructions stored on a non-transitory machine-readable storage medium and the hardware for the one or more processor(s) (208) may comprise a processing resource (for example, one or more processors), to execute such instructions. In the present examples, the machine-readable storage medium may store instructions that, when executed by the processing resource, implement the one or more processor(s) (208). In such examples, the system (108) may include the machine-readable storage medium storing the instructions and the processing resource to execute the instructions, or the machine-readable storage medium may be separate but accessible to the system (108) and the processing resource. In other examples, the one or more processor(s) (208) may be implemented by electronic circuitry.
[0068] In an embodiment, the database (210) includes data that may be either stored or generated as a result of functionalities implemented by any of the components of the processor (208) or the one or more processor(s) (208). In an embodiment, the database (210) may be separate from the system (108). In an embodiment, the database (210) may include, but not be limited to, a relational database, a distributed database, a cloud-based database, or the like.
[0069] In an exemplary embodiment, the one or more processor(s) (208)
may include one or more engines selected from any of a data acquisition engine (212), an artificial intelligence (AI) engine (214), a triggering engine (216), and other units (218) having functions that may include, but are not limited to, testing, storage, and peripheral functions, such as wireless communication unit for remote operation, audio unit for alerts and the like.
[0070] In an embodiment, the data acquisition engine (212) may be
configured to receive the one or more input parameter values from the base station (112). In an embodiment, the one or more input parameter values may be received
from a set of multiple base stations communicatively connected via a network. For example, the one or more input parameter values may be associated with a log pertaining to a customer device. In an embodiment, the one or more input parameter values include caller ID, recipient ID, call initiation timestamp, session length, call quality metrics, and call termination reason. In an aspect, the caller ID is a unique identifier for the caller initiating the call. The recipient ID is a unique identifier for the recipient or destination of the call. The call initiation timestamp is a timestamp indicating when the call was initiated. The session length is a duration of the call session, measured from initiation to termination. The call quality metrics are the metrics that quantify the quality of the call, such as signal strength, signal-to-noise ratio, jitter, packet loss, and latency. Further, the call termination reason indicates a reason for call termination, which may include factors like user hang-up, network congestion, hardware failure, or software error.
[0071] In an embodiment, the one or more input parameter values may
include, but not be limited to, the CDR, the one or more performance metric values, and the like. In an embodiment, the CDR may include, but not be limited to, start time, duration, end time, time to connection, success of the call, failures of the call, parties to the call, location of parties, unique identifiers of devices of the parties, unique identifiers associated with one or more operational units (113-1, 113-2), and other details associated with a call, text messages or any telecommunication exchanges between two or more entities over the network (106). In an embodiment, the one or more CDR data may be collected by the one or more base stations (112) in the network (106) when the UEs (104) engage in such telecommunication exchange. In an embodiment, telecommunication exchanges may include, but not be limited to, text messages, phone calls, multimedia messaging, and the like. In an embodiment, the one or more performance metric values may include, but not be limited to, Customer Success Ratios (CSR), transmittal speeds, load capacity, latching rate, and the like.
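As one purely illustrative representation of a call detail record built from the parameter values listed above, a simple container type could look like the following; the field names are assumptions for the sketch and not a schema mandated by the disclosure.

```python
# Illustrative container for one call detail record (CDR); the field names are
# assumptions based on the parameters described above, not a mandated schema.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class CallDetailRecord:
    caller_id: str                 # unique identifier of the calling party
    recipient_id: str              # unique identifier of the called party
    call_initiated_at: datetime    # call initiation timestamp
    session_length_s: float        # duration from initiation to termination
    signal_strength_dbm: float     # example call quality metric
    packet_loss_pct: float         # example call quality metric
    termination_reason: str        # e.g. "user_hangup", "network_congestion"
    operational_unit_id: str       # unit (e.g. an RRU or BBU) that served the call

record = CallDetailRecord("A-1001", "B-2002", datetime(2024, 5, 1, 10, 30),
                          42.5, -78.0, 1.2, "network_congestion", "RRU-17")
```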
[0072] The one or more processors are configured to predict the at least one
state of the one or more operational units based on the at least one detected anomaly.

[0073] In an embodiment, the AI engine (214) may be configured to predict
at least one state of the one or more operational units of the base stations. In an aspect, the at least one state is a “susceptible to failure” state. The “susceptible to failure” state refers to the condition of an operational unit or component that is at increased risk of experiencing a failure or malfunction and causing call drop-offs. For example, the operational unit(s) of the base station may be configured to operate in various states, including an operating state, a standby state, a maintenance state, a faulty state, an inactive state, and an out-of-service state. In an embodiment, the AI engine (214) may be configured to predict the one or more operational units of the base stations susceptible to failure and causing call drop-offs based on the one or more input parameter values. In an embodiment, the AI engine (214) may be indicative of a pretrained machine learning model, expert system, or the like, but not limited to the same, that uses the one or more input parameter values to predict the one or more operational units (113-1, 113-2) of the base stations (112) that are susceptible to failure, and thereby cause call drop-offs. The one or more processors are configured to input the one or more received input parameter values to at least one machine learning (ML) model (also referred to as a machine learning module) to detect the at least one anomaly associated with the one or more operational units based on the one or more received input parameter values. The AI engine (214) receives one or more input parameter values that describe the current state and conditions of the operational units within the base stations. These input parameter values are then systematically fed into one or more pre-trained Machine Learning (ML) models. Each machine learning model processes the input parameter values and applies the learned patterns to develop predictions or scores. These scores are then used by the ML models to pinpoint deviations from the expected behaviour through the utilization of statistical anomalies, unexpected patterns, or outlier detection techniques, thereby allowing the detection of at least one anomaly. The one or more processors are configured to predict the at least one state of the one or more operational units based on the at least one detected anomaly. In an embodiment, the system is configured to generate at least one signal representing the at least one predicted state of the one or more operational units.

[0074] In an embodiment, the AI engine (214) may be configured to
dynamically select one or more pretrained machine learning (ML) models having an evaluation metric above an evaluation threshold. In an embodiment, the AI engine (214), to perform the prediction of the one or more operational units, may be configured to detect one or more anomalies based on the one or more input parameter values using the one or more ML models. The ML model is further configured to analyze the generated score and the identified deviations. Using the detected anomalies as input, the AI engine predicts the current or future state of the operational units. For example, the system predicts that the data demand suddenly spikes (acting as an anomaly). Then, based on the predicted data demand, the system predicts the state of the operational unit. For example, the predicted data demand is 20 Gbps, and the current data-serving capacity of the operational unit is 16 Gbps. Based on the predicted data demand and the current capacity, the ML model is configured to predict the state of the operational unit. Further, the AI engine (214) may be configured to select the output of an optimal ML model from one or more ML models. In an embodiment, the evaluation metric may include, but not be limited to, mean square loss, root mean square loss, accuracy, precision, recall, or any other custom evaluation metric for evaluating the performance of output generated by the ML model.
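A minimal sketch of the dynamic model selection described above is given below, assuming the evaluation metric is recall on labelled drop-off data and the evaluation threshold is 0.8; the candidate models, the metric, and the threshold value are illustrative assumptions and not prescribed by the disclosure.

```python
# Sketch: keep the pretrained candidate whose evaluation metric clears a threshold.
# The candidates, the recall metric, and the 0.8 threshold are assumptions.
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split
from sklearn.datasets import make_classification

# Synthetic stand-in for labelled historical data (1 = failure / drop-off).
X, y = make_classification(n_samples=500, weights=[0.9, 0.1], random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

candidates = {
    "random_forest": RandomForestClassifier(random_state=0),
    "logistic_regression": LogisticRegression(max_iter=1000),
}

EVALUATION_THRESHOLD = 0.8
best_name, best_score = None, -1.0
for name, model in candidates.items():
    model.fit(X_tr, y_tr)
    score = recall_score(y_val, model.predict(X_val))
    if score >= EVALUATION_THRESHOLD and score > best_score:
        best_name, best_score = name, score

if best_name is None:
    print("no candidate cleared the evaluation threshold")
else:
    print(f"selected model: {best_name} (recall={best_score:.2f})")
```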
[0075] The machine learning module may use machine learning algorithms that refer to a set of algorithms and statistical models that enable computers to learn and improve from experience without being explicitly programmed. In the context of anomaly detection based on the received parameter values, machine learning algorithms can be used to identify and filter out outliers or noisy data points, improving the accuracy of the data analysis over a predetermined number of days. Outliers or noisy data points refer to data that does not conform to the expected pattern or trend and can significantly affect the accuracy of the analysis. Machine learning algorithms can be trained to identify these outliers or noisy data points and filter them out from the analysis. By using machine learning algorithms to filter out outliers or noisy data points, the accuracy of the anomaly detection can be improved over the predetermined number of days. This is because machine learning algorithms can learn from past data and identify patterns and trends that are not immediately apparent to human analysts.
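As a hedged illustration of the outlier filtering described above, a simple z-score filter over a daily call-success-ratio series could look like this; the two-sigma cut-off is an illustrative convention, not a value fixed by the disclosure.

```python
# Sketch of outlier filtering prior to analysis: drop points whose z-score
# exceeds 2. The 2-sigma cut-off and the sample series are illustrative only.
import numpy as np

daily_csr = np.array([0.97, 0.96, 0.98, 0.97, 0.55, 0.96, 0.97])  # call success ratios
z = (daily_csr - daily_csr.mean()) / daily_csr.std()
filtered = daily_csr[np.abs(z) <= 2.0]   # the 0.55 reading is discarded as an outlier
print("kept:", filtered)
```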
[0076] In an embodiment, the triggering engine (216) may transmit the at
least one generated signal to a monitoring unit (110). In an embodiment, the monitoring unit is configured to execute one or more auto-corrective processor-executable instructions for resolving and preventing failures to the one or more operational units (113-1, 113-2) having the determined susceptible to failure state. In an embodiment, the monitoring unit is configured to resolve and prevent failures by performing at least one operation, including informing a network operator and initializing a self-diagnosing set up at the one or more operational units (113-1, 113-2) having the determined susceptible to failure state. The monitoring unit provides an audio-visual indication of the operational units (113-1, 113-2) susceptible to failure. In an embodiment, the triggering engine (216) may transmit the at least one generated signal to the monitoring unit (110), which provides audio-visual indications of the operational units (113-1, 113-2) susceptible to failure based on the output of the optimal ML model. In an embodiment, the triggering engine (216) may trigger execution of one or more auto-corrective processor-executable instructions for resolving and preventing failures to the susceptible operational units (113-1, 113-2). In an embodiment, the triggering engine (216) may trigger execution of one or more auto-corrective processor-executable instructions for resolving and preventing failures to the susceptible operational units (113-1, 113-2) upon transmitting the generated signals. Upon detecting a susceptible to failure state, the monitoring unit (110) automatically transmits a notification to the network operator or designated personnel. For example, the notification includes detailed information about the detected anomaly, and recommended actions. In another example, the monitoring unit (110) triggers a self-diagnosing setup at the identified operational units (113-1, 113-2).
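The alert-and-auto-correction flow of the triggering engine and monitoring unit might be sketched as below; the state label, the notification channel, and the corrective action are placeholders assumed purely for illustration.

```python
# Sketch of the alert / auto-correction flow described above; the notification
# mechanism and the self-diagnosis call are illustrative placeholders.
def handle_predicted_state(unit_id: str, state: str, detail: str) -> None:
    if state == "susceptible_to_failure":
        # Inform the network operator (placeholder for e-mail/SMS/dashboard push).
        print(f"ALERT to operator: unit {unit_id} susceptible to failure ({detail})")
        # Trigger a self-diagnosing routine on the affected unit (placeholder).
        print(f"Triggering self-diagnosis on unit {unit_id}")

handle_predicted_state("RRU-17", "susceptible_to_failure", "rising packet loss")
```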
[0077] In an embodiment, to predict the one or more operational units (113-
1, 113-2) susceptible to failure causing call registration drop-offs, the one or more processors (208) may be configured to capture the log associated with the customer device in a call data record in real-time. Further, the one or more processors (208) may be configured to apply the AI engine (214) to the call data records. In an example embodiment, the AI engine (214) may be configured to determine one or more conditions causing a drop-off in a customer device registration. Further, the AI engine (214) may be configured to automatically detect customer device registration drop-off based on the determined one or more conditions.
[0078] In an aspect, the one or more processors are configured to
continuously capture the call data records in real-time, focusing specifically on the
logs associated with customer devices within the telecommunications network. These records contain information about call sessions, including timestamps, caller and recipient IDs, call quality metrics, and reasons for call termination. Through the utilization of an AI engine, these captured records undergo thorough analysis to discern the conditions leading to drop-offs in customer device registration. This analysis is conducted using advanced machine learning algorithms, which sift through the data to uncover patterns, correlations, and anomalies indicative of potential issues. The AI engine is then capable of identifying specific conditions, such as network congestion, signal interference, hardware or software failures, and environmental factors, that contribute to the drop-offs in device registration. Based on these findings, the AI engine can make informed decisions, ranging from alerting network operators to triggering maintenance tasks or resource reallocation, aimed at mitigating the impact of the identified conditions and improving network reliability. Moreover, through iterative learning from historical data and feedback, the AI engine continuously refines its algorithms, enhancing its ability to accurately detect and predict conditions leading to drop-offs in customer device registration over time.
[0079] In an embodiment, the one or more conditions include technical
issues associated with the one or more operational units, registration interface complexity, and security vulnerabilities.

[0080] In an embodiment, the one or more processors are configured to
perform an exploratory data analysis on the one or more input parameter values and select one or more optimal input parameter values required for inputting to the AI engine.
[0081] In an embodiment, the at least one machine learning (ML) model is
a regression-based ML model, a classification-based ML model, a clustering-based ML model, a dimensionality reduction-based ML model, and a reinforcement learning-based ML model. The regression-based ML model predicts a continuous numerical value based on input features. The regression-based ML model learns
relationships between input variables (features) and a continuous target variable. Examples include predicting data usage based on features like time, location, and number of users or forecasting peak hours using historical data. The classification-based ML model predicts the category or class label of new observations based on past data. It assigns input data points to predefined categories or classes. For instance, classifying times as busy hours or normal operating hours. The clustering-based ML model groups similar data points into clusters based on their characteristics. It discovers natural groupings in data without predefined categories. For example, segmenting customers into different groups based on their data usage. The dimensionality reduction-based ML model reduces the number of input variables (features) while preserving important information. The dimensionality reduction-based ML model simplifies the complexity of the dataset by transforming high-dimensional data into a lower-dimensional representation. The reinforcement learning-based ML model learns to make decisions by interacting with an environment to achieve a specific goal over time.
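For illustration only, the model families named above map naturally onto common scikit-learn estimators; the particular estimator choices below are assumptions and not prescribed by the disclosure.

```python
# Illustrative mapping of the model families above to common estimators;
# these particular choices are assumptions, not requirements of the disclosure.
from sklearn.linear_model import LinearRegression        # regression-based
from sklearn.ensemble import RandomForestClassifier       # classification-based
from sklearn.cluster import KMeans                        # clustering-based
from sklearn.decomposition import PCA                     # dimensionality reduction

model_families = {
    "regression":               LinearRegression(),
    "classification":           RandomForestClassifier(),
    "clustering":               KMeans(n_clusters=3, n_init=10),
    "dimensionality_reduction": PCA(n_components=2),
    # Reinforcement learning typically uses a dedicated agent interacting with a
    # simulated network environment and is omitted from this sketch.
}
print(list(model_families))
```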
[0082] In an embodiment, the system includes a display unit (220)
configured to receive the at least one generated signal. The display unit (220) is further configured to represent the at least one detected anomaly or at least one predicted state of the one or more operational units.
[0083] In an embodiment, the one or more processors (208) may be
configured to trace an origin of the determined one or more conditions causing the drop-off. The root cause analysis is performed based on one or more audio-visual indications generated by the AI engine (214) based on the analysed captured call data records. Based on the traced origin of the anomalies, the system conducts a detailed root cause analysis. The root cause analysis involves examining potential factors and events contributing to anomalies, such as system failures, environmental changes, or operational errors. Under the root cause analysis, the system may be configured to determine the reason for one or more of the conditions causing the drop-off. For example, if the data demand is increased, then what is the reason behind it? For example, the reason may be that one or more operational units are out of service, causing a load on other operational units. The system may be configured to identify those out-of-service operational units and may be configured to notify the network operator of the same. During the root cause analysis, the first step is to collect data and identify possible causes of problems. Then, cause-and-effect relationships are established to uncover the root cause(s). Once potential root causes are identified, the system is configured to verify the identified root causes against the available data. Finally, the system proposes solutions or corrective actions to address the root cause(s) identified.
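A minimal sketch of the root-cause-analysis steps above (collect candidate causes, verify them against available data, and propose corrective action), reusing the 20 Gbps demand versus 16 Gbps capacity example; the condition names and checks are illustrative assumptions.

```python
# Sketch of the root-cause-analysis steps above; metric names, thresholds, and
# the proposed actions are illustrative assumptions only.
def root_cause_analysis(metrics: dict) -> list[str]:
    findings = []
    # Steps 1-2: candidate cause and its cause-and-effect check.
    if metrics.get("out_of_service_units", 0) > 0 and metrics.get("load_pct", 0) > 90:
        findings.append("load spike traced to out-of-service operational units")
    # Step 3: verify a second candidate cause against available data.
    if metrics.get("predicted_demand_gbps", 0) > metrics.get("capacity_gbps", 0):
        findings.append("predicted demand exceeds current unit capacity")
    # Step 4: propose a corrective action for each verified cause.
    return [f"{cause}; notify operator / trigger self-diagnosis" for cause in findings]

print(root_cause_analysis({"out_of_service_units": 2, "load_pct": 95,
                           "predicted_demand_gbps": 20, "capacity_gbps": 16}))
```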
[0084] Further, the one or more processors (208) may be configured to
generate, based on results of the root cause analysis, a visual representation of the one or more metrics indicative of the detected anomaly.
[0085] In an embodiment, the one or more processors (208) may be
configured to perform an exploratory data analysis on the one or more input parameter values. Exploratory Data Analysis (EDA) is an initial step in the data
analysis process, where the dataset is examined and explored to understand its characteristics, uncover patterns, identify anomalies, and formulate hypotheses. In one aspect, using the EDA, the present system is configured to identify optimal input parameter values for inserting into the AI engine. To identify the optimal input parameter values, the EDA may be configured to analyze the performance of different parameter combinations by using cross-validation to evaluate each combination of parameters to reduce bias and variance in performance estimation. Further, the one or more processors (208) may be configured to select one or more optimal input parameter values required for employing the AI engine (214).
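The cross-validated search over parameter combinations described above could be sketched as follows; the feature names, the candidate subsets, and the use of logistic regression as the evaluation model are assumptions for illustration.

```python
# Sketch of selecting an informative subset of input parameters via cross-validation.
# Feature names, candidate subsets, and the evaluation model are assumptions.
from itertools import combinations
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.datasets import make_classification

features = ["signal_strength", "traffic_volume", "latency", "packet_loss"]
X, y = make_classification(n_samples=400, n_features=4, n_informative=3,
                           n_redundant=1, random_state=0)

best_subset, best_score = None, -1.0
for k in (2, 3, 4):
    for subset in combinations(range(len(features)), k):
        # 5-fold cross-validation reduces bias and variance in the estimate.
        score = cross_val_score(LogisticRegression(max_iter=1000),
                                X[:, list(subset)], y, cv=5).mean()
        if score > best_score:
            best_subset, best_score = subset, score

print("optimal parameters:", [features[i] for i in best_subset], round(best_score, 3))
```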
[0086] In an embodiment, the at least one machine learning (ML) model is
a regression-based ML model, a classification-based ML model, a clustering-based ML model, a dimensionality reduction-based ML model, and a reinforcement learning-based ML model.
[0087] FIG. 3 illustrates a diagram (300) showing interaction between various components of the system (108) and the one or more processors, in accordance with embodiments of the present disclosure. As shown, the AI engine (214) may be trained using historical data of the one or more input parameter values stored in the database (210). In an embodiment, the system (108) may visualize and analyze the data for preprocessing (as shown by 302). In an example, the data may be visualized for the network operator. In another aspect, the system may be configured to provide a number of data sources to the network operator, from which the network operator may select the data sources used for inputting data into the AI engine (214). In an example, the preprocessing may include, but not be limited to, imputing data, removing missing values, tokenization, scaling, splitting data into train and test sets, and the like. In an embodiment, the AI engine (214) may train one or more ML models for predicting or forecasting the one or more operational units susceptible to failure and thereby causing call drop-offs. For example, the one or more trained models are used to detect anomalies. The system may be configured to consider the outputs of one or more trained ML models. Based on the performance of the ML models and the requirements of the network operator, the system may be configured to select the most efficient ML model. In an embodiment, the AI engine (214) may select one of the trained ML models based on a predefined evaluation metric (as shown by 304). In an embodiment, the ML model may be retrained periodically as the data acquisition engine (212) collects and stores one or more input parameter values in the database (210).
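For illustration only, the preprocessing (302) and metric-based model selection (304) described above may be sketched as follows. The sketch assumes scikit-learn and pandas; the imputation rule, split ratio, and the evaluate callable are hypothetical and not the claimed implementation.

    # Illustrative only: impute, scale, and split the input parameter values,
    # then retain the candidate model scoring highest on a predefined metric.
    import pandas as pd
    from sklearn.model_selection import train_test_split
    from sklearn.preprocessing import StandardScaler

    def preprocess(cdr_frame: pd.DataFrame, feature_columns):
        # Impute missing values with per-column medians and scale the features.
        features = cdr_frame[feature_columns].fillna(cdr_frame[feature_columns].median())
        scaled = StandardScaler().fit_transform(features)
        # Split into train and test sets for later evaluation.
        return train_test_split(scaled, test_size=0.2, random_state=0)

    def select_model(candidates, evaluate):
        # 'evaluate' is a callable implementing the predefined evaluation metric;
        # the candidate with the highest score is retained.
        return max(candidates, key=evaluate)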

[0088] In an aspect, the system may be configured to employ data visualization (as shown by 302) such that the plurality of machine learning models is shown as a list to the user. The user may be able to select at least one machine learning model from the list shown. The data visualization involves presenting data in graphical or visual formats to gain insights, identify patterns, and communicate findings effectively.
[0089] The model selection (as shown by 304) option involves choosing the most appropriate machine learning algorithm or statistical model for a given problem based on factors such as data characteristics, performance metrics, interpretability, and computational resources.

[0090] The model training (as shown by 306) may involve the following steps (an illustrative sketch of these steps is provided after the list):
- Feature Engineering: Extract or transform relevant features from the data that can help discriminate between normal and anomalous instances.
- Selection of Anomaly Detection Algorithm: Choose an appropriate anomaly detection algorithm based on the characteristics of the data and the types of anomalies expected.
- Training Data Preparation: Split the dataset into training and validation sets, ensuring a sufficient representation of normal and anomalous instances.
- Model Training: Train the anomaly detection model on the training data, adjusting algorithm parameters to optimize performance.
- Model Evaluation: Evaluate the trained model using the validation dataset, assessing its ability to detect anomalies while minimizing false positives and false negatives.
- Threshold Selection: Based on model outputs, determine an appropriate threshold or decision boundary for classifying instances as normal or anomalous.
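A minimal, non-limiting Python sketch of the above training steps is provided below. It assumes scikit-learn; the use of an Isolation Forest, the split sizes, and the quantile-based threshold rule are illustrative assumptions rather than the claimed algorithm.

    # Illustrative only: prepare data, train an anomaly detector, evaluate its
    # score distribution, and pick a decision threshold.
    import numpy as np
    from sklearn.ensemble import IsolationForest
    from sklearn.model_selection import train_test_split

    def train_anomaly_detector(features: np.ndarray, contamination=0.01):
        # Training Data Preparation: hold out a validation set.
        train, valid = train_test_split(features, test_size=0.2, random_state=0)
        # Model Training: fit the chosen anomaly detection algorithm.
        model = IsolationForest(contamination=contamination, random_state=0).fit(train)
        # Model Evaluation / Threshold Selection: use the validation score
        # distribution to pick a decision boundary (lower scores = more anomalous).
        scores = model.score_samples(valid)
        threshold = np.quantile(scores, contamination)
        return model, threshold

    def is_anomalous(model, threshold, sample: np.ndarray) -> bool:
        # Classify a new instance as anomalous if its score falls below the threshold.
        return model.score_samples(sample.reshape(1, -1))[0] < threshold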
[0091] The system may be configured to input the one or more received input parameter values to the selected ML model to detect the at least one anomaly associated with the one or more operational units based on the one or more received input parameter values. The system may be configured to predict the at least one state of the one or more operational units based on the at least one detected anomaly. In an embodiment, the system is configured to generate at least one signal representing the at least one predicted state of the one or more operational units. In an embodiment, the AI engine (214) may transmit the at least one signal to the monitoring unit (110) such that the monitoring unit (110) provides audio-visual indications to the operator, indicating the one or more operational units (113-1, 113-2) that are susceptible to failure. In an embodiment, on providing the audio-visual indications, the operator may manually intervene and resolve issues that may make the one or more operational units (113-1, 113-2) susceptible to failure. In an example, if the AI engine (214) predicts that an operational unit, such as an antenna of the base station, is degrading in performance due to maintenance issues, the AI engine (214) may provide audio-visual indications to the operator indicating the degradation of performance of said operational unit (113-1, 113-2). The AI engine (214) may also be configured to provide details such as the antenna's location, among others. In other embodiments, the AI engine (214) may trigger execution of one or more processor-executable instructions that may resolve the issues that may potentially cause failure in the operational unit (113-1, 113-2). In an example, a micro-service function in a 5G network may experience degradation in performance due to improper management of cache. In such examples, the AI engine (214) may execute a patch or a set of processor-executable instructions allowing said operational unit (113-1, 113-2) to manage cache without degrading performance.
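For illustration only, the detect-predict-notify-correct flow described in this paragraph may be sketched as follows. The StateSignal structure, the monitoring_unit.notify call, and the corrective_actions mapping are hypothetical placeholders and not the claimed interfaces; the anomaly model is assumed to expose a scikit-learn-style score_samples method.

    # Illustrative only: score an operational unit, signal the monitoring unit,
    # and trigger any registered auto-corrective instruction.
    from dataclasses import dataclass
    import numpy as np

    @dataclass
    class StateSignal:
        unit_id: str
        predicted_state: str   # e.g. "susceptible to failure"
        detail: str

    def handle_unit(model, threshold, unit_id, parameter_values: np.ndarray,
                    monitoring_unit, corrective_actions):
        score = model.score_samples(parameter_values.reshape(1, -1))[0]
        if score < threshold:
            signal = StateSignal(unit_id, "susceptible to failure",
                                 f"anomaly score {score:.3f} below threshold {threshold:.3f}")
            # Transmit the signal so the monitoring unit can raise audio-visual indications.
            monitoring_unit.notify(signal)
            # Trigger any auto-corrective instructions registered for this unit,
            # e.g. a cache-management patch for a degraded 5G micro-service.
            action = corrective_actions.get(unit_id)
            if action is not None:
                action()
            return signal
        return None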
[0092] FIG. 4 illustrates a method (400) for detecting at least one anomaly associated with one or more operational units connected with the at least one base station in the network, in accordance with embodiments of the present disclosure.
[0093] At step (402), the method (400) may include receiving, by the processor (208) of FIG. 2, one or more input parameter values from the at least one base station. In an embodiment, the one or more input parameter values are associated with a log pertaining to a customer device. In an example, the one or more input parameter values include caller ID, recipient ID, call initiation timestamp, session length, call quality metrics, and call termination reason.
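A hypothetical Python sketch of a call detail record carrying the example input parameter values listed above, and its flattening into numeric inputs for an ML model, is shown below; all field names and the chosen features are illustrative assumptions.

    # Illustrative only: a CDR-like record and a simple feature flattening.
    from dataclasses import dataclass

    @dataclass
    class CallDetailRecord:
        caller_id: str
        recipient_id: str
        call_initiation_timestamp: float   # e.g. Unix epoch seconds
        session_length_s: float
        call_quality_metrics: dict         # e.g. {"mos": 3.9, "jitter_ms": 12.0}
        call_termination_reason: str       # e.g. "normal", "radio_link_failure"

    def to_feature_vector(cdr: CallDetailRecord):
        # Flatten a CDR into numeric input parameter values for the ML model.
        return [cdr.session_length_s,
                cdr.call_quality_metrics.get("mos", 0.0),
                cdr.call_quality_metrics.get("jitter_ms", 0.0),
                1.0 if cdr.call_termination_reason != "normal" else 0.0]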
[0094] At step (404), the method (400) includes inputting, by an Artificial Intelligence (AI) engine, the one or more received input parameter values to at least one machine learning (ML) model to detect the at least one anomaly associated with one or more operational units based on the one or more received input parameter values. In an operative aspect, the method (400) begins with receiving the one or more input parameter values that describe the current state or conditions of operational units within the base stations. The AI engine inputs the received parameter values into one or more pre-trained ML models. These one or more pre-trained ML models have been trained for anomaly detection to recognize patterns indicative of normal and anomalous behavior within the operational units. In an example, each machine learning model processes the input parameter values and learned patterns to generate predictions or scores. The ML models identify deviations of the generated scores from the expected behavior, based on statistical anomalies, unexpected patterns, or outlier detection techniques, to detect the at least one anomaly.
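As a non-limiting illustration, deviations of generated scores from expected behavior may be identified with a simple statistical rule such as the z-score check sketched below; the threshold value and the rule itself are illustrative and not the claimed outlier detection technique.

    # Illustrative only: flag scores deviating strongly from the mean.
    import numpy as np

    def deviating_indices(scores: np.ndarray, z_threshold: float = 3.0):
        """Return indices of scores that deviate from the expected behaviour."""
        mean, std = scores.mean(), scores.std()
        if std == 0:
            return np.array([], dtype=int)
        z = np.abs(scores - mean) / std
        return np.flatnonzero(z > z_threshold)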
[0095] At step (406), the method (400) includes predicting, by the AI engine, the at least one state of the one or more operational units based on the at least one detected anomaly. The ML model is further configured to analyze the generated score and the identified deviations. Using the detected anomalies as input, the AI engine predicts the current or future state of the operational units. For example, the system predicts that the data demand suddenly spikes (acting as an anomaly). Then, based on the predicted data demand, the system predicts the state of the operational unit. For example, the predicted data demand is 20 Gbps, while the current data-serving capacity of the operational unit is 16 Gbps. Based on the predicted data demand and the current capacity, the ML model is configured to predict the state of the operational unit. In an example, the at least one state is a susceptible to failure state. In an example, the prediction may be performed by employing an Artificial Intelligence (AI) engine (214) based on the one or more input parameter values. In an embodiment, the AI engine (214) may be configured to perform the prediction of the one or more operational units (113-1, 113-2) by detecting one or more anomalies based on the one or more input parameter values using the one or more ML models. Further, the AI engine (214) may be configured to select the output of an optimal ML model from the one or more ML models.
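For illustration, the worked example above (a predicted demand of 20 Gbps against a current capacity of 16 Gbps) can be expressed as a simple capacity check; the function name and the binary state labels are hypothetical.

    # Illustrative only: flag a unit whose predicted demand exceeds its capacity.
    def predict_state(predicted_demand_gbps: float, current_capacity_gbps: float) -> str:
        if predicted_demand_gbps > current_capacity_gbps:
            return "susceptible to failure"
        return "normal"

    assert predict_state(20.0, 16.0) == "susceptible to failure"
    assert predict_state(12.0, 16.0) == "normal"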
[0096] In an embodiment, for predicting the one or more operational units (113-1, 113-2) susceptible to failure and causing call registration drop-offs, the method may include a step of capturing the log associated with the customer device in a call data record in real-time. Further, the method may include a step of applying the AI engine (214) to the call data records. In an example, the AI engine (214) may be configured for determining one or more conditions causing a drop-off in a customer device registration. Further, the AI engine (214) may be configured for automatically detecting customer device registration drop-off based on the determined one or more conditions.
[0097] In an embodiment, for determining the one or more conditions causing the drop-off in the customer device registration, the method may include a step of detecting an anomaly in one or more metrics related to the call record data of a customer.
[0098] In an embodiment, the method may include a step of tracing an origin of the determined one or more conditions causing the drop-off. In an aspect, the system may be configured to trace the origin of the determined one or more conditions by tracing back to the initial conditions that led to the detected anomalies and the determined conditions. For example, the origin is indicative of a source of the anomaly in the one or more metrics related to the call record data of the customer. In an example, a plurality of microservices may be employed to trace the source of the anomaly in the one or more metrics. The origin may be defined as the events or conditions that led to the detected anomalies. Tracing the origin of anomalies typically involves a systematic approach to identifying the events or conditions that led to the detected anomalies in the system. In an example, the system may be configured to record timestamps associated with the received data to establish a sequence of activities. In an example, the system may be configured to employ at least one sequence mining algorithm to detect the events that precede anomalies. The sequence mining algorithm is used to discover patterns in sequences of events or transactions. Further, the method may involve performing, based on the traced origin, a root cause analysis of the determined one or more conditions causing the drop-off. The root cause analysis may be performed based on one or more audio-visual indications generated by the AI engine (214) based on the analysed captured call data records. Thereafter, a visual representation of the one or more metrics indicative of the detected anomaly may be generated based on the results of the root cause analysis.
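As a non-limiting illustration of ordering events by timestamp and identifying the events that precede anomalies, a simplified Python sketch is given below. A frequency count over a fixed time window stands in for a full sequence mining algorithm; the window size and event representation are assumptions.

    # Illustrative only: count which event patterns most often precede anomalies.
    from collections import Counter

    def events_preceding_anomalies(events, anomaly_times, window_s=300.0):
        """events: list of (timestamp, event_name) tuples; anomaly_times: list of
        anomaly timestamps. Returns the most common event tuples observed in the
        window immediately before each anomaly."""
        events = sorted(events)  # establish a sequence of activities by timestamp
        counter = Counter()
        for t_anom in anomaly_times:
            preceding = tuple(name for ts, name in events
                              if t_anom - window_s <= ts < t_anom)
            if preceding:
                counter[preceding] += 1
        return counter.most_common(5)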
[0099] In an embodiment, the method may involve performing an exploratory data analysis on the one or more input parameter values and selecting one or more optimal input parameter values required for employing the AI engine (214).
[00100] In an embodiment, the method further includes a step of generating at least one signal representing the at least one predicted state of the one or more operational units. The method (400) further includes transmitting, by the processor, the at least one generated signal to a monitoring unit that provides audio-visual indications of the operational units susceptible to failure. In an example, the transmission may be performed based on the output of the optimal ML model. The method further includes a step of triggering, by the processor, execution of one or more auto-corrective processor-executable instructions for resolving and preventing failures of the susceptible operational units. In an example, the triggering may be performed upon transmitting the at least one generated signal.
[00101] In an embodiment, the method further includes receiving, by a display unit (220), the at least one generated signal and representing the at least one detected anomaly or the at least one predicted state of the one or more operational units.
[00102] In an embodiment, the method further includes receiving, by a user interface, a user input representing a machine learning (ML) model selection selected from a plurality of ML models.
[00103] In an embodiment, the method further includes generating, by a
monitoring unit, an alert or a trigger upon determination of the susceptible to failure state.
[00104] In an embodiment, the method further includes capturing a call data record (CDR) including the log associated with the customer device in real-time, and analysing the captured call data records using the AI engine for determining one or more conditions causing a drop-off in a customer device registration based on the analysis.
[00105] In an embodiment, the one or more conditions include technical issues associated with the one or more operational units, registration interface complexity, and security vulnerabilities.
[00106] In an exemplary embodiment, the present disclosure discloses a user equipment configured to detect at least one anomaly associated with one or more operational units with at least one base station in a network. The user equipment includes a processor and a computer readable storage medium storing programming instructions for execution by the processor. Under the programming instructions, the processor is configured to receive one or more input parameter values from at least one base station. Under the programming instructions, the processor is configured to input the one or more received input parameter values to at least one machine learning (ML) model to detect the at least one anomaly associated with the one or more operational units based on the one or more input parameter values. Under the programming instructions, the processor is configured to predict the at least one state of the one or more operational units based on the at least one detected anomaly.
[00107] FIG. 5 illustrates an exemplary computer system (500) in which or with which embodiments of the present disclosure may be implemented. As shown in FIG. 5, the computer system (500) may include an external storage device (510), a bus (520), a main memory (530), a read only memory (540), a mass storage device (550), a communication port (560), and a processor (570). A person skilled in the art will appreciate that the computer system (500) may include more than one processor (570) and communication ports (560). The processor (570) may include various modules associated with embodiments of the present disclosure.
[00108] In an embodiment, the communication port (560) may be any of an RS-232 port for use with a modem-based dialup connection, a 10/100 Ethernet port, a Gigabit or 10 Gigabit port using copper or fiber, a serial port, a parallel port, or other existing or future ports. The communication port (560) may be chosen depending on a network, such as a Local Area Network (LAN), Wide Area Network (WAN), or any network to which the computer system (500) connects.
[00109] In an embodiment, the memory (530) may be Random Access Memory (RAM), or any other dynamic storage device commonly known in the art. The read-only memory (540) may be any static storage device(s), e.g., but not limited to, Programmable Read Only Memory (PROM) chips for storing static information, e.g., start-up or Basic Input/Output System (BIOS) instructions for the processor (570).
[00110] In an embodiment, the mass storage (550) may be any current or future mass storage solution, which may be used to store information and/or instructions. Exemplary mass storage solutions include, but are not limited to, Parallel Advanced Technology Attachment (PATA) or Serial Advanced Technology Attachment (SATA) hard disk drives or solid-state drives (internal or external, e.g., having Universal Serial Bus (USB) and/or Firewire interfaces), one or more optical discs, or Redundant Array of Independent Disks (RAID) storage, e.g., an array of disks (e.g., SATA arrays).
[00111] In an embodiment, the bus (520) communicatively couples the processor(s) (570) with the other memory, storage, and communication blocks. The bus (520) may be, e.g., a Peripheral Component Interconnect (PCI)/PCI Extended (PCI-X) bus, Small Computer System Interface (SCSI), Universal Serial Bus (USB) or the like, for connecting expansion cards, drives, and other subsystems, as well as other buses, such as a front side bus (FSB), which connects the processor (570) to the computer system (500).
[00112] Optionally, operator and administrative interfaces, e.g., a display, keyboard, joystick, and a cursor control device, may also be coupled to the bus (520) to support direct operator interaction with the computer system (500). Other operator and administrative interfaces may be provided through network connections connected through the communication port (560). The components described above are meant only to exemplify various possibilities. In no way should the aforementioned exemplary computer system (500) limit the scope of the present disclosure.
[00113] The present disclosure provides a technical advancement related to the scenario where device registrations are being dropped. This technical advancement overcomes the limitations of current solutions by leveraging call data records from user devices. The present disclosure involves using machine learning algorithms to analyze real-time call records and identify potential causes for the drop-offs. The present disclosure leads to significant enhancements in performance and efficiency. By implementing a system that uses machine learning for anomaly detection to analyze real-time call data records, the disclosed invention helps prevent potential issues with users connecting to nearby cell towers.
[00114] While considerable emphasis has been placed herein on the preferred embodiments, it will be appreciated that many other embodiments can be made and that many changes can be made in the preferred embodiments without departing from the principles of the disclosure. These and other changes in the preferred embodiments of the disclosure will be apparent to those skilled in the art from the disclosure herein, whereby it is to be distinctly understood that the foregoing descriptive matter is to be interpreted merely as illustrative of the disclosure and not as a limitation.
ADVANTAGES OF THE PRESENT DISCLOSURE
[00115] The present disclosure provides a system and a method for predicting
and preventing call drop-offs.
[00116] The present disclosure provides a system and a method for
forecasting and triggering the resolution of predicted failures.
[00117] The present disclosure provides a system and a method that dynamically selects and retrains machine learning models used for predicting call drop-offs and one or more operational units susceptible to causing said call drop-offs.
[00118] The present disclosure provides a system and a method for preventative maintenance of base stations.

We Claim:
1. A system (108) for detecting at least one anomaly associated with one or
more operational units with at least one base station (112) in a network, the system
(108) comprising:
one or more processors (208); and
a memory (204) coupled to the one or more processors (208), wherein the memory (204) includes computer implemented instructions to configure the one or more processors (208) to:
receive, by a data acquisition engine (212), one or more input parameter values from the at least one base station (112);
input, by an Artificial Intelligence (AI) engine (214), the one or more received input parameter values to at least one machine learning (ML) model to detect the at least one anomaly associated with the one or more operational units based on the one or more input parameter values; and
predict at least one state of the one or more operational units based on the at least one detected anomaly.
2. The system (108) as claimed in claim 1, is configured to generate at least one signal representing the at least one predicted state of the one or more operational units.
3. The system (108) as claimed in claim 1, wherein the one or more input parameter values are associated with a log pertaining to a customer device.
4. The system (108) as claimed in claim 1, wherein the one or more input parameter values include caller ID, recipient ID, call initiation timestamp, session length, call quality metrics, and call termination reason.
5. The system (108) as claimed in claim 1, wherein the at least one state is a susceptible to failure state.

6. The system (108) as claimed in claim 2, includes a display unit (220) configured to receive the at least one generated signal and is further configured to represent the at least one detected anomaly or at least one predicted state of the one or more operational units.
7. The system (108) as claimed in claim 1, includes an interfacing unit (206) configured to receive a user input representing a machine learning (ML) model selection selected from a plurality of ML models.
8. The system (108) as claimed in claim 5, includes a monitoring unit (110) for generating an alert or a trigger upon determination of the susceptible to failure state.
9. The system (108) as claimed in claim 3, wherein the one or more processors (208) are configured to:
capture a call data record (CDR) including the log associated with the customer device in real-time; and
analyse the captured call data records using the AI engine (214) for determining one or more conditions causing a drop-off in a customer device registration based on the analysis.
10. The system (108) as claimed in claim 9, wherein the monitoring unit is
configured to:
trace an origin of the determined one or more conditions causing the drop-off;
perform, based on the traced origin, a root cause analysis of the determined one or more conditions causing the drop-off, wherein the root cause analysis is performed based on one or more audio-visual indications generated by the AI engine (214) based on the analysed captured call data records; and
generate a visual representation of one or more call quality metrics indicative of the detected anomaly.

11. The system (108) as claimed in claim 1, wherein the one or more processors
(208) are configured to:
perform an exploratory data analysis on the one or more input parameter values; and
select one or more optimal input parameter values required for inputting to the AI engine (214).
12. The system (108) as claimed in claim 1, wherein the at least one machine learning (ML) model is a regression-based ML model, a classification-based ML model, a clustering-based ML model, a dimensionality reduction-based ML model, and a reinforcement learning-based ML model.
13. A method (400) for detecting at least one anomaly associated with one or more operational units with at least one base station (112) in a network, the method (400) comprising:
receiving (402), by a data acquisition engine (212), one or more input parameter values from the at least one base station;
inputting (404), by an Artificial Intelligence (AI) engine (214), the one or more received input parameter values to at least one machine learning (ML) model to detect the at least one anomaly associated with one or more operational units based on the one or more received input parameter values; and
predicting (406), by the AI engine (214), at least one state of the one or more operational units based on the at least one detected anomaly.
14. The method (400) as claimed in claim 13, further comprising generating at least one signal representing the at least one predicted state of the one or more operational units.
15. The method (400) as claimed in claim 13, wherein the one or more input parameter values are associated with a log pertaining to a customer device.

16. The method (400) as claimed in claim 13, wherein the one or more input parameter values include caller ID, recipient ID, call initiation timestamp, session length, call quality metrics, and call termination reason.
17. The method (400) as claimed in claim 13, wherein the at least one state is a susceptible to failure state.
18. The method (400) as claimed in claim 14, further comprising receiving, by a display unit, the at least one generated signal and representing the at least one detected anomaly or at least one predicted state of the one or more operational units.
19. The method (400) as claimed in claim 13, further comprising receiving, by an interfacing unit, a user input representing a machine learning (ML) model selection selected from a plurality of ML models.
20. The method (400) as claimed in claim 17, further comprising generating, by a monitoring unit, an alert or a trigger upon determination of the susceptible to failure state.
21. The method (400) as claimed in claim 16, further comprising:
capturing a call data record (CDR) including the log associated with the
customer device in real-time; and
analysing the captured call data records using the AI engine (214) for determining one or more conditions causing a drop-off in a customer device registration based on the analysis.
22. The method (400) as claimed in claim 21, further comprising:
tracing, by the monitoring unit, an origin of the determined one or more conditions causing the drop-off; and
performing, by the monitoring unit, based on the traced origin, a root cause analysis of the determined one or more conditions causing the drop-off, wherein the root cause analysis is performed based on one or more audio-visual indications

generated by the AI engine (214) based on the analysed captured call data records; and
generating, by the monitoring unit, a visual representation of the one or more call quality metrics indicative of the detected anomaly.
23. The method (400) as claimed in claim 13, further comprising:
performing, by the one or more processors (208), an exploratory data analysis
on the one or more input parameter values; and
selecting, by the one or more processors (208), one or more optimal input parameter values required for inputting to the AI engine (214).
24. The method (400) as claimed in claim 13, wherein the at least one machine learning (ML) model is a regression-based ML model, a classification-based ML model, a clustering-based ML model, a dimensionality reduction-based ML model, and a reinforcement learning-based ML model.
25. A user equipment (104) configured to detect at least one anomaly associated with one or more operational units with at least one base station (112) in a network, the user equipment (104) comprising:
a processor; and
a computer readable storage medium storing programming instructions for execution by the processor, the programming instructions to:
receive one or more input parameter values from the at least one base station (112);
input the one or more received input parameter values to at least one machine learning (ML) model to detect the at least one anomaly associated with one or more operational units based on the one or more input parameter values; and
predict the at least one state of the one or more operational units based on the at least one detected anomaly.

Documents

Application Documents

# Name Date
1 202321047105-STATEMENT OF UNDERTAKING (FORM 3) [13-07-2023(online)].pdf 2023-07-13
2 202321047105-PROVISIONAL SPECIFICATION [13-07-2023(online)].pdf 2023-07-13
3 202321047105-FORM 1 [13-07-2023(online)].pdf 2023-07-13
4 202321047105-DRAWINGS [13-07-2023(online)].pdf 2023-07-13
5 202321047105-DECLARATION OF INVENTORSHIP (FORM 5) [13-07-2023(online)].pdf 2023-07-13
6 202321047105-FORM-26 [13-09-2023(online)].pdf 2023-09-13
7 202321047105-POA [29-05-2024(online)].pdf 2024-05-29
8 202321047105-FORM 13 [29-05-2024(online)].pdf 2024-05-29
9 202321047105-AMENDED DOCUMENTS [29-05-2024(online)].pdf 2024-05-29
10 202321047105-Power of Attorney [04-06-2024(online)].pdf 2024-06-04
11 202321047105-Covering Letter [04-06-2024(online)].pdf 2024-06-04
12 202321047105-ORIGINAL UR 6(1A) FORM 26-270624.pdf 2024-07-01
13 202321047105-ENDORSEMENT BY INVENTORS [05-07-2024(online)].pdf 2024-07-05
14 202321047105-DRAWING [05-07-2024(online)].pdf 2024-07-05
15 202321047105-CORRESPONDENCE-OTHERS [05-07-2024(online)].pdf 2024-07-05
16 202321047105-COMPLETE SPECIFICATION [05-07-2024(online)].pdf 2024-07-05
17 202321047105-CORRESPONDENCE(IPO)-(WIPO DAS)-06-08-2024.pdf 2024-08-06
18 Abstract-1.jpg 2024-08-08
19 202321047105-FORM 18 [26-09-2024(online)].pdf 2024-09-26
20 202321047105-FORM 3 [04-11-2024(online)].pdf 2024-11-04