Abstract: The present disclosure provides a system (108) and a method (600) for detecting anomalies in a call setup success rate (CSSR) of a telecommunications network. The system (108) is configured for receiving data associated with the CSSR which then is stored in a database (210). The data includes call data records (CDRs). A model is created using a machine learning (ML) engine (214) based on one or more parameters received from a user. The model utilizes one or more regression techniques to create the model. The data is retrieved from the database and the model is executed to analyse the retrieved data to detect the one or more anomalies associated with the retrieved data. The detected anomalies are represented visually. FIG. 3
FORM 2
THE PATENTS ACT, 1970
(39 of 1970) & THE PATENTS RULES, 2003
COMPLETE SPECIFICATION
TITLE OF THE INVENTION SYSTEM AND METHOD FOR DETECTING ANOMALIES IN A CALL SETUP SUCCESS RATE
APPLICANT
JIO PLATFORMS LIMITED, of Office-101, Saffron, Nr.
380006, Gujarat, India; Nationality: India
The following specification particularly describes the invention and the manner in which it is to be performed.
RESERVATION OF RIGHTS
[0001] A portion of the disclosure of this patent document contains material
which is subject to intellectual property rights such as, but not limited to,
copyright, design, trademark, integrated circuit (IC) layout design, and/or trade
dress protection, belonging to Jio Platforms Limited (JPL) or its affiliates
(hereinafter referred to as the owner). The owner has no objection to the facsimile
reproduction by anyone of the patent document or the patent disclosure, as it
appears in the Patent and Trademark Office patent files or records, but otherwise
reserves all rights whatsoever. All rights to such intellectual property are fully
reserved by the owner.
FIELD OF INVENTION
[0002] The present disclosure generally relates to systems and methods for
anomaly detection in a wireless telecommunications network. More particularly,
the present disclosure relates to a system and method for detecting anomalies in a
call setup success rate (CSSR) in the telecommunications network using machine learning.
DEFINITION
[0003] CSSR - Call Setup Success Rate: A metric that measures the
percentage of calls that are successfully established or set up in a telecommunication network.
[0004] CDRs - Call Data Records: Data records that contain detailed
information about individual telephone calls, such as the caller and callee numbers,
call start and end times, call duration, and other call-related details.
BACKGROUND OF THE INVENTION
[0005] The following description of the related art is intended to provide
background information pertaining to the field of the disclosure. This section may
include certain aspects of the art that may be related to various features of the
present disclosure. However, it should be appreciated that this section is used only to enhance the understanding of the reader with respect to the present disclosure, and not as an admission of prior art.
[0006] Nowadays, maintaining and managing large-scale
telecommunication networks is a challenging task due to their complexity. One of the
most important network health indicators in a telecommunications network is the
call setup success rate (CSSR). A low CSSR indicates issues in the
telecommunications network and significantly impacts customer experience and
satisfaction.
[0007] Compared with previous core network standards, the cloud-native fifth
generation (5G) core adds complexity, and 5G devices produce a significantly
increased amount of data for the core network to manage. There are high
expectations on the 5G core network to deliver superior network experiences.
Conventional systems lack the ability to detect anomalous network patterns and
generating alerts based on these patterns. Therefore, detecting anomalies may be a
complex task that needs to be addressed to provide a seamless calling experience to
users/customers.
[0008] Currently, service providers rely on manual thresholds to monitor
the CSSR. However, manual thresholds do not account for variations in the CSSR
due to traffic patterns, seasonal trends, and other dynamic factors, leading to a high
rate of false positives and false negatives. Furthermore, manual analysis of CSSR
is time-consuming, error-prone, and does not provide timely insights for effective
troubleshooting.
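The shortcoming described above can be illustrated with a minimal sketch. The values and the per-hour baseline below are hypothetical and are not taken from the disclosure; they only show how a single static threshold flags a routine traffic dip as an anomaly, while a baseline that varies with the traffic pattern does not.

```python
# Illustrative sketch (hypothetical values): why a single static CSSR
# threshold misfires when the CSSR varies with normal traffic patterns.

cssr_by_hour = {0: 99.4, 3: 97.8, 9: 99.1, 18: 98.9}  # routine night-time dip at 3 AM
static_threshold = 98.5

# A fixed threshold flags the routine 3 AM dip as an anomaly (false positive).
static_alerts = [h for h, v in cssr_by_hour.items() if v < static_threshold]

# A per-hour baseline (e.g., learned from history) tolerates the known dip.
hourly_baseline = {0: 99.0, 3: 97.5, 9: 98.8, 18: 98.6}  # hypothetical learned floors
adaptive_alerts = [h for h, v in cssr_by_hour.items() if v < hourly_baseline[h]]
```

Here the static threshold raises one false alarm (hour 3) while the adaptive baseline raises none, which is the motivation for the ML-based approach of the present disclosure.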
[0009] There is, therefore, a need in the art to provide a system and a method
that can mitigate the problems associated with the prior art.
SUMMARY
[0010] The present disclosure discloses a system for detecting one or more
anomalies in data associated with a call setup success rate (CSSR) of a
telecommunications network. The system comprises a memory and one or more
processor(s) which are configured to fetch and execute computer-readable
instructions stored in the memory to receive the data associated with the CSSR via
a user interface and store the data in a database. The data includes call data records
(CDRs). The one or more processor(s) creates a model using a machine learning
(ML) engine based on one or more input parameters received via the user interface.
The ML engine utilizes one or more regression techniques to create the model. The
one or more processor(s) retrieve the data from the database and execute the model to analyse the retrieved data to detect the one or more anomalies associated with the data. The one or more detected anomalies are represented visually.
[0011] In an embodiment, the one or more input parameters received from
a user via a user interface include at least one of a training period, a test period, one or more features, and a logical partitioning required for configuration of the model.
[0012] In another embodiment, the system further comprises a load balancer
configured to receive the data from the user interface and a data collector configured
to collect the CDRs from the data. The system further comprises a data normalizer
configured to normalize the collected CDRs and the database is configured to store
the normalized CDRs. In an exemplary embodiment, the database is an Elastic
Search database.
[0013] In an embodiment, the model is created based on the one or more
input parameters received from the load balancer which receives the one or more
input parameters from the user interface. The system generates a visual
representation of the detected one or more anomalies based on a user request via
the user interface. The visual representation includes at least one of a graph format
and a table format for displaying predicted values, actual values, and watermark
values of the CSSR and the one or more detected anomalies.
[0014] In an embodiment, the one or more processor(s) are configured to
transmit the data and insights derived from analysis of the data to one or more
network operation teams to enable the one or more network operation teams to take
appropriate actions. The insights may include information associated with the anomalies, such as a code identifying the anomaly and a severity of the anomaly on a scale of 1-10 or as low, medium, or critical. The insights may further include a recommendation of an operation to be performed by the network team to handle the anomalies.
[0015] In another embodiment, the one or more processor(s) are configured
to receive the data via a data ingestion engine from a computing device associated with a user.
[0016] In an embodiment, the one or more regression techniques include a
two-factor regression technique, a multitude decision tree technique, a periodicity
technique, a scalar boost technique, and a heuristic gain technique.
[0017] In one embodiment, the one or more processor(s) are configured to
generate one or more hyperparameters associated with the model using the one or
more regression techniques and fine-tune the one or more hyperparameters to
generate an optimized model for detecting or identifying the one or more anomalies.
[0018] In an embodiment, a method for detecting anomalies in a call setup
success rate (CSSR) of a telecommunications network is disclosed. The method
includes receiving data associated with the CSSR and storing the data in a database.
A model is created using a machine learning (ML) engine based on one or more input parameters received via a user interface. The ML engine utilizes one or more regression techniques to create the model. The method further includes retrieving the data from the database and executing the model to analyse the data to detect the one
or more anomalies associated with the data. The method represents the one or more
detected anomalies visually. The data includes call data records (CDRs).
[0019] In an embodiment, one or more input parameters received via the
user interface include at least one of a training period, a test period, one or more
features, and a logical partitioning required for configuration of the model.
[0020] In an embodiment, a load balancer receives the data, which includes
the call data records (CDRs), from the user interface. The CDRs are collected from the
data by a data collector. A data normalizer is used to normalize the collected CDRs.
The database stores the normalized CDRs.
[0021] In an embodiment, the database is an Elastic Search database.
[0022] In an embodiment, the model is created based on the one or more
input parameters received from the load balancer, which in turn receives them from the
user interface. A visual representation of the detected one or more anomalies is
generated based on a user request via the user interface. The visual representation includes at least one of a graph format and a table format for displaying predicted values, actual values, and watermark values of the CSSR and the one or more detected anomalies.
[0023] In an embodiment, the data and insights derived from the analysis of
the data are transmitted to one or more network operation teams to enable the one or more network operation teams to take appropriate actions based on the detected one or more anomalies. The insights may include information associated with the anomalies, such as a code identifying the anomaly and a severity of the anomaly on a scale of 1-10 or as low, medium, or critical. The insights may further include a recommendation of an operation to be performed by the network team to handle the anomalies.
[0024] In an embodiment, the data is received via a data ingestion engine
from a computing device (104) or user equipment (UE) associated with a user.
[0025] In an embodiment, the one or more regression techniques include a
two-factor regression technique, a multitude decision tree technique, a periodicity technique, a scalar boost technique, and a heuristic gain technique.
[0026] In an embodiment, one or more hyperparameters associated with the
model are generated using the one or more regression techniques, and the one or more hyperparameters are fine-tuned to generate an optimized model for detecting or identifying the one or more anomalies.
[0027] In an embodiment, a user equipment (UE) is communicatively
coupled to a system for detecting anomalies in a call setup success rate (CSSR) of a telecommunications network. The UE receives data associated with the CSSR and transmits the data to the system for detecting the anomalies in the CSSR.
[0028] In one embodiment, a computer program product comprising a non-transitory computer-readable medium is disclosed. The non-transitory computer-readable medium stores instructions that, when executed by one or more processors, cause the one or more processors to perform one or more steps. The one or more
steps include receiving data associated with the CSSR and storing the data in a
database. A model is created using a machine learning (ML) engine based on one or more input parameters received via a user interface. The ML engine utilizes one or more regression techniques to create the model. The one or more steps further include retrieving the data from the database and executing the model to analyse the data to detect the one or more anomalies associated with the data. The one or more detected anomalies are represented visually. The data includes call data records (CDRs).
[0029] Other objects and advantages of the present disclosure will be more
apparent from the following description, which is not intended to limit the scope of
the present disclosure.
OBJECTS OF THE PRESENT DISCLOSURE
[0030] It is an object of the present disclosure to provide a system and a
method that uses artificial intelligence (AI)/machine learning (ML) for detecting
anomalies in a call setup success rate (CSSR) in a telecommunication network.
[0031] It is an object of the present disclosure to provide a system and a
method that uses various regression techniques to analyze data and detect anomalies
in the CSSR as part of adaptive troubleshooting operations management.
[0032] It is an object of the present disclosure to provide a system and a
method that generates an ML model for conducting automated analysis and detecting
anomalies in the CSSR.
[0033] It is an object of the present disclosure to provide a system and a
method that generates a visual representation of predicted values, actual values, and watermark values, and signifies predicted anomalies associated with the CSSR.
[0034] It is an object of the present disclosure to provide a system and a
method that uses ML to identify instances where the CSSR falls, for example,
below 99% in specific locations across a geographical area.
[0035] It is an object of the present disclosure to provide a system and a
method that transmits information to network operation teams to initiate appropriate
actions and resolve issues associated with the detected anomalies.
BRIEF DESCRIPTION OF DRAWINGS
[0036] The accompanying drawings, which are incorporated herein, and
constitute a part of this disclosure, illustrate exemplary embodiments of the
disclosed methods and systems, in which like reference numerals refer to the same
parts throughout the different drawings. Components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the
principles of the present disclosure. Some drawings may indicate the components
using block diagrams and may not represent the internal circuitry of each
component. It will be appreciated by those skilled in the art that disclosure of such
drawings includes the disclosure of electrical components, electronic components,
or circuitry commonly used to implement such components.
[0037] FIG. 1 illustrates an exemplary network architecture for
implementing a system, in accordance with an embodiment of the present
disclosure.
[0038] FIG. 2 illustrates an exemplary block diagram of the system, in
accordance with an embodiment of the present disclosure.
[0039] FIG. 3 illustrates an exemplary block diagram of a system
architecture of the system, in accordance with an embodiment of the present
disclosure.
[0040] FIG. 4 illustrates an exemplary flow diagram of a method
implementing a machine learning (ML) based anomaly detection for call setup
success rate (CSSR), in accordance with an embodiment of the present disclosure.
[0041] FIG. 5 illustrates an exemplary computer system in which or with
which the embodiments of the present disclosure may be implemented.
[0042] FIG. 6 illustrates an exemplary flow chart of a method implementing
a machine learning (ML) based anomaly detection for call setup success rate (CSSR), in accordance with an embodiment of the present disclosure.
[0043] The foregoing shall be more apparent from the following more
detailed description of the disclosure.
LIST OF REFERENCE NUMERALS
100 - Network architecture
102 - Users
104 - Computing devices/User equipment (UE)
106 - Network
108 - System
202 - Processor(s)
204 - Memory
206 - Interface(s)
208 - Processing engine(s)
210 - Database
212 - Data ingestion engine
214 - Machine learning (ML) engine
216 - Other engine(s)
302 - User interface
304 - Load balancer
306 - Call data records (CDRs)
308 - Data collector
310 - Data normalizer
312 - Database (DB)/Elastic Search database
314-1 - Anomaly detection MS 1
314-2 - Anomaly detection MS 2
316 - Cache
500 - Computer system
510 - External storage device
520 - Bus
530 - Main memory
540 - Read-only memory
550 - Mass storage device
560 - Communication port(s)
570 - Processor
600 - Flowchart
DETAILED DESCRIPTION
[0044] In the following description, for the purposes of explanation, various
specific details are outlined in order to provide a thorough understanding of embodiments of the
present disclosure. It will be apparent, however, that embodiments of the present
disclosure may be practiced without these specific details. Several features
described hereafter can each be used independently of one another or with any
combination of other features. An individual feature may not address all of the
problems discussed above or might address only some of the problems discussed
above. Some of the problems discussed above might not be fully addressed by any of the features described herein.
[0045] The ensuing description provides exemplary embodiments only and
is not intended to limit the scope, applicability, or configuration of the disclosure.
Rather, the ensuing description of the exemplary embodiments will provide those
skilled in the art with an enabling description for implementing an exemplary
embodiment. It should be understood that various changes may be made in the
function and arrangement of elements without departing from the spirit and scope
of the disclosure as set forth.
[0046] Specific details are given in the following description to provide a
thorough understanding of the embodiments. However, it will be understood by one
of ordinary skill in the art that the embodiments may be practiced without these
specific details. For example, circuits, systems, networks, processes, and other
components may be shown as components in block diagram form in order not to
obscure the embodiments in unnecessary detail. In other instances, well-known
circuits, processes, algorithms, structures, and techniques may be shown without
unnecessary detail to avoid obscuring the embodiments.
[0047] Also, it is noted that individual embodiments may be described as a
process that is depicted as a flowchart, a flow diagram, a data flow diagram, a
structure diagram, or a block diagram. Although a flowchart may describe the
operations as a sequential process, many of the operations can be performed in
parallel or concurrently. In addition, the order of the operations may be re-arranged.
A process is terminated when its operations are completed but could have additional
steps not included in a figure. A process may correspond to a method, a function, a
procedure, a subroutine, a subprogram, etc. When a process corresponds to a
function, its termination can correspond to a return of the function to the calling
function or the main function.
[0048] The word “exemplary” and/or “demonstrative” is used herein to
mean serving as an example, instance, or illustration. For the avoidance of doubt,
the subject matter disclosed herein is not limited by such examples. In addition, any
aspect or design described herein as “exemplary” and/or “demonstrative” is not
necessarily to be construed as preferred or advantageous over other aspects or
designs, nor is it meant to preclude equivalent exemplary structures and techniques known to those of ordinary skill in the art. Furthermore, to the extent that the terms “includes,” “has,” “contains,” and other similar words are used in either the detailed description or the claims, such terms are intended to be inclusive like the term
“comprising” as an open transition word without precluding any additional or other
elements.
[0049] Reference throughout this specification to “one embodiment” or “an
embodiment” or “an instance” or “one instance” means that a particular feature,
structure, or characteristic described in connection with the embodiment is included
in at least one embodiment of the present disclosure. Thus, the appearances of the phrases “in one embodiment” or “in an embodiment” in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined
in any suitable manner in one or more embodiments.
[0050] The terminology used herein is to describe particular embodiments
only and is not intended to limit the disclosure. As used herein, the singular
forms “a”, “an”, and “the” are intended to include the plural forms as well, unless
the context indicates otherwise. It will be further understood that the terms
“comprises” and/or “comprising,” when used in this specification, specify the
presence of stated features, integers, steps, operations, elements, and/or
components, but do not preclude the presence or addition of one or more other
features, integers, steps, operations, elements, components, and/or groups thereof.
As used herein, the term “and/or” includes any combinations of one or more of the
associated listed items. The present disclosure serves the purpose of detecting
anomalies in a call setup success rate (CSSR) of a telecommunications network
using machine learning.
[0051] The various embodiments throughout the disclosure will be
explained in more detail with reference to FIGs. 1-6.
[0052] FIG. 1 illustrates an exemplary network architecture (100) for
implementing a system (108), in accordance with an embodiment of the present
disclosure.
[0053] As illustrated in FIG. 1, one or more computing devices (104-1, 104-
2…104-N) may be connected to a system (108) through a network (106). A person of ordinary skill in the art will understand that the one or more computing devices (104-1, 104-2…104-N) may be collectively referred to as computing devices (104) and individually referred to as a computing device (104). One or more users (102-1, 102-2…102-N) may provide one or more requests to the system (108). A person of ordinary skill in the art will understand that the one or more users (102-1, 102-2…102-N) may be collectively referred to as users (102) and individually referred to as a user (102). Further, the computing devices (104) may also be referred to as a user equipment (UE) (104) or as UEs (104) throughout the disclosure.
[0054] In an embodiment, the computing device (104) may include, but not
be limited to, a mobile, a laptop, etc. Further, the computing device (104) may
include one or more in-built or externally coupled accessories including, but not
limited to, a visual aid device such as a camera, audio aid, microphone, or keyboard.
Furthermore, the computing device (104) may include a mobile phone, smartphone,
virtual reality (VR) devices, augmented reality (AR) devices, a laptop, a general-purpose computer, a desktop, a personal digital assistant, a tablet computer, and a
mainframe computer. Additionally, input devices for receiving input from the user
(102) such as a touchpad, touch-enabled screen, electronic pen, and the like may be
used.
[0055] In an embodiment, the network (106) may include, by way of
example but not limitation, at least a portion of one or more networks having one or more nodes that transmit, receive, forward, generate, buffer, store, route, switch, process, or a combination thereof, etc. one or more messages, packets, signals, waves, voltage or current levels, some combination thereof, or so forth. The
network (106) may also include, by way of example but not limitation, one or more
of a wireless network, a wired network, an internet, an intranet, a public network, a private network, a packet-switched network, a circuit-switched network, an ad hoc network, an infrastructure network, a Public-Switched Telephone Network (PSTN), a cable network, a cellular network, a satellite network, a fiber optic network, or
some combination thereof.
[0056] In an embodiment, the system (108) may receive data and one or
more input parameters from the computing device (104) associated with the users
(102). Further, the system (108) may generate a trained model via machine learning
(ML) based on the input parameters to identify one or more anomalies associated
with the data. The parameters include a training period, a test period, features, logical
partitioning, and an algorithm name required for the model configuration. The data
may include CDRs for creating a data set or a data source providing data for
analysis.
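The parameters described above can be sketched as a simple request payload. The field names and values below are assumptions for illustration only; the disclosure lists the parameters but does not specify a schema.

```python
# Hypothetical model-configuration request (field names are assumptions).
model_request = {
    "training_period": ("2024-01-01", "2024-03-31"),   # history the model learns from
    "test_period":     ("2024-04-01", "2024-04-07"),   # window scanned for anomalies
    "features":        ["hour_of_day", "day_of_week", "location"],
    "logical_partitioning": "per_location",            # e.g., one model per location
    "algorithm_name":  "two_factor_regression",        # selects the technique to run
}

def validate_request(req):
    """Return the sorted list of required parameters missing from the request."""
    required = {"training_period", "test_period", "features",
                "logical_partitioning", "algorithm_name"}
    return sorted(required - req.keys())
```

A complete request validates cleanly, while an incomplete one reports which of the five parameters is absent.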
[0057] In an embodiment, the system (108) may utilize a range of advanced
regression techniques that may include, but are not limited to, a two-factor regression
technique, a multitude decision tree technique, a periodicity technique, a scalar
boost technique, and a heuristic gain technique to identify the one or more
anomalies. In an embodiment, the system (108) may determine one or more
hyperparameters of these techniques and fine-tune the one or more hyperparameters to
optimize the performance of these techniques.
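The fitting and fine-tuning steps above can be illustrated with a minimal stdlib-only sketch. A closed-form one-feature ridge regression stands in for the disclosed techniques (which are not specified in implementable detail), and a toy grid search over the regularisation strength mirrors the hyperparameter fine-tuning; all data values are hypothetical.

```python
# Minimal sketch (not the disclosed algorithms): fit y = w*x + b by ridge
# regression and grid-search the regularisation strength alpha.

def fit_ridge(xs, ys, alpha):
    """Closed-form ridge fit for a single feature: returns (slope, intercept)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs) + alpha   # alpha shrinks the slope
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    w = sxy / sxx
    return w, my - w * mx

def grid_search(xs, ys, alphas):
    """Pick the alpha with the lowest training squared error (toy criterion)."""
    def sse(a):
        w, b = fit_ridge(xs, ys, a)
        return sum((y - (w * x + b)) ** 2 for x, y in zip(xs, ys))
    return min(alphas, key=sse)

hours = [0, 1, 2, 3, 4, 5]
cssr  = [99.0, 99.1, 99.2, 99.3, 99.4, 99.5]       # hypothetical, perfectly linear
best_alpha = grid_search(hours, cssr, [0.0, 0.1, 1.0])
```

On this perfectly linear toy series the unregularised fit wins the grid search; in practice a validation criterion, rather than training error, would drive the fine-tuning.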
[0058] In an embodiment, the system (108) may visually represent one or
more detected anomalies. The output of the analysis is presented in graph and table formats, displaying predicted and actual values, watermark values, and prominently
highlighting the predicted anomalies. This visual representation makes it easier for
users to identify and understand the detected anomalies. The predicted value indicates an estimated value of the CSSR based on the ML model in a particular time period at a particular location. The actual value represents a real-time value of the CSSR in a particular time period at a particular location. As an example,
the predicted value may be represented as 98% whereas the actual value may be
represented as 95%. The watermark value may represent a threshold value of the CSSR below which it is considered that there are anomalies in the CSSR. The watermark value may be different for different time periods and different locations. If the actual CSSR is more than the watermark value, then it may be considered that there is no
anomaly in the CSSR. For instance, a line graph might show the actual vs. predicted
CSSR for each day in the test period, with anomalous days highlighted in red. A table could list the specific anomalous days along with the actual and predicted CSSR values. The visualization is generated based on user requests, allowing them to choose between different graph and table formats.
[0059] Although FIG. 1 shows exemplary components of the network
architecture (100), in other embodiments, the network architecture (100) may include fewer components, different components, differently arranged components, or additional functional components than depicted in FIG. 1. Additionally, or
alternatively, one or more components of the network architecture (100) may
perform functions described as being performed by one or more other components
of the network architecture (100).
[0060] FIG. 2 illustrates an example block diagram (200) of a system (108),
in accordance with an embodiment of the present disclosure.
[0061] Referring to FIG. 2, in an embodiment, the system (108) may include
one or more processor(s) (202). The one or more processor(s) (202) may be
implemented as one or more microprocessors, microcomputers, microcontrollers,
digital signal processors, central processing units, logic circuitries, and/or any
devices that process data based on operational instructions. Among other
capabilities, the one or more processor(s) (202) may be configured to fetch and execute computer-readable instructions stored in a memory (204) of the system (108). The memory (204) may be configured to store one or more computer-readable instructions or routines in a non-transitory computer readable storage
medium, which may be fetched and executed to create or share data packets over a
network service. The memory (204) may comprise any non-transitory storage
device including, for example, volatile memory such as random-access memory
(RAM), or non-volatile memory such as erasable programmable read only memory
(EPROM), flash memory, and the like.
[0062] In an embodiment, the system (108) may include an interface(s)
(206). The interface(s) (206) may comprise a variety of interfaces, for example,
interfaces for data input and output devices (I/O), storage devices, and the like. The
interface(s) (206) may facilitate communication through the system (108). The
interface(s) (206) may also provide a communication pathway for one or more
components of the system (108). Examples of such components include, but are not limited to, processing engine(s) (208) and a database (210). Further, the processing engine(s) (208) may include a data ingestion engine (212), a machine learning (ML) engine (214) and other engine(s) (216). In an embodiment, the other engine(s) (216)
may include, but are not limited to, a notification engine which provides a notification to
the user on detecting the anomalies.
[0063] In an embodiment, the processing engine(s) (208) may be
implemented as a combination of hardware and programming (for example,
programmable instructions) to implement one or more functionalities of the
processing engine(s) (208). In examples described herein, such combinations of
hardware and programming may be implemented in several different ways. For
example, the programming for the processing engine(s) (208) may be processor-executable instructions stored on a non-transitory machine-readable storage medium and the hardware for the processing engine(s) (208) may comprise a processing resource (for example, one or more processors), to execute such
10 instructions. In the present examples, the machine-readable storage medium may
store instructions that, when executed by the processing resource, implement the processing engine(s) (208). In such examples, the system (108) may comprise the machine-readable storage medium storing the instructions and the processing resource to execute the instructions, or the machine-readable storage medium may
be separate but accessible to the system (108) and the processing resource. In other
examples, the processing engine(s) (208) may be implemented by electronic circuitry.
[0064] In an embodiment, the processor (202) may receive data via the data
ingestion engine (212). The data may be received from the computing device
(104) associated with the users (102). The data may include information identifying
call data records (CDRs) which are to be analyzed for detecting anomalies in the
call setup success rate (CSSR). The CDRs act as a data set (i.e., a CSSR data source providing CDRs) for analysis, and the processor (202) may store the CDRs in the database (210). Further, the CDRs are different for different locations, and the anomalies in the CSSR are different for different locations.
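The per-location nature of the analysis can be sketched as a small aggregation. The CDR field names below are assumptions (the disclosure does not define a CDR schema); the sketch only shows that the CSSR for a location is the percentage of call attempts there that were successfully set up.

```python
# Hypothetical CDR aggregation (field names assumed, values illustrative).
cdrs = [
    {"location": "LOC-A", "setup_ok": True},
    {"location": "LOC-A", "setup_ok": True},
    {"location": "LOC-A", "setup_ok": False},
    {"location": "LOC-B", "setup_ok": True},
]

def cssr_by_location(records):
    """Percentage of successfully set-up calls per location."""
    totals, successes = {}, {}
    for r in records:
        loc = r["location"]
        totals[loc] = totals.get(loc, 0) + 1
        successes[loc] = successes.get(loc, 0) + (1 if r["setup_ok"] else 0)
    return {loc: 100.0 * successes[loc] / totals[loc] for loc in totals}
```

Each location thus carries its own CSSR series, which is why the detected anomalies differ from location to location.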
[0065] The processor (202) may further receive a request (e.g., an HTTP
request) via the data ingestion engine (212) (e.g., using a user interface) for creating
a model for analysing the CDRs. The request includes the training period, test period,
features, logical partitioning, and algorithm name required for the model
configuration. A trained model is generated via the ML engine (214) based on the
request. In an embodiment, the processor (202) may utilize a range of advanced
regression algorithms that may include, but are not limited to, a two-factor regression
algorithm, a multitude decision tree algorithm, a periodicity algorithm, a scalar
boost algorithm, and a heuristic gain algorithm designed specifically to identify
the one or more anomalies in the CSSR. An algorithm is selected based on the name
of the algorithm indicated in the request for the model creation. In an embodiment,
the processor (202) may generate one or more hyperparameters associated with the
model and may fine-tune them to generate an optimized model for detecting the one
10 or more anomalies.
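As a non-limiting illustration, a model-creation request of the kind described above might be represented as follows. The Python representation, the field names, and the algorithm-name spellings are assumptions made for illustration only; the disclosure specifies the request fields but not a schema.

```python
from dataclasses import dataclass

# Hypothetical spellings of the algorithm names listed in the disclosure.
SUPPORTED_ALGORITHMS = {
    "two_factor_regression",
    "multitude_decision_tree",
    "periodicity",
    "scalar_boost",
    "heuristic_gain",
}

@dataclass
class ModelRequest:
    training_period: tuple      # e.g. ("2024-03-12", "2024-03-30")
    test_period: tuple          # e.g. ("2024-04-01", "2024-04-10")
    features: list              # e.g. ["call_duration", "call_status"]
    logical_partitioning: str   # e.g. a geographic region
    algorithm_name: str         # selects the algorithm for model creation

    def validate(self):
        # The algorithm is selected by the name indicated in the request.
        if self.algorithm_name not in SUPPORTED_ALGORITHMS:
            raise ValueError(f"unknown algorithm: {self.algorithm_name}")
        if not self.features:
            raise ValueError("at least one feature is required")

req = ModelRequest(("2024-03-12", "2024-03-30"), ("2024-04-01", "2024-04-10"),
                   ["call_duration", "call_status", "caller_location"],
                   "region-west", "two_factor_regression")
req.validate()  # a well-formed request passes validation
```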
[0066] The CDRs stored in the database (210) are retrieved for the specified test period as indicated in the request. Once the data is obtained, the model with the selected algorithm is executed. After the algorithm completes its analysis, the resulting output is stored back into the database. This ensures that the detected anomalies and any relevant information are readily available for further analysis and reporting.
[0067] In an embodiment, the processor (202) may visually represent one or more detected anomalies based on a request from a user. The detected anomalies may be displayed in graph or table formats, displaying predicted and actual values, watermark values, and prominently highlighting the predicted anomalies. This visual representation makes it easier for users to identify and understand the detected anomalies. The predicted value indicates an estimated value of the CSSR based on the ML model in a particular time period at a particular location. The actual value represents a real-time value of the CSSR in a particular time period at a particular location. As an example, the predicted value may be represented as 98% whereas the actual value may be represented as 95%. The watermark value may represent a threshold value of the CSSR below which it is considered that there are anomalies in the CSSR. The watermark value may be different for different time periods and different locations. If the actual CSSR is more than the watermark value, then it may be considered that there is no anomaly in the CSSR. For instance, a line graph might show the actual vs. predicted CSSR for each day in the test period, with anomalous days highlighted in red. A table could list the specific anomalous days along with the actual and predicted CSSR values. The visualization is generated based on user requests, allowing them to choose between different graph and table formats.
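The watermark comparison described above can be sketched as follows; the record shape (day, location, actual, predicted) and the keying of watermarks by (day, location) are illustrative assumptions, reflecting that watermarks differ across time periods and locations.

```python
def detect_anomalies(records, watermarks):
    """Flag periods where the actual CSSR falls below the applicable watermark.

    records: list of dicts with keys day, location, actual, predicted (assumed shape).
    watermarks: {(day, location): threshold}; thresholds vary by time and place.
    """
    anomalies = []
    for r in records:
        threshold = watermarks[(r["day"], r["location"])]
        # Actual CSSR above the watermark means no anomaly for that period.
        if r["actual"] < threshold:
            anomalies.append({**r, "watermark": threshold})
    return anomalies

records = [
    {"day": "2024-04-01", "location": "cell-A", "predicted": 98.0, "actual": 95.0},
    {"day": "2024-04-02", "location": "cell-A", "predicted": 97.5, "actual": 97.8},
]
watermarks = {("2024-04-01", "cell-A"): 96.0, ("2024-04-02", "cell-A"): 96.0}
print(detect_anomalies(records, watermarks))  # only the 95.0% day is flagged
```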
[0068] Although FIG. 2 shows exemplary components of the system (108), in other embodiments, the system (108) may include fewer components, different components, differently arranged components, or additional functional components than depicted in FIG. 2. Additionally, or alternatively, one or more components of the system (108) may perform functions described as being performed by one or more other components of the system (108).
[0069] FIG. 3 illustrates an example block diagram (300) of a system architecture for a system (108), in accordance with an embodiment of the present disclosure.
[0070] In one embodiment, the system (108) includes a user interface (302) through which the system (108) receives one or more inputs from the users (102). These inputs include information identifying call data records (CDRs) which are to be analyzed for detecting anomalies in the call setup success rate (CSSR). The CDRs act as a data set (i.e., a CSSR data source providing CDRs) for analysis. Further, the CDRs are different for different locations, and anomalies in the CSSR are different for different locations.
[0071] The inputs may include input parameters such as a training period, a test period, one or more features, a logical partitioning, and a technique required for model configuration. Based on these inputs, the processor(s) (202) generate a trained model and store the resulting output of the analysis performed using the specified techniques in the database (210).
[0072] In an exemplary embodiment, a user might specify a training period which represents a time period for training the ML model. For example, the training period may be specified by a number of days, a number of hours, etc. The training period may be provided as a range such as March 12, 2024 to March 30, 2024. The user interface also enables the user to specify whether the user needs to train the model based on historical CDRs, by providing a training period prior to the current time, or on upcoming CDRs, by providing a training period after the current time. The test period indicates a time period for which the user needs to run the model to detect the one or more anomalies. For example, the test period may be specified by a number of days, such as 10 days, or by hours, or by a time interval. The one or more features related to the CDRs may include call duration, call status, caller location, etc. The parameter logical partitioning may include a geographic region for which the anomalies in the CSSR are to be identified. The parameter related to the ML technique specifies which algorithm the model will use to analyze the CDRs to detect the anomalies. The input parameters are not limited to only these parameters and may include other parameters also. The system (108) may receive these parameters via a data ingestion engine (212) from the user's computing device (104).
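As a non-limiting illustration, a training period supplied as a range such as "March 12, 2024 to March 30, 2024" might be parsed as follows; the helper names and the "to"-separated format are assumptions for illustration.

```python
from datetime import date, datetime

def parse_period(spec):
    """Parse a period given as a range like 'March 12, 2024 to March 30, 2024'."""
    start_s, end_s = spec.split(" to ")
    fmt = "%B %d, %Y"
    start = datetime.strptime(start_s, fmt).date()
    end = datetime.strptime(end_s, fmt).date()
    if end < start:
        raise ValueError("period end precedes start")
    return start, end

def is_historical(period, now):
    """True when the period lies entirely before 'now', i.e. training on historical CDRs."""
    return period[1] < now

start, end = parse_period("March 12, 2024 to March 30, 2024")
print((end - start).days)  # 18
```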
[0073] In one embodiment, the system (108) comprises a load balancer (304), a data collector (308), a data normalizer (310), and a database (DB) (312). The load balancer (304) receives inputs from the user interface. These inputs include information identifying call data records (CDRs) which are to be analyzed for detecting anomalies in the call setup success rate (CSSR). The inputs may include one or more identifiers of CDRs for identifying the CDRs to be analyzed. Based on the identified CDRs (306-1 or 306-2), the data collector (308) collects data from all the identified CDRs (306), the data normalizer (310) normalizes the collected CDRs, and the database (312), which is the Elastic Search database, stores the normalized CDRs.
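A minimal sketch of the collect-normalize step and the CSSR computation it enables; the raw CDR field names and the normalized record shape are assumptions, and an in-memory list stands in for the Elastic Search database.

```python
def normalize_cdr(raw):
    """Convert a raw CDR into a standard shape before storage (field names assumed)."""
    return {
        "call_id": str(raw["id"]),
        "start": raw["start_time"],
        "duration_s": int(raw.get("duration", 0)),
        "status": raw["status"].strip().upper(),   # e.g. 'SUCCESS' / 'FAILED'
        "location": raw.get("cell", "unknown"),
    }

def cssr(cdrs):
    """Call setup success rate: successful setups / attempted setups, as a percentage."""
    attempts = len(cdrs)
    successes = sum(1 for c in cdrs if c["status"] == "SUCCESS")
    return 100.0 * successes / attempts if attempts else 0.0

raw = [
    {"id": 1, "start_time": "2024-04-01T10:00:00", "duration": 42,
     "status": " success ", "cell": "cell-A"},
    {"id": 2, "start_time": "2024-04-01T10:05:00", "status": "FAILED"},
]
cdrs = [normalize_cdr(r) for r in raw]
print(cssr(cdrs))  # 50.0
```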
[0074] The load balancer (304) further receives one or more requests from the user interface (302) which include a training period, a test period, features of the model, a logical partitioning, and an algorithm name required for the model configuration. One or more trained-model-based anomaly detection microservices (314-1, 314-2) are generated via the ML engine (214) based on the request. In one embodiment, the anomalies are detected by the anomaly detection microservices (314-1, 314-2), which store the detected anomalies in a cache (316).
[0075] In an embodiment, the anomaly detection microservices (314-1, 314-2) may utilize a range of advanced regression algorithms that may include, but are not limited to, a two-factor regression algorithm, a multitude decision tree algorithm, a periodicity algorithm, a scalar boost algorithm, and a heuristic gain algorithm designed specifically to identify the one or more anomalies in the CSSR. An algorithm is selected based on the name of the algorithm indicated in the request for the model creation. In an embodiment, the processor (202) may generate one or more hyperparameters associated with the model and may fine-tune them to generate an optimized model for detecting the one or more anomalies.
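Hyperparameter fine-tuning of the kind described above can be sketched as a grid search. The predictor here is a simple moving-average stand-in (the disclosure's named algorithms are not specified in detail), and the single tuned hyperparameter is the window size.

```python
def moving_avg_predict(series, window):
    """Predict each point as the mean of the previous `window` points (stand-in model)."""
    preds = []
    for i in range(len(series)):
        hist = series[max(0, i - window):i] or [series[0]]
        preds.append(sum(hist) / len(hist))
    return preds

def tune(series, windows=(2, 3, 5, 7)):
    """Grid-search one hyperparameter (window size) to minimise mean absolute error."""
    best = None
    for w in windows:
        preds = moving_avg_predict(series, w)
        mae = sum(abs(p - a) for p, a in zip(preds, series)) / len(series)
        if best is None or mae < best[1]:
            best = (w, mae)
    return best  # (optimal window, its error)
```

On a constant CSSR series every window predicts perfectly, so the first candidate wins; on a noisy series the search picks whichever window tracks the data best.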
[0076] In one embodiment, the processor(s) (202) may generate a visual representation based on a user (102) request, which includes graph and table formats for displaying predicted values, actual values, watermark values, and highlighting the predicted or detected anomalies. The predicted value indicates an estimated value of the CSSR based on the ML model in a particular time period at a particular location. The actual value represents a real-time value of the CSSR in a particular time period at a particular location. As an example, the predicted value may be represented as 98% whereas the actual value may be represented as 95%. The watermark value may represent a threshold value of the CSSR below which it is considered that there are anomalies in the CSSR. The watermark value may be different for different time periods and different locations. If the actual CSSR is more than the watermark value, then it may be considered that there is no anomaly in the CSSR. For instance, a line graph might show the actual vs. predicted CSSR for each day in the test period, with anomalous days highlighted in red. A table could
list the specific anomalous days along with the actual and predicted CSSR values. The visualization is generated based on user requests, allowing them to choose between different graph and table formats.
[0077] Further, the processor(s) (202) transmit relevant data and insights derived from the analysis to network operation teams, enabling them to take appropriate actions based on the identified anomalies. The insights may include information associated with the anomalies, such as a code identifying the anomaly and a severity of the anomaly on a scale of 1-10, or as low, medium, and critical. The insights may further include a recommendation of an operation to be performed by the network team to handle the anomalies. Finally, the processor(s) (202) visually represent the detected anomalies based on the optimized model and the input parameters.
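An insight of the kind described above (code, 1-10 severity, label, recommendation) can be sketched as follows; the mapping from CSSR shortfall to severity and the code format are illustrative assumptions, not taken from the disclosure.

```python
def build_insight(anomaly):
    """Attach a code, a 1-10 severity, a label, and a recommended action (mapping assumed)."""
    gap = anomaly["watermark"] - anomaly["actual"]   # how far CSSR fell below threshold
    severity = min(10, max(1, round(gap)))            # bigger shortfall => higher severity
    label = "critical" if severity >= 7 else "medium" if severity >= 4 else "low"
    return {
        "code": f"CSSR-{anomaly['location']}-{anomaly['day']}",
        "severity": severity,
        "label": label,
        "recommendation": "inspect site infrastructure and recent configuration changes",
    }

insight = build_insight(
    {"day": "2024-04-01", "location": "cell-A", "actual": 91.0, "watermark": 96.0})
print(insight["code"], insight["severity"], insight["label"])
```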
[0078] In one embodiment, a computer program product comprising a non-transitory computer-readable medium is disclosed. The non-transitory computer-readable medium stores instructions that, when executed by one or more processors, cause the one or more processors to perform one or more steps. The one or more steps include receiving data associated with the CSSR and storing the data in a database. A model is created using a machine learning (ML) engine based on one or more input parameters received via the user interface. The ML engine utilizes one or more regression techniques to create the model. The steps further include retrieving the data from the database and executing the model to analyse the data to detect the one or more anomalies associated with the data. The one or more detected anomalies are represented visually. The data includes call data records (CDRs).
[0079] In one embodiment, the User Equipment (UE)/computing device (104) may be communicatively coupled to the system (108) for detecting anomalies in the call setup success rate (CSSR) of the telecommunications network. The UE (104) is configured for receiving data associated with the CSSR. Further, the UE (104) is configured for transmitting the one or more parameters to the system (108), wherein the system (108) is configured for detecting anomalies in the call setup success rate (CSSR) as stated above.
[0080] FIG. 4 illustrates an exemplary flow diagram of a method (400) implementing a ML based anomaly detection method for a call setup success rate (CSSR), in accordance with an embodiment of the present disclosure.
[0081] As illustrated in FIG. 4, the method (400) may include the following steps.
[0082] At step 402: Users (102) operate the system (108) by logging in through a user interface (UI). The UI may allow users (102) to input various parameters such as the training period, the test period, the features associated with a ML model and the CDR data to consider, logical partitioning criteria, and the desired machine learning technique for model configuration. For example, a user might specify a training period which represents a time period for training the ML model. For example, the training period may be specified by a number of days, a number of hours, etc. The training period may be provided as a range such as March 12, 2024 to March 30, 2024. The user interface also enables the user to specify whether the user needs to train the model based on historical CDRs, by providing a training period prior to the current time, or on upcoming CDRs, by providing a training period after the current time. The test period indicates a time period for which the user needs to run the model to detect the one or more anomalies. For example, the test period may be specified by a number of days, such as 10 days, or by hours, or by a time interval. The one or more features related to the CDRs may include call duration, call status, caller location, etc. The parameter logical partitioning may include a geographic region for which the anomalies in the CSSR are to be identified. The parameter related to the ML technique specifies which algorithm the model will use to analyze the CDRs to detect the anomalies. The input parameters are not limited to only these parameters and may include other parameters also. The system (108) may receive these parameters via a data ingestion engine (212) from the user's
computing device (104).
[0083] At step 404: The system (108) may create a CSSR data set using the received data. This involves a load balancer (304) receiving the data from the user interface and providing the received data to a data collector. The CDRs from the data are collected by the data collector (308), normalized by a data normalizer (310), and stored in a database (312). For example, the load balancer might receive 3 months of raw CDRs. The data collector extracts relevant information associated with the CDRs, like call start time, end time, status code, caller ID, etc. The normalizer converts the data into a standard format before storing the data in the Elastic Search database.
[0084] At step 406: The system (108) receives a request (for example, a Hypertext Transfer Protocol (HTTP) request) from the UI with the parameters specified in step 402. Based on the request, the system (108) then enables the creation of a trained machine learning model based on the provided parameters.
[0085] The ML engine (214) retrieves the relevant training data from the Elastic Search database based on the input parameters. The ML engine (214) uses one or more regression techniques, such as two-factor regression, multitude decision tree, periodicity analysis, scalar boosting, and heuristic gain, to train the model to identify CSSR anomalies. The resulting model is stored for future use.
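The training step can be sketched with a plain least-squares fit; the named regression techniques above are not specified in detail in the disclosure, so ordinary least squares on a daily CSSR series stands in here purely for illustration.

```python
def fit_linear(xs, ys):
    """Ordinary least squares for y = a + b*x (a stand-in for the named techniques)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return a, b

# Hypothetical daily CSSR values over a 5-day training period.
days = [0, 1, 2, 3, 4]
daily_cssr = [97.0, 97.2, 97.1, 97.4, 97.3]
a, b = fit_linear(days, daily_cssr)

def predict(day):
    """Predicted CSSR for a given day index, from the fitted trend."""
    return a + b * day
```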
[0086] At step 408: The system (108) may retrieve the required data, i.e., the CDRs, from the Elastic Search database for the specified test period. The model analyzes the test data and compares it to the patterns learned during the training phase to identify any unusual deviations in the CSSR metrics. Detected anomalies are stored in a cache (316). As part of this process, the system (108) generates anomaly detection hyperparameters associated with the model. It then
fine-tunes these hyperparameters to create an optimized anomaly detection model.
[0087] At step 410: The system (108) checks if the model executed successfully on the test dataset. If not, the model is checked and, if required, the model is modified. Further, if the model did not execute successfully, the input test dataset is checked; if the test dataset is not correct, the test dataset is updated. For example, the system (108) may tweak the hyperparameters of the regression model or check the test data for missing values or inconsistent formats. After making the necessary adjustments, the system (108) may channelise the flow back to step 408 to rerun the anomaly detection.
[0088] At step 412: If, at step 408, the model is executed successfully and detects anomalies in the test data, the system (108) may generate visualizations of the results. This includes graphs and tables displaying the predicted CSSR values, actual values, watermark thresholds, and highlighting the anomalous data points. The predicted value indicates an estimated value of the CSSR based on the ML model in a particular time period at a particular location. The actual value represents a real-time value of the CSSR in a particular time period at a particular location. As an example, the predicted value may be represented as 98% whereas the actual value may be represented as 95%. The watermark value may represent a threshold value of the CSSR below which it is considered that there are anomalies in the CSSR. The watermark value may be different for different time periods and different locations. If the actual CSSR is more than the watermark value, then it may be considered that there is no anomaly in the CSSR. For instance, a line graph might show the actual vs. predicted CSSR for each day in the test period, with anomalous days highlighted in red. A table could list the specific anomalous days along with the actual and predicted CSSR values. The visualization is generated based on user requests, allowing them to choose between different graph and table formats.
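The table format described above can be sketched as a plain-text rendering; the row shape and column layout are illustrative assumptions, standing in for whatever graph or table format the user requests.

```python
def render_table(rows):
    """Render detected anomalies as a plain-text table (one visualization option)."""
    header = f"{'day':<12}{'predicted':>10}{'actual':>8}{'watermark':>10}  flag"
    lines = [header]
    for r in rows:
        # A day is flagged when its actual CSSR falls below the watermark.
        flag = "ANOMALY" if r["actual"] < r["watermark"] else ""
        lines.append(f"{r['day']:<12}{r['predicted']:>10.1f}{r['actual']:>8.1f}"
                     f"{r['watermark']:>10.1f}  {flag}")
    return "\n".join(lines)

rows = [
    {"day": "2024-04-01", "predicted": 98.0, "actual": 95.0, "watermark": 96.0},
    {"day": "2024-04-02", "predicted": 97.5, "actual": 97.8, "watermark": 96.0},
]
print(render_table(rows))
```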
[0089] At step 414: The system (108) further analyses the generated anomaly graph. In an embodiment, the system (108) waits for the user to analyze the anomaly graph from step 412. It determines, based on the user inputs, that the user has reviewed the graph visualization.
[0090] At step 416: If the user or system (108) has analyzed the graph, and anomalies are detected based on the analysis, the system (108) sends notifications to the network operations teams with details on the anomalies. This allows the network operators to investigate the root causes and take corrective actions to address any issues impacting the CSSR. The insights derived from the anomaly detection enable proactive resolution of network problems. For example, if the model detected an unusual dip in the CSSR for a particular region on certain days, the network team could analyze the detailed CDRs to identify any infrastructure failures or configuration issues that need to be fixed.
[0091] At step 418: If the anomalies are not detected based on analysis of the graph, the system (108) terminates the anomaly detection process.
[0092] FIG. 5 illustrates an example computer system (500) in which or
with which the embodiments of the present disclosure may be implemented.
[0093] As shown in FIG. 5, the computer system (500) may include an external storage device (510), a bus (520), a main memory (530), a read-only memory (540), a mass storage device (550), communication port(s) (560), and a processor (570). A person skilled in the art will appreciate that the computer system (500) may include more than one processor and communication ports. The processor (570) may include various modules associated with embodiments of the present disclosure. The communication port(s) (560) may be any of an RS-232 port for use with a modem-based dialup connection, a 10/100 Ethernet port, a Gigabit or 10 Gigabit port using copper or fiber, a serial port, a parallel port, or other existing or future ports. The communication port(s) (560) may be chosen depending on a network, such as a Local Area Network (LAN), Wide Area Network (WAN), or any network to which the computer system (500) connects.
[0094] In an embodiment, the main memory (530) may be Random Access Memory (RAM), or any other dynamic storage device commonly known in the art. The read-only memory (540) may be any static storage device(s), e.g., but not limited to, a Programmable Read Only Memory (PROM) chip for storing static information, e.g., start-up or basic input/output system (BIOS) instructions for the processor (570). The mass storage device (550) may be any current or future mass storage solution, which can be used to store information and/or instructions. Exemplary mass storage solutions include, but are not limited to, Parallel Advanced Technology Attachment (PATA) or Serial Advanced Technology Attachment (SATA) hard disk drives or solid-state drives (internal or external, e.g., having Universal Serial Bus (USB) and/or Firewire interfaces).
[0095] In an embodiment, the bus (520) may communicatively couple the processor(s) (570) with the other memory, storage, and communication blocks. The bus (520) may be, e.g., a Peripheral Component Interconnect (PCI)/PCI Extended (PCI-X) bus, Small Computer System Interface (SCSI), Universal Serial Bus (USB), or the like, for connecting expansion cards, drives, and other subsystems, as well as other buses, such as a front side bus (FSB), which connects the processor (570) to the computer system (500).
[0096] In another embodiment, operator and administrative interfaces, e.g., a display, keyboard, and cursor control device may also be coupled to the bus (520) to support direct operator interaction with the computer system (500). Other operator and administrative interfaces can be provided through network connections connected through the communication port(s) (560). Components described above are meant only to exemplify various possibilities. In no way should the aforementioned exemplary computer system (500) limit the scope of the present disclosure.
[0097] FIG. 6 illustrates an exemplary flow chart of a method (600) for anomaly detection for a call setup success rate (CSSR) using a trained machine learning model, in accordance with an embodiment of the present disclosure.
[0098] At step 602, the data associated with the CSSR is received. In one embodiment, one or more inputs are received from the users (102). These inputs include information identifying call data records (CDRs) which are to be analyzed for detecting anomalies in the call setup success rate (CSSR). The CDRs act as a data set (i.e., a CSSR data source providing CDRs) for analysis. Further, the CDRs are different for different locations, and anomalies in the CSSR are different for different locations.
[0099] The received data is stored in a database (210) at step 604. A model is created using a machine learning (ML) engine (214) at step 606. The ML engine (214) utilizes one or more regression techniques to create the model. The data is retrieved from the database (210) at step 608. The model is executed to detect the one or more anomalies associated with the data, wherein the data includes call data records (CDRs), at step 610. The one or more detected anomalies are visually represented at step 612.
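Steps 602-612 can be sketched end to end as follows. An in-memory list stands in for the database, a simple mean over the training period stands in for the trained regression model, and the record shape is assumed; the sketch only illustrates the flow of store, train, retrieve, execute, and report.

```python
def run_method_600(cdrs, train_days, test_days, watermark):
    """Sketch of steps 602-612 with in-memory stand-ins for the database and model."""
    db = list(cdrs)                                    # steps 602-604: receive and store
    train = [c for c in db if c["day"] in train_days]  # step 606: fit on training period
    baseline = sum(c["cssr"] for c in train) / len(train)
    test = [c for c in db if c["day"] in test_days]    # step 608: retrieve test data
    anomalies = [c for c in test if c["cssr"] < watermark]  # step 610: execute model
    return baseline, anomalies                         # step 612: caller visualizes

cdr_days = [{"day": d, "cssr": v}
            for d, v in [("d1", 97.0), ("d2", 97.4), ("d3", 95.1), ("d4", 97.2)]]
baseline, found = run_method_600(cdr_days, {"d1", "d2"}, {"d3", "d4"}, watermark=96.0)
print(baseline, [c["day"] for c in found])
```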
[00100] The present disclosure provides a technical advancement related to detecting anomalies in a call setup success rate (CSSR). This advancement addresses the limitations of existing solutions by using a ML model which is trained using historical data (CDRs) based on the one or more parameters provided by the user. The disclosure involves analyzing the CDRs using the trained ML model to detect the anomalies in the CSSR. The ML model uses a range of advanced regression algorithms, i.e., two-factor regression, multitude decision tree, a periodicity algorithm, a scalar boost algorithm, and a heuristic gain algorithm. The results obtained from the analysis can be presented in the form of visually informative graphs and tables, providing a clear and comprehensive understanding of the detected anomalies.
[00101] While considerable emphasis has been placed herein on the preferred embodiments, it will be appreciated that many embodiments can be made and that many changes can be made in the preferred embodiments without departing from the principles of the disclosure. These and other changes in the preferred embodiments of the disclosure will be apparent to those skilled in the art from the disclosure herein, whereby it is to be distinctly understood that the foregoing descriptive matter is to be implemented merely as illustrative of the disclosure and not as a limitation.
ADVANTAGES OF THE INVENTION
[00102] The present disclosure provides a system and a method that uses artificial intelligence (AI) and machine learning (ML) for detecting anomalies in a call setup success rate (CSSR) of a network.
[00103] The present disclosure provides a system and a method that uses various techniques to analyze data and detect anomalies in the CSSR as part of adaptive troubleshooting operations management.
[00104] The present disclosure provides a system and a method that
generates a ML model for conducting automated analysis and detecting anomalies
in the CSSR.
[00105] The present disclosure provides a system and a method that generates a visual representation of predicted values, actual values, and watermark values, and signifies predicted anomalies associated with the CSSR by highlighting the anomalies.
[00106] The present disclosure provides a system and a method that uses ML to identify instances where the CSSR falls below a predefined value in specific locations across a region.
[00107] The present disclosure provides a system and a method that transmits information to one or more network operation teams to initiate appropriate actions and resolve issues associated with the detected anomalies.
WE CLAIM:
1. A system (108) for detecting one or more anomalies in data associated with a call setup success rate (CSSR) of a telecommunications network, the system (108) comprising:
a memory (204);
one or more processor(s) (202) configured to fetch and execute computer-readable instructions stored in the memory (204) to:
receive the data associated with the CSSR via a user interface, wherein the data includes call data records (CDRs);
store the data in a database (210);
create a model using a machine learning (ML) engine (214) based on one or more input parameters received via the user interface, wherein the ML engine (214) utilizes one or more regression techniques to create the model;
retrieve the stored data from the database (210); and
execute the model to analyze the retrieved data for detecting the one or more anomalies associated with the retrieved data.
2. The system (108) of claim 1, wherein the one or more input
parameters received from a user (102) via a user interface (302) include at
least one of a training period, a test period, one or more features, and a
logical partitioning required for configuration of the model.
3. The system (108) of claim 1, further comprising:
a load balancer (304) configured to receive the data from the user interface (302);
a data collector (308) configured to collect the CDRs (306) from the data;
a data normalizer (310) configured to normalize the collected CDRs (306); and
the database (210, 312) configured to store the normalized CDRs (306).

4. The system (108) of claim 3, wherein the one or more processor(s) (202) are further configured to:
create the model based on the one or more input parameters received from the load balancer (304), wherein the load balancer (304) receives the one or more input parameters from the user interface; and
generate a visual representation of the detected one or more anomalies based on a user request via the user interface (302), wherein the visual representation includes at least one of a graph format and a table format for displaying predicted values, actual values, and watermark values of the CSSR and the one or more detected anomalies.
5. The system (108) of claim 4, wherein the one or more processor(s) (202)
are further configured for transmitting the data and one or more insights
derived from analysis of the data to one or more network operation teams
for enabling the one or more network operation teams to take appropriate
actions.
6. The system (108) of claim 1, wherein the one or more processor(s) (202)
are configured to receive the data via a data ingestion engine (212) from a
computing device (104) associated with a user (102).
7. The system (108) of claim 1, wherein the one or more regression techniques
include a two-factor regression technique, a multitude decision tree
technique, a periodicity technique, a scalar boost technique, and a heuristic
gain technique.
8. The system (108) of claim 7, wherein the one or more processor(s) (202)
are configured to generate one or more hyper parameters associated with the
model and fine-tune the one or more hyper parameters to generate an
optimized model for detecting the one or more anomalies.
9. A method (600) for detecting anomalies in a call setup success rate (CSSR) of a telecommunications network, the method comprising steps of:
receiving data associated with the CSSR via a user interface (302), wherein the data includes call data records (CDRs);
storing the data in a database (210);
creating a model using a machine learning (ML) engine (214) based on one or more input parameters received via the user interface, wherein the ML engine (214) utilizes one or more regression techniques to create the model;
retrieving the stored data from the database (210); and
executing the model to analyze the retrieved data for detecting the one or more anomalies associated with the retrieved data.
10. The method (600) of claim 9, wherein:
the one or more input parameters received from a user (102) via the user interface (302) include at least one of a training period, a test period,
one or more features, and a logical partitioning required for configuration of the model.
11. The method (600) of claim 9, further comprises steps for:
receiving, by a load balancer (304), the data from the user interface
(302);
collecting, by a data collector (308), the CDRs (306) from the data;
normalizing, by a data normalizer (310), the collected CDRs (306); and
storing, by the database (210, 312), the normalized CDRs (306).
12. The method (600) of claim 11, further comprises steps for:
creating the model based on the one or more input parameters
received from the load balancer (304), wherein the load balancer (304)
receives the one or more input parameters from the user interface (302); and
generating a visual representation of the detected one or more
anomalies based on a user request via the user interface (302), wherein the
visual representation includes at least one of a graph format and a table
format for displaying predicted values, actual values, and watermark values
of the CSSR and the one or more detected anomalies.
13. The method (600) of claim 9, further comprises steps for transmitting the
data and one or more insights derived from the analysis of the data to one
or more network operation teams for enabling the one or more network
operation teams to take appropriate actions.
14. The method (600) of claim 9, further comprises receiving the data via a data
ingestion engine (212) from a computing device (104) associated with a user
(102).
15. The method (600) of claim 9, wherein the one or more regression techniques
include a two-factor regression technique, a multitude decision tree
technique, a periodicity technique, a scalar boost technique, and a heuristic
gain technique.
16. The method (600) of claim 15, further comprises generating one or more hyper parameters associated with the model using the one or more regression techniques and fine-tuning the one or more hyper parameters to generate an optimized model for detecting the one or more anomalies.
17. A User Equipment (UE) (104) communicatively coupled to a system (108) for detecting anomalies in a call setup success rate (CSSR) of a telecommunications network, wherein the UE (104) is configured for:
receiving data associated with the CSSR; and
transmitting the data to the system (108), wherein the system (108) is configured for detecting the anomalies in the call setup success rate (CSSR) as claimed in claim 1.