
System And Method For Monitoring Performance Data Of A Network In Real Time

Abstract: The present disclosure provides a method and system for monitoring performance data associated with one or more parameters of a network. The one or more parameters may include call release reasons (CRRS). The one or more parameters are selected by a user, and performance data associated with the parameters is retrieved from a cache layer. One or more values associated with the one or more parameters are calculated based upon determining that the one or more values were not computed previously. An AI/ML engine generates data based on the one or more parameters for computing the one or more values. The determined values are stored in the cache layer to be retrieved for one or more subsequent requests. Updated performance data is generated based on the one or more values and the retrieved performance data for rendering on a user interface. [FIG. 3]


Patent Information

Application #:
Filing Date: 29 July 2023
Publication Number: 05/2025
Publication Type: INA
Invention Field: COMPUTER SCIENCE
Status:
Parent Application:

Applicants

JIO PLATFORMS LIMITED
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India.

Inventors

1. BHATNAGAR, Aayush
Tower-7, 15B, Beverly Park, Sector-14 Koper Khairane, Navi Mumbai - 400701, Maharashtra, India.
2. MURARKA, Ankit
W-16, F-1603, Lodha Amara, Kolshet Road, Thane West - 400607, Maharashtra, India.
3. SAXENA, Gaurav
B1603, Platina Cooperative Housing Society, Casa Bella Gold, Kalyan Shilphata Road, Near Xperia Mall Palava City, Dombivli, Kalyan, Thane - 421204, Maharashtra, India.
4. SHOBHARAM, Meenakshi
2B-62, Narmada, Kalpataru, Riverside, Takka, Panvel, Raigargh - 410206, Maharashtra, India.
5. BHANWRIA, Mohit
39, Behind Honda Showroom, Jobner Road, Phulera, Jaipur - 303338, Rajasthan, India.
6. GAYKI, Vinay
259, Bajag Road, Gadasarai, District -Dindori - 481882, Madhya Pradesh, India.
7. KUMAR, Durgesh
Mohalla Ramanpur, Near Prabhat Junior High School, Hathras, Uttar Pradesh -204101, India.
8. BHUSHAN, Shashank
Fairfield 1604, Bharat Ecovistas, Shilphata, NH48, Thane - 421204, Maharashtra, India.
9. KHADE, Aniket Anil
X-29/9, Godrej Creek Side Colony, Phirojshanagar, Vikhroli East - 400078, Mumbai, Maharashtra, India.
10. KOLARIYA, Jugal Kishore
C 302, Mediterranea CHS Ltd, Casa Rio, Palava, Dombivli - 421204, Maharashtra, India.
11. VERMA, Rahul
A-154, Shradha Puri Phase-2, Kanker Khera, Meerut - 250001, Uttar Pradesh, India.
12. KUMAR, Gaurav
1617, Gali No. 1A, Lajjapuri, Ramleela Ground, Hapur - 245101, Uttar Pradesh, India.
13. MEENA, Sunil
D-29/1, Chitresh Nagar, Borkhera District-Kota, Rajasthan - 324001, India.
14. SAHU, Kishan
Ajay Villa, Gali No. 2 Ambedkar Colony, Bikaner, Rajasthan - 334003, India.
15. DE, Supriya
G2202, Sheth Avalon, Near Jupiter Hospital Majiwada, Thane West - 400601, Maharashtra, India.
16. KUMAR, Debashish
Bhairaav Goldcrest Residency, E-1304, Sector 11, Ghansoli, Navi Mumbai - 400701, Maharashtra, India.
17. TILALA, Mehul
64/11, Manekshaw Marg, Manekshaw Enclave, Delhi Cantonment, New Delhi - 110010, India.
18. GANVEER, Chandra Kumar
Village - Gotulmunda, Post - Narratola, Dist. - Balod - 491228, Chhattisgarh, India.
19. CHAUDHARY, Sanjana
Jawaharlal Road, Muzaffarpur - 842001, Bihar, India.
20. KUSHWAHA, Avinash
SA 18/127, Mauza Hall, Varanasi - 221007, Uttar Pradesh, India.
21. GARG, Harshita
37A, Ananta Lifestyle, Airport Road, Zirakpur, Mohali, Punjab - 140603, India.
22. KUMAR, Yogesh
Village-Gatol, Post-Dabla, Tahsil-Ghumarwin, District-Bilaspur, Himachal Pradesh - 174021, India.
23. TALGOTE, Kunal
29, Nityanand Nagar, Nr. Tukaram Hosp., Gaurakshan Road, Akola - 444004, Maharashtra, India.
24. GURBANI, Gourav
I-1601, Casa Adriana, Downtown, Palava Phase 2, Dombivli, Maharashtra - 421204, India.
25. VISHWAKARMA, Dharmendra Kumar
Ramnagar, Sarai Kansarai, Bhadohi - 221404, Uttar Pradesh, India.
26. SONI, Sajal
K. P. Nayak Market Mauranipur, Jhansi, Uttar Pradesh - 284204, India.

Specification

FORM 2
THE PATENTS ACT, 1970
(39 of 1970)
THE PATENTS RULES, 2003
COMPLETE SPECIFICATION
TITLE OF THE INVENTION
SYSTEM AND METHOD FOR MONITORING PERFORMANCE DATA OF A NETWORK IN REAL-TIME
APPLICANT
JIO PLATFORMS LIMITED, Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India; Nationality: India
The following specification particularly describes the invention and the manner in which it is to be performed.

RESERVATION OF RIGHTS
[0001] A portion of the disclosure of this patent document contains material which is subject to intellectual property rights such as, but not limited to, copyright, design, trademark, Integrated Circuit (IC) layout design, and/or trade dress protection, belonging to Jio Platforms Limited (JPL) or its affiliates (hereinafter referred to as owner). The owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure, as it appears in the Patent and Trademark Office patent files or records, but otherwise reserves all rights whatsoever. All rights to such intellectual property are fully reserved by the owner.
FIELD OF DISCLOSURE
[0002] The embodiments of the present disclosure generally relate to communication networks. In particular, the present disclosure relates to a system and a method for monitoring performance data of a network in real-time.
DEFINITION
[0003] Call Release Reasons (CRRS) - indicate whether the users voluntarily terminated the call, or whether the call was terminated by the network due to poor connectivity, improper operation of components of the network, and the like.
[0004] Distributed file system - A distributed file system (DFS), as the name suggests, is a file system that is distributed across multiple file servers or multiple locations. It allows programs to access or store isolated files as they do with local ones, allowing programmers to access files from any network or computer.
[0005] Load balancer - A load balancer is a device that sits between the user and the server group and acts as an invisible facilitator, ensuring that all resource servers are used equally.
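The even spreading of requests that the definition above describes can be sketched with a simple round-robin strategy in Python. This is only an illustrative sketch; the server names and the single-process setup are assumptions, not part of the disclosure.

```python
from itertools import cycle

# Hypothetical pool of backend servers; names are illustrative.
servers = ["engine-1", "engine-2", "engine-3"]
next_server = cycle(servers)

def route(request):
    """Forward the request to the next server in round-robin order."""
    target = next(next_server)
    return target

# Three consecutive requests are spread evenly across the pool.
assigned = [route(f"req-{i}") for i in range(3)]
print(assigned)  # ['engine-1', 'engine-2', 'engine-3']
```

Real load balancers typically also weigh server health and current load, but round-robin captures the "used equally" behaviour in its simplest form.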
[0006] Computation Layer - is responsible for performing data filtering and geography-based network function failure data computation. It retrieves raw error code data, applies filtering and aggregation operations based on user requests, and computes relevant metrics and insights.
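As a rough illustration of the filtering and aggregation the computation layer performs, the following Python sketch filters raw error-code records to one geography and aggregates failure counts per error code. The record fields and error-code strings are invented for illustration and are not taken from the disclosure.

```python
from collections import Counter

# Illustrative raw call-release records; field names are assumptions.
raw_records = [
    {"circle": "Mumbai", "error_code": "POOR_CONNECTIVITY"},
    {"circle": "Mumbai", "error_code": "USER_TERMINATED"},
    {"circle": "Delhi",  "error_code": "POOR_CONNECTIVITY"},
    {"circle": "Mumbai", "error_code": "POOR_CONNECTIVITY"},
]

def failures_by_geography(records, circle):
    """Filter records to one geography, then count occurrences of each error code."""
    filtered = (r for r in records if r["circle"] == circle)
    return Counter(r["error_code"] for r in filtered)

print(failures_by_geography(raw_records, "Mumbai"))
# Counter({'POOR_CONNECTIVITY': 2, 'USER_TERMINATED': 1})
```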
[0007] Network performance data - represents the performance of a network at a specific interval of time. The network performance data depends on multiple parameters such as time, location, type of network, etc. For example, the network performance data may include efficiency, latency, resource utilization, congestion, etc. The performance data associated with Call Release Reasons (CRRS) includes the number of voluntarily terminated calls, whether a call was terminated by the network due to poor connectivity, the number of calls terminated due to network connectivity, and the number of calls terminated due to improper operation of components of the network.
[0008] The expression ‘KPI (Key Performance Indicator)’ used hereinafter in the specification refers to a measurement and a benchmark to achieve optimal network performance goals. To support these goals, measuring actual performance against the KPI goals helps the network team make decisions to improve and sustain network performance.
BACKGROUND OF DISCLOSURE
[0009] The following description of related art is intended to provide background information pertaining to the field of the disclosure. This section may include certain aspects of the art that may be related to various features of the present disclosure. However, it should be appreciated that this section should be used only to enhance the understanding of the reader with respect to the present disclosure, and not as admissions of prior art.
[0010] Generally, networks generate high volumes of data as they provide services to one or more user equipment (UEs). The high volumes of data may be in the form of session log data, among others, and may be used for deriving performance metrics for the network. For instance, Call Release Reasons (CRRS) in subscriber session logs indicate the reason for terminating a call between two or more user equipment (UE). CRRS may indicate whether the users voluntarily terminated the call, or whether the call was terminated by the network due to poor connectivity, improper operation of components of the network, and the like. CRRS may be stored and analysed to understand or infer bottlenecks causing degradation in performance of the network.
[0011] To analyse the performance data of the network, on-demand dashboards are often used, where visualizations may be provided to the users to analyse the performance data of the network. However, in such dashboards, users may have to manually provide inputs to generate visualizations every time the results need to be updated. Such dashboards are not suitable for continuous monitoring that may allow for real-time analysis of the performance data, such as trend analysis, among others. Further, existing dashboards may not be able to handle the computational burden of analysing trends in performance data over a time span of months or years. Hence, it becomes necessary to optimize computational efficiency while maintaining real-time data delivery.
[0012] There is, therefore, a need in the art to provide a method and a system that can overcome the shortcomings of the existing prior art.
SUMMARY

[0013] In an exemplary embodiment, a method for monitoring performance data of a network in real-time is described. The embodiment describes monitoring performance data associated with one or more parameters of a network. In an example, the one or more parameters may include call release reasons (CRRS), call set up success rate (CSSR), answer-seizure ratio (ASR), etc. The method includes selecting, by a user, the one or more parameters for monitoring the performance data associated with the one or more parameters of the network using a user interface (UI). The method includes retrieving, by a workflow engine, the performance data associated with the one or more parameters from a cache layer. The method includes calculating one or more values associated with the one or more parameters based upon determining that the one or more values were not computed previously. A computing engine then communicates with an artificial intelligence/machine learning (AI/ML) engine, which generates data based on the one or more parameters. The computing engine computes the one or more values based on the data received from the AI/ML engine. Alternatively, the method includes fetching, by the computing engine, the one or more values from a distributed file system upon determining that the one or more values were computed previously and are stored in the distributed file system. The method further includes storing, by the workflow engine, the determined one or more values in the cache layer to be retrieved for one or more subsequent requests including the one or more parameters, and generating, by the computing engine, updated performance data based on the one or more values and the retrieved performance data. The method comprises transmitting, by the workflow engine, the updated performance data to the user interface for rendering to the user.
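The decision sequence described in this method (serve from the cache if present, otherwise from the distributed file system, otherwise compute via the AI/ML engine, then cache the result) resembles the familiar cache-aside pattern. The Python sketch below captures that control flow only; the engine stubs, data shapes, and in-memory dictionaries are illustrative assumptions, not the claimed implementation.

```python
cache = {}   # stands in for the cache layer
dfs = {}     # stands in for the distributed file system

def aiml_generate(parameter):
    """Stub for the AI/ML engine; returns synthetic data for the parameter."""
    return {"parameter": parameter, "samples": [1, 2, 3]}

def compute_value(data):
    """Stub for the computing engine; derives a value from the generated data."""
    return sum(data["samples"])

def get_value(parameter):
    if parameter in cache:        # previously computed, served from the cache
        return cache[parameter]
    if parameter in dfs:          # previously computed, stored in the DFS
        value = dfs[parameter]
    else:                         # not computed before: generate data and compute
        value = compute_value(aiml_generate(parameter))
    cache[parameter] = value      # store for subsequent requests
    return value

print(get_value("CRRS"))  # first request: computed, prints 6
print(get_value("CRRS"))  # second request: served from the cache, prints 6
```

The point of the pattern is that the expensive computation runs at most once per parameter; every subsequent request with the same parameters is a cache hit.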
[0014] In an embodiment, the performance data associated with CRRS includes the number of voluntarily terminated calls, whether a call was terminated by the network due to poor connectivity, the number of calls terminated due to network connectivity, and the number of calls terminated due to improper operation of components of the network. Further, the one or more values are indicative of the performance data. For example, the one or more values for the CRRS include the values for the number of voluntarily terminated calls, whether a call was terminated by the network due to poor connectivity, the number of calls terminated due to network connectivity, and the number of calls terminated due to improper operation of components of the network. In another example, the one or more values related to CRRS may include values for acknowledgement timeout, routes, no answer reason, and network issues. In another example, the values for CSSR may include the number of attempts, the number of connected calls, the number of dropped calls, etc.
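For concreteness, CSSR and the related ASR can be derived from such counts as simple ratios. The formulas below follow the conventional telecom definitions (success or answer events as a percentage of attempts or seizures); the numeric figures are invented for illustration and do not come from the disclosure.

```python
def cssr(attempts, connected):
    """Call setup success rate: connected calls as a percentage of call attempts."""
    return 100.0 * connected / attempts if attempts else 0.0

def asr(seizures, answered):
    """Answer-seizure ratio: answered calls as a percentage of seizure attempts."""
    return 100.0 * answered / seizures if seizures else 0.0

# Illustrative counts only.
print(cssr(attempts=2000, connected=1900))  # 95.0
print(asr(seizures=1800, answered=1350))    # 75.0
```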
[0015] In an embodiment, one or more visualizations associated with the one
or more values and updated performance data are displayed to the user via the user interface.
[0016] In an embodiment, the request from the user is received by the workflow engine via a load balancer unit.
[0017] In an embodiment, one or more parameters include call release reasons
(CRRS).
[0018] In an embodiment, the performance data is monitored in real-time.
[0019] In an embodiment, the request includes time interval information which indicates an interval of time during which the performance data is to be monitored.
[0020] In an embodiment, a system for monitoring performance data associated with one or more parameters of a network is disclosed. The system includes a memory and one or more processor(s) configured to fetch and execute computer-readable instructions stored in the memory. The system includes a user interface through which a user selects the one or more parameters for monitoring the performance data. A workflow engine receives a request including the one or more parameters for monitoring the performance data. The workflow engine retrieves the performance data associated with the one or more parameters from a cache layer based on the one or more parameters. One or more values associated with the one or more parameters are determined upon determining that the one or more values were not computed previously. The computing engine, based on the determination, communicates with an artificial intelligence/machine learning (AI/ML) engine to receive data from the AI/ML engine based on the one or more parameters. The computing engine computes the one or more values based on the received data. Alternatively, the computing engine fetches the one or more values from a distributed file system upon determining that the one or more values were computed previously and are stored in the distributed file system. The workflow engine stores the determined one or more values in the cache layer so that the stored values can be retrieved for one or more subsequent requests including the one or more parameters. The computing engine generates updated performance data based on the one or more values and the retrieved performance data and transmits the updated performance data to the user interface for rendering to the user.
[0021] In an embodiment, one or more visualizations associated with the one or more values and updated performance data are displayed to the user via the user interface.
[0022] In an embodiment, the request from the user is received by the workflow engine via a load balancer unit.
[0023] In an embodiment, one or more parameters include call release reasons
(CRRS).
[0024] In an embodiment, the performance data is monitored in real-time.
[0025] In an embodiment, the request includes time interval information which indicates an interval of time during which the performance data is to be monitored.
[0026] In an embodiment, a User Equipment (UE) is communicatively coupled to a system for monitoring performance data associated with one or more parameters of a network. The UE is configured for transmitting the one or more parameters to the system, which is configured for monitoring the performance data according to the method steps as described above.
[0027] In an embodiment, a computer program product is disclosed which comprises a non-transitory computer-readable medium comprising instructions that, when executed by one or more processors, cause the one or more processors to perform steps of selecting, by a user, the one or more parameters for monitoring the performance data using a user interface (UI). The steps include receiving, by a workflow engine, a request including the one or more parameters. The performance data associated with the one or more parameters is retrieved from a cache layer. The steps include determining, by the computing engine, that the one or more values were not computed previously, and communicating with an artificial intelligence/machine learning (AI/ML) engine to receive data based on the one or more parameters. The computing engine then computes the one or more values based on the data received from the AI/ML engine. Alternatively, the computing engine may fetch the one or more values from a distributed file system upon determining that the one or more values were computed previously and are stored in the distributed file system. The steps further include storing, by the workflow engine, the determined one or more values in the cache layer to be retrieved for one or more subsequent requests including the one or more parameters. The steps further include generating, by the computing engine, updated performance data based on the one or more values and the retrieved performance data. The steps further include transmitting, by the workflow engine, the updated performance data to the user interface for rendering to the user.
[0028] The foregoing general description of the illustrative embodiments and the following detailed description thereof are merely exemplary aspects of the teachings of this disclosure, and are not restrictive.
OBJECT OF THE PRESENT DISCLOSURE
[0029] Some of the objects of the present disclosure, which at least one embodiment herein satisfies, are as listed herein below.
[0030] An object of the present disclosure is to provide a system and a method for monitoring performance data of a network in real-time.
[0031] An object of the present disclosure is to provide a system and a method that optimally utilizes resources and reduces computational burdens for monitoring the performance data in real time.
[0032] An object of the present disclosure is to provide a system and a method
that utilizes memory to prevent duplication of computation.
[0033] An object of the present disclosure is to provide a system and a method that fine-tunes network configurations based on live performance data.
[0034] An object of the present disclosure is to provide a system and a method
that uses an Artificial Intelligence engine to determine one or more computed values that indicate the performance of the network.
[0035] These and other objectives and advantages of the embodiments of the present invention will become readily apparent from the following detailed description taken in conjunction with the accompanying drawings.
BRIEF DESCRIPTION OF THE DRAWINGS
[0036] The accompanying drawings, which are incorporated herein, and constitute a part of this disclosure, illustrate exemplary embodiments of the disclosed methods and systems in which like reference numerals refer to the same parts throughout the different drawings. Components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present disclosure. Some drawings may indicate the components using block diagrams and may not represent the internal circuitry of each component. It will be appreciated by those skilled in the art that disclosure of such drawings includes the disclosure of electrical components, electronic components or circuitry commonly used to implement such components.
[0037] FIG. 1 illustrates an exemplary network architecture for monitoring performance data of a network in real-time, according to an embodiment of the present invention.
[0038] FIG. 2 illustrates an exemplary block diagram of a system for monitoring the performance data in real-time, according to an embodiment of the present invention.
[0039] FIG. 3 illustrates an exemplary sequence diagram for monitoring the performance data in real-time, according to an embodiment of the present invention.
[0040] FIG. 4 illustrates an exemplary flow diagram of a method for monitoring
the performance data in real-time, according to an embodiment of the present invention.
[0041] FIG. 5 illustrates an exemplary computer system in which or with which embodiments of the present invention may be implemented.
[0042] The foregoing shall be more apparent from the following more detailed
description of the disclosure.
LIST OF REFERENCE NUMERALS
100 – Network architecture
102-1, 102-2…102-N – Users
104-1, 104-2…104-N – User equipment
106 – Network
108 – System
202 – Processor
204 – Memory
206 – Interface
208 – Processing engine
210 – Database
212 – Computing engine
214 – Caching engine
216 – AI/ML engine
218 – Other units
220 – Workflow engine
224 – Load balancing engine
300 – Implementation of the system
302 – User
304 – Step
306 – Step
308 – Step
310 – User interface
312 – Step
314 – Step
316 – Step
318 – Step
320 – Load balancer engine
322 – Step
324 – Step
326 – Step
328 – Workflow engine
330 – Computing engine
332 – AI/ML engine
334 – Distributed file system
370 – Cache layer
400 – Method
402 – Step
404 – Step
406 – Step
408 – Step
410 – Step
412 – Step
414 – Step
500 – Computer system
510 – External storage device
520 – Bus
530 – Main memory
540 – Read only memory
550 – Mass storage device
560 – Communication port
570 – Computer system processor
DETAILED DESCRIPTION OF THE DISCLOSURE
[0043] In the following description, for the purposes of explanation, various specific details are set forth in order to provide a thorough understanding of embodiments of the present disclosure. It will be apparent, however, that embodiments of the present disclosure may be practiced without these specific details. Several features described hereafter can each be used independently of one another or with any combination of other features. An individual feature may not address all of the problems discussed above or might address only some of the problems discussed above. Some of the problems discussed above might not be fully addressed by any of the features described herein.
[0044] The ensuing description provides exemplary embodiments only, and is not intended to limit the scope, applicability, or configuration of the disclosure. Rather, the ensuing description of the exemplary embodiments will provide those skilled in the art with an enabling description for implementing an exemplary embodiment. It should be understood that various changes may be made in the function and arrangement of elements without departing from the spirit and scope of the disclosure as set forth.
[0045] Specific details are given in the following description to provide a thorough understanding of the embodiments. However, it will be understood by one of ordinary skill in the art that the embodiments may be practiced without these specific details. For example, circuits, systems, networks, processes, and other components may be shown as components in block diagram form in order not to obscure the embodiments in unnecessary detail. In other instances, well-known circuits, processes, algorithms, structures, and techniques may be shown without unnecessary detail in order to avoid obscuring the embodiments.
[0046] Also, it is noted that individual embodiments may be described as a process which is depicted as a flowchart, a flow diagram, a data flow diagram, a structure diagram, or a block diagram. Although a flowchart may describe the operations as a sequential process, many of the operations can be performed in parallel or concurrently. In addition, the order of the operations may be re-arranged. A process is terminated when its operations are completed but could have additional steps not included in a figure. A process may correspond to a method, a function, a procedure, a subroutine, a subprogram, etc. When a process corresponds to a function, its termination can correspond to a return of the function to the calling function or the main function.
[0047] The word “exemplary” and/or “demonstrative” is used herein to mean serving as an example, instance, or illustration. For the avoidance of doubt, the subject matter disclosed herein is not limited by such examples. In addition, any aspect or design described herein as “exemplary” and/or “demonstrative” is not necessarily to be construed as preferred or advantageous over other aspects or designs, nor is it meant to preclude equivalent exemplary structures and techniques known to those of ordinary skill in the art. Furthermore, to the extent that the terms “includes,” “has,” “contains,” and other similar words are used in either the detailed description or the claims, such terms are intended to be inclusive in a manner similar to the term “comprising” as an open transition word without precluding any additional or other elements.
[0048] Reference throughout this specification to “one embodiment” or “an embodiment” or “an instance” or “one instance” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present disclosure. Thus, the appearances of the phrases “in one embodiment” or “in an embodiment” in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
[0049] The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. As used herein, the singular forms “a”, “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. As used herein, the term “and/or” includes any and all combinations of one or more of the associated listed items.
[0050] The aspects of the present disclosure are directed to a system and a method for monitoring performance data of a network in real-time. The system enables a user to select one or more parameters for monitoring performance data using a user interface (UI). A workflow engine receives a request including the one or more parameters for monitoring the performance data and retrieves the performance data associated with the one or more parameters from a cache layer. One or more values associated with the one or more parameters are determined upon determining that the one or more values were not computed previously. The computing engine communicates with an artificial intelligence/machine learning (AI/ML) engine to extract data based on the one or more parameters. The computing engine computes the one or more values based on the data. Alternatively, the computing engine fetches the one or more values from a distributed file system upon determining that the one or more values were computed previously and are stored in the distributed file system. The workflow engine stores the determined one or more values in the cache layer so that the stored values can be retrieved for one or more subsequent requests including the one or more parameters. The computing engine generates updated performance data based on the one or more values and the retrieved performance data and transmits the updated performance data to the user interface for rendering to the user.
[0051] In an embodiment, the performance data associated with CRRS includes the number of voluntarily terminated calls, whether a call was terminated by the network due to poor connectivity, the number of calls terminated due to network connectivity, and the number of calls terminated due to improper operation of components of the network. Further, the one or more values are indicative of the performance data. For example, the one or more values for the CRRS include the values for the number of voluntarily terminated calls, whether a call was terminated by the network due to poor connectivity, the number of calls terminated due to network connectivity, and the number of calls terminated due to improper operation of components of the network.
[0052] The various embodiments throughout the disclosure will be explained
in more detail with reference to FIGs. 1-5.
[0053] Referring to FIG. 1, a network architecture (100) may include one or more computing devices or user equipment (104-1, 104-2…104-N) associated with one or more users (102-1, 102-2…102-N) in an environment. A person of ordinary skill in the art will understand that the one or more users (102-1, 102-2…102-N) may be individually referred to as the user (102) and collectively referred to as the users (102). Similarly, a person of ordinary skill in the art will understand that the one or more user equipment (104-1, 104-2…104-N) may be individually referred to as the user equipment (104) and collectively referred to as the user equipment (104). A person of ordinary skill in the art will appreciate that the terms “computing device(s)” and “user equipment” may be used interchangeably throughout the disclosure. Although three user equipment (104) are depicted in FIG. 1, any number of user equipment (104) may be included without departing from the scope of the ongoing description.
[0054] In an embodiment, the user equipment (104) may include, but is not limited to, a handheld wireless communication device (e.g., a mobile phone, a smart phone, a phablet device, and so on), a wearable computer device (e.g., a head-mounted display computer device, a head-mounted camera device, a wristwatch computer device, and so on), a Global Positioning System (GPS) device, a laptop computer, a tablet computer, or another type of portable computer, a media playing device, a portable gaming system, and/or any other type of computer device with wireless communication capabilities, and the like. In an embodiment, the user equipment (104) may include, but is not limited to, any electrical, electronic, or electromechanical equipment, or a combination of one or more of the above devices, such as virtual reality (VR) devices, augmented reality (AR) devices, a laptop, a general-purpose computer, a desktop, a personal digital assistant, a tablet computer, a mainframe computer, or any other computing device, wherein the user equipment (104) may include one or more in-built or externally coupled accessories including, but not limited to, a visual aid device such as a camera, an audio aid, a microphone, a keyboard, and input devices for receiving input from the operator or the entity such as a touchpad, a touch-enabled screen, an electronic pen, and the like. A person of ordinary skill in the art will appreciate that the user equipment (104) may not be restricted to the mentioned devices and various other devices may be used.
[0055] In an embodiment, the user equipment (104) may include smart devices
10 operating in a smart environment, for example, an Internet of Things (IoT) system. In
such an embodiment, the user equipment (104) may include, but is not limited to, smart
phones, smart watches, smart sensors (e.g., mechanical, thermal, electrical, magnetic,
etc.), networked appliances, networked peripheral devices, networked lighting system,
communication devices, networked vehicle accessories, networked vehicular devices,
15 smart accessories, tablets, smart television (TV), computers, smart security system,
smart home system, other devices for monitoring or interacting with or for the users
(102) and/or entities, or any combination thereof. A person of ordinary skill in the art
will appreciate that the user equipment (104) may include, but is not limited to,
intelligent, multi-sensing, network-connected devices, that can integrate seamlessly
with each other and/or with a central server or a cloud-computing system or any other
device that is network-connected.
[0056] Referring to FIG. 1, the user equipment (104) may communicate with a
system (108) through a network (106). In an embodiment, the network (106) may
include at least one of a Fifth Generation (5G) network, a Sixth Generation (6G) network, or the like. The
network (106) may enable the user equipment (104) to communicate with other devices
in the network architecture (100) and/or with the system (108). The network (106) may include a wireless card or some other transceiver connection to facilitate this
communication. In another embodiment, the network (106) may be implemented as, or
include any of a variety of different communication technologies such as a wide area
network (WAN), a local area network (LAN), a wireless network, a mobile network, a
Virtual Private Network (VPN), the Internet, the Public Switched Telephone Network
(PSTN), or the like. In an embodiment, each of the UEs (104) may have a unique
identifier attribute associated therewith. In an embodiment, the unique identifier
attribute may be indicative of Mobile Station International Subscriber Directory
Number (MSISDN), International Mobile Equipment Identity (IMEI) number,
International Mobile Subscriber Identity (IMSI), Subscriber Permanent Identifier
(SUPI), and the like.
[0057] In an embodiment, the network (106) may include one or more base
stations. The UEs (104) may connect to the base stations and request services from
them. The base station may be a network infrastructure that provides wireless access to
one or more UEs associated therewith. The base station may have coverage defined to
be a predetermined geographic area based on the distance over which a signal may be transmitted. The base station may include, but not be limited to, a wireless access point, an evolved NodeB (eNodeB), a 5G node or next generation NodeB (gNB), a wireless point, a transmission/reception point (TRP), and the like.
[0058] In an embodiment, the base station may include one or more operational
units that enable telecommunication between two or more UEs. In an embodiment, the
one or more operational units may include, but not be limited to, transceivers, baseband
unit (BBU), remote radio unit (RRU), antennae, mobile switching centres, radio
network control units, one or more processors associated thereto, and a plurality of
network function units such as Access and Mobility Management Function (AMF)
unit, Session Management Function (SMF) unit, Network Exposure Function (NEF)
units, or any custom built functions executing one or more processor-executable instructions, but not limited thereto.
[0059] Executing dashboards is a one-time activity performed by the user whenever needed. The user would need to press the execute button every time the results need to be updated. Similarly, reports give the user the feasibility of checking the compiled report at a later point in time. Sometimes, however, continuous monitoring becomes essential to understand a trend in real-time. The computation involving the trends can span back even to past months or past years. Additionally, it becomes necessary to optimize computation efficiency while maintaining real-time data delivery. The application monitors a massive performance matrix in real-time. In the live-monitoring case, the dashboard output needs to be recalculated at every new interval. For some cases of CRR, calculations need not be done every time, and previously computed results can be reused for many subsequent intervals to obtain the current interval value of the CRR. In the present disclosure, a caching layer is used that brings the computation time of a CRR down to the computation of the current interval value alone (for n intervals, up to n·t time could be taken, which becomes t when using the caching layer). Normally, it is not feasible to compute a high number of CRRs in real-time over years of duration. So, by training the ML model, it is possible to output the CRR values whenever needed.
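The interval-level caching described above can be sketched as follows. This is purely an illustrative sketch: the function names, the dictionary standing in for the cache layer, and the per-interval cost model are assumptions, not details of the disclosure.

```python
import time

# Illustrative in-memory stand-in for the cache layer, keyed by
# (parameter, interval); the real system uses a dedicated cache layer.
_cache = {}

def compute_crr(parameter, interval):
    """Simulate an expensive per-interval CRR computation (cost ~ t)."""
    time.sleep(0.001)  # stand-in for real work
    return hash((parameter, interval)) % 100

def monitor(parameter, intervals):
    """Return CRR values for every interval, computing only uncached ones."""
    values = []
    for interval in intervals:
        key = (parameter, interval)
        if key not in _cache:            # only new intervals are computed
            _cache[key] = compute_crr(parameter, interval)
        values.append(_cache[key])
    return values

first = monitor("CRR", range(10))    # computes all 10 intervals
second = monitor("CRR", range(11))   # computes only the new interval 10
```

On a refresh that extends the window by one interval, only the newest interval is computed, so the cost of the refresh falls from roughly n·t to t.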
[0060] FIG. 2 illustrates a block diagram of the system (108) for monitoring the
performance data in real-time, according to an embodiment of the present invention.
[0061] In an aspect, the system (108) may include one or more processor(s)
(202). The one or more processor(s) (202) may be implemented as one or more microprocessors, microcomputers, microcontrollers, edge or fog microcontrollers, digital signal processors, central processing units, logic circuitries, and/or any devices that process data based on operational instructions. Among other capabilities, one or
more processor(s) (202) may be configured to fetch and execute computer-readable
instructions stored in memory (204) of the system (108). The memory (204) may be configured to store one or more computer-readable instructions or routines in a non-
transitory computer-readable storage medium, which may be fetched and executed to
create or share data packets over a network service. The memory (204) may include
any non-transitory storage device including, for example, volatile memory such as
random-access memory (RAM), or non-volatile memory such as Erasable
Programmable Read-Only Memory (EPROM), flash memory, and the like.
[0062] The memory (204) may include, for example, a hard disk drive and/or a removable storage drive, representing a floppy disk drive, a magnetic tape drive, a compact disk drive, a program cartridge and cartridge interface (such as that found in video game devices), a removable memory chip (such as EPROM or PROM), and the like, which is read by and written to by a removable storage unit. As will be appreciated, the removable storage unit includes a computer-usable storage medium having stored therein computer software and/or data. The removable storage drive reads from and/or writes to the removable storage unit in a well-known manner. The removable storage unit, also called a program storage device or a computer program product, represents a floppy disk, magnetic tape, compact disk, etc. The computer programs (also called computer control logic) are stored in the main memory (204). Such computer programs, when executed, enable the system (108) to perform the functions of the present disclosure as discussed herein. In particular, the computer programs, when executed, enable the one or more processors (202) to perform the functions of the present disclosure. Accordingly, such computer programs represent controllers of the system (108).
[0063] Referring to FIG. 2, the system (108) may include an interface(s) (206).
The interface(s) (206) may include a variety of interfaces, for example, interfaces for
data input and output devices, referred to as I/O devices, storage devices, and the like.
The interface(s) (206) may facilitate communication to/from the system (108). The
interface(s) (206) may also provide a communication pathway for one or more components of the system (108). Examples of such components include but are not
limited to, processing unit/engine(s) (208) and database (210).
[0064] In an embodiment, the processing unit/engine(s) (208) may be
implemented as a combination of hardware and programming (for example,
programmable instructions) to implement one or more functionalities of the processing
engine(s) (208). In the examples described herein, such combinations of hardware and
programming may be implemented in several different ways. For example, the
programming for the processing engine(s) (208) may be processor-executable
instructions stored on a non-transitory machine-readable storage medium, and the
hardware for the processing engine(s) (208) may include a processing resource (for
example, one or more processors), to execute such instructions. In the present
examples, the machine-readable storage medium may store instructions that, when
executed by the processing resource, implement the processing engine(s) (208). In such
examples, system (108) may include the machine-readable storage medium storing the
instructions and the processing resource to execute the instructions, or the machine-
readable storage medium may be separate but accessible to system (108) and the
processing resource. In other examples, the processing engine(s) (208) may be
implemented by electronic circuitry.
[0065] In an embodiment, the database (210) includes data that may be either
stored or generated because of functionalities implemented by any of the components
of the processor (202) or the processing engines (208). In an embodiment, the database (210) may be separate from the system (108). In an embodiment, the database (210) may include, but not be limited to, a relational database, a distributed database, a cloud-based database, or the like. The database may include a distributed file system (222) and a cache layer (226) for storing the data.
[0066] In an exemplary embodiment, the processing engine (208) may include one or more engines selected from any of a computing engine (212), a caching engine (214), an AI/ML engine (216), a workflow engine (220), a load balancing engine (224),
and other engines (218) having functions that may include, but are not limited to, testing, storage, and peripheral functions, such as wireless communication unit for remote operation, audio unit for alerts and the like.
[0067] In an embodiment, the computing engine (212) may be configured to receive a request for monitoring performance data associated with one or more parameters of the network. In an embodiment, the request received from the operator of the system (108) may be received by a load balancer unit (224), which then forwards the request to the workflow engine (220). In an embodiment, the request may include a time-interval parameter for requesting performance data between specific time intervals. The request may also include the one or more parameters for which the performance data is to be monitored.
[0068] In an embodiment, the workflow engine (220) retrieves the performance data associated with the one or more parameters from a cache layer (226) based on the one or more parameters. One or more values associated with the one or more parameters are determined upon determining that the one or more values were not computed previously. The computing engine (212) communicates with an artificial intelligence/machine learning (AI/ML) engine (216) to receive data from the AI/ML engine based on the one or more parameters. The computing engine (212) computes the one or more values based on the received data. In an embodiment, the computing engine provides the one or more parameters and time-interval information to the AI/ML engine (216). The AI/ML engine is a pretrained machine-learning (ML) model trained on historical performance data and uses a regression algorithm. The AI/ML engine executes the algorithm to determine or predict the data corresponding to the one or more parameters and the time interval. On receiving the data from the AI/ML engine, the computing engine computes the one or more values using one or more statistical methods such as average, median, or deviation.
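The interaction described above can be sketched as follows. The linear trend returned by the stand-in prediction function is a placeholder assumption, since the actual pretrained regression model and its training data are not specified here; only the use of simple statistical reductions follows the paragraph above.

```python
import statistics

def predict_performance_data(parameter, start, end):
    """Stand-in for the AI/ML engine (216): a pretrained regression model
    would predict one data point per interval; a linear trend is assumed
    here purely for illustration."""
    return [50.0 + 0.5 * t for t in range(start, end)]

def compute_values(parameter, start, end):
    """Illustrative computing engine (212): reduce the predicted data to
    summary values using simple statistical methods."""
    data = predict_performance_data(parameter, start, end)
    return {
        "average": statistics.mean(data),
        "median": statistics.median(data),
        "deviation": statistics.stdev(data),
    }

values = compute_values("CRR", 0, 24)  # e.g. 24 hourly intervals
```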
[0069] Alternatively, the computing engine (212) fetches the one or more
values from a distributed file system upon determining that the one or more values were
computed previously and are stored in the distributed file system. The workflow engine
(220) stores the determined one or more values in the cache layer (226) so that the
stored values can be retrieved for one or more subsequent requests including the one
or more parameters. The computing engine (212) generates updated performance data based on the one or more values and the retrieved performance data and transmits the updated performance data to the user interface for rendering to the user. In an embodiment, the performance data associated with CRRS includes the number of voluntarily terminated calls, whether a call was terminated by the network due to poor connectivity, the number of calls terminated due to network connectivity, and the number of calls terminated due to improper operation of components of the network. The updated performance data may include, for example, the updated number of voluntarily terminated calls, the updated number of calls terminated due to network connectivity, and the updated number of calls terminated due to improper operation of components of the network after a specific time interval as included in the request.
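The compute-or-fetch decision and subsequent caching described in the preceding paragraphs can be reduced to a short sketch. The dictionaries standing in for the cache layer and the distributed file system, and the key construction from the request parameters, are illustrative assumptions.

```python
def get_values(params, cache, dfs, compute_fn):
    """Serve from the cache layer when possible; otherwise fetch values
    computed previously from the distributed file system; otherwise compute
    them (via the AI/ML engine) and store the result for later requests."""
    key = tuple(sorted(params.items()))
    if key in cache:               # subsequent request with same parameters
        return cache[key]
    if key in dfs:                 # computed previously: fetch
        values = dfs[key]
    else:                          # not computed previously: compute
        values = compute_fn(params)
    cache[key] = values            # store for subsequent requests
    return values

cache, dfs, calls = {}, {}, []

def compute(params):
    calls.append(params)           # track how often computation runs
    return {"terminated_calls": 42}

request = {"parameter": "CRR", "interval": "hourly"}
a = get_values(request, cache, dfs, compute)   # computed once
b = get_values(request, cache, dfs, compute)   # served from the cache
```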
[0070] In an embodiment, the caching engine (214) may store one or more
computed values in the cache layer (370) such that the one or more computed values
are retrieved instead of being recomputed for subsequent requests bearing substantially
the same parameters. In an embodiment, the caching engine (214) may associate one
or more computed values with the set of parameters provided in the request. In an embodiment, the caching engine (214) may enable the memoization of one or more computed values. By memoizing the one or more computed values and reducing computational burdens, the system (108) may allow for live monitoring of the performance data.
[0071] In an embodiment, the AI/ML engine (216) may be configured to
determine the one or more computed values based on the one or more parameters and time interval value included in the request. In an embodiment, the AI/ML engine (216)
may include a pretrained machine-learning (ML) model, a symbolic expert system, and the like, or any combination thereof. In embodiments where the AI/ML engine (216) is a pretrained ML model, the AI/ML engine (216) may be trained based on historical operational data to predict the one or more computed values. In such embodiments, the AI/ML engine (216) may be continuously retrained based on updated performance data recorded as the network (106) operates. In an embodiment, the AI/ML engine (216) may generate the one or more computed values when retrieving the performance data for the given set of parameters is expensive or impractical.
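As a concrete and deliberately simplified sketch of regression-based prediction, the ordinary-least-squares fit below stands in for the pretrained model; the historical series is hypothetical, and a production engine would use a full ML framework rather than this hand-rolled fit.

```python
def fit_linear_regression(xs, ys):
    """Fit y = a*x + b by ordinary least squares; a minimal stand-in for
    the regression algorithm executed by the AI/ML engine (216)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Hypothetical historical performance data: interval index -> CRR count.
history_x = [0, 1, 2, 3, 4, 5]
history_y = [10.0, 12.0, 14.0, 16.0, 18.0, 20.0]

slope, intercept = fit_linear_regression(history_x, history_y)
predicted = slope * 6 + intercept   # predict the next (missing) interval
```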
[0072] In an embodiment, the one or more computed values may be transmitted
to the requesting operator. In an embodiment, visualizations of the performance data
may be displayed on a user interface (310). In an embodiment, the one or more computed values may be transmitted via telecommunication channels, including, but not limited to, via e-mails, notifications, or publications in websites, dedicated dashboards, and the like.
[0073] In an embodiment, the performance data associated with CRRS includes the number of voluntarily terminated calls, whether a call was terminated by the network due to poor connectivity, the number of calls terminated due to network connectivity, and the number of calls terminated due to improper operation of components of the network. Further, the one or more values are indicative of the performance data. For example, the one or more values for the CRRS include the values for the number of voluntarily terminated calls, whether a call was terminated by the network due to poor connectivity, the number of calls terminated due to network connectivity, and the number of calls terminated due to improper operation of components of the network.
[0074] FIG. 3 illustrates a flow diagram (300) for monitoring performance data
of a network in real-time, according to an embodiment of the present invention. A user
(302) selects one or more parameters such as call release reasons (CRRS) on which the user wants to perform real-time monitoring on a user interface (310).
[0075] At step 304, the live monitoring request is sent from the user (302) to the user interface (310). At step 306, the UI (310) sends the request to a load balancer (320). At step 308, the load balancer (320) sends the request to an available instance of a workflow engine (328). The workflow engine (328) receives the request and retrieves the performance data from a cache layer (370) at step 312. At step 314, the workflow engine (328) sends a request to a computing engine (330) to determine one or more values associated with the one or more parameters. The computing engine (330) determines that the one or more values were not computed previously and communicates with an artificial intelligence/machine learning (AI/ML) engine (332) to determine the one or more values based on the one or more parameters. At step 316, the computing engine fetches the one or more values from a distributed file system (334) upon determining that the one or more values were computed previously and are stored in the distributed file system (334). The computing engine generates a query and sends the query to the distributed file system (334). If a result indicating the presence of the data in the distributed file system (334) is obtained based on the query, then the computing engine determines that the one or more values were computed previously. The result may be true or false, where true indicates that the one or more values were computed previously and false indicates that they were not. At step 318, the computing engine (330) sends a request for generating the updated performance data. The AI/ML engine generates the updated performance data and sends it to the workflow engine. At step 322, the computing engine (330) compiles the final output and sends the data with an acknowledgment to the workflow engine (328), and at steps 324 and 326, the workflow engine forwards the data to the UI (310) for the user (302) to visualize.
[0076] The present disclosure involves the pre-computation of data at the
subscriber level. It improves efficiency and reduces computation time during real-time
monitoring. By precomputing the subscribed data, the system can quickly retrieve and
provide the required information without performing complex calculations repeatedly.
[0077] When the request reaches the computing engine, two categories emerge for the CRR calculation. The first is whether to use the cache layer (370) for the current interval output or to generate the output from the AI/ML engine. If the data is not present in the database for the current interval, the prediction is determined using the AI/ML engine. Using the present system, monitoring can be continued even in the case of data loss due to hardware issues at counter collectors.
[0078] FIG. 4 illustrates an exemplary flow diagram of a method (400) for
monitoring the performance data in real-time, according to an embodiment of the
present invention. The performance data is associated with one or more parameters
related to the network.
[0079] At step 402, one or more parameters are selected by a user using a user
interface (UI) for real-time monitoring of performance data associated with the one or more parameters. In an example, the one or more parameters may include call release reasons (CRRS), call set up success rate (CSSR), and answer-seizure ratio (ASR) etc.
[0080] At step 404, a request is received from the user by a workflow engine
(328) for monitoring the performance data associated with the one or more parameters. In an embodiment, the request received from the user of system (108) is transmitted to a load balancer unit (320) which forwards the request to the workflow engine according to the load on the workflow engine. In an embodiment, the one or more parameters
may include a time-interval parameter for determining the performance data between specific time intervals. The time interval may be user defined. For example, it may be hourly, daily, monthly, or between specific dates or times.
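The load-based forwarding mentioned above could be realized, for example, by least-loaded dispatch. The load-tracking structure and instance names below are assumed details for illustration; the disclosure does not specify the balancing policy.

```python
def forward_request(request, workflow_engines):
    """Forward the request to the workflow engine instance carrying the
    fewest in-flight requests (illustrative least-loaded policy)."""
    engine = min(workflow_engines, key=lambda e: e["load"])
    engine["load"] += 1
    engine["queue"].append(request)
    return engine["name"]

engines = [
    {"name": "workflow-1", "load": 3, "queue": []},
    {"name": "workflow-2", "load": 1, "queue": []},
]
chosen = forward_request({"parameter": "CRR", "interval": "hourly"}, engines)
```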
[0081] At step 406, the performance data is retrieved from the cache layer (370). At step 408, one or more values associated with the one or more parameters are determined, by the computing engine (330), based upon determining that the one or more values were not computed previously and communicating with an artificial
intelligence/machine learning engine (332) to determine the one or more values based
on the one or more parameters. The AI/ML engine predicts data based on the one or
more parameters and time interval in the request. The computing engine calculates the
one or more values based on the data. In another embodiment, the one or more values associated with the one or more parameters are determined by fetching the one or more values from a distributed file system (334) upon determining that the one or more values were computed previously and are stored in the distributed file system (334).
[0082] In an embodiment, the performance data may include, but not be limited
to, unique identifier associated with UEs (104) connected to the network, attributes
associated with the base stations, call duration, call type, and the like. In an example, the user may request a count of unique values of a CRR from the subscriber session logs. In such examples, the distribution of CRR values may be a key performance indicator (KPI), in that the network (106) may be inferred to be performing optimally when the number of voluntary call terminations is greater than the number of involuntary call terminations due to a poor network, malfunctioning of operational units, etc.
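The count-of-unique-CRR-values example above can be sketched directly; the session-log entries and reason labels below are hypothetical.

```python
from collections import Counter

# Hypothetical subscriber session log: one call release reason per call.
session_log = [
    "voluntary", "voluntary", "poor_connectivity",
    "voluntary", "equipment_fault", "voluntary",
]

# Count of unique CRR values, as requested in the example above.
crr_distribution = Counter(session_log)

# KPI reading: the network is inferred to perform optimally when voluntary
# terminations outnumber involuntary ones.
voluntary = crr_distribution["voluntary"]
involuntary = sum(n for reason, n in crr_distribution.items()
                  if reason != "voluntary")
network_optimal = voluntary > involuntary
```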
[0083] In an embodiment, the AI/ML engine (216) may be configured to
determine the one or more computed values. In an embodiment, the AI/ML engine (216) may include a pretrained machine-learning (ML) model, a symbolic expert system, and the like, or any combination thereof. In embodiments where the AI/ML
engine (216) is a pretrained ML model, the model is trained on historical operational data to predict the one or more computed values. In such embodiments, the AI/ML engine (216) may be continuously retrained based on new performance data recorded as the network (106) operates. In an embodiment, the AI/ML engine (216) may generate the one or more computed values when retrieving the performance data for the given parameters is expensive or impractical.
[0084] At step 410, the one or more computed values are stored by a caching layer (370) to be retrieved for one or more subsequent requests including the same
parameters. In an embodiment, a caching engine (214) may store one or more
computed values in the cache layer (370) such that the one or more computed values
are retrieved instead of being recomputed for subsequent requests including the same
parameters. In an embodiment, the caching engine (214) may enable the memoization of one or more computed values. By memoizing one or more computed values and reducing computational burdens, the system (108) may allow for live monitoring of the operational data.
[0085] At step 412, the computing engine (330) generates updated performance
data based on the one or more values and the retrieved performance data. At step 414, the one or more values are transmitted by the workflow engine to the user interface for rendering to the user. In an embodiment, the one or more computed values may be transmitted to the requesting operator. In an embodiment, visualizations of the one or more computed values may be displayed by the monitoring unit (110) on a user interface (310). For example, the visualizations may be in the form of a bar chart, graph, table, etc. In another example, a visualization may also highlight certain performance data which may lead to an anomaly in the future.
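A bar-chart visualization of the kind mentioned above can be sketched textually; a deployed UI would use a charting component, so the rendering function and data below are purely illustrative.

```python
def render_bar_chart(values, width=40):
    """Render updated performance data as a textual bar chart, scaling the
    longest bar to `width` characters."""
    peak = max(values.values())
    lines = []
    for name, value in values.items():
        bar = "#" * round(width * value / peak)
        lines.append(f"{name:<26}{bar} {value}")
    return "\n".join(lines)

chart = render_bar_chart({
    "voluntary_terminations": 120,
    "poor_connectivity": 30,
    "equipment_fault": 10,
})
```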
[0086] In an embodiment, the one or more computed values may be transmitted
via telecommunication channels, including, but not limited to, via e-mails,
notifications, or publications in websites, dedicated dashboards, and the like. In an
embodiment, preventative maintenance may be performed on the network (106) based
on the received performance data.
[0087] In another exemplary embodiment, a user equipment (UE) configured for monitoring performance data of a network in real-time is described. The user equipment includes a processor and a computer-readable storage medium storing programming for execution by the processor. The programming includes instructions for selecting one or more call release reasons (CRRS) for performing real-time monitoring on a user interface (UI) including a monitoring unit using the method as

disclosed above.
[0088] FIG. 5 illustrates an exemplary computer system (500) in which or with
which embodiments of the present invention may be implemented. As shown in FIG. 5, the computer system (500) may include an external storage device (510), a bus (520), a main memory (530), a read-only memory (540), a mass storage device (550), a communication port (560), and a processor (570). A person skilled in the art will appreciate that the computer system (500) may include more than one processor (570) and communication port (560). The processor (570) may include various modules associated with embodiments of the present disclosure.
[0089] In an embodiment, the communication port (560) may be any of an RS-232 port for use with a modem-based dialup connection, a 10/100 Ethernet port, a Gigabit or 10 Gigabit port using copper or fiber, a serial port, a parallel port, or other existing or future ports. The communication port (560) may be chosen depending on a network, such as a Local Area Network (LAN), Wide Area Network (WAN), or any network to which the computer system (500) connects.
[0090] In an embodiment, the memory (530) may be Random Access Memory
(RAM), or any other dynamic storage device commonly known in the art. Read-only
memory (540) may be any static storage device(s), e.g., but not limited to, Programmable Read-Only Memory (PROM) chips for storing static information, e.g., start-up or Basic Input/Output System (BIOS) instructions for the processor (570).
[0091] In an embodiment, the mass storage (550) may be any current or future
mass storage solution, which may be used to store information and/or instructions.
Exemplary mass storage solutions include, but are not limited to, Parallel Advanced
Technology Attachment (PATA) or Serial Advanced Technology Attachment (SATA)
hard disk drives or solid-state drives (internal or external, e.g., having Universal Serial
Bus (USB) and/or Firewire interfaces), one or more optical discs, Redundant Array of
Independent Disks (RAID) storage, e.g., an array of disks (e.g., SATA arrays).
[0092] In an embodiment, the bus (520) communicatively couples the processor(s) (570) with the other memory, storage, and communication blocks. The bus (520) may be, e.g., a Peripheral Component Interconnect (PCI)/PCI Extended (PCI-X) bus, Small Computer System Interface (SCSI), Universal Serial Bus (USB), or the like, for connecting expansion cards, drives, and other subsystems, as well as other buses, such as a front side bus (FSB), which connects the processor (570) to the computer system (500).
[0093] Optionally, operator and administrative interfaces, e.g., a display,
keyboard, joystick, and a cursor control device, may also be coupled to the bus (520)
to support direct operator interaction with the computer system (500). Other operator
and administrative interfaces may be provided through network connections connected
through the communication port (560). The components described above are meant
only to exemplify various possibilities. In no way should the aforementioned
exemplary computer system (500) limit the scope of the present disclosure.
[0094] The present disclosure provides a technical advancement related to the determination of performance data of a network. This advancement addresses the limitations of existing solutions involving repetitive calculation for determining the performance data. The present disclosure enables performing the calculations once and then using the results multiple times for calculating the performance data for multiple users. Therefore, less execution time is needed, as the computation is done only once for similar dashboards executed by multiple users. The disclosure uses an artificial intelligence and machine learning model to determine the performance data, which offers optimal utilization of resources and reduces computational burdens for monitoring the network performance data.
[0095] While considerable emphasis has been placed herein on the preferred

embodiments, it will be appreciated that many embodiments can be made and that
many changes can be made in the preferred embodiments without departing from the
principles of the disclosure. These and other changes in the preferred embodiments of
the disclosure will be apparent to those skilled in the art from the disclosure herein,
whereby it is to be distinctly understood that the foregoing descriptive matter is to be implemented merely as illustrative of the disclosure and not as a limitation.
ADVANTAGES OF THE PRESENT DISCLOSURE
[0096] The present disclosure provides a system and a method for monitoring
the performance data in real-time.
[0097] The present disclosure provides a system and a method that optimally
utilizes resources and reduces computational burdens for monitoring the network's performance.
[0098] The present disclosure provides a system and a method that utilizes
memory to prevent duplication of computation.
[0099] The present disclosure provides a system and a method for identifying
causes for degradation in performance and performing preventative maintenance of the network in real-time.
[00100] The present disclosure provides a system and a method that identifies
peak hours for appropriate resource redistribution and resolves potential bottlenecks in
the networks by analysing the network performance data in real-time.
[00101] The present disclosure provides a system and a method that fine-tunes
network configurations based on live network performance data.
[00102] The present disclosure provides a system and a method that uses an
Artificial Intelligence engine to determine one or more computed values that indicate the performance of the network.

WE CLAIM:
1. A method (400) for monitoring performance data associated with one or more parameters of a network, the method comprising:
selecting (402), by a user (302), the one or more parameters using a user interface
(UI) (310);
receiving (404), by a workflow engine (328), a request including the one or more
parameters;
retrieving (406), by the workflow engine (328), the performance data associated with the one or more parameters from a cache layer (370);
determining (408) one or more values associated with the one or more parameters by at least one of:
determining, by a computing engine (330), that the one or more values
were not computed previously and communicating with an artificial intelligence/machine learning (AI/ML) engine (332) to determine the one or
more values based on the one or more parameters; and
fetching, by the computing engine (330), the one or more values from a
distributed file system (334) upon determining that the one or more values were
computed previously and are stored in the distributed file system (334);
storing (410), by the workflow engine (328), the determined one or more values
in the cache layer (370) for retrieving based on one or more subsequent requests including the one or more parameters;
generating (412), by the workflow engine (328), updated performance data based on the one or more values received from the workflow engine (328) and the retrieved performance data; and
transmitting (414), by the workflow engine (328), the updated performance data
to the user interface (310) for rendering to the user (302).
2. The method (400) of claim 1, further comprising:
displaying one or more visualizations associated with the one or more values to the user (302) via the user interface (310).
3. The method (400) of claim 1, wherein the request from the user (302) is received by
the workflow engine (328) via a load balancer (320) unit.
4. The method (400) of claim 1, wherein the one or more parameters include call release reasons (CRRS).
5. The method (400) of claim 1, wherein monitoring of the performance data is performed in real-time.
6. The method (400) of claim 1, wherein the request includes time interval information which indicates an interval of time during which the performance data is to be monitored.
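Claims 1, 3, 4, and 6 together imply a request carrying the selected parameters and time interval information through the load balancer. A possible shape of such a request is sketched below; every field name here is a hypothetical assumption, as the specification excerpt does not define a wire format.

```python
# Hypothetical request shape; field names are illustrative assumptions only.
import json

request = {
    "parameters": ["CRRS"],                  # call release reasons (claim 4)
    "interval": {                            # time interval information (claim 6)
        "start": "2023-07-29T00:00:00Z",
        "end": "2023-07-29T01:00:00Z",
    },
}

# Serialized form as it might be forwarded via the load balancer (claim 3)
# to the workflow engine (328).
payload = json.dumps(request)
```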
7. A system (108) for monitoring performance data associated with one or more parameters of a network, the system (108) comprising:
a memory (204);
one or more processor(s) (202) configured to fetch and execute computer-readable instructions stored in the memory (204) to:
select, by a user (302), the one or more parameters for monitoring the
performance data using a user interface (UI) (310);
receive, by a workflow engine (328), a request including the one or more parameters for monitoring the performance data;
retrieve, by the workflow engine (328), the performance data associated
with the one or more parameters from a cache layer (370);
determine one or more values associated with the one or more parameters by at least one of:
determining, by a computing engine (330), that the one or more
values were not computed previously, and communicating with an artificial intelligence/machine learning (AI/ML) engine (332) to determine
the one or more values based on the one or more parameters; and
fetching, by the computing engine (330), the one or more values
from a distributed file system (334) upon determining that the one or
more values were computed previously and are stored in the distributed
file system (334);
store, by the workflow engine (328), the determined one or more values in the cache layer (370) for retrieving based on one or more subsequent requests including the one or more parameters;
generate, by the workflow engine (328), updated performance data based on the one or more values received from the workflow engine (328) and
retrieved performance data; and
transmit, by the workflow engine (328), the updated performance data to the user interface (310) for rendering to the user (302).
8. The system (108) of claim 7, wherein one or more visualizations associated with the
one or more values are displayed to the user (302) via the user interface (310).
9. The system (108) of claim 7, wherein the request from the user (302) is received by
the workflow engine via a load balancer (320) unit.
10. The system (108) of claim 7, wherein the one or more parameters include call release reasons (CRRS).
11. The system (108) of claim 7, wherein the performance data is monitored in real-time.
12. The system (108) of claim 7, wherein the request includes time interval information which indicates an interval of time during which the performance data is to be monitored.
13. A User Equipment (UE) (104) communicatively coupled to a system (108) for
monitoring performance data associated with one or more parameters of a network,
wherein the UE (104) is configured for:
transmitting the one or more parameters to the system (108), wherein the system
(108) is configured for monitoring the performance data as claimed in claim 1.

Documents

Application Documents

# Name Date
1 202321051145-STATEMENT OF UNDERTAKING (FORM 3) [29-07-2023(online)].pdf 2023-07-29
2 202321051145-PROVISIONAL SPECIFICATION [29-07-2023(online)].pdf 2023-07-29
3 202321051145-FORM 1 [29-07-2023(online)].pdf 2023-07-29
4 202321051145-DRAWINGS [29-07-2023(online)].pdf 2023-07-29
5 202321051145-DECLARATION OF INVENTORSHIP (FORM 5) [29-07-2023(online)].pdf 2023-07-29
6 202321051145-FORM-26 [25-10-2023(online)].pdf 2023-10-25
7 202321051145-FORM-26 [30-05-2024(online)].pdf 2024-05-30
8 202321051145-FORM 13 [30-05-2024(online)].pdf 2024-05-30
9 202321051145-AMENDED DOCUMENTS [30-05-2024(online)].pdf 2024-05-30
10 202321051145-Request Letter-Correspondence [04-06-2024(online)].pdf 2024-06-04
11 202321051145-Power of Attorney [04-06-2024(online)].pdf 2024-06-04
12 202321051145-Covering Letter [04-06-2024(online)].pdf 2024-06-04
13 202321051145-FORM-5 [26-07-2024(online)].pdf 2024-07-26
14 202321051145-DRAWING [26-07-2024(online)].pdf 2024-07-26
15 202321051145-CORRESPONDENCE-OTHERS [26-07-2024(online)].pdf 2024-07-26
16 202321051145-COMPLETE SPECIFICATION [26-07-2024(online)].pdf 2024-07-26
17 202321051145-ORIGINAL UR 6(1A) FORM 26-160924.pdf 2024-09-23
18 202321051145-FORM 18 [04-10-2024(online)].pdf 2024-10-04
19 Abstract-1.jpg 2024-10-08
20 202321051145-Response to office action [15-10-2024(online)].pdf 2024-10-15
21 202321051145-FORM 3 [12-11-2024(online)].pdf 2024-11-12