
System And Method For Analyzing Complex Performance Metrics

Abstract: The present disclosure provides a system (108) and a method (600) for analyzing complex performance metrics using machine learning (ML). The system includes an AI/ML model for generating nested templates from user commands. The system (108) enables complex performance measurements to be broken down into more detailed components, where metrics are organised in a parent-child relationship, enabling a deeper analysis of specific aspects or dimensions of performance. Further, the system (108) aids the user in generating reports from various analyses using the nested templates at a desired frequency. The system (108) provides flexibility while choosing performance metrics, analysing them at regular intervals, and computing significant metrics for efficient monitoring of data. The system (108) provides a user-friendly user interface (UI) that eases the process of generating reports. Fig. 4


Patent Information

Application #
Filing Date
24 July 2023
Publication Number
05/2025
Publication Type
INA
Invention Field
COMPUTER SCIENCE
Status
Email
Parent Application

Applicants

JIO PLATFORMS LIMITED
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India.

Inventors

1. BHATNAGAR, Aayush
Tower-7, 15B, Beverly Park, Sector-14 Koper Khairane, Navi Mumbai - 400701, Maharashtra, India.
2. MURARKA, Ankit
W-16, F-1603, Lodha Amara, Kolshet Road, Thane West - 400607, Maharashtra, India.
3. SAXENA, Gaurav
B1603, Platina Cooperative Housing Society, Casa Bella Gold, Kalyan Shilphata Road, Near Xperia Mall Palava City, Dombivli, Kalyan, Thane - 421204, Maharashtra, India.
4. SHOBHARAM, Meenakshi
2B-62, Narmada, Kalpataru, Riverside, Takka, Panvel, Raigargh - 410206, Maharashtra, India.
5. BHANWRIA, Mohit
39, Behind Honda Showroom, Jobner Road, Phulera, Jaipur - 303338, Rajasthan, India
6. GAYKI, Vinay
259, Bajag Road, Gadasarai, District -Dindori - 481882, Madhya Pradesh, India.
7. KUMAR, Durgesh
Mohalla Ramanpur, Near Prabhat Junior High School, Hathras, Uttar Pradesh -204101, India.
8. BHUSHAN, Shashank
Fairfield 1604, Bharat Ecovistas, Shilphata, NH48, Thane - 421204, Maharashtra, India.
9. KHADE, Aniket Anil
X-29/9, Godrej Creek Side Colony, Phirojshanagar, Vikhroli East - 400078, Mumbai, Maharashtra, India.
10. KOLARIYA, Jugal Kishore
C 302, Mediterranea CHS Ltd, Casa Rio, Palava, Dombivli - 421204, Maharashtra, India.
11. VERMA, Rahul
A-154, Shradha Puri Phase-2, Kanker Khera, Meerut - 250001, Uttar Pradesh, India.
12. KUMAR, Gaurav
1617, Gali No. 1A, Lajjapuri, Ramleela Ground, Hapur - 245101, Uttar Pradesh, India.
13. MEENA, Sunil
D-29/1, Chitresh Nagar, Borkhera District-Kota, Rajasthan - 324001, India.
14. SAHU, Kishan
Ajay Villa, Gali No. 2 Ambedkar Colony, Bikaner, Rajasthan - 334003, India.
15. DE, Supriya
G2202, Sheth Avalon, Near Jupiter Hospital Majiwada, Thane West - 400601, Maharashtra, India.
16. KUMAR, Debashish
Bhairaav Goldcrest Residency, E-1304, Sector 11, Ghansoli, Navi Mumbai - 400701, Maharashtra, India.
17. TILALA, Mehul
64/11, Manekshaw Marg, Manekshaw Enclave, Delhi Cantonment, New Delhi - 110010, India.
18. GANVEER, Chandra Kumar
Village - Gotulmunda, Post - Narratola, Dist. - Balod - 491228, Chhattisgarh, India.
19. CHAUDHARY, Sanjana
Jawaharlal Road, Muzaffarpur - 842001, Bihar, India.
20. KUSHWAHA, Avinash
SA 18/127, Mauza Hall, Varanasi - 221007, Uttar Pradesh, India.
21. GARG, Harshita
37A, Ananta Lifestyle, Airport Road, Zirakpur, Mohali, Punjab - 140603, India.
22. KUMAR, Yogesh
Village-Gatol, Post-Dabla, Tahsil-Ghumarwin, Distict-Bilaspur, Himachal Pradesh - 174021, India.
23. TALGOTE, Kunal
29, Nityanand Nagar, Nr. Tukaram Hosp., Gaurakshan Road, Akola - 444004, Maharashtra, India.
24. GURBANI, Gourav
I-1601, Casa Adriana, Downtown, Palava Phase 2, Dombivli, Maharashtra - 421204, India.
25. VISHWAKARMA, Dharmendra Kumar
Ramnagar, Sarai Kansarai, Bhadohi - 221404, Uttar Pradesh, India.
26. SONI, Sajal
K. P. Nayak Market Mauranipur, Jhansi, Uttar Pradesh - 284204, India.

Specification

FORM 2
THE PATENTS ACT, 1970
THE PATENTS RULES, 2003
COMPLETE SPECIFICATION
APPLICANT
JIO PLATFORMS LIMITED
of Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India; Nationality: India
The following specification particularly describes
the invention and the manner in which
it is to be performed

RESERVATION OF RIGHTS
[0001] A portion of the disclosure of this patent document contains material,
which is subject to intellectual property rights such as, but not limited to, copyright, design, trademark, integrated circuit (IC) layout design, and/or trade dress protection, belonging to Jio Platforms Limited (JPL) or its affiliates (hereinafter referred to as the owner). The owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure, as it appears in the Patent and Trademark Office patent files or records, but otherwise reserves all rights whatsoever. All rights to such intellectual property are fully reserved by the owner.
FIELD OF INVENTION
[0002] The present disclosure generally relates to systems and methods for
creating nested dashboards using nested templates for analysing complex performance measurements. More particularly, the present disclosure relates to a system and a method for integrated performance management using machine learning (ML) and the nested templates.
DEFINITIONS
[0003] "Distributed File System" refers to a network file system that spans
multiple servers or nodes, providing scalable and fault-tolerant storage for error code data and other related information.
[0004] "Distributed Data Lake” refers to a scalable storage solution
designed to store and manage large volumes of structured, semi-structured, and unstructured data, including processed and analyzed error code data. Structured data is data that has a standardized format for efficient access by software and humans alike. It is typically tabular, with rows and columns that clearly define data attributes. Unstructured data means datasets (typically large collections of files) that aren't stored in a structured database format. Unstructured data has an internal structure, but it is not predefined through data models. It might be human

generated, or machine generated in a textual or a non-textual format. Semi-structured data is data that does not conform to a data model but has some structure. It lacks a fixed or rigid schema.
[0005] “Load Balancer” is a component that distributes incoming user
requests across multiple instances of the system to ensure optimal performance and scalability.
[0006] “Template” is a pre-created document that already has some
formatting. Rather than starting from scratch to format a document, you can use the formatting of a template to save yourself a lot of time.
[0007] A “nested template” is one template embedded within another.
The advantage of a nested template is that resources can be deployed across resource groups.
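The nested-template idea above can be sketched as a recursive data structure: a parent template holds its own fields and a list of child templates, and rendering recurses into the children. This is a minimal illustrative sketch; the field names and structure are assumptions, not the patent's actual format.

```python
# Hypothetical sketch of a nested template: one template embedded
# within another, rendered by recursing into child templates.

def render_template(template: dict, data: dict) -> str:
    """Render a template, recursing into any nested child templates."""
    parts = [template["title"]]
    for field in template.get("fields", []):
        parts.append(f"{field}: {data.get(field, 'N/A')}")
    for child in template.get("children", []):
        parts.append(render_template(child, data))  # recurse into nested template
    return "\n".join(parts)

report_template = {
    "title": "Network Performance Report",
    "fields": ["throughput", "latency"],
    "children": [
        {"title": "Radio KPIs", "fields": ["accessibility", "retainability"]},
    ],
}

print(render_template(report_template, {"throughput": "1.2 Gbps", "latency": "8 ms"}))
```

A missing field renders as "N/A" rather than failing, so a partially populated template still produces a readable report.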
[0008] “Computation Layer” is responsible for performing data filtering and
geography-based network functions failure data computation. It retrieves raw error code data, applies filtering and aggregation operations based on user requests, and computes relevant metrics and insights.
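The filter-then-aggregate behaviour defined for the computation layer can be sketched in a few lines: select raw error-code records by geography, then total failures per network function. The record fields and function names here are illustrative assumptions.

```python
# Minimal sketch of the computation-layer operations described above:
# filtering raw error-code data, then aggregating it into a metric.

records = [
    {"region": "west", "nf": "AMF", "failures": 3},
    {"region": "west", "nf": "SMF", "failures": 1},
    {"region": "east", "nf": "AMF", "failures": 5},
]

def failures_by_nf(records: list, region: str) -> dict:
    out = {}
    for r in records:                # filtering step: geography-based selection
        if r["region"] == region:
            out[r["nf"]] = out.get(r["nf"], 0) + r["failures"]  # aggregation step
    return out

print(failures_by_nf(records, "west"))  # → {'AMF': 3, 'SMF': 1}
```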
[0009] “Integrated performance manager” (IPM) works on a holistic
approach to corporate performance management based on the integration of multiple factors and long-term value creation into decision-making to drive strategic success.
[0010] A “nested dashboard” is a dashboard canvas that is embedded inside
another dashboard canvas. This embedded dashboard can contain multiple visualizations and widgets just like any other dashboard canvas. There can be additional levels (child dashboards) below the top-level (parent) dashboard, for a total of three levels of nesting. The main dashboard canvas can have a first-level nested dashboard, which can have a second-level nested dashboard, which can then have a third-level nested dashboard.
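The three-level nesting limit described above can be checked mechanically by measuring the depth of the dashboard tree. This is an illustrative sketch; the dictionary layout and the `MAX_NESTING` constant are assumptions based on the definition above.

```python
# Illustrative check of the nesting limit: a main dashboard canvas may
# contain up to three levels of nested (child) dashboards.
MAX_NESTING = 3  # assumed limit, per the definition above

def nesting_depth(dashboard: dict) -> int:
    """Depth of the deepest chain of nested dashboards below this one."""
    children = dashboard.get("nested", [])
    return 0 if not children else 1 + max(nesting_depth(c) for c in children)

def validate(dashboard: dict) -> bool:
    return nesting_depth(dashboard) <= MAX_NESTING

main = {"name": "main", "nested": [
    {"name": "level-1", "nested": [
        {"name": "level-2", "nested": [
            {"name": "level-3", "nested": []}]}]}]}

print(nesting_depth(main))  # → 3
print(validate(main))       # → True
```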
BACKGROUND OF THE INVENTION
[0011] The following description of the related art is intended to provide

background information pertaining to the field of the disclosure. This section may include certain aspects of the art that may be related to various features of the present disclosure. However, it should be appreciated that this section is used only to enhance the understanding of the reader with respect to the present disclosure, and not as admission of the prior art.
[0012] Analysing complex performance metrics may require multiple
dashboards for presenting comprehensive data to users. Various dashboards may provide specific metrics or key performance indicators (KPIs) where specific pieces of information may be continually monitored. However, conventional systems involve manual creation and periodic execution of templates, which may be inefficient and take a lot of processing time.
[0013] In conventional systems, for analysing the complex performance
metrics at a deeper level, a user may need to create different dashboards. Even at the KPI level, which is made up of counters, child KPIs, and attributes, the user may need to check each child KPI or counter for analysing the complex performance metrics, which may be time-consuming.
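The KPI structure mentioned above (a KPI made up of counters, child KPIs, and attributes) can be modelled as a tree and evaluated recursively. This is a hypothetical sketch: the aggregation is shown as a simple sum for illustration, and all names are assumptions.

```python
# A KPI tree: each node references raw counters and child KPIs; the KPI
# value is computed by recursing through the tree (sum used as a stand-in
# for whatever formula a real KPI would apply).

def evaluate_kpi(node: dict, counters: dict) -> float:
    total = sum(counters.get(c, 0.0) for c in node.get("counters", []))
    total += sum(evaluate_kpi(child, counters) for child in node.get("children", []))
    return total

counters = {"rrc_success": 98.0, "rrc_attempts": 100.0}
accessibility = {
    "name": "accessibility",
    "counters": ["rrc_success"],
    "children": [{"name": "setup", "counters": ["rrc_attempts"]}],
}

print(evaluate_kpi(accessibility, counters))  # → 198.0
```

With this representation, drilling into a child KPI or counter is just a walk one level down the tree, rather than building a separate dashboard.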
[0014] There is, therefore, a need in the art to provide a system and a method
that can mitigate the problems associated with the prior arts.
SUMMARY OF THE INVENTION
[0015] In an aspect, the present disclosure discloses a method for analyzing
complex performance metrics. The method includes obtaining a request for generating a report. The request comprises a template associated with a plurality of parameters. The method further includes fetching data pertaining to the plurality of parameters from a distributed data lake (DDL). In addition, the method includes generating the report by populating the template based on the fetched data and rendering the report on a graphical user interface (GUI).
[0016] In an embodiment, upon obtaining the request, the method
comprises determining whether the request is a valid request or an invalid request.
The request validation is performed based on a flag or a token included in the request. The flag or token is compared with a flag or a token prestored in the IPM to validate the request. Upon determining that the request is valid, the method includes proceeding with the report generation and, upon determination that the request is invalid, generating a failure notification.
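The validation step above (compare the request's flag/token with one prestored in the IPM, then either proceed or emit a failure notification) can be sketched as follows. The token value and helper names are illustrative assumptions, not part of the disclosure.

```python
# Sketch of token-based request validation: a valid token proceeds to
# report generation; an invalid token yields a failure notification.
PRESTORED_TOKENS = {"ipm-report-token"}  # assumed prestored in the IPM

def handle_request(request: dict) -> dict:
    if request.get("token") not in PRESTORED_TOKENS:
        return {"status": "failure", "reason": "invalid request token"}
    # valid request: proceed with report generation (stubbed here)
    return {"status": "ok", "report": f"report for template {request['template']}"}

print(handle_request({"token": "ipm-report-token", "template": "kpi-daily"}))
print(handle_request({"token": "wrong-token", "template": "kpi-daily"}))
```

A production system would compare secrets with a constant-time comparison rather than set membership; the set is used here only to keep the flow visible.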
[0017] In an embodiment, the method further comprises storing the report
in a distributed file system (DFS).
[0018] In an embodiment, generating the report comprises providing the
fetched data pertaining to the plurality of parameters as an input to a machine learning (ML) model. Further, the method includes obtaining insights related to the plurality of parameters from the ML model.
[0019] In an embodiment, fetching the data comprises determining whether
the data is fetched within a pre-defined time period. The fetching of the data comprises connecting the IPM to the distributed data lake over a network and sending a query to retrieve the required data from the distributed data lake to the IPM. In response to the query or request, the distributed data lake transmits the required data to the IPM. The pre-defined time period may be set by the user, for example as 1 s or 5 s, or as a time duration such as between 5 s and 20 s. If the pre-defined time period is greater than a retention period, the IPM forwards the request to the computation layer (CL) for further processing.
[0020] In an embodiment, when the data is not fetched within the pre-
defined time period, the method comprises transmitting the request for generating the report to a computation layer.
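The fetch-or-fallback flow of [0019]–[0020] can be sketched as: try the distributed data lake within the user-set time limit, and forward the request to the computation layer when the data does not arrive in time. All names here are illustrative; a real system would use asynchronous timeouts rather than a post-hoc elapsed-time check.

```python
import time

# Sketch: query the distributed data lake; if no data is returned within
# the pre-defined time period, forward the request to the computation layer.
def fetch_with_fallback(query, data_lake, computation_layer, timeout_s=5.0):
    start = time.monotonic()
    data = data_lake(query)
    if data is not None and time.monotonic() - start <= timeout_s:
        return ("data_lake", data)
    return ("computation_layer", computation_layer(query))

# Toy backends standing in for the DDL and the CL
fast_lake = lambda q: {"rows": [1, 2, 3]}
slow_lake = lambda q: None  # simulates a miss/timeout
cl = lambda q: {"rows": [1, 2, 3], "computed": True}

print(fetch_with_fallback("q", fast_lake, cl)[0])  # → data_lake
print(fetch_with_fallback("q", slow_lake, cl)[0])  # → computation_layer
```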
[0021] In an embodiment, the method further comprises computing a first
child dashboard and providing an output of the first child dashboard as an input to a parent dashboard of a nested dashboard. For example, when an initial dashboard displays a visualization, it acts as the parent dashboard; performing an action on the parent dashboard displays another dashboard, which is referred to as the child dashboard.
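The data flow in [0021] (the child dashboard is computed first and its output feeds the parent dashboard) can be sketched with two functions. The metric names and the 5% threshold are illustrative assumptions.

```python
# Sketch: a child dashboard computes a detailed metric; its output
# becomes an input to the parent dashboard's higher-level view.

def child_dashboard(raw_counters: dict) -> dict:
    # e.g. compute a failure rate from raw counters
    return {"failure_rate": raw_counters["failures"] / raw_counters["attempts"]}

def parent_dashboard(child_output: dict) -> dict:
    # the parent consumes the child's output to summarize overall health
    status = "healthy" if child_output["failure_rate"] < 0.05 else "degraded"
    return {"network_status": status, **child_output}

out = parent_dashboard(child_dashboard({"failures": 2, "attempts": 100}))
print(out["network_status"])  # → healthy
```

Both the child's detailed output and the parent's summary can then be presented to the user, matching the "both of these outputs" behaviour described later in the objects of the invention.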

[0022] In an aspect, the present disclosure discloses a system for analysing
complex performance metrics. The system comprises a processing engine
configured to obtain a request for generating a report. The request comprises a
template associated with a plurality of parameters. The processing engine is also
configured to fetch data pertaining to the plurality of parameters from a distributed
data lake (DDL). Further, the processing engine is configured to generate the report by populating the template based on the fetched data and render the report on a graphical user interface (GUI).
[0023] In an embodiment, upon obtaining the request, the processing engine
is configured to determine whether the request is a valid request or an invalid
request. Upon determining that the request is valid, the processing engine is configured to proceed with the report generation. Upon determination that the request is invalid, the processing engine is configured to generate a failure notification.
[0024] In an embodiment, the processing engine is further configured to
store the report in a distributed file system (DFS).
[0025] In an embodiment, to generate the report, the processing engine is
configured to provide the fetched data pertaining to the plurality of parameters as
an input to a machine learning (ML) model and obtain insights related to the
plurality of parameters.
[0026] In an embodiment, to fetch the data, the processing engine is
configured to determine whether the data is fetched within a pre-defined time period or not.
[0027] In an embodiment, when the data is not fetched within the pre-
defined time period, the processing engine is configured to transmit the request for
generating the report to a computation layer.
[0028] In an embodiment, the computation layer is configured to compute a
first child dashboard and provide an output of the first child dashboard as an input

to a parent dashboard.
[0029] The present disclosure discloses a computing device for analysing
complex performance metrics. The computing device comprises a processor to
obtain a request for generating a report. The request comprises a template associated
with a plurality of parameters. The processor is configured to fetch data pertaining
to the plurality of parameters from a distributed data lake (DDL). In addition, the processor is configured to generate the report by populating the template based on the fetched data and render the report on a graphical user interface (GUI).
[0030] The present disclosure discloses a computer program product
comprising a non-transitory computer-readable medium comprising instructions
that, when executed by one or more processors, cause the one or more processors
to obtain a request for generating a report. The request comprises a template
associated with a plurality of parameters. In addition, the instructions cause the one
or more processors to fetch data pertaining to the plurality of parameters from a
distributed data lake (DDL). Further, the instructions cause the one or more
processors to generate the report by populating the template based on the fetched data and render the report on a graphical user interface (GUI).
OBJECTS OF THE INVENTION
[0031] It is an object of the present disclosure to provide a system and a
method for creating nested templates that are used in demand execution, report scheduling, and live monitoring seamlessly.
[0032] It is an object of the present disclosure to provide a system and a
method where an artificial intelligence (AI)/machine learning (ML) model is used
for generating nested templates from user commands.
[0033] It is an object of the present disclosure to provide a system and a
method where complex performance measurements are broken down into more detailed components, enabling a deeper analysis of specific aspects or dimensions

of performance.
[0034] It is an object of the present disclosure to provide a system and a
method where a user may generate reports from various analyses using the nested templates at a desired frequency.
[0035] It is an object of the present disclosure to provide a system and a
method where frequently executed templates from a report type may be reused based on a query from the user.
[0036] It is an object of the present disclosure to provide a system and a
method where a child dashboard may generate an input to a parent dashboard while
performing complex calculations and both of these outputs may be provided to the
user.
BRIEF DESCRIPTION OF DRAWINGS
[0037] The accompanying drawings, which are incorporated herein and
constitute a part of this disclosure, illustrate exemplary embodiments of the
disclosed methods and systems, in which like reference numerals refer to the same
parts throughout the different drawings. Components in the drawings are not
necessarily to scale, emphasis instead being placed upon clearly illustrating the
principles of the present disclosure. Some drawings may indicate the components
using block diagrams and may not represent the internal circuitry of each
component. It will be appreciated by those skilled in the art that disclosure of such drawings includes the disclosure of electrical components, electronic components, or circuitry commonly used to implement such components.
[0038] FIG. 1 illustrates an exemplary network architecture for
implementing a proposed system, in accordance with an embodiment of the present
disclosure.
[0039] FIG. 2 illustrates an exemplary block diagram of a proposed system,
in accordance with an embodiment of the present disclosure.

[0040] FIG. 3 illustrates an exemplary flow diagram for generating a key
performance indicator (KPI) report/service level report/alarm/counters report through a distributed file system, in accordance with an embodiment of the present disclosure.
[0041] FIG. 4 illustrates an exemplary block diagram of a system
architecture for analyzing complex performance metrics, in accordance with an embodiment of the present disclosure.
[0042] FIG. 5 illustrates an exemplary computer system in which or with
which the embodiments of the present disclosure may be implemented.
[0043] FIG. 6 illustrates an exemplary flow diagram of a method for
analyzing complex performance metrics, in accordance with an embodiment of the present disclosure.
[0044] The foregoing shall be more apparent from the following more
detailed description of the disclosure.
LIST OF REFERENCE NUMERALS
100 – Network architecture
102-1, 102-2…102-N – Users
104-1, 104-2…104-N – Computing Devices
106 – Network
108 – System
200 – Block diagram
202 – One or more processor(s)
204 – Memory
206 – Interface(s)
208 – Processing unit/engine(s)
210 – Database
212 – Data Parameter engine
214 – Other engine(s)
300 – Flow chart
400 – Block Diagram
302, 402 – GUI
304, 404 – Load Balancer
306, 406 – IPM
308 – AI/ML
312, 408 – Distributed Data Lake
310, 410 – Computational Layer
314, 412 – Distributed File System
500 – Computer system
510 – External Storage Device
520 – Bus
530 – Main Memory
540 – Read Only Memory
550 – Mass Storage Device
560 – Communication Port
570 – Processor
DETAILED DESCRIPTION
[0045] In the following description, for explanation, various specific details
are outlined in order to provide a thorough understanding of embodiments of the
present disclosure. It will be apparent, however, that embodiments of the present
disclosure may be practiced without these specific details. Several features
described hereafter can each be used independently of one another or with any
combination of other features. An individual feature may not address all of the problems discussed above or might address only some of the problems discussed above. Some of the problems discussed above might not be fully addressed by any of the features described herein.
[0046] The ensuing description provides exemplary embodiments only and

is not intended to limit the scope, applicability, or configuration of the disclosure.
Rather, the ensuing description of the exemplary embodiments will provide those
skilled in the art with an enabling description for implementing an exemplary
embodiment. It should be understood that various changes may be made in the
function and arrangement of elements without departing from the spirit and scope
of the disclosure as set forth.
[0047] Specific details are given in the following description to provide a
thorough understanding of the embodiments. However, it will be understood by one
of ordinary skill in the art that the embodiments may be practiced without these
specific details. For example, circuits, systems, networks, processes, and other
components may be shown as components in block diagram form in order not to obscure the embodiments in unnecessary detail. In other instances, well-known circuits, processes, algorithms, structures, and techniques may be shown without unnecessary detail to avoid obscuring the embodiments.
[0048] Also, it is noted that individual embodiments may be described as a
process that is depicted as a flowchart, a flow diagram, a data flow diagram, a structure diagram, or a block diagram. Although a flowchart may describe the operations as a sequential process, many of the operations can be performed in parallel or concurrently. In addition, the order of the operations may be re-arranged.
A process is terminated when its operations are completed but could have additional
steps not included in a figure. A process may correspond to a method, a function, a procedure, a subroutine, a subprogram, etc. When a process corresponds to a function, its termination can correspond to a return of the function to the calling function or the main function.
[0049] The word “exemplary” and/or “demonstrative” is used herein to
mean serving as an example, instance, or illustration. For the avoidance of doubt, the subject matter disclosed herein is not limited by such examples. In addition, any aspect or design described herein as “exemplary” and/or “demonstrative” is not necessarily to be construed as preferred or advantageous over other aspects or

designs, nor is it meant to preclude equivalent exemplary structures and techniques
known to those of ordinary skill in the art. Furthermore, to the extent that the terms
“includes,” “has,” “contains,” and other similar words are used in either the detailed
description or the claims, such terms are intended to be inclusive like the term
“comprising” as an open transition word without precluding any additional or other
elements.
[0050] Reference throughout this specification to “one embodiment” or “an
embodiment” or “an instance” or “one instance” means that a particular feature,
structure, or characteristic described in connection with the embodiment is included
in at least one embodiment of the present disclosure. Thus, the appearances of the
phrases “in one embodiment” or “in an embodiment” in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
[0051] The terminology used herein is to describe particular embodiments
only and is not intended to be limiting the disclosure. As used herein, the singular forms “a”, “an”, and “the” are intended to include the plural forms as well, unless the context indicates otherwise. It will be further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the
presence of stated features, integers, steps, operations, elements, and/or
components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. As used herein, the term “and/or” includes any combinations of one or more of the associated listed items.
[0052] The various embodiments throughout the disclosure will be
explained in more detail with reference to FIGs. 1-6.
[0053] FIG. 1 illustrates an example network architecture (100) for
implementing a proposed system (108), in accordance with an embodiment of the present disclosure.

[0054] As illustrated in FIG. 1, one or more computing devices (104-1, 104-
2…104-N) may be connected to a proposed system (108) through a network (106).
A person of ordinary skill in the art will understand that the one or more computing
devices (104-1, 104-2…104-N) may be collectively referred as computing devices
(104) and individually referred as a computing device (104). One or more users
(102-1, 102-2…102-N) may provide one or more requests to the system (108). A
person of ordinary skill in the art will understand that the one or more users (102-
1, 102-2…102-N) may be collectively referred as users (102) and individually
referred as a user (102). Further, the computing devices (104) may also be referred
as a user equipment (UE) (104) or as UEs (104) throughout the disclosure.
[0055] In an embodiment, the computing device (104) may include, but not
be limited to, a mobile, a laptop, etc. Further, the computing device (104) may include one or more in-built or externally coupled accessories including, but not limited to, a visual aid device such as a camera, audio aid, microphone, or keyboard.
Furthermore, the computing device (104) may include a mobile phone, smartphone,
virtual reality (VR) devices, augmented reality (AR) devices, a laptop, a general-purpose computer, a desktop, a personal digital assistant, a tablet computer, and a mainframe computer. Additionally, input devices for receiving input from the user (102) such as a touchpad, touch-enabled screen, electronic pen, and the like may be
used.
[0056] In an embodiment, the network (106) may include, by way of
example but not limitation, at least a portion of one or more networks having one or more nodes that transmit, receive, forward, generate, buffer, store, route, switch, process, or a combination thereof, etc. one or more messages, packets, signals,
waves, voltage or current levels, some combination thereof, or so forth. The
network (106) may also include, by way of example but not limitation, one or more of a wireless network, a wired network, an internet, an intranet, a public network, a private network, a packet-switched network, a circuit-switched network, an ad hoc network, an infrastructure network, a Public-Switched Telephone Network (PSTN),
a cable network, a cellular network, a satellite network, a fiber optic network, or

some combination thereof.
[0057] In an embodiment, the system (108) may include a framework and
outline for analysis of a network performance. The system (108) may ensure that
the user (102) may have the same format throughout for performance analysis so
that reading the reports is easy, consistent, and less prone to erroneous
interpretations. The system (108) may provide flexibility to choose certain
performance metrics, analyse them at regular intervals and to compute other
significant metrics of network data, thereby ensuring efficient monitoring of the
data. Further, the system (108) may analyse issues quickly and provide a solution
to the user (102).
[0058] In an embodiment, the system (108) may use an AI/ML model for
better analysis during report generation. A machine learning algorithm is a mathematical method to find patterns in a set of data. Machine Learning algorithms are often drawn from statistics, calculus, and linear algebra. Some popular examples
of machine learning algorithms include linear regression, decision trees, random
forest, and XGBoost. Further, the system (108) may provide different visualization of KPIs, metrics and provide insights on the generated reports to the user (102). The visualization may be in the form of bar charts, histograms, color coding and other different way of representing the KPIs for visualization. As an example, the KPIs
include radio network KPIs such as Accessibility KPIs, Retainability KPIs,
Mobility KPIs, Availability KPIs, Utilization KPIs, and Traffic KPIs. In another embodiment KPIs may include any sort of measurement, reading, or data that is relevant to the managed network. The disclosure is not limited to these KPIs only and can include other performance indicators also. The complex performance
metrics are those metrics which require huge computation resources and information from
multiple sources, such as jitter, throughput, latency, device metrics, network metrics, error rate, etc. The metrics presented here are only some examples, and the disclosure is not limited to these. The insights may include resource utilization, phase progress, threats, trends, forecasts, projections, etc.
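As a concrete illustration of deriving a trend insight from a KPI time series, the sketch below fits a simple linear regression (one of the algorithm families named above) to a latency series and turns the slope into a human-readable insight. This is a minimal stand-in under stated assumptions, not the disclosure's AI/ML model.

```python
# Least-squares slope of a KPI series over equally spaced time steps;
# a positive slope is reported as a rising-trend insight.

def linear_trend(values: list) -> float:
    n = len(values)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(values) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, values))
    var = sum((x - mean_x) ** 2 for x in xs)
    return cov / var

latency_ms = [8.0, 8.2, 8.5, 8.9, 9.4]   # illustrative hourly samples
slope = linear_trend(latency_ms)
insight = "latency rising" if slope > 0 else "latency stable/falling"
print(round(slope, 2), insight)  # → 0.35 latency rising
```

The same slope-to-insight mapping extends naturally to forecasts and projections by extrapolating the fitted line.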

[0059] In an embodiment, the user (102) may generate the reports using
the same templates at a predetermined time frequency. In an embodiment, the reports
may be generated in different formats such as Word, Excel sheet, PDF, etc. The
predetermined time frequency may be hourly, daily, weekly, periodic, or non-periodic as defined by the user. Further, the system (108) may provide a user-
friendly user interface (UI) that eases the process instead of writing complex
queries. As an example, the queries may be auto generated based on the selection
of one or more parameters or criteria.
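The frequency-based scheduling described in [0059] amounts to mapping a user-chosen frequency to a time delta and computing the next run from the last one. A real deployment would likely delegate this to a scheduler such as cron; the mapping below is an illustrative sketch.

```python
from datetime import datetime, timedelta

# User-selectable report frequencies (illustrative subset)
FREQUENCIES = {
    "hourly": timedelta(hours=1),
    "daily": timedelta(days=1),
    "weekly": timedelta(weeks=1),
}

def next_run(last_run: datetime, frequency: str) -> datetime:
    """Next report-generation time for the chosen frequency."""
    return last_run + FREQUENCIES[frequency]

last = datetime(2023, 7, 24, 9, 0)
print(next_run(last, "daily"))  # → 2023-07-25 09:00:00
```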
[0060] Although FIG. 1 shows exemplary components of the network
architecture (100), in other embodiments, the network architecture (100) may
include fewer components, different components, differently arranged components,
or additional functional components than depicted in FIG. 1. Additionally, or
alternatively, one or more components of the network architecture (100) may
perform functions described as being performed by one or more other components
of the network architecture (100).
[0061] FIG. 2 illustrates an example block diagram (200) of a proposed
system (108), in accordance with an embodiment of the present disclosure.
[0062] Referring to FIG. 2, in an embodiment, the system (108) may include
one or more processor(s) (202). The one or more processor(s) (202) may be
implemented as one or more microprocessors, microcomputers, microcontrollers,
digital signal processors, central processing units, logic circuitries, and/or any devices that process data based on operational instructions. Among other capabilities, the one or more processor(s) (202) may be configured to fetch and execute computer-readable instructions stored in a memory (204) of the system
(108). The memory (204) may be configured to store one or more computer-
readable instructions or routines in a non-transitory computer readable storage medium, which may be fetched and executed to create or share data packets over a network service. The memory (204) may comprise any non-transitory storage device including, for example, volatile memory such as random-access memory

(RAM), or non-volatile memory such as erasable programmable read only memory (EPROM), flash memory, and the like.
[0063] In an embodiment, the system (108) may include an interface(s)
(206). The interface(s) (206) may comprise a variety of interfaces, for example,
interfaces for data input and output devices (I/O), storage devices, and the like. The
interface(s) (206) may facilitate communication through the system (108). The interface(s) (206) may also provide a communication pathway for one or more components of the system (108). Examples of such components include, but are not limited to, processing engine(s) (208) and a database (210). Further, the processing
engine(s) (208) may include a data parameter engine (212) and other engine(s)
(214). In an embodiment, the other engine(s) (214) may include, but are not limited to, a data ingestion engine, an input/output engine, a reporting engine, and a notification engine. The processing engine(s) (208) may organise multiple metrics in a parent-child relationship. The processing engine(s) (208) may break down complex performance measurements into more detailed components, enabling a
deeper analysis of specific aspects or dimensions of performance.
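The parent-child organisation of metrics described above can be illustrated with a minimal sketch. This is not the claimed implementation; the class, method names, and metric names (e.g., "end_to_end_latency_ms") are invented purely for illustration of how a complex measurement may be broken down into child components.

```python
class Metric:
    """A node in a hypothetical parent-child metric hierarchy."""

    def __init__(self, name, value=None):
        self.name = name
        self.value = value
        self.children = []

    def add_child(self, child):
        self.children.append(child)
        return child

    def breakdown(self, depth=0):
        """Return (depth, name, value) rows for this metric and all descendants,
        enabling a deeper, dimension-by-dimension analysis."""
        rows = [(depth, self.name, self.value)]
        for child in self.children:
            rows.extend(child.breakdown(depth + 1))
        return rows


# A complex measurement decomposed into more detailed child components.
latency = Metric("end_to_end_latency_ms")
latency.add_child(Metric("network_latency_ms", 12.4))
latency.add_child(Metric("processing_latency_ms", 7.1))

for depth, name, value in latency.breakdown():
    print("  " * depth + f"{name}: {value}")
```

A parent metric here carries no value of its own until its children are analysed, mirroring the idea that detailed components feed the higher-level measurement.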
[0064] In an embodiment, the processing engine(s) (208) may be
implemented as a combination of hardware and programming (for example, programmable instructions) to implement one or more functionalities of the
processing engine(s) (208). In examples described herein, such combinations of
hardware and programming may be implemented in several different ways. For example, the programming for the processing engine(s) (208) may be processor-executable instructions stored on a non-transitory machine-readable storage medium and the hardware for the processing engine(s) (208) may comprise a
processing resource (for example, one or more processors) to execute such
instructions. In the present examples, the machine-readable storage medium may store instructions that, when executed by the processing resource, implement the processing engine(s) (208). In such examples, the system may comprise the machine-readable storage medium storing the instructions and the processing
resource to execute the instructions, or the machine-readable storage medium may
be separate but accessible to the system and the processing resource. In other examples, the processing engine(s) (208) may be implemented by electronic circuitry.
[0065] The processing engine (208) is configured to obtain a request for
generating a report. The request comprises a template associated with a plurality of
parameters. The processing engine (208) is also configured to fetch data pertaining
to the plurality of parameters from a distributed data lake (DDL). Further, the
processing engine (208) is configured to generate the report by populating the
template based on the fetched data and render the report on a graphical user
interface (GUI).
[0066] In an embodiment, upon obtaining the request, the processing engine
(208) is configured to determine whether the request is a valid request or an invalid
request. Upon determining that the request is valid, the processing engine (208) is
configured to proceed with the report generation. Upon determining that the request is invalid, the processing engine (208) is configured to generate a failure
notification. The processing engine (208) is further configured to store the report in a distributed file system (DFS).
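The validate-then-generate behaviour described in this paragraph can be sketched as follows. The function name, the request fields ("template", "parameters"), and the notification strings are assumptions for illustration only, not the actual interface of the processing engine (208).

```python
def handle_report_request(request):
    """Hypothetical sketch: validate a report request, then either proceed
    with report generation or produce a failure notification."""
    is_valid = (
        isinstance(request, dict)
        and "template" in request
        and isinstance(request.get("parameters"), list)
        and len(request["parameters"]) > 0
    )
    if not is_valid:
        # Invalid request: generate a failure notification.
        return {"status": "failure", "notification": "invalid request"}
    # Valid request: proceed with report generation (placeholder result).
    return {"status": "success", "report": f"report({request['template']})"}


print(handle_report_request({"template": "kpi_daily", "parameters": ["calls"]}))
print(handle_report_request({}))
</imports>```

The validity criteria shown (a template plus a non-empty parameter list) are one plausible reading of the disclosure; a real system could apply stricter schema checks.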
[0067] In an embodiment, to generate the report, the processing engine
(208) is configured to provide the fetched data pertaining to the plurality of
parameters as an input to a machine learning (ML) model and obtain insights related
to the plurality of parameters. To fetch the data, the processing engine (208) is configured to determine whether the data is fetched within a pre-defined time period or not.
[0068] In an embodiment, when the data is not fetched within the pre-
defined time period, the processing engine (208) is configured to transmit the
request for generating the report to a computation layer. The computation layer is configured to compute a first child dashboard and provide an output of the first child dashboard as an input to a parent dashboard.
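The timeout-based hand-off just described can be sketched as below. The timeout value, callables, and return shapes are illustrative assumptions: if the DDL does not return data within the pre-defined time period, the request is transmitted to the computation layer instead.

```python
import time

# Hypothetical pre-defined time period (seconds); the disclosure does not
# specify a concrete value.
PRE_DEFINED_TIME_PERIOD_S = 2.0

def fetch_or_delegate(fetch_from_ddl, send_to_computation_layer, request):
    """Fetch data within the pre-defined period; otherwise transmit the
    report request to the computation layer."""
    start = time.monotonic()
    data = fetch_from_ddl(request)
    elapsed = time.monotonic() - start
    if data is None or elapsed > PRE_DEFINED_TIME_PERIOD_S:
        return send_to_computation_layer(request)
    return data


# Example: a DDL fetch that returns nothing triggers delegation to the CL.
result = fetch_or_delegate(
    lambda req: None,
    lambda req: f"CL handled {req}",
    "report-123",
)
print(result)
```

Treating a missing result the same as a late one is a simplification; the disclosure only requires that an untimely fetch route the request to the computation layer.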
[0069] Although FIG. 2 shows exemplary components of the system (108),
in other embodiments, the system (108) may include fewer components, different
components, differently arranged components, or additional functional components
than depicted in FIG. 2. Additionally, or alternatively, one or more components of
the system (108) may perform functions described as being performed by one or
more other components of the system (108).
[0070] FIG. 3 illustrates an example flow diagram (300) for generating a
key performance indicator (KPI) report/service level report/alarm/counters report through a distributed file system, in accordance with an embodiment of the present disclosure.
[0071] As illustrated in FIG. 3, the flow diagram (300) may include a
detailed procedure for generating various reports.
[0072] In an embodiment, a user may create a template and make a report on a graphical user interface (GUI) (302). The user may send a request to create and store a report to a load balancer (ELB or LB) (304) via the GUI (302) at step
316. The ELB (304) may send the request to create and store the report to an
integrated performance manager (IPM) (306) at step 318. At step 320, the IPM (306) converts the request into JavaScript Object Notation (JSON) format. After the conversion, the IPM (306) may send a request validation successful message to the LB (304) at step 324. Further, the LB (304) may send the request validation successful message to the GUI (302). At step 322, it is determined whether the request was successful based on the request validation successful message received from the IPM. The IPM (306) may fetch data from a distributed data lake (312) for report creation at step 330. The IPM may create a report and store the report in a distributed file system (DFS) (314) at step 332.
[0073] The IPM (306) may send parameters to an AI/ML model (308) at step 336, where the AI/ML model (308) may be trained using the data, and the IPM (306) may request insights. Further, the AI/ML model (308) may send the data and insights to the IPM at step 348, while the IPM may send a request successful message to the LB (304) at step 338. The LB may further send the request successful message to the GUI (302) at step
340 to generate a JSON response at step 342. The IPM may fetch data from a computation layer (CL) (310) at step 352 for report creation at step 354. Based on a failed request (350), the IPM (306) may generate a report and send a request failed message to the LB (304) at step 356, which is further provided to the GUI at step 358. A first child dashboard may be computed, and the child dashboard output may be provided as an input for a parent dashboard. For example, in a waterfall dashboard, the busy hour of the child dashboard may be taken as an input for the parent dashboard. The parent dashboard may compute only at a busy hour of a child KPI. Both outputs of the child and parent dashboards may be populated at the GUI. Upon computation, the CL may send the output to the IPM to save the report.
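The waterfall (child-to-parent) dashboard computation described above can be sketched as follows. The per-hour KPI samples and function names are invented sample data: the child dashboard finds the busy hour, and the parent dashboard computes only at that busy hour.

```python
def compute_child_dashboard(samples):
    """Child dashboard: find the busy hour (the hour with the peak KPI value).
    `samples` maps hour-of-day to a child KPI value."""
    busy_hour = max(samples, key=samples.get)
    return {"busy_hour": busy_hour, "peak_value": samples[busy_hour]}

def compute_parent_dashboard(samples, child_output):
    """Parent dashboard: compute only at the busy hour of the child KPI."""
    hour = child_output["busy_hour"]
    return {"hour": hour, "parent_kpi": samples[hour]}


# Made-up hourly samples for a child KPI and a parent KPI.
child_kpi = {9: 120, 10: 340, 11: 210}
parent_kpi = {9: 1.2, 10: 3.4, 11: 2.1}

child_out = compute_child_dashboard(child_kpi)
parent_out = compute_parent_dashboard(parent_kpi, child_out)
print(child_out, parent_out)
```

Both outputs would then be populated at the GUI, with the computation layer returning the result to the IPM for saving, as the flow above describes.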
[0074] Further, in an embodiment, the LB (304) may send the request failed
message to the GUI (302) at step 364. Additionally, based on a failed validation
(360), the IPM (306) may send a validation failed message to the LB (304) at step
362, while the LB may send the validation failed message to the GUI (302) at step 364.
Further, the user (102) may view the report right away and may also view it at regular intervals based on a customized schedule.
[0075] The created report template may be used seamlessly in different flows, such as on-demand execution, report scheduling, live monitoring, and the like.
[0076] FIG. 4 illustrates an example block diagram (400) of a system
architecture for analyzing complex performance metrics, in accordance with an
embodiment of the present disclosure.
[0077] As illustrated in FIG. 4, in an embodiment, the system (108) may
include a load balancer (404) that stores various requests from a GUI (402). The
load balancer may be accessed by an IPM (406) that processes the various requests
from the user (102) and stores an output in a distributed data lake (DDL) (408). The
IPM may use a computation layer (410) that may perform complex calculations such as exponentiation, square root, and trigonometric functions, and store the output in a DFS (412).
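The kinds of complex calculations attributed to the computation layer can be illustrated with a short sketch. The derived-KPI formulas below are made-up examples, not taken from the disclosure; they simply exercise exponentiation, square root, and a trigonometric function.

```python
import math

def derived_kpi(samples):
    """Example derived metric: root-mean-square of the samples,
    using exponentiation and square root."""
    return math.sqrt(sum(x ** 2 for x in samples) / len(samples))

def phase_metric(angle_degrees):
    """Example trigonometric computation on a measured angle."""
    return math.sin(math.radians(angle_degrees))


print(derived_kpi([3.0, 4.0]))   # RMS of the two samples
print(phase_metric(30.0))        # sine of 30 degrees
```

In the architecture above, such results would be stored in the DFS (412) rather than printed.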
[0078] FIG. 5 illustrates an example computer system (500) in which or
with which the embodiments of the present disclosure may be implemented.
[0079] As shown in FIG. 5, the computer system (500) may include an
external storage device (510), a bus (520), a main memory (530), a read-only
memory (540), a mass storage device (550), a communication port(s) (560), and a
processor (570). A person skilled in the art will appreciate that the computer system
(500) may include more than one processor and communication ports. The
processor (570) may include various modules associated with embodiments of the
present disclosure. The communication port(s) (560) may be any of an RS-232 port
for use with a modem-based dialup connection, a 10/100 Ethernet port, a Gigabit
or 10 Gigabit port using copper or fiber, a serial port, a parallel port, or other
existing or future ports. The communication port(s) (560) may be chosen depending on a network, such as a Local Area Network (LAN), Wide Area Network (WAN), or any network to which the computer system (500) connects.
[0080] In an embodiment, the main memory (530) may be Random Access
Memory (RAM), or any other dynamic storage device commonly known in the art.
The read-only memory (540) may be any static storage device(s) e.g., but not
limited to, a Programmable Read Only Memory (PROM) chip for storing static
information e.g., start-up or basic input/output system (BIOS) instructions for the
processor (570). The mass storage device (550) may be any current or future mass
storage solution, which can be used to store information and/or instructions.
Exemplary mass storage solutions include, but are not limited to, Parallel Advanced Technology Attachment (PATA) or Serial Advanced Technology Attachment (SATA) hard disk drives or solid-state drives (internal or external, e.g., having Universal Serial Bus (USB) and/or Firewire interfaces).
[0081] In an embodiment, the bus (520) may communicatively couple the
processor(s) (570) with the other memory, storage, and communication blocks. The bus (520) may be, e.g. a Peripheral Component Interconnect (PCI) / PCI Extended (PCI-X) bus, Small Computer System Interface (SCSI), Universal Serial Bus (USB), or the like, for connecting expansion cards, drives, and other subsystems as
well as other buses, such as a front side bus (FSB), which connects the processor (570) to the computer system (500).
[0082] In another embodiment, operator and administrative interfaces, e.g.,
a display, keyboard, and cursor control device may also be coupled to the bus (520)
to support direct operator interaction with the computer system (500). Other
operator and administrative interfaces can be provided through network
connections connected through the communication port(s) (560). Components
described above are meant only to exemplify various possibilities. In no way should
the aforementioned exemplary computer system (500) limit the scope of the present
disclosure.
[0083] FIG. 6 illustrates an exemplary flow diagram of a method (600) for
analyzing complex performance metrics, in accordance with an embodiment of the present disclosure. The flow diagram (600) includes steps which are defined below.
[0084] At step 602, the method (600) may include obtaining a request for
generating a report. The request comprises a template associated with a plurality of
parameters. In an example, the IPM is configured to obtain the request. The request may be provided by a user or may be dynamically generated in response to an operation. The parameters may be associated with KPIs, metrics, insights, template variables, operation parameters, and parameters related to the presentation of the report.
[0085] At step 604, the method (600) may include fetching data pertaining
to a plurality of parameters from a distributed data lake (DDL). In an example, the IPM is configured to fetch the data. The data may be fetched by sending a request or a query to the distributed data lake (DDL). The fetched data may include the data associated with the parameters such as parameter values under different conditions,
range of values, and other parameters associated with these parameters.
[0086] At step 606, the method (600) may include generating the report by
populating the template based on the fetched data. The populating the template may include inserting the values of different parameters in the template to generate the report. In an embodiment, the report is populated in a sequence based on user input
or the report may be populated in one instance without any user intervention.
[0087] At step 608, the method (600) may include rendering the report on a
graphical user interface (GUI). Rendering the report may include displaying the report in different forms based on user inputs. In an example, the user can drill down into the report to different granular levels to obtain different insights. In another embodiment, the report may be rendered differently based on user-defined criteria, such as color and highlighting of certain parameters which are more important for the user.
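The method (600) as a whole, steps 602 through 608, can be sketched end-to-end in a few lines. The template syntax, the in-memory stand-in for the distributed data lake, and all parameter names and values are assumptions for illustration; printing stands in for rendering on the GUI.

```python
from string import Template

# Hypothetical in-memory stand-in for the distributed data lake (DDL).
FAKE_DDL = {"success_rate": "99.2%", "drop_rate": "0.3%"}

def fetch_parameters(parameters):
    """Step 604: fetch data pertaining to the plurality of parameters."""
    return {p: FAKE_DDL[p] for p in parameters}

def generate_report(request):
    """Steps 602 and 606: obtain the request and populate its template
    with the fetched data."""
    data = fetch_parameters(request["parameters"])
    return Template(request["template"]).substitute(data)

def render_report(report):
    """Step 608: render the report (printing stands in for the GUI)."""
    print(report)


request = {
    "template": "Success: $success_rate, Drops: $drop_rate",
    "parameters": ["success_rate", "drop_rate"],
}
render_report(generate_report(request))
```

Populating the template here is exactly the insertion of parameter values described in step 606; a nested template would simply substitute a child report's output as one of the parent's parameters.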
[0088] In an example, the present disclosure discloses a non-transitory
computer-readable medium for analyzing complex performance metrics. The non-
transitory computer-readable medium may be, for example, an internal memory
device or an external memory device. In an example, the non-transitory computer-
readable medium may include instructions that, when executed by one or more
processors, cause the one or more processors to obtain a request for generating a
report. The request comprises a template associated with a plurality of parameters.
The non-transitory computer-readable medium may also include instructions to fetch data pertaining to the plurality of parameters from a distributed data lake (DDL) by sending one or more queries or requests to the distributed data lake (DDL).
[0089] Further, the non-transitory computer-readable medium may include
instructions to generate the report by populating the template based on the fetched
data. The non-transitory computer-readable medium may also include instructions to render the report on a graphical user interface (GUI). The populating the template may include inserting the values of different parameters in the template to generate the report.
[0090] While considerable emphasis has been placed herein on the preferred
embodiments, it will be appreciated that many embodiments can be made and that many changes can be made in the preferred embodiments without departing from the principles of the disclosure. These and other changes in the preferred embodiments of the disclosure will be apparent to those skilled in the art from the
disclosure herein, whereby it is to be distinctly understood that the foregoing descriptive matter is to be implemented merely as illustrative of the disclosure and not as a limitation.
ADVANTAGES OF THE INVENTION
[0091] The present disclosure provides a system and a method that ensures flexibility while choosing performance metrics, analysing them at regular intervals, and computing significant metrics for efficient monitoring of data.
[0092] The present disclosure provides a system and a method where an
AI/ML model contributes towards better analysis of a generated report. Further, the
system provides different visualizations of key performance indicator (KPI) metrics.
[0093] The present disclosure provides a system and a method where a user
may generate reports using the same templates at a predetermined time frequency. This helps in continuous monitoring, eradicating the redundant human effort for periodic
15 creation and analysis of reports.
[0094] The present disclosure provides a system and a method where a user-
friendly user-interface (UI) eases the process of generating reports instead of writing complex queries.
[0095] The present disclosure provides a system and a method where
a breakdown of complex performance measurements into more detailed components
enables a deeper analysis of specific aspects or dimensions of performance.
[0096] The present disclosure provides technical advancement related to
creation of nested dashboards using nested templates for analysing complex
performance measurements. This advancement addresses the limitations of existing
solutions by using the nested templates. The disclosure involves fetching, by an IPM, data pertaining to the plurality of parameters from a distributed data lake (DDL) and generating a report by populating the template based on the fetched data, which offer significant improvements in complex performance measurements,
enabling a deeper analysis of specific aspects or dimensions of performance.
[0097] The present disclosure provides a system and a method where a
framework ensures that the user has the same format during performance analysis so that interpretation of reports is easy, consistent, and less prone to human errors.
WE CLAIM:
1. A method (600) for analyzing performance metrics, the method comprising:
obtaining (602) a request for generating a report, wherein the request comprises a template associated with a plurality of parameters;
fetching (604) data pertaining to the plurality of parameters from a distributed data lake (DDL);
generating (606) the report by populating the template based on the fetched data; and
rendering (608) the report on a graphical user interface (GUI).
2. The method (600) as claimed in claim 1, wherein upon obtaining the request, the
method (600) comprises:
determining, by the IPM, whether the request is a valid request or an invalid request;
upon determining that the request is valid, proceeding with the report generation; and
upon determining that the request is invalid, generating a failure notification.
3. The method (600) as claimed in claim 1 further comprising storing the report in a distributed file system (DFS).
4. The method (600) as claimed in claim 1, wherein generating the report comprises:
providing the fetched data pertaining to the plurality of parameters as an input to a machine learning (ML) model; and
obtaining insights related to the plurality of parameters from the ML model.
5. The method (600) as claimed in claim 1, wherein fetching the data comprises
determining whether the data is fetched within a pre-defined time period or not.

6. The method (600) as claimed in claim 5, wherein when the data is not fetched
within the pre-defined time period, the method comprises:
transmitting, by the IPM, the request for generating the report to a computation layer.
7. The method (600) as claimed in claim 6 further comprising:
computing, by the computation layer, a first child dashboard; and providing an output of the first child dashboard as an input to a parent dashboard.
8. A system (108) for analyzing performance metrics, the system comprising:
a processing engine (208) configured to:
obtain a request for generating a report, wherein the request comprises a template associated with a plurality of parameters;
fetch data pertaining to the plurality of parameters from a distributed data lake (DDL);
generate the report by populating the template based on the fetched data; and
render the report on a graphical user interface (GUI).
9. The system (108) as claimed in claim 8, wherein upon obtaining the request, the
processing engine (208) is configured to:
determine whether the request is a valid request or an invalid request; upon determining that the request is valid, proceed with the report generation; and
upon determining that the request is invalid, generate a failure notification.
10. The system (108) as claimed in claim 8, wherein the processing engine (208) is
further configured to store the report in a distributed file system (DFS).

11. The system (108) as claimed in claim 8, wherein to generate the report, the
processing engine (208) is configured to:
provide the fetched data pertaining to the plurality of parameters as an input to a machine learning (ML) model; and
obtain insights related to the plurality of parameters.
12. The system (108) as claimed in claim 8, wherein to fetch the data, the processing engine (208) is configured to determine whether the data is fetched within a pre-defined time period or not.
13. The system (108) as claimed in claim 12, wherein when the data is not fetched within the pre-defined time period, the processing engine (208) is configured to:
transmit the request for generating the report to a computation layer.
14. The system (108) as claimed in claim 13, wherein the computation layer is
configured to:
compute a first child dashboard; and
provide an output of the first child dashboard as an input to a parent dashboard.
15. A computing device (104) for analyzing performance metrics, the computing
device (104) comprising:
a processor configured to:
obtain a request for generating a report, wherein the request comprises a template associated with a plurality of parameters;
fetch data pertaining to the plurality of parameters from a distributed data lake (DDL);
generate the report by populating the template based on the fetched data; and
render the report on a graphical user interface (GUI).

Documents

Application Documents

# Name Date
1 202321049640-STATEMENT OF UNDERTAKING (FORM 3) [24-07-2023(online)].pdf 2023-07-24
2 202321049640-PROVISIONAL SPECIFICATION [24-07-2023(online)].pdf 2023-07-24
3 202321049640-FORM 1 [24-07-2023(online)].pdf 2023-07-24
4 202321049640-DRAWINGS [24-07-2023(online)].pdf 2023-07-24
5 202321049640-DECLARATION OF INVENTORSHIP (FORM 5) [24-07-2023(online)].pdf 2023-07-24
6 202321049640-FORM-26 [19-10-2023(online)].pdf 2023-10-19
7 202321049640-FORM-26 [26-04-2024(online)].pdf 2024-04-26
8 202321049640-FORM 13 [26-04-2024(online)].pdf 2024-04-26
9 202321049640-FORM-26 [30-04-2024(online)].pdf 2024-04-30
10 202321049640-Request Letter-Correspondence [03-06-2024(online)].pdf 2024-06-03
11 202321049640-Power of Attorney [03-06-2024(online)].pdf 2024-06-03
12 202321049640-Covering Letter [03-06-2024(online)].pdf 2024-06-03
13 202321049640-ENDORSEMENT BY INVENTORS [01-07-2024(online)].pdf 2024-07-01
14 202321049640-DRAWING [01-07-2024(online)].pdf 2024-07-01
15 202321049640-CORRESPONDENCE-OTHERS [01-07-2024(online)].pdf 2024-07-01
16 202321049640-COMPLETE SPECIFICATION [01-07-2024(online)].pdf 2024-07-01
17 202321049640-CORRESPONDENCE(IPO)-(WIPO DAS)-10-07-2024.pdf 2024-07-10
18 202321049640-ORIGINAL UR 6(1A) FORM 26-100724.pdf 2024-07-15
19 Abstract1.jpg 2024-08-02
20 202321049640-FORM 18 [01-10-2024(online)].pdf 2024-10-01
21 202321049640-FORM 3 [12-11-2024(online)].pdf 2024-11-12