Abstract: The present disclosure relates to a method and a system for generation of one or more interconnected dashboards. The disclosure encompasses receiving a first request for generation of a first dashboard and a second request to treat the first dashboard as a waterfall dashboard; saving an associated information of the second request; forwarding the associated information for generating a report, and the report for storing; receiving a third request for generation of a second dashboard and a fourth request for adding the first dashboard as a supporting dashboard to the second dashboard using the stored report; interconnecting the first dashboard and the second dashboard; receiving key performance indicators (KPIs) and aggregations, and operations to be applied on the selected KPIs and the aggregations; and computing pre-computed data. [FIG. 4]
FORM 2
THE PATENTS ACT, 1970 (39 OF 1970) & THE PATENT RULES, 2003
COMPLETE SPECIFICATION
(See section 10 and rule 13)
“METHOD AND SYSTEM FOR GENERATION OF INTERCONNECTED DASHBOARDS”
We, Jio Platforms Limited, an Indian National, of Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India.
The following specification particularly describes the invention and the manner in which it is to be performed.
METHOD AND SYSTEM FOR GENERATION OF INTERCONNECTED DASHBOARDS
TECHNICAL FIELD OF THE DISCLOSURE
[0001] Embodiments of the present disclosure generally relate to network performance management systems. More particularly, embodiments of the present disclosure relate to generation of one or more interconnected dashboards.
BACKGROUND
[0002] The following description of the related art is intended to provide background information pertaining to the field of the disclosure. This section may include certain aspects of the art that may be related to various features of the present disclosure. However, it should be appreciated that this section is used only to enhance the understanding of the reader with respect to the present disclosure, and not as admissions of the prior art.
[0004] Network performance management systems typically track network elements and data from network monitoring tools and combine and process such data to determine key performance indicators (KPIs) of the network. Integrated performance management systems provide the means to visualize the network performance data so that network operators and other relevant stakeholders are able to identify the service quality of the overall network and of individual or grouped network elements. By having an overall as well as detailed view of the network performance, the network operators can detect, diagnose, and remedy actual service issues, as well as predict potential service issues or failures in the network and take precautionary measures accordingly.
[0005] In network performance management systems, particularly in visualization sub-systems, the process of performing computations for KPIs and incorporating their outcomes into other computations can present significant challenges. One of the main challenges is the handling of longer intervals and large amounts of data. Network performance management systems deal with vast quantities of data collected over extended periods of time. Performing computations for KPIs within these longer intervals can be time-consuming and resource-intensive. The sheer volume of data can cause delays in processing, leading to inefficiencies in the overall analysis process.
[0006] Additionally, integrating the outcomes of these computations into subsequent computations for other KPIs can be problematic. The existing methods often lack the capability to efficiently connect and relate the different KPIs. This results in a fragmented understanding of the network performance, as the relationships and dependencies between KPIs may not be fully captured or taken into account. As a result, the insights gained from the computations may not provide a comprehensive view of the network's overall performance and may fail to identify critical issues or trends.
[0007] Moreover, the lack of efficiency in performing these computations and incorporating their outcomes limits the ability of network operators and stakeholders to make timely and informed decisions. Delays in data processing and analysis hinder the proactive management of the network, as potential issues or failures may go undetected or unaddressed until they become significant problems.
[0008] In some instances, this large amount of data has been visualized using multiple dashboards in parallel. However, parallel observation of multiple dashboards can pose several problems, especially when performing computations for Key Performance Indicators (KPIs) and incorporating their outcomes into subsequent computations. Additionally, the large amount of data involved in these computations can further complicate the process. The main challenges include:
[0009] When performing computations for specific KPIs and incorporating their outcomes into subsequent computations, data synchronization becomes crucial. Ensuring that the data from different dashboards is aligned and consistent in real-time can be challenging, especially when dealing with large datasets or disparate data sources.
[0010] Further, performing computations for KPIs and incorporating their outcomes can be time-consuming, especially when dealing with a large amount of data. Parallel computation of multiple KPIs in real-time may not be feasible within a reasonable time frame, as it can strain computational resources and impact overall system performance.
[0011] Complexity of Dependencies: The dependencies between different KPI computations can be complex, making it challenging to determine the correct order of computations and incorporate their outcomes accurately. Managing the dependencies and ensuring that the results are properly synchronized can be intricate, especially when there are interdependencies between different KPIs.
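By way of illustration only, the ordering problem described above can be treated as a topological sort over a KPI dependency graph, so that each KPI is computed only after the KPIs it depends on. The KPI names and the dependency map below are hypothetical and do not form part of the disclosure; this is a minimal sketch, not a definitive implementation.

```python
from collections import deque

def kpi_computation_order(dependencies):
    """Return an order in which KPIs can be computed, given a map of
    KPI -> list of KPIs it depends on (Kahn's topological sort)."""
    # Count unmet dependencies for every KPI.
    indegree = {kpi: len(deps) for kpi, deps in dependencies.items()}
    # Reverse edges: which KPIs become computable once `kpi` is done.
    dependents = {kpi: [] for kpi in dependencies}
    for kpi, deps in dependencies.items():
        for dep in deps:
            dependents[dep].append(kpi)

    ready = deque(k for k, d in indegree.items() if d == 0)
    order = []
    while ready:
        kpi = ready.popleft()
        order.append(kpi)
        for dependent in dependents[kpi]:
            indegree[dependent] -= 1
            if indegree[dependent] == 0:
                ready.append(dependent)
    if len(order) != len(dependencies):
        raise ValueError("cyclic KPI dependencies cannot be ordered")
    return order

# Hypothetical dependency graph: three KPIs derived from raw counters,
# plus a composite score that depends on all three.
deps = {
    "throughput": [],
    "latency": [],
    "packet_loss": [],
    "quality_score": ["throughput", "latency", "packet_loss"],
}
order = kpi_computation_order(deps)
```

Such an ordering also surfaces cyclic interdependencies (raised as an error), which is one of the synchronization hazards noted above.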
[0012] Moreover, dealing with a large amount of data and performing computations across multiple dashboards requires significant computational resources. Scaling the system to handle the increased workload and ensuring optimal performance can be a challenge, particularly when the amount of data and the complexity of computations grow.
[0013] Accordingly, it may be noted that telecommunication monitoring services face several challenges when it comes to incorporating results into other computations for KPIs or counter data. By leveraging advanced analytics capabilities and integrated performance management systems, network operators can gain a more comprehensive view of their network performance; however, such analytics capabilities, as they stand, are inefficient and therefore not suitable for such instances.
[0014] Thus, there exists an imperative need in the art to provide a solution that can overcome these and other limitations of the existing solutions.
SUMMARY
[0015] This section is provided to introduce certain aspects of the present disclosure in a simplified form that are further described below in the detailed description. This summary is not intended to identify the key features or the scope of the claimed subject matter.
[0016] An aspect of the present disclosure may relate to a method for generation of one or more interconnected dashboards. The method comprises receiving, at a user interface module, a first request for generation of a first dashboard. The method further comprises receiving, at the user interface module, a second request to treat the first dashboard as a waterfall dashboard. The method further comprises receiving, at an integrated performance management (IPM) module, the second request from the user interface module. The method further comprises saving, in a storage unit at the IPM module, an associated information of the second request. The method further comprises forwarding, by the IPM module to a computation module, the associated information of the second request for generating a report. The method further comprises forwarding, by the computation module to the IPM module, the report for storing in the storage unit. The method further comprises receiving, at the user interface module, a third request for generation of a second dashboard. The method further comprises receiving, at the user interface module, a fourth request for adding the first dashboard as a supporting dashboard to the second dashboard using the stored report. The method further comprises interconnecting, by the IPM module, the first dashboard and the second dashboard. The method further comprises computing, by the computation module, pre-computed data. It is to be noted that the pre-computed data comprises one or more values for one or more KPIs based on one or more operations. Further, the pre-computed data is used to filter the one or more values of the one or more KPIs in the second dashboard.
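By way of illustration only, the request flow described above may be sketched as follows. The class and method names (ComputationModule, IPMModule, handle_waterfall_request) are hypothetical stand-ins for the modules described in this disclosure, and the in-memory dictionary stands in for the storage unit.

```python
class ComputationModule:
    """Generates a report from the waterfall-dashboard request info."""
    def generate_report(self, request_info):
        return {"dashboard": request_info["dashboard"],
                "kpis": request_info["kpis"]}

class IPMModule:
    """Saves request info, obtains the report, and links dashboards."""
    def __init__(self, computation):
        self.computation = computation
        self.storage = {}   # stands in for the storage unit
        self.links = []     # (supporting, main) dashboard pairs

    def handle_waterfall_request(self, request_info):
        name = request_info["dashboard"]
        self.storage[name] = request_info        # save associated info
        report = self.computation.generate_report(request_info)
        self.storage[name + "_report"] = report  # store returned report
        return "ack"                             # acknowledgement ([0017])

    def interconnect(self, supporting, main):
        # The fourth request adds the waterfall dashboard, via its
        # stored report, as a supporting dashboard of the main one.
        assert supporting + "_report" in self.storage
        self.links.append((supporting, main))

ipm = IPMModule(ComputationModule())
# First/second requests: create dashboard "A" and mark it as waterfall.
ack = ipm.handle_waterfall_request({"dashboard": "A", "kpis": ["latency"]})
# Third/fourth requests: create dashboard "B", add "A" as its supporting dashboard.
ipm.interconnect("A", "B")
```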
[0017] In an exemplary aspect of the present disclosure, the method further comprises sending, at the IPM module, an acknowledgement of the second request for treating the first dashboard as the waterfall dashboard.
[0018] In an exemplary aspect of the present disclosure, the method uses the first dashboard, which is treated as the waterfall dashboard, as the supporting dashboard for an existing dashboard.
[0019] In an exemplary aspect of the present disclosure, the method comprises receiving, at the user interface module, the one or more key performance indicators (KPIs) and one or more aggregations in the third request for the first dashboard; and receiving, at the user interface module, the one or more operations to be applied on the one or more KPIs and the one or more aggregations.
[0020] In an exemplary aspect of the present disclosure, the method further comprises setting, by the computation module, a time range for the associated information of the second request.
[0021] In an exemplary aspect of the present disclosure, the pre-computed data is computed for a time period within the set time range, wherein the time period is received from the user interface module.
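By way of illustration only, the constraint that the requested time period must lie within the set time range, and the filtering of pre-computed values over that period, may be sketched as below. The function name and the timestamp-keyed data format are hypothetical.

```python
from datetime import datetime

def compute_for_period(set_range, requested_period, precomputed):
    """Return pre-computed KPI values only if the requested period
    lies within the time range set for the waterfall dashboard."""
    range_start, range_end = set_range
    start, end = requested_period
    if not (range_start <= start and end <= range_end):
        raise ValueError("requested period is outside the set time range")
    # Filter the pre-computed values down to the requested period.
    return {ts: v for ts, v in precomputed.items() if start <= ts <= end}

set_range = (datetime(2024, 1, 1), datetime(2024, 1, 31))
data = {datetime(2024, 1, 5): 0.97, datetime(2024, 1, 20): 0.95}
subset = compute_for_period(
    set_range, (datetime(2024, 1, 1), datetime(2024, 1, 10)), data)
```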
[0022] Another aspect of the present disclosure may relate to a system for generation of one or more interconnected dashboards. The system comprises a user interface module which is configured to receive a first request for generation of a first dashboard. The user interface module is further configured to receive a second request to treat the first dashboard as a waterfall dashboard. The system further comprises an integrated performance management (IPM) module connected with at least the user interface module. The IPM module is configured to receive the second request from the user interface module. The IPM module is further configured to save, in a storage unit, an associated information of the second request. The IPM module is further configured to forward, to a computation module, the associated information of the second request for generating a report. The computation module is connected at least to the IPM module, and the computation module is configured to forward the report to the IPM module for storing at the storage unit. The user interface module is further configured to receive a third request for generation of a second dashboard. The user interface module is further configured to receive a fourth request for adding the first dashboard as a supporting dashboard to the second dashboard using the stored report. The IPM module is further configured to interconnect the first dashboard with the second dashboard. The computation module is further configured to compute pre-computed data. The pre-computed data comprises one or more values for one or more KPIs based on one or more operations. The pre-computed data is further used to filter the one or more values of the one or more KPIs in the second dashboard.
[0023] Yet another aspect of the present disclosure may relate to a user equipment (UE) for generation of one or more interconnected dashboards. The UE comprises a processor configured to: transmit a first request for generation of a first dashboard; transmit a second request to treat the first dashboard as a waterfall dashboard; transmit a third request for generation of a second dashboard; and transmit a fourth request for adding the first dashboard as a supporting dashboard to the second dashboard using a stored report, wherein, for generation of the one or more interconnected dashboards, the process comprises: receiving, at an integrated performance management (IPM) module, the second request from a user interface module; saving, by a storage unit, an associated information of the second request; forwarding, by the IPM module to a computation module, the associated information of the second request for generating a report; forwarding, by the computation module to the IPM module, the report for storing in the storage unit; interconnecting, by the IPM module, the first dashboard and the second dashboard; and computing, by the computation module, pre-computed data, the pre-computed data comprising one or more values for one or more KPIs based on one or more operations, and wherein the pre-computed data is further used to filter the one or more values of the one or more KPIs in the second dashboard.
[0024] Yet another aspect of the present disclosure relates to a non-transitory computer-readable storage medium storing instructions for generation of one or more interconnected dashboards, the storage medium comprising executable code which, when executed by one or more units of a system, causes: a user interface module to receive: a first request for generation of a first dashboard; and a second request to treat the first dashboard as a waterfall dashboard; an integrated performance management (IPM) module, connected with at least the user interface module, to: receive the second request from the user interface module; save, in a storage unit, an associated information of the second request; and forward, to a computation module, the associated information of the second request for generating a report; the computation module, connected at least to the IPM module, to forward the report to the IPM module for storing at the storage unit; the user interface module to further receive: a third request for generation of a second dashboard; and a fourth request for adding the first dashboard as a supporting dashboard to the second dashboard using the stored report; the IPM module to further interconnect the first dashboard with the second dashboard; and the computation module to further compute pre-computed data, the pre-computed data comprising one or more values for one or more KPIs based on one or more operations, and wherein the pre-computed data is further used to filter the one or more values of the one or more KPIs in the second dashboard.
OBJECTS OF THE DISCLOSURE
[0025] Some of the objects of the present disclosure, which at least one embodiment disclosed herein satisfies are listed herein below.
[0026] It is an object of the present disclosure to provide a system for efficiently processing and computing data, allowing users to create interconnected dashboards and perform computations on the pre-computed data.
[0027] It is another object of the present disclosure to facilitate the creation, computation, and visualization of interconnected dashboards.
[0028] It is another object of the present disclosure to provide a solution that eliminates the need to monitor multiple dashboards simultaneously.
[0029] It is another object of the present disclosure to provide a solution that works according to a sequential execution approach for reducing the overall computation time for associated dashboards and providing linked results effortlessly.
[0030] It is yet another object of the present disclosure to provide a method of integrating the outcomes of computations for different KPIs by providing a visual representation of the relationships and dependencies between KPIs.
BRIEF DESCRIPTION OF THE DRAWINGS
[0031] The accompanying drawings, which are incorporated herein and constitute a part of this disclosure, illustrate exemplary embodiments of the disclosed methods and systems in which like reference numerals refer to the same parts throughout the different drawings. Components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present disclosure. Also, the embodiments shown in the figures are not to be construed as limiting the disclosure, but the possible variants of the method and system according to the disclosure are illustrated herein to highlight the advantages of the disclosure. It will be appreciated by those skilled in the art that disclosure of such drawings includes disclosure of electrical components or circuitry commonly used to implement such components.
[0032] FIG. 1 illustrates an exemplary block diagram of an integrated performance management system, in accordance with the exemplary embodiments of the present disclosure.
[0033] FIG. 2 illustrates an exemplary block diagram of a computing device upon which the features of the present disclosure may be implemented, in accordance with an exemplary implementation of the present disclosure.
[0034] FIG. 3 illustrates an exemplary block diagram of a system for generation of one or more interconnected dashboards, in accordance with exemplary implementations of the present disclosure.
[0035] FIG. 4 illustrates a method flow diagram for generation of one or more interconnected dashboards, in accordance with exemplary implementations of the present disclosure.
[0036] FIG. 5 illustrates an exemplary system architecture for implementing interlinked dashboards, in accordance with the exemplary embodiments of the present disclosure.
[0037] FIG. 6 illustrates an exemplary sequence flow diagram illustrating a process for generation of one or more interconnected dashboards, in accordance with exemplary implementations of the present disclosure.
[0038] The foregoing shall be more apparent from the following more detailed description of the disclosure.
DETAILED DESCRIPTION
[0039] In the following description, for the purposes of explanation, various specific details are set forth in order to provide a thorough understanding of embodiments of the present disclosure. It will be apparent, however, that embodiments of the present disclosure may be practiced without these specific details. Several features described hereafter may each be used independently of one another or with any combination of other features. An individual feature may not address any of the problems discussed above or might address only some of the problems discussed above.
[0040] The ensuing description provides exemplary embodiments only, and is not intended to limit the scope, applicability, or configuration of the disclosure. Rather, the ensuing description of the exemplary embodiments will provide those skilled in the art with an enabling description for implementing an exemplary embodiment. It should be understood that various changes may be made in the function and arrangement of elements without departing from the spirit and scope of the disclosure as set forth.
[0041] Specific details are given in the following description to provide a thorough understanding of the embodiments. However, it will be understood by one of ordinary skill in the art that the embodiments may be practiced without these specific details. For example, circuits, systems, processes, and other components may be shown as components in block diagram form in order not to obscure the embodiments in unnecessary detail.
[0042] Also, it is noted that individual embodiments may be described as a process which is depicted as a flowchart, a flow diagram, a data flow diagram, a structure diagram, or a block diagram. Although a flowchart may describe the operations as a sequential process, many of the operations may be performed in parallel or concurrently. In addition, the order of the operations may be re-arranged. A process is terminated when its operations are completed but could have additional steps not included in a figure.
[0043] The word “exemplary” and/or “demonstrative” is used herein to mean serving as an example, instance, or illustration. For the avoidance of doubt, the subject matter disclosed herein is not limited by such examples. In addition, any aspect or design described herein as “exemplary” and/or “demonstrative” is not necessarily to be construed as preferred or advantageous over other aspects or designs, nor is it meant to preclude equivalent exemplary structures and techniques known to those of ordinary skill in the art. Furthermore, to the extent that the terms “includes,” “has,” “contains,” and other similar words are used in either the detailed description or the claims, such terms are intended to be inclusive, in a manner similar to the term “comprising” as an open transition word, without precluding any additional or other elements.
[0044] As used herein, a “processing unit” or “processor” or “operating processor” includes one or more processors, wherein a processor refers to any logic circuitry for processing instructions. A processor may be a general-purpose processor, a special-purpose processor, a conventional processor, a digital signal processor, a plurality of microprocessors, one or more microprocessors in association with a Digital Signal Processing (DSP) core, a controller, a microcontroller, Application Specific Integrated Circuits, Field Programmable Gate Array circuits, any other type of integrated circuit, etc. The processor may perform signal coding, data processing, input/output processing, and/or any other functionality that enables the working of the system according to the present disclosure. More specifically, the processor or processing unit is a hardware processor.
[0045] As used herein, “a user equipment”, “a user device”, “a smart-user-device”, “a smart-device”, “an electronic device”, “a mobile device”, “a handheld device”, “a wireless communication device”, “a mobile communication device” or “a communication device” may be any electrical, electronic and/or computing device or equipment capable of implementing the features of the present disclosure. The user equipment/device may include, but is not limited to, a mobile phone, smart phone, laptop, a general-purpose computer, desktop, personal digital assistant, tablet computer, wearable device or any other computing device which is capable of implementing the features of the present disclosure. Also, the user device may contain at least one input means configured to receive an input from at least one of a transceiver unit, a processing unit, a storage unit, a detection unit and any other such unit(s) which are required to implement the features of the present disclosure.
[0046] As used herein, “storage unit” or “memory unit” refers to a machine or computer-readable medium including any mechanism for storing information in a form readable by a computer or similar machine. For example, a computer-readable medium includes read-only memory (“ROM”), random access memory (“RAM”), magnetic disk storage media, optical storage media, flash memory devices or other types of machine-accessible storage media. The storage unit stores at least the data that may be required by one or more units of the system to perform their respective functions.
[0047] As used herein, “interface” or “user interface” refers to a shared boundary across which two or more separate components of a system exchange information or data. The interface may also refer to a set of rules or protocols that define communication or interaction of one or more modules or one or more units with each other, which also includes the methods, functions, or procedures that may be called.
[0048] All modules, units, and components used herein, unless explicitly excluded herein, may be software modules or hardware processors, the processors being a general-purpose processor, a special-purpose processor, a conventional processor, a digital signal processor (DSP), a plurality of microprocessors, one or more microprocessors in association with a DSP core, a controller, a microcontroller, Application Specific Integrated Circuits (ASIC), Field Programmable Gate Array circuits (FPGA), any other type of integrated circuit, etc.
[0049] As used herein, the user interface module may include an in-built transceiver unit that has at least one receiver and at least one transmitter configured respectively for receiving and transmitting data, signals, information, or a combination thereof between units/components within the system and/or connected with the system.
[0050] As discussed in the background section, the current known solutions have several shortcomings. The present disclosure aims to overcome the above-mentioned and other existing problems in this field of technology by providing a method and system for generating one or more interconnected dashboards.
[0051] FIG. 1 illustrates an exemplary block diagram of an integrated performance management system [100], in accordance with the exemplary embodiments of the present disclosure. Referring to FIG. 1, the network performance management system [100] comprises various sub-systems such as: integrated performance management module [100a], normalization layer [100b], computation layer (CL) [100d], anomaly detection layer [100o], streaming engine [100l], load balancer (LB) [100k], operations and management system [100p], API gateway system [100r], analysis engine [100h], parallel computing framework [100i], forecasting engine [100t], distributed file system, mapping layer [100s], distributed data lake [100u], scheduling layer [100g], reporting engine [100m], message broker [100e], graph layer [100f], caching layer [100c], service quality manager [100q] and correlation engine [100n]. Exemplary connections between these subsystems are also as shown in FIG. 1. However, it will be appreciated by those skilled in the art that the present disclosure is not limited to the connections shown in the diagram, and any other connections between various subsystems that are needed to realise the effects are within the scope of this disclosure.
[0052] The various components of the system [100] may include the following:
[0053] Integrated performance management module [100a] comprises a 5G performance engine [100v] and a 5G Key Performance Indicator (KPI) Engine [100w].
[0054] 5G Performance Engine [100v]: The 5G Performance Engine [100v] is a crucial component of the integrated system, responsible for collecting, processing, and managing performance counter data from various data sources within the network. The gathered data includes metrics such as connection speed, latency, data transfer rates, and many others. This raw data is then processed and aggregated as required, forming a comprehensive overview of network performance. The processed information is then stored in a Distributed Data Lake [100u], a centralized, scalable, and flexible storage solution, allowing for easy access and further analysis. The 5G Performance Engine [100v] also enables the reporting and visualization of this performance counter data, thus providing network administrators with a real-time, insightful view of the network's operation. Through these visualizations, operators can monitor the network's performance, identify potential issues, and make informed decisions to enhance network efficiency and reliability.
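By way of illustration only, the kind of aggregation the performance engine might apply to raw counter samples before storage may be sketched as below. The per-minute sample format and the 15-minute bucketing interval are hypothetical, not the engine's actual schema.

```python
def aggregate_counters(samples, interval_minutes=15):
    """Roll raw per-minute counter samples up into per-interval
    averages; each sample is a hypothetical (minute, value) pair."""
    buckets = {}
    for minute, value in samples:
        # Group samples into fixed-size interval buckets.
        buckets.setdefault(minute // interval_minutes, []).append(value)
    # Average each bucket to form the aggregated overview.
    return {b: sum(v) / len(v) for b, v in buckets.items()}

# Two 15-minute buckets of hypothetical latency samples (ms).
samples = [(0, 10.0), (5, 14.0), (14, 12.0), (15, 20.0), (20, 22.0)]
agg = aggregate_counters(samples)
```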
[0055] 5G Key Performance Indicator (KPI) Engine [100w]: The 5G KPI Engine [100w] is a dedicated component tasked with managing the KPIs of all the network elements. It uses the performance counters, which are collected and processed by the 5G Performance Management engine from various data sources. These counters, encapsulating crucial performance data, are harnessed by the KPI engine [100w] to calculate essential KPIs. These KPIs might include data throughput, latency, packet loss rate, and more. Once the KPIs are computed, they are segregated based on the aggregation requirements, offering a multi-layered and detailed understanding of network performance. The processed KPI data is then stored in the Distributed Data Lake [100u], ensuring a highly accessible, centralized, and scalable data repository for further analysis and utilization. Similar to the Performance Management engine, the KPI engine [100w] is also responsible for reporting and visualization of KPI data. This functionality allows network administrators to gain a comprehensive, visual understanding of the network's performance, thus supporting informed decision-making and efficient network management.
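By way of illustration only, deriving KPIs such as packet loss rate and throughput from raw counters may be sketched as below. The counter names and formulas are illustrative assumptions, not the KPI engine's actual definitions.

```python
def packet_loss_rate(counters):
    """Derive a packet-loss-rate KPI from raw counters; the counter
    names used here are illustrative, not the engine's schema."""
    sent = counters["packets_sent"]
    lost = counters["packets_lost"]
    return 0.0 if sent == 0 else lost / sent

def throughput_mbps(counters, interval_seconds):
    """Bytes transferred per interval, converted to megabits per second."""
    return counters["bytes_transferred"] * 8 / 1e6 / interval_seconds

counters = {"packets_sent": 10_000, "packets_lost": 25,
            "bytes_transferred": 9_000_000_000}
loss = packet_loss_rate(counters)
mbps = throughput_mbps(counters, 900)  # over a 15-minute interval
```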
[0056] Ingestion layer: The Ingestion layer forms a key part of the Integrated Performance Management system. Its primary function is to establish an environment capable of handling diverse types of incoming data. This data may include Alarms, Counters, Configuration parameters, Call Detail Records (CDRs), Infrastructure metrics, Logs, and Inventory data, all of which are crucial for maintaining and optimizing the network's performance. Upon receiving this data, the Ingestion layer processes it by validating its integrity and correctness to ensure it is fit for further use. Following validation, the data is routed to various components of the system, including the Normalization layer, Streaming Engine, Streaming Analytics, and Message Brokers. The destination is chosen based on where the data is required for further analytics and processing. By serving as the first point of contact for incoming data, the Ingestion layer plays a vital role in managing the data flow within the system, thus supporting comprehensive and accurate network performance analysis.
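By way of illustration only, the validate-then-route behaviour of the Ingestion layer may be sketched as below. The record fields, record types, and destination mapping are hypothetical stand-ins for the Alarms/Counters/CDR routing described above.

```python
def route(record):
    """Validate an incoming record's integrity, then pick a
    downstream destination based on its type (illustrative only)."""
    required = {"type", "source", "payload"}
    if not required <= record.keys():
        raise ValueError("record failed integrity validation")
    destinations = {
        "counter": "normalization_layer",
        "alarm": "streaming_engine",
        "cdr": "message_broker",
    }
    # Unknown types fall back to the message broker in this sketch.
    return destinations.get(record["type"], "message_broker")

dest = route({"type": "counter", "source": "gNB-1", "payload": {"rrc": 42}})
```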
[0057] Normalization layer [100b]: The Normalization Layer [100b] serves to standardize, enrich, and store data into the appropriate databases. It takes in data that has been ingested and adjusts it to a common standard, making it easier to compare and analyse. This process of "normalization" reduces redundancy and improves data integrity. Upon completion of normalization, the data is stored in various databases like the Distributed Data Lake [100u], Caching Layer, and Graph Layer, depending on its intended use. The choice of storage determines how the data can be accessed and used in the future. Additionally, the Normalization Layer [100b] produces data for the Message Broker, a system that enables communication between different parts of the performance management system through the exchange of data messages. Moreover, the Normalization Layer [100b] supplies the standardized data to several other subsystems. These include the Analysis Engine for detailed data examination, the Correlation Engine [100n] for detecting relationships among various data elements, the Service Quality Manager for maintaining and improving the quality of services, and the Streaming Engine for processing real-time data streams. These subsystems depend on the normalized data to perform their operations effectively and accurately, demonstrating the Normalization Layer's [100b] critical role in the entire system.
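By way of illustration only, standardizing vendor-specific records onto a common schema and enriching them with a default field may be sketched as below. The field names and the mapping table are hypothetical.

```python
def normalize(record):
    """Map vendor-specific field names onto a common schema and
    enrich with a default unit; field names here are hypothetical."""
    field_map = {"elem_id": "element_id", "ne_name": "element_id",
                 "ts": "timestamp", "val": "value"}
    out = {field_map.get(k, k): v for k, v in record.items()}
    out["unit"] = out.get("unit", "count")  # enrichment with a default
    return out

# Two vendor formats for the same measurement normalize identically.
a = normalize({"elem_id": "gNB-1", "ts": 1700000000, "val": 5})
b = normalize({"ne_name": "gNB-1", "ts": 1700000000, "val": 5})
```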
20
[0058] Caching layer [100c]: The Caching Layer [100c] in the Integrated Performance Management system plays a significant role in data management and optimization. During the initial phase, the Normalization Layer [100b] processes incoming raw data to create a standardized format, enhancing consistency and comparability. The Normalization Layer [100b] then inserts this normalized data into various databases. One such database is the Caching Layer [100c]. The Caching Layer [100c] is a high-speed data storage layer which temporarily holds data that is likely to be reused, to improve the speed and performance of data retrieval. By storing frequently accessed data in the Caching Layer [100c], the system significantly reduces the time taken to access this data, improving overall system efficiency and performance. Further, the Caching Layer [100c] serves as an intermediate layer between the data sources and the sub-systems, such as the Analysis Engine, Correlation Engine [100n], Service Quality Manager, and Streaming Engine. The Normalization Layer [100b] is responsible for providing these sub-systems with the necessary data from the Caching Layer [100c].
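For illustration only, the read-through behaviour described above may be sketched as follows; the class, key names, and expiry policy are assumptions of this sketch, not part of the disclosed system.

```python
import time

class CachingLayer:
    """Minimal in-memory cache sketch with time-based expiry (illustrative only)."""

    def __init__(self, ttl_seconds=300):
        self._store = {}           # key -> (value, expiry timestamp)
        self._ttl = ttl_seconds

    def get(self, key, fetch_from_source):
        """Return a cached value, or fetch it from the slower source and cache it."""
        entry = self._store.get(key)
        if entry is not None and entry[1] > time.time():
            return entry[0]        # cache hit: avoids the slow retrieval path
        value = fetch_from_source(key)   # cache miss: go to the data source
        self._store[key] = (value, time.time() + self._ttl)
        return value

cache = CachingLayer(ttl_seconds=60)
throughput = cache.get("cell-42/throughput", lambda k: 118.4)  # miss: fetches
throughput = cache.get("cell-42/throughput", lambda k: 0.0)    # hit: returns 118.4
```

In this sketch, frequently accessed values are served from memory, mirroring how the Caching Layer [100c] reduces data-retrieval time for the consuming sub-systems.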
[0059] Computation layer [100d]: The Computation Layer [100d] in the Integrated Performance Management system serves as the main hub for complex data processing tasks. In the initial stages, raw data is gathered, normalized, and enriched by the Normalization Layer [100b]. The Normalization Layer [100b] then inserts this standardized data into multiple databases including the Distributed Data Lake [100u], Caching Layer [100c], and Graph Layer, and also feeds it to the Message Broker. Within the Computation Layer [100d], several powerful sub-systems such as the Analysis Engine, Correlation Engine [100n], Service Quality Manager, and Streaming Engine utilize the normalized data. These systems are designed to execute various data processing tasks. The Analysis Engine performs in-depth data analytics to generate insights from the data. The Correlation Engine [100n] identifies and understands the relations and patterns within the data. The Service Quality Manager assesses and ensures the quality of the services. Finally, the Streaming Engine processes and analyses the real-time data feeds. In essence, the Computation Layer [100d] is where all major computation and data processing tasks occur. It uses the normalized data provided by the Normalization Layer [100b], processing it to generate useful insights, ensure service quality, understand data patterns, and facilitate real-time data analytics.
[0060] Message broker [100e]: The Message Broker [100e], an integral part of the Integrated Performance Management system, operates as a publish-subscribe messaging system. It orchestrates and maintains the real-time flow of data from various sources and applications. At its core, the Message Broker [100e] facilitates communication between data producers and consumers through message-based topics. This creates an advanced platform for contemporary distributed applications. With the ability to accommodate a large number of permanent or ad-hoc consumers, the Message Broker [100e] demonstrates immense flexibility in managing data streams. Moreover, it leverages the filesystem for storage and caching, boosting its speed and efficiency. The design of the Message Broker [100e] is centred around reliability. It is engineered to be fault-tolerant and mitigate data loss, ensuring the integrity and consistency of the data. With its robust design and capabilities, the Message Broker [100e] forms a critical component in managing and delivering real-time data in the system.
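The publish-subscribe pattern described above may, purely by way of illustration, be sketched as follows; the topic name and message fields are assumptions of this sketch and do not appear in the disclosure.

```python
from collections import defaultdict

class MessageBroker:
    """Minimal publish-subscribe broker sketch (illustrative only)."""

    def __init__(self):
        self._subscribers = defaultdict(list)  # topic -> list of consumer callbacks

    def subscribe(self, topic, callback):
        """Register a permanent or ad-hoc consumer for a topic."""
        self._subscribers[topic].append(callback)

    def publish(self, topic, message):
        # Deliver the message to every consumer subscribed to this topic.
        for callback in self._subscribers[topic]:
            callback(message)

broker = MessageBroker()
received = []
broker.subscribe("normalized.counters", received.append)
broker.publish("normalized.counters", {"cell": "42", "throughput_mbps": 118.4})
# `received` now holds the published counter message
```

The sketch shows only topic-based fan-out; durability and fault tolerance, which the Message Broker [100e] additionally provides via filesystem storage, are omitted.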
[0061] Graph layer [100f]: The Graph Layer [100f], serving as the Relationship Modeler, plays a pivotal role in the Integrated Performance Management system. It can model a variety of data types, including alarm, counter, configuration, CDR data, Infra-metric data, 5G Probe Data, and Inventory data. Equipped with the capability to establish relationships among diverse types of data, the Relationship Modeler offers extensive modelling capabilities. For instance, it can model Alarm and Counter data, or Vprobe and Alarm data, elucidating their interrelationships. Moreover, the Modeler is adept at processing the steps provided in the model and delivering the results to the requesting system, whether it be a Parallel Computing system, Workflow Engine, Query Engine, Correlation System [100n], 5G Performance Management Engine, or 5G KPI Engine [100w]. With its powerful modelling and processing capabilities, the Graph Layer [100f] forms an essential part of the system, enabling the processing and analysis of complex relationships between various types of network data.
[0062] Scheduling layer [100g]: The Scheduling Layer [100g] serves as a key element of the Integrated Performance Management System, endowed with the ability to execute tasks at predetermined intervals set according to user preferences. A task might be an activity performing a service call, an API call to another microservice, or the execution of an Elastic Search query whose output is stored in the Distributed Data Lake [100u] or Distributed File System, or sent to another micro-service. The versatility of the Scheduling Layer [100g] extends to facilitating graph traversals via the Mapping Layer to execute tasks. This crucial capability enables seamless and automated operations within the system, ensuring that various tasks and services are performed on schedule, without manual intervention, enhancing the system's efficiency and performance. In sum, the Scheduling Layer [100g] orchestrates the systematic and periodic execution of tasks, making it an integral part of the efficient functioning of the entire system.
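Interval-based task execution of the kind described above may be sketched, for illustration, as follows; a production scheduler would persist schedules and survive restarts, and the function name and interval values here are assumptions of the sketch.

```python
import threading
import time

def schedule_task(task, interval_seconds, repetitions):
    """Run `task` every `interval_seconds`, `repetitions` times (illustrative sketch)."""
    def runner():
        for _ in range(repetitions):
            task()                      # e.g. a service call or API call
            time.sleep(interval_seconds)
    worker = threading.Thread(target=runner, daemon=True)
    worker.start()
    return worker

results = []
worker = schedule_task(lambda: results.append("report generated"), 0.01, 3)
worker.join()
# results == ["report generated"] * 3
```

The sketch only demonstrates the periodic, unattended execution of a task; the actual Scheduling Layer [100g] additionally dispatches outputs to storage or other micro-services.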
[0063] Analysis Engine [100h]: The Analysis Engine [100h] forms a crucial part of the Integrated Performance Management System, designed to provide an environment where users can configure and execute workflows for a wide array of use-cases. This facility aids in the debugging process and facilitates a better understanding of call flows. With the Analysis Engine [100h], users can perform queries on data sourced from various subsystems or external gateways. This capability allows for an in-depth overview of data and aids in pinpointing issues. The system's flexibility allows users to configure specific policies aimed at identifying anomalies within the data. When these policies detect abnormal behaviour or policy breaches, the system sends notifications, ensuring swift and responsive action. In essence, the Analysis Engine [100h] provides a robust analytical environment for systematic data interrogation, facilitating efficient problem identification and resolution, thereby contributing significantly to the system's overall performance management.
[0064] Parallel Computing Framework [100i]: The Parallel Computing Framework [100i] is a key aspect of the Integrated Performance Management System, providing a user-friendly yet advanced platform for executing computing tasks in parallel. This framework highlights both scalability and fault tolerance, crucial for managing vast amounts of data. Users can input data via Distributed File System (DFS) [100j] locations or Distributed Data Lake (DDL) indices. The framework supports the creation of task chains by interfacing with the Service Configuration Management (SCM) Sub-System. Each task in a workflow is executed sequentially, but multiple chains can be executed simultaneously, optimizing processing time. To accommodate varying task requirements, the service supports the allocation of specific host lists for different computing tasks. The Parallel Computing Framework [100i] is an essential tool for enhancing processing speeds and efficiently managing computing resources, significantly improving the system's performance management capabilities.
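The execution model described above, in which tasks within a chain run sequentially while independent chains run concurrently, may be sketched for illustration as follows; the chain contents and sample values are assumptions of this sketch.

```python
from concurrent.futures import ThreadPoolExecutor

def run_chain(tasks, data):
    """Execute one task chain sequentially: each task feeds the next."""
    for task in tasks:
        data = task(data)
    return data

def run_chains_in_parallel(chains, inputs):
    """Run several independent chains concurrently (illustrative sketch)."""
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(run_chain, chain, item)
                   for chain, item in zip(chains, inputs)]
        return [f.result() for f in futures]

# Two hypothetical chains: normalize raw counter samples, then aggregate them.
normalize = lambda samples: [s / 1000 for s in samples]   # e.g. kbps -> Mbps
aggregate = lambda samples: sum(samples) / len(samples)   # mean value
chains = [[normalize, aggregate], [normalize, max]]
print(run_chains_in_parallel(chains, [[1000, 3000], [2000, 4000]]))  # [2.0, 4.0]
```

Each chain preserves its internal ordering, while the thread pool lets the chains overlap in time, mirroring the simultaneous-chain behaviour described for the framework.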
[0065] Distributed File System [100j]: The Distributed File System (DFS) [100j] is a critical component of the Integrated Performance Management System, enabling multiple clients to access and interact with data seamlessly. This file system is designed to manage data files that are partitioned into numerous segments known as chunks. In the context of a network with vast data, the DFS [100j] effectively allows for the distribution of data across multiple nodes. This architecture enhances both the scalability and redundancy of the system, ensuring optimal performance even with large data sets. The DFS [100j] also supports diverse operations, facilitating the flexible interaction with and manipulation of data. This accessibility is paramount for a system that requires constant data input and output, as is the case in a robust performance management system.
[0066] Load Balancer [100k]: The Load Balancer (LB) [100k] is a vital component of the Integrated Performance Management System, designed to efficiently distribute incoming network traffic across a multitude of backend servers or microservices. Its purpose is to ensure the even distribution of data requests, leading to optimized server resource utilization, reduced latency, and improved overall system performance. The LB [100k] implements various routing strategies to manage traffic. These include round-robin scheduling, header-based request dispatch, and context-based request dispatch. Round-robin scheduling is a simple method of rotating requests evenly across available servers. In contrast, header- and context-based dispatching allow for more intelligent, request-specific routing. Header-based dispatching routes requests based on data contained within the headers of the Hypertext Transfer Protocol (HTTP) requests. Context-based dispatching routes traffic based on the contextual information about the incoming requests. For example, in an event-driven architecture, the LB [100k] manages events and event acknowledgments, forwarding requests or responses to the specific microservice that has requested the event. This system ensures efficient, reliable, and prompt handling of requests, contributing to the robustness and resilience of the overall performance management system.
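The round-robin and header-based strategies described above may be sketched, purely for illustration, as follows; the server names and the `X-Service` header are hypothetical, introduced only for this sketch.

```python
import itertools

class LoadBalancer:
    """Sketch of two of the routing strategies described above (illustrative only)."""

    def __init__(self, servers, header_routes=None):
        self._rotation = itertools.cycle(servers)  # round-robin rotation of backends
        self._header_routes = header_routes or {}  # header value -> pinned server

    def route(self, request):
        # Header-based dispatch: a recognised header value pins the request to a
        # specific backend; otherwise fall back to round-robin rotation.
        target = self._header_routes.get(request.get("X-Service"))
        return target if target else next(self._rotation)

lb = LoadBalancer(["server-a", "server-b"],
                  header_routes={"reporting": "server-c"})
print(lb.route({}))                          # server-a (round-robin)
print(lb.route({}))                          # server-b
print(lb.route({"X-Service": "reporting"}))  # server-c (header-based)
```

Context-based dispatch would extend `route` to inspect broader request context (e.g. event ownership in an event-driven architecture) rather than a single header value.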
[0067] Streaming Engine [100l]: The Streaming Engine [100l], also referred to as Stream Analytics, is a critical subsystem in the Integrated Performance Management System. This engine is specifically designed for high-speed data pipelining to the User Interface (UI). Its core objective is to ensure real-time data processing and delivery, enhancing the system's ability to respond promptly to dynamic changes. Data is received from various connected subsystems and processed in real-time by the Streaming Engine [100l]. After processing, the data is streamed to the UI, fostering rapid decision-making and responses. The Streaming Engine [100l] cooperates with the Distributed Data Lake [100u], Message Broker [100e], and Caching Layer [100c] to provide seamless, real-time data flow. Stream Analytics is designed to perform required computations on incoming data instantly, ensuring that the most relevant and up-to-date information is always available at the UI. Furthermore, this system can also retrieve data from the Distributed Data Lake [100u], Message Broker [100e], and Caching Layer [100c] as per the requirement and deliver it to the UI in real-time. The Streaming Engine's [100l] goal is to provide fast, reliable, and efficient data streaming, contributing to the overall performance of the management system.
[0068] Reporting Engine [100m]: The Reporting Engine [100m] is a key subsystem of the Integrated Performance Management System. The fundamental purpose of designing the Reporting Engine [100m] is to dynamically create report layouts of API data, catered to individual client requirements, and deliver these reports via the Notification Engine. The Reporting Engine [100m] serves as the primary interface for creating custom reports based on the data visualized through the client's dashboard. These custom dashboards, created by the client through the User Interface (UI), provide the basis for the Reporting Engine [100m] to process and compile data from various interfaces. The main output of the Reporting Engine [100m] is a detailed report generated in Excel format. The Reporting Engine's [100m] unique capability to parse data from different subsystem interfaces, process it according to the client's specifications and requirements, and generate a comprehensive report makes it an essential component of this performance management system. Furthermore, the Reporting Engine [100m] integrates seamlessly with the Notification Engine to ensure timely and efficient delivery of reports to clients via email, ensuring the information is readily accessible and usable, thereby improving overall client satisfaction and system usability.
[0069] FIG. 2 illustrates an exemplary block diagram of a computing device [200] (also referred to herein as a computer system [200]) upon which the features of the present disclosure may be implemented in accordance with exemplary implementation of the present disclosure. In an implementation, the computing device [200] may also implement a method for generation of one or more interconnected dashboards, utilising the system. In another implementation, the computing device [200] itself implements the method for generation of one or more interconnected dashboards using one or more units configured within the computing device [200], wherein said one or more units are capable of implementing the features as disclosed in the present disclosure.
[0070] The computing device [200] encompasses a wide range of electronic devices capable of processing data and performing computations. Examples of the computing device [200] include, but are not limited to, personal computers, laptops, tablets, smartphones, servers, and embedded systems. The devices may operate independently or as part of a network and can perform a variety of tasks such as data storage, retrieval, and analysis. Additionally, the computing device [200] may include peripheral devices, such as monitors, keyboards, and printers, as well as integrated components within larger electronic systems, highlighting their versatility in various technological applications.
[0071] The computing device [200] may include a bus [202] or other communication mechanism for communicating information, and a processor [204] coupled with the bus [202] for processing information. The processor [204] may be, for example, a general purpose microprocessor. The computing device [200] may also include a main memory [206], such as a random access memory (RAM), or other dynamic storage device, coupled to the bus [202] for storing information and instructions to be executed by the processor [204]. The main memory [206] may also be used for storing temporary variables or other intermediate information during execution of the instructions to be executed by the processor [204]. Such instructions, when stored in non-transitory storage media accessible to the processor [204], render the computing device [200] into a special-purpose machine that is customized to perform the operations specified in the instructions. The computing device [200] further includes a read only memory (ROM) [208] or other static storage device coupled to the bus [202] for storing static information and instructions for the processor [204].
[0072] A storage device [210], such as a magnetic disk, optical disk, or solid-state drive, is provided and coupled to the bus [202] for storing information and instructions. The computing device [200] may be coupled via the bus [202] to a display [212], such as a cathode ray tube (CRT), Liquid Crystal Display (LCD), Light Emitting Diode (LED) display, Organic LED (OLED) display, etc., for displaying information to a computer user. An input device [214], including alphanumeric and other keys, touch screen input means, etc., may be coupled to the bus [202] for communicating information and command selections to the processor [204]. Another type of user input device may be a cursor controller [216], such as a mouse, a trackball, or cursor direction keys, for communicating direction information and command selections to the processor [204], and for controlling cursor movement on the display [212]. This input device typically has two degrees of freedom in two axes, a first axis (e.g., x) and a second axis (e.g., y), that allow the device to specify positions in a plane.
[0073] The computing device [200] may implement the techniques described herein using customized hard-wired logic, one or more ASICs or FPGAs, firmware and/or program logic which in combination with the computing device [200] causes or programs the computing device [200] to be a special-purpose machine. According to one implementation, the techniques herein are performed by the computing device [200] in response to the processor [204] executing one or more sequences of one or more instructions contained in the main memory [206]. Such instructions may be read into the main memory [206] from another storage medium, such as the storage device [210]. Execution of the sequences of instructions contained in the main memory [206] causes the processor [204] to perform the process steps described herein. In alternative implementations of the present disclosure, hard-wired circuitry may be used in place of or in combination with software instructions.
[0074] The computing device [200] may also include a communication interface [218] coupled to the bus [202]. The communication interface [218] provides a two-way data communication coupling to a network link [220] that is connected to a local network [222]. For example, the communication interface [218] may be an integrated services digital network (ISDN) card, cable modem, satellite modem, or a modem to provide a data communication connection to a corresponding type of telephone line. As another example, the communication interface [218] may be a local area network (LAN) card to provide a data communication connection to a compatible LAN. Wireless links may also be implemented. In any such implementation, the communication interface [218] sends and receives electrical, electromagnetic, or optical signals that carry digital data streams representing various types of information.
[0075] The computing device [200] can send messages and receive data, including program code, through the network(s), the network link [220] and the communication interface [218]. In the Internet example, a server [230] might transmit a requested code for an application program through the Internet [228], the ISP [226], the local network [222], host [224] and the communication interface [218]. The received code may be executed by the processor [204] as it is received, and/or stored in the storage device [210], or other non-volatile storage for later execution.
[0076] Referring to FIG. 3, an exemplary block diagram of a system [300] for generation of one or more interconnected dashboards is shown, in accordance with the exemplary implementations of the present disclosure. The system [300] comprises at least one user interface module [302], at least one Integrated Performance Management (IPM) module [100a], at least one storage unit [305], and at least one computation module [306]. All of the components/units of the system [300], including all units shown within the figures, are assumed to be connected to each other unless otherwise indicated below. Also, while only a few units are shown in FIG. 3, the system [300] may comprise multiple such units, or any number of said units, as required to implement the features of the present disclosure.
[0077] The system [300] is configured for generation of the one or more interconnected dashboards with the help of the interconnection between the components/units of the system [300].
[0078] For generation of the one or more interconnected dashboards, the user interface module [302] of the system [300] is configured to receive a first request for generation of a first dashboard. The first request in the specification refers to the initial user action to generate a dashboard within the network performance management system. The first request includes several key components such as the dashboard's name, the type of data it will display, and the specific key performance indicators (KPIs) and metrics the user wants to monitor. For example, a user might issue a first request to create a "Network Traffic Dashboard," specifying that it should display KPIs like total data throughput, packet loss rate, and latency over selected time intervals. Additionally, the request can include parameters for data aggregation methods (e.g., hourly averages, daily totals) and any initial filter criteria (e.g., specific geographic regions or network nodes). By defining these elements, the first request sets up the fundamental structure and purpose of the dashboard, enabling the system to gather and organize the necessary data for effective performance monitoring and analysis.
[0079] The user interface module [302] is further configured to receive a second request to treat the first dashboard as a waterfall dashboard. The second request refers to the user action of designating the first dashboard as a waterfall dashboard. The second request includes specific information such as the type of dashboard being created, the parameters that need to be precomputed, and the logic for how these parameters should be processed. For example, if the first dashboard tracks network throughput, the second request might specify that the busiest hour for each day should be calculated and stored. This precomputed data can then be used in the second dashboard to analyse success call ratios during those busy hours. By setting these parameters in the second request, users ensure that the necessary computations are done in advance, streamlining subsequent analyses and making the overall process more efficient and accurate.
[0080] The Integrated Performance Management (IPM) module [100a] is configured to receive the second request from the user interface module [302]. The IPM module [100a] is further configured to send an acknowledgement of the second request for treating the first dashboard as the waterfall dashboard. It is pertinent to note that the first dashboard, treated as the waterfall dashboard, may be used as the supporting dashboard for an existing dashboard. The waterfall dashboard is a specialized type of dashboard within a system that is designated for precomputation. This means that the data and key performance indicators (KPIs) associated with a waterfall dashboard are calculated in advance, allowing this precomputed data to be used as a foundational input for other dashboards. For example, if a dashboard tracks the busiest hour of network traffic each day (a Throughput KPI) over the past 90 days, this precomputed data can then be used to calculate and display related metrics, such as the Success Call Ratio KPI, within the same or another dashboard. By designating a dashboard as a waterfall dashboard, users can streamline complex sequential calculations, ensuring efficient and timely performance analysis without the need to recompute data repeatedly. This approach enhances the overall efficiency and effectiveness of network performance management by enabling interconnected and dependent dashboards to utilize precomputed outputs, thereby providing a more comprehensive and accurate understanding of network performance dynamics.
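The busiest-hour precomputation used in the example above may be sketched, for illustration only, as follows; the tuple shape of the throughput samples is an assumption of this sketch, not the disclosed storage format.

```python
from collections import defaultdict

def busiest_hour_per_day(samples):
    """Precompute the busiest hour of each day from throughput samples.

    `samples` is a list of (day, hour, throughput) tuples (illustrative shape).
    """
    per_day = defaultdict(dict)  # day -> {hour: aggregated throughput}
    for day, hour, throughput in samples:
        per_day[day][hour] = per_day[day].get(hour, 0.0) + throughput
    # For each day, keep the hour with the highest aggregated throughput.
    return {day: max(hours, key=hours.get) for day, hours in per_day.items()}

samples = [
    ("2024-01-01", 9, 120.0), ("2024-01-01", 18, 340.0),
    ("2024-01-02", 9, 410.0), ("2024-01-02", 18, 150.0),
]
print(busiest_hour_per_day(samples))  # {'2024-01-01': 18, '2024-01-02': 9}
```

The resulting day-to-hour mapping is the kind of precomputed output a waterfall dashboard could hand to a dependent dashboard, avoiding repeated recomputation.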
[0081] The Integrated Performance Management (IPM) module [100a] is further configured to save, in the storage unit [305], associated information of the second request. Associated information refers to the specific data and metadata required to process requests, generate reports, and perform computations within the dashboards in a network performance management system. This information can include details such as the time range for data analysis, the specific key performance indicators (KPIs) to be monitored, the type of computations to be performed on the KPIs, user preferences for data display, and configurations for integrating multiple dashboards. For example, if a user requests to generate a dashboard to monitor network throughput over the past 90 days, the associated information will include the selected KPI (network throughput), the specified time range (90 days), and any specific computation rules or operations (such as calculating the busiest hour of each day). Additionally, if the user wants to use this dashboard as a waterfall dashboard to support another dashboard that calculates the Success Call Ratio, the associated information will also include the necessary integration configurations and pre-computed values needed to link the two dashboards.
[0082] The Integrated Performance Management (IPM) module [100a] is further configured to forward, to the computation module [306], the associated information of the second request for generating a report. The computation module [306] is configured to forward the report to the IPM module [100a] for storing at the storage unit [305]. In an exemplary aspect, the report is created after the Computation Layer (CL) [100d] processes the necessary data retrieved from the Distributed File System (DFS) [100j]. The report includes detailed computations of key performance indicators (KPIs), aggregated data, and any other metrics specified by the user. For example, if the user has designated a Waterfall Dashboard to precompute the busiest hour of the day for network throughput over the past 90 days, the report will contain this computed data. Additionally, it might include calculations for success call ratios and other related KPIs over the same period. The generated report is then sent to the Integrated Performance Management (IPM) module [100a], where it is saved and can be used for further analysis or to create interconnected dashboards. This comprehensive report provides users with detailed insights and enables efficient performance management by precomputing and aggregating critical network performance data.
[0083] The user interface module [302] is further configured to receive a third request for generation of a second dashboard. The third request is a step where the user interface module [302] receives a request for the generation of a second dashboard. The third request includes specific details about the key performance indicators (KPIs) and aggregations that the user wants to incorporate into the second dashboard. Additionally, the third request may specify the operations to be applied to these selected KPIs and aggregations. For example, a user might request the generation of a second dashboard that includes a KPI for network latency and an aggregation of average latency over the last 30 days. The user might also specify an operation to filter this data to show only peak usage times.
[0084] The user interface module [302] is further configured to receive a fourth request for adding the first dashboard as a supporting dashboard to the second dashboard using the stored report. The fourth request involves adding the first dashboard, designated as a waterfall dashboard, as a supporting dashboard to the second dashboard. The fourth request includes utilizing the stored report, referred to by name or identifier, for interconnecting the dashboards and enabling the sequential execution of precomputed data from one dashboard to influence another. For example, the third request, which is for generating a second dashboard, involves setting up a new dashboard that could monitor different network parameters or KPIs. In this case, the user might want to incorporate insights from the first dashboard, such as the busiest hour of network usage, into the second dashboard's calculations. Upon the fourth request, the system links the first dashboard's precomputed data to the second dashboard, allowing for comprehensive analysis. For example, if the first dashboard calculates the busiest hour for network throughput, this data can then be used in the second dashboard to analyse the Success Call Ratio during those busy hours. Thus, the third request sets up the new monitoring parameters, while the fourth request integrates previously computed data to enhance the new dashboard's analytical capabilities.
[0085] The IPM module [100a] is further configured to interconnect the first dashboard with the second dashboard. The user interface module [302] is further configured to receive a selection of one or more key performance indicators (KPIs) and one or more aggregations in the first dashboard. The user interface module [302] is further configured to receive one or more operations to be applied on the selected one or more KPIs and the one or more aggregations. The computation module [306] is further configured to compute pre-computed data. It is important to note that the pre-computed data comprises one or more values for the one or more KPIs based on the one or more operations. It is further noted that the pre-computed data is further used to filter the one or more values of the one or more KPIs in the second dashboard.
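The filtering of the second dashboard's KPI values by the first dashboard's pre-computed data may be sketched, for illustration, as follows; the row shape and KPI names are assumptions of this sketch.

```python
def filter_by_precomputed(second_dashboard_rows, busiest_hours):
    """Keep only the second dashboard's KPI values that fall in the
    pre-computed busiest hours (illustrative sketch).

    `second_dashboard_rows` holds (day, hour, success_call_ratio) tuples and
    `busiest_hours` maps each day to its precomputed busiest hour.
    """
    return [row for row in second_dashboard_rows
            if busiest_hours.get(row[0]) == row[1]]

busiest_hours = {"2024-01-01": 18, "2024-01-02": 9}  # from the waterfall dashboard
rows = [
    ("2024-01-01", 9, 0.99), ("2024-01-01", 18, 0.91),
    ("2024-01-02", 9, 0.88), ("2024-01-02", 18, 0.97),
]
print(filter_by_precomputed(rows, busiest_hours))
# [('2024-01-01', 18, 0.91), ('2024-01-02', 9, 0.88)]
```

Here the waterfall dashboard's output acts as the filter predicate, so the second dashboard displays Success Call Ratio values only for the pre-computed busy hours.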
[0086] It is to be further noted that the one or more modules, units, and components (including but not limited to the user interface module [302], the Integrated Performance Management (IPM) module [100a], the storage unit [305], and the computation module [306]) used herein may be software modules configured via hardware modules/processors, or hardware processors, the processors being a general-purpose processor, a special purpose processor, a conventional processor, a digital signal processor, a plurality of microprocessors, one or more microprocessors in association with a DSP core, a controller, a microcontroller, Application Specific Integrated Circuits, Field Programmable Gate Array circuits, any other type of integrated circuits, etc.
[0087] The computation module [306] is further configured to set a time range for the associated information of the second request. The computation module [306] can define a specific period within which the data will be precomputed and analysed. For example, if a user specifies a time range of the last 30 days via the user interface module [302], the computation module [306] will use this time range to process and compute relevant KPIs for that period. This functionality ensures that the resulting analysis and insights are based on the user-defined timeframe, providing tailored and precise performance metrics for the specified duration.
[0088] The pre-computed data is computed for a time period within the set time range, wherein the time period is received from the user interface module [302]. The users can specify a particular time range through the user interface, such as the last 30 days or the previous quarter. The system will then use this specified time range to calculate the pre-computed data, such as the busiest hour for network throughput during that period. For example, if a user specifies a time range of the last 30 days through the user interface, the system will calculate the busiest hour for network throughput during those 30 days. This precomputed data can then be used to analyse other metrics, such as the Success Call Ratio, within the same 30-day period.
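Restricting the pre-computation to the user-defined time range may be sketched, for illustration, as follows; the (day, value) sample shape and the 30-day window are assumptions of this sketch.

```python
from datetime import date, timedelta

def within_time_range(samples, days=30, today=None):
    """Keep only samples inside the user-defined time range (illustrative sketch)."""
    today = today or date.today()
    cutoff = today - timedelta(days=days)
    # Samples older than the cutoff fall outside the set time range.
    return [(day, value) for day, value in samples if day >= cutoff]

samples = [(date(2024, 1, 1), 120.0), (date(2024, 3, 1), 340.0)]
recent = within_time_range(samples, days=30, today=date(2024, 3, 15))
# Only the 2024-03-01 sample falls within the last 30 days.
```

Any subsequent pre-computation, such as finding the busiest hour, would then operate only on the samples that survive this time-range filter.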
[0089] Referring to FIG. 4, an exemplary flow diagram of a method [400] for generation of one or more interconnected dashboards, in accordance with exemplary implementations of the present disclosure, is shown. In an implementation, the method [400] is performed by the system [300]. Further, in an implementation, the system [300] may be present in a server device to implement the features of the present disclosure. Also, as shown in FIG. 4, the method [400] starts at step [402].
[0090] At step [404], the method [400] comprises receiving, at a user interface module [302], a first request for generation of a first dashboard. The first request in the specification refers to the initial user action to generate a dashboard within the network performance management system. The first request includes several key components such as the dashboard's name, the type of data it will display, and the specific key performance indicators (KPIs) and metrics the user wants to monitor. For example, a user might issue a first request to create a "Network Traffic Dashboard," specifying that it should display KPIs like total data throughput, packet loss rate, and latency over selected time intervals. Additionally, the request can include parameters for data aggregation methods (e.g., hourly averages, daily totals) and any initial filter criteria (e.g., specific geographic regions or network nodes). By defining these elements, the first request sets up the fundamental structure and purpose of the dashboard, enabling the system to gather and organize the necessary data for effective performance monitoring and analysis.
[0091] At step [406], the method [400] comprises receiving, at the user interface module [302], a second request to treat the first dashboard as a waterfall dashboard. The second request refers to the user action of designating the first dashboard as a waterfall dashboard. The second request includes specific information such as the type of dashboard being created, the parameters that need to be precomputed, and the logic for how these parameters should be processed. For example, if the first dashboard tracks network throughput, the second request might specify that the busiest hour for each day should be calculated and stored. This precomputed data can then be used in the second dashboard to analyse success call ratios during those busy hours. By setting these parameters in the second request, users ensure that the necessary computations are done in advance, streamlining subsequent analyses, and making the overall process more efficient and accurate.
[0092] At step [408], the method [400] comprises receiving, at an integrated performance management (IPM) module [100a], the second request from the user interface module [302].
[0093] In an implementation of the present disclosure, the method [400] further
comprises sending, at the IPM module [100a], an acknowledgement of the second request for treating the first dashboard as the waterfall dashboard.
[0094] In an implementation of the present disclosure, in the method [400], the first dashboard, treated as the waterfall dashboard, may be used as the supporting dashboard for an existing dashboard.
[0095] The waterfall dashboard is a specialized type of dashboard within a system that is designated for precomputation. This means that the data and key performance indicators (KPIs) associated with a waterfall dashboard are calculated in advance, allowing this precomputed data to be used as a foundational input for other dashboards. For example, if a dashboard tracks the busiest hour of network traffic each day (a Throughput KPI) over the past 90 days, this precomputed data can then be used to calculate and display related metrics, such as the Success Call Ratio KPI, within the same or another dashboard. By designating a dashboard as a waterfall dashboard, users can streamline complex sequential calculations, ensuring efficient and timely performance analysis without the need to recompute data repeatedly. This approach enhances the overall efficiency and effectiveness of network performance management by enabling interconnected and dependent dashboards to utilize precomputed outputs, thereby providing a more comprehensive and accurate understanding of network performance dynamics.
[0096] At step [410], the method [400] comprises saving, by a storage unit [305], at the IPM module [100a], an associated information of the second request. Associated information refers to the specific data and metadata required to process requests, generate reports, and perform computations within the dashboards in a network performance management system. This information can include details such as the time range for data analysis, the specific key performance indicators (KPIs) to be monitored, the type of computations to be performed on the KPIs, user preferences for data display, and configurations for integrating multiple dashboards. For example, if a user requests to generate a dashboard to monitor network throughput over the past 90 days, the associated information will include the selected KPI (network throughput), the specified time range (90 days), and any specific computation rules or operations (such as calculating the busiest hour of each day). Additionally, if the user wants to use this dashboard as a waterfall dashboard to support another dashboard that calculates the Success Call Ratio, the associated information will also include the necessary integration configurations and pre-computed values needed to link the two dashboards.
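The associated information described above can be represented as a simple record. The field names and example values below are purely illustrative assumptions; the disclosure does not fix a schema.

```python
from dataclasses import dataclass, field

@dataclass
class AssociatedInfo:
    """Illustrative record of the associated information saved for the second
    request: the KPIs, time range, operations, and linkage configuration."""
    kpis: list                 # e.g. ["network_throughput"]
    time_range_days: int       # e.g. 90
    operations: list           # e.g. ["busiest_hour_per_day"]
    display_prefs: dict = field(default_factory=dict)
    linked_dashboards: list = field(default_factory=list)

info = AssociatedInfo(
    kpis=["network_throughput"],
    time_range_days=90,
    operations=["busiest_hour_per_day"],
    linked_dashboards=["success_call_ratio_dashboard"],
)
```

Such a record would be what the storage unit [305] persists and what is later forwarded to the computation module for report generation.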
[0097] At step [412], the method [400] comprises forwarding, by the IPM module [100a] to a computation module [306], the associated information of the second request for generating a report.
[0098] At step [414], the method [400] comprises forwarding, by the computation module [306] to the IPM module [100a], the report for storing in the storage unit [305]. In an exemplary aspect, the report is created after the CL [100d] processes the necessary data retrieved from the Distributed File System (DFS) [100j]. The report includes detailed computations of key performance indicators (KPIs), aggregated data, and any other metrics specified by the user. For example, if the user has designated a Waterfall Dashboard to precompute the busiest hour of the day for network throughput over the past 90 days, the report will contain this computed data. Additionally, it might include calculations for success call ratios and other related KPIs over the same period. The generated report is then sent to the Integrated Performance Management (IPM) module [100a], where it is saved and can be used for further analysis or to create interconnected dashboards. This comprehensive report provides users with detailed insights and enables efficient performance management by precomputing and aggregating critical network performance data.
[0099] At step [416], the method comprises receiving, at the user interface module [302], a third request for generation of a second dashboard. The third request includes specific details about the key performance indicators (KPIs) and aggregations that the user wants to incorporate into the second dashboard. Additionally, the third request may specify the operations to be applied to these selected KPIs and aggregations. For example, a user might request the generation of a second dashboard that includes a KPI for network latency and an aggregation of average latency over the last 30 days. The user might also specify an operation to filter this data to show only peak usage times.
[0100] At step [418], the method comprises receiving, at the user interface module [302], a fourth request for adding the first dashboard as a supporting dashboard to the second dashboard using the stored report. The fourth request involves adding the first dashboard, designated as a waterfall dashboard, as a supporting dashboard to the second dashboard. The fourth request includes utilizing the stored report for interconnecting dashboards and enabling the sequential execution of precomputed data from one dashboard to influence another. For example, the third request, which is for generating a second dashboard, involves setting up a new dashboard that could monitor different network parameters or KPIs. In this case, the user might want to incorporate insights from the first dashboard, such as the busiest hour of network usage, into the second dashboard's calculations. By making the fourth request, the system links the first dashboard's precomputed data to the second dashboard, allowing for comprehensive analysis. For example, if the first dashboard calculates the busiest hour for network throughput, this data can then be used in the second dashboard to analyse Success Call Ratio during those busy hours. Thus, the third request sets up the new monitoring parameters, while the fourth request integrates previously computed data to enhance the new dashboard's analytical capabilities.
[0101] At step [420], the method [400] comprises interconnecting, by the IPM module [100a], the first dashboard and the second dashboard.
[0102] At step [422], the method [400] comprises computing, by the computation
module [306], a pre-computed data. It is to be noted that the pre-computed data
comprises one or more values for the one or more KPIs based on the one or more
operations. Further, the pre-computed data is used to filter the one or more values
of the one or more KPIs in the second dashboard.
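The filtering of the second dashboard's KPI values by the pre-computed data can be sketched as below. The function name, the busy hours, and the sample Success Call Ratio values are illustrative assumptions only.

```python
from datetime import datetime

def filter_by_busy_hours(kpi_samples, busy_hours):
    """Keep only the KPI samples whose timestamp falls in a
    precomputed busy hour of the waterfall dashboard."""
    return [(ts, value) for ts, value in kpi_samples if ts.hour in busy_hours]

# Busy hours precomputed by the waterfall dashboard (made-up values).
busy_hours = {14, 20}

# Success Call Ratio samples for the second dashboard.
scr_samples = [
    (datetime(2024, 6, 10, 14, 0), 0.97),
    (datetime(2024, 6, 10, 3, 0), 0.99),   # not a busy hour, filtered out
    (datetime(2024, 6, 10, 20, 0), 0.95),
]
filtered = filter_by_busy_hours(scr_samples, busy_hours)
```

Only the 14:00 and 20:00 samples survive, so the second dashboard reports the Success Call Ratio exclusively for the precomputed busy hours.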
[0103] Thereafter, the method [400] terminates at step [424].
[0104] In the preferred embodiment as illustrated in FIG. 5, the connections
between the various components of the system [500] are established using different protocols and mechanisms, as well known in the art. For example:
[0105] UI Module to IPM: The connection between the User Interface (UI) [532] and the Integrated Performance Management (IPM) module [100a] is established using an HTTP connection. HTTP (Hypertext Transfer Protocol) is a widely used protocol for communication between web browsers and servers. It allows the UI [532] to send requests and configurations to the IPM module [100a], and also receive responses or acknowledgments.
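As a non-limiting sketch, a "second request" sent from the UI to the IPM module over HTTP could carry a JSON body like the one built below. The endpoint and all field names are assumptions for illustration; the disclosure does not define a wire format.

```python
import json

# Hypothetical body for the "second request": designate an existing
# dashboard as a waterfall dashboard and describe what to precompute.
second_request = {
    "dashboard_id": "network-traffic-dashboard",
    "treat_as": "waterfall",
    "precompute": {"kpi": "throughput", "operation": "busiest_hour", "days": 90},
}
body = json.dumps(second_request).encode("utf-8")
# The UI would POST `body` (Content-Type: application/json) to the IPM
# module and await its acknowledgment in the HTTP response.
```

The acknowledgment path of [0093] would then be the HTTP response to this POST.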
[0106] IPM to DDL: The connection between the IPM module [100a] and the Distributed Data Lake (DDL) [535] is established using a TCP (Transmission Control Protocol) connection. TCP is a reliable and connection-oriented protocol that ensures the integrity and ordered delivery of data packets. By using TCP, the IPM module [100a] can save and retrieve relevant data from the DDL [535] for computations, ensuring data consistency and reliability.
[0107] IPM to CL: The connection between the IPM module [100a] and the
Computation Layer (CL) [534] is also established using an HTTP connection.
Similar to the UI [532] to IPM [100a] module connection, this HTTP connection
allows the IPM module [100a] to forward requests and computations, which
includes large computations and/or complex queries, to the CL [534]. The CL [534] processes the received instructions and returns the results or intermediate data to the IPM module [100a].
[0108] CL to DFS: The connection between the Computation Layer (CL) [534] and the Distributed File System (DFS) [536] is established using a File IO connection. File IO typically refers to the operations performed on files, such as reading from or writing to files. In this case, the CL [534] utilizes File IO operations to store and manage large files used in computations within the DFS [536]. The DFS [536] usually includes historical data, i.e., data is stored for longer time periods. This connection allows the CL [534] to efficiently access and manipulate the required files.
[0109] In some embodiments, the plurality of modules includes a load balancer [537] for managing connections. The load balancer [537] is adapted to distribute the incoming network traffic across multiple servers or components to ensure optimal resource utilization and high availability. Particularly, the load balancer [537] is commonly employed to evenly distribute incoming requests across multiple instances of the IPM module [100a] or CL [534], providing scalability and fault tolerance to the system [500]. Overall, these connections and the inclusion of the load balancer [537] help to facilitate effective communication, data transfer, and resource management within the system, enhancing its performance and reliability.
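The even distribution of requests across IPM or CL instances can be illustrated with a simple round-robin policy. This is one possible strategy, assumed for illustration; the disclosure does not prescribe a specific balancing algorithm, and the instance names are made up.

```python
import itertools

class RoundRobinBalancer:
    """Hands out IPM/CL instances in turn, one per incoming request."""

    def __init__(self, instances):
        self._cycle = itertools.cycle(instances)

    def pick(self):
        # Each call returns the next instance, wrapping around at the end.
        return next(self._cycle)

lb = RoundRobinBalancer(["ipm-1", "ipm-2", "ipm-3"])
picks = [lb.pick() for _ in range(4)]  # wraps back to the first instance
```

Under this policy a fourth request returns to the first instance, which is how even utilization and fault tolerance across replicated instances is typically achieved.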
[0110] In operation, the user creates a dashboard request on the User Interface (UI) [532] and designates it as an interlinked Dashboard eligible for precomputation. Thereafter, the Integrated performance management module [100a] processes the dashboard request and generates the computed output for each interlinked dashboard at the computation layer [534]. The computed output is stored in a suitable format for future reference and retrieval. Thereafter, the user may add the created interlinked dashboard as a supporting dashboard to a newly created or existing dashboard which is delegated to the Computation Layer (CL) [534] that in turn processes the execution requests, accesses the stored data, and performs the necessary computations using the precomputed data from the interlinked Dashboard. The precomputed output of these operations is stored and used to filter the values of KPIs in subsequent dashboard requests by the user for execution. The user may filter the values of KPIs as per the requirement.
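The end-to-end flow above — precompute, store, then reuse the stored output in a supporting dashboard — can be condensed into a minimal sketch. The in-memory store, function names, and "peak value" operation are all illustrative assumptions standing in for the storage unit and computation layer.

```python
# Stand-in for the storage unit / Distributed Data Lake.
STORE = {}

def precompute_waterfall(dashboard_id, values):
    """Compute and store the waterfall dashboard's output
    (here, simply its peak KPI value)."""
    STORE[dashboard_id] = max(values)

def execute_supporting(waterfall_id, values):
    """Execute a supporting dashboard that filters its own KPI values
    using the stored output of the waterfall dashboard."""
    threshold = STORE[waterfall_id]
    return [v for v in values if v >= threshold]

precompute_waterfall("throughput", [3.1, 9.4, 5.2])
result = execute_supporting("throughput", [9.4, 2.0, 9.9])
```

The supporting dashboard never recomputes the waterfall result; it only reads the stored value, which is the efficiency gain the disclosure describes.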
[0111] As is evident from the above, the present disclosure provides a technically advanced solution for generation of one or more interconnected dashboards. The present solution particularly involves categorizing dashboards as interlinked and/or interconnected Dashboards, performing precomputations, and utilizing the precomputed data in associated dashboards. Interlinked and/or interconnected dashboards provide a sequential and consolidated view of data, allowing for the precomputation of essential dashboards. By categorizing dashboards as interlinked dashboards, the need for parallel observation of multiple dashboards is eliminated. Instead, the focus is shifted to analysing interconnected KPIs on a single consolidated dashboard, where computations are performed in advance, and the outcomes are readily available for subsequent computations. This approach reduces cognitive overload, simplifies data synchronization, improves processing time, and enhances scalability.
[0112] It would be appreciated by the person skilled in the art that the technique of the present disclosure streamlines the process of performing complex computations across interconnected dashboards by precomputing data in a Waterfall Dashboard. This allows for quicker and more efficient data analysis as computations are reduced and data from one dashboard can directly influence another.
[0113] In an example, a network engineer uses the user interface [532] to designate a primary dashboard that monitors network throughput as a Waterfall Dashboard. This dashboard includes KPIs like total data transferred, peak transfer rates, and times of peak activity. The user interface [532] sends a request to the Integrated Performance Management (IPM) module [100a] to precompute data for the Waterfall Dashboard. The IPM acknowledges the request and forwards it to the Computation Layer (CL) [534]. The CL [534] processes the request, performing computations to identify things like the busiest hours or days for network traffic in the past 90 days. These computed results are stored in a Distributed Data Lake (DDL) [535]. The engineer then creates a new dashboard to monitor server response times and links this new dashboard to the precomputed data from the Waterfall Dashboard. When the engineer wants to view the server response times during the busiest network hours, a request is sent from the user interface [532] to the IPM module [100a]. The IPM module [100a] then forwards this request to the CL [534], which retrieves the precomputed data from the DDL [535] and uses it to calculate server response times during the busiest network hours. The CL [534] sends these calculated results back to the IPM module [100a], which saves the data in a predetermined format. The results are then displayed on the user interface [532], showing server response times during the busiest network hours based on the data from the Waterfall Dashboard. By using this method, the engineer can understand how server performance is affected during peak network traffic times without having to manually calculate and correlate data between two separate dashboards.
[0114] FIG. 6 illustrates an exemplary sequence flow diagram illustrating a process [600] for generation of one or more interconnected dashboards, in accordance with exemplary implementations of the present disclosure.

[0115] At S_1, the process [600] includes the creation of a Waterfall Dashboard. The user [602] initiates this process through the user interface (UI) [604] by selecting options to create a new dashboard and marking it as a Waterfall Dashboard, which indicates that it will be used for precomputation purposes.
[0116] At S_2, the process [600] includes requesting resources from the Load
Balancer (LB). The UI [604] sends a request to the load balancer [100k] to identify
an available instance of the Integrated Performance Management (IPM) module
[100a] for the dashboard creation.
[0117] At S_3, the process [600] includes the load balancer [100k] identifying an
available instance of IPM module [100a]. Once an available instance of IPM
module [100a] is identified, the load balancer [100k] forwards the request to the
instance of IPM module [100a] to begin the dashboard creation process.
[0118] At S_4, the process [600] includes saving the dashboard information. The IPM module [100a] receives the request and saves the initial configuration and metadata of the Waterfall Dashboard to the Distributed Data Lake (DDL) [100u], such that all necessary information is saved.

[0119] At S_5, the process [600] includes sending an acknowledgment. The IPM module [100a] sends an acknowledgment back to the UI [604] indicating that the dashboard information has been successfully saved in the DDL [100u].
[0120] At S_6, the process [600] includes notifying the user [602] of the successful save. The UI [604] displays a message to the user [602] confirming that the Waterfall Dashboard has been saved successfully.
[0121] At S_7, the process [600] includes computing the Waterfall Dashboards. The IPM module [100a] initiates the computation of the Waterfall Dashboard by sending the computation request to the Computation Layer (CL) [100d], which is responsible for performing the intensive data processing tasks.
[0122] At S_8, the process [600] includes bringing data from the DFS. The CL [100d] retrieves the required data from the Distributed File System (DFS) [100j], which contains the raw and historical data needed for the computations.
[0123] At S_9, the process [600] includes responding to the data request. The DFS
[100j] sends the requested data back to the CL [100d], enabling the CL to perform the necessary computations.
[0124] At S_10, the process [600] includes performing the Waterfall computation.
The CL [100d] processes the data to compute the pre-defined KPIs and other
metrics for the Waterfall Dashboard, ensuring the data is ready for further use.
[0125] At S_11, the process [600] includes saving the generated output. The CL
[100d] sends the computed results back to the IPM module [100a], which then saves
this output data for future reference and use in interconnected dashboards.
[0126] At S_12, the process [600] includes creating and saving an Excel file. The IPM module [100a] creates an Excel file containing the computed results and saves it for easy access and review by the user.
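The export of computed results to a file at S_12 can be sketched as follows. The disclosure names an Excel file; as a stand-in this illustration writes CSV with the standard library, and the column names and values are made up for the example.

```python
import csv
import io

# Illustrative computed waterfall results: date, busiest hour, peak throughput.
rows = [
    ("2024-06-01", 14, 1.83),
    ("2024-06-02", 20, 1.71),
]
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["date", "busiest_hour", "peak_throughput_gbps"])
writer.writerows(rows)
report = buf.getvalue()  # in practice, written to a file for the user to review
```

A real implementation could emit an .xlsx workbook instead; the point is only that the computed output is serialized to a reviewable file.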
[0127] At S_13, the process [600] includes the execution of an associated dashboard. The user [602] initiates this process through the UI [604], requesting the generation of a second dashboard that will use the precomputed data from the Waterfall Dashboard.
[0128] At S_14, the process [600] includes requesting resources from the Load Balancer (LB). The UI [604] sends a request to the load balancer [100k] to identify an available instance of IPM module [100a] to handle the new dashboard generation.
[0129] At S_15, the process [600] includes the load balancer [100k] identifying an available IPM instance. The load balancer [100k] finds an available IPM module [100a] and forwards the dashboard generation request to it.
[0130] At S_16, the process [600] includes forwarding the request. The IPM
module [100a] receives the request and forwards it to the CL [100d] to access the precomputed data and perform any additional computations required for the second dashboard.
[0131] At S_17, the process [600] includes accessing stored data. The CL [100d]
retrieves the relevant precomputed data and any additional necessary data from the DFS [100j] to generate the second dashboard.
[0132] At S_18, the process [600] includes sending the required data. The DFS
[100j] sends the necessary data back to the CL [100d], enabling it to complete the
computations for the second dashboard.
[0133] At S_19, the process [600] includes performing data computation. The CL [100d] processes the retrieved data to compute the required KPIs and metrics for the second dashboard.

[0134] At S_20, the process [600] includes sending the KPI data. The CL [100d] sends the computed KPI data and other relevant results back to the IPM module [100a].
[0135] At S_21, the process [600] includes finalizing the output. The IPM module [100a] processes the received data, finalizes the output, and prepares it for presentation to the user.
[0136] At S_22, the process [600] includes sending the computed data along with a notification. The IPM module [100a] sends the final computed data and a notification to the load balancer [100k] to inform the user that the second dashboard is ready.
[0137] At S_23, the process [600] includes forwarding the notification. The load
balancer [100k] forwards the notification and the computed data to the UI [604].
[0138] At S_24, the process [600] includes presenting the output to the user. The UI [604] displays the final output to the user [602], showing the results of the second dashboard.
[0139] At S_24A, the process [600] includes the user [602] clicking on the notification. This step is initiated when the user receives a notification about the availability of the new dashboard or the updated data. By clicking on this notification, the user signals their intent to view more detailed information, or
results related to the dashboard.
[0140] At S_25, the process [600] includes raising a request to show the result. This step involves the UI [604] responding to the user's interaction by displaying an initial summary or overview of the dashboard results. This gives the user a quick glance at the key metrics or highlights of the computed data.
[0141] At S_26, the process [600] includes fetching the result from the UI [604] to
the load balancer [100k]. Here, the UI [604] sends a request to the load balancer
[100k] to retrieve the detailed data necessary for a comprehensive view of the
dashboard. This step ensures that the UI can present the most current and detailed
data to the user.
[0142] At S_27, the process [600] includes forwarding the request from the load balancer [100k] to the IPM module [100a]. The load balancer [100k], upon receiving the request from the UI [604], forwards this request to the IPM module [100a] to obtain the detailed results.
[0143] At S_28, the process [600] includes finalizing the output at the IPM module
[100a]. The IPM module [100a] processes the request and prepares the detailed
results, ensuring that all relevant data is accurately compiled and ready for
presentation.
[0144] At S_29, the process [600] includes sending the computed KPI data from
the IPM module [100a] to the load balancer [100k]. KPI data refers to the computed
KPIs based on the request. The IPM module [100a] sends this computed KPI data
to the load balancer [100k], ensuring that the user receives the most recent
information.
[0145] At S_30, the process [600] includes forwarding the data from the load
balancer [100k] to the UI [604]. The load balancer [100k] takes the detailed results
and the computed KPI data from the IPM module [100a] and sends it to the UI [604]
for display to the user.
[0146] At S_31, the process [600] includes showing the result from the UI [604] to
the user [602]. The UI [604] presents the final, detailed results to the user [602].
This step concludes the process, providing the user with a comprehensive view of
the dashboard data, including any updated KPIs and detailed metrics, enabling effective analysis and decision-making.
[0147] The present disclosure offers several advantages over existing methods. These include:

[0148] Efficiency: The use of interlinked Dashboards eliminates the need for parallel observation of multiple dashboards and streamlines the computation process, saving time and computational resources.
[0149] Data Consolidation: Interlinked Dashboards provide a consolidated view of interconnected KPIs, allowing for a comprehensive understanding of network performance in a single dashboard.
[0150] Resource Optimization: By precomputing essential data, the disclosure
optimizes computational resources and enhances scalability, making it suitable for large-scale networks and heavy computations.
[0151] Improved Insights: The interconnected nature of interlinked Dashboards
enables users to identify relationships and dependencies between different KPIs,
leading to deeper insights and better decision-making.
[0152] Furthermore, the interlinked dashboard addresses the challenge of integrating the outcomes of computations for different KPIs by providing a visual representation of the relationships and dependencies between KPIs. The cascading format of the dashboard allows the users to see how changes in one KPI affect other KPIs downstream. This approach provides a more comprehensive understanding of the network's performance, allowing network operators and stakeholders to make more informed decisions about how to optimize network performance and service quality.
[0153] Yet another aspect of the present disclosure may relate to a user equipment (UE) for generation of one or more interconnected dashboards. The UE comprising: a processor configured to: transmit a first request for generation of a first dashboard; transmit a second request to treat the first dashboard as a waterfall dashboard; transmit a third request for generation of a second dashboard; transmit a fourth request for adding the first dashboard as a supporting dashboard to the second dashboard using a stored report, wherein for generation of the one or more interconnected dashboards, the process comprises: receiving, at an integrated performance management (IPM) module [100a], the second request from the user interface module [302]; saving, by a storage unit [305], an associated information of the second request; forwarding, by the IPM module [100a] to a computation module [306], the associated information of the second request for generating a report; forwarding, by the computation module [306] to the IPM module [100a], the report for storing in the storage unit [305]; interconnecting, by the IPM module [100a], the first dashboard and the second dashboard; and computing, by the computation module [306], a pre-computed data, the pre-computed data comprising one or more values for the one or more KPIs based on the one or more operations, and wherein the pre-computed data is further used to filter the one or more values of the one or more KPIs in the second dashboard.
[0154] Yet another aspect of the present disclosure relates to a non-transitory computer-readable storage medium storing instructions for generation of one or more interconnected dashboards, the storage medium comprising executable code which, when executed by one or more units of a system, causes: a user interface module [302] to receive: a first request for generation of a first dashboard; a second request to treat the first dashboard as a waterfall dashboard; an integrated performance management (IPM) module [100a] connected with at least the user interface module [302], the IPM module [100a] to: receive, the second request from the user interface module [302]; save, in a storage unit [305], an associated information of the second request; forward, to a computation module [306], the associated information of the second request for generating a report; the computation module [306] connected at least to the IPM module [100a], the computation module [306] to: forward, the report to the IPM module [100a], for storing at the storage unit [305]; the user interface module [302] to further receive: a third request, for generation of a second dashboard; a fourth request, for adding the first dashboard as a supporting dashboard to the second dashboard using the stored report; the IPM module [100a] to further interconnect the first dashboard with the second dashboard; and the computation module [306] to further compute a pre-computed data, the pre-computed data comprising one or more values for the one or more KPIs based on the one or more operations, and wherein the pre-computed data is further used to filter the one or more values of the one or more KPIs in the second dashboard.
[0155] Further, in accordance with the present disclosure, it is to be acknowledged
5 that the functionality described for the various components/units can be
implemented interchangeably. While specific embodiments may disclose a
particular functionality of these units for clarity, it is recognized that various
configurations and combinations thereof are within the scope of the disclosure. The
functionality of specific units, as disclosed in the disclosure, should not be
construed as limiting the scope of the present disclosure. Consequently, alternative
arrangements and substitutions of units, provided they achieve the intended functionality described herein, are considered to be encompassed within the scope of the present disclosure.
[0156] It should be noted that the terms "first", "second", "primary", "secondary",
"target" and the like, herein do not denote any order, ranking, quantity, or importance, but rather are used to distinguish one element from another.
[0157] As is evident from the above, the present disclosure provides a technically
advanced solution for generating and interconnecting dashboards. The present
solution automates the precomputation of key performance indicators (KPIs) and
their integration into various dashboards, allowing users to designate a dashboard
as a waterfall dashboard, making it eligible for precomputation, and use its output
as a base for other dashboards. This enables sequential execution of dashboards and
facilitates the interconnection of multiple dashboards to provide a comprehensive
understanding of network performance. Further, the present solution addresses the
need for handling complex computations over extended intervals, ensuring that the
results of these computations can be used in subsequent calculations for other KPIs
or counters. This feature allows users to define the importance of one dashboard's
data for the computation of others, enhancing efficiency and accuracy in
performance management. Implementing the features of the present invention,
users can set a time range for the associated information of a request, allowing the
saved pre-computed values to be used for calculating the value of another KPI
within a specified time range of up to 90 days in the past. This flexibility ensures
that historical data can be effectively utilized for current performance evaluations.
Additionally, the present solution enables users to receive and apply one or more
key performance indicators (KPIs) and aggregations, as well as operations on selected KPIs and aggregations, in a user-friendly manner. This comprehensive approach offers a robust and efficient method for managing and optimizing network performance through interconnected and precomputed dashboards.
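The precomputation and filtering flow summarized above can be sketched in code. The following is a minimal illustrative sketch only; the function names, the report/KPI data structures, and the threshold-based filter are assumptions introduced for illustration and are not part of the disclosure itself. Only the 90-day lookback limit is taken from the description.

```python
from datetime import datetime, timedelta

# Pre-computed values may be used for calculations up to 90 days in the past
# (per the description); all other names here are illustrative assumptions.
MAX_LOOKBACK = timedelta(days=90)

def precompute_report(kpis, aggregations, operations, rows):
    """Apply the selected aggregations and operations to raw KPI rows,
    producing a report of pre-computed values (the waterfall dashboard's
    output that other dashboards can build on)."""
    report = {}
    for kpi in kpis:
        values = [row[kpi] for row in rows if kpi in row]
        agg = aggregations.get(kpi, "avg")
        if agg == "avg":
            report[kpi] = sum(values) / len(values) if values else None
        elif agg == "sum":
            report[kpi] = sum(values) if values else None
        op = operations.get(kpi)  # optional extra operation on the aggregate
        if op is not None and report[kpi] is not None:
            report[kpi] = op(report[kpi])
    return report

def filter_with_report(report, candidate_rows, kpi, threshold_factor=1.0):
    """Use the waterfall dashboard's pre-computed value as a filter for the
    second dashboard's KPI values (hypothetical threshold semantics)."""
    baseline = report.get(kpi)
    if baseline is None:
        return candidate_rows
    return [r for r in candidate_rows if r[kpi] >= baseline * threshold_factor]

def within_time_range(requested_start, now=None):
    """Check that a requested time period falls within the allowed lookback."""
    now = now or datetime.utcnow()
    return now - requested_start <= MAX_LOOKBACK
```

For example, averaging a throughput KPI over two samples of 10 and 20 yields a pre-computed baseline of 15, which then filters a second dashboard's rows to those at or above that baseline.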
[0158] While considerable emphasis has been placed herein on the disclosed implementations, it will be appreciated that many implementations can be made and that many changes can be made to these implementations without departing from the principles of the present disclosure. These and other changes in the implementations of the present disclosure will be apparent to those skilled in the art, whereby it is to be understood that the foregoing descriptive matter is to be interpreted as illustrative and non-limiting.
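The sequence of four requests that the claims below recite (designating a waterfall dashboard, storing its report, and adding it as a supporting dashboard) can be sketched as follows. This is a hypothetical sketch: the class names, method names, and the in-memory dictionary standing in for the storage unit are illustrative assumptions, not the actual modules of the disclosure.

```python
class Dashboard:
    """Illustrative stand-in for a dashboard managed by the system."""
    def __init__(self, name):
        self.name = name
        self.is_waterfall = False
        self.supporting = []   # dashboards whose stored reports feed this one
        self.report = None     # stored pre-computed report, if any

class IPMModule:
    """Illustrative stand-in for the IPM module [100a]."""
    def __init__(self):
        self.storage = {}      # stands in for the storage unit [305]

    def handle_waterfall_request(self, dashboard):
        # Second request: treat the dashboard as a waterfall dashboard,
        # save the associated information, and acknowledge.
        dashboard.is_waterfall = True
        self.storage[dashboard.name] = {"waterfall": True}
        return "ack"

    def store_report(self, dashboard, report):
        # Report generated by the computation module, forwarded for storing.
        dashboard.report = report
        self.storage[dashboard.name]["report"] = report

    def interconnect(self, supporting, target):
        # Fourth request: add the waterfall dashboard as a supporting
        # dashboard to the target dashboard using its stored report.
        if not supporting.is_waterfall or supporting.report is None:
            raise ValueError("supporting dashboard must be a waterfall "
                             "dashboard with a stored report")
        target.supporting.append(supporting)
```

The guard in `interconnect` reflects the sequencing implied by the claims: a dashboard can only support another once it has been designated a waterfall dashboard and its report has been stored.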
We Claim:
1. A method [400] for generation of one or more interconnected dashboards, the method [400] comprising:
- receiving, at a user interface module [302], a first request for generation of a first dashboard;
- receiving, at the user interface module [302], a second request to treat the first dashboard as a waterfall dashboard;
- receiving, at an integrated performance management (IPM) module [100a], the second request from the user interface module [302];
- saving, by a storage unit [305], an associated information of the second request;
- forwarding, by the IPM module [100a] to a computation module [306], the associated information of the second request for generating a report;
- forwarding, by the computation module [306] to the IPM module [100a], the report for storing in the storage unit [305];
- receiving, at the user interface module [302], a third request for generation of a second dashboard;
- receiving, at the user interface module [302], a fourth request for adding the first dashboard as a supporting dashboard to the second dashboard using the stored report;
- interconnecting, by the IPM module [100a], the first dashboard and the second dashboard; and
- computing, by the computation module [306], a pre-computed data, the pre-computed data comprising one or more values for key performance indicators (KPIs) based on one or more operations, and wherein the pre-computed data is further used to filter the one or more values of the one or more KPIs in the second dashboard.
2. The method [400] as claimed in claim 1, wherein the method [400] further
comprises:
- sending, by the IPM module [100a], an acknowledgement of the second
request for treating the first dashboard as the waterfall dashboard.
3. The method [400] as claimed in claim 1, wherein the first dashboard, treated as the waterfall dashboard, may be used as the supporting dashboard for an existing dashboard.
4. The method [400] as claimed in claim 1, further comprises:
- receiving, at the user interface module [302], the one or more KPIs and one or more aggregations in the third request for the first dashboard; and
- receiving, at the user interface module [302], the one or more operations to be applied on the one or more KPIs and the one or more aggregations.
5. The method [400] as claimed in claim 1, further comprises setting, by the computation module [306], a time range for the associated information of the second request.
6. The method [400] as claimed in claim 5, wherein the pre-computed data is
computed for a time period within the set time range, wherein the time period is received from the user interface module [302].
7. A system [300] for generation of one or more interconnected dashboards,
the system [300] comprises:
- a user interface module [302] configured to receive:
o a first request for generation of a first dashboard;
o a second request to treat the first dashboard as a waterfall dashboard;
- an integrated performance management (IPM) module [100a] connected with at least the user interface module [302], the IPM module [100a] is configured to:
o receive the second request from the user interface module [302];
o save, in a storage unit [305], an associated information of the second request;
o forward, to a computation module [306], the associated information of the second request for generating a report;
- the computation module [306] connected at least to the IPM module [100a], the computation module [306] configured to:
o forward the report to the IPM module [100a], for storing at the storage unit [305];
- the user interface module [302] further configured to receive:
o a third request for generation of a second dashboard;
o a fourth request for adding the first dashboard as a supporting dashboard to the second dashboard using the stored report;
- the IPM module [100a] further configured to interconnect the first dashboard with the second dashboard; and
- the computation module [306] further configured to compute a pre-computed data, the pre-computed data comprising one or more values for one or more key performance indicators (KPIs) based on one or more operations, and wherein the pre-computed data is further used to filter the one or more values of the one or more KPIs in the second dashboard.
8. The system [300] as claimed in claim 7, wherein the IPM module [100a] is further configured to send an acknowledgement of the second request for treating the first dashboard as the waterfall dashboard.
9. The system [300] as claimed in claim 7, wherein the first dashboard, treated as the waterfall dashboard, may be used as the supporting dashboard for an existing dashboard.
10. The system [300] as claimed in claim 7, wherein the user interface module [302] is further configured to:
- receive the one or more KPIs and one or more aggregations in the third
request for the first dashboard; and
- receive the one or more operations to be applied on the one or more KPIs and the one or more aggregations.
11. The system [300] as claimed in claim 7, wherein the computation module
[306] is further configured to set a time range for the associated information
of the second request.
12. The system [300] as claimed in claim 11, wherein the pre-computed data is
computed for a time period within the set time range, wherein the time
period is received from the user interface module [302].
13. A user equipment (UE) for generation of one or more interconnected dashboards, the UE comprising:
- a processor configured to:
o transmit a first request for generation of a first dashboard;
o transmit a second request to treat the first dashboard as a waterfall dashboard;
o transmit a third request for generation of a second dashboard;
o transmit a fourth request for adding the first dashboard as a
supporting dashboard to the second dashboard using a stored
report, wherein for generation of the one or more interconnected dashboards, the process comprises:
▪ receiving, at an integrated performance management (IPM) module [100a], the second request from a user interface module [302];
▪ saving, by a storage unit [305], an associated information of the second request;
▪ forwarding, by the IPM module [100a] to a computation module [306], the associated information of the second request for generating a report;
▪ forwarding, by the computation module [306] to the IPM module [100a], the report for storing in the storage unit [305];
▪ interconnecting, by the IPM module [100a], the first dashboard and the second dashboard; and
▪ computing, by the computation module [306], a pre-computed data, the pre-computed data comprising one or more values for one or more KPIs based on one or more operations, and wherein the pre-computed data is further used to filter the one or more values of the one or more KPIs in the second dashboard.
| # | Name | Date |
|---|---|---|
| 1 | 202321049549-STATEMENT OF UNDERTAKING (FORM 3) [23-07-2023(online)].pdf | 2023-07-23 |
| 2 | 202321049549-PROVISIONAL SPECIFICATION [23-07-2023(online)].pdf | 2023-07-23 |
| 3 | 202321049549-FORM 1 [23-07-2023(online)].pdf | 2023-07-23 |
| 4 | 202321049549-FIGURE OF ABSTRACT [23-07-2023(online)].pdf | 2023-07-23 |
| 5 | 202321049549-DRAWINGS [23-07-2023(online)].pdf | 2023-07-23 |
| 6 | 202321049549-FORM-26 [21-09-2023(online)].pdf | 2023-09-21 |
| 7 | 202321049549-Proof of Right [23-10-2023(online)].pdf | 2023-10-23 |
| 8 | 202321049549-ORIGINAL UR 6(1A) FORM 1 & 26)-301123.pdf | 2023-12-08 |
| 9 | 202321049549-FORM-5 [22-07-2024(online)].pdf | 2024-07-22 |
| 10 | 202321049549-ENDORSEMENT BY INVENTORS [22-07-2024(online)].pdf | 2024-07-22 |
| 11 | 202321049549-DRAWING [22-07-2024(online)].pdf | 2024-07-22 |
| 12 | 202321049549-CORRESPONDENCE-OTHERS [22-07-2024(online)].pdf | 2024-07-22 |
| 13 | 202321049549-COMPLETE SPECIFICATION [22-07-2024(online)].pdf | 2024-07-22 |
| 14 | 202321049549-FORM 3 [02-08-2024(online)].pdf | 2024-08-02 |
| 15 | 202321049549-Request Letter-Correspondence [20-08-2024(online)].pdf | 2024-08-20 |
| 16 | 202321049549-Power of Attorney [20-08-2024(online)].pdf | 2024-08-20 |
| 17 | 202321049549-Form 1 (Submitted on date of filing) [20-08-2024(online)].pdf | 2024-08-20 |
| 18 | 202321049549-Covering Letter [20-08-2024(online)].pdf | 2024-08-20 |
| 19 | 202321049549-CERTIFIED COPIES TRANSMISSION TO IB [20-08-2024(online)].pdf | 2024-08-20 |
| 20 | Abstract-1.jpg | 2024-10-03 |
| 21 | 202321049549-FORM 18A [12-03-2025(online)].pdf | 2025-03-12 |
| 22 | 202321049549-FER.pdf | 2025-05-30 |
| 23 | 202321049549-FORM 3 [01-07-2025(online)].pdf | 2025-07-01 |
| 24 | 202321049549-FER_SER_REPLY [05-07-2025(online)].pdf | 2025-07-05 |
| 25 | 202321049549-US(14)-HearingNotice-(HearingDate-05-12-2025).pdf | 2025-11-11 |
| 26 | 202321049549_SearchStrategyNew_E_Search049549E_13-03-2025.pdf | 2025-03-13 |