Abstract: The present disclosure relates to a method and system for analysis of key performance indicators (KPIs). The method [400] encompasses receiving, by a receiving unit [302] from a user interface (UI) [304], a request for a set of KPIs to be determined; retrieving, by a retrieving unit [306] via an integrated performance management (IPM) module [100a], data related to the requested set of KPIs; computing, by a computing unit [310] via the IPM module [100a], based on the retrieved data, the set of KPIs; generating, by a processing unit [312], based on the received request, an output dataset comprising the computed set of KPIs; and transmitting, by a transmitting unit [314] to the UI [304], the generated output dataset. [FIG. 4]
FORM 2
THE PATENTS ACT, 1970
(39 OF 1970)
&
THE PATENT RULES, 2003
COMPLETE SPECIFICATION
(See section 10 and rule 13)
“METHOD AND SYSTEM FOR ANALYSIS OF KEY PERFORMANCE INDICATORS (KPIs)”
We, Jio Platforms Limited, an Indian National, of Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India.
The following specification particularly describes the invention and the manner in which it is to be performed.
METHOD AND SYSTEM FOR ANALYSIS OF KEY PERFORMANCE INDICATORS (KPIs)
TECHNICAL FIELD
[0001]
Embodiments of the present disclosure generally relate to network performance management systems. More particularly, embodiments of the present disclosure relate to a method and a system for analysis of key performance indicators (KPIs).
BACKGROUND
[0002]
The following description of the related art is intended to provide background information pertaining to the field of the disclosure. This section may include certain aspects of the art that may be related to various features of the present disclosure. However, it should be appreciated that this section is used only to enhance the understanding of the reader with respect to the present disclosure, and not as an admission of the prior art.
[0003]
Network performance management systems typically track network elements and data from network monitoring tools and then combine and process such data to determine key performance indicators (KPIs) of the network. Integrated performance management systems provide the means to visualize the network performance data so that network operators and other relevant stakeholders can identify the service quality of the overall network, and individual/grouped network elements. By having an overall as well as detailed view of the network performance, the network operators can detect, diagnose, and remedy actual service issues, as well as predict potential service issues or failures in the network and take precautionary measures accordingly.
[0004]
In network performance management systems, particularly in visualization sub-systems, there is an increased requirement of monitoring and analysing the performance of networks to ensure that they are operating at optimal levels. Key Performance Indicators (KPIs) are used to measure the performance of these networks and are often displayed on a dashboard for easy visualization. However, there are various challenges that arise when analysing and visualizing KPIs on a dashboard in network performance management systems.
[0005]
One of the main challenges is the granularity of the data. Network data is analysed at different levels of granularity in terms of time, such as hourly, daily, weekly, or monthly. However, transitioning from a higher level of granularity to a lower level or vice versa in real time can be difficult. For example, if a user creates a dashboard that reports all the KPIs on an hourly basis but finds an absurd value in one of the KPIs, they may want to inquire into the value minute-wise to analyse and pinpoint the issue behind it. This requires the user to manually create yet another dashboard, fetch the minute-wise data, and wait for reports. Ideally, the focus of the entire analysis should shift from hour-wise to minute-wise or week-wise, depending on the user's choice, and that too in real time.
[0006]
Another challenge is the sheer volume of data. Network performance management systems generate a vast amount of data that needs to be analysed and visualized. This can be overwhelming for users who may not have the necessary skills or tools to interpret the data effectively. Additionally, displaying too much data on a dashboard can cause information overload and make it difficult to identify key insights.
[0007]
Furthermore, different users may have different requirements when it comes to analysing and visualizing KPIs. For example, a network administrator may want to view KPIs related to network availability and response time, while a business analyst may be interested in KPIs related to user behaviour and engagement. Therefore, creating a dashboard that caters to the needs of all users can be challenging.
[0008]
Additionally, there is also a challenge of ensuring data accuracy and reliability. Network performance management systems rely on accurate and reliable data to generate KPIs. However, data inaccuracies or inconsistencies can lead to incorrect KPIs and misleading insights.
[0009]
There have been some efforts to overcome the above-mentioned challenges. For example, in some of the developments, to address the challenge of data granularity, the creation of multiple parallel dashboards for different time intervals was suggested. While this provides some flexibility, it leads to dashboard clutter and requires users to switch between dashboards to gain comprehensive insights.
[0010]
Some organizations use data visualization tools that offer more customization and interactivity. However, these tools may require specialized skills, and integrating them with existing systems can be complex and costly.
[0011]
Another shortcoming is the potential for data misinterpretation. Even with accurate data and reliable measurement tools, users may still misinterpret the data or draw incorrect conclusions. This can lead to poor decision-making and potentially negative outcomes for the organization.
[0012]
Additionally, most of these solutions provide only predefined types of insights, which cannot be changed dynamically. As a result, the insights gained from the computations may not provide a comprehensive view of the network's overall performance and may fail to identify critical issues or trends.
[0013]
Moreover, the lack of efficiency in producing these computation outcomes limits the ability of network operators and stakeholders to make timely and informed decisions. Delays in data processing and analysis hinder the proactive management of the network, as potential issues or failures may go undetected or unaddressed until they become significant problems.
[0014]
In existing systems, backend systems and front-end interfaces often functioned more or less independently. While backend systems collected and processed data, front-end interfaces (like dashboards) displayed this data. However, the interaction between these two components was limited and often one-way: backend systems would send data to the front end, and the front end had little to no influence over the backend's operations. With a one-way communication path, users on the front end could not directly influence backend processes. If users needed to adjust the granularity of data or modify the parameters of a query, they had to do so indirectly and wait for the backend to deliver the new results. This led to delays and inefficiencies. Without real-time interaction with the backend, any anomalies detected on the front-end dashboard could not be promptly investigated by adjusting backend processes. This led to slower issue identification and resolution. In the old system, backend servers processed data at a fixed pace and often with pre-set parameters. Without a dynamic link to the front end, the system could not adjust these parameters based on real-time requirements or insights.
[0015]
Accordingly, it may be noted that telecommunication monitoring services face several challenges when it comes to creating dashboards for performing various computations for KPIs for the purpose of gaining various insights, trends, patterns, and the like. Moreover, the currently known mechanisms are mostly inefficient, inaccurate, and static and are therefore often limited in their functionality and do not provide the level of detail required for in-depth analysis.
[0016]
Thus, there exists an imperative need in the art to provide a solution that can overcome these and other limitations of the existing solutions.
[0017]
The present invention solves these issues by establishing a closed-loop integration between the front end and the backend. Here, the backend system does not just serve data to the front end but also dynamically responds to front-end inputs.
SUMMARY
[0018]
This section is provided to introduce certain aspects of the present disclosure in a simplified form that are further described below in the detailed description. This summary is not intended to identify the key features or the scope of the claimed subject matter.
[0019]
An aspect of the present disclosure may relate to a method for analysis of key performance indicators (KPIs). The method includes receiving, by a receiving unit from a user interface (UI), a request for a set of KPIs to be determined. The method further includes retrieving, by a retrieving unit via an integrated performance management (IPM) module, data related to the requested set of KPIs. The method further comprises computing, by a computing unit via the IPM module, based on the retrieved data, the set of KPIs. The method further comprises generating, by a processing unit, based on the received request, an output dataset comprising the computed set of KPIs. The method further comprises transmitting, by a transmitting unit to the UI, the generated output dataset.
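The sequence of units described above can be pictured as a simple pipeline. The following sketch is purely illustrative: names such as KpiRequest and AnalysisSystem are assumptions made for the example, not elements of the disclosure, and the averaging computation merely stands in for whatever KPI formulas the IPM module actually applies.

```python
from dataclasses import dataclass

@dataclass
class KpiRequest:
    kpi_names: list     # the set of KPIs to be determined
    granularity: str    # e.g. "hourly", "minute-wise"

class AnalysisSystem:
    def __init__(self, ipm_data):
        self.ipm_data = ipm_data          # stands in for the IPM module's stores

    def receive(self, request):           # receiving unit
        return request

    def retrieve(self, request):          # retrieving unit, via the IPM module
        return {k: self.ipm_data[k] for k in request.kpi_names}

    def compute(self, raw):               # computing unit: a placeholder average
        return {k: sum(v) / len(v) for k, v in raw.items()}

    def generate(self, request, kpis):    # processing unit builds the output dataset
        return {"granularity": request.granularity, "kpis": kpis}

    def transmit(self, dataset):          # transmitting unit, to the UI
        return dataset

    def analyse(self, request):
        req = self.receive(request)
        raw = self.retrieve(req)
        kpis = self.compute(raw)
        return self.transmit(self.generate(req, kpis))
```

In this toy form, each claimed unit is one method, and the closed loop back to the UI is the returned dataset.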
[0020]
In an exemplary aspect of the present disclosure, the data related to the requested KPIs is at least stored in a database, and wherein the step of retrieving, by the retrieving unit, data related to the requested set of KPIs comprises determining, by the retrieving unit, a time extent of the requested set of KPIs, wherein, if the time extent is greater than a retention period of the database, the method comprises computing, by the computing unit via a computation layer (CL), the data related to the requested set of KPIs, and if the time extent is less than or equal to the retention period of the database, the method comprises retrieving, by the retrieving unit, from the database, the data related to the requested set of KPIs.
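The retention-period branch described above amounts to a single comparison. This sketch is illustrative only; the function and parameter names are assumptions, and the two callables stand in for the database read and the computation-layer (CL) recomputation.

```python
def fetch_kpi_data(time_extent_days, retention_days, fetch_from_db, compute_via_cl):
    """If the requested time extent exceeds the database retention period,
    recompute the data via the computation layer (CL); otherwise read it
    directly from the database."""
    if time_extent_days > retention_days:
        return compute_via_cl()   # data has aged out of the database
    return fetch_from_db()        # data is still within the retention period
```

For example, a 90-day request against a 30-day retention period takes the CL path, while a 7-day request is served from the database.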
[0021]
In an exemplary aspect of the present disclosure, the processing unit comprises a learning engine comprising an artificial intelligence (AI)/machine learning (ML) model, and wherein the method comprises at least one of: translating, by the learning engine, the received request to a predefined format compatible with at least one of the CL and the database; and generating, by the learning engine, based on the received request, an output dataset comprising the computed set of KPIs.
[0022]
In an exemplary aspect of the present disclosure, the output dataset comprises a report indicating a behavioural trend of the computed set of KPIs over a predefined duration of time, wherein the report is indicated in one or more predefined formats, and wherein the output dataset is configured to be manipulated to indicate the computed set of KPIs according to at least a set of aggregation parameters related to the set of KPIs.
[0023]
In an exemplary aspect of the present disclosure, the aggregation parameters are one or more time hierarchy parameters related to the predefined duration of time.
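Aggregation along a time hierarchy can be sketched as a roll-up over fixed buckets. The hierarchy (minute, hour, day) and the averaging rule below are illustrative assumptions; the disclosure leaves the actual aggregation parameters open.

```python
def roll_up(minute_values, level):
    """Roll minute-wise KPI values up a simple time hierarchy by averaging
    each fixed-size bucket (60 minutes per hour, 1440 per day)."""
    bucket = {"minute": 1, "hour": 60, "day": 1440}[level]
    return [
        sum(chunk) / len(chunk)
        for chunk in (minute_values[i:i + bucket]
                      for i in range(0, len(minute_values), bucket))
    ]
```

Drilling down is the inverse direction: re-requesting the finer-grained values rather than recomputing them from the coarse averages.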
[0024]
In an exemplary aspect of the present disclosure, the method comprises providing, by a displaying unit at the UI, a dashboard, and wherein the dashboard is configured to display the output dataset.
[0025]
Another aspect of the present disclosure may relate to a system for analysis of key performance indicators (KPIs). The system comprises a receiving unit configured to receive, from a user interface (UI), a request for a set of KPIs to be determined. The system further comprises a retrieving unit configured to retrieve, via an integrated performance management (IPM) module, data related to the requested set of KPIs. The system further comprises a computing unit configured to compute, via the IPM module, based on the retrieved data, the set of KPIs. The system further comprises a processing unit configured to generate, based on the received request, an output dataset comprising the computed set of KPIs. The system further comprises a transmitting unit configured to transmit, to the UI, the generated output dataset.
[0026]
Yet another aspect of the present disclosure may relate to a non-transitory computer readable storage medium storing instructions for analysis of key performance indicators (KPIs), the instructions including executable code which, when executed by one or more units of a system, causes: a receiving unit to receive, from a user interface (UI), a request for a set of KPIs to be determined; a retrieving unit to retrieve, via an integrated performance management (IPM) module, data related to the requested set of KPIs; a computing unit to compute, via the IPM module, based on the retrieved data, the set of KPIs; a processing unit to generate, based on the received request, an output dataset comprising the computed set of KPIs; and a transmitting unit to transmit, to the UI, the generated output dataset.
OBJECTS OF THE INVENTION 20
[0027]
Some of the objects of the present disclosure, which at least one embodiment disclosed herein satisfies, are listed herein below.
[0028]
It is an object of the present disclosure to provide a system for efficiently processing and computing data, allowing users to create interactive dashboards with drill-down/roll-up options.
[0029]
It is another object of the present disclosure to facilitate the creation, computation, and visualization of KPIs on different levels of granularity in network performance management systems.
[0030]
It is yet another object of the present disclosure to create a system where the interface (previously referred to as dashboards) and backend systems are tightly integrated. This aims to establish a seamless interaction between user input and backend data processing.
[0031]
It is yet another object of the present disclosure to establish a closed-loop integration where user interactions from the interface can dynamically adjust backend operations in real-time, promoting real-time anomaly detection and data analysis.
[0032]
It is yet another object of the present disclosure to ensure that the heavy lifting of data analysis is done at the backend, with results then dynamically fed back to the user interface for visualization and further interaction.
[0033]
It is yet another object of the present disclosure to enhance the efficiency of data processing by leveraging the distributed nature of the backend systems, making it capable of handling high volumes of data and delivering timely and accurate insights to users.
[0034]
It is yet another object of the present disclosure to provide a method of implementing an interactive dashboard for different KPIs by providing a visual representation of the relationships and dependencies between KPIs.
BRIEF DESCRIPTION OF THE DRAWINGS
[0035]
The accompanying drawings, which are incorporated herein, and constitute a part of this disclosure, illustrate exemplary embodiments of the disclosed methods and systems in which like reference numerals refer to the same parts throughout the different drawings. Components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present disclosure. Also, the embodiments shown in the figures are not to be construed as limiting the disclosure, but the possible variants of the method and system according to the disclosure are illustrated herein to highlight the advantages of the disclosure. It will be appreciated by those skilled in the art that disclosure of such drawings includes disclosure of electrical components or circuitry commonly used to implement such components.
[0036]
FIG. 1 illustrates an exemplary block diagram of a network performance management system, in accordance with the exemplary embodiments of the present invention.
[0037]
FIG. 2 illustrates an exemplary block diagram of a computing device upon which the features of the present disclosure may be implemented, in accordance with exemplary implementation of the present disclosure.
[0038]
FIG. 3 illustrates an exemplary block diagram of a system for analysis of key performance indicators (KPIs), in accordance with exemplary implementations of the present disclosure.
[0039]
FIG. 4 illustrates a method flow diagram for analysis of key performance indicators (KPIs), in accordance with exemplary implementations of the present disclosure.
[0040]
FIG. 5 illustrates an exemplary block diagram of a system architecture for analysis of key performance indicators (KPIs), in accordance with exemplary implementations of the present disclosure.
[0041]
FIG. 6 illustrates a process flow diagram for analysis of key performance indicators (KPIs), in accordance with exemplary implementations of the present disclosure.
[0042]
The foregoing shall be more apparent from the following more detailed description of the disclosure.
DETAILED DESCRIPTION
[0043]
In the following description, for the purposes of explanation, various specific details are set forth in order to provide a thorough understanding of embodiments of the present disclosure. It will be apparent, however, that embodiments of the present disclosure may be practiced without these specific details. Several features described hereafter may each be used independently of one another or with any combination of other features. An individual feature may not address any of the problems discussed above or might address only some of the problems discussed above.
[0044]
The ensuing description provides exemplary embodiments only, and is not intended to limit the scope, applicability, or configuration of the disclosure. Rather, the ensuing description of the exemplary embodiments will provide those skilled in the art with an enabling description for implementing an exemplary embodiment. It should be understood that various changes may be made in the function and arrangement of elements without departing from the spirit and scope of the disclosure as set forth.
[0045]
Specific details are given in the following description to provide a thorough understanding of the embodiments. However, it will be understood by one of ordinary skill in the art that the embodiments may be practiced without these specific details. For example, circuits, systems, processes, and other components may be shown as components in block diagram form in order not to obscure the embodiments in unnecessary detail.
[0046]
Also, it is noted that individual embodiments may be described as a process which is depicted as a flowchart, a flow diagram, a data flow diagram, a structure diagram, or a block diagram. Although a flowchart may describe the operations as a sequential process, many of the operations may be performed in parallel or concurrently. In addition, the order of the operations may be re-arranged. A process is terminated when its operations are completed but could have additional steps not included in a figure.
[0047]
The word “exemplary” and/or “demonstrative” is used herein to mean serving as an example, instance, or illustration. For the avoidance of doubt, the subject matter disclosed herein is not limited by such examples. In addition, any aspect or design described herein as “exemplary” and/or “demonstrative” is not necessarily to be construed as preferred or advantageous over other aspects or designs, nor is it meant to preclude equivalent exemplary structures and techniques known to those of ordinary skill in the art. Furthermore, to the extent that the terms “includes,” “has,” “contains,” and other similar words are used in either the detailed description or the claims, such terms are intended to be inclusive—in a manner similar to the term “comprising” as an open transition word—without precluding any additional or other elements.
[0048]
As used herein, a “processing unit” or “processor” or “operating processor” includes one or more processors, wherein processor refers to any logic circuitry for processing instructions. A processor may be a general-purpose processor, a special purpose processor, a conventional processor, a digital signal processor, a plurality of microprocessors, one or more microprocessors in association with a Digital Signal Processing (DSP) core, a controller, a microcontroller, Application Specific Integrated Circuits, Field Programmable Gate Array circuits, any other type of integrated circuits, etc. The processor may perform signal coding, data processing, input/output processing, and/or any other functionality that enables the working of the system according to the present disclosure. More specifically, the processor or processing unit is a hardware processor.
[0049]
As used herein, “a user equipment”, “a user device”, “a smart-user-device”, “a smart-device”, “an electronic device”, “a mobile device”, “a handheld device”, “a wireless communication device”, “a mobile communication device”, “a communication device” may be any electrical, electronic and/or computing device or equipment, capable of implementing the features of the present disclosure. The user equipment/device may include, but is not limited to, a mobile phone, smart phone, laptop, a general-purpose computer, desktop, personal digital assistant, tablet computer, wearable device or any other computing device which is capable of implementing the features of the present disclosure. Also, the user device may contain at least one input means configured to receive an input from at least one of a transceiver unit, a processing unit, a storage unit, a detection unit and any other such unit(s) which are required to implement the features of the present disclosure.
[0050]
As used herein, “storage unit” or “memory unit” refers to a machine or computer-readable medium including any mechanism for storing information in a form readable by a computer or similar machine. For example, a computer-readable medium includes read-only memory (“ROM”), random access memory (“RAM”), magnetic disk storage media, optical storage media, flash memory devices or other types of machine-accessible storage media. The storage unit stores at least the data that may be required by one or more units of the system to perform their respective functions.
[0051]
As used herein, “interface” or “user interface” refers to a shared boundary across which two or more separate components of a system exchange information or data. The interface may also refer to a set of rules or protocols that define communication or interaction of one or more modules or one or more units with each other, which also includes the methods, functions, or procedures that may be called.
[0052]
All modules, units, components used herein, unless explicitly excluded herein, may be software modules or hardware processors, the processors being a general-purpose processor, a special purpose processor, a conventional processor, a digital signal processor (DSP), a plurality of microprocessors, one or more microprocessors in association with a DSP core, a controller, a microcontroller, Application Specific Integrated Circuits (ASIC), Field Programmable Gate Array circuits (FPGA), any other type of integrated circuits, etc.
[0053]
As used herein, the transceiver unit includes at least one receiver and at least one transmitter configured respectively for receiving and transmitting data, signals, information, or a combination thereof between units/components within the system and/or connected with the system.
[0054]
As used herein, integrated performance management (IPM) module refers to a module that provides node-wise KPIs for any operational requirements and displays details from various subsystems, helping analyse the root cause whenever a service outage occurs. Furthermore, the IPM module provides node-wise detailed information in near-real-time whenever there is any discrepancy in the underlying network. Also, the IPM module helps execute on-demand KPIs whenever triggers are received from operational agents. The IPM module stores the output data aggregated from KPIs and counters in distributed data lakes or caching layers for further processing and interacts with the workflow engine for execution of a workflow whenever a KPI is breached.
[0055]
As used herein, computation layer refers to a layer that manages requests from external systems, controlling access and verifying that requests are appropriately authorized.
[0056]
As used herein, the machine learning (ML)/artificial intelligence (AI) model refers to a trained model, such as a neural network-based model, a decision tree-based model, and the like.
[0057]
As used herein, distributed data lake refers to a centralized repository designed to store, process, and secure large amounts of structured, semi-structured, and unstructured data. It can store data in its native format and process any variety of it, ignoring size limits.
[0058]
As used herein, distributed file system (DFS) refers to a data storage and management scheme that allows users or applications to access data files such as PDFs, word documents, images, video files, audio files, etc., from shared storage across any one of multiple networked servers. With data shared and stored across a cluster of servers, a DFS enables many users to share storage resources and data files across many machines.
[0059]
As discussed in the background section, the current known solutions have several shortcomings. The present disclosure aims to overcome the above-mentioned and other existing problems in this field of technology by providing methods and systems for analysis of key performance indicators (KPIs).
[0060]
Hereinafter, exemplary embodiments of the present disclosure will be described with reference to the accompanying drawings.
[0061]
FIG. 1 illustrates an exemplary block diagram of a network performance management system [100], in accordance with the exemplary embodiments of the present invention. Referring to FIG. 1, the network performance management system [100] comprises various sub-systems such as: integrated performance management (IPM) module [100a], normalization layer [100b], computation layer [100d], anomaly detection layer [100o], streaming engine [100l], load balancer [100k], operations and management system [100p], API gateway system [100r], analysis engine [100h], parallel computing framework [100i], forecasting engine [100t], distributed file system [100j], mapping layer [100s], distributed data lake [100u], scheduling layer [100g], reporting engine [100m], message broker [100e], graph layer [100f], caching layer [100c], service quality manager [100q], correlation engine [100n], and ingestion layer [100x]. Exemplary connections between these subsystems are also as shown in FIG. 1. However, it will be appreciated by those skilled in the art that the present disclosure is not limited to the connections shown in the diagram, and any other connections between various subsystems that are needed to realise the effects are within the scope of this disclosure.
[0062]
The various components of the network performance management system [100] may include the following:
[0063]
The integrated performance management (IPM) module [100a] comprises one or more performance engines [100v] and one or more Key Performance Indicator (KPI) engines [100w].
[0064]
Performance Management Engine [100v]: The Performance Management engine [100v] is a crucial component of the integrated system, responsible for collecting, processing, and managing performance counter data from various data sources within the network. The gathered data includes metrics such as connection speed, latency, data transfer rates, and many others. This raw data is then processed and aggregated as required, forming a comprehensive overview of network performance. The processed information is then stored in a Distributed Data Lake [100u], a centralized, scalable, and flexible storage solution, allowing for easy access and further analysis. The Performance Management engine [100v] also enables the reporting and visualization of this performance counter data, thus providing network administrators with a real-time, insightful view of the network's operation. Through these visualizations, operators can monitor the network's performance, identify potential issues, and make informed decisions to enhance network efficiency and reliability.
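The "processed and aggregated as required" step can be sketched as bucketed averaging of raw counter samples. The tuple layout and the hourly default bucket below are assumptions made for illustration; the actual aggregation performed by the Performance Management engine [100v] is not fixed by the disclosure.

```python
from collections import defaultdict

def aggregate_counters(samples, bucket_seconds=3600):
    """Group raw (timestamp, metric, value) counter samples into fixed time
    buckets and average each bucket, roughly as the engine might before
    writing results to the Distributed Data Lake."""
    buckets = defaultdict(list)
    for ts, metric, value in samples:
        buckets[(ts // bucket_seconds, metric)].append(value)
    # One averaged value per (bucket index, metric) pair
    return {key: sum(vals) / len(vals) for key, vals in buckets.items()}
```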
[0065]
Key Performance Indicator (KPI) Engine [100w]: The Key Performance Indicator (KPI) Engine is a dedicated component tasked with managing the KPIs of all the network elements. It uses the performance counters, which are collected and processed by the Performance Management engine from various data sources. These counters, encapsulating crucial performance data, are harnessed by the KPI engine [100w] to calculate essential KPIs. These KPIs might include data throughput, latency, packet loss rate, and more. Once the KPIs are computed, they are segregated based on the aggregation requirements, offering a multi-layered and detailed understanding of network performance. The processed KPI data is then stored in the Distributed Data Lake [100u], ensuring a highly accessible, centralized, and scalable data repository for further analysis and utilization. Similar to the Performance Management engine, the KPI engine [100w] is also responsible for reporting and visualization of KPI data. This functionality allows network administrators to gain a comprehensive, visual understanding of the network's performance, thus supporting informed decision-making and efficient network management.
[0066]
Ingestion layer [100x]: The Ingestion layer [100x] forms a key part of the Integrated Performance Management system. Its primary function is to establish an environment capable of handling diverse types of incoming data. This data may include Alarms, Counters, Configuration parameters, Call Detail Records (CDRs), Infrastructure metrics, Logs, and Inventory data, all of which are crucial for maintaining and optimizing the network's performance. Upon receiving this data, the Ingestion layer [100x] processes it by validating its integrity and correctness to ensure it is fit for further use. Following validation, the data is routed to various components of the system, including the Normalization layer [100b], Streaming Engine, Streaming Analytics, and Message Brokers. The destination is chosen based on where the data is required for further analytics and processing. By serving as the first point of contact for incoming data, the Ingestion layer [100x] plays a vital role in managing the data flow within the system, thus supporting comprehensive and accurate network performance analysis.
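The validate-then-route behaviour of the ingestion layer can be sketched with a simple routing table. The record shape, type names, and destinations are assumptions chosen to mirror the components named above; the disclosure does not prescribe them.

```python
# Hypothetical routing table from record type to downstream component.
ROUTES = {
    "counter": "normalization_layer",
    "alarm": "streaming_engine",
    "cdr": "message_broker",
}

def ingest(record):
    """Validate an incoming record's basic integrity, then pick the
    downstream component it should be routed to; unknown types fall
    back to the normalization layer."""
    if "type" not in record or "payload" not in record:
        raise ValueError("record failed integrity validation")
    return ROUTES.get(record["type"], "normalization_layer")
```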
[0067]
Normalization layer [100b]: The Normalization Layer [100b] serves to standardize, enrich, and store data into the appropriate databases. It takes in data that has been ingested and adjusts it to a common standard, making it easier to compare and analyse. This process of "normalization" reduces redundancy and improves data integrity. Upon completion of normalization, the data is stored in various databases like the Distributed Data Lake [100u], Caching Layer [100c], and Graph Layer [100f], depending on its intended use. The choice of storage determines how the data can be accessed and used in the future. Additionally, the Normalization Layer [100b] produces data for the Message Broker, a system that enables communication between different parts of the performance management system through the exchange of data messages. Moreover, the Normalization Layer [100b] supplies the standardized data to several other subsystems. These include the Analysis Engine [100h] for detailed data examination, the Correlation Engine [100n] for detecting relationships among various data elements, the Service Quality Manager [100q] for maintaining and improving the quality of services, and the Streaming Engine [100l] for processing real-time data streams. These subsystems depend on the normalized data to perform their operations effectively and accurately, demonstrating the Normalization Layer's [100b] role in the entire system.
[0068]
Caching layer [100c]: The Caching Layer [100c] in the Integrated Performance Management system plays a significant role in data management and optimization. During the initial phase, the Normalization Layer [100b] processes incoming raw data to create a standardized format, enhancing consistency and comparability. The Normalization Layer then inserts this normalized data into various databases. One such database is the Caching Layer [100c]. The Caching Layer [100c] is a high-speed data storage layer which temporarily holds data that is likely to be reused, to improve the speed and performance of data retrieval. By storing frequently accessed data in the Caching Layer [100c], the system significantly reduces the time taken to access this data, improving overall system efficiency and performance. Further, the Caching Layer [100c] serves as an intermediate layer between the data sources and the sub-systems, such as the Analysis Engine [100h], Correlation Engine [100n], Service Quality Manager, and Streaming Engine. The Normalization Layer [100b] is responsible for providing these subsystems with the necessary data from the Caching Layer [100c].
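The "temporarily holds data likely to be reused" behaviour can be sketched as a simple time-to-live (TTL) cache. The eviction policy below is an assumption for illustration; the disclosure does not specify one.

```python
import time

# Hypothetical TTL cache in the spirit of the Caching Layer [100c]:
# entries expire after a fixed lifetime and are dropped on access.
class TTLCache:
    def __init__(self, ttl_seconds: float):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (value, expiry time)

    def put(self, key, value):
        self._store[key] = (value, time.monotonic() + self.ttl)

    def get(self, key, default=None):
        item = self._store.get(key)
        if item is None:
            return default
        value, expiry = item
        if time.monotonic() > expiry:  # expired: evict and report a miss
            del self._store[key]
            return default
        return value
```

A hit avoids a round trip to the slower backing stores, which is the speed-up the paragraph describes.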
[0069]
Computation layer [100d]: The Computation Layer [100d] in the Integrated Performance Management system serves as the main hub for complex data processing tasks. In the initial stages, raw data is gathered, normalized, and enriched by the Normalization Layer [100b]. The Normalization Layer then inserts this standardized data into multiple databases including the Distributed Data Lake [100u], Caching Layer [100c], and Graph Layer [100f], and also feeds it to the Message Broker [100e]. Within the Computation Layer [100d], several powerful sub-systems, such as the Analysis Engine [100h], Correlation Engine [100n], Service Quality Manager, and Streaming Engine, utilize the normalized data. These systems are designed to execute various data processing tasks. The Analysis Engine performs in-depth data analytics to generate insights from the data. The Correlation Engine [100n] identifies and understands the relations and patterns within the data. The Service Quality Manager assesses and ensures the quality of the services, and the Streaming Engine processes and analyses the real-time data feeds. In essence, the Computation Layer [100d] is where all major computation and data processing tasks occur. It uses the normalized data provided by the Normalization Layer [100b], processing it to generate useful insights, ensure service quality, understand data patterns, and facilitate real-time data analytics.
[0070]
Message broker [100e]: The Message Broker [100e], an integral part of the Integrated Performance Management system, operates as a publish-subscribe messaging system. It orchestrates and maintains the real-time flow of data from various sources and applications. At its core, the Message Broker [100e] facilitates communication between data producers and consumers through message-based topics. This creates an advanced platform for contemporary distributed applications. With the ability to accommodate a large number of permanent or ad-hoc consumers, the Message Broker [100e] demonstrates immense flexibility in managing data streams. Moreover, it leverages the filesystem for storage and caching, boosting its speed and efficiency. The design of the Message Broker [100e] is centred around reliability. It is engineered to be fault-tolerant and to mitigate data loss, ensuring the integrity and consistency of the data. With its robust design and capabilities, the Message Broker [100e] forms a key component in managing and delivering real-time data in the system.
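The topic-based producer/consumer flow described above can be illustrated with a toy in-memory broker. This sketch deliberately omits the persistence and fault tolerance attributed to the real Message Broker [100e]; it only shows the publish-subscribe pattern.

```python
from collections import defaultdict

# Toy in-memory publish-subscribe broker (illustrative only).
class Broker:
    def __init__(self):
        self._subscribers = defaultdict(list)  # topic -> list of callbacks

    def subscribe(self, topic: str, callback):
        """Register a consumer callback for a topic."""
        self._subscribers[topic].append(callback)

    def publish(self, topic: str, message):
        """Deliver the message to every consumer of the topic."""
        for callback in self._subscribers[topic]:
            callback(message)
```

Any number of permanent or ad-hoc consumers can attach to a topic by calling `subscribe`, which is the flexibility the paragraph highlights.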
[0071]
Graph layer [100f]: The Graph Layer [100f], serving as the Relationship Modeler, plays a pivotal role in the Integrated Performance Management system. It can model a variety of data types, including Alarm, Counter, Configuration, CDR data, Infra-metric data, Probe data (which may be, for example, data related to a 4G, 5G, or 6G network, although other communication networks are also possible), and Inventory data. Equipped with the capability to establish relationships among diverse types of data, the Relationship Modeler offers extensive modelling capabilities. For instance, it can model Alarm and Counter data, or Vprobe and Alarm data, elucidating their interrelationships. Moreover, the Modeler is adept at processing the steps provided in the model and delivering the results to the requesting system, whether it be a Parallel Computing system, Workflow Engine, Query Engine, Correlation System [100n], Performance Management Engine, or KPI Engine [100w]. With its powerful modelling and processing capabilities, the Graph Layer [100f] forms an essential part of the system, enabling the processing and analysis of complex relationships between various types of network data.
[0072]
Scheduling layer [100g]: The Scheduling Layer [100g] serves as a key element of the Integrated Performance Management System, endowed with the ability to execute tasks at predetermined intervals set according to user preferences. A task might be an activity performing a service call, an API call to another microservice, or the execution of an Elastic Search query and storing its output in the Distributed Data Lake [100u] or Distributed File System or sending it to another microservice. The versatility of the Scheduling Layer [100g] extends to facilitating graph traversals via the Mapping Layer to execute tasks. This crucial capability enables seamless and automated operations within the system, ensuring that various tasks and services are performed on schedule, without manual intervention, enhancing the system's efficiency and performance. In sum, the Scheduling Layer [100g] orchestrates the systematic and periodic execution of tasks, making it an integral part of the efficient functioning of the entire system.
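Periodic execution at predetermined intervals can be sketched with the standard-library scheduler. The placeholder task below stands in for the service calls, API calls, or queries the layer would actually dispatch.

```python
import sched
import time

# Hypothetical periodic dispatch in the spirit of the Scheduling Layer [100g].
def schedule_periodic(scheduler, interval, task, runs):
    """Run `task` every `interval` seconds, `runs[0]` times in total."""
    def wrapper():
        task()
        if runs[0] > 1:
            runs[0] -= 1
            scheduler.enter(interval, 1, wrapper)  # re-arm for the next run
    scheduler.enter(interval, 1, wrapper)

results = []
s = sched.scheduler(time.monotonic, time.sleep)
schedule_periodic(s, 0.01, lambda: results.append("ran"), runs=[3])
s.run()  # blocks until all scheduled runs complete
```

A production scheduler would run continuously and dispatch asynchronously rather than blocking; this sketch only shows the interval-driven re-arming.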
[0073]
Analysis Engine [100h]: The Analysis Engine [100h] forms a crucial part of the Integrated Performance Management System, designed to provide an environment where users can configure and execute workflows for a wide array of use-cases. This facility aids in the debugging process and facilitates a better understanding of call flows. With the Analysis Engine [100h], users can perform queries on data sourced from various subsystems or external gateways. This capability allows for an in-depth overview of data and aids in pinpointing issues. The system's flexibility allows users to configure specific policies aimed at identifying anomalies within the data. When these policies detect abnormal behaviour or policy breaches, the system sends notifications, ensuring swift and responsive action. In essence, the Analysis Engine [100h] provides a robust analytical environment for systematic data interrogation, facilitating efficient problem identification and resolution, thereby contributing significantly to the system's overall performance management.
[0074]
Parallel Computing Framework [100i]: The Parallel Computing Framework [100i] is a key aspect of the Integrated Performance Management System, providing a user-friendly yet advanced platform for executing computing tasks in parallel. This framework showcases both scalability and fault tolerance, crucial for managing vast amounts of data. Users can input data via Distributed File System (DFS) [100j] locations or Distributed Data Lake (DDL) indices. The framework supports the creation of task chains by interfacing with the Service Configuration Management (SCM) Sub-System. Each task in a workflow is executed sequentially, but multiple chains can be executed simultaneously, optimizing processing time. To accommodate varying task requirements, the service supports the allocation of specific host lists for different computing tasks. The Parallel Computing Framework [100i] is an essential tool for enhancing processing speeds and efficiently managing computing resources, significantly improving the system's performance management capabilities.
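The execution model described above, tasks within a chain run sequentially while separate chains run concurrently, can be sketched as follows. The chain contents are illustrative placeholders.

```python
from concurrent.futures import ThreadPoolExecutor

# Sketch of sequential-within-chain, parallel-across-chains execution.
def run_chain(chain):
    """Execute a chain's tasks in order, feeding each task the last result."""
    result = None
    for task in chain:
        result = task(result)
    return result

def run_chains(chains):
    """Execute multiple chains concurrently and collect their results."""
    with ThreadPoolExecutor() as pool:
        return list(pool.map(run_chain, chains))

# Two illustrative chains of callables.
chain_a = [lambda _: 2, lambda x: x * 10]
chain_b = [lambda _: 5, lambda x: x + 1]
```

In the real framework, each task might be a service call or query rather than a lambda, and chains could be pinned to specific host lists.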
[0075]
Distributed File System [100j]: The Distributed File System (DFS) [100j] is a component of the Integrated Performance Management System, enabling multiple clients to access and interact with data seamlessly. This file system is designed to manage data files that are partitioned into numerous segments known as chunks. In the context of a network with vast data, the DFS [100j] effectively allows for the distribution of data across multiple nodes. This architecture enhances both the scalability and redundancy of the system, ensuring optimal performance even with large data sets. DFS [100j] also supports diverse operations, facilitating the flexible interaction with and manipulation of data. This accessibility is paramount for a system that requires constant data input and output, as is the case in a robust performance management system.
[0076]
Load Balancer [100k]: The Load Balancer (LB) [100k] is a vital component of the Integrated Performance Management System, designed to efficiently distribute incoming network traffic across a multitude of backend servers or microservices. Its purpose is to ensure the even distribution of data requests, leading to optimized server resource utilization, reduced latency, and improved overall system performance. The LB [100k] implements various routing strategies to manage traffic. These include round-robin scheduling, header-based request dispatch, and context-based request dispatch. Round-robin scheduling is a simple method of rotating requests evenly across available servers. In contrast, header- and context-based dispatching allow for more intelligent, request-specific routing. Header-based dispatching routes requests based on data contained within the headers of the Hypertext Transfer Protocol (HTTP) requests. Context-based dispatching routes traffic based on the contextual information about the incoming requests. For example, in an event-driven architecture, the LB [100k] manages events and event acknowledgments, forwarding requests or responses to the specific microservice that has requested the event. This system ensures efficient, reliable, and prompt handling of requests, contributing to the robustness and resilience of the overall performance management system.
[0077]
Streaming Engine [100l]: The Streaming Engine [100l], also referred to as Stream Analytics, is a critical subsystem in the Integrated Performance Management System. This engine is specifically designed for high-speed data pipelining to the User Interface (UI). Its core objective is to ensure real-time data processing and delivery, enhancing the system's ability to respond promptly to dynamic changes. Data is received from various connected subsystems and processed in real-time by the Streaming Engine [100l]. After processing, the data is streamed to the UI, fostering rapid decision-making and responses. The Streaming Engine [100l] cooperates with the Distributed Data Lake [100u], Message Broker [100e], and Caching Layer [100c] to provide seamless, real-time data flow. Stream Analytics is designed to perform required computations on incoming data instantly, ensuring that the most relevant and up-to-date information is always available at the UI. Furthermore, this system can also retrieve data from the Distributed Data Lake [100u], Message Broker [100e], and Caching Layer [100c] as per the requirement and deliver it to the UI in real-time. The Streaming Engine's [100l] ultimate goal is to provide fast, reliable, and efficient data streaming, contributing to the overall performance of the management system.
[0078]
Reporting Engine [100m]: The Reporting Engine [100m] is a key subsystem of the Integrated Performance Management System. The fundamental purpose of designing the Reporting Engine [100m] is to dynamically create report layouts of API data, catered to individual client requirements, and deliver these reports via the Notification Engine (not shown). The Reporting Engine [100m] serves as the primary interface for creating custom reports based on the data visualized through the client's dashboard. These custom dashboards, created by the client through the User Interface (UI), provide the basis for the Reporting Engine [100m] to process and compile data from various interfaces. The main output of the Reporting Engine [100m] is a detailed report generated in Excel format. The Reporting Engine's [100m] unique capability to parse data from different subsystem interfaces, process it according to the client's specifications and requirements, and generate a comprehensive report makes it an essential component of this performance management system. Furthermore, the Reporting Engine [100m] integrates seamlessly with the Notification Engine (not shown) to ensure timely and efficient delivery of reports to clients via email, ensuring the information is readily accessible and usable, thereby improving overall client satisfaction and system usability.
[0079]
FIG. 2 illustrates an exemplary block diagram of a computing device [200] upon which the features of the present disclosure may be implemented in accordance with an exemplary implementation of the present disclosure. In an implementation, the computing device [200] may also implement a method for analysis of key performance indicators utilising the system. In another implementation, the computing device [200] itself implements the method for analysis of key performance indicators using one or more units configured within the computing device [200], wherein said one or more units are capable of implementing the features as disclosed in the present disclosure.
[0080]
The computing device [200] may include a bus [202] or other communication mechanism for communicating information, and a hardware processor [204] coupled with the bus [202] for processing information. The hardware processor [204] may be, for example, a general-purpose microprocessor. The computing device [200] may also include a main memory [206], such as a random-access memory (RAM), or other dynamic storage device, coupled to the bus [202] for storing information and instructions to be executed by the processor [204]. The main memory [206] also may be used for storing temporary variables or other intermediate information during execution of the instructions to be executed by the processor [204]. Such instructions, when stored in non-transitory storage media accessible to the processor [204], render the computing device [200] into a special-purpose machine that is customized to perform the operations specified in the instructions. The computing device [200] further includes a read only memory (ROM) [208] or other static storage device coupled to the bus [202] for storing static information and instructions for the processor [204].
[0081]
A storage device [210], such as a magnetic disk, optical disk, or solid-state drive is provided and coupled to the bus [202] for storing information and instructions. The computing device [200] may be coupled via the bus [202] to a display [212], such as a cathode ray tube (CRT), Liquid Crystal Display (LCD), Light Emitting Diode (LED) display, Organic LED (OLED) display, etc. for displaying information to a computer user. An input device [214], including alphanumeric and other keys, touch screen input means, etc. may be coupled to the bus [202] for communicating information and command selections to the processor [204]. Another type of user input device may be a cursor controller [216], such as a mouse, a trackball, or cursor direction keys, for communicating direction information and command selections to the processor [204], and for controlling cursor movement on the display [212]. This input device typically has two degrees of freedom in two axes, a first axis (e.g., x) and a second axis (e.g., y), that allow the device to specify positions in a plane.
[0082]
The computing device [200] may implement the techniques described herein using customized hard-wired logic, one or more ASICs or FPGAs, firmware, and/or program logic which in combination with the computing device [200] causes or programs the computing device [200] to be a special-purpose machine. According to one implementation, the techniques herein are performed by the computing device [200] in response to the processor [204] executing one or more sequences of one or more instructions contained in the main memory [206]. Such instructions may be read into the main memory [206] from another storage medium, such as the storage device [210]. Execution of the sequences of instructions contained in the main memory [206] causes the processor [204] to perform the process steps described herein. In alternative implementations of the present disclosure, hard-wired circuitry may be used in place of or in combination with software instructions.
[0083]
The computing device [200] also may include a communication interface [218] coupled to the bus [202]. The communication interface [218] provides a two-way data communication coupling to a network link [220] that is connected to a local network [222]. For example, the communication interface [218] may be an integrated services digital network (ISDN) card, cable modem, satellite modem, or a modem to provide a data communication connection to a corresponding type of telephone line. As another example, the communication interface [218] may be a local area network (LAN) card to provide a data communication connection to a compatible LAN. Wireless links may also be implemented. In any such implementation, the communication interface [218] sends and receives electrical, electromagnetic, or optical signals that carry digital data streams representing various types of information.
[0084]
The computing device [200] can send messages and receive data, including program code, through the network(s), the network link [220] and the communication interface [218]. In the Internet example, a server [230] might transmit a requested code for an application program through the Internet [228], the ISP [226], the local network [222], host [224] and the communication interface [218]. The received code may be executed by the processor [204] as it is received, and/or stored in the storage device [210], or other non-volatile storage for later execution.
[0085]
The computing device [200] encompasses a wide range of electronic devices capable of processing data and performing computations. Examples of computing devices [200] include, but are not limited to, personal computers, laptops, tablets, smartphones, servers, and embedded systems. The devices may operate independently or as part of a network and can perform a variety of tasks such as data storage, retrieval, and analysis. Additionally, computing devices [200] may include peripheral devices, such as monitors, keyboards, and printers, as well as integrated components within larger electronic systems, showcasing their versatility in various technological applications.
[0086]
Referring to FIG. 3, an exemplary block diagram of a system [300] for analysis of key performance indicators is shown, in accordance with the exemplary implementations of the present disclosure. The system [300] comprises at least one receiving unit [302], at least one user interface [304], at least one retrieving unit [306], at least one integrated performance module [100a], at least one computing unit [310], at least one processing unit [312], at least one transmitting unit [314], at least one database [316] and at least one displaying unit [318]. Also, all of the components/units of the system [300] are assumed to be connected to each other unless otherwise indicated below. As shown in the figures, all units shown within the system should also be assumed to be connected to each other. Also, in FIG. 3 only a few units are shown; however, the system [300] may comprise multiple such units, or any number of said units, as required to implement the features of the present disclosure. Further, in an implementation, the system [300] may be present in a user device to implement the features of the present disclosure. The system [300] may be a part of the user device, or may be independent of, but in communication with, the user device (which may also be referred to herein as a UE). In another implementation, the system [300] may reside in a server or a network entity. In yet another implementation, the system [300] may reside partly in the server/network entity and partly in the user device.
[0087]
The system [300] is configured for analysis of key performance indicators, with the help of the interconnection between the components/units of the system [300].
[0088]
The system [300] comprises a receiving unit [302] configured to receive, from a user interface (UI) [304], a request for a set of KPIs to be determined. The receiving unit [302] receives the request from a user via the user interface associated with a user device (such as a mobile, a smart phone, a laptop, etc.). The user interface (UI) [304] acts as the point of interaction where users can define their requirements, such as selecting specific KPIs, setting time ranges, and choosing the level of detail needed for their analysis. In an exemplary aspect, when the user inputs the request through the UI [304], the receiving unit [302] processes this received input for determining the set of KPIs that need to be analysed, enabling effective tracking and assessment of performance metrics. The set of KPIs may include, but is not limited to, at least one of a call drop rate, a mute call rate, an IP throughput, a cell effective throughput, a handover success rate, a session setup success rate, and an attach signalling failure rate. For example, a network administrator who is responsible for monitoring the performance of a network uses the UI [304] to request data on specific KPIs such as "network latency," "packet loss," and "throughput." The administrator may also specify that these KPIs should be analysed over the last 24 hours with data aggregated at an hourly level. Upon submitting this request via the UI [304], the receiving unit [302] captures all these details and prepares them for further processing.
[0089]
The input from the user may be of different types. The type of inputs can include at least one of a command, a code, or a natural language query. A natural language input could be something like, "Show me the network performance over the past 24 hours," where the system interprets the user's request and automatically determines the necessary KPIs, such as "network uptime" and "latency," and the appropriate time granularity.
[0090]
The system [300] comprises a retrieving unit [306] configured to retrieve, via an integrated performance management (IPM) module [100a], data related to the requested set of KPIs. The retrieving unit [306] uses the integrated performance management (IPM) module [100a] to fetch/retrieve the data related to the requested KPIs. Upon receiving the user's KPI request, the retrieving unit [306] interacts with the IPM module [100a] to access and collect the necessary data. This retrieving unit [306] is responsible for accessing the necessary data that corresponds to the KPIs requested by the user through the user interface (UI). The IPM module [100a] acts as an intermediary, facilitating the retrieval process by coordinating with various data sources and ensuring that the appropriate data is fetched for analysis. For example, a network administrator requests data on KPIs such as "network uptime" and "bandwidth utilization" over the past month. Once the request is received by the receiving unit [302], the retrieving unit [306] utilises the IPM module [100a] to locate and retrieve the relevant data from the system's databases or data lakes.
[0091]
In an exemplary aspect, the retrieving unit [306] also handles scenarios where the data related to the request may need to be processed or filtered before being returned. For example, if the requested KPI involves a specific subset of data, such as "average response time" for users in a particular geographic location, the IPM module [100a] will guide the retrieving unit [306] to apply the necessary filters during the retrieval process. This ensures that the data provided is both relevant and accurate, tailored specifically to the parameters set by the user.
[0092]
The data related to the requested KPIs is at least stored in a database [316]. To retrieve data related to the requested set of KPIs, the retrieving unit [306] is configured to determine the time extent of the requested set of KPIs. The time extent of the requested set of KPIs may be analysed at different levels of granularity in terms of time, such as seconds, minutes, hourly, daily, weekly, or monthly.
[0093]
In an exemplary aspect, if the time extent is less than or equal to a retention period of the database, the retrieving unit [306] is configured to retrieve, from the database [316], the data related to the requested set of KPIs. After analysing the time extent (e.g., seconds, minutes, hours, weeks, etc.), the retrieving unit [306], via the integrated performance management (IPM) module [100a], determines if the time extent is less than or equal to the retention period of the database. In an exemplary aspect, the retrieving unit [306] fetches data related to the set of KPIs by first assessing the time extent specified in the request. If this time extent falls within the database's retention period, the retrieving unit [306] retrieves the relevant data directly from the database [316]. This ensures that the data is current and within the permissible storage duration, facilitating accurate KPI analysis.
[0094]
In an exemplary aspect, if the time extent is greater than the retention period of the database, the computing unit [310] is configured to compute, via a computation layer (CL), the data related to the requested set of KPIs. The time extent refers to the specific duration over which the data is to be retrieved, such as hours, days, or weeks. In an exemplary aspect, if the selected time frame is greater than the retention period of the database, or in case complex queries are involved making the time extent greater than the retention period of the database, the retrieving unit [306] via the IPM module [100a] may then forward the request to the computing unit [310] via the computation layer (CL). In an exemplary aspect, the computing unit [310] via the CL may compute the request and send the results back to the IPM module [100a]. For instance, if a user requests KPIs covering a six-month period, but the database only retains detailed data for the past three months, the retrieving unit [306] will pass the request to the computing unit [310], which will then use the computation layer to reconstruct or estimate the KPI data from summary records or archived data.
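The retrieval decision described in the two paragraphs above reduces to a simple comparison, sketched below. The 90-day retention period is an assumed value for illustration only.

```python
# Hypothetical routing of a KPI request by the retrieving unit [306]:
# requests within the database's retention period are served from the
# database [316]; longer extents go to the computation layer (CL).
RETENTION_DAYS = 90  # assumed retention period of the database

def route_kpi_request(time_extent_days: int) -> str:
    """Decide where the data for the requested time extent is obtained."""
    if time_extent_days <= RETENTION_DAYS:
        return "database"
    return "computation_layer"
```

In the six-month example above, `route_kpi_request(180)` would send the request onward to the computation layer for reconstruction from summary or archived records.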
[0095]
The system further comprises a computing unit [310] configured to compute, via the Integrated Performance Management (IPM) module [100a], based on the retrieved data, the set of KPIs. This computing unit [310] is responsible for processing the data that has been retrieved by the retrieving unit [306] and performing the necessary calculations to generate the KPIs requested by the user. For example, a network administrator has requested the KPI "average network latency" over a specific time period. After the relevant data is retrieved, the computing unit [310] uses this data to calculate the average latency. The IPM module [100a] manages this computation, ensuring that the data is aggregated correctly over the specified time frame and necessary filters or conditions are applied. The result is an accurate KPI that reflects the network's performance during the period of interest.
[0096]
In an exemplary aspect, the computing unit [310] via the Integrated Performance Management (IPM) module [100a] is responsible for computing and data processing of the set of KPIs based on the retrieved data. It receives instructions from the retrieving unit [306] via the integrated performance management (IPM) module [100a] and operates on the data stored in the database, such as but not limited to the Distributed Data Lake (DDL) and the Distributed File System (DFS).
[0097]
The system [300] comprises a processing unit [312] configured to generate, based on the received request, an output dataset comprising the computed set of KPIs. The processing unit [312] is configured for assembling the output dataset that reflects the results of the computations performed by the computing unit [310]. Once the set of KPIs has been calculated, the processing unit [312] organizes this data into a structured format that can be easily interpreted and used by the user. For example, if a network administrator has requested KPIs such as "average response time" and "error rate" for the past 24 hours, the processing unit [312] takes the computed values and generates an output dataset that includes these KPIs. The output dataset may be organized in a way that aligns with the user's original request, perhaps presenting the data in a time-series format or as summarized statistics. The output dataset may also include visual elements, such as graphs or charts, to enhance the user's ability to analyse the results quickly. The output dataset comprises a report indicating a behavioural trend of the computed set of KPIs over a predefined duration of time. The output dataset may include a detailed report that highlights the behavioural trends of the computed KPIs over a predefined/specified duration of time. As used herein, behavioural trends refer to the patterns or changes in key performance indicators (KPIs) over time. These trends show how KPIs fluctuate, improve, or decline, providing insights into performance dynamics and operational effectiveness. By analysing these trends, the user is able to identify underlying factors influencing the performance of KPIs, such as operational changes or market conditions, allowing for more informed decision-making and strategic planning in dealing with network performance related issues.
[0098]
The report is indicated in one or more predefined formats, and wherein the output dataset is configured to be manipulated to indicate the computed set of KPIs according to at least a set of aggregation parameters related to the set of KPIs. The set of aggregation parameters can be either pre-defined within the system or specified as part of the user's request. The aggregation parameters are one or more time hierarchy parameters related to the predefined duration of time. The report generated by the processing unit [312] is indicated in one or more predefined formats, allowing flexibility in how the KPI data is viewed or displayed to the user. In an exemplary aspect, predefined formats are such as but not limited to charts, tables, graphs, etc.
[0099]
The report is presented in one or more predefined formats, such as tables, graphs, charts, or dashboards, depending on the user's preferences or the system's configuration. Additionally, the output dataset is designed to be flexible, allowing users to customise the report according to various aggregation parameters related to the KPIs. The users can adjust the granularity of the data, for instance, viewing it aggregated by hour, day, week, or month, depending on the level of detail they require for their analysis. Such customization enables users to explore different perspectives of the KPI data, helping them identify patterns, trends, or anomalies across different time periods, and ultimately supporting more informed decision-making.
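Aggregation by a time-hierarchy parameter, as in the hourly/daily/weekly views described above, can be sketched as a plain-Python rollup. Timestamps here are epoch seconds and the averaging is an assumed aggregation function.

```python
from collections import defaultdict

# Hypothetical rollup of KPI samples into time buckets at a chosen
# granularity, illustrating the time-hierarchy aggregation parameters.
BUCKET_SECONDS = {"hour": 3600, "day": 86400, "week": 604800}

def aggregate(samples, granularity: str):
    """Average KPI values per time bucket at the chosen granularity.

    `samples` is an iterable of (epoch_seconds, value) pairs; the result
    maps each bucket's start time to the mean of its values.
    """
    size = BUCKET_SECONDS[granularity]
    buckets = defaultdict(list)
    for ts, value in samples:
        buckets[ts // size * size].append(value)
    return {start: sum(v) / len(v) for start, v in buckets.items()}
```

Re-running the same samples at a coarser granularity collapses more samples into each bucket, which is exactly the "different perspectives" customization the paragraph describes.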
[0100]
The output dataset can be manipulated according to a specified set of aggregation parameters, which include time hierarchy parameters linked to the predefined duration. This means that the KPI data can be grouped or summarized based on various time periods, such as daily, weekly, or monthly, providing a clearer view of performance trends across different time scales.
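As a non-limiting illustration of the time-hierarchy grouping described above, KPI samples can be re-aggregated into hourly and daily buckets. The column names (`timestamp`, `latency_ms`), the constant sample values, and the use of pandas are assumptions of this sketch, not part of the disclosure:

```python
import pandas as pd

# Hypothetical KPI samples at minute granularity (names are illustrative).
samples = pd.DataFrame({
    "timestamp": pd.date_range("2024-01-01", periods=2880, freq="min"),
    "latency_ms": 40.0,
})

# Roll the data up along the time hierarchy: hourly and daily averages.
hourly = samples.resample("h", on="timestamp")["latency_ms"].mean()
daily = samples.resample("D", on="timestamp")["latency_ms"].mean()

print(len(hourly), len(daily))  # 48 hourly buckets, 2 daily buckets
```

The same resampling call with a different frequency string yields any other level of the hierarchy, which is what makes drill-down and roll-up cheap to serve.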
[0101]
The processing unit [312] further comprises a learning engine [312a] comprising an artificial intelligence (AI)/machine learning (ML) model, and wherein the learning engine [312a] is configured to perform at least one of: translating the received request to a predefined format compatible with at least one of the CL and the database [316], and generating, based on the received request, an output dataset comprising the computed set of KPIs. The learning engine, using the AI/ML model, processes the user's queries or requests in natural language and translates them into a predefined format to make them compatible with at least one of the computation layer (CL) and the database.
[0102]
The learning engine [312a] is configured to interpret user requests that may come in various formats, including natural language queries, command-based inputs, or predefined templates. For example, a user might input a natural language query such as, "Give me the daily sales data for the last quarter, broken down by region." The AI/ML model within the learning engine [312a] is trained to understand such queries and translate them into structured commands that can be processed by the underlying systems. This may involve converting the query into a SQL query for direct database retrieval or formulating a request compatible with the computation layer (CL) for more complex calculations. In an exemplary aspect, the learning engine [312a] also has the capability to disambiguate vague requests, apply context-aware processing, and optimize queries for performance to enable the system to retrieve the relevant data.
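A minimal sketch of the translation step described above is given below. It uses simple keyword and pattern rules as a stand-in for the trained AI/ML model, and the structured field names (`granularity`, `window`) are hypothetical:

```python
import re

# Simplified rule-based stand-in for the learning engine's translation step;
# a deployed system would use a trained AI/ML model instead of fixed patterns.
GRANULARITIES = {"hourly": "hour", "daily": "day", "weekly": "week"}

def translate(query: str) -> dict:
    """Map a natural language query to a structured, CL/database-compatible request."""
    gran = next((v for k, v in GRANULARITIES.items() if k in query.lower()), "day")
    m = re.search(r"last (\d+) (hours|days|weeks)", query.lower())
    window = f"{m.group(1)} {m.group(2)}" if m else "24 hours"
    return {"granularity": gran, "window": window}

print(translate("Give me the daily sales data for the last 90 days"))
# {'granularity': 'day', 'window': '90 days'}
```

The structured dictionary plays the role of the "predefined format" that the downstream CL or database query builder consumes.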
[0103]
The learning engine [312a] is further configured to generate the output dataset. The output dataset is composed of the computed KPIs organized according to the user's specifications. For example, if the user requested a time-series analysis of sales data, the learning engine [312a] might generate a dataset that includes the sales figures organized by day, with additional columns representing regional performance. The AI/ML model can also apply advanced data formatting techniques, such as sorting, filtering, and aggregating data based on predefined rules or learned user preferences. The output dataset can then be formatted for visualization in graphs, tables, or downloadable reports, depending on how the user intends to interact with the results.
[0104]
The AI/ML model integrated within the learning engine [312a] is trained on a dataset comprising historical user queries, system performance logs, and a vast array of domain-specific data relevant to the KPIs being analysed. This training data includes examples of natural language queries, command-based requests, and various patterns of user interactions with the system, allowing the model to recognize common query structures and anticipate user needs. Additionally, the AI/ML model is trained on large datasets of historical KPI values and their relationships, enabling it to understand the context and importance of various metrics. The training process also involves learning from real-time system usage data, which helps the model continuously refine its ability to interpret ambiguous requests, optimize query performance, and generate accurate, relevant output datasets.
[0105]
The transmitting unit [314] is further configured to transmit, to the UI [304], the generated output dataset. The transmitting unit [314] transmits the generated output dataset to the UI [304] such that the output dataset can be further analysed by the user to facilitate identifying underlying factors influencing performance of KPIs, such as operational changes, network conditions, etc., allowing for more informed decision-making and strategic planning in dealing with network performance related issues. The transmitting unit [314] enables the user to access the output dataset, which was generated by the processing unit [312], through the UI [304]. The transmitting unit [314] adapts to various data formats, depending on the user's preferences. For example, if the user requested visualizations such as a line chart showing network performance over a specific period, the transmitting unit [314] delivers the dataset in a format that supports this visual representation on the UI [304]. Similarly, if the user needs a downloadable CSV file with aggregated sales data, the transmitting unit [314] formats the data accordingly and transmits it in a way that allows the user to download or further analyse the information.
[0106]
In an exemplary aspect, in cases where performance is a concern, such as transmitting large datasets or real-time data streams, the transmitting unit [314] manages the data transfer efficiently. For example, during real-time monitoring of the set of KPIs, the transmitting unit [314] may split the data into manageable portions or apply data compression techniques to facilitate a smooth and uninterrupted transmission to the UI [304].
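The chunking-plus-compression strategy described above could be sketched as follows; the 64 KiB chunk size and the use of `zlib` are illustrative assumptions rather than features of the disclosure:

```python
import zlib

def send_in_chunks(payload: bytes, chunk_size: int = 64 * 1024):
    """Compress the output dataset and yield fixed-size chunks for transmission."""
    compressed = zlib.compress(payload)
    for i in range(0, len(compressed), chunk_size):
        yield compressed[i : i + chunk_size]

# Receiver side: reassemble the chunks and decompress to recover the dataset.
data = b'{"kpi": "latency", "values": [12, 11, 13]}' * 1000
received = b"".join(send_in_chunks(data))
assert zlib.decompress(received) == data
```

Repetitive KPI payloads such as the one above compress well, which is why compression pairs naturally with chunked delivery for large or streaming datasets.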
[0107]
The system [300] further comprises a displaying unit [318] configured to provide, at the UI [304], a dashboard, and wherein the dashboard is configured to display the output dataset. The displaying unit [318] at the UI [304] provides the dashboard for displaying the output dataset, which includes a set of aggregation parameters related to the selected Key Performance Indicators (KPIs). These aggregation parameters allow the data to be summarized and visualized over specific time intervals, facilitating the implementation of drill-down and roll-up features. In an exemplary aspect, aggregation enables the system to present data in a way that makes trends and patterns more apparent across different timeframes, helping users to analyse performance at varying levels of detail. Drill-down functionality allows users to view data at increasingly detailed levels within a chosen time hierarchy. For instance, if the user initially selects an "hourly" time hierarchy, they can use the drill-down feature to examine the data at more granular intervals, such as "minute" or "second" levels. This capability is particularly useful when a user needs to pinpoint specific events or anomalies that might be obscured in broader timeframes. For example, a network administrator monitoring hourly KPIs might drill down to minute-level data to investigate a sudden spike in latency. Conversely, the roll-up feature provides the option to view data with less granularity, summarizing it at a higher level within the selected time hierarchy. By using the roll-up option, the user can aggregate data to broader time intervals, such as "daily" or "weekly," which helps in identifying long-term trends or overarching patterns that may not be visible at more detailed levels. For example, after reviewing minute-level data for network performance issues, the user may choose to roll up the data to observe overall daily performance trends.
[0108]
Referring to FIG. 4, an exemplary method flow diagram [400] for analysis of key performance indicators (KPIs) in accordance with exemplary implementations of the present disclosure is shown. In an implementation, the method [400] is performed by the system [300]. Further, in an implementation, the system [300] may be present in a server device to implement the features of the present disclosure. Also, as shown in FIG. 4, the method [400] starts at step [402].
[0109]
At step [404], the method [400] comprises receiving, by a receiving unit [302] from a user interface (UI) [304], a request for a set of KPIs to be determined. The receiving unit [302] receives the request from a user via the user interface associated with a user device (such as a mobile phone, a smart phone, a laptop, etc.). The user interface (UI) [304] acts as the point of interaction where users can define their requirements, such as selecting specific KPIs, setting time ranges, and choosing the level of detail needed for their analysis. In an exemplary aspect, when the user inputs the request through the UI [304], the receiving unit [302] processes this received input for determining the set of KPIs that need to be analysed, enabling effective tracking and assessment of performance metrics. The set of KPIs may include, but is not limited to, at least one of a call drop rate, a mute call rate, an IP throughput, a cell effective throughput, a handover success rate, a session setup success rate, and an attach signalling failure rate. For example, a network administrator who is responsible for monitoring the performance of a network uses the UI [304] to request data on specific KPIs such as "network latency," "packet loss," and "throughput." The administrator may also specify that these KPIs should be analysed over the last 24 hours with data aggregated at an hourly level. Upon submitting this request via the UI [304], the receiving unit [302] captures all these details and prepares them for further processing.
[0110]
The input from the user may be of different types. The type of inputs can include at least one of a command, a code, or a natural language query. A natural language input could be something like, "Show me the network performance over the past 24 hours," where the system interprets the user's request and automatically determines the necessary KPIs, such as "network uptime" and "latency," and the appropriate time granularity.
[0111]
At step [406], the method [400] comprises retrieving, by a retrieving unit [306] via an integrated performance management (IPM) module [100a], data related to the requested set of KPIs. The retrieving unit [306] uses the IPM module [100a] to fetch the data related to the requested set of KPIs. Upon receiving the user's KPI request, the retrieving unit [306] interacts with the IPM module [100a] to access and collect the necessary data. This retrieving unit [306] is responsible for accessing the data that corresponds to the KPIs requested by the user through the user interface (UI). The IPM module [100a] acts as an intermediary, facilitating the retrieval process by coordinating with various data sources and ensuring that the appropriate data is fetched for analysis. For example, a network administrator requests data on KPIs such as "network uptime" and "bandwidth utilization" over the past month. Once the request is received by the receiving unit [302], the retrieving unit [306] utilises the IPM module [100a] to locate and retrieve the relevant data from the system's databases or data lakes. In an exemplary aspect, the retrieving unit [306] also handles scenarios where the data related to the request may need to be processed or filtered before being returned. For example, if the requested KPI involves a specific subset of data, such as "average response time" for users in a particular geographic location, the IPM module [100a] will guide the retrieving unit [306] to apply the necessary filters during the retrieval process. This ensures that the data provided is both relevant and accurate, tailored specifically to the parameters set by the user.
[0112]
In an exemplary aspect, the data related to the requested KPIs is at least stored in a database [316]. In the method [400], retrieving, by the retrieving unit [306], data related to the requested set of KPIs further comprises determining, by the retrieving unit [306], a time extent of the requested set of KPIs. The time extent of the requested set of KPIs may be analysed at different levels of granularity in terms of time, such as seconds, minutes, hourly, daily, weekly, or monthly.
[0113]
In an exemplary aspect, if the time extent is less than or equal to a retention period of the database, the method comprises retrieving, by the retrieving unit [306], from the database [316], the data related to the requested set of KPIs. After analysing the time extent (e.g., seconds, minutes, hours, weekly, etc.), the retrieving unit [306] via the integrated performance management (IPM) module [100a] determines if the extent is less than or equal to a retention period of the database. In an exemplary aspect, the retrieving unit [306] fetches data related to the set of KPIs by first assessing the time extent specified in the request. If this time extent falls within the database's retention period, the retrieving unit [306] retrieves the relevant data directly from the database [316]. This ensures that the data is current and within the permissible storage duration, facilitating accurate KPI analysis.
[0114]
In an exemplary aspect, if the time extent is greater than the retention period of the database, the computing unit [310] is configured to compute, via a computation layer (CL), the data related to the requested set of KPIs. The time extent refers to the specific duration over which the data is to be retrieved, such as hours, days, or weeks. In an exemplary aspect, if the selected time frame is greater than the retention period of the database, or in case complex queries are involved making the time extent greater than the retention period of the database, the retrieving unit [306] via the IPM module [100a] may then forward the request to the computing unit [310] via the computation layer (CL). In an exemplary aspect, the computing unit [310] via the CL may compute the request and send the results back to the IPM module [100a]. For instance, if a user requests KPIs covering a six-month period, but the database only retains detailed data for the past three months, the retrieving unit [306] will pass the request to the computing unit [310], which will then use the computation layer to reconstruct or estimate the KPI data from summary records or archived data.
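The retention-period check described in this and the preceding paragraph reduces to a short routing decision. In this sketch, the 90-day retention value and the function name are hypothetical:

```python
from datetime import timedelta

RETENTION = timedelta(days=90)  # assumed database retention period

def fetch_kpi_data(time_extent: timedelta) -> str:
    """Route retrieval: database when within retention, computation layer otherwise."""
    if time_extent <= RETENTION:
        return "database"          # retrieving unit reads directly from the database
    return "computation_layer"     # CL reconstructs KPIs from archived/summary data

assert fetch_kpi_data(timedelta(days=30)) == "database"
assert fetch_kpi_data(timedelta(days=180)) == "computation_layer"
```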
[0115]
At step [408], the method [400] comprises computing, by a computing unit [310] via an Integrated Performance Management (IPM) module [100a], based on the retrieved data, the set of KPIs. The computing unit [310] is responsible for processing the data that has been retrieved by the retrieving unit [306] and performing the necessary calculations to generate the KPIs requested by the user. For example, a network administrator has requested the KPI "average network latency" over a specific time period. After the relevant data is retrieved, the computing unit [310] uses this data to calculate the average latency. The IPM module [100a] manages this computation, ensuring that the data is aggregated correctly over the specified time frame and necessary filters or conditions are applied. The result is an accurate KPI that reflects the network's performance during the period of interest. In an exemplary aspect, the computing unit [310] via the Integrated Performance Management (IPM) module [100a] is responsible for computing and data processing of the set of KPIs based on the retrieved data. It receives instructions from the retrieving unit [306] via the integrated performance management (IPM) module [100a] and operates on the data stored in the database, such as but not limited to a Distributed Data Lake (DDL) and a Distributed File System (DFS).
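As one illustrative instance of the computation performed at this step, the "average network latency" KPI can be derived from retrieved records as follows; the record layout (`timestamp`, `latency_ms` fields) is an assumption of this sketch:

```python
# Illustrative KPI computation: average network latency over retrieved records.
records = [
    {"timestamp": "2024-01-01T00:00", "latency_ms": 12.0},
    {"timestamp": "2024-01-01T01:00", "latency_ms": 18.0},
    {"timestamp": "2024-01-01T02:00", "latency_ms": 15.0},
]

def average_latency(rows: list[dict]) -> float:
    """Aggregate the retrieved samples into the requested KPI value."""
    return sum(r["latency_ms"] for r in rows) / len(rows)

print(average_latency(records))  # 15.0
```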
[0116]
At step [410], the method [400] comprises generating, by a processing unit [312], based on the received request, an output dataset comprising the computed set of KPIs. The processing unit [312] is configured for assembling the output dataset that reflects the results of the computations performed by the computing unit [310]. Once the set of KPIs has been calculated, the processing unit [312] organizes this data into a structured format that can be easily interpreted and used by the user. For example, if a network administrator has requested KPIs such as "average response time" and "error rate" for the past 24 hours, the processing unit [312] takes the computed values and generates an output dataset that includes these KPIs. The output dataset may be organized in a way that aligns with the user's original request, perhaps presenting the data in a time-series format or as summarized statistics. The output dataset may also include visual elements, such as graphs or charts, to enhance the user's ability to analyse the results quickly. The output dataset comprises a report indicating a behavioural trend of the computed set of KPIs over a predefined duration of time. The output dataset may include a detailed report that highlights the behavioural trends of the computed KPIs over a predefined/specified duration of time. As used herein, behavioural trends refer to the patterns or changes in key performance indicators (KPIs) over time. These trends show how KPIs fluctuate, improve, or decline, providing insights into performance dynamics and operational effectiveness. By analysing these trends, the user is able to identify underlying factors influencing performance of KPIs, such as operational changes or market conditions, allowing for more informed decision-making and strategic planning in dealing with network performance related issues.
[0117]
The report is indicated in one or more predefined formats, and wherein the output dataset is configured to be manipulated to indicate the computed set of KPIs according to at least a set of aggregation parameters related to the set of KPIs. The aggregation parameters are one or more time hierarchy parameters related to the predefined duration of time. The report generated by the processing unit [312] is indicated in one or more predefined formats, allowing flexibility in how the KPI data is viewed or displayed to the user. In an exemplary aspect, the predefined formats include, but are not limited to, charts, tables, and graphs.
[0118]
The report is presented in one or more predefined formats, such as tables, graphs, charts, or dashboards, depending on the user's preferences or the system's configuration. Additionally, the output dataset is designed to be flexible, allowing users to customise the report according to various aggregation parameters related to the KPIs. The users can adjust the granularity of the data, for instance, viewing it aggregated by hour, day, week, or month, depending on the level of detail they require for their analysis. Such customization enables users to explore different perspectives of the KPI data, helping them identify patterns, trends, or anomalies across different time periods, and ultimately supporting more informed decision-making.
[0119]
The output dataset can be manipulated according to a specified set of aggregation parameters, which include time hierarchy parameters linked to the predefined duration. This means that the KPI data can be grouped or summarized based on various time periods, such as daily, weekly, or monthly, providing a clearer view of performance trends across different time scales.
[0120]
The processing unit [312] comprises a learning engine [312a] comprising an artificial intelligence (AI)/machine learning (ML) model, and wherein the method comprises at least one of: translating, by the learning engine [312a], the received request to a predefined format compatible with at least one of the CL and the database [316]; and generating, by the learning engine [312a], based on the received request, an output dataset comprising the computed set of KPIs. The learning engine, using the AI/ML model, processes the user's queries or requests in natural language and translates them into a predefined format to make them compatible with at least one of the computation layer (CL) and the database. The learning engine [312a] is configured to interpret user requests that may come in various formats, including natural language queries, command-based inputs, or predefined templates. For example, a user might input a natural language query such as, "Give me the daily sales data for the last quarter, broken down by region." The AI/ML model within the learning engine [312a] is trained to understand such queries and translate them into structured commands that can be processed by the underlying systems. This may involve converting the query into a SQL query for direct database retrieval or formulating a request compatible with the computation layer (CL) for more complex calculations. In an exemplary aspect, the learning engine [312a] also has the capability to disambiguate vague requests, apply context-aware processing, and optimize queries for performance to enable the system to retrieve the relevant data.
[0121]
The learning engine [312a] is further configured to generate the output dataset. The output dataset is composed of the computed KPIs organized according to the user's specifications. For example, if the user requested a time-series analysis of sales data, the learning engine [312a] might generate a dataset that includes the sales figures organized by day, with additional columns representing regional performance. The AI/ML model can also apply advanced data formatting techniques, such as sorting, filtering, and aggregating data based on predefined rules or learned user preferences. The output dataset can then be formatted for visualization in graphs, tables, or downloadable reports, depending on how the user intends to interact with the results.
[0122]
The AI/ML model integrated within the learning engine [312a] is trained on a dataset comprising historical user queries, system performance logs, and a vast array of domain-specific data relevant to the KPIs being analysed. This training data includes examples of natural language queries, command-based requests, and various patterns of user interactions with the system, allowing the model to recognize common query structures and anticipate user needs. Additionally, the AI/ML model is trained on large datasets of historical KPI values and their relationships, enabling it to understand the context and importance of various metrics. The training process also involves learning from real-time system usage data, which helps the model continuously refine its ability to interpret ambiguous requests, optimize query performance, and generate accurate, relevant output datasets.
[0123]
At step [412], the method [400] comprises transmitting, by a transmitting unit [314] to the UI [304], the generated output dataset. The transmitting unit [314] transmits the generated output dataset to the UI [304] such that the output dataset can be further analysed by the user to facilitate identifying underlying factors influencing performance of KPIs, such as operational changes, network conditions, etc., allowing for more informed decision-making and strategic planning in dealing with network performance related issues. The transmitting unit [314] enables the user to access the output dataset, which was generated by the processing unit [312], through the UI [304]. The transmitting unit [314] adapts to various data formats, depending on the user's preferences. For example, if the user requested visualizations such as a line chart showing network performance over a specific period, the transmitting unit [314] delivers the dataset in a format that supports this visual representation on the UI [304]. Similarly, if the user needs a downloadable CSV file with aggregated sales data, the transmitting unit [314] formats the data accordingly and transmits it in a way that allows the user to download or further analyse the information.
[0124]
In an exemplary aspect, in cases where performance is a concern, such as transmitting large datasets or real-time data streams, the transmitting unit [314] manages the data transfer efficiently. For example, during real-time monitoring of the set of KPIs, the transmitting unit [314] may split the data into manageable portions or apply data compression techniques to facilitate a smooth and uninterrupted transmission to the UI [304].
[0125]
The method [400] further comprises providing, by a displaying unit [318] at the UI [304], a dashboard, and wherein the dashboard is configured to display the output dataset. The displaying unit [318] at the UI [304] provides the dashboard for displaying the output dataset, which includes a set of aggregation parameters related to the selected Key Performance Indicators (KPIs). These aggregation parameters allow the data to be summarized and visualized over specific time intervals, facilitating the implementation of drill-down and roll-up features. In an exemplary aspect, aggregation enables the system to present data in a way that makes trends and patterns more apparent across different timeframes, helping users to analyse performance at varying levels of detail. Drill-down functionality allows users to view data at increasingly detailed levels within a chosen time hierarchy. For instance, if the user initially selects an "hourly" time hierarchy, they can use the drill-down feature to examine the data at more granular intervals, such as "minute" or "second" levels. This capability is particularly useful when a user needs to pinpoint specific events or anomalies that might be obscured in broader timeframes. For example, a network administrator monitoring hourly KPIs might drill down to minute-level data to investigate a sudden spike in latency. Conversely, the roll-up feature provides the option to view data with less granularity, summarizing it at a higher level within the selected time hierarchy. By using the roll-up option, the user can aggregate data to broader time intervals, such as "daily" or "weekly," which helps in identifying long-term trends or overarching patterns that may not be visible at more detailed levels. For example, after reviewing minute-level data for network performance issues, the user may choose to roll up the data to observe overall daily performance trends.
[0126]
At step [414], the method [400] terminates.
[0127]
Referring to FIG. 5, an exemplary block diagram of a system architecture [500] for analysis of key performance indicators, is shown, in accordance with the exemplary implementations of the present disclosure. The system architecture [500] comprises at least one user interface [304], at least one load balancer [502], at least one integrated performance management module [100a], at least one artificial intelligence/machine learning module [504], at least one computational layer [506], at least one distributed data lake [508], and at least one distributed file system [510].
[0128]
In an exemplary aspect, the user interface (UI) [304] sends a request to the integrated performance management module [100a] via a load balancer [502] for managing connections. The load balancer [502] is adapted to distribute the incoming network requests across multiple servers or components to ensure optimal resource utilization and high availability. Particularly, the load balancer [502] is commonly employed to evenly distribute incoming requests across multiple
instances of the IPM module [100a], providing scalability and fault tolerance to the system architecture [500]. Overall, these connections and the inclusion of the load balancer [502] help facilitate effective communication, data transfer, and resource management within the system, enhancing its performance and reliability.
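The distribution policy of the load balancer [502] described above might be sketched under a simple round-robin assumption; real deployments typically add health checks and weighted strategies, and the instance names here are hypothetical:

```python
from itertools import cycle

# Minimal round-robin stand-in for the load balancer's distribution policy.
ipm_instances = ["ipm-1", "ipm-2", "ipm-3"]
next_instance = cycle(ipm_instances)

def route(request: dict) -> tuple[str, dict]:
    """Assign the incoming request to the next IPM instance in rotation."""
    return next(next_instance), request

targets = [route({"id": i})[0] for i in range(6)]
print(targets)  # ['ipm-1', 'ipm-2', 'ipm-3', 'ipm-1', 'ipm-2', 'ipm-3']
```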
[0129]
After this, a connection between the User Interface (UI) [304] and the integrated performance management (IPM) module [100a] is established using an HTTP connection. HTTP (Hypertext Transfer Protocol) is a widely used protocol for communication between web browsers and servers. It allows the UI [304] to send requests and configurations to the IPM module [100a] and receive responses or acknowledgments.
[0130]
The IPM module [100a] connects to the Artificial Intelligence/Machine Learning (AI/ML) module [504] using TCP, a reliable transport protocol for establishing connections between network devices. TCP is a reliable and connection-oriented protocol that ensures the integrity and ordered delivery of data packets. This connection allows the IPM module [100a] to interact with the AI/ML module [504] for processing requests, translating requests into a format understandable by the computational layer (CL) [506], processing the computed results, and obtaining intelligent insights.
[0131]
The connection between the AI/ML module [504] and the Distributed Data Lake (DDL) [508] is established using a TCP (Transmission Control Protocol) connection, similar to the connection between the IPM module [100a] and the AI/ML module [504]. By using TCP, the AI/ML module [504] can save and retrieve relevant data from the DDL [508].
[0132]
The connection between the AI/ML module [504] and the Computational Layer (CL) [506] is also established using an HTTP connection. Similar to the UI [304] to IPM module [100a] connection, this HTTP connection allows the AI/ML module [504] to forward requests and computations from the IPM module [100a] to the
CL [506]. The CL [506] processes the received instructions and returns the results
or intermediate data to the IPM module [100a] via AI/ML module [504].
[0133]
The connection between the Computational Layer (CL) [506] and the Distributed File System (DFS) [510] is established using a File IO connection. File IO typically refers to the operations performed on files, such as reading from or writing to files. In this case, the CL [506] utilizes File IO operations to store and manage large files used in computations within the DFS [510]. This connection allows the CL [506] to efficiently access and manipulate the required files.
[0134]
In operation, the user creates an interactive dashboard request on the User Interface (UI) [304] by selecting KPIs and a time hierarchy for data aggregation, for the purpose of monitoring Key Performance Indicators (KPIs) in a network performance management system. Thereafter, if the integrated performance management (IPM) module [100a] can handle the request, it processes the dashboard request and computes the KPIs using data stored in a Distributed Data Lake (DDL) [508]. Otherwise, the dashboard request is sent to the computation layer (CL) [506] via the AI/ML module [504]. In an exemplary aspect, if the time extent is less than the DDL's [508] retention period, the retrieving unit [306] retrieves the relevant data directly from the DDL [508]. In an exemplary aspect, if the selected time frame is greater than the retention period or complex queries are involved, the IPM module [100a] will then forward the request to the CL [506]. The computed result is then sent to the UI [304] for display on the interactive dashboard, with options for drill-down or roll-up actions within the time hierarchy. The dashboard updates in real-time, allowing users to explore data patterns and trends at different time intervals efficiently.
[0135]
Referring to FIG. 6, an exemplary process flow diagram [600] for analysis of key performance indicators (KPIs) in accordance with exemplary implementations of the present disclosure is shown. In an implementation, the
process [600] is performed by the system architecture [500]
and/or the system [300]. Further, in an implementation, the system architecture [500] may be present in a server device to implement the features of the present disclosure.
[0136]
At step S1, the user [602] inputs requests into the UI [304] (also known as the user interface [304]). Here, the user creates an on-demand dashboard and selects the KPIs that need to be monitored on the UI [304]. For these KPIs, the user also selects a time hierarchy for aggregation (second, minute, hour, weekly, etc. buckets) so that the drill-down/roll-up feature can be availed.
[0137]
At step S2, the UI [304] further sends the request to the load balancer [502]. The load balancer [502] is adapted to distribute the incoming network requests across multiple servers or components to provide optimal resource utilization and high availability.
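The distribution behaviour can be sketched with a simple round-robin dispatcher. This is an illustrative assumption: the disclosure does not state the balancing algorithm, and the instance names below are invented for the example.

```python
from itertools import cycle

class LoadBalancer:
    """Round-robin dispatch sketch; one of several strategies a load
    balancer like [502] might use (the actual policy is unspecified)."""

    def __init__(self, instances):
        self._pool = cycle(instances)  # endless rotation over instances

    def forward(self, request):
        """Forward the request to the next instance in rotation."""
        instance = next(self._pool)
        return instance, request

# Hypothetical IPM module instances behind the balancer.
lb = LoadBalancer(["ipm-1", "ipm-2", "ipm-3"])
```

Successive requests cycle through the pool, so no single IPM instance is overloaded.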
[0138]
At step S3, the load balancer [502] sends an acknowledgement back to the UI [304] if it successfully receives the request.
[0139]
At step S4, the load balancer [502] forwards requests across one or more instances to the IPM modules [100a]. The IPM module [100a] receives the requests from the UI [304] and starts computing the KPIs for the time aggregation chosen and the buckets mentioned in the request.
[0140]
At step S5, the AI/ML module [504] processes the request received from the IPM module [100a]. In an exemplary aspect, the retrieving unit [306] fetches data related to the set of KPIs by first assessing the time extent specified in the request. If the time extent is less than the database's retention period, the unit retrieves the relevant data directly from the database [316].
[0141]
At step S6, the AI/ML module [504] sends a query to the DDL [508] for fetching the details relating to the specified KPIs from the DDL [508].
[0142]
At step S7, the DDL [508] sends the specified KPI details back to the AI/ML module [504].
[0143]
At step S8, the AI/ML module [504] sends a query to the CL [506] for computing and fetching details relating to the specified KPIs from the CL [506]. In an exemplary aspect, if the selected time frame is greater than the retention period or complex queries are involved, the IPM module [100a] forwards the request to the CL [506].
[0144]
At step S9, the CL [506] sends the computed and fetched data related to the specified KPIs back to the AI/ML module [504]. The KPI details are then processed by the AI/ML module [504], which summarizes them and creates graphs for better analysis.
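The summarization step can be illustrated with a minimal reduction over a raw KPI time series. The statistics chosen here (min, max, average, count) are assumptions for the sake of the example; the disclosure does not enumerate which summary measures the AI/ML module produces.

```python
from statistics import mean

def summarize_kpi(samples):
    """Reduce a raw KPI value series to summary statistics, a hedged
    sketch of the summarization performed by the AI/ML module [504].
    The field names are illustrative, not taken from the disclosure."""
    return {
        "min": min(samples),
        "max": max(samples),
        "avg": mean(samples),
        "count": len(samples),
    }
```

Such a summary dictionary would then feed the graph and table generation described at step S10.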
[0145]
At step S10, the AI/ML module [504] processes the retrieved KPI data and generates an output dataset in the form of visualizations, such as graphs, charts, and tables. These visual representations are created based on the KPI details to facilitate better analysis and interpretation. The generated visualizations are then sent back to the integrated performance management (IPM) module [100a].
[0146]
At step S11, the IPM module [100a] receives the generated visualizations and prepares the final output dataset, which now includes these visualizations (e.g., graphs, dashboards). The IPM module [100a] then sends the dataset, along with a notification that the computational layer (CL) [506] was utilized, back to the load balancer [502].
[0147]
At step S12, the load balancer [502] receives the final output dataset containing the visualizations (e.g., graphs, dashboards) and transmits it back to the UI [304]. Upon successful transmission, the user interface [304] displays the visualized data (e.g., graphs, dashboards) to the user, providing them with actionable insights based on the computed KPIs.
[0148]
At step S13, the user [602] gets notified of the received KPI details. The user will be able to see the drill-down and roll-up options on the dashboard displayed on the displaying unit [318]. The user [602] will be able to choose on which field to either drill down or roll up the time hierarchy, and the resultant values will be produced in real time. By selecting the drill-down feature, the user can view on the dashboard data at a lower level of granularity within the chosen time hierarchy. For example, if the user initially selected a time hierarchy of "hourly," they can drill down to view on the dashboard, using the displaying unit [318], data at a more granular level, such as "minute" or "second." Furthermore, the user may choose the roll-up option to aggregate the data at a higher level within the selected time hierarchy. By selecting the roll-up option, the user can view the data summarized at a broader level, such as "daily" or "weekly".
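The roll-up operation described above can be sketched as a re-bucketing of timestamped KPI samples into a coarser level of the time hierarchy. This is a minimal illustration under stated assumptions: the bucket key formats and the choice of summation as the aggregate are invented for the example, and the real system performs this computation in the CL [506] or against the DDL [508].

```python
from collections import defaultdict
from datetime import datetime

def roll_up(samples, level):
    """Aggregate (timestamp, value) KPI samples into coarser buckets
    of the time hierarchy by summing values per bucket."""
    # Illustrative bucket-key formats for three hierarchy levels.
    fmt = {
        "minute": "%Y-%m-%d %H:%M",
        "hour": "%Y-%m-%d %H:00",
        "day": "%Y-%m-%d",
    }[level]
    buckets = defaultdict(float)
    for ts, value in samples:
        buckets[ts.strftime(fmt)] += value
    return dict(buckets)
```

Drilling down is the inverse direction: re-querying the same samples at a finer level (e.g., "minute" instead of "hour") rather than recomputing from the coarse buckets.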
[0149]
The present disclosure further discloses a non-transitory computer readable storage medium storing instructions for analysis of key performance indicators (KPIs), the instructions including executable code which, when executed by one or more units of a system, causes: a receiving unit [302] to receive, from a user interface (UI) [304], a request for a set of KPIs to be determined; a retrieving unit [306] to retrieve, via an integrated performance management (IPM) module [100a], data related to the requested set of KPIs; a computing unit [310] to compute, via the IPM module [100a], based on the retrieved data, the set of KPIs; a processing unit [312] to generate, based on the received request, an output dataset comprising the computed set of KPIs; and a transmitting unit [314] to transmit, to the UI [304], the generated output dataset.
[0150]
As is evident from the above, the present disclosure provides a technically advanced solution for the analysis of key performance indicators (KPIs), offering several key advantages. Enhanced data exploration is achieved by allowing users to examine data at different levels of granularity, enabling both detailed analysis and the identification of broader trends, which leads to better decision-making. Real-time insights are provided through dynamic visualizations and updates, empowering users to access the latest data and make timely, informed decisions. The drill-down functionality facilitates efficient problem identification by enabling users to focus on specific data points to uncover the root causes of anomalies or issues. The roll-up feature supports strategic planning by summarizing data at higher levels, giving a clearer view of long-term trends and performance metrics. The system's scalability, enabled by distributed data storage and computation, allows for efficient handling of large data volumes and complex queries. The interactive dashboard further enhances flexibility and customization, allowing users to tailor their data analysis by selecting relevant KPIs, adjusting time hierarchies, and utilizing drill-down or roll-up features to suit their specific needs.
[0151]
It would be appreciated by the person skilled in the art that the system with an interactive dashboard and drill-down/roll-up options provides valuable data analysis capabilities to users. It facilitates data-driven decision-making, empowers the users to explore data at different levels, and fosters collaboration within organizations. The system can significantly improve operational efficiency, strategic planning, and overall business performance.
[0152]
Further, in accordance with the present disclosure, it is to be acknowledged that the functionality described for the various components/units can be implemented interchangeably. While specific embodiments may disclose a particular functionality of these units for clarity, it is recognized that various configurations and combinations thereof are within the scope of the disclosure. The functionality of specific units as disclosed in the disclosure should not be construed as limiting the scope of the present disclosure. Consequently, alternative arrangements and substitutions of units, provided they achieve the intended functionality described herein, are considered to be encompassed within the scope of the present disclosure.
[0153]
While considerable emphasis has been placed herein on the disclosed implementations, it will be appreciated that many implementations can be made and that many changes can be made to the implementations without departing from the principles of the present disclosure. These and other changes in the implementations of the present disclosure will be apparent to those skilled in the art, whereby it is to be understood that the foregoing descriptive matter to be implemented is illustrative and non-limiting.
We Claim:
1. A method for analysis of key performance indicators (KPIs), the method comprising:
- receiving, by a receiving unit [302] from a user interface (UI) [304], a request for a set of KPIs to be determined;
- retrieving, by a retrieving unit [306] via an integrated performance management (IPM) module [100a], data related to the requested set of KPIs;
- computing, by a computing unit [310] via an integrated performance management (IPM) module [100a], based on the retrieved data, the set of KPIs;
- generating, by a processing unit [312], based on the received request, an output dataset comprising the computed set of KPIs; and
- transmitting, by a transmitting unit [314] to the UI [304], the generated output dataset.
2. The method as claimed in claim 1, wherein, the data related to the requested KPIs is at least stored in a database [316], and wherein the step of retrieving, by the retrieving unit [306], data related to the requested set of KPIs comprises:
- determining, by the retrieving unit [306], a time extent of the requested set of KPIs,
wherein, if:
- the time extent is greater than a retention period of the database, the method comprises computing, by the computing unit [310] via a computation layer (CL), the data related to the requested set of KPIs, and
- the time extent is less than or equal to the retention period of the database, the method comprises retrieving, by the retrieving unit [306], from the database [316], the data related to the requested set of KPIs.
3. The method as claimed in claim 2, wherein the processing unit [312] comprises a learning engine [312a] comprising an artificial intelligence (AI)/ machine learning (ML) model, and wherein the method comprises at least one of:
- translating, by the learning engine [312a], the received request to a predefined format compatible with at least one of the CL, and the database [316]; and
- generating, by the learning engine [312a], based on the received request, an output dataset comprising the computed set of KPIs.
4. The method as claimed in claim 1, wherein the output dataset comprises a report indicating a behavioural trend of the computed set of KPIs over a predefined duration of time, wherein the report is indicated in one or more predefined formats, and wherein the output dataset is configured to be manipulated to indicate the computed set of KPIs according to at least a set of aggregation parameters related to the set of KPIs.
5. The method as claimed in claim 4, wherein the aggregation parameters are one or more time hierarchy parameters related to the predefined duration of time.
6. The method as claimed in claim 4, wherein the method comprises providing, by a displaying unit [318] at the UI [304], a dashboard, and wherein the dashboard is configured to display the output dataset.
7. A system for analysis of key performance indicators (KPIs), the system comprising:
- a receiving unit [302] configured to receive from a user interface (UI) [304], a request for a set of KPIs to be determined;
- a retrieving unit [306] configured to retrieve via an integrated performance management (IPM) module [100a], data related to the requested set of KPIs;
- a computing unit [310] configured to compute via the integrated performance management (IPM) module [100a], based on the retrieved data, the set of KPIs;
- a processing unit [312] configured to generate, based on the received request, an output dataset comprising the computed set of KPIs; and
- a transmitting unit [314] configured to transmit to the UI [304], the generated output dataset.
8. The system as claimed in claim 7, wherein the data related to the requested KPIs is at least stored in a database [316], and wherein, to retrieve data related to the requested set of KPIs, the retrieving unit [306] is configured to:
- determine a time extent of the requested set of KPIs,
wherein, if:
- the time extent is greater than a retention period of the database, the computing unit [310] is configured to compute, via a computation layer (CL), the data related to the requested set of KPIs, and
- the time extent is less than or equal to the retention period of the database, the retrieving unit [306] is configured to retrieve, from the database [316], the data related to the requested set of KPIs.
9. The system as claimed in claim 8, wherein the processing unit [312] comprises a learning engine [312a] comprising an artificial intelligence (AI) / machine learning (ML) model, and wherein the learning engine [312a] is configured to at least one of:
- translate the received request to a predefined format compatible with at least one of the CL, and the database [316]; and
- generate, based on the received request, an output dataset comprising the computed set of KPIs.
10. The system as claimed in claim 7, wherein the output dataset comprises a report indicating a behavioural trend of the computed set of KPIs over a predefined
duration of time, wherein the report is indicated in one or more predefined
formats, and wherein the output dataset is configured to be manipulated to indicate the computed set of KPIs according to at least a set of aggregation parameters related to the set of KPIs.
11. The system as claimed in claim 10, wherein the aggregation parameters are one or more time hierarchy parameters related to the predefined duration of time.
12. The system as claimed in claim 10, wherein a displaying unit [318] is configured to provide, at the UI [304], a dashboard, and wherein the dashboard is configured to display the output dataset.
Dated this the 31st Day of August, 2023
| # | Name | Date |
|---|---|---|
| 1 | 202321058433-STATEMENT OF UNDERTAKING (FORM 3) [31-08-2023(online)].pdf | 2023-08-31 |
| 2 | 202321058433-PROVISIONAL SPECIFICATION [31-08-2023(online)].pdf | 2023-08-31 |
| 3 | 202321058433-FORM 1 [31-08-2023(online)].pdf | 2023-08-31 |
| 4 | 202321058433-FIGURE OF ABSTRACT [31-08-2023(online)].pdf | 2023-08-31 |
| 5 | 202321058433-DRAWINGS [31-08-2023(online)].pdf | 2023-08-31 |
| 6 | 202321058433-FORM-26 [05-09-2023(online)].pdf | 2023-09-05 |
| 7 | 202321058433-Proof of Right [10-01-2024(online)].pdf | 2024-01-10 |
| 8 | 202321058433-ORIGINAL UR 6(1A) FORM 1 & 26-300124.pdf | 2024-02-03 |
| 9 | 202321058433-FORM-5 [23-08-2024(online)].pdf | 2024-08-23 |
| 10 | 202321058433-ENDORSEMENT BY INVENTORS [23-08-2024(online)].pdf | 2024-08-23 |
| 11 | 202321058433-DRAWING [23-08-2024(online)].pdf | 2024-08-23 |
| 12 | 202321058433-CORRESPONDENCE-OTHERS [23-08-2024(online)].pdf | 2024-08-23 |
| 13 | 202321058433-COMPLETE SPECIFICATION [23-08-2024(online)].pdf | 2024-08-23 |
| 14 | 202321058433-Request Letter-Correspondence [30-08-2024(online)].pdf | 2024-08-30 |
| 15 | 202321058433-Power of Attorney [30-08-2024(online)].pdf | 2024-08-30 |
| 16 | 202321058433-FORM 3 [30-08-2024(online)].pdf | 2024-08-30 |
| 17 | 202321058433-Form 1 (Submitted on date of filing) [30-08-2024(online)].pdf | 2024-08-30 |
| 18 | 202321058433-Covering Letter [30-08-2024(online)].pdf | 2024-08-30 |
| 19 | 202321058433-CERTIFIED COPIES TRANSMISSION TO IB [30-08-2024(online)].pdf | 2024-08-30 |
| 20 | Abstract 1.jpg | 2024-09-02 |