Abstract: The present disclosure relates to a method and a system for real-time analysis of Key Performance Indicators (KPI) deviations. The method includes receiving, by a transceiver unit [304], a KPI analysis request from a UI [306]. The KPI analysis request comprises selected KPIs pertaining to a network. Each of the selected KPIs comprises nested KPIs. The selected KPIs further comprise a time period. Furthermore, the method includes computing, by a processing unit [302] via an IPM [100a], values for the selected KPIs based on the time period associated with the KPI analysis request. The method further includes generating, by a generation unit [308] via the IPM [100a], a performance report based on the computed values for the selected KPIs. Further, the method includes displaying, by the processing unit [302], the performance report for the selected KPIs on the UI [306]. [FIG. 4]
FORM 2
THE PATENTS ACT, 1970 (39 OF 1970) & THE PATENT RULES, 2003
COMPLETE SPECIFICATION
(See section 10 and rule 13)
“METHOD AND SYSTEM FOR REAL-TIME ANALYSIS OF KEY PERFORMANCE INDICATORS (KPIs) DEVIATIONS”
We, Jio Platforms Limited, an Indian National, of Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India.
The following specification particularly describes the invention and the manner in which it is to be performed.
METHOD AND SYSTEM FOR REAL-TIME ANALYSIS OF KEY PERFORMANCE INDICATORS (KPIs) DEVIATIONS
FIELD OF THE DISCLOSURE
[0001] Embodiments of the present disclosure generally relate to network performance management systems. More particularly, embodiments of the present disclosure relate to real-time analysis of Key Performance Indicators (KPI) deviations.
BACKGROUND
[0002] The following description of the related art is intended to provide background information pertaining to the field of the disclosure. This section may include certain aspects of the art that may be related to various features of the present disclosure. However, it should be appreciated that this section is used only to enhance the understanding of the reader with respect to the present disclosure, and not as admissions of the prior art.
[0003] Network performance management systems typically track network elements and data with the help of network monitoring tools and combine and process such data to determine key performance indicators (KPI) of the network. An Integrated Performance Management (IPM) system provides the means to visualize the network performance data so that network operators and other relevant stakeholders are able to identify the service quality of the overall network, and of individual/grouped network elements. By having an overall as well as detailed view of the network performance, the network operators can detect, diagnose and remedy actual service issues, as well as predict potential service issues or failures in the network and take precautionary measures accordingly.
[0004] An IPM system monitors and collects data from various network elements and monitoring tools. These monitored KPI values are configured for different parameters of network elements to measure the performance of those elements, and when their values deviate from the usual operational values or threshold values, it becomes important to determine the underlying factors and root causes of the performance degradation. These KPIs are nested and are usually composed of counters or other KPIs. Existing solutions have various limitations for KPI measurement and root-cause identification, such as complex configurations, the manual effort required, lack of interoperability, and inefficient data integration across diverse systems for dynamic and real-time analysis.
[0005] Thus, there exists an imperative need in the art to provide a solution that can overcome these and other limitations of the existing solutions.
SUMMARY
[0006] This section is provided to introduce certain aspects of the present disclosure in a simplified form that are further described below in the detailed description. This summary is not intended to identify the key features or the scope of the claimed subject matter.
[0007] An aspect of the present disclosure may relate to a method for real-time analysis of Key Performance Indicators (KPI) deviations. The method includes receiving, by a processing unit, a KPI analysis request from a User Interface (UI), wherein the KPI analysis request comprises one or more selected KPIs pertaining to a network, wherein each of the one or more selected KPIs comprises one or more nested KPIs, and wherein the one or more selected KPIs further comprise at least a time period. The method further includes computing, by the processing unit via an Integrated Performance Management (IPM) system [100a], values for the one or more selected KPIs based on the time period associated with the KPI analysis request. Furthermore, the method encompasses generating, by the processing unit via the IPM [100a], a performance report based on the computed values for the one or more selected KPIs. The method further includes causing to display, by the processing unit, the performance report for the one or more selected KPIs on the UI.
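By way of a non-limiting illustration only, the method steps above (receive the request, compute values for the selected KPIs over the time period, assemble a performance report) may be sketched in Python as follows; the class and function names, and the stand-in `compute_fn`, are hypothetical and do not form part of the claimed subject matter:

```python
from dataclasses import dataclass


@dataclass
class KPIAnalysisRequest:
    selected_kpis: list   # KPI names selected on the UI; each may have nested KPIs
    time_period: tuple    # (start, end) of the requested analysis window


def handle_request(request, compute_fn):
    """Compute values for the selected KPIs and assemble a performance report.

    `compute_fn(kpi, time_period)` stands in for the IPM computation step.
    """
    values = {kpi: compute_fn(kpi, request.time_period)
              for kpi in request.selected_kpis}
    # The report is what the processing unit causes to be displayed on the UI.
    return {"time_period": request.time_period, "values": values}
```

The report dictionary here is a placeholder for whatever tabular or graphical representation the UI ultimately renders.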
[0008] In an exemplary aspect of the present disclosure, upon generation of the performance report, the method further comprises transmitting a message to the UI and receiving a response from the UI, wherein the response is indicative of the user associated with the UI requesting the performance report, and, based on the received response, transmitting the performance report to the UI for display to the user.
[0009] In an exemplary aspect of the present disclosure, the causing to display the generated performance report comprises causing to display, at the UI, the performance report using at least one of tabular representation and graphical representation, wherein the generated performance report is to provide KPI deviations associated with one or more nested KPIs of the one or more selected KPIs.
[0010] In an exemplary aspect of the present disclosure, the method comprises computing, by the processing unit via the IPM, the one or more nested KPIs associated with the one or more selected KPIs.
[0011] In an exemplary aspect of the present disclosure, the performance report comprises the computed values for the one or more nested KPIs associated with the one or more selected KPIs.
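As a non-limiting sketch of how a KPI composed of nested KPIs or counters may be resolved recursively for such a report (all names and the example KPI definition are illustrative assumptions, not part of the specification):

```python
def compute_kpi(kpi, definitions, counters):
    """Recursively resolve a KPI that may be composed of nested KPIs or counters.

    `definitions` maps a KPI name to (aggregate_fn, [child names]); any name not
    in `definitions` is treated as a raw counter looked up in `counters`.
    Returns a tree so the report can show the drill-down, not just the top value.
    """
    if kpi not in definitions:
        return {"name": kpi, "value": counters[kpi], "children": []}
    aggregate, children = definitions[kpi]
    resolved = [compute_kpi(child, definitions, counters) for child in children]
    return {
        "name": kpi,
        "value": aggregate([r["value"] for r in resolved]),
        "children": resolved,  # nested KPI values included in the report
    }
```

Because the children are retained at every level, a deviation in the parent KPI can be traced to the constituent counter that caused it.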
[0012] In an exemplary aspect of the present disclosure, the method includes transmitting, by the processing unit via the IPM, the KPI analysis request to a Distributed Data Lake. The method further includes fetching, by the processing unit via the IPM, a set of data pertaining to the one or more selected KPIs from the Distributed Data Lake. Furthermore, the method includes computing, by the processing unit via the Integrated Performance Management (IPM) system, the one or more selected KPIs based on the set of data.
[0013] In an exemplary aspect of the present disclosure, the method includes transmitting, by the processing unit via the IPM, the KPI analysis request to a computation layer for further processing when the time period exceeds a predetermined retention period. The method further includes receiving, by the processing unit via the IPM, computed results for the one or more selected KPIs from the computation layer, wherein, based on the received computed results, the processing unit generates the performance report.
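A minimal, hypothetical sketch of the retention-period routing described above, where requests within the retention window are served from the data lake and older ranges are forwarded to the computation layer (the retention value and callable names are assumptions for illustration only):

```python
RETENTION_PERIOD_DAYS = 90  # assumed retention window of the Distributed Data Lake


def route_request(requested_days, local_compute, computation_layer):
    """Route a KPI analysis request based on its requested time period.

    Time periods exceeding the predetermined retention period are transmitted
    to the computation layer; the rest are computed against the data lake.
    """
    if requested_days > RETENTION_PERIOD_DAYS:
        return computation_layer(requested_days)
    return local_compute(requested_days)
```

Either branch returns computed results from which the processing unit can generate the performance report.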
[0014] In an exemplary aspect of the present disclosure, the computation layer utilizes a natural language processing (NLP) model to compute results for the one or more selected KPIs when the time period is less than the predetermined retention period. The NLP model further comprises an Artificial Intelligence (AI)/Machine Learning (ML) layer [508] to process the KPI analysis request.
[0015] In an exemplary aspect of the present disclosure, the KPI analysis request is received from the UI via a load balancer, wherein the load balancer is communicatively coupled with the processing unit.
[0016] In an exemplary aspect of the present disclosure, the KPI analysis request from the UI is generated by a user, and wherein the user is to select one or more KPIs from a plurality of KPIs displayed on the UI and create an on-demand dashboard for monitoring of the selected one or more KPIs.
[0017] Another aspect of the present disclosure may relate to a system for real-time analysis of Key Performance Indicators (KPI) deviations. The system includes a processing unit. The processing unit is configured to receive a KPI analysis request from a User Interface (UI), wherein the KPI analysis request comprises one or more
selected KPIs pertaining to a network, wherein each of the one or more selected KPIs comprises one or more nested KPIs, and wherein the one or more selected KPIs further comprise at least a time period. Furthermore, the processing unit is configured to compute values for the one or more selected KPIs based on the time period associated with the KPI analysis request. The processing unit is further configured to generate a performance report based on the computed values for the one or more selected KPIs. Furthermore, the processing unit is configured to cause to display the performance report for the one or more selected KPIs on the UI.
[0018] Yet another aspect of the present disclosure may relate to a non-transitory computer readable storage medium storing instructions for real-time analysis of Key Performance Indicators (KPI) deviations, the instructions including executable code which, when executed by one or more units of a system, causes: a processing unit of the system to receive a KPI analysis request from a User Interface (UI), wherein the KPI analysis request comprises one or more selected KPIs pertaining to a network, wherein each of the one or more selected KPIs comprises one or more nested KPIs, and wherein the one or more selected KPIs further comprise at least a time period. Further, the instructions include executable code which, when executed, causes the processing unit of the system to compute values for the one or more selected KPIs based on the time period associated with the KPI analysis request. Further, the instructions include executable code which, when executed, causes the processing unit of the system to generate a performance report based on the computed values for the one or more selected KPIs. Further, the instructions include executable code which, when executed, causes the processing unit of the system to cause to display the performance report for the one or more selected KPIs on the UI.
OBJECTS OF THE INVENTION
[0019] Some of the objects of the present disclosure, which at least one embodiment disclosed herein satisfies are listed herein below.
[0020] It is an object of the present disclosure to provide a method and a system for providing a drill-down into constituent counters or KPIs for performance monitoring of network elements at a granular level and for identifying the root cause.
[0021] It is another object of the present disclosure to provide a method and a system for KPI drill-down in real time to resolve the underlying issues quickly; otherwise, a user would have to track down the data of a parent KPI and its composite elements, which could be tedious when performed manually.
[0022] It is another object of the present disclosure to provide a method and a system for providing KPI drill-down using a natural language processing (NLP) model for better user interaction.
[0023] It is another object of the present disclosure to allow a user to directly input his/her query for the KPIs and view the displayed result, including a detailed analysis, in the form of tables, graphs, trends and/or anomalies, and the like, of the composite KPIs and counters.
DESCRIPTION OF THE DRAWINGS
[0024] The accompanying drawings, which are incorporated herein, and constitute a part of this disclosure, illustrate exemplary embodiments of the disclosed methods and systems in which like reference numerals refer to the same parts throughout the different drawings. Components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present disclosure. Also, the embodiments shown in the figures are not to be construed as limiting the disclosure, but the possible variants of the method and system according to the disclosure are illustrated herein to highlight the advantages of the disclosure. It will be appreciated by those skilled in the art that disclosure of such
drawings includes disclosure of electrical components or circuitry commonly used to implement such components.
[0025] FIG. 1 illustrates an exemplary block diagram of a network performance management system.
[0026] FIG. 2 illustrates an exemplary block diagram of a computing device upon which the features of the present disclosure may be implemented in accordance with exemplary implementation of the present disclosure.
[0027] FIG. 3 illustrates an exemplary block diagram of a system for real-time analysis of Key Performance Indicators (KPI) deviations, in accordance with exemplary implementations of the present disclosure.
[0028] FIG. 4 illustrates a method flow diagram for real-time analysis of Key Performance Indicators (KPI) deviations in accordance with exemplary implementations of the present disclosure.
[0029] FIG. 5 illustrates an exemplary architecture of a system for real-time analysis of Key Performance Indicators (KPI) deviations, in accordance with exemplary implementations of the present disclosure.
[0030] FIG. 6 illustrates an implementation of the exemplary process for real-time analysis of Key Performance Indicators (KPI) deviations, in accordance with exemplary implementations of the present disclosure.
[0031] The foregoing shall be more apparent from the following more detailed description of the disclosure.
DETAILED DESCRIPTION
[0032] In the following description, for the purposes of explanation, various specific details are set forth in order to provide a thorough understanding of embodiments of the present disclosure. It will be apparent, however, that embodiments of the present disclosure may be practiced without these specific details. Several features described hereafter may each be used independently of one another or with any combination of other features. An individual feature may not address any of the problems discussed above or might address only some of the problems discussed above.
[0033] The ensuing description provides exemplary embodiments only, and is not intended to limit the scope, applicability, or configuration of the disclosure. Rather, the ensuing description of the exemplary embodiments will provide those skilled in the art with an enabling description for implementing an exemplary embodiment. It should be understood that various changes may be made in the function and arrangement of elements without departing from the spirit and scope of the disclosure as set forth.
[0034] Specific details are given in the following description to provide a thorough understanding of the embodiments. However, it will be understood by one of ordinary skill in the art that the embodiments may be practiced without these specific details. For example, circuits, systems, processes, and other components may be shown as components in block diagram form in order not to obscure the embodiments in unnecessary detail.
[0035] Also, it is noted that individual embodiments may be described as a process which is depicted as a flowchart, a flow diagram, a data flow diagram, a structure diagram, or a block diagram. Although a flowchart may describe the operations as a sequential process, many of the operations may be performed in parallel or concurrently. In addition, the order of the operations may be re-arranged. A process is terminated when its operations are completed but could have additional steps not included in a figure.
[0036] The word “exemplary” and/or “demonstrative” is used herein to mean serving as an example, instance, or illustration. For the avoidance of doubt, the subject matter disclosed herein is not limited by such examples. In addition, any aspect or design described herein as “exemplary” and/or “demonstrative” is not necessarily to be construed as preferred or advantageous over other aspects or designs, nor is it meant to preclude equivalent exemplary structures and techniques known to those of ordinary skill in the art. Furthermore, to the extent that the terms “includes”, “has”, “contains” and other similar words are used in either the detailed description or the claims, such terms are intended to be inclusive in a manner similar to the term “comprising” as an open transition word without precluding any additional or other elements.
[0037] As used herein, a “processing unit” or “processor” or “operating processor” includes one or more processors, wherein a processor refers to any logic circuitry for processing instructions. A processor may be a general-purpose processor, a special purpose processor, a conventional processor, a digital signal processor, a plurality of microprocessors, one or more microprocessors in association with a Digital Signal Processing (DSP) core, a controller, a microcontroller, Application Specific Integrated Circuits, Field Programmable Gate Array circuits, any other type of integrated circuit, etc. The processor may perform signal coding, data processing, input/output processing, and/or any other functionality that enables the working of the system according to the present disclosure. More specifically, the processor or processing unit is a hardware processor.
[0038] As used herein, “a user equipment”, “a user device”, “a smart-user-device”, “a smart-device”, “an electronic device”, “a mobile device”, “a handheld device”, “a wireless communication device”, “a mobile communication device”, “a communication device” may be any electrical, electronic and/or computing device or equipment capable of implementing the features of the present disclosure. The user equipment/device may include, but is not limited to, a mobile phone, smart phone, laptop, a general-purpose computer, desktop, personal digital assistant, tablet computer, wearable device or any other computing device which is capable of implementing the features of the present disclosure. Also, the user device may contain at least one input means configured to receive an input from at least one of a transceiver unit, a processing unit, a storage unit, a detection unit and any other such unit(s) which are required to implement the features of the present disclosure.
[0039] As used herein, “storage unit” or “memory unit” refers to a machine or computer-readable medium including any mechanism for storing information in a form readable by a computer or similar machine. For example, a computer-readable medium includes read-only memory (“ROM”), random access memory (“RAM”), magnetic disk storage media, optical storage media, flash memory devices or other types of machine-accessible storage media. The storage unit stores at least the data that may be required by one or more units of the system to perform their respective functions.
[0040] As used herein, “interface” or “user interface” refers to a shared boundary across which two or more separate components of a system exchange information or data. The interface may also refer to a set of rules or protocols that define the communication or interaction of one or more modules or one or more units with each other, which also includes the methods, functions, or procedures that may be called.
[0041] All modules, units, and components used herein, unless explicitly excluded herein, may be software modules or hardware processors, the processors being a general-purpose processor, a special purpose processor, a conventional processor, a digital signal processor (DSP), a plurality of microprocessors, one or more microprocessors in association with a DSP core, a controller, a microcontroller, Application Specific Integrated Circuits (ASIC), Field Programmable Gate Array circuits (FPGA), any other type of integrated circuits, etc.
[0042] As used herein, the transceiver unit includes at least one receiver and at least one transmitter configured respectively for receiving and transmitting data, signals, information or a combination thereof between units/components within the system and/or connected with the system.
[0043] As discussed in the background section, the currently known solutions have several shortcomings. The present disclosure aims to overcome the above-mentioned and other existing problems in this field of technology by providing a method and a system for real-time analysis of Key Performance Indicators (KPI) deviations.
[0044] FIG. 1 illustrates an exemplary block diagram of a network performance management system [100], in accordance with the exemplary embodiments of the present invention. Referring to FIG. 1, the network performance management system [100] comprises various sub-systems such as: an integrated performance management system [100a], a normalization layer [100b], a computation layer [100d], an anomaly detection layer [100o], a streaming engine [100l], a load balancer [100k], an operations and management system [100p], an API gateway system [100r], an analysis engine [100h], a parallel computing framework [100i], a forecasting engine [100t], a distributed file system [100j], a mapping layer [100s], a distributed data lake [100u], a scheduling layer [100g], a reporting engine [100m], a message broker [100e], a graph layer [100f], a caching layer [100c], a service quality manager [100q] and a correlation engine [100n]. Exemplary connections between these subsystems are also shown in FIG. 1. However, it will be appreciated by those skilled in the art that the present disclosure is not limited to the connections shown in the diagram, and any other connections between various subsystems that are needed to realise the effects are within the scope of this disclosure.
[0045] Following are the various components of the system [100], as shown in FIG. 1:
[0046] The Integrated Performance Management (IPM) system [100a] comprises a 5G Performance Engine [100v] and a 5G Key Performance Indicator (KPI) Engine [100w].
[0047] 5G Performance Management Engine [100v]: The 5G Performance Management engine [100v] is a crucial component of the IPM system [100a], responsible for collecting, processing, and managing performance counter data from various data sources within the network. The counter data includes metrics such as connection speed, latency, data transfer rates, and many others. The counter data is then processed and aggregated as required, forming a comprehensive overview of network performance. The processed information is then stored in the Distributed Data Lake [100u]. The Distributed Data Lake [100u] is a centralized, scalable, and flexible storage solution, allowing for easy access and further analysis. The 5G Performance Management engine [100v] also enables the reporting and visualization of the performance counter data, thus providing network administrators with a real-time, insightful view of the network's operation. Through these visualizations, operators can monitor the network's performance, identify potential issues, and make informed decisions to enhance network efficiency and reliability. An operator in the IPM system [100a] may be an individual, a device, an administrator, and the like who may interact with or manage the network.
[0048] 5G Key Performance Indicator (KPI) Engine [100w]: The 5G Key Performance Indicator (KPI) Engine [100w] is a dedicated component tasked with managing the KPIs of all the network elements. The 5G Key Performance Indicator (KPI) Engine [100w] uses the performance counters, which are collected and processed by the 5G Performance Management engine [100v] from various data sources. These counters, encapsulating crucial performance data, are harnessed by the KPI engine [100w] to calculate essential KPIs. These KPIs may include at least one of: data throughput, latency, packet loss rate, and more. Once the KPIs are computed, the KPIs are segregated based on the aggregation requirements, offering a multi-layered and detailed understanding of the network performance. The processed KPI data is then stored in the Distributed Data Lake [100u], ensuring a highly accessible, centralized, and scalable data repository for further analysis and utilization. Similar to the 5G Performance Management engine [100v], the 5G KPI engine [100w] is also responsible for reporting and visualization of KPI data. This functionality allows network administrators to gain a comprehensive, visual understanding of the network's performance, thus supporting informed decision-making and efficient network management.
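By way of illustration only, a KPI such as packet loss rate may be derived and aggregated from raw performance counters as sketched below; the counter names and the aggregation rule are hypothetical examples, not the engine's actual implementation:

```python
def packet_loss_rate(counters):
    """Derive a packet-loss-rate KPI (percent) from raw performance counters."""
    sent, lost = counters["packets_sent"], counters["packets_lost"]
    return 100.0 * lost / sent if sent else 0.0


def aggregate_kpi(per_element_counters):
    """Sum counters across network elements, then derive the KPI.

    Summing before dividing (rather than averaging per-element rates) keeps
    the aggregate consistent with the underlying counter totals.
    """
    total = {"packets_sent": 0, "packets_lost": 0}
    for counters in per_element_counters:
        total["packets_sent"] += counters["packets_sent"]
        total["packets_lost"] += counters["packets_lost"]
    return packet_loss_rate(total)
```

The same pattern extends to the other aggregation levels (per cell, per region, network-wide) mentioned above.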
[0049] Ingestion layer: The Ingestion layer (not shown in FIG. 1) forms a key part of the IPM system [100a]. The ingestion layer primarily performs the function to establish an environment capable of handling diverse types of incoming data. This data may include Alarms, Counters, Configuration parameters, Call Detail Records (CDRs), Infrastructure metrics, Logs, and Inventory data, all of which are crucial for maintaining and optimizing the network's performance. Upon receiving this data, the Ingestion layer processes the data by validating the data integrity and correctness to ensure that the data is fit for further use. Following the validation, the data is routed to various components of the IPM system [100a], including the Normalization layer [100b], Streaming Engine [100l], Streaming Analytics, and Message Brokers [100e]. The destination is chosen based on where the data is required for further analytics and processing. By serving as the first point of contact for incoming data, the Ingestion layer plays a vital role in managing the data flow within the system, thus supporting comprehensive and accurate network performance analysis.
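A non-limiting sketch of the validate-and-route behaviour described for the Ingestion layer (the routing table, record fields, and default destination are illustrative assumptions):

```python
# Hypothetical routing table: incoming data type -> downstream component
ROUTES = {
    "counter": "normalization_layer",
    "alarm": "streaming_engine",
    "cdr": "message_broker",
}


def ingest(record):
    """Validate an incoming record's integrity, then choose its destination."""
    if "type" not in record or "payload" not in record:
        raise ValueError("record failed integrity check")
    # Unrecognized types fall back to the normalization layer in this sketch.
    destination = ROUTES.get(record["type"], "normalization_layer")
    return destination, record["payload"]
```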
[0050] Normalization layer [100b]: The Normalization Layer [100b] serves to standardize, enrich, and store data into the appropriate databases. It takes in data that has been ingested and adjusts it to a common standard, making it easier to compare and analyse. This process of "normalization" reduces redundancy and improves data integrity. Upon completion of normalization, the data is stored in various databases like the Distributed Data Lake [100u], Caching Layer [100c], and Graph Layer [100f], depending on its intended use. The choice of storage determines how the data can be accessed and used in the future. Additionally, the Normalization Layer [100b] produces data for the Message Broker [100e], a system that enables communication between different parts of the integrated performance management system [100a] through the exchange of data messages. Moreover, the Normalization Layer [100b] supplies the standardized data to several other subsystems. These include the Analysis Engine [100h] for detailed data examination, the Correlation Engine [100n] for detecting relationships among various data elements, the Service Quality Manager [100q] for maintaining and improving the quality of services, and the Streaming Engine [100l] for processing real-time data streams. These subsystems depend on the normalized data to perform their operations effectively and accurately, demonstrating the Normalization Layer's [100b] critical role in the entire system.
[0051] Caching layer [100c]: The Caching Layer [100c] in the IPM system [100a] plays a significant role in data management and optimization. During the initial phase, the Normalization Layer [100b] processes incoming raw data to create a standardized format, enhancing consistency and comparability. The Normalization Layer [100b] then inserts this normalized data into various databases. One such database is the Caching Layer [100c]. The Caching Layer [100c] is a high-speed data storage layer which temporarily holds data that is likely to be reused, to improve the speed and performance of data retrieval. By storing frequently accessed data in the Caching Layer [100c], the system significantly reduces the time taken to access this data, improving overall system efficiency and performance. Further, the Caching Layer [100c] serves as an intermediate layer between the data sources and the sub-systems, such as the Analysis Engine [100h], Correlation Engine [100n], Service Quality Manager [100q], and Streaming Engine [100l]. The Normalization Layer [100b] is responsible for providing these sub-systems with the necessary data from the Caching Layer [100c].
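By way of illustration, the Caching Layer's behaviour of serving repeated reads from a temporary high-speed store may be sketched as a minimal time-to-live cache; the TTL value and interface are assumptions for illustration, not the actual implementation:

```python
import time


class CachingLayer:
    """Minimal time-to-live cache for frequently accessed normalized data."""

    def __init__(self, ttl_seconds=300.0):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (value, insertion time)

    def put(self, key, value):
        self._store[key] = (value, time.monotonic())

    def get(self, key, loader):
        """Return the cached value; on a miss or expiry, load and cache it."""
        entry = self._store.get(key)
        if entry is not None and time.monotonic() - entry[1] < self.ttl:
            return entry[0]  # fresh hit: no trip to the slower data source
        value = loader(key)
        self.put(key, value)
        return value
```

A second read of the same key within the TTL avoids the slower backing store entirely, which is the speed-up the paragraph above attributes to the layer.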
[0052] Computation layer [100d]: The Computation Layer [100d] in the IPM system [100a] serves as the main hub for complex data processing tasks. In the initial stages, raw data is gathered, normalized, and enriched by the Normalization Layer [100b]. The Normalization Layer [100b] then inserts this standardized data into multiple databases including the Distributed Data Lake [100u], Caching Layer [100c], and Graph Layer [100f], and also feeds it to the Message Broker [100e]. Within the Computation Layer [100d], several powerful sub-systems such as the Analysis Engine [100h], Correlation Engine [100n], Service Quality Manager [100q], and the Streaming Engine [100l] utilize the normalized data. These systems are designed to execute various data processing tasks. The Analysis Engine [100h] performs in-depth data analytics to generate insights from the data. The Correlation Engine [100n] identifies and understands the relations and patterns within the data. The Service Quality Manager [100q] assesses and ensures the quality of the services, and the Streaming Engine [100l] processes and analyses the real-time data feeds. In essence, the Computation Layer [100d] is where all major computation and data processing tasks occur. It uses the normalized data provided by the Normalization Layer [100b], processing it to generate useful insights, ensure service quality, understand data patterns, and facilitate real-time data analytics.
[0053] Message broker [100e]: The Message Broker [100e], an integral part of the IPM system [100a], operates as a publish-subscribe messaging system. It orchestrates and maintains the real-time flow of data from various sources and applications. At its core, the Message Broker [100e] facilitates communication between data producers and consumers through message-based topics. This creates an advanced platform for contemporary distributed applications. With the ability to accommodate a large number of permanent or ad-hoc consumers, the Message Broker [100e] demonstrates immense flexibility in managing data streams. Moreover, it leverages the filesystem for storage and caching, boosting its speed and efficiency. The design of the Message Broker [100e] is centred around reliability. It is engineered to be fault-tolerant and mitigate data loss, ensuring the integrity and consistency of the data. With its robust design and capabilities, the Message Broker [100e] forms a critical component in managing and delivering real-time data in the system.
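A toy sketch of the topic-based publish-subscribe pattern attributed to the Message Broker above (in-memory only; the real broker's fault tolerance and filesystem-backed storage are not modelled here):

```python
from collections import defaultdict


class MessageBroker:
    """Toy publish-subscribe broker keyed by message topic."""

    def __init__(self):
        self._subscribers = defaultdict(list)  # topic -> consumer callbacks

    def subscribe(self, topic, callback):
        """Register a consumer (permanent or ad-hoc) for a topic."""
        self._subscribers[topic].append(callback)

    def publish(self, topic, message):
        """Fan the message out to every consumer subscribed to the topic."""
        for callback in self._subscribers[topic]:
            callback(message)
```

Producers and consumers never reference each other directly; the topic name is the only coupling between them, which is what lets new consumers attach without changing the producers.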
[0054] Graph layer [100f]: The Graph Layer [100f] plays a pivotal role in the IPM
system [100a]. It can model a variety of data types, including alarm, counter,
configuration, CDR data, Infra-metric data, 5G Probe Data, and Inventory data.
5 Equipped with the capability to establish relationships among diverse types of data,
The Graph Layer [100f] acts as a Relationship Modeler that offers extensive modelling capabilities. For instance, it can model Alarm and Counter data, or Vprobe and Alarm data, elucidating their interrelationships. Moreover, the Relationship Modeler is adept at processing the steps provided in the model and delivering the results to the requesting system, whether it be a Parallel Computing system, Workflow Engine, Query Engine, Correlation Engine [100n], 5G Performance Management Engine, or 5G KPI Engine [100w]. With its powerful modelling and processing capabilities, the Graph Layer [100f] forms an essential part of the system, enabling the processing and analysis of complex relationships between various types of network data.
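A non-limiting sketch of the relationship modelling described above is given below. The class, method, and relation names are illustrative assumptions and do not reflect the actual implementation of the Graph Layer [100f]:

```python
class RelationshipModeler:
    """Illustrative graph of relationships between network data types."""

    def __init__(self):
        # source data type -> list of (relation, target data type) edges
        self._edges = {}

    def relate(self, source, target, relation):
        # Establish a named relationship between two data types.
        self._edges.setdefault(source, []).append((relation, target))

    def related_to(self, source):
        # Return all relationships originating from a data type.
        return self._edges.get(source, [])

graph = RelationshipModeler()
graph.relate("Alarm", "Counter", "correlates_with")
graph.relate("Vprobe", "Alarm", "triggers")
```

Such a structure allows diverse data types (alarm, counter, configuration, CDR, and so on) to be linked and traversed when a requesting engine asks for their interrelationships.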
[0055] Scheduling layer [100g]: The Scheduling Layer [100g] serves as a key element of the IPM System [100a], endowed with the ability to execute tasks at predetermined intervals set according to user preferences. A task might be an activity performing a service call, an API call to another microservice, or the execution of an Elastic Search query and the storage of its output in the Distributed Data Lake [100u] or Distributed File System, or its transmission to another microservice. A microservice refers to an independently deployable service within the system architecture that provides a specific function; the microservices communicate with one another through API calls and remote procedure calls. The versatility of the Scheduling Layer [100g] extends to facilitating graph traversals via the Mapping Layer to execute tasks. This crucial capability enables seamless and automated operations within the system, ensuring that various tasks and services are performed on schedule, without manual intervention, enhancing the system's efficiency and performance. In sum, the Scheduling Layer [100g] orchestrates the systematic and periodic execution of tasks, making it an integral part of the efficient functioning of the entire system.
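The periodic task execution described above may be illustrated with the following simplified sketch. The function names, interval, and repetition count are illustrative assumptions; the actual Scheduling Layer [100g] dispatches service calls, queries, and graph traversals rather than local callables:

```python
import threading
import time

def schedule_periodic(task, interval_seconds, repetitions):
    """Run `task` every `interval_seconds`, `repetitions` times, on a worker thread."""
    def runner():
        for _ in range(repetitions):
            task()                      # execute the scheduled activity
            time.sleep(interval_seconds)  # wait until the next interval
    thread = threading.Thread(target=runner)
    thread.start()
    return thread

runs = []
worker = schedule_periodic(lambda: runs.append(time.time()), 0.01, 3)
worker.join()  # wait for all scheduled executions to complete
```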
[0056] Analysis Engine [100h]: The Analysis Engine [100h] forms a crucial part
of the IPM System [100a], designed to provide an environment where users can
configure and execute workflows for a wide array of use-cases. This facility aids in
5 the debugging process and facilitates a better understanding of call flows. With the
Analysis Engine [100h], users can perform queries on data sourced from various subsystems or external gateways. This capability allows for an in-depth overview of data and aids in pinpointing issues. The system's flexibility allows users to configure specific policies aimed at identifying anomalies within the data. When
10 these policies detect abnormal behaviour or policy breaches, the system sends
notifications, ensuring swift and responsive action. In essence, the Analysis Engine [100h] provides a robust analytical environment for systematic data interrogation, facilitating efficient problem identification and resolution, thereby contributing significantly to the system's overall performance management.
[0057] Parallel Computing Framework [100i]: The Parallel Computing Framework [100i] is a key aspect of the Integrated Performance Management System [100a], providing a user-friendly yet advanced platform for executing computing tasks in parallel. The parallel computing framework [100i] showcases
20 both scalability and fault tolerance, crucial for managing vast amounts of data.
Users can input data via Distributed File System (DFS) [100j] locations or Distributed Data Lake (DDL) indices. The framework supports the creation of task chains by interfacing with the Service Configuration Management (SCM) Sub¬System. Each task in a workflow is executed sequentially, but multiple chains can
25 be executed simultaneously, optimizing processing time. To accommodate varying
task requirements, the service supports the allocation of specific host lists for different computing tasks. The Parallel Computing Framework [100i] is an essential tool for enhancing processing speeds and efficiently managing computing resources, significantly improving the system's performance management
30 capabilities.
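The chain-execution model described above, in which tasks within a chain run sequentially while multiple chains run simultaneously, may be sketched as follows. This is illustrative only; the actual Parallel Computing Framework [100i] interfaces with the SCM Sub-System, DFS [100j] locations, and DDL indices:

```python
from concurrent.futures import ThreadPoolExecutor

def run_chain(chain, data):
    # Tasks within a chain execute sequentially, each feeding the next.
    for task in chain:
        data = task(data)
    return data

def run_chains(chains, inputs):
    # Independent chains execute simultaneously, optimizing processing time.
    with ThreadPoolExecutor() as pool:
        return list(pool.map(run_chain, chains, inputs))

double_then_increment = [lambda x: x * 2, lambda x: x + 1]
square = [lambda x: x * x]
results = run_chains([double_then_increment, square], [5, 3])
```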
[0058] Distributed File System [100j]: The Distributed File System (DFS) [100j]
is a critical component of the Integrated Performance Management System [100a],
enabling multiple clients to access and interact with data seamlessly. The
Distributed File system [100j] is designed to manage data files that are partitioned
5 into numerous segments known as chunks. In the context of a network with vast
data, the DFS [100j] effectively allows for the distribution of data across multiple
nodes. This architecture enhances both the scalability and redundancy of the
system, ensuring optimal performance even with large data sets. DFS [100j] also
supports diverse operations, facilitating the flexible interaction with and
10 manipulation of data. This accessibility is paramount for a system that requires
constant data input and output, as is the case in a robust performance management system.
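The chunk-based partitioning described above may be illustrated with the following simplified sketch. The chunk size, node names, and round-robin placement policy are illustrative assumptions; a production DFS also replicates chunks for redundancy:

```python
def chunk_and_place(data, chunk_size, nodes):
    """Split `data` into fixed-size chunks and assign them round-robin to nodes."""
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    placement = {node: [] for node in nodes}
    for index, chunk in enumerate(chunks):
        # Distribute chunks across nodes to spread storage and load.
        placement[nodes[index % len(nodes)]].append(chunk)
    return placement

placement = chunk_and_place(b"abcdefghij", 3, ["node-1", "node-2"])
```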
[0059] Load Balancer [100k]: The Load Balancer (LB) [100k] is a vital
15 component of the Integrated Performance Management System [100a], designed to
efficiently distribute incoming network traffic across a multitude of backend servers or microservices. Its purpose is to ensure the even distribution of data requests, leading to optimized server resource utilization, reduced latency, and improved overall system performance. The LB [100k] implements various routing strategies
20 to manage traffic. The LB [100k] includes round-robin scheduling, header-based
request dispatch, and context-based request dispatch. Round-robin scheduling is a simple method of rotating requests evenly across available servers. In contrast, header and context-based dispatching allow for more intelligent, request-specific routing. Header-based dispatching routes requests based on data contained within
25 the headers of the Hypertext Transfer Protocol (HTTP) requests. Context-based
dispatching routes traffic based on the contextual information about the incoming requests. For example, in an event-driven architecture, the LB [100k] manages event and event acknowledgments, forwarding requests or responses to the specific microservice that has requested the event. This system ensures efficient, reliable,
30 and prompt handling of requests, contributing to the robustness and resilience of
the overall performance management system.
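The round-robin and header-based dispatch strategies described above may be sketched as follows. This is illustrative only (context-based dispatch and event acknowledgment handling are omitted for brevity, and the class and server names are assumptions):

```python
import itertools

class MiniLoadBalancer:
    """Illustrative load balancer supporting two routing strategies."""

    def __init__(self, servers):
        self._round_robin = itertools.cycle(servers)

    def dispatch_round_robin(self):
        # Rotate requests evenly across the available servers.
        return next(self._round_robin)

    def dispatch_by_header(self, headers, routing_rules):
        # Route based on a value in the HTTP request headers;
        # fall back to round-robin when no rule matches.
        for header, value, server in routing_rules:
            if headers.get(header) == value:
                return server
        return self.dispatch_round_robin()

lb = MiniLoadBalancer(["srv-a", "srv-b"])
first = lb.dispatch_round_robin()
second = lb.dispatch_round_robin()
routed = lb.dispatch_by_header({"X-Tenant": "gold"},
                               [("X-Tenant", "gold", "srv-premium")])
```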
[0060] Streaming Engine [100l]: The Streaming Engine [100l], also referred to as
Stream Analytics, is a critical subsystem in the Integrated Performance
Management System [100a]. This engine is specifically designed for high-speed
5 data pipelining to the User Interface (UI). Its core objective is to ensure real-time
data processing and delivery, enhancing the system's ability to respond promptly to dynamic changes. Data is received from various connected subsystems and processed in real-time by the Streaming Engine [100l]. After processing, the data is streamed to the UI, fostering rapid decision-making and responses. The Streaming
10 Engine [100l] cooperates with the Distributed Data Lake [100u], Message Broker
[100e], and Caching Layer [100c] to provide seamless, real-time data flow. Stream Analytics is designed to perform required computations on incoming data instantly, ensuring that the most relevant and up-to-date information is always available at the UI. Furthermore, this system can also retrieve data from the Distributed Data
15 Lake [100u], Message Broker [100e], and Caching Layer [100c] as per the
requirement and deliver it to the UI in real-time. The Streaming Engine [100l] is configured to provide fast, reliable, and efficient data streaming, contributing to the overall performance of the Integrated Performance Management System [100a].
20 [0061] Reporting Engine [100m]: The Reporting Engine [100m] is a key
subsystem of the Integrated Performance Management System [100a]. The fundamental purpose of designing the Reporting Engine [100m] is to dynamically create report layouts of API data, catered to individual client requirements, and deliver these reports via the Notification Engine. The Reporting Engine [100m] serves as the primary interface for creating custom reports based on the data visualized through the
client's dashboard. These custom dashboards, created by the client through the User Interface (UI), provide the basis for the Reporting Engine [100m] to process and compile data from various interfaces. The main output of the Reporting Engine [100m] is a detailed report generated in Excel format. The Reporting Engine’s
30 [100m] unique capability to parse data from different subsystem interfaces, process
it according to the client's specifications and requirements, and generate a
comprehensive report makes it an essential component of this performance
management system. Furthermore, the Reporting Engine [100m] integrates
seamlessly with the Notification Engine to ensure timely and efficient delivery of
reports to clients via email, ensuring the information is readily accessible and
5 usable, thereby improving overall client satisfaction and system usability.
[0062] Further, a computing device on which the units of the integrated performance management system [100a] may be implemented is illustrated in FIG. 2. FIG. 2 illustrates an exemplary block diagram of a computing device [200] upon
10 which the features of the present disclosure may be implemented in accordance with
exemplary implementation of the present disclosure. In an implementation, the computing device [200] may also implement a method for real-time analysis of Key Performance Indicator (KPI) deviations utilising the system. In another implementation, the computing device [200] itself implements the method for real-time analysis of KPI deviations using one or more units configured within the computing device [200], wherein said one or more units are capable of implementing the features as disclosed in the present disclosure.
20 [0063] The computing device [200] may include a bus [202] or other
communication mechanism for communicating information, and a hardware
processor [204] coupled with bus [202] for processing information. The hardware
processor [204] may be, for example, a general-purpose microprocessor. The
computing device [200] may also include a main memory [206], such as a random-
25 access memory (RAM), or other dynamic storage device, coupled to the bus [202]
for storing information and instructions to be executed by the processor [204]. The
main memory [206] also may be used for storing temporary variables or other
intermediate information during execution of the instructions to be executed by the
processor [204]. Such instructions, when stored in non-transitory storage media
30 accessible to the processor [204], render the computing device [200] into a special-
purpose machine that is customized to perform the operations specified in the
instructions. The computing device [200] further includes a read only memory (ROM) [208] or other static storage device coupled to the bus [202] for storing static information and instructions for the processor [204].
5 [0064] A storage device [210], such as a magnetic disk, optical disk, or solid-state
drive is provided and coupled to the bus [202] for storing information and instructions. The computing device [200] may be coupled via the bus [202] to a display [212], such as a cathode ray tube (CRT), Liquid crystal Display (LCD), Light Emitting Diode (LED) display, Organic LED (OLED) display, etc. for
10 displaying information to a computer user. An input device [214], including
alphanumeric and other keys, touch screen input means, etc. may be coupled to the bus [202] for communicating information and command selections to the processor [204]. Another type of user input device may be a cursor controller [216], such as a mouse, a trackball, or cursor direction keys, for communicating direction
15 information and command selections to the processor [204], and for controlling
cursor movement on the display [212]. The input device typically has two degrees of freedom in two axes, a first axis (e.g., x) and a second axis (e.g., y), that allow the device to specify positions in a plane.
20 [0065] The computing device [200] may implement the techniques described
herein using customized hard-wired logic, one or more ASICs or FPGAs, firmware and/or program logic which in combination with the computing device [200] causes or programs the computing device [200] to be a special-purpose machine. According to one implementation, the techniques herein are performed by the
25 computing device [200] in response to the processor [204] executing one or more
sequences of one or more instructions contained in the main memory [206]. Such instructions may be read into the main memory [206] from another storage medium, such as the storage device [210]. Execution of the sequences of instructions contained in the main memory [206] causes the processor [204] to perform the
30 process steps described herein. In alternative implementations of the present
disclosure, hard-wired circuitry may be used in place of or in combination with software instructions.
[0066] The computing device [200] also may include a communication interface
5 [218] coupled to the bus [202]. The communication interface [218] provides a two-
way data communication coupling to a network link [220] that is connected to a local network [222]. For example, the communication interface [218] may be an integrated services digital network (ISDN) card, cable modem, satellite modem, or a modem to provide a data communication connection to a corresponding type of
10 telephone line. As another example, the communication interface [218] may be a
local area network (LAN) card to provide a data communication connection to a compatible LAN. Wireless links may also be implemented. In any such implementation, the communication interface [218] sends and receives electrical, electromagnetic or optical signals that carry digital data streams representing
15 various types of information.
[0067] The computing device [200] can send messages and receive data, including program code, through the network(s), the network link [220] and the communication interface [218]. In the Internet example, a server [230] might
20 transmit a requested code for an application program through the Internet [228], the
ISP [226], the local network [222], the host [224], and the communication interface [218]. The received code may be executed by the processor [204] as it is received, and/or stored in the storage device [210], or other non-volatile storage for later execution.
[0068] The computing device [200] may reside in a system as explained in FIG. 3. In one implementation, the computing device [200] may be associated with the system of FIG. 3.
30 [0069] Referring to FIG. 3, an exemplary block diagram of a system [300] for real-
time analysis of Key Performance Indicators (KPI) deviations, is shown, in
accordance with the exemplary implementations of the present disclosure. The
system [300] comprises at least one processing unit [302], at least one transceiver
unit [304], at least one generation unit [308], at least one computation layer [100d],
at least one distributed data lake (DDL) [100u], at least one Integrated Performance
5 Management (IPM) [100a], and at least one NLP Model [310]. The system [300]
may be in communication with at least one user interface [306] and at least one load
balancer [504]. Also, all of the components/units of the system [300], including all units shown within the system in FIG. 3, are assumed to be connected to each other unless otherwise indicated below. Further, although only a few units are shown in FIG. 3, the system [300] may comprise any number of such units as required to implement the features of the present disclosure. In an implementation, the system [300] may reside in a server or a network entity.
15 [0070] Further, the exemplary block diagram of the system [300] as shown in FIG.
3 is intended to be read in conjunction with the exemplary block diagram of a network performance management system [100] as shown in FIG. 1 and an exemplary architecture of a system [500] for real-time analysis of Key Performance Indicators (KPI) deviations as shown in FIG. 5. The systems in FIG. 1, FIG. 3 and
20 FIG. 5 complement each other.
[0071] The system [300] is configured for real-time analysis of Key Performance Indicators (KPI) deviations, with the help of the interconnection between the components/units of the system [300]. Continuous monitoring of counters and KPIs
reduces the risk of failure while improving business outcomes. The KPI deviation may be a positive deviation or a negative deviation. A positive deviation indicates that the KPI is higher than a target KPI, whereas a negative deviation indicates that the KPI is lower than the target KPI. The system [300] stores the output data aggregated from KPIs and counters in distributed data lakes or caching layers for further processing.
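The classification of a KPI deviation as positive or negative relative to a target KPI may be illustrated as follows (the function name and the example values are illustrative assumptions only):

```python
def kpi_deviation(measured, target):
    """Classify a KPI deviation relative to its target value."""
    if measured > target:
        return "positive"   # KPI is higher than the target KPI
    if measured < target:
        return "negative"   # KPI is lower than the target KPI
    return "none"           # KPI meets the target exactly

# Example: a throughput KPI above target, a latency KPI below target.
above = kpi_deviation(98.5, 95.0)
below = kpi_deviation(90.0, 95.0)
```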
[0072] The system [300] includes a transceiver unit [304]. The transceiver unit
[304] is configured to receive a KPI analysis request from a User Interface (UI)
[306], wherein the KPI analysis request comprises one or more selected KPIs
5 pertaining to a network, wherein each of the one or more selected KPIs comprises
one or more nested KPIs, and wherein the one or more selected KPIs further comprises at least a time period. The network may be a 5th generation core network, a 6th generation network, a 4th generation network or any other future generations of network. The KPIs may include a packet loss, a throughput, a jitter, a latency,
and the like. Packet loss refers to data packets failing to reach their destination. Throughput refers to the amount of data successfully delivered to the destination within a given duration of time. Jitter refers to the variation in the time at which data packets reach their destination. Latency refers to the time taken by a data packet to travel from one place to another destination. The time period included in the KPI analysis request defines the time period for which analysis is to
be performed, for instance KPI analysis may be requested for a period of 5 days. In an implementation of the present disclosure, the KPI analysis request also includes a condition. The condition defines a level at which the user wants to compute the KPIs, such as but not limited to a circle, a cluster, a blade etc.
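By way of a non-limiting illustration, the structure of such a KPI analysis request may be sketched as follows. The field names and default values are illustrative assumptions only and do not reflect the actual message format exchanged between the UI [306] and the transceiver unit [304]:

```python
from dataclasses import dataclass, field

@dataclass
class KPIAnalysisRequest:
    """Illustrative shape of a KPI analysis request."""
    selected_kpis: list                               # KPIs selected by the user
    nested_kpis: dict = field(default_factory=dict)   # KPI -> contributing sub-KPIs
    time_period_days: int = 5                         # period over which to analyse
    condition: str = "cluster"                        # level: circle, cluster, blade, ...

request = KPIAnalysisRequest(
    selected_kpis=["network_traffic"],
    nested_kpis={"network_traffic": ["unique_users", "total_accesses", "latency_rate"]},
)
```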
[0073] The KPI analysis request is received from the User Interface (UI) [306] via a load balancer [504], wherein the load balancer [504] is communicatively coupled with the transceiver unit [304]. The KPI analysis request from the UI [306] is generated by a user, and wherein the user selects one or more KPIs from a plurality
25 of KPIs. The user may be at least one of an individual, a group of individuals, an
administrator, and the like. The user may interact with the one or more units of the system [300] via the network.
[0074] In an implementation of the present disclosure, the transceiver unit [304]
30 receives the KPI analysis request from the UI [306]. The KPI analysis request may
include a single or multiple KPIs selected by the user [502]. The selected KPIs
include a nested KPI and the time period for which the KPI is to be analysed. The
nested KPI refers to a KPI created from two or more other KPIs, i.e., sub-metrics that contribute to the overall performance of a primary KPI. For
instance, if the selected KPI is network traffic, then the nested KPIs may be unique
5 users accessing the network, total number of accesses by the users which include a
single user accessing the network multiple times, and a latency rate.
[0075] The KPIs are selected through the UI [306] by the user. A list of available
KPIs may be displayed to the user on the UI [306]. The user may be allowed to
select the KPIs required for the analysis. The user may also create an on-demand
dashboard or a configurable dashboard and select the nested KPIs to monitor via the UI [306]. The KPI analysis request is sent to a load balancer [504] to manage the request in accordance with the load.
15 [0076] The time period included in the KPI analysis request defines the time period
for which analysis is to be performed, for instance KPI analysis may be requested for a period of 5 days.
[0077] A processing unit [302] of the system [300] via the IPM [100a] is configured
20 to compute values for the one or more selected KPIs based on the time period
associated with the KPI analysis request. For instance, if the time period in the KPI
analysis request is 5 days, the processing unit [302] may compute the one or more
selected KPIs for 5 days. The 5G KPI Engine [100w] of the IPM [100a] is
responsible for managing the KPIs of all network elements. Counters collected and
25 processed by the 5G Performance Engine [100v] through different data sources are
used by the KPI engine to calculate the KPI, segregate it based on the aggregation required, and store the KPI data in a distributed data lake [100u]. This component is responsible for all the reporting and visualization of KPI data.
30 [0078] Furthermore, the processing unit [302] is configured to receive computed
results for the one or more selected KPIs from the IPM [100a]. Further, based on
the received computed results, the generation unit [308] generates the KPI analysis report.
[0079] Further, the processing unit [302] displays the generated KPI analysis report
5 on the UI [306].
[0080] In an exemplary aspect of the present disclosure, the processing unit [302] via the IPM [100a] is configured to compute the one or more nested KPIs associated with the one or more selected KPIs based on the time period associated with the
10 KPI analysis request. The IPM [100a] processes the KPI analysis request when the
time period is less than a predetermined retention period. The predetermined retention period relates to historical data and managing the data storage periods in a network performance management system. A user or an operator of the network can maintain the database (or the DDL [100u]) by configuring the retention period
15 to keep the required data. Further, the IPM [100a] utilizes a natural language
processing (NLP) model [310] based on Artificial Intelligence (AI)/Machine Learning (ML) layer [508] (as shown in FIG. 5) to process the KPI analysis request. NLP model [310] is used for better user interaction so that the user can directly input his/her query about the KPIs and the resultant detailed analysis (tables,
20 graphs, trends & anomalies) of the composite KPIs & counters will be displayed to
the user.
[0081] In an implementation of the present disclosure, the KPIs will be computed and analysed on the basis of the time period mentioned by the user in the KPI
25 analysis request. For instance, if the time period mentioned is 5 days, the KPI data
will be computed for a period of 5 days. If the predetermined retention period is more than the time period mentioned in the KPI analysis request, the KPI analysis request may be sent to the DDL [100u] to fetch the KPI data. If the predetermined retention time period is less than the time period mentioned in the KPI analysis
30 request, the KPI analysis request will be further forwarded to the computation layer
[100d]. The computation layer [100d] will provide the KPIs beyond the
predetermined retention period. For instance, if the KPI analysis request is for a time period of 5 days but the predetermined retention period is 3 days, then, since the time period mentioned in the KPI analysis request exceeds the predetermined retention period, the KPI analysis request may be redirected to the computation layer [100d]. The computation layer [100d] comprises the pre-computed data. In an
implementation, the computation layer [100d] may precompute the one or more
selected KPIs from the data stored in the Distributed File System [514]. Therefore,
the computation layer [100d] may execute the KPI analysis request in a faster
manner in case the time period in the KPI analysis request exceeds the
10 predetermined retention period. The predetermined retention period may be
changed by the user.
[0082] In another instance, if the KPI analysis request is for a period of 2 days, the time period is within the predetermined retention period (3 days), and the KPI analysis request may not be transmitted to the computation layer [510]. The IPM
[100a] processes the KPI analysis request using the NLP model [310] and queries the DDL [100u] to fetch the KPIs data.
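The retention-period-based routing described in the instances above may be summarised in the following illustrative sketch (the function name and return labels are assumptions):

```python
def route_kpi_request(time_period_days, retention_period_days):
    """Decide which subsystem serves a KPI analysis request.

    Requests whose time period falls within the retention period are answered
    from the Distributed Data Lake; requests reaching beyond the retention
    period are redirected to the computation layer's pre-computed data.
    """
    if time_period_days <= retention_period_days:
        return "distributed_data_lake"
    return "computation_layer"

# Example from the disclosure: retention period of 3 days.
within = route_kpi_request(2, 3)   # 2-day request stays within retention
beyond = route_kpi_request(5, 3)   # 5-day request exceeds retention
```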
[0083] The transceiver unit [304] is further configured to transmit the KPI analysis
20 request to a DDL [100u]. Further, the transceiver unit [304] is configured to fetch
a set of data pertaining to the one or more KPIs from the Distributed data lake
[100u]. The set of data refers to attributes associated with the one or more selected
KPIs. The processing unit [302] is configured to compute the one or more selected
KPIs based on the fetched set of data. The set of data may be fetched by sending a
25 query to the DDL [100u].
[0084] A generation unit [308] is further configured to generate a performance
report based on the computed values for the one or more selected KPIs. The
performance report comprises the computed values for the one or more nested KPIs
30 associated with the one or more selected KPIs. The performance report is generated
by the generation unit [308] and displayed through the UI [306]. The UI [306] may
display a performance report generation form, where the user may select the one or more selected KPIs.
[0085] The generation unit [308] is further configured to, upon generation of the
5 performance report, transmit a message to the UI [306] and receive a response from
the UI [306], wherein the response is indicative of the user associated with the UI
[306] requesting for the performance report. The message transmitted by the
generation unit [308] is displayed on the UI [306]. The user associated with the UI
[306] provides a response to the message which indicates his/her consent to receive the
10 performance report.
[0086] The processing unit [302], based on the received response, transmits the performance report to the UI [306] to be viewed and consumed by the user. Upon
receiving the response from the user, the processing unit [302] transmits the
15 performance report to the UI [306] which is displayed to the user.
[0087] In an implementation of the present disclosure, once the IPM [100a]
receives the KPI data from the DDL [100u], the generation unit [308] of the IPM
[100a], generates the KPI analysis report. Where the computation layer [100d] is
20 used to fetch the KPI data from the DDL [100u], the fetched KPI data is transmitted
by the computation layer [100d] to the IPM [100a]. Thereafter, the KPI analysis report or dashboard is generated by the processing unit [302] of the IPM [100a] and transmitted to the UI [306].
[0088] The processing unit [302] is further configured to display the performance report for the one or more selected KPIs. The processing unit [302] displays the generated performance report on the UI [306] using tabular representation and graphical representation, wherein the generated performance report provides the KPI deviations associated with the one or more nested KPIs of the one or more selected KPIs.
[0089] In an implementation of the present disclosure, the processing unit [302]
further displays the KPI analysis report to the user on the UI [306]. In the report,
the user may be able to see the values of counters or sub-counters of the KPIs they had
chosen to monitor. In an implementation the user may see the report through the
5 on-demand dashboard created on the UI [306]. The user may monitor and analyse
the KPIs of all available parameters on-demand as well as on a scheduled basis. To monitor the KPIs on-demand, the user can create an on-demand dashboard. The on-demand dashboard enables an easy navigation and provides a network topological view with animations for easily monitoring KPIs associated with network elements.
10 The on-demand dashboard is a real-time dashboard which indicates details of each
network function that is currently monitored. It provides functionality to create a user-level dashboard using counter and KPIs and save the created dashboard. User can execute these dashboards too, and roll up or drill down on 15 min, 1 hour, 6 hours, daily, weekly, monthly, and yearly basis. The operational status of the
15 network functions is available to guide the user to take necessary actions. The on-
demand dashboard may present a KPI chart on the UI [306].
[0090] Referring to FIG. 4, an exemplary method flow diagram [400] for real-time analysis of Key Performance Indicators (KPI) deviations, in accordance with
20 exemplary implementations of the present disclosure is shown. In an
implementation the method [400] is performed by the system [300]. Further, in an implementation, the system [300] may be present in a server device to implement the features of the present disclosure. Also, as shown in FIG. 4, the method [400] starts at step [402].
[0091] At step [404], the method comprises receiving, by a transceiver unit [304], a KPI analysis request from a User Interface (UI) [306], wherein the KPI analysis request comprises one or more selected KPIs pertaining to a network. Each of the one or more selected KPIs comprises one or more nested KPIs, and wherein the one
30 or more selected KPIs further comprises at least a time period. The time period
included in the KPI analysis request defines the time period for which analysis is to
be performed, for instance KPI analysis may be requested for a period of 5 days.
The time period, the KPIs and other data mentioned are for illustrative purposes
only. They may differ or may be changed by the user. In an implementation of the
present disclosure, the KPI analysis request also includes a condition. The condition
5 defines a level at which the user wants to compute the KPIs, such as but not limited
to a circle, a cluster, a blade etc. The KPI analysis request is received from the User
Interface (UI) [306] via a load balancer [504], wherein the load balancer [504] is
communicatively coupled with the transceiver unit [304]. The KPI analysis request
from the UI [306] is generated by a user, wherein the user selects one or more KPIs
10 from a plurality of KPIs from the UI [306]. The method further includes computing,
by the processing unit [302] via the IPM [100a], the one or more nested KPIs associated with the one or more selected KPIs.
[0092] In an implementation of the present disclosure, the transceiver unit [304]
15 receives the KPI analysis request from the UI [306]. The KPI analysis request may
include a single or multiple KPIs selected by the user [502]. The selected KPIs may include a nested KPI and the time period for which the KPIs are to be analysed. The nested KPI refers to a KPI created from two or more other KPIs, i.e., sub-metrics that contribute to the overall performance of a primary KPI. The KPIs are selected through the user interface (UI) [306] by the user. A list of available KPIs may be displayed to the user. The user may be allowed to select the KPIs required for the analysis. The user may also create an on-demand dashboard or a configurable dashboard and select the nested KPIs to monitor via the UI [306].
25 as well as on a scheduled basis. To monitor the KPIs on-demand, the user can create
an on-demand dashboard. The on-demand dashboard enables an easy navigation and provides a network topological view with animations for easily monitoring KPIs associated with network elements. The on-demand dashboard is a real-time dashboard which indicates details of each network function that is currently
monitored. Along with it, the dashboard helps to analyse the KPIs in real-time and to reach element-specific information on counters, alarms, configuration,
and more. The operational status of the network functions is available to guide the user to take necessary actions. The on-demand dashboard may present a KPI chart on the UI [306]. Further, the KPI analysis request is sent to a load balancer [504] to manage the request in accordance with the load.
[0093] For instance, if the selected KPI is network traffic, the nested KPIs may be the unique users accessing the network, the total number of accesses by the users (which includes a single user accessing the network multiple times), and a latency rate. The KPI analysis request further includes the time period for which the analysis is to be performed, for instance a period of 5 days.
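The decomposition of a primary KPI into nested KPIs described above can be sketched in code. This is an illustrative sketch only: the disclosure does not define a concrete KPI data model, so the names below (`Kpi`, `collect_nested_values`) and the sample figures are hypothetical.

```python
from dataclasses import dataclass, field

# Hypothetical KPI data model: a primary KPI may hold nested KPIs, which are
# the sub-metrics contributing to its overall performance.

@dataclass
class Kpi:
    name: str
    value: float = 0.0
    nested: list["Kpi"] = field(default_factory=list)

def collect_nested_values(kpi: Kpi) -> dict[str, float]:
    """Flatten a primary KPI and its nested KPIs into name -> value pairs,
    mirroring the decomposition of a KPI into its sub-metrics."""
    values = {kpi.name: kpi.value}
    for sub in kpi.nested:
        values.update(collect_nested_values(sub))
    return values

# The "network traffic" example from the description: nested KPIs are
# unique users, total accesses, and a latency rate (values invented).
traffic = Kpi("network_traffic", 1000.0, nested=[
    Kpi("unique_users", 420.0),
    Kpi("total_accesses", 950.0),
    Kpi("latency_rate_ms", 12.5),
])
```

A drill-down report over such a structure would simply walk the `nested` list recursively, which is what makes nested KPIs suitable for root-cause analysis.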
[0094] Next, at step [406], the method comprises computing, by a processing unit [302] via an integrated process management (IPM) [100a], values for the one or more selected KPIs based on the time period associated with the KPI analysis request. The 5G KPI Engine [100w] of the IPM [100a] is responsible for managing the KPIs of all network elements. Counters collected and processed by the 5G Performance Engine [100v] through different data sources are used by the KPI engine to calculate the KPIs, segregate them based on the aggregation required, and store the KPI data in a distributed data lake [100u]. This component is responsible for all the reporting and visualization of KPI data.
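The counter-to-KPI pipeline described for the 5G KPI Engine [100w] can be sketched as follows. This is a hypothetical sketch: raw counters are aggregated per time bucket, then a KPI formula is applied; the counter names and the success-rate formula are assumptions, not taken from the disclosure.

```python
from collections import defaultdict

def aggregate_counters(samples):
    """Sum raw counter samples (name, hour_bucket, value) per (name, hour),
    mimicking the segregation of counters by the required aggregation."""
    buckets = defaultdict(float)
    for name, hour, value in samples:
        buckets[(name, hour)] += value
    return dict(buckets)

def success_rate_kpi(buckets, hour):
    """Example KPI formula: successful attempts / total attempts for one
    hour bucket. Counter names 'attempts'/'successes' are invented."""
    total = buckets.get(("attempts", hour), 0.0)
    ok = buckets.get(("successes", hour), 0.0)
    return 0.0 if total == 0 else ok / total
```

In the described system the aggregated buckets would be persisted to the distributed data lake [100u], from which reports are later served.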
[0095] The method further includes transmitting, by the processing unit [302] to the IPM [100a], the KPI analysis request for further processing. The IPM [100a] processes the KPI analysis request when the time period is less than a predetermined retention period. The predetermined retention period relates to historical data and to managing the data storage periods in a network performance management system. A user or an operator of the network can maintain the database (or the DDL [100u]) by configuring the retention period to keep the required data. Further, the IPM [100a] utilizes a natural language processing (NLP) model [310], based on an Artificial Intelligence (AI)/Machine Learning (ML) model, to process the KPI analysis request.
[0096] The method further includes transmitting, by the NLP model [310], the KPI analysis request to a DDL [100u]. Further, the method includes fetching, by the processing unit [302] via the IPM [100a], a set of data pertaining to the one or more KPIs from the distributed data lake [100u]. In an implementation of the present disclosure, the NLP model [310] of the IPM [100a] fetches the KPI data from the DDL [100u]. Furthermore, the method includes computing, by the processing unit [302] via the IPM [100a], the one or more selected KPIs based on the set of data. Furthermore, the processing unit [302] is configured to receive the computed results for the one or more selected KPIs from the IPM [100a].
[0097] In an implementation of the present disclosure, the KPIs will be computed and analysed on the basis of the time period mentioned by the user in the KPI analysis request. For instance, if the time period mentioned is 5 days, the KPI data will be computed for a period of 5 days.
[0098] In an implementation of the present disclosure, a predetermined retention period is configured at the IPM [100a]. In an implementation, the predetermined retention period may be 3 days. The predetermined retention period may be changed by the user.
[0099] In an instance, if the KPI analysis request is for a period of 2 days and the predetermined retention period is set at 3 days, then the time period is within the predetermined retention period (3 days), so the KPI analysis request may not be transmitted to the computation layer [100d]. Instead, the IPM [100a] processes the KPI analysis request using the NLP model [310] and queries the DDL [100u] to fetch the KPI data.
[0100] In an implementation of the present disclosure, if the predetermined retention period is less than the time period mentioned in the KPI analysis request, the KPI analysis request will be forwarded to the computation layer [100d], which provides the KPIs beyond the predetermined retention period. The computation layer [100d] comprises pre-computed data, and may precompute the one or more selected KPIs from the data stored in the Distributed File System [514]. Therefore, the computation layer [100d] may execute the KPI analysis request in a faster manner in case the time period in the KPI analysis request exceeds the predetermined retention period.
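The routing rule of paragraphs [0099] and [0100] can be sketched as a single decision function. This is a hypothetical sketch: a request whose time period falls within the predetermined retention period is served from the distributed data lake (DDL) via the NLP model, while a longer request is redirected to the computation layer. The boundary case (time period exactly equal to the retention period) is not specified in the text, so the strict comparison below is an assumption.

```python
DEFAULT_RETENTION_DAYS = 3  # predetermined retention period; user-configurable

def route_kpi_request(requested_days: int,
                      retention_days: int = DEFAULT_RETENTION_DAYS) -> str:
    """Return which backend serves the KPI analysis request."""
    if requested_days < retention_days:
        # Within retention: the IPM queries the DDL via the NLP model.
        return "ddl"
    # Beyond retention: the computation layer serves pre-computed KPI data
    # derived from the distributed file system.
    return "computation_layer"
```

With a 3-day retention period, the 2-day request of paragraph [0099] stays on the DDL path, while the 5-day request of paragraph [0113] is redirected to the computation layer.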
[0101] At step [408], the method comprises generating, by a generation unit [308] via the IPM [100a], a performance report based on the computed values for the one or more selected KPIs. The performance report comprises the computed values for the one or more nested KPIs associated with the one or more selected KPIs. Upon generation of the performance report, the generation unit [308] transmits a message to the UI [306], via the transceiver unit [304], to be displayed on the UI [306] for the user. Further, the generation unit [308] receives a response via the transceiver unit [304] from the UI [306], wherein the response is indicative of the user associated with the UI [306] requesting the performance report.
[0102] In an implementation of the present disclosure, once the IPM [100a], via the processing unit [302], receives the KPI data, the generation unit [308] may generate the KPI analysis report. Where the computation layer [100d] is used to fetch the KPI data, a notification may be generated by the processing unit [302] of the IPM [100a] and transmitted to the load balancer [504]. The load balancer [504], via the transceiver unit [304], then displays the notification on the UI [306] for the user's response. Furthermore, based on the received response from the user, the generation unit [308], via the transceiver unit [304], transmits the performance report to the UI [306] to be displayed to the user. The message/notification is displayed on the UI [306], and the user associated with the UI [306] provides a response to the message/notification indicating consent to receive the performance report.
[0103] At step [410], the method comprises causing to display, by the processing unit [302], the performance report for the one or more selected KPIs. The generated performance report is transmitted by the transceiver unit [304] to the UI [306]. The performance report is generated by the generation unit [308] and displayed through the UI [306]. The UI [306] may display a performance report generation form, where the user may select the one or more selected KPIs. The generated report comprises at least one of a tabular representation and a graphical representation, wherein the generated performance report provides KPI deviations associated with the one or more nested KPIs of the one or more selected KPIs. The displayed report may be visible to the user through the on-demand dashboard created by the user.
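A tabular report of KPI deviations, as described above, can be sketched as follows. This is an illustrative sketch only: the disclosure calls for a tabular or graphical report showing KPI deviations for the nested KPIs but does not fix a format, so the baseline values and the percentage-deviation formula below are assumptions.

```python
def build_report_rows(computed, baseline):
    """One row per nested KPI: (name, computed, baseline, deviation %).
    A KPI with no baseline is reported with zero deviation."""
    rows = []
    for name, value in computed.items():
        base = baseline.get(name, value)  # no baseline -> zero deviation
        deviation = 0.0 if base == 0 else (value - base) / base * 100.0
        rows.append((name, value, base, round(deviation, 2)))
    return rows
```

Each row could then be rendered as a table row or a bar in a chart on the on-demand dashboard, letting the user drill into the nested KPI whose deviation is largest.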
[0104] In the report, the user may be able to see the values of the counters or sub-counters of the KPIs they had chosen to monitor.
[0105] Thereafter, the method terminates at [412].
[0106] The system [300] and method [400], as explained in FIG. 3 and FIG. 4, will be clearer from the detailed exemplary architecture described in FIG. 5.
[0107] Further, the exemplary architecture [500] as described in FIG. 5 is intended to be read in conjunction with the exemplary process flow [600] for real-time analysis of Key Performance Indicators (KPI) deviations as shown in FIG. 6. FIG. 5 and FIG. 6 complement each other.
[0108] Referring to FIG. 5, it illustrates an exemplary architecture of a system for real-time analysis of Key Performance Indicators (KPI) deviations.
[0109] The system [500] includes the User Interface (UI) [306], a Load Balancer [504], the IPM [100a], an AI/ML layer [508], the Distributed Data Lake (DDL) [100u], the Computational Layer [100d], and the Distributed File System [514].
The functions performed by the components of the system [500] are described below:
[0110] As shown in FIG. 5, the system includes the UI [306]. A user selects one or more KPIs, comprising one or more nested KPIs, to be monitored on the UI [306] to create a KPI analysis request. The one or more selected KPIs may be monitored via an on-demand dashboard created by the user on the UI [306]. The user may monitor and analyse the KPIs of all available parameters on-demand as well as on a scheduled basis. To monitor the KPIs on-demand, the user can create an on-demand dashboard. The on-demand dashboard enables easy navigation and provides a network topological view with animations for easily monitoring KPIs associated with network elements. The on-demand dashboard is a real-time dashboard which indicates details of each network function that is currently monitored. The dashboard also helps to analyse the KPIs in real-time and to reach element-specific information on counters, alarms, configuration, and more. The operational status of the network functions is available to guide the user to take necessary actions. The on-demand dashboard may present a KPI chart on the UI [306]. The KPI analysis request from the user is transmitted by the UI [306] to a load balancer [504]. The KPI analysis request includes a time period for which the KPI is to be monitored and analysed. In an implementation of the present disclosure, the KPI analysis request also includes a condition. The condition defines a level at which the user wants to compute the KPIs, such as, but not limited to, a circle, a cluster, a blade, etc.
[0111] The load balancer [504] receives the KPI analysis request from the UI [306] and manages the requests as per the load. The IPM [100a] receives the requests from the UI [306] via the load balancer [504] and starts computing the KPIs based on the time period and conditions mentioned in the request. If the time period in the KPI analysis request is less than a predetermined retention period, then the IPM [100a] utilizes the AI/ML Layer [508] to process the KPI analysis request. The predetermined retention period relates to historical data and to managing the data storage periods in a network performance management system. The user or an operator of the network can maintain the database (or the DDL [100u]) by configuring the retention period to keep the required data.
[0112] Further, the system [500] includes an AI/ML layer [508]. The AI/ML layer [508] of the IPM [100a] fetches the KPI data from the DDL [100u]. Further, the IPM [100a] computes the one or more selected KPIs based on the set of data fetched from the DDL [100u]. Furthermore, the load balancer [504] is configured to receive the computed results for the one or more selected KPIs from the IPM [100a]. It is to be noted that the AI/ML Layer [508] is part of the NLP model [310] as shown in FIG. 3.
[0113] If the predetermined retention period is less than the time period mentioned in the KPI analysis request, the KPI analysis request will be forwarded to the computation layer [100d], which provides the KPIs beyond the predetermined retention period. For instance, if the KPI analysis request is for a time period of 5 days but the predetermined retention period is 3 days, then, since the time period mentioned in the KPI analysis request exceeds the predetermined retention period, the KPI analysis request may be redirected to the computation layer [100d]. The computation layer [100d] comprises pre-computed data, and may precompute the one or more selected KPIs from the data stored in the Distributed File System [514]. Therefore, the computation layer [100d] may execute the KPI analysis request in a faster manner in case the time period in the KPI analysis request exceeds the predetermined retention period. The predetermined retention period may be changed by the user.
[0114] The exemplary system architecture as shown in FIG. 5 will be clearer from an illustrative method implementation of the system [500] as explained in FIG. 6. Referring to FIG. 6, it illustrates an implementation of the exemplary process for real-time analysis of Key Performance Indicators (KPI) deviations.
[0115] In an implementation, the method [600] is performed by the system [500]. Particularly, the method encompasses enabling the following steps by the system [500].
[0116] At step 1, a user request comprising a KPI analysis request is sent by the user [602] to the UI [306]. The KPI analysis request may include a single KPI or multiple KPIs selected by the user [602]. The selected KPIs may include a nested KPI and the time period for which the KPIs are to be analysed. A nested KPI refers to a KPI created from two or more other KPIs, and also to the sub-metrics that contribute to the overall performance of a primary KPI. For instance, if the selected KPI is network traffic, the nested KPIs may be the unique users accessing the network, the total number of accesses by the users (which includes a single user accessing the network multiple times), and a latency rate. The KPI analysis request further includes the time period for which the analysis is to be performed, for instance a period of 5 days.
[0117] At step 2, the UI server [604] forwards the KPI analysis request to the load balancer [504]. It is to be noted that the UI server [604] performs the same function as performed by the UI [306] as shown in FIG. 3 and FIG. 5.
[0118] At step 3, the load balancer [504] forwards the request to the IPM [100a].
[0119] At step 4, the IPM [100a] processes the KPI analysis request. The IPM [100a] checks if the time period as mentioned in the KPI analysis request is less than a predetermined retention period. If the time period is less than the predetermined retention period, then the IPM [100a] utilises an Artificial Intelligence (AI)/Machine Learning (ML) layer [508] to preprocess the request using the NLP model [310]. The predetermined retention period relates to historical data and to managing the data storage periods in a network performance management system. A user or an operator of the network can maintain the database (or the DDL [100u]) by configuring the retention period to keep the required data. Further, the preprocessing of the request by the AI/ML Layer [508] involves extracting features of the request. It is to be noted that the AI/ML Layer [508] is part of the NLP model [310] as shown in FIG. 3.
[0120] At step 5, the AI/ML Layer [508], after preprocessing, sends a query to the DDL [100u] to fetch the KPI data.
[0121] At step 6, the fetched KPI data is then forwarded to the IPM [100a] by the AI/ML Layer [508]. The KPI data is then sent to a generation unit [308] by the IPM [100a] for generating a report for the user. The generated report is then sent to the UI server [604] via the Load Balancer [504] to be viewed by the user through an on-demand dashboard created by the user.
[0122] Further, at step 7, in an implementation of the present disclosure, if the time period included in the KPI analysis request is greater than the predetermined retention period, then the IPM [100a] sends the request to a computation layer [100d].
[0123] At step 8, the computation layer [100d] fetches pre-computed KPI data from the DDL [100u] and further computes the KPI data based on the pre-computed KPI data.
[0124] At step 9, the computation layer [100d] then sends the KPI data to the IPM [100a], which then forwards the KPI data to the Load Balancer [504] to be further displayed on the UI server [604].
[0125] At step 10, the IPM [100a] generates the KPI analysis report and sends the KPI analysis report to the load balancer [504]. In case the KPI data is received from the computation layer [100d], a notification is also transmitted to the load balancer [504].
[0126] At step 11, the load balancer [504] forwards the KPI analysis report to the UI [306].
[0127] At step 12, the UI server [604] displays the KPI analysis report to the user [602], along with the notification where the computation layer [100d] is used. The notification is sent to the user to obtain the user's response to fetch the KPI analysis report. Based on the user's response, the KPI analysis report is sent to the UI server [604] to be consumed by the user [602].
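The twelve steps of the FIG. 6 flow can be condensed into a single end-to-end sketch. This is a hypothetical illustration: each component is reduced to an entry in a trace list, and all names and the return shape are assumptions made for clarity.

```python
def handle_request(requested_days: int, retention_days: int = 3) -> dict:
    """UI -> load balancer -> IPM -> (AI/ML + DDL | computation layer) -> UI."""
    trace = ["ui", "load_balancer", "ipm"]        # steps 1-3: request inbound
    if requested_days < retention_days:           # step 4: retention check
        trace += ["ai_ml_layer", "ddl"]           # steps 5-6: NLP query to DDL
        notify = False
    else:
        trace += ["computation_layer", "ddl"]     # steps 7-9: pre-computed data
        notify = True                             # steps 10-12: consent notification
    trace += ["ipm", "load_balancer", "ui"]       # report returned to the dashboard
    return {"trace": trace, "notification": notify}
```

Note that the notification flag is raised only on the computation-layer path, matching steps 10 through 12, where the user's consent is sought before the report is delivered.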
[0128] The present disclosure further discloses a non-transitory computer readable storage medium storing instructions for real-time analysis of Key Performance Indicators (KPI) deviations. The instructions include executable code which, when executed by one or more units of a system, causes: a transceiver unit [304] of the system to receive a KPI analysis request from a User Interface (UI) [306], wherein the KPI analysis request comprises one or more selected KPIs pertaining to a network, wherein each of the one or more selected KPIs comprises one or more nested KPIs, and wherein the one or more selected KPIs further comprises at least a time period. Further, the instructions include executable code which, when executed, causes a processing unit [302] of the system to compute values for the one or more selected KPIs based on the time period associated with the KPI analysis request. Further, the instructions include executable code which, when executed, causes a generation unit [308] of the system to generate a performance report based on the computed values for the one or more selected KPIs, and the processing unit [302] of the system to cause to display the performance report for the one or more selected KPIs.
[0129] As is evident from the above, the present disclosure provides a technically advanced solution for real-time analysis of Key Performance Indicators (KPI) deviations. The present disclosure provides a KPI drill-down solution which resolves the problem of decomposing KPIs into the counters or KPIs from which they are generated, leading to the root cause of the variation from normal values and resolving network issues in real time through interactive tables and graphs. Further, KPIs may be created using other KPIs, which creates nested KPIs that further help in going deeper to the root cause. Using the drill-down solution, the user can directly reach the bottom of the detail and pinpoint the issue, which would be very tedious if done manually. The KPI response report may include a root cause factor and any anomaly detection information, which can be generated based on a user request or automatically based on an on-demand dashboard. The present solution provides the user the ability to drill down into constituent counters or KPIs, thus assisting in monitoring performance at a granular level and also in identifying the root cause of problems in the network.
[0130] While considerable emphasis has been placed herein on the disclosed implementations, it will be appreciated that many implementations can be made and that many changes can be made to the implementations without departing from the principles of the present disclosure. These and other changes in the implementations of the present disclosure will be apparent to those skilled in the art, whereby it is to be understood that the foregoing descriptive matter to be implemented is illustrative and non-limiting.
[0131] Further, in accordance with the present disclosure, it is to be acknowledged that the functionality described for the various components/units can be implemented interchangeably. While specific embodiments may disclose a particular functionality of these units for clarity, it is recognized that various configurations and combinations thereof are within the scope of the disclosure. The functionality of specific units as disclosed in the disclosure should not be construed as limiting the scope of the present disclosure. Consequently, alternative arrangements and substitutions of units, provided they achieve the intended functionality described herein, are considered to be encompassed within the scope of the present disclosure.
We Claim:
1. A method for real-time analysis of Key Performance Indicator (KPI) deviations,
the method comprising steps of:
- receiving, by a transceiver unit [304], a KPI analysis request from a User Interface (UI) [306], wherein the KPI analysis request comprises one or more selected KPIs pertaining to a network, wherein each of the one or more selected KPIs comprises one or more nested KPIs, and wherein the one or more selected KPIs further comprises at least a time period;
- computing, by a processing unit [302] via an integrated process management (IPM) [100a], values for the one or more selected KPIs based on the time period associated with the KPI analysis request;
- generating, by a generation unit [308] via the IPM [100a], a performance report based on the computed values for the one or more selected KPIs; and
- causing to display, by the processing unit [302], the performance report for the one or more selected KPIs on the UI [306].
2. The method as claimed in claim 1, further comprising:
upon generation of the performance report, transmitting a message to the UI
[306];
receiving a response from the UI [306], wherein the response is indicative of
the user associated with the UI [306] requesting for the performance report; and
based on the received response, transmitting the performance report to the UI
[306] for display to the user.
3. The method as claimed in claim 1, wherein causing to display the generated
performance report comprises causing to display, via the UI [306], the
performance report using at least one of tabular representation and graphical
representation, wherein the generated performance report is to provide KPI
deviations associated with one or more nested KPIs of the one or more selected KPIs.
4. The method as claimed in claim 1, further comprising computing, by the processing unit [302] via the IPM [100a], the one or more nested KPIs associated with the one or more selected KPIs.
5. The method as claimed in claim 4, wherein the performance report comprises the computed values for the one or more nested KPIs associated with the one or more selected KPIs.
6. The method as claimed in claim 1, further comprising:
- transmitting, by the processing unit [302] via the IPM [100a], the KPI analysis request to a distributed data lake [100u];
- fetching, by the processing unit [302] via the IPM [100a], a set of data pertaining to the one or more KPIs from the distributed data lake [100u]; and
- computing, by the processing unit [302] via the integrated process management (IPM) [100a], the one or more selected KPIs based on the set of data.
7. The method as claimed in claim 1, wherein the method further comprises:
- transmitting, by the processing unit [302] via the IPM [100a], the KPI analysis request to a computation layer [100d] for further processing when the time period exceeds a predetermined retention period; and
- receiving, by the processing unit [302] via the IPM [100a], computed results for the one or more selected KPIs from the computation layer [100d], wherein, based on the received computed results, the processing unit generates the performance report.
8. The method as claimed in claim 7, wherein the computation layer [100d] utilizes one of a natural language processing (NLP) model [310] and a machine learning model to compute results for the one or more selected KPIs when the time period is less than the predetermined retention period.
9. The method as claimed in claim 1, wherein the KPI analysis request is received from the User Interface (UI) [306] via a load balancer [504], wherein the load balancer [504] is communicatively coupled with the processing unit [302].
10. The method as claimed in claim 1, wherein the KPI analysis request from the UI [306] is generated by a user, and wherein the user is to select one or more KPIs from a plurality of KPIs and create an on-demand dashboard for monitoring of the selected one or more KPIs.
11. A system [300] for real-time analysis of Key Performance Indicator (KPI) deviations, the system comprising:
- a transceiver unit [304] configured to receive a KPI analysis request from a User Interface (UI) [306], wherein the KPI analysis request comprises one or more selected KPIs pertaining to a network, wherein each of the one or more selected KPIs comprises one or more nested KPIs, and wherein the one or more selected KPIs further comprises at least a time period;
- a processing unit [302] configured to compute values for the one or more selected KPIs based on the time period associated with the KPI analysis request;
- a generation unit [308] configured to generate a performance report based on the computed values for the one or more selected KPIs; and
- the processing unit [302] configured to cause to display the performance report for the one or more selected KPIs on the UI [306].
12. The system [300] as claimed in claim 11, wherein the processing unit [302] is
further configured to:
- transmit a message to the UI [306] upon generation of the performance report;
- receive a response from the UI [306], wherein the response is indicative of the user associated with the UI [306] requesting for the performance report; and
- based on the received response, transmit the performance report to the UI [306] for display to the user.
13. The system [300] as claimed in claim 11, wherein the processing unit [302] is configured to cause to display the generated performance report, via the UI [306], using tabular representation and graphical representation, wherein the generated performance report is to provide KPI deviations associated with one or more nested KPIs of the one or more selected KPIs.
14. The system [300] as claimed in claim 11, wherein the processing unit [302] is further configured to compute the one or more nested KPIs associated with the one or more selected KPIs.
15. The system [300] as claimed in claim 14, wherein the performance report comprises the computed values for the one or more nested KPIs associated with the one or more selected KPIs.
16. The system [300] as claimed in claim 11, wherein the processing unit [302] is further configured to:
- transmit KPI analysis request to a distributed data lake [100u];
- fetch a set of data pertaining to the one or more KPIs from the distributed data lake [100u]; and
- compute the one or more selected KPIs based on the set of data.
17. The system [300] as claimed in claim 11, wherein the processing unit [302] is
further configured to:
- transmit the KPI analysis request to a computation layer [100d] for further processing when the time period exceeds a predetermined retention period; and
- receive computed results for the one or more selected KPIs from the computation layer [100d], wherein, based on the received computed results, the processing unit [302] generates the performance report.
18. The system [300] as claimed in claim 17, wherein the computation layer [100d] utilizes one of a natural language processing (NLP) model [310] and a machine learning model to compute results for the one or more selected KPIs when the time period is less than the predetermined retention period.
19. The system [300] as claimed in claim 11, wherein the KPI analysis request is received from the UI [306] via a load balancer [504], wherein the load balancer [504] is communicatively coupled with the processing unit [302].
20. The system [300] as claimed in claim 11, wherein the KPI analysis request at the UI [306] is generated by a user, wherein the user is to select one or more KPIs from a plurality of KPIs and create an on-demand dashboard for monitoring of the selected one or more KPIs.