Abstract: The present disclosure relates to a method and a system for generating and provisioning a Key Performance Indicator (KPI). The method includes receiving, by a transceiver unit [302] from a User Equipment (UE) [306], a Key Performance Indicator (KPI) provisioning request comprising a list of KPI parameters associated with a network. The method includes extracting, by a processing unit [304], at least one of a plurality of KPI parameters from the received list of KPI parameters. The method includes generating, by the processing unit [304], a plurality of updated KPI parameters based on the extracted at least one of the plurality of KPI parameters. The plurality of updated KPI parameters is generated based on a set of pre-defined network policies applied to the extracted at least one of the plurality of KPI parameters, the set of pre-defined policies comprising one of an inverse function, a mode function, and an erlang function. [FIG. 4]
FORM 2
THE PATENTS ACT, 1970
(39 OF 1970)
&
THE PATENT RULES, 2003
COMPLETE SPECIFICATION
(See section 10 and rule 13)
“METHOD AND SYSTEM FOR GENERATING AND PROVISIONING A
KEY PERFORMANCE INDICATOR (KPI)”
We, Jio Platforms Limited, an Indian National, of Office - 101, Saffron, Nr.
Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat,
India.
The following specification particularly describes the invention and the manner in
which it is to be performed.
METHOD AND SYSTEM FOR GENERATING AND PROVISIONING A
KEY PERFORMANCE INDICATOR (KPI)
FIELD OF INVENTION
[0001] Embodiments of the present disclosure generally relate to network
performance management systems. More particularly, embodiments of the present
disclosure relate to generating and provisioning Key Performance Indicators (KPIs)
of a network.
BACKGROUND
[0002] The following description of the related art is intended to provide
background information pertaining to the field of the disclosure. This section may
include certain aspects of the art that may be related to various features of the
present disclosure. However, it should be appreciated that this section is used only
to enhance the understanding of the reader with respect to the present disclosure,
and not as admissions of the prior art.
[0003] Network performance management systems typically track network
elements and data from network monitoring tools and then combine and process
such data to determine key performance indicators (KPI) of the network. Further,
integrated Performance Management Systems provide the means to visualize the
network performance data so that network operators and other relevant stakeholders
are able to identify the service quality of the overall network, and individual/
grouped network elements. By having an overall as well as detailed view of the
network performance, the network operators can detect, diagnose and remedy
actual service issues, as well as predict potential service issues or failures in the
network and take precautionary measures accordingly.
[0004] The integrated performance management system comprises an integrated
performance management engine and a key performance indicator (KPI) engine.
The integrated performance management system is designed to efficiently gather
and process performance counter data from various data sources. Depending on the
required aggregation, the network performance data is stored in a Distributed Data
Lake. This system is responsible for the comprehensive reporting and visualization
of the performance counter data, providing valuable insights into the network's
performance. Additionally, the Integrated Performance Management System takes
charge of managing the KPIs for all network elements. The Performance
Management Engine collects and processes counters from different data sources,
which are then utilized by the KPI Engine to calculate the KPIs. The KPIs are
segregated based on the necessary aggregation and stored in the Distributed Data
Lake. This component of the system is responsible for the reporting and
visualization of the KPI data, enabling effective monitoring and analysis of the
network's key performance indicators.
[0005] The KPIs provide metrics such as call drop rate, call set up time and voice
and video quality. To provide complex and advanced KPI metrics such as average
holding time in duration of call and to check pattern of KPIs in a month (Aging
KPI), there is a need to add operations and functions in the formula of KPIs to obtain
an advanced KPI. The KPIs known in the art, which were made without using such
operations and functions, lead to offline report generation, which is time-consuming
and lacks automation in report generation.
[0006] Thus, there exists an imperative need in the art to provide a solution that can
overcome these and other limitations of the existing solutions.
SUMMARY
[0007] This section is provided to introduce certain aspects of the present disclosure
in a simplified form that are further described below in the detailed description.
This summary is not intended to identify the key features or the scope of the claimed
subject matter.
[0008] An aspect of the present disclosure may relate to a method for generating
and provisioning a Key Performance Indicator (KPI). The method comprises
receiving, by a transceiver unit from a User Equipment (UE), a Key Performance
Indicator (KPI) provisioning request. The KPI provisioning request comprises a list
of KPI parameters associated with a network. The method further comprises
extracting, by a processing unit, at least one of a plurality of KPI parameters from
the received list of KPI parameters. Furthermore, the method comprises generating,
by the processing unit, a plurality of updated KPI parameters based on the extracted
at least one of the plurality of KPI parameters. The plurality of updated KPI
parameters is generated based on a set of pre-defined network policies applied to
the extracted at least one of the plurality of KPI parameters, the set of pre-defined
policies comprising one of an inverse function, a mode function, and an erlang
function.
[0009] In an exemplary aspect of the present disclosure, the method further
comprises receiving, by the transceiver unit, the KPI provisioning request from the
UE via a load balancer.
[0010] In an exemplary aspect of the present disclosure, the load balancer is
configured to receive the KPI provisioning request from at least one of a plurality
of UEs in a round-robin scheduling.
[0011] In an exemplary aspect of the present disclosure, the method further
includes receiving, by the transceiver unit, the KPI provisioning request during one
of a plurality of available time intervals of the system, wherein the plurality of
available time intervals is determined by the load balancer.
[0012] In an exemplary aspect of the present disclosure, the plurality of time
intervals is determined by the load balancer based on at least one or more network
events associated with the network, wherein the one or more network events
comprise at least one of a call drop rate event, a call set up time event, a voice
quality event and a video quality event.
[0013] In an exemplary aspect of the present disclosure, based on at least one of the
plurality of generated updated KPI parameters, the method further includes
generating, by the processing unit, an updated KPI list. The method further includes
transmitting, by the transceiver unit, the updated KPI list to at least one of the
plurality of UEs.
[0014] Another aspect of the present disclosure may relate to a system for
generating and provisioning a Key Performance Indicator (KPI). The system
comprises a transceiver unit. The transceiver unit is configured to receive, from a
User Equipment (UE), a Key Performance Indicator (KPI) provisioning request.
The KPI provisioning request comprises a list of KPI parameters associated with a
network. The system further comprises a processing unit connected at least with the
transceiver unit. The processing unit is configured to extract at least one of a
plurality of KPI parameters from the received list of KPI parameters. The
processing unit is further configured to generate a plurality of updated KPI
parameters based on the extracted at least one of the plurality of KPI parameters.
The plurality of updated KPI parameters is generated based on a set of pre-defined
network policies applied to the extracted at least one of the plurality of KPI
parameters, the set of pre-defined policies comprising one of an inverse function, a
mode function, and an erlang function.
[0015] Yet another aspect of the present disclosure may relate to a user equipment
(UE). The UE comprises a transceiver unit configured to transmit, to a system, a Key
Performance Indicator (KPI) provisioning request. The KPI provisioning request
comprises a list of KPI parameters associated with a network. The
transceiver unit of the UE is further configured to receive, from the system, a plurality of updated
KPI parameters. The plurality of updated KPI parameters is generated by the system
based on the extracted at least one of the plurality of KPI parameters from the list
of KPI parameters included in the KPI provisioning request. The plurality of
updated KPI parameters is generated by the system based on a set of pre-defined
network policies applied to the extracted at least one of the plurality of KPI
parameters, the set of pre-defined policies comprising one of an inverse function, a
mode function, and an erlang function.
[0016] Yet another aspect of the present disclosure may relate to a non-transitory
computer readable storage medium storing instructions for generating and
provisioning a Key Performance Indicator (KPI), the instructions including
executable code which, when executed by one or more units of a system, causes a
transceiver unit to receive, from a User Equipment (UE), a Key Performance
Indicator (KPI) provisioning request. The KPI provisioning request comprises a list
of KPI parameters associated with a network. The instructions, when executed, further
cause a processing unit to extract at least one of a plurality of KPI parameters from
the received list of KPI parameters. The instructions, when executed, further cause
the processing unit to generate a plurality of updated KPI parameters based on the
extracted at least one of the plurality of KPI parameters. The plurality of updated
KPI parameters is generated based on a set of pre-defined network policies applied
to the extracted at least one of the plurality of KPI parameters, the set of pre-defined
policies comprising one of an inverse function, a mode function, and an erlang
function.
OBJECTS OF THE INVENTION
[0017] Some of the objects of the present disclosure, which at least one
embodiment disclosed herein satisfies are listed herein below.
[0018] It is an object of the present disclosure to automate the analysis and
generation of the KPI parameters by providing advanced KPI formulas.
[0019] It is an object of the present disclosure to increase network effectiveness as
the advanced KPI formula may help to determine how much traffic can be handled in a
network.
[0020] It is another object of the present disclosure to provide an automated
analysis of the network, as a KPI made by using complex operations such as an
erlang function and an inverse function provides the flexibility of automated report
generation, which was previously done offline.
[0021] It is an object of the present disclosure to optimize network performance
and resource allocation.
DESCRIPTION OF THE DRAWINGS
[0022] The accompanying drawings, which are incorporated herein, and constitute
a part of this disclosure, illustrate exemplary embodiments of the disclosed methods
and systems in which like reference numerals refer to the same parts throughout the
different drawings. Components in the drawings are not necessarily to scale,
emphasis instead being placed upon clearly illustrating the principles of the present
disclosure. Also, the embodiments shown in the figures are not to be construed as
limiting the disclosure, but the possible variants of the method and system
according to the disclosure are illustrated herein to highlight the advantages of the
disclosure. It will be appreciated by those skilled in the art that disclosure of such
drawings includes disclosure of electrical components or circuitry commonly used
to implement such components.
[0023] FIG. 1 illustrates an exemplary block diagram of a network performance
management system.
[0024] FIG. 2 illustrates an exemplary block diagram of a computing device upon
which the features of the present disclosure may be implemented in accordance with
exemplary implementation of the present disclosure.
[0025] FIG. 3 illustrates an exemplary block diagram of a system for generating
and provisioning a Key Performance Indicator (KPI), in accordance with exemplary
implementations of the present disclosure.
[0026] FIG. 4 illustrates a method flow diagram for generating and provisioning a
Key Performance Indicator (KPI), in accordance with exemplary implementations
of the present disclosure.
[0027] FIG. 5 illustrates an exemplary implementation of the method for generating
and provisioning a Key Performance Indicator (KPI), in accordance with exemplary
implementations of the present disclosure.
[0028] The foregoing shall be more apparent from the following more detailed
description of the disclosure.
DETAILED DESCRIPTION
[0029] In the following description, for the purposes of explanation, various
specific details are set forth in order to provide a thorough understanding of
embodiments of the present disclosure. It will be apparent, however, that
embodiments of the present disclosure may be practiced without these specific
details. Several features described hereafter may each be used independently of one
another or with any combination of other features. An individual feature may not
address any of the problems discussed above or might address only some of the
problems discussed above.
[0030] The ensuing description provides exemplary embodiments only, and is not
intended to limit the scope, applicability, or configuration of the disclosure. Rather,
the ensuing description of the exemplary embodiments will provide those skilled in
the art with an enabling description for implementing an exemplary embodiment.
It should be understood that various changes may be made in the function and
arrangement of elements without departing from the spirit and scope of the
disclosure as set forth.
[0031] Specific details are given in the following description to provide a thorough
understanding of the embodiments. However, it will be understood by one of
ordinary skill in the art that the embodiments may be practiced without these
specific details. For example, circuits, systems, processes, and other components
may be shown as components in block diagram form in order not to obscure the
embodiments in unnecessary detail.
[0032] Also, it is noted that individual embodiments may be described as a process
which is depicted as a flowchart, a flow diagram, a data flow diagram, a structure
diagram, or a block diagram. Although a flowchart may describe the operations as
a sequential process, many of the operations may be performed in parallel or
concurrently. In addition, the order of the operations may be re-arranged. A process
is terminated when its operations are completed but could have additional steps not
included in a figure.
[0033] The word “exemplary” and/or “demonstrative” is used herein to mean
serving as an example, instance, or illustration. For the avoidance of doubt, the
subject matter disclosed herein is not limited by such examples. In addition, any
aspect or design described herein as “exemplary” and/or “demonstrative” is not
necessarily to be construed as preferred or advantageous over other aspects or
designs, nor is it meant to preclude equivalent exemplary structures and techniques
known to those of ordinary skill in the art. Furthermore, to the extent that the terms
“includes,” “has,” “contains,” and other similar words are used in either the detailed
description or the claims, such terms are intended to be inclusive—in a manner
similar to the term “comprising” as an open transition word—without precluding
any additional or other elements.
[0034] As used herein, a “processing unit” or “processor” or “operating processor”
includes one or more processors, wherein processor refers to any logic circuitry for
processing instructions. A processor may be a general-purpose processor, a special
purpose processor, a conventional processor, a digital signal processor, a plurality
of microprocessors, one or more microprocessors in association with a Digital
Signal Processing (DSP) core, a controller, a microcontroller, Application Specific
Integrated Circuits, Field Programmable Gate Array circuits, any other type of
integrated circuits, etc. The processor may perform signal coding, data processing,
input/output processing, and/or any other functionality that enables the working of
the system according to the present disclosure. More specifically, the processor or
processing unit is a hardware processor.
[0035] As used herein, “a user equipment”, “a user device”, “a smart-user-device”,
“a smart-device”, “an electronic device”, “a mobile device”, “a handheld device”,
“a wireless communication device”, “a mobile communication device”, “a
communication device” may be any electrical, electronic and/or computing device
or equipment, capable of implementing the features of the present disclosure. The
user equipment/device may include, but is not limited to, a mobile phone, smart
phone, laptop, a general-purpose computer, desktop, personal digital assistant,
tablet computer, wearable device or any other computing device which is capable
of implementing the features of the present disclosure. Also, the user device may
contain at least one input means configured to receive an input from at least one of
a transceiver unit, a processing unit, a storage unit, a detection unit and any other
such unit(s) which are required to implement the features of the present disclosure.
[0036] As used herein, “storage unit” or “memory unit” refers to a machine or
computer-readable medium including any mechanism for storing information in a
form readable by a computer or similar machine. For example, a computer-readable
medium includes read-only memory (“ROM”), random access memory (“RAM”),
magnetic disk storage media, optical storage media, flash memory devices or other
types of machine-accessible storage media. The storage unit stores at least the data
that may be required by one or more units of the system to perform their respective
functions.
[0037] As used herein “interface” or “user interface” refers to a shared boundary
across which two or more separate components of a system exchange information
or data. The interface may also refer to a set of rules or protocols that define
communication or interaction of one or more modules or one or more units with
each other, which also includes the methods, functions, or procedures that may be
called.
[0038] All modules, units, components used herein, unless explicitly excluded
herein, may be software modules or hardware processors, the processors being a
general-purpose processor, a special purpose processor, a conventional processor,
a digital signal processor (DSP), a plurality of microprocessors, one or more
microprocessors in association with a DSP core, a controller, a microcontroller,
Application Specific Integrated Circuits (ASIC), Field Programmable Gate Array
circuits (FPGA), any other type of integrated circuits, etc.
[0039] As used herein the transceiver unit includes at least one receiver and at least
one transmitter configured respectively for receiving and transmitting data, signals,
information or a combination thereof between units/components within the system
and/or connected with the system.
[0040] As discussed in the background section, the current known solutions have
several shortcomings. The KPIs provide metrics such as call drop rate, call set up
time and voice and video quality. To provide complex and advanced KPI metrics
such as average holding time in duration of call and to check pattern of KPI in a
month (Aging KPI), there is a need to add operations and functions in the formula
of KPI to obtain an advanced KPI. The KPIs known in the art, which were made
without using such operations and functions, lead to offline report generation, which
is time-consuming and lacks automation in report generation. The present disclosure
aims to overcome the above-mentioned and other existing problems in this field of
technology by providing a method and a system of generating and provisioning a
Key Performance Indicator (KPI).
[0041] FIG. 1 illustrates an exemplary block diagram of a network performance
management system [100], in accordance with the exemplary embodiments of the
present invention. Referring to FIG. 1, the network performance management
system [100] comprises various sub-systems such as: an integrated performance
management (IPM) system [102], a normalization layer [104], a computation layer
[106], an anomaly detection layer [108], a streaming engine [110], a load balancer
[112], an operation and management system [114], an API gateway system [116],
an analysis engine [118], a parallel computing framework [120], a forecasting
engine [122], a distributed file system [124], a mapping layer [126], a distributed
data lake [128], a scheduling layer [130], a reporting engine [132], a message broker
[134], a graph layer [136], a caching layer [138], a service quality manager [140],
and a correlation engine [142]. Exemplary connections between the above-mentioned
subsystems are also as shown in FIG. 1. However, it will be appreciated
by those skilled in the art that the present disclosure is not limited to the connections
shown in the diagram, and any other connections between the different subsystems
that are needed to realize the effects of the network performance management
system [100] are within the scope of this disclosure.
[0042] Further, the integrated performance management (IPM) system [102]
comprises a performance management engine [150], a Key Performance Indicator
(KPI) Engine [152], and an ingestion layer [154].
[0043] The following section describes some of the different sub-systems of the
system [100]:
[0044] Performance Management Engine [150]: The Performance Management
engine [150] is a crucial component of the integrated performance management
system [102], and is responsible for collecting, processing, and managing
performance counter data from various data sources within the network. The
gathered data includes metrics such as connection speed, latency, data transfer
rates, etc. This raw data is then processed and aggregated as required, forming a
comprehensive overview of network performance. The processed information is
then stored in the Distributed Data Lake [128], which is a centralized, scalable, and
flexible storage medium, allowing for easy access and further analysis. The
Performance Management engine [150] also enables the reporting and visualization
of this performance counter data, thus providing network administrators with a
real-time, insightful view of the network's operation. Through these visualizations,
operators can monitor the network's performance, identify potential issues, and
make informed decisions to enhance network efficiency and reliability.
[0045] Key Performance Indicator (KPI) Engine [152]: The Key Performance
Indicator (KPI) Engine [152] is a dedicated component tasked with managing the
KPIs of all the network elements. It uses the performance counters, which are
collected and processed by the Performance Management engine [150] from
various data sources. These counters, which indicate crucial performance data, are
harnessed by the KPI engine [152] to calculate essential KPIs. These KPIs may
include, without limitation, data throughput, latency, packet loss rate, etc. Once
the KPIs are computed, they are segregated based on the aggregation requirements,
offering a multi-layered and detailed understanding of network performance. The
processed KPI data is then stored in the Distributed Data Lake [128], ensuring a
highly accessible, centralized, and scalable data repository for further analysis and
utilization. Similar to the Performance Management engine [150], the KPI engine
[152] is also responsible for reporting and visualization of KPI data. This
functionality allows network administrators to gain a comprehensive, visual
understanding of the network's performance, thus supporting informed
decision-making and efficient network management.
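
By way of a non-limiting illustration only, the following sketch shows how a KPI such as the call drop rate might be computed from performance counters; the counter names and the formula are assumptions for illustration, as the disclosure does not fix any particular KPI formula.

```python
# Illustrative sketch only: counter names and the call-drop-rate formula
# below are assumptions, not the disclosure's own definitions.

def call_drop_rate(counters: dict) -> float:
    """Call drop rate KPI as a percentage of attempted calls."""
    attempted = counters.get("calls_attempted", 0)
    dropped = counters.get("calls_dropped", 0)
    return 100.0 * dropped / attempted if attempted else 0.0

# Counters as they might arrive from the Performance Management engine [150]
counters = {"calls_attempted": 12_500, "calls_dropped": 85}
print(f"Call drop rate: {call_drop_rate(counters):.2f}%")  # -> 0.68%
```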
[0046] Ingestion layer [154]: The Ingestion layer [154] forms a key part of the
Integrated Performance Management system [102], and functions to establish an
environment capable of handling diverse types of incoming data. This data may
include, without limitation, Alarms, Counters, Configuration parameters, Call
Detail Records (CDRs), Infrastructure metrics, Logs, and Inventory data, all of
which are crucial for maintaining and optimizing the performance of the network.
Upon receiving the data, the Ingestion layer [154] validates the integrity and
correctness of the data to ensure that the data is fit for further processing. Following
the step of validation, the data is routed to various components of the system [100],
including the Normalization layer [104], the Streaming Engine [110], the analysis
engine [118], and the Message Broker [134]. The destination is chosen based on
where the data is required for further analytics and/or processing. By serving as the
first point of contact for incoming data, the Ingestion layer [154] plays a vital role
in managing the data flow within the system [100], thus supporting comprehensive
and accurate network performance analysis.
[0047] Normalization layer [104]: The Normalization Layer [104] serves to
standardize, enrich, and store data into the appropriate databases. The normalization
layer [104] receives data from the ingestion layer [154] and adjusts it to a common
standard, making it easier to compare and analyse. This process of "normalization"
reduces redundancy and improves data integrity. Upon completion of
normalization, the data is stored in various databases like the Distributed Data Lake
[128], Caching Layer [138], and Graph Layer [136], depending on the intended use
for the data. The choice of storage determines how the data can be accessed and
used in the future. Additionally, the Normalization Layer [104] produces data for
the Message Broker [134], which is configured to enable communication between
different parts of the network performance management system [100] through the
exchange of data messages. Moreover, the Normalization Layer [104] supplies the
standardized data to several other subsystems. These include the Analysis Engine
[118] for detailed data examination, the Correlation Engine [142] for detecting
relationships among various data elements, the Service Quality Manager [140] for
maintaining and improving the quality of services, and the Streaming Engine [110]
for processing real-time data streams. These subsystems depend on the normalized
data to perform their operations effectively and accurately.
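
A minimal sketch of the normalization step follows, assuming hypothetical vendor-specific counter names and a hypothetical common schema; the disclosure does not specify either.

```python
# Hypothetical mapping from vendor-specific counter names to a common
# standard; both sides of the mapping are assumptions for illustration.
VENDOR_FIELD_MAP = {
    "pmErabDrop": "calls_dropped",
    "pmErabAttempt": "calls_attempted",
}

def normalize(record: dict) -> dict:
    """Rename vendor-specific fields to the common standard, keeping the rest."""
    return {VENDOR_FIELD_MAP.get(key, key): value for key, value in record.items()}

raw = {"pmErabDrop": 85, "pmErabAttempt": 12_500, "cell_id": "C-101"}
print(normalize(raw))
# {'calls_dropped': 85, 'calls_attempted': 12500, 'cell_id': 'C-101'}
```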
[0048] Caching layer [138]: The Caching Layer [138] plays a significant role in
data management and optimization in the network performance management
system [100]. During the initial phase, the Normalization Layer [104] processes
incoming raw data to create a standardized format, enhancing consistency and
comparability. The Normalization Layer [104] then inserts this normalized data into
various databases, such as the Caching Layer [138]. The Caching Layer [138] is a
high-speed data storage layer, which temporarily holds data that is likely to be
reused, to improve speed and performance of data retrieval. By storing frequently
accessed data in the Caching Layer [138], the network performance management
system [100] significantly reduces the time taken to access this data, improving
overall efficiency and performance of the network performance management
system [100]. Further, the Caching Layer [138] serves as an intermediate layer
between the data sources and other sub-systems, such as the Analysis Engine [118],
the Correlation Engine [142], the Service Quality Manager [140], and the Streaming
Engine [110]. The Normalization Layer [104] is responsible for providing these
sub-systems with the necessary data from the Caching Layer [138].
[0049] Computation layer [106]: The Computation Layer [106] serves as the main
hub for complex data processing tasks. In the initial stages, raw data is gathered,
normalized, and enriched by the Normalization Layer [104]. The Normalization
Layer [104] then inserts this normalized data into multiple databases including the
Distributed Data Lake [128], the Caching Layer [138], and the Graph Layer [136],
and also feeds it to the Message Broker [134]. Within the Computation Layer [106],
several powerful sub-systems such as the Analysis Engine [118], the Correlation
Engine [142], the Service Quality Manager [140], and the Streaming Engine [110],
utilize the normalized data. These systems are designed to execute various data
processing tasks. The Analysis Engine [118] performs in-depth data analytics to
generate insights from the data. The Correlation Engine [142] identifies and
understands the relations and patterns within the data. The Service Quality Manager
[140] assesses and ensures the quality of the services. The Streaming Engine [110]
processes and analyses the real-time data feeds. In essence, the Computation Layer
[106] is where all major computation and data processing tasks occur. It uses the
normalized data provided by the Normalization Layer [104], processing it to
generate useful insights, ensure service quality, understand data patterns, and
facilitate real-time data analytics.
[0050] Message broker [134]: The Message Broker [134] operates as a
publish-subscribe messaging system. It orchestrates and maintains the real-time flow of data
from various sources and applications. At its core, the Message Broker [134]
facilitates communication between data producers and consumers through
message-based topics. This creates an advanced platform for contemporary
distributed applications. With the ability to accommodate a large number of
permanent or ad-hoc consumers, the Message Broker [134] demonstrates immense
flexibility in managing data streams. Moreover, the message broker [134] leverages
the filesystem for storage and caching, boosting its speed and efficiency. The design
of the Message Broker [134] is centred around reliability and is engineered to be
fault-tolerant and to mitigate data loss, ensuring the integrity and consistency of the
data. With its robust design and capabilities, the Message Broker [134] forms a
critical component in managing and delivering real-time data in the network
performance management system [100].
[0051] Graph layer [136]: The Graph Layer [136], serving as the Relationship
Modeler, plays a pivotal role in the network performance management system
[100]. It can model a variety of data types, including alarm, counter, configuration,
CDR data, Infra-metric data, Probe Data, and Inventory data. Equipped with the
capability to establish relationships among diverse types of data, the graph layer
[136] offers extensive modelling capabilities. For instance, the graph layer [136]
can model Alarm and Counter data, Vprobe and Alarm data, elucidating their
interrelationships. Moreover, the graph layer [136] is adept at processing steps
provided in the model and delivering the results to the requesting sub-system, such
as the Parallel Computing framework [120], Workflow Engine, Query Engine, the
Correlation engine [142], Performance Management Engine [150], or KPI Engine
[152]. With its powerful modelling and processing capabilities, the Graph Layer
[136] forms an essential part of the network performance management system
[100], enabling the processing and analysis of complex relationships between
various types of network data.
[0052] Scheduling layer [130]: The Scheduling Layer [130] is endowed with the
ability to execute tasks at predetermined intervals set according to user preferences.
A task might be an activity, such as performing a service call, an API call to another
microservice, the execution of an Elastic Search query, and storing its output in the
Distributed Data Lake [128] or Distributed File System [124] or sending it to
another micro-service. The versatility of the Scheduling Layer [130] extends to
facilitating graph traversals via the Mapping Layer [126] to execute tasks. This
crucial capability enables seamless and automated operations within the network
performance management system [100], ensuring that various tasks and services
are performed on schedule, without manual intervention, thereby enhancing the
efficiency and performance of the network performance management system [100].
Thus, the Scheduling Layer [130] orchestrates the systematic and periodic
execution of tasks.
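
As a minimal sketch, interval-based task execution may be realized as follows; the in-process loop below is an assumption for illustration, whereas the actual Scheduling Layer [130] may dispatch service calls, API calls, or queries and persist their output to the Distributed Data Lake [128] or Distributed File System [124].

```python
import time

def run_periodically(task, interval_seconds: float, iterations: int) -> None:
    """Execute `task` every `interval_seconds`, for a fixed number of iterations."""
    for _ in range(iterations):
        task()  # e.g. a service call or a query whose output is persisted
        time.sleep(interval_seconds)

# Hypothetical scheduled task: a periodic KPI export (interval shortened here)
run_periodically(lambda: print("executing scheduled KPI export"), 1.0, 3)
```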
[0053] Analysis Engine [118]: The Analysis Engine [118] is adapted to provide an
environment where users can configure and execute workflows for a wide array of
use-cases. This facility aids in the debugging process and facilitates a better
understanding of call flows. With the Analysis Engine [118], users can perform
queries on data sourced from various subsystems or external gateways. This
capability allows for an in-depth overview of data and aids in pinpointing issues.
The flexibility of the analysis engine [118] allows users to configure specific
policies aimed at identifying anomalies within the data. When these policies detect
abnormal behaviour or policy breaches, the analysis engine [118] sends
notifications, ensuring swift and responsive action. In essence, the Analysis Engine
[118] provides a robust analytical environment for systematic data interrogation,
facilitating efficient problem identification and resolution.
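
By way of a hedged illustration, a user-configured policy check might resemble the following sketch; the threshold-based rule and the notification hook are assumptions, as the disclosure does not define the policy structure.

```python
def check_policy(kpi_value: float, threshold: float, notify) -> bool:
    """Return True and send a notification when the KPI breaches the policy."""
    breached = kpi_value > threshold
    if breached:
        notify(f"KPI value {kpi_value} exceeded threshold {threshold}")
    return breached

# Hypothetical usage: flag a call drop rate above a 2% policy threshold
check_policy(2.4, 2.0, notify=print)
```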
[0054] Parallel Computing Framework [120]: The Parallel Computing
Framework [120] is adapted to provide a user-friendly yet advanced platform for
executing computing tasks in parallel. The parallel computing framework [120]
showcases both scalability and fault tolerance, crucial for managing vast amounts
of data. Users can input data via the Distributed File System (DFS) [124] or
15 Distributed Data Lake (DDL) [128]. The parallel computing framework [120]
supports the creation of task chains by interfacing with the Service Configuration
Management (SCM) Sub-System. Each task in a workflow is executed sequentially,
but multiple chains can be executed simultaneously, optimizing processing time.
To accommodate varying task requirements, the parallel computing framework
[120] supports the allocation of specific host lists for different computing tasks. The
Parallel Computing Framework [120] is an essential tool for enhancing processing
speeds and efficiently managing computing resources.
[0055] Distributed File System [124]: The Distributed File System (DFS) [124] is
adapted to enable multiple clients to access and interact with data seamlessly. The
DFS [124] is designed to manage data files that are partitioned into numerous
segments known as chunks. In the context of a network with vast data, the DFS
[124] effectively allows for the distribution of data across multiple nodes. The DFS
[124] architecture enhances both the scalability and redundancy of the network
performance management system [100], ensuring optimal performance even with
large data sets. The DFS [124] also supports diverse operations, facilitating the
flexible interaction with and manipulation of data.
[0056] Load balancer [112]: The Load Balancer (LB) [112] is configured to
efficiently distribute incoming network traffic across a multitude of backend servers
or microservices. The LB [112] ensures even distribution of data requests, leading
to optimized server resource utilization, reduced latency, and improved overall
performance of the network performance management system [100]. The LB [112]
implements various routing strategies to manage traffic, including round-robin
scheduling, header-based request dispatch, and context-based request dispatch.
Round-robin scheduling is a simple method of rotating requests evenly across
available servers. In contrast, header and context-based dispatching allow for more
intelligent, request-specific routing. Header-based dispatching routes requests
based on data contained within the headers of the Hypertext Transfer Protocol
(HTTP) requests. Context-based dispatching routes traffic based on the contextual
information about the incoming requests. For example, in an event-driven
architecture, the LB [112] manages event and event acknowledgments, forwarding
requests or responses to the specific microservice that has requested the event.
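
The round-robin and header-based dispatch strategies described above may be sketched as follows; the backend names and the routing header are hypothetical, and this is an illustration rather than the LB [112] implementation.

```python
from itertools import cycle

# Hypothetical pool of backend microservices served in round-robin order
backends = cycle(["kpi-service-1", "kpi-service-2", "kpi-service-3"])

def dispatch(request: dict) -> str:
    """Header-based dispatch when a routing header is present, else round-robin."""
    target = request.get("headers", {}).get("X-Target-Service")  # assumed header
    return target if target else next(backends)

print(dispatch({"headers": {}}))  # -> kpi-service-1
print(dispatch({"headers": {}}))  # -> kpi-service-2
print(dispatch({"headers": {"X-Target-Service": "kpi-service-9"}}))  # honours header
```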
[0057] Streaming Engine [110]: The Streaming Engine [110], also referred to as
Stream Analytics, is a critical subsystem configured for high-speed data pipelining
to the User Interface (UI). The objective of the streaming engine [110] is to ensure
real-time data processing and delivery. Data is received from various connected
subsystems and processed in real-time by the Streaming Engine [110]. After
processing, the data is streamed to the UI, fostering rapid decision-making and
responses. The Streaming Engine [110] cooperates with the Distributed Data Lake
[128], the Message Broker [134], and the Caching Layer [138] to provide seamless,
real-time data flow. The streaming engine [110] is designed to perform required
computations on incoming data instantly, ensuring that the most relevant and
up-to-date information is always available at the UI. Furthermore, the streaming engine
[110] can also retrieve data from the Distributed Data Lake [128], the Message
Broker [134], and the Caching Layer [138] as per the requirement and deliver it to
the UI in real-time. The goal of the streaming engine [110] is to provide fast,
reliable, and efficient data streaming.
[0058] Reporting Engine [132]: The Reporting Engine [132] is configured to
dynamically create report layouts of API data, catered to individual client
requirements, and deliver these reports via the Notification Engine. The reporting
engine [132] serves as the primary interface for creating custom reports based on
the data visualized through the client's dashboard. The dashboard, created by the
client through the User Interface (UI), provides the basis for the reporting engine
[132] to process and compile data from various interfaces. The main output of the
Reporting Engine [132] is a detailed report generated in Excel format. The capacity
of the Reporting Engine [132] to parse data from different subsystem interfaces,
process it according to the client's specifications and requirements, and generate a
comprehensive report makes it an essential component of the network performance
management system [100]. Furthermore, the Reporting Engine [132] integrates
seamlessly with the Notification Engine to ensure timely and efficient delivery of
reports to clients via email.
[0059] FIG. 2 illustrates an exemplary block diagram of a computing device [200]
upon which the features of the present disclosure may be implemented in
accordance with exemplary implementation of the present disclosure. In an
implementation, the computing device [200] may also implement a method for
generating and provisioning a Key Performance Indicator (KPI) utilising the
system. In another implementation, the computing device [200] itself implements
the method for generating and provisioning a Key Performance Indicator (KPI) using
one or more units configured within the computing device [200], wherein said one
or more units are capable of implementing the features as disclosed in the present
disclosure.
[0060] The computing device [200] may include a bus [202] or other
communication mechanism for communicating information, and a hardware
processor [204] coupled with bus [202] for processing information. The hardware
processor [204] may be, for example, a general-purpose microprocessor. The
computing device [200] may also include a main memory [206], such as a
random-access memory (RAM), or other dynamic storage device, coupled to the bus [202]
for storing information and instructions to be executed by the processor [204]. The
main memory [206] also may be used for storing temporary variables or other
intermediate information during execution of the instructions to be executed by the
processor [204]. Such instructions, when stored in non-transitory storage media
accessible to the processor [204], render the computing device [200] into a
special-purpose machine that is customized to perform the operations specified in the
instructions. The computing device [200] further includes a read only memory
(ROM) [208] or other static storage device coupled to the bus [202] for storing static
information and instructions for the processor [204].
[0061] A storage device [210], such as a magnetic disk, optical disk, or solid-state
drive is provided and coupled to the bus [202] for storing information and
instructions. The computing device [200] may be coupled via the bus [202] to a
display [212], such as a cathode ray tube (CRT), Liquid crystal Display (LCD),
Light Emitting Diode (LED) display, Organic LED (OLED) display, etc. for
displaying information to a computer user. An input device [214], including
alphanumeric and other keys, touch screen input means, etc. may be coupled to the
bus [202] for communicating information and command selections to the processor
[204]. Another type of user input device may be a cursor controller [216], such as
a mouse, a trackball, or cursor direction keys, for communicating direction
information and command selections to the processor [204], and for controlling
cursor movement on the display [212]. The input device typically has two degrees
of freedom in two axes, a first axis (e.g., x) and a second axis (e.g., y), that allow
the device to specify positions in a plane.
[0062] The computing device [200] may implement the techniques described
herein using customized hard-wired logic, one or more ASICs or FPGAs, firmware
and/or program logic which in combination with the computing device [200] causes
or programs the computing device [200] to be a special-purpose machine.
According to one implementation, the techniques herein are performed by the
computing device [200] in response to the processor [204] executing one or more
sequences of one or more instructions contained in the main memory [206]. Such
instructions may be read into the main memory [206] from another storage medium,
such as the storage device [210]. Execution of the sequences of instructions
contained in the main memory [206] causes the processor [204] to perform the
process steps described herein. In alternative implementations of the present
disclosure, hard-wired circuitry may be used in place of or in combination with
software instructions.
[0063] The computing device [200] may also include a communication interface
[218] coupled to the bus [202]. The communication interface [218] provides a
two-way data communication coupling to a network link [220] that is connected to a
local network [222]. For example, the communication interface [218] may be an
integrated services digital network (ISDN) card, cable modem, satellite modem, or
a modem to provide a data communication connection to a corresponding type of
telephone line. As another example, the communication interface [218] may be a
local area network (LAN) card to provide a data communication connection to a
compatible LAN. Wireless links may also be implemented. In any such
implementation, the communication interface [218] sends and receives electrical,
electromagnetic or optical signals that carry digital data streams representing
various types of information.
[0064] The computing device [200] can send messages and receive data, including
program code, through the network(s), the network link [220] and the
communication interface [218]. In the Internet example, a server [230] might
transmit a requested code for an application program through the Internet [228], the
ISP [226], the local network [222], host [224] and the communication interface
[218]. The received code may be executed by the processor [204] as it is received,
and/or stored in the storage device [210], or other non-volatile storage for later
execution.
[0065] Further, in accordance with the present disclosure, it is to be acknowledged
that the functionality described for the various components/units can be
implemented interchangeably. While specific embodiments may disclose a
particular functionality of these units for clarity, it is recognized that various
configurations and combinations thereof are within the scope of the disclosure. The
functionality of specific units as disclosed in the disclosure should not be construed
as limiting the scope of the present disclosure. Consequently, alternative
arrangements and substitutions of units, provided they achieve the intended
functionality described herein, are considered to be encompassed within the scope
of the present disclosure.
[0066] The present disclosure is implemented by a system [300] (as shown in FIG.
3). In an implementation, the system [300] may be implemented on the computing
device [200] (as shown in FIG. 2). It is further noted that the computing device
[200] is able to perform the steps of a method [400] (as shown in FIG. 4).
[0067] Referring to FIG. 3, an exemplary block diagram of a system [300] for
generating and provisioning a Key Performance Indicator (KPI), is shown, in
accordance with the exemplary implementations of the present disclosure. The
system [300] comprises at least one transceiver unit [302], at least one processing
unit [304] and a Distributed File System (DFS) [124]. The system [300] is in
communication with a User Equipment [306]. Also, all of the components/ units of
the system [300] are assumed to be connected to each other unless otherwise
indicated below. As shown in the figures, all units shown within the system should
also be assumed to be connected to each other. Also, in FIG. 3 only a few units are
shown, however, the system [300] may comprise multiple such units or the system
[300] may comprise any number of said units, as required to implement the
features of the present disclosure. Further, in an implementation, the system [300]
may be present in a user device to implement the features of the present disclosure.
The system [300] may be a part of the user device / or may be independent of but
in communication with the user device (which may also be referred to herein as a UE). In
another implementation, the system [300] may reside in a server or a network entity.
In yet another implementation, the system [300] may reside partly in the server/
network entity and partly in the user device.
[0068] The system [300] is configured for generating and provisioning a Key
Performance Indicator (KPI), with the help of the interconnection between the
components/units of the system [300].
[0069] The transceiver unit [302] of the system [300] is configured to receive, from
a User Equipment (UE) [306], a Key Performance Indicator (KPI) provisioning
request. The KPI provisioning request comprises a list of KPI parameters associated
with a network. The KPI provisioning request may be related to a request for
generation of an advanced KPI formula/updated KPI parameter using functions like
the erlang function, the mode function, the inverse function, and the like. The
advanced KPI formula/updated KPI parameter may enable automated analysis of
the KPIs. The KPI provisioning request may be for a 5th Generation network, a 4th
Generation network, a 6th Generation network, and any other future generations of
network. In an implementation of the present disclosure, the list of KPI parameters
may include latency, packet loss, network availability, and the like. The latency
refers to the delay in time between sending a request and receiving a response. The
packet loss refers to the number of data packets lost in a communication. The
network availability refers to the duration of time when the network is available or
accessible to the user.
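
Purely as a non-limiting illustration, a KPI provisioning request might carry a payload of the following shape; the disclosure does not fix a wire format, so every field name below is an assumption.

```python
# Hypothetical shape of a KPI provisioning request from a UE [306]
kpi_provisioning_request = {
    "ue_id": "UE-306",
    "network": "5G",
    "kpi_parameters": ["latency", "packet_loss", "network_availability"],
    "requested_function": "erlang",  # one of: "inverse", "mode", "erlang"
}
```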
[0070] The transceiver unit [302] is further configured to receive the KPI
provisioning request from the UE [306] via a load balancer [308]. The load balancer
[308] receives the KPI provisioning request from at least one of a plurality of UEs
in a round-robin scheduling. The round-robin scheduling refers to when the load
balancer [308] receives the KPI provisioning request from at least one of the
plurality of UEs in a sequential manner. The round-robin scheduling assists in even
distribution of the KPI provisioning request to the load balancer [308].
[0071] The transceiver unit [302] is further configured to receive the KPI
provisioning request during one of a plurality of available time intervals of the
system. The plurality of available time intervals is determined by the load balancer
[308]. The plurality of time intervals is determined by the load balancer [308] based
on at least one or more network events associated with the network. The one or
more network events comprise at least one of a call drop rate event, a call set up
time event, a voice quality event and a video quality event. The call drop rate
event refers to the number of times a call is cut off before either party has ended the
call. The call set up time event refers to the duration of time required to establish
the call between the user and the network terminal. The voice quality event refers
to checking of the characteristics of voice like lagging, high frequency, low
frequency, and the like. The video quality event refers to checking of the quality of
video during a video call, like video quality, colour accuracy, frame rates, etc.
[0072] The system [300] further includes a processing unit [304] connected at least
with the transceiver unit [302]. The processing unit [304] is configured to extract at
least one of a plurality of KPI parameters from the received list of KPI parameters.
To extract, the processing unit [304] may check the purpose of the KPIs based on
the KPI provisioning request and extract the KPIs based on the relevance to the
purpose. For instance, the KPI provisioning request may be for the purpose of
increasing network availability, and accordingly, the processing unit [304] may
extract the network availability parameter KPIs. The processing unit [304] may
store the KPI parameters at the Distributed File System (DFS) [124] in the network.
In an implementation of the present solution, an Adaptive Management (AM) unit
[506] as shown in FIG. 5, may initiate storing a set of data associated with the KPI
parameters at the DFS [124]. The AM unit [506] platform leverages machine
learning to detect anomalous network patterns and create reports and alerts based
on these patterns. The troubleshooting helps in proactive root cause analysis and
resolution before the network symptoms start affecting operations.
[0073] The processing unit [304] is further configured to generate a plurality of
updated KPI parameters based on the extracted at least one of the plurality of KPI
parameters. The plurality of updated KPI parameters is generated based on a set of
pre-defined network policies applied to the extracted at least one of the plurality of
KPI parameters. The set of pre-defined network policies comprises one of an
inverse function, a mode function, and an erlang function. In an exemplary embodiment,
the plurality of updated KPI parameters may be generated based on a logical
function, an inverse function, a supporting function, and the like. The mode
function refers to a function that may provide statistical operation support to
compute the mode of the KPI parameters. Further, erlang is a unique formula in which
computation of data is performed by a numerical method. The erlang function is
used to calculate the total number of servers that may be required for a specific
volume of traffic. The logical function refers to a logical condition which may check
whether a specific condition is true or false. For each of the true and false conditions,
a further action may be defined in the KPI parameter. The supporting function refers to a
scenario where the KPI is computed for a particular time duration which was
computed using other supporting KPIs. The inverse KPI refers to monitoring an
average holding time in duration of a call and calculating the pattern of the KPI in
a predefined duration. In an exemplary implementation, the predefined duration
may be a month. In other exemplary implementations, the predefined duration may
be any duration, such as a week, a fortnight, a month, three months, a quarter, a year, etc.
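
Since the disclosure names the erlang function as a means of calculating the number of servers required for a given traffic volume but gives no formula, the classic Erlang B blocking formula is shown below as one plausible interpretation, together with the mode function over sampled KPI values; both are hedged sketches, not the disclosure's own definitions.

```python
from statistics import mode

def erlang_b(traffic_erlangs: float, servers: int) -> float:
    """Erlang B blocking probability, via the standard iterative recurrence."""
    b = 1.0
    for m in range(1, servers + 1):
        b = (traffic_erlangs * b) / (m + traffic_erlangs * b)
    return b

def servers_required(traffic_erlangs: float, target_blocking: float) -> int:
    """Smallest server count whose blocking probability meets the target."""
    m = 1
    while erlang_b(traffic_erlangs, m) > target_blocking:
        m += 1
    return m

print(servers_required(10.0, 0.01))   # 10 erlangs at 1% blocking -> 18 servers
print(mode([3, 5, 5, 7, 5, 3]))       # mode of sampled KPI values -> 5
```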
[0074] Based on at least one of the plurality of generated updated KPI parameters,
the processing unit [304] is configured to generate an updated KPI list. Further, the
transceiver unit [302] is configured to transmit the updated KPI list to at least one
of the plurality of UEs [306].
[0075] In an embodiment of the present disclosure, the plurality of updated KPI
parameters may be the advanced KPI formula. The advanced KPI formula includes
at least one of the set of policies added to the plurality of KPI parameters. For
instance, the advanced KPI formula comprises a function like the inverse function
in the KPI parameter. The inverse function may provide advanced KPI metrics for
average holding time in a call duration and compute the data of pattern of the KPI
in the predefined duration.
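
As an illustrative reading only, the inverse-function based advanced KPI for average holding time might be computed as the total call duration divided by the call count over the predefined duration; the figures below are hypothetical.

```python
def average_holding_time(total_call_seconds: float, call_count: int) -> float:
    """Average holding time per call over a predefined duration (seconds)."""
    return total_call_seconds / call_count if call_count else 0.0

# Hypothetical week-by-week data for one month: (total call seconds, call count)
weekly = [(181_440, 1_512), (172_800, 1_440), (190_080, 1_584), (176_256, 1_468)]
monthly_pattern = [round(average_holding_time(s, n), 1) for s, n in weekly]
print(monthly_pattern)  # -> [120.0, 120.0, 120.0, 120.1]
```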
[0076] Referring to FIG. 4, an exemplary method flow diagram [400] for
generating and provisioning a Key Performance Indicator (KPI), in accordance with
exemplary implementations of the present disclosure is shown. In an
implementation, the method [400] is performed by the system [300]. Further, in an
implementation, the system [300] may be present in a server device to implement
the features of the present disclosure. Also, as shown in FIG. 4, the method [400]
starts at step [402].
[0077] At step [404], the method includes receiving, by a transceiver unit [302]
from a User Equipment (UE) [306], a Key Performance Indicator (KPI)
provisioning request. The KPI provisioning request comprises a list of KPI
parameters associated with a network. The KPI provisioning request may be related
to a request for generation of an advanced KPI formula/updated KPI parameter
using functions like the erlang function, the mode function, the inverse function,
and the like. The advanced KPI formula/updated KPI parameter may enable
automated analysis of the KPIs. The KPI provisioning request may be for a 5th
Generation network, a 4th Generation network, a 6th Generation network, and any
other future generations of network. In an implementation of the present disclosure,
the list of KPI parameters may include latency, packet loss, network availability,
and the like. The latency refers to the delay in time between sending a request and
receiving a response. The packet loss refers to the number of data packets lost in a
communication. The network availability refers to the duration of time when the
network is available or accessible to the user. The method further includes
receiving, by the transceiver unit [302], the KPI provisioning request during one of
a plurality of available time intervals of the system. The plurality of available time
intervals is determined by the load balancer [308]. The plurality of time intervals is
determined by the load balancer [308] based on at least one or more network events
associated with the network. The one or more network events comprise at least one
of a call drop rate event, a call set up time event, a voice quality event and a video
quality event. The call drop rate event refers to the number of times a call is cut off
before either party has ended the call. The call set up time event refers to the
duration of time required to establish the call between the user and the network
terminal. The voice quality event refers to checking of the characteristics of voice
like lagging, high frequency, low frequency, and the like. The video quality event
refers to checking of the quality of video during a video call, like video quality,
colour accuracy, frame rates, etc.
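By way of a non-limiting illustration, one possible shape of such a KPI provisioning request is sketched below in Python; the specification does not define a wire format, so every key name shown is an assumption.

    # Hypothetical structure of a KPI provisioning request; all keys are
    # illustrative assumptions, as the specification defines no wire format.
    kpi_provisioning_request = {
        "network_generation": "5G",   # may equally be 4G, 6G, or a future generation
        "kpi_parameters": [
            "latency",                # delay between sending a request and a response
            "packet_loss",            # number of data packets lost in a communication
            "network_availability",   # duration for which the network is accessible
        ],
        "requested_policies": ["erlang", "mode", "inverse"],
    }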
[0078] Next, at step [406], the method comprises extracting, by a processing unit
[304], at least one of a plurality of KPI parameters from the received list of KPI
parameters. To extract, the processing unit [304] may check the purpose of the KPIs
based on the KPI provisioning request and extract the KPIs based on their relevance
to that purpose. For instance, if the KPI provisioning request is for the purpose of
increasing network availability, then the processing unit [304] may extract the
network availability KPI parameter. The processing unit [304] may store the KPI
parameter at the Distributed File System (DFS) [124] in the network. In an
implementation of the present disclosure, the Adaptive Management (AM) unit
[506] may initiate storing a set of data associated with the KPI parameter at the DFS
[124]. The AM unit [506] platform leverages machine learning to detect anomalous
network patterns and to create reports and alerts based on these patterns. Such
troubleshooting helps in proactive root cause analysis and resolution before the
network symptoms start affecting operations.
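By way of a non-limiting illustration, the purpose-based extraction described above may be sketched in Python as a simple relevance filter; the relevance mapping shown is a fabricated example and not part of the specification.

    # Hedged sketch of purpose-based KPI extraction; the RELEVANCE mapping
    # is an illustrative assumption.
    RELEVANCE = {
        "increase_network_availability": {"network_availability"},
        "reduce_latency": {"latency", "packet_loss"},
    }

    def extract_kpi_parameters(purpose: str, kpi_list: list[str]) -> list[str]:
        """Keep only the KPI parameters relevant to the stated purpose."""
        relevant = RELEVANCE.get(purpose, set())
        return [kpi for kpi in kpi_list if kpi in relevant]

    print(extract_kpi_parameters(
        "increase_network_availability",
        ["latency", "packet_loss", "network_availability"]))
    # -> ['network_availability']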
[0079] Next, at step [408], the method comprises generating, by the processing unit
[304], a plurality of updated KPI parameters based on the extracted at least one of
the plurality of KPI parameters. The plurality of updated KPI parameters is
generated based on a set of pre-defined network policies applied to the extracted at
least one of the plurality of KPI parameters, the set of pre-defined network policies
comprising one of an inverse function, a mode function, and an erlang function. In
an exemplary embodiment, the plurality of updated KPI parameters may be
generated based on a logical function, an inverse function, a supporting function,
and the like. The mode function refers to a function that may provide statistical
operation support to compute the mode of the KPI parameters. Further, erlang is a
formula in which the computation of data is performed by a numerical method. The
erlang function is used to calculate the total number of servers that may be required
for a specific volume of traffic. The logical function refers to a logical condition
which may check whether a specific condition is true or false; for the true and false
outcomes, further actions may be defined in the KPI parameter. The supporting
function refers to a scenario where the KPI for a particular time duration is
computed using other supporting KPIs. The inverse KPI refers to monitoring the
average holding time over the duration of a call and calculating the pattern of the
KPI over the predefined duration.
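By way of a non-limiting illustration, if the erlang policy is read as the classical Erlang B blocking formula (an assumption; the specification names the function but not its exact formulation), dimensioning servers for a given traffic volume may be sketched in Python as follows.

    # Erlang B blocking probability via the standard iterative recurrence:
    # B(E, 0) = 1;  B(E, m) = E*B(E, m-1) / (m + E*B(E, m-1)).
    def erlang_b(traffic_erlangs: float, servers: int) -> float:
        b = 1.0
        for m in range(1, servers + 1):
            b = (traffic_erlangs * b) / (m + traffic_erlangs * b)
        return b

    def servers_required(traffic_erlangs: float, target_blocking: float = 0.01) -> int:
        """Smallest server count keeping blocking at or below the target."""
        m = 1
        while erlang_b(traffic_erlangs, m) > target_blocking:
            m += 1
        return m

    # Example: servers needed to carry 20 Erlangs of traffic at 1% blocking.
    print(servers_required(20.0))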
[0080] The method further comprises receiving, by the transceiver unit [302], the
KPI provisioning request from the UE [306] via a load balancer [308]. The load
balancer [308] is configured to receive the KPI provisioning request from at least
one of a plurality of UEs in a round-robin scheduling. The round-robin scheduling
refers to the load balancer [308] receiving the KPI provisioning requests from the
plurality of UEs in a sequential manner. The round-robin scheduling assists in the
even distribution of KPI provisioning requests at the load balancer [308].
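By way of a non-limiting illustration, round-robin dispatch may be sketched in Python as below; the instance names are fabricated for the example.

    # Sequential (round-robin) dispatch of incoming KPI provisioning requests;
    # instance names are illustrative assumptions.
    from itertools import cycle

    instances = cycle(["am-instance-1", "am-instance-2", "am-instance-3"])

    def dispatch(request: dict) -> str:
        """Return the next instance in strict sequential order for this request."""
        return next(instances)

    for _ in range(4):
        print(dispatch({"kpi_parameters": ["latency"]}))
    # am-instance-1, am-instance-2, am-instance-3, am-instance-1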
[0081] Based on at least one of the plurality of generated updated KPI parameters,
the method further comprises generating, by the processing unit [304], an updated
KPI list. Further, based on at least one of the plurality of generated updated KPI
parameters, the method includes transmitting, by the transceiver unit [302], the
updated KPI list to at least one of the plurality of UEs.
[0082] In an embodiment of the present disclosure, the plurality of updated KPI
parameters may include the advanced KPI formula. The advanced KPI formula
comprises at least one of the set of policies added to the plurality of KPI parameters.
For instance, the advanced KPI formula comprises a function, such as the inverse
function, in the KPI parameter. The inverse function may provide advanced KPI
metrics for the average holding time of a call and compute the pattern of the KPI
over the predefined duration.
[0083] Thereafter, the method terminates at step [410].
[0084] Referring to FIG. 5, an exemplary sequence flow [500] of an
implementation of the method for generating and provisioning a KPI is illustrated.
The sequence flow [500] includes a User Interface (UI) server [504], the Load
Balancer [308], the Integrated Performance Management System [102], the
Adaptive Management (AM) unit [506], and the Distributed File System [124].
[0085] At step 1, a user [502] may send the KPI provisioning request to the UI
Server [504], i.e., the User Interface server [504] shown in FIG. 5. The KPI
provisioning request comprises a list of KPI parameters associated with a network.
The KPI provisioning request refers to a request to monitor KPIs, wherein the KPIs
are parameters to measure and evaluate the performance of the network. The KPI
provisioning request may be for a 5th Generation network, a 4th Generation
network, a 6th Generation network, or any other future generation of network. In
an implementation of the present disclosure, the list of KPI parameters may include
latency, packet loss, network availability, and the like. The latency refers to the
delay between sending a request and receiving a response. The packet loss refers to
the number of data packets lost in a communication. The network availability refers
to the duration of time when the network is available or accessible to the user [502].
[0086] At step 2, the UI server [504] may forward the KPI provisioning request to
the Load Balancer [308]. The load balancer [308] receives the KPI provisioning
request from at least one of a plurality of UEs in a round-robin scheduling. The
round-robin scheduling refers to the load balancer [308] receiving the KPI
provisioning requests from the plurality of UEs in a sequential manner. The
round-robin scheduling assists in the even distribution of KPI provisioning requests
at the load balancer [308].
[0087] At step 3, the AM unit [506] platform may extract the KPI parameter from
the User Interface Server [504] via the Load Balancer [308]. The AM unit [506]
platform leverages machine learning to detect anomalous network patterns and to
create reports and alerts based on these patterns. Such troubleshooting helps in
proactive root cause analysis and resolution before the network symptoms start
affecting operations. In an implementation of the present disclosure, the UI server
[504] transmits the KPI parameter received from the user [502] to the AM unit
[506] via the load balancer [308], wherein the load balancer [308] is configured to
identify the available instance for receiving the KPI parameter at the AM unit [506]
platform from the UI Server [504]. In an implementation of the present disclosure,
the UI server [504] transmits the KPI parameter received from the user [502] to the
AM unit [506] via the load balancer [308] in a pre-defined format, such as the
round-robin scheduling. In an implementation of the present disclosure, one or
more KPI parameters from the KPI parameter list may be determined at the UI
server [504] in the network based on the KPI provisioning request from the user
[502].
[0088] At step 4, the AM unit [506] platform may store the KPI parameters at the
Distributed File System (DFS) [124] in the network. In an implementation of the
present disclosure, the AM unit [506] platform may initiate storing a set of data
associated with the KPI parameters at the DFS [124]. Furthermore, in another
implementation of the present disclosure, the IPM [102] may store the set of data
associated with the KPI parameter at the DFS [124].
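By way of a non-limiting illustration, persisting a KPI parameter's data for later aggregation may be sketched in Python as below; a local path stands in for the Distributed File System [124], and the directory layout and file format are assumptions.

    # Hypothetical sketch of storing KPI data; a local directory stands in
    # for the DFS [124], and the JSON layout is an illustrative assumption.
    import json
    from pathlib import Path

    def store_kpi_data(dfs_root: str, kpi_name: str, samples: list[float]) -> Path:
        """Write one KPI's samples as a JSON record under the DFS root."""
        path = Path(dfs_root) / "kpi" / f"{kpi_name}.json"
        path.parent.mkdir(parents=True, exist_ok=True)
        path.write_text(json.dumps({"kpi": kpi_name, "samples": samples}))
        return path

    print(store_kpi_data("/tmp/dfs", "latency", [12.5, 14.1, 11.8]))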
[0089] At step 5, the AM unit [506] platform sends a plurality of updated KPI
parameters based on the KPI provisioning request. The plurality of updated KPI
parameters is generated based on a set of pre-defined network policies applied to
the extracted at least one of the plurality of KPI parameters. The set of pre-defined
network policies comprises one of an inverse function, a mode function, and an
erlang function. In an exemplary embodiment, the plurality of updated KPI
parameters may be generated based on a logical function, an inverse function, a
supporting function, and the like. The mode function refers to a function that may
provide statistical operation support to compute the mode of the KPI parameters.
Further, erlang is a formula in which the computation of data is performed by a
numerical method. The erlang function is used to calculate the total number of
servers that may be required for a specific volume of traffic. The logical function
refers to a logical condition which may check whether a specific condition is true
or false; for the true and false outcomes, further actions may be defined in the KPI
parameter. The supporting function refers to a scenario where the KPI for a
particular time duration is computed using other supporting KPIs. The inverse KPI
refers to monitoring the average holding time over the duration of a call and
calculating the pattern of the KPI over the predefined duration.
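By way of a non-limiting illustration, the mode policy may be sketched in Python using the standard library; the sample values are fabricated for the example.

    # Statistical mode of sampled KPI values, per the mode policy described
    # above; the latency samples are illustrative assumptions.
    from statistics import multimode

    latency_samples_ms = [12, 14, 12, 18, 12, 14]
    print(multimode(latency_samples_ms))  # -> [12], the most frequent value(s)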
[0090] At step 6, the AM unit [506] may send an acknowledgment of an updated
KPI parameter list to the UI Server [504]. Further, the plurality of updated KPI
parameters may be provisioned by the UI Server [504] and displayed to the user
[502] in the network, based on at least the acknowledgement message received at
the UI Server [504].
[0091] In an embodiment of the present disclosure, the plurality of updated KPI
parameters may be the advanced KPI formula. The advanced KPI formula
comprises at least one of the set of policies added to the plurality of KPI parameters.
For instance, the advanced KPI formula comprises a function, such as the inverse
function, in the KPI parameter. The inverse function may provide advanced KPI
metrics for the average holding time of a call and compute the pattern of the KPI
over the predefined duration.
[0092] The present disclosure further discloses a user equipment (UE). The UE
comprises a transceiver unit [302] configured to transmit, to a system, a Key
Performance Indicator (KPI) provisioning request. The KPI provisioning request
comprises a list of KPI parameters associated with a network. The transceiver unit
[302] of the UE is further configured to receive, from the system, a plurality of
updated KPI parameters. The plurality of updated KPI parameters is generated by
the system based on the extracted at least one of the plurality of KPI parameters
from the list of KPI parameters included in the KPI provisioning request. The
plurality of updated KPI parameters is generated by the system based on a set of
pre-defined network policies applied to the extracted at least one of the plurality of
KPI parameters, the set of pre-defined policies comprising one of an inverse
function, a mode function, and an erlang function.
[0093] The present disclosure further discloses a non-transitory computer readable
storage medium storing instructions for generating and provisioning a Key
Performance Indicator (KPI), the instructions including executable code which,
when executed by one or more units of a system, causes a transceiver unit [302] of
the system to receive, from a User Equipment (UE) [306], a Key Performance
Indicator (KPI) provisioning request. The KPI provisioning request comprises a list
of KPI parameters associated with a network. The instructions, when executed by
the system, further cause a processing unit [304] of the system to extract at least one
of a plurality of KPI parameters from the received list of KPI parameters. The
instructions, when executed by the system, further cause the processing unit [304]
of the system to generate a plurality of updated KPI parameters based on the
extracted at least one of the plurality of KPI parameters. The plurality of updated
KPI parameters is generated based on a set of pre-defined network policies applied
to the extracted at least one of the plurality of KPI parameters, the set of pre-defined
policies comprising one of an inverse function, a mode function, and an erlang
function.
[0094] As is evident from the above, the present invention has several technical
advantages. Firstly, it enhances Network Effectiveness by generating reports on
KPIs and counters that measure the health and quality of service of a network. The
incorporation of the Advanced formula KPI, specifically the erlang KPI, enables a
precise estimation of the network's capacity to handle traffic. This information is
crucial for optimizing network performance and resource allocation. Secondly, the
present invention introduces Automated Analysis capabilities to the KPI
framework. Complex operations such as erlang and inverse calculations are
seamlessly integrated into the formula, providing enhanced flexibility in automated
report generation. Previously, these tasks were conducted offline, but with the
Advanced KPI Formula, the process is streamlined, saving time and effort.
Furthermore, the Advanced KPI Formula excels in Pattern Finding. Its advanced
KPIs enable the identification of complex patterns over specific time periods.
Additionally, statistical parameters like the mode can be derived, offering valuable
insights into the data distribution. These pattern-finding capabilities empower users
to detect trends, anomalies, and performance fluctuations, facilitating proactive
decision-making and troubleshooting. In summary, the present invention's
technical advantages lie in the network effectiveness, automated analysis, and
pattern finding features offered by the advanced KPI formula. These advancements
contribute to improved network optimization, streamlined analysis processes, and
the ability to identify meaningful patterns within performance data.
[0095] While considerable emphasis has been placed herein on the disclosed
implementations, it will be appreciated that many implementations can be made
and that many changes can be made to the implementations without departing from
the principles of the present disclosure. These and other changes in the
implementations of the present disclosure will be apparent to those skilled in the
art, whereby it is to be understood that the foregoing descriptive matter is to be
interpreted as illustrative and non-limiting.
We Claim:
1. A method [400] for generating and provisioning a Key Performance
Indicator (KPI), the method [400] comprising:
- receiving, by a transceiver unit [302] from a User Equipment (UE)
[306], a Key Performance Indicator (KPI) provisioning request, wherein
the KPI provisioning request comprises a list of KPI parameters
associated with a network;
- extracting, by a processing unit [304], at least one of a plurality of KPI
parameters from the received list of KPI parameters; and
- generating, by the processing unit [304], a plurality of updated KPI
parameters based on the extracted at least one of the plurality of KPI
parameters,
wherein the plurality of updated KPI parameters is generated based on a set
of pre-defined network policies applied to the extracted at least one of the
plurality of KPI parameters, the set of pre-defined policies comprising one
of an inverse function, a mode function, and an erlang function.
2. The method [400] as claimed in claim 1, wherein the method [400] further
comprises receiving, by the transceiver unit [302], the KPI provisioning
request from the UE [306] via a load balancer [308].
3. The method [400] as claimed in claim 2, wherein the load balancer [308] is
configured to receive the KPI provisioning request from at least one of a
plurality of UEs in a round-robin scheduling.
4. The method [400] as claimed in claim 2, wherein the method [400] further
comprises:
- receiving, by the transceiver unit [302], the KPI provisioning request
during one of a plurality of available time intervals of a system [300],
and wherein the plurality of available time intervals is determined by the
load balancer [308].
5. The method [400] as claimed in claim 4, wherein the plurality of available
time intervals is determined by the load balancer based on at least one or
more network events associated with the network, and wherein the one or
more network events comprise at least one of a call drop rate event, a call
set up time event, a voice quality event, and a video quality event.
6. The method [400] as claimed in claim 1, wherein, based on at least one of
the plurality of generated updated KPI parameters, the method [400] further
comprises:
- generating, by the processing unit [304], an updated KPI list; and
- transmitting, by the transceiver unit [302], the updated KPI list to at least
one of the plurality of UEs.
7. A system [300] for generating and provisioning a Key Performance
Indicator (KPI), the system [300] comprising:
- a transceiver unit [302], wherein the transceiver unit [302] is configured
to:
o receive, from a User Equipment (UE), a Key Performance Indicator
(KPI) provisioning request, wherein the KPI provisioning request
comprises a list of KPI parameters associated with a network; and
- a processing unit [304] connected at least with the transceiver unit [302],
wherein the processing unit [304] is configured to:
o extract at least one of a plurality of KPI parameters from the received
list of KPI parameters; and
o generate a plurality of updated KPI parameters based on the
extracted at least one of the plurality of KPI parameters, wherein the
plurality of updated KPI parameters is generated based on a set of
pre-defined network policies applied to the extracted at least one of
the plurality of KPI parameters, the set of pre-defined policies
comprising one of an inverse function, a mode function, and an
erlang function.
8. The system [300] as claimed in claim 7, wherein the transceiver unit [302]
is further configured to receive the KPI provisioning request from the UE
[306] via a load balancer [308].
9. The system [300] as claimed in claim 8, wherein the load balancer [308]
receives the KPI provisioning request from at least one of a plurality of UEs
in a round-robin scheduling.
10. The system [300] as claimed in claim 8, wherein the transceiver unit [302]
is further configured to:
- receive the KPI provisioning request during one of a plurality of
available time intervals of the system, wherein the plurality of available
time intervals is determined by the load balancer.
11. The system [300] as claimed in claim 10, wherein the plurality of available
time intervals is determined by the load balancer based on at least one or
more network events associated with the network, wherein the one or more
network events comprise at least one of a call drop rate event, a call set up
time event, a voice quality event, and a video quality event.
12. The system [300] as claimed in claim 7, wherein based on at least one of the
plurality of generated updated KPI parameters,
- the processing unit [304] is configured to generate an updated KPI list;
and
- the transceiver unit [302] is configured to transmit the updated KPI list
to at least one of the plurality of UEs [306].
13. A user equipment (UE), comprising:
- a transceiver unit [302] configured to:
o transmit, to a system, a Key Performance Indicator (KPI)
provisioning request, wherein the KPI provisioning request
comprises a list of KPI parameters associated with a network; and
o receive, from a system, a plurality of updated KPI parameters,
wherein the plurality of updated KPI parameters is generated by the
system based on the extracted at least one of the plurality of KPI
parameters from the list of KPI parameters included in the KPI
provisioning request, and
wherein the plurality of updated KPI parameters is generated by the system
based on a set of pre-defined network policies applied to the extracted at
least one of the plurality of KPI parameters, the set of pre-defined policies
comprising one of an inverse function, a mode function, and an erlang
function.
Dated this 22nd day of August 2023