
Method And System For Monitoring Performance Of Network Elements

Abstract: The present disclosure relates to a method and a system for monitoring performance of network elements. The present disclosure encompasses: collecting, by a performance management engine (PME) [104], one or more performance parameters from one or more network elements [302]; processing, by the PME [104], the collected one or more performance parameters for the one or more network elements [302]; calculating, by a Key Performance Indicators (KPIs) engine [106], one or more key performance indicators for each of the one or more network elements [302] based on the processed one or more performance parameters; segregating, by the Key Performance Indicators (KPIs) engine [106], the calculated one or more key performance indicators based on one or more criteria; normalizing, by a normalization layer [108], the segregated one or more key performance indicators; and transmitting, by the normalization layer [108], the normalized one or more key performance indicators to one or more subsystems. [FIG. 4]


Patent Information

Application #
Filing Date
15 July 2023
Publication Number
03/2025
Publication Type
INA
Invention Field
COMPUTER SCIENCE
Status
Email
Parent Application

Applicants

Jio Platforms Limited
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India.

Inventors

1. Ankit Murarka
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India.

Specification

FORM 2
THE PATENTS ACT, 1970 (39 OF 1970) & THE PATENT RULES, 2003
COMPLETE SPECIFICATION
(See section 10 and rule 13)
“METHOD AND SYSTEM FOR MONITORING PERFORMANCE OF
NETWORK ELEMENTS”
We, Jio Platforms Limited, an Indian National, of Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India.
The following specification particularly describes the invention and the manner in which it is to be performed.

METHOD AND SYSTEM FOR MONITORING PERFORMANCE OF
NETWORK ELEMENTS
FIELD OF THE INVENTION
[0001] Embodiments of the present disclosure generally relate to the field of network management systems. More particularly, the embodiments of the present disclosure relate to a method and a system for monitoring performance of network elements.
BACKGROUND
[0002] The following description of related art is intended to provide background information pertaining to the field of the disclosure. This section may include certain aspects of the art that may be related to various features of the present disclosure. However, it should be appreciated that this section is to be used only to enhance the understanding of the reader with respect to the present disclosure, and not as an admission of prior art.
[0003] Network performance management systems typically track network elements and data from network monitoring tools and combine and process such data to determine key performance indicators (KPI) of the network. Integrated performance management systems provide the means to visualize the network performance data so that network operators and other relevant stakeholders are able to identify the service quality of the overall network, and individual/grouped network elements. By having an overall as well as detailed view of the network performance, the network operators can detect, diagnose, and remedy actual service issues, as well as predict potential service issues or failures in the network and take precautionary measures accordingly.

[0004] Existing systems often struggled with providing real-time performance data. There was a significant delay in gathering, processing, and visualizing data, which could lead to slower response times when dealing with network issues. Traditional methods might have been less efficient at handling vast amounts of data generated by networks, resulting in slower performance and reduced accuracy in analysing the data. Existing network management systems might not have been designed to handle the increased volume and variety of data that come with larger, more complex networks, making them less scalable. The analytical capabilities of existing systems were likely less sophisticated, limiting the depth and breadth of analysis that could be performed on network performance data. Prior systems might have had a more rudimentary approach to defining, calculating, and managing KPIs. A unified view across all nodes might not have been possible, making it more difficult to understand network performance holistically. The complexity of maintaining existing, layered architectures could have been high, especially when trying to ensure seamless integration between different subsystems. Further, existing systems might not have been as flexible or agile, making it more difficult to adapt to changing network requirements or operations. Prior systems often operated in silos, making it difficult to get a comprehensive view of the network and limiting the effectiveness of data analysis. There was typically less automation in prior network management systems, which could have led to increased manual workload and greater potential for human error.
[0005] Therefore, in light of the foregoing discussion, there exists a need to overcome the aforementioned drawbacks.
[0006] Thus, there exists an imperative need in the art to provide a method and system for monitoring performance of network elements. The proposed Integrated Performance Management system seeks to address these issues by providing real-time monitoring, efficient data management, advanced analytical capabilities, integrated KPI management, and more.

SUMMARY
[0007] This section is provided to introduce certain aspects of the present disclosure in a simplified form that are further described below in the detailed description. This summary is not intended to identify the key features or the scope of the claimed subject matter.
[0008] An aspect of the present disclosure may relate to a method for monitoring performance of network elements. The method includes collecting, by a performance management engine, one or more performance parameters from one or more network elements. Next, the method includes processing, by the performance management engine, the collected one or more performance parameters for the one or more network elements. Next, the method includes calculating, by a Key Performance Indicators (KPIs) engine, one or more key performance indicators for each of the one or more network elements based on the processed one or more performance parameters. Next, the method includes segregating, by the Key Performance Indicators (KPIs) engine, the calculated one or more key performance indicators based on one or more criteria. Next, the method includes normalizing, by a normalization layer, the segregated one or more key performance indicators. Thereafter, the method includes transmitting, by the normalization layer, the normalized one or more key performance indicators to one or more subsystems.
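The method steps recited above can be sketched end-to-end as a minimal pipeline. This is an illustrative sketch only; the function names, parameter names, and sample metric values are assumptions for illustration and not part of the disclosure:

```python
# Illustrative sketch of the recited monitoring pipeline.
# All names and sample values are hypothetical, not part of the disclosure.

def collect(network_elements):
    """Performance management engine: gather raw parameters per element."""
    return {ne: {"latency_ms": 12.0, "packet_loss": 0.01} for ne in network_elements}

def process(raw):
    """Clean and validate the collected parameters."""
    return {ne: {k: float(v) for k, v in params.items()} for ne, params in raw.items()}

def calculate_kpis(processed):
    """KPI engine: derive one or more KPIs per network element."""
    return {ne: {"avg_latency_ms": p["latency_ms"], "loss_pct": p["packet_loss"] * 100}
            for ne, p in processed.items()}

def segregate(kpis, criterion):
    """Group calculated KPIs by a criterion such as node type."""
    groups = {}
    for ne, values in kpis.items():
        groups.setdefault(criterion(ne), {})[ne] = values
    return groups

def normalize(groups):
    """Normalization layer: convert to a predefined standardized record format."""
    return [{"node": ne, "kpi": name, "value": val}
            for group in groups.values()
            for ne, values in group.items()
            for name, val in values.items()]

records = normalize(
    segregate(calculate_kpis(process(collect(["gnb-1", "gnb-2"]))),
              criterion=lambda ne: ne.split("-")[0]))
```

Each stage mirrors one recited step: collection and processing by the performance management engine, KPI calculation and segregation by the KPI engine, and normalization into a standardized record format for transmission.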
[0009] In an exemplary aspect of the present disclosure, the method further comprises aggregating, by an aggregation unit, the normalized one or more key performance indicators associated with one or more network elements to form an aggregated KPI output data.

[0010] In an exemplary aspect of the present disclosure, the method further comprises storing, by a storage unit, the aggregated KPI output data in a database.
[0011] In an exemplary aspect of the present disclosure, the method further comprises executing, via a workflow engine, one or more tasks based on an analysis of the aggregated KPI output data.
[0012] In an exemplary aspect of the present disclosure, the method further comprises providing, on a displaying unit, visualization of the one or more performance parameters and the one or more key performance indicators of the one or more network elements in real-time.
[0013] In an exemplary aspect of the present disclosure, the method further comprises implementing, by a scheduling unit, a technique for monitoring performance of the one or more network elements at a predefined time interval.
[0014] In an exemplary aspect of the present disclosure, the method further comprises troubleshooting, by an analysis engine, the one or more network elements based on the one or more key performance indicators.
[0015] In an exemplary aspect of the present disclosure, the method further comprises distributing, by an elastic load balancer, one or more incoming requests to the one or more network elements based on the one or more key performance indicators.
[0016] In an exemplary aspect of the present disclosure, the normalization layer uses a publish-subscribe message broker system to transmit the normalized one or more key performance indicators to one or more subsystems.
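The publish-subscribe transmission described in this aspect can be illustrated with a minimal in-process broker. A deployed system would use a real message broker; this `Broker` class, its topic name, and the sample record are assumptions for illustration, not the disclosed design:

```python
# Minimal in-process publish-subscribe broker, illustrating how a
# normalization layer might fan normalized KPIs out to subsystems.
# All names here are illustrative assumptions.

class Broker:
    def __init__(self):
        self._subscribers = {}  # topic -> list of subscriber callbacks

    def subscribe(self, topic, callback):
        self._subscribers.setdefault(topic, []).append(callback)

    def publish(self, topic, message):
        # Deliver the message to every subscriber of the topic.
        for cb in self._subscribers.get(topic, []):
            cb(message)

broker = Broker()
received = []
broker.subscribe("kpi.normalized", received.append)   # e.g. an analytics subsystem
broker.subscribe("kpi.normalized", lambda m: None)    # e.g. a reporting subsystem
broker.publish("kpi.normalized",
               {"node": "gnb-1", "kpi": "avg_latency_ms", "value": 12.0})
```

The pattern decouples the normalization layer from its consumers: subsystems subscribe to a topic and receive each normalized KPI record without the publisher knowing who they are.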

[0017] In an exemplary aspect of the present disclosure, the one or more performance parameters comprise at least one of a data radio bearer (DRB), a radio resource control (RRC), a radio resource utilization (RRU), a registration management (RM), a user equipment (UE) context, a session management (SM), a bandwidth usage, a latency, and a packet loss.
[0018] In an exemplary aspect of the present disclosure, the method further comprises storing, in a Distributed Data Lake, the processed one or more performance parameters for the one or more network elements.
[0019] In an exemplary aspect of the present disclosure, normalizing the segregated one or more key performance indicators further comprises converting the one or more key performance indicators into a predefined standardized format.
[0020] In an exemplary aspect of the present disclosure, the one or more criteria comprise a time, a number of aggregation levels, a type of node, a node instance, and a location.
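As one hedged illustration of these criteria, segregation can be modeled as grouping normalized KPI records by a composite key of time, node type, and location; the field names and sample values below are assumptions for illustration:

```python
# Illustrative segregation of KPI records by the criteria listed above
# (time bucket, type of node, location). Field names are assumptions.
from collections import defaultdict

def segregate(records, keys=("time_bucket", "node_type", "location")):
    buckets = defaultdict(list)
    for rec in records:
        # Composite key drawn from the chosen criteria.
        buckets[tuple(rec[k] for k in keys)].append(rec)
    return dict(buckets)

records = [
    {"time_bucket": "2023-07-15T10:00", "node_type": "gNB", "location": "zone-a",
     "kpi": "latency_ms", "value": 11.5},
    {"time_bucket": "2023-07-15T10:00", "node_type": "gNB", "location": "zone-a",
     "kpi": "latency_ms", "value": 12.5},
    {"time_bucket": "2023-07-15T10:00", "node_type": "AMF", "location": "zone-b",
     "kpi": "latency_ms", "value": 4.0},
]
groups = segregate(records)
```

Varying the `keys` tuple changes the aggregation level, which is how multiple segregation criteria can coexist over the same record stream.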
[0021] Another aspect of the present disclosure may relate to a system for monitoring performance of network elements. The system comprising: a performance management engine configured to: collect one or more performance parameters from one or more network elements and process the collected one or more performance parameters for the one or more network elements. The system further comprising: a key performance indicator (KPIs) engine configured to: calculate one or more key performance indicators for each of the one or more network elements based on the processed one or more performance parameters and segregate the calculated one or more key performance indicators based on one or more criteria. The system further comprising: a normalization layer configured to: normalize the segregated one or more key performance indicators and transmit the normalized one or more key performance indicators to one or more subsystems.
[0022] Yet another aspect of the present disclosure may relate to a non-transitory computer readable storage medium storing instructions for monitoring performance of network elements, the instructions include executable code which, when executed by one or more units of a system, causes: a performance management engine of the system to collect one or more performance parameters from one or more network elements, and process the collected one or more performance parameters for the one or more network elements; a key performance indicator (KPIs) engine of the system to calculate one or more key performance indicators for each of the one or more network elements based on the processed one or more performance parameters, and segregate the calculated one or more key performance indicators based on one or more criteria; and a normalization layer of the system to normalize the segregated one or more key performance indicators, and transmit the normalized one or more key performance indicators to one or more subsystems.
[0023] Yet another aspect of the present disclosure comprises a user equipment (UE). The UE comprising a processor configured to: receive the normalized one or more key performance indicators; wherein the one or more key performance indicators are normalized based on: collecting one or more performance parameters from one or more network elements; processing the collected one or more performance parameters for the one or more network elements; calculating one or more key performance indicators for each of the one or more network elements based on the processed one or more performance parameters; segregating the calculated one or more key performance indicators based on one or more criteria; normalizing the segregated one or more key performance indicators; and transmitting the normalized one or more key performance indicators to one or more subsystems.

OBJECTS OF THE INVENTION
[0024] Some of the objects of the present disclosure, which at least one embodiment disclosed herein satisfies, are listed herein below.
[0025] It is an object of the present disclosure to provide a method and system for monitoring performance of network elements.
[0026] It is another object of the present disclosure to provide a method and system for monitoring and analysing performance counters of network elements.
[0027] It is yet another object of the present disclosure to provide a method and system for monitoring and analysing performance counters of network elements that aims to provide real-time monitoring of network performance across all elements/nodes, enabling quicker identification and resolution of potential issues.
[0028] It is yet another object of the present disclosure to provide a method and system for monitoring and analysing performance counters of network elements that handles vast amounts of data efficiently, rapidly processing and analysing performance counter data from a variety of sources.
[0029] It is yet another object of the present disclosure to provide a method and system for monitoring and analysing performance counters of network elements that is designed to be highly scalable, capable of managing the data volume and variety associated with larger, more complex networks.
[0030] It is yet another object of the present disclosure to provide a method and system for monitoring and analysing performance counters of network elements that aims to offer more sophisticated data analysis, providing valuable insights into network performance based on features like the Analysis Engine and Parallel Computing Framework.

[0031] It is yet another object of the present disclosure to provide a method and system for monitoring and analysing performance counters of network elements that offers comprehensive KPI management. The 5G Key Performance Indicator (KPI) Engine is intended to manage all the KPIs of all network elements effectively, allowing for more detailed and flexible performance measurement.
[0032] It is yet another object of the present disclosure to provide a method and system for monitoring and analysing performance counters of network elements that aims to offer an integrated view of network performance, making it easier to understand the overall state of the network and to identify any potential issues.
[0033] It is yet another object of the present disclosure to provide a method and system for monitoring and analysing performance counters of network elements that is easy to maintain. By using a layered architecture and microservices approach, this system aims to be easier to maintain and update than traditional monolithic systems.
[0034] It is yet another object of the present disclosure to provide a method and system for monitoring and analysing performance counters of network elements that is designed to be flexible and adaptable, able to adjust to changing network operations and requirements.
[0035] It is yet another object of the present disclosure to provide a method and system for monitoring and analysing performance counters of network elements that, by automating various tasks like KPI calculations and scheduling, aims to reduce the workload of network operators and minimize the potential for human error.

[0036] It is yet another object of the present disclosure to provide a method and system for monitoring and analysing performance counters of network elements that, by storing data in a Distributed Data Lake and enabling cross-system communication, aims to prevent data silos, thereby improving the effectiveness of data analysis.
DESCRIPTION OF DRAWINGS
[0037] The accompanying drawings, which are incorporated herein and constitute a part of this disclosure, illustrate exemplary embodiments of the disclosed methods and systems in which like reference numerals refer to the same parts throughout the different drawings. Components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present disclosure. Some drawings may indicate the components using block diagrams and may not represent the internal circuitry of each component. It will be appreciated by those skilled in the art that disclosure of such drawings includes disclosure of electrical components, electronic components or circuitry commonly used to implement such components.
[0038] FIG. 1 illustrates an exemplary block diagram of an architecture of a network performance management system, in accordance with the exemplary embodiments of the present invention.
[0039] FIG. 2 illustrates an exemplary block diagram of a computing device upon which the features of the present disclosure may be implemented, in accordance with exemplary implementations of the present disclosure.
[0040] FIG. 3 illustrates an exemplary block diagram of a system for monitoring performance of network elements, in accordance with exemplary implementations of the present disclosure.

[0041] FIG. 4 illustrates a method flow diagram for monitoring performance of network elements, in accordance with exemplary implementations of the present disclosure.
[0042] The foregoing shall be more apparent from the following more detailed description of the disclosure.
DETAILED DESCRIPTION
[0043] In the following description, for the purposes of explanation, various specific details are set forth in order to provide a thorough understanding of embodiments of the present disclosure. It will be apparent, however, that embodiments of the present disclosure may be practiced without these specific details. Several features described hereafter can each be used independently of one another or with any combination of other features. An individual feature may not address any of the problems discussed above or might address only some of the problems discussed above. Some of the problems discussed above might not be fully addressed by any of the features described herein. Example embodiments of the present disclosure are described below, as illustrated in various drawings in which like reference numerals refer to the same parts throughout the different drawings.
[0044] The ensuing description provides exemplary embodiments only, and is not intended to limit the scope, applicability, or configuration of the disclosure. Rather, the ensuing description of the exemplary embodiments will provide those skilled in the art with an enabling description for implementing an exemplary embodiment. It should be understood that various changes may be made in the function and arrangement of elements without departing from the spirit and scope of the disclosure as set forth.

[0045] Specific details are given in the following description to provide a thorough understanding of the embodiments. However, it will be understood by one of ordinary skill in the art that the embodiments may be practiced without these specific details. For example, circuits, systems, processes, and other components may be shown as components in block diagram form in order not to obscure the embodiments in unnecessary detail.
[0046] Also, it is noted that individual embodiments may be described as a process which is depicted as a flowchart, a flow diagram, a data flow diagram, a structure diagram, or a block diagram. Although a flowchart may describe the operations as a sequential process, many of the operations may be performed in parallel or concurrently. In addition, the order of the operations may be re-arranged. A process is terminated when its operations are completed but could have additional steps not included in a figure.
[0047] It should be noted that the terms "mobile device", "user equipment", "user device", “communication device”, “device” and similar terms are used interchangeably for the purpose of describing the invention. These terms are not intended to limit the scope of the invention or imply any specific functionality or limitations on the described embodiments. The use of these terms is solely for convenience and clarity of description. The invention is not limited to any particular type of device or equipment, and it should be understood that other equivalent terms or variations thereof may be used interchangeably without departing from the scope of the invention as defined herein.
[0048] Specific details are given in the following description to provide a thorough understanding of the embodiments. However, it will be understood by one of ordinary skill in the art that the embodiments may be practiced without these specific details. For example, circuits, systems, networks, processes, and other components may be shown as components in block diagram form in order not to obscure the embodiments in unnecessary detail. In other instances, well-known circuits, processes, algorithms, structures, and techniques may be shown without unnecessary detail in order to avoid obscuring the embodiments.
[0049] Also, it is noted that individual embodiments may be described as a process which is depicted as a flowchart, a flow diagram, a data flow diagram, a structure diagram, or a block diagram. Although a flowchart may describe the operations as a sequential process, many of the operations can be performed in parallel or concurrently. In addition, the order of the operations may be re-arranged. A process is terminated when its operations are completed but could have additional steps not included in a figure.
[0050] The word “exemplary” and/or “demonstrative” is used herein to mean serving as an example, instance, or illustration. For the avoidance of doubt, the subject matter disclosed herein is not limited by such examples. In addition, any aspect or design described herein as “exemplary” and/or “demonstrative” is not necessarily to be construed as preferred or advantageous over other aspects or designs, nor is it meant to preclude equivalent exemplary structures and techniques known to those of ordinary skill in the art. Furthermore, to the extent that the terms “includes,” “has,” “contains,” and other similar words are used in either the detailed description or the claims, such terms are intended to be inclusive—in a manner similar to the term “comprising” as an open transition word—without precluding any additional or other elements.
[0051] As used herein, an “electronic device”, or “portable electronic device”, or “user device” or “communication device” or “user equipment” or “device” refers to any electrical, electronic, electromechanical and computing device. The user device is capable of receiving and/or transmitting one or more parameters, performing function/s, communicating with other user devices and transmitting data to the other user devices. The user equipment may have a processor, a display, a memory, a battery, and an input-means such as a hard keypad and/or a soft keypad. The user equipment may be capable of operating on any radio access technology including but not limited to IP-enabled communication, ZigBee, Bluetooth, Bluetooth Low Energy, Near Field Communication, Z-Wave, Wi-Fi, Wi-Fi Direct, etc. For instance, the user equipment may include, but is not limited to, a mobile phone, a smartphone, virtual reality (VR) devices, augmented reality (AR) devices, a laptop, a general-purpose computer, a desktop, a personal digital assistant, a tablet computer, a mainframe computer, or any other device as may be obvious to a person skilled in the art for implementation of the features of the present disclosure.
[0052] Further, the user device may also comprise a “processor” or “processing unit”, wherein the processor refers to any logic circuitry for processing instructions. The processor may be a general-purpose processor, a special-purpose processor, a conventional processor, a digital signal processor, a plurality of microprocessors, one or more microprocessors in association with a DSP core, a controller, a microcontroller, Application Specific Integrated Circuits, Field Programmable Gate Array circuits, any other type of integrated circuit, etc. The processor may perform signal coding, data processing, input/output processing, and/or any other functionality that enables the working of the system according to the present disclosure. More specifically, the processor is a hardware processor.
[0053] As used herein, “storage unit” or “memory unit” refers to a machine or computer-readable medium including any mechanism for storing information in a form readable by a computer or similar machine. For example, a computer-readable medium includes read-only memory (“ROM”), random access memory (“RAM”), magnetic disk storage media, optical storage media, flash memory devices or other types of machine-accessible storage media. The storage unit stores at least the data that may be required by one or more units of the system to perform their respective functions.

[0054] As used herein, aggregating refers to the process of combining one or more normalized key performance indicators (KPIs) associated with various network elements to form a consolidated KPI output data. The aggregation involves collecting data from distributed sources, processing and merging it according to predefined criteria such as time intervals, node types, or locations, and then synthesizing this information into a cohesive dataset that provides a comprehensive view of network performance.
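The aggregation described above can be sketched as grouping normalized KPI records and computing a per-group summary; the grouping key, field names, and the choice of mean as the merge function are illustrative assumptions:

```python
# Illustrative aggregation of normalized KPI records into a consolidated
# output. Grouping key, field names, and the mean are assumptions.
from collections import defaultdict

def aggregate(records, group_key="node_type"):
    sums, counts = defaultdict(float), defaultdict(int)
    for rec in records:
        key = (rec[group_key], rec["kpi"])
        sums[key] += rec["value"]
        counts[key] += 1
    # Mean value per (group, KPI) pair forms the consolidated output.
    return {key: sums[key] / counts[key] for key in sums}

records = [
    {"node_type": "gNB", "kpi": "latency_ms", "value": 10.0},
    {"node_type": "gNB", "kpi": "latency_ms", "value": 14.0},
    {"node_type": "AMF", "kpi": "latency_ms", "value": 4.0},
]
agg = aggregate(records)  # {("gNB", "latency_ms"): 12.0, ("AMF", "latency_ms"): 4.0}
```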
[0055] As discussed in the background section, existing systems often struggled with providing real-time performance data. There was a significant delay in gathering, processing, and visualizing data, which could lead to slower response times when dealing with network issues. Traditional methods might have been less efficient at handling vast amounts of data generated by networks, resulting in slower performance and reduced accuracy in analysing the data. Existing network management systems might not have been designed to handle the increased volume and variety of data that come with larger, more complex networks, making them less scalable. The analytical capabilities of existing systems were likely less sophisticated, limiting the depth and breadth of analysis that could be performed on network performance data. Prior systems might have had a more rudimentary approach to defining, calculating, and managing KPIs. A unified view across all nodes might not have been possible, making it more difficult to understand network performance holistically. The complexity of maintaining existing, layered architectures could have been high, especially when trying to ensure seamless integration between different subsystems. Further, existing systems might not have been as flexible or agile, making it more difficult to adapt to changing network requirements or operations. Prior systems often operated in silos, making it difficult to get a comprehensive view of the network and limiting the effectiveness of data analysis. There was typically less automation in prior network management systems, which could have led to increased manual workload and greater potential for human error.
[0056] Thus, there exists an imperative need in the art to provide a method and system for monitoring performance of network elements. The proposed solution of the present disclosure seeks to address these issues by providing real-time monitoring, efficient data management, advanced analytical capabilities, integrated KPI management, and more.
[0057] FIG. 1 illustrates an exemplary block diagram of an architecture [100] of a network performance management system, in accordance with the exemplary embodiments of the present invention. Referring to FIG. 1, the architecture [100] comprises various sub-systems such as: integrated performance management system [102], normalization layer [108], computation layer [134], anomaly detection layer [136], streaming engine [144], elastic load balancer [112], operations and management system [140], API gateway system [146], analysis engine [110], parallel computing framework [132], forecasting engine [124], distributed file system [126], mapping layer [128], distributed data lake [130], scheduling layer [114], reporting engine [116], message broker [118], graph layer [120], caching layer [122], service quality manager [138], correlation engine [142] and ingestion layer [148]. The connection between these subsystems is also as shown in FIG. 1. However, it will be appreciated by those skilled in the art that the present disclosure is not limited to the connections shown in the diagram, and any other connections between various subsystems that are needed to realise the effects are within the scope of this disclosure.
[0058] The various components of the architecture [100] may include:

[0059] Performance Management Engine (PME) [104]: The 5G Performance Management engine is a component of the integrated system, responsible for collecting, processing, and managing performance counter data from various data sources within the network. The gathered data includes metrics such as connection speed, latency, data transfer rates, and many others. This raw data is then processed and aggregated as required, forming a comprehensive overview of network performance. The processed information is then stored in a Distributed Data Lake, a centralized, scalable, and flexible storage solution, allowing for easy access and further analysis. This engine also enables the reporting and visualization of this performance counter data, thus providing network administrators with a real-time, insightful view of the network's operation. Through these visualizations, operators can monitor the network's performance, identify potential issues, and make informed decisions to enhance network efficiency and reliability.
[0060] Key Performance Indicators (KPIs) Engine [106]: The 5G Key Performance Indicator (KPI) Engine is a component tasked with managing the KPIs of all the network elements. It uses the performance counters, which are collected and processed by the 5G Performance Management engine from various data sources. These counters, encapsulating crucial performance data, are harnessed by the KPI engine to calculate essential KPIs. These KPIs might include data throughput, latency, packet loss rate, and more. Once the KPIs are computed, they are segregated based on the aggregation requirements, offering a multi-layered and detailed understanding of network performance. The processed KPI data is then stored in the Distributed Data Lake, ensuring a highly accessible, centralized, and scalable data repository for further analysis and utilization. Similar to the Performance Management engine, the KPI engine is also responsible for reporting and visualization of KPI data. In an exemplary aspect, the KPIs include, but are not limited to, average delay DL air-interface, average downlink (DL) user equipment (UE) throughput in gNodeB (gNB), mean number of radio resource control (RRC) connections, number of protocol data unit (PDU) sessions failed to setup, number of successful/failed handover preparations, average power, minimum power, number of paging records, etc.
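The counter-to-KPI derivations described above can be sketched as follows. This is an illustrative Python sketch only; the counter names, units, and formulas are assumptions for illustration and are not definitions taken from the disclosure.

```python
# Hypothetical sketch: deriving KPIs from processed performance counters.
# Counter names, units, and formulas are illustrative assumptions.

def average_dl_ue_throughput(counters):
    """Average DL UE throughput (Mbps) = DL volume / active transmission time."""
    volume_mbit = counters["dl_volume_mbit"]
    active_s = counters["dl_active_time_s"]
    return volume_mbit / active_s if active_s else 0.0

def mean_rrc_connections(samples):
    """Mean number of RRC connections over a collection interval."""
    return sum(samples) / len(samples) if samples else 0.0

gnb_counters = {"dl_volume_mbit": 12000.0, "dl_active_time_s": 600.0}
throughput = average_dl_ue_throughput(gnb_counters)   # 20.0 Mbps
rrc_mean = mean_rrc_connections([100, 120, 110])      # 110.0
```

The same pattern extends to the other listed KPIs, each being a function of one or more collected counters.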
[0061] This functionality allows network administrators to gain a comprehensive, visual understanding of the network's performance, thus supporting informed decision-making and efficient network management.
[0062] Ingestion layer [148]: The Ingestion layer [148] forms a key part of the Integrated Performance Management system. Its primary function is to establish an environment capable of handling diverse types of incoming data. This data may include Alarms, Counters, Configuration parameters, Call Detail Records (CDRs), Infrastructure metrics, Logs, and Inventory data, all of which are crucial for maintaining and optimizing the network's performance. Upon receiving this data, the Ingestion layer processes it by validating its integrity and correctness to ensure it is fit for further use. Following validation, the data is routed to various components of the system, including the Normalization layer, Streaming Engine, Streaming Analytics, and Message Brokers. The destination is chosen based on where the data is required for further analytics and processing.
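A minimal sketch of the validate-and-route behaviour described above, assuming illustrative data-type names and destination identifiers (none of these identifiers come from the disclosure):

```python
# Hypothetical sketch of the Ingestion layer: validate each incoming
# record's integrity, then route it by data type. Type names, checks,
# and destinations are illustrative assumptions.

ROUTES = {
    "counter": "normalization_layer",
    "alarm": "streaming_engine",
    "cdr": "message_broker",
}

def validate(record):
    """Basic integrity check: record must carry a known type and a payload."""
    return record.get("type") in ROUTES and "payload" in record

def ingest(record):
    """Return the destination component for a valid record, else None."""
    if not validate(record):
        return None
    return ROUTES[record["type"]]
```

A real ingestion layer would apply far richer validation (schema, completeness, timestamps) before dispatch, but the shape is the same: validate, then route by type.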
[0063] Normalization layer [108]: The Normalization Layer in the Integrated Performance Management system serves to standardize, enrich, and store data into the appropriate databases. It takes in data that has been ingested and adjusts it to a common standard, making it easier to compare and analyse. This process of "normalization" reduces redundancy and improves data integrity. Upon completion of normalization, the data is stored in various databases like the Distributed Data Lake, Caching Layer, and Graph Layer, depending on its intended use. The choice of storage determines how the data can be accessed and used in the future. Additionally, the Normalization Layer produces data for the Message Broker, a system that enables communication between different parts of the performance management system through the exchange of data messages. Moreover, the Normalization Layer supplies the standardized data to several other subsystems. These include the Analysis Engine for detailed data examination, the Correlation Engine for detecting relationships among various data elements, the Service Quality Manager for maintaining and improving the quality of services, and the Streaming Engine for processing real-time data streams. The normalization layer may include multiple microservices for different data types, such as, but not limited to, a normalization and enrichment microservice for fault management (FM), a normalization and enrichment microservice for performance management (PM), a normalization and enrichment microservice for configuration management (CM), a normalization and enrichment microservice for call detail record (CDR) data, a normalization and enrichment microservice for 5G probe data, and a normalization and enrichment microservice for inventory data.
[0064] Caching layer [122]: The Caching Layer in the Integrated Performance Management system plays a significant role in data management and optimization. During the initial phase, the Normalization Layer processes incoming raw data to create a standardized format, enhancing consistency and comparability. The Normalization Layer then inserts this normalized data into various databases. One such database is the Caching Layer. The Caching Layer is a high-speed data storage layer which temporarily holds data that is likely to be reused, to improve the speed and performance of data retrieval. By storing frequently accessed data in the Caching Layer, the system significantly reduces the time taken to access this data, improving overall system efficiency and performance. Further, the Caching Layer serves as an intermediate layer between the data sources and the sub-systems, such as the Analysis Engine, Correlation Engine, Service Quality Manager, and Streaming Engine. The Normalization Layer is responsible for providing these sub-systems with the necessary data from the Caching Layer.
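The get-or-fetch behaviour of such a caching layer can be sketched as follows; the key format and the dictionary-backed store standing in for the Distributed Data Lake are illustrative assumptions:

```python
# Hypothetical sketch of the Caching Layer: serve frequently accessed data
# from a fast in-memory store, falling back to a slower backing store
# (standing in for the Distributed Data Lake) on a miss.

class CachingLayer:
    def __init__(self, backing_store):
        self._cache = {}
        self._backing = backing_store
        self.hits = 0
        self.misses = 0

    def get(self, key):
        if key in self._cache:
            self.hits += 1
            return self._cache[key]          # fast path: in-memory
        self.misses += 1
        value = self._backing[key]           # slow path: backing store
        self._cache[key] = value             # retain for subsequent reads
        return value

lake = {"kpi:gnb-1:latency_ms": 12.5}
cache = CachingLayer(lake)
first = cache.get("kpi:gnb-1:latency_ms")    # miss: fetched from the lake
second = cache.get("kpi:gnb-1:latency_ms")   # hit: served from memory
```

A production cache would also bound its size and evict stale entries; this sketch shows only the read-through pattern that yields the speed-up described above.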
[0065] Computation layer [134]: The Computation Layer in the Integrated Performance Management system serves as the main hub for complex data processing tasks. In the initial stages, raw data is gathered, normalized, and enriched by the Data Normalization Layer. The Normalization Layer then inserts this standardized data into multiple databases including the Distributed Data Lake, Caching Layer, and Graph Layer, and also feeds it to the Message Broker. Within the Computation Layer, several powerful sub-systems, such as the Analysis Engine, Correlation Engine, Service Quality Manager, and Streaming Engine, utilize the normalized data. These systems are designed to execute various data processing tasks. The Analysis Engine performs in-depth data analytics to generate insights from the data. The Correlation Engine identifies and understands the relations and patterns within the data. The Service Quality Manager assesses and ensures the quality of the services. The Streaming Engine processes and analyses the real-time data feeds. In essence, the Computation Layer is where all major computation and data processing tasks occur. It uses the normalized data provided by the Normalization Layer, processing it to generate useful insights, ensure service quality, understand data patterns, and facilitate real-time data analytics.
[0066] Message broker [118]: The Message Broker, an integral part of the Integrated Performance Management system, operates as a publish-subscribe messaging system. It orchestrates and maintains the real-time flow of data from various sources and applications. At its core, the Message Broker facilitates communication between data producers and consumers through message-based topics. This creates an advanced platform for contemporary distributed applications. With the ability to accommodate a large number of permanent or ad-hoc consumers, the Message Broker demonstrates immense flexibility in managing data streams. Moreover, it leverages the file-system for storage and caching, boosting its speed and efficiency. The design of the Message Broker is centred around reliability. It is designed to be fault-tolerant and mitigate data loss, ensuring the integrity and consistency of the data.
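The topic-based producer/consumer flow described above can be sketched minimally as follows; the topic name and message fields are illustrative assumptions:

```python
# Hypothetical sketch of the publish-subscribe Message Broker: producers
# publish to named topics, and every subscriber of a topic receives a
# copy of each message. Persistence and fault tolerance are omitted.

class MessageBroker:
    def __init__(self):
        self._subscribers = {}  # topic -> list of subscriber callbacks

    def subscribe(self, topic, callback):
        self._subscribers.setdefault(topic, []).append(callback)

    def publish(self, topic, message):
        # Deliver the message to every consumer subscribed to the topic.
        for callback in self._subscribers.get(topic, []):
            callback(message)

broker = MessageBroker()
received = []
broker.subscribe("normalized_kpis", received.append)
broker.publish("normalized_kpis", {"node": "gnb-1", "latency_ms": 12.5})
```

The decoupling shown here is the essential property: producers need not know which subsystems consume a topic, which is what lets permanent and ad-hoc consumers attach freely.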
[0067] Graph layer [120]: The Graph Layer, serving as the Relationship Modeler, plays a pivotal role in the Integrated Performance Management system. It can model a variety of data types, including alarm, counter, configuration, CDR data, Infra-metric data, 5G Probe Data, and Inventory data. Equipped with the capability to establish relationships among diverse types of data, the Relationship Modeler offers extensive modelling capabilities. For instance, it can model Alarm and Counter data, or V-probe and Alarm data, elucidating their interrelationships. Moreover, the Modeler should be adept at processing the steps provided in the model and delivering the results to the requesting system, whether it be a Parallel Computing system, Workflow Engine, Query Engine, Correlation System, 5G Performance Management Engine, or 5G KPI Engine.
[0068] Scheduling layer [114]: The Scheduling Layer serves as a key element of the Integrated Performance Management System, endowed with the ability to execute tasks at predetermined intervals set according to user preferences. A task might be an activity performing a service call, an API call to another microservice, the execution of an Elastic Search query and storage of its output in the Distributed Data Lake or Distributed File System, or the sending of that output to another micro-service. The versatility of the Scheduling Layer extends to facilitating graph traversals via the Mapping Layer to execute tasks. This crucial capability enables seamless and automated operations within the system, ensuring that various tasks and services are performed on schedule, without manual intervention, enhancing the system's efficiency and performance.
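The interval-based task execution described above can be sketched as follows. For determinism the sketch advances a logical clock by hand, which is an assumption of the sketch; a real scheduler would be driven by wall-clock time.

```python
# Hypothetical sketch of the Scheduling Layer: run registered tasks
# whenever their configured interval has elapsed on a logical clock.

class Scheduler:
    def __init__(self):
        self._tasks = []  # mutable entries: [interval, next_due_time, task]

    def every(self, interval, task):
        """Register a task to run every `interval` time units."""
        self._tasks.append([interval, interval, task])

    def tick(self, now):
        """Run every task whose due time has passed at logical time `now`."""
        for entry in self._tasks:
            interval, _, task = entry
            while now >= entry[1]:
                task()
                entry[1] += interval  # schedule the next occurrence

runs = []
sched = Scheduler()
sched.every(10, lambda: runs.append("collect_counters"))
sched.tick(25)   # due at t=10 and t=20, so the task runs twice
```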
[0069] Analysis Engine [110]: The Analysis Engine forms a crucial part of the Integrated Performance Management System, designed to provide an environment where users can configure and execute workflows for a wide array of use-cases. This facility aids in the debugging process and facilitates a better understanding of call flows. With the Analysis Engine, users can perform queries on data sourced from various subsystems or external gateways. This capability allows for an in-depth overview of data and aids in pinpointing issues. The system's flexibility allows users to configure specific policies aimed at identifying anomalies within the data. When these policies detect abnormal behaviour or policy breaches, the system sends notifications, ensuring swift and responsive action.
[0070] Parallel Computing Framework [132]: The Parallel Computing Framework is a key aspect of the Integrated Performance Management System, providing a user-friendly yet advanced platform for executing computing tasks in parallel. This framework showcases both scalability and fault tolerance, crucial for managing vast amounts of data. Users can input data via Distributed File System (DFS) locations or Distributed Data Lake (DDL) indices. The framework supports the creation of task chains by interfacing with the Service Configuration Management (SCM) Sub-System. Each task in a workflow is executed sequentially, but multiple chains can be executed simultaneously, optimizing processing time. To accommodate varying task requirements, the service supports the allocation of specific host lists for different computing tasks. The Parallel Computing Framework is an essential tool for enhancing processing speeds and efficiently managing computing resources, significantly improving the system's performance management capabilities.
[0071] Distributed File System [126]: The Distributed File System (DFS) is a component of the Integrated Performance Management System, enabling multiple clients to access and interact with data seamlessly. This file system is designed to manage data files that are partitioned into numerous segments known as chunks. In the context of a network with vast data, the DFS effectively allows for the distribution of data across multiple nodes. This architecture enhances both the scalability and redundancy of the system, ensuring optimal performance even with large data sets. DFS also supports diverse operations, facilitating the flexible interaction with and manipulation of data.
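The chunk-based partitioning and multi-node distribution described above can be sketched as follows, assuming an illustrative chunk size and a round-robin placement policy (the disclosure does not specify a chunk size or placement policy):

```python
# Hypothetical sketch of the DFS data path: split a file into fixed-size
# chunks, then assign chunks round-robin across nodes. Chunk size and
# placement policy are illustrative assumptions.

CHUNK_SIZE = 4  # bytes per chunk for the sketch; real systems use megabytes

def split_into_chunks(data, chunk_size=CHUNK_SIZE):
    """Partition a byte string into fixed-size chunks."""
    return [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]

def place_chunks(chunks, nodes):
    """Distribute chunks over nodes round-robin for scalability."""
    placement = {node: [] for node in nodes}
    for index, chunk in enumerate(chunks):
        placement[nodes[index % len(nodes)]].append(chunk)
    return placement

chunks = split_into_chunks(b"performance data")
layout = place_chunks(chunks, ["node-a", "node-b"])
```

Redundancy, as mentioned above, would additionally replicate each chunk onto more than one node; the sketch shows only the partitioning and spread.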
[0072] Elastic Load Balancer [112]: The Elastic Load Balancer (ELB) is a vital component of the Integrated Performance Management System, designed to efficiently distribute incoming network traffic across a multitude of backend servers or microservices. Its purpose is to ensure the even distribution of data requests, leading to optimized server resource utilization, reduced latency, and improved overall system performance. The ELB implements various routing strategies to manage traffic. These include round-robin scheduling, header-based request dispatch, and context-based request dispatch. Round-robin scheduling is a simple method of rotating requests evenly across available servers. In contrast, header- and context-based dispatching allow for more intelligent, request-specific routing. Header-based dispatching routes requests based on data contained within the headers of the HTTP requests. Context-based dispatching routes traffic based on the contextual information about the incoming requests. For example, in an event-driven architecture, the ELB manages events and event acknowledgments, forwarding requests or responses to the specific microservice that has requested the event.
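The round-robin and header-based dispatch strategies described above can be sketched together as follows; the header name, server identifiers, and routing table are illustrative assumptions:

```python
# Hypothetical sketch of the Elastic Load Balancer: round-robin by
# default, with header-based dispatch taking precedence when a routing
# header is present. All identifiers are illustrative assumptions.

class ElasticLoadBalancer:
    def __init__(self, servers, header_routes=None):
        self._servers = servers
        self._next = 0
        self._header_routes = header_routes or {}  # header value -> server

    def dispatch(self, request):
        # Header-based dispatch: route on the request's headers if configured.
        headers = request.get("headers", {})
        target = self._header_routes.get(headers.get("x-service"))
        if target is not None:
            return target
        # Round-robin: rotate requests evenly across available servers.
        server = self._servers[self._next % len(self._servers)]
        self._next += 1
        return server

elb = ElasticLoadBalancer(["s1", "s2"], header_routes={"kpi": "kpi-service"})
elb.dispatch({})                                  # -> "s1"
elb.dispatch({})                                  # -> "s2"
elb.dispatch({"headers": {"x-service": "kpi"}})   # -> "kpi-service"
```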
[0073] Streaming Engine [144]: The Streaming Engine, also referred to as Stream Analytics, is a subsystem in the Integrated Performance Management System. This engine is specifically designed for high-speed data pipelining to the User Interface (UI). Its core objective is to ensure real-time data processing and delivery, enhancing the system's ability to respond promptly to dynamic changes. Data is received from various connected subsystems and processed in real-time by the Streaming Engine. After processing, the data is streamed to the UI, fostering rapid decision-making and responses. The Streaming Engine cooperates with the Distributed Data Lake, Message Broker, and Caching Layer to provide a seamless, real-time data flow. Stream Analytics is designed to perform the required computations on incoming data instantly, ensuring that the most relevant and up-to-date information is always available at the UI. Furthermore, this system can also retrieve data from the Distributed Data Lake, Message Broker, and Caching Layer as per the requirement and deliver it to the UI in real-time. The streaming engine's goal is to provide fast, reliable, and efficient data streaming, contributing to the overall performance of the management system.
[0074] Reporting Engine [116]: The Reporting Engine Manager, or REM, is a key subsystem of the Integrated Performance Management System. The fundamental purpose of designing the REM is to dynamically create report layouts of API data, catered to individual client requirements, and deliver these reports via the Notification Engine. The REM serves as the primary interface for creating custom reports based on the data visualized through the client's dashboard. These custom dashboards, created by the client through the User Interface (UI), provide the basis for the Reporting Engine Manager to process and compile data from various interfaces. The main output of the REM is a detailed report generated in Excel format. The REM's unique capability to parse data from different subsystem interfaces, process it according to the client's specifications and requirements, and generate a comprehensive report makes it an essential component of this performance management system. Furthermore, the REM integrates seamlessly with the Notification Engine to ensure timely and efficient delivery of reports to clients via email, ensuring the information is readily accessible and usable, thereby improving overall client satisfaction and system usability.
[0075] FIG. 2 illustrates an exemplary block diagram of a computing device [200] (also referred to herein as a computer system [200]) upon which the features of the present disclosure may be implemented in accordance with an exemplary implementation of the present disclosure. In an implementation, the computing device [200] may also implement a method for monitoring performance of network elements utilising the system. In another implementation, the computing device [200] itself implements the method for monitoring performance of network elements using one or more units configured within the computing device [200], wherein said one or more units are capable of implementing the features as disclosed in the present disclosure.
[0076] The computing device [200] may include a bus [202] or other communication mechanism for communicating information, and a processor [204] coupled with the bus [202] for processing information. The processor [204] may be, for example, a general purpose microprocessor. The computing device [200] may also include a main memory [206], such as a random access memory (RAM), or other dynamic storage device, coupled to the bus [202] for storing information and instructions to be executed by the processor [204]. The main memory [206] also may be used for storing temporary variables or other intermediate information during execution of the instructions to be executed by the processor [204]. Such instructions, when stored in non-transitory storage media accessible to the processor [204], render the computing device [200] into a special-purpose machine that is customized to perform the operations specified in the instructions. The computing device [200] further includes a read only memory (ROM) [208] or other static storage device coupled to the bus [202] for storing static information and instructions for the processor [204].
[0077] A storage device [210], such as a magnetic disk, optical disk, or solid-state drive, is provided and coupled to the bus [202] for storing information and instructions. The computing device [200] may be coupled via the bus [202] to a display [212], such as a cathode ray tube (CRT), Liquid Crystal Display (LCD), Light Emitting Diode (LED) display, Organic LED (OLED) display, etc., for displaying information to a computer user. An input device [214], including alphanumeric and other keys, touch screen input means, etc., may be coupled to the bus [202] for communicating information and command selections to the processor [204]. Another type of user input device may be a cursor controller [216], such as a mouse, a trackball, or cursor direction keys, for communicating direction information and command selections to the processor [204], and for controlling cursor movement on the display [212]. This input device typically has two degrees of freedom in two axes, a first axis (e.g., x) and a second axis (e.g., y), that allow the device to specify positions in a plane.
[0078] The computing device [200] may implement the techniques described herein using customized hard-wired logic, one or more ASICs or FPGAs, firmware, and/or program logic which in combination with the computing device [200] causes or programs the computing device [200] to be a special-purpose machine. According to one implementation, the techniques herein are performed by the computing device [200] in response to the processor [204] executing one or more sequences of one or more instructions contained in the main memory [206]. Such instructions may be read into the main memory [206] from another storage medium, such as the storage device [210]. Execution of the sequences of instructions contained in the main memory [206] causes the processor [204] to perform the process steps described herein. In alternative implementations of the present disclosure, hard-wired circuitry may be used in place of or in combination with software instructions.
[0079] The computing device [200] also may include a communication interface [218] coupled to the bus [202]. The communication interface [218] provides a two-way data communication coupling to a network link [220] that is connected to a local network [222]. For example, the communication interface [218] may be an integrated services digital network (ISDN) card, cable modem, satellite modem, or a modem to provide a data communication connection to a corresponding type of telephone line. As another example, the communication interface [218] may be a local area network (LAN) card to provide a data communication connection to a compatible LAN. Wireless links may also be implemented. In any such implementation, the communication interface [218] sends and receives electrical, electromagnetic, or optical signals that carry digital data streams representing various types of information.
[0080] The computing device [200] can send messages and receive data, including program code, through the network(s), the network link [220] and the communication interface [218]. In the Internet example, a server [230] might transmit a requested code for an application program through the Internet [228], the ISP [226], the local network [222], the host [224] and the communication interface [218]. The received code may be executed by the processor [204] as it is received, and/or stored in the storage device [210], or other non-volatile storage for later execution.

[0081] The computing device [200] encompasses a wide range of electronic devices capable of processing data and performing computations. Examples of the computing device [200] include, but are not limited to, personal computers, laptops, tablets, smartphones, servers, and embedded systems. The devices may operate independently or as part of a network and can perform a variety of tasks such as data storage, retrieval, and analysis. Additionally, the computing device [200] may include peripheral devices, such as monitors, keyboards, and printers, as well as integrated components within larger electronic systems, showcasing their versatility in various technological applications.
[0082] Referring to FIG. 3, an exemplary block diagram of a system [300] for monitoring performance of network elements is shown, in accordance with the exemplary implementations of the present disclosure. The system [300] comprises at least one network element [302], at least one aggregation unit [304], at least one storage unit [306], at least one workflow engine [308], and components/units of the architecture [100]. Also, all of the components/units of the system [300] are assumed to be connected to each other unless otherwise indicated below. As shown in the figures, all units shown within the system should also be assumed to be connected to each other. Also, in FIG. 3 only a few units are shown; however, the system [300] may comprise multiple such units, or the system [300] may comprise any such number of said units, as required to implement the features of the present disclosure. Further, in an implementation, the system [300] may be present in a user device to implement the features of the present disclosure. The system [300] may be a part of the user device or may be independent of, but in communication with, the user device (also referred to herein as a UE). In another implementation, the system [300] may reside in a server or a network entity. In yet another implementation, the system [300] may reside partly in the server/network entity and partly in the user device.
[0083] The system [300] is configured for monitoring performance of network elements, with the help of the interconnection between the components/units of the system [300].
[0084] The system [300] comprises a performance management engine (PME) [104]. The PME [104] is configured to collect one or more performance parameters from one or more network elements [302] and process the collected one or more performance parameters for the one or more network elements [302]. The PME [104] of the system [300] may collect the one or more performance parameters from the one or more network elements [302]. The one or more performance parameters comprise at least one of a data radio bearer (DRB), a radio resource control (RRC), a radio resource utilization (RRU), a registration management (RM), a user equipment (UE) context, a session management (SM), a bandwidth usage, a latency, a packet loss, a connection speed, data transfer rates, and more. The one or more network elements [302] may be such as, but not limited to, servers, routers, gateways, and switches. In an exemplary aspect, the one or more network elements may be associated with a communication network such as a 5G network. In an implementation, the network nodes in the 5G network may be associated with an Access and Mobility Management Function (AMF), a Session Management Function (SMF), or core network elements such as, but not limited to, a 4G or 5G radio access network (RAN) node, etc. In an exemplary aspect, the one or more network elements may be associated with a communication network other than the 5G network, such as a 6G network and the like. After collecting the performance parameters, the PME [104] may process the collected one or more performance parameters for the one or more network elements [302]. In an operation, once the performance parameters or raw performance data is collected, the PME [104] may process the performance parameters or raw performance data. This processing may involve at least one of cleaning the performance data, normalizing the performance data, and summarizing the performance data into useful formats for further analysis. The PME [104] may send the processed one or more performance parameters to a key performance indicators (KPIs) engine [106] for further processing.
[0085] The system [300] comprises a scheduling layer [114]. The scheduling layer [114] may implement a technique for monitoring performance of the one or more network elements [302] at a predefined time interval. The user or network administrator may configure the time interval, such as, but not limited to, hours or days, for monitoring the performance of the one or more network elements [302]. Further, the user or network administrator may also configure monitoring aspects such as the types of network elements [302], the location(s) of the network elements [302], and so on.
[0086] The system [300] further comprises the key performance indicators (KPIs) engine [106]. The KPIs engine [106] is configured to calculate one or more key performance indicators (KPIs) for each of the one or more network elements [302] based on the processed one or more performance parameters and segregate the calculated one or more key performance indicators based on one or more criteria. In an operation, the KPIs engine [106] of the system [300] may calculate one or more KPIs for each of the one or more network elements [302] (e.g., server, server associated with AMF, etc.) based on the processed one or more performance parameters (e.g., bandwidth usage, latency, etc.). These KPIs might include data throughput, latency, packet loss rate, and more. The KPIs are specific measures that indicate how well different parts/units/components of the network are performing. Further, the KPIs engine [106] may segregate the calculated one or more KPIs based on the one or more criteria. Once the KPIs are computed, they are segregated based on the aggregation requirements, offering a multi-layered and detailed understanding of network performance. In an implementation, during operation, depending on the specific requirements of the network, the calculated KPIs are segregated or categorized based on different criteria. In an exemplary aspect, the one or more criteria comprise time, a number of aggregation levels, a type of node, a node instance, and a location. The criteria are selected for analysing the performance of the one or more network elements [302]. As used herein, the time may be provided by a user or network administrator for a selected duration of a time period. Further, the type of node may be such as, but not limited to, the server or the AMF-associated server. Furthermore, the location may be selected by the user or network administrator, such as a region, cell, or area boundary, for monitoring the performance of the network elements [302].
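The segregation of calculated KPIs by criteria such as node type and location can be sketched as a grouping operation; the record field names below are illustrative assumptions, not terms defined in the disclosure:

```python
# Hypothetical sketch of KPI segregation: group calculated KPI records
# by a chosen combination of criteria fields (e.g., node type, location).

def segregate(kpi_records, criteria):
    """Group KPI records by a tuple of the given criteria fields."""
    groups = {}
    for record in kpi_records:
        key = tuple(record[field] for field in criteria)
        groups.setdefault(key, []).append(record)
    return groups

records = [
    {"node_type": "AMF", "location": "region-1", "latency_ms": 10},
    {"node_type": "AMF", "location": "region-2", "latency_ms": 14},
    {"node_type": "SMF", "location": "region-1", "latency_ms": 9},
]
by_type = segregate(records, ["node_type"])
by_type_and_location = segregate(records, ["node_type", "location"])
```

Adding more criteria fields (time bucket, aggregation level, node instance) yields the multi-layered view described above, since each extra field refines the grouping.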
[0087] The system [300] comprises a distributed data lake [130]. The distributed data lake [130] is configured to store the one or more performance parameters and the one or more key performance indicators (KPIs) for the one or more network elements [302]. The performance parameters and KPI data may be stored to ensure a highly accessible, centralized, and scalable data repository for further analysis and utilization.
[0088] The system [300] further comprises a normalization layer [108]. The normalization layer [108] is configured to normalize the segregated one or more key performance indicators and transmit the normalized one or more key performance indicators to one or more subsystems. The normalization layer [108] may normalize the segregated one or more KPIs. The normalization layer [108] may convert the one or more performance parameters into a predefined standardized format. The normalization layer [108] may normalize and enrich (e.g., by adding some information or context) the KPI data into a standard format that can be easily used for analysis by the one or more subsystems. This process of "normalization" reduces redundancy and improves data integrity. Upon completion of normalization, the data is stored in various databases like the distributed data lake [130], Caching Layer [122], and Graph Layer [120], depending on its intended use. The choice of storage determines how the data can be accessed and used in the future.
[0089] After normalizing the KPI data, the normalization layer [108] may transmit the normalized one or more key performance indicators to one or more subsystems, such as, but not limited to, the aggregation unit [304], the analysis engine [110], and the elastic load balancer [112], for further processing and analysis of the KPI data. In an exemplary aspect, the normalization layer [108] uses a publish-subscribe message broker [118] to transmit the normalized one or more key performance indicators (KPIs) to the one or more subsystems. The message broker [118] operates as a publish-subscribe messaging system. It orchestrates and maintains the real-time flow of data from various sources and applications and transmits the normalized one or more KPIs to the one or more subsystems.
[0090] In an exemplary aspect, the system [300] comprises an aggregation unit [304]. The aggregation unit [304] is configured to aggregate the normalized one or more key performance indicators associated with the one or more network elements [302] to form an aggregated KPI output data. After receiving the normalized KPI data from the normalization layer [108], the aggregation unit [304] may aggregate the normalized one or more KPIs associated with the one or more network elements [302] (e.g., server, server associated with AMF, etc.) to form the aggregated KPI output data. In an aspect, the aggregated KPI output data may comprise at least one of: performance measurement, KPIs of the network elements [302], and location and time information for monitoring of the network elements [302]. The system [300] comprises a storage unit [306]. The storage unit [306] is configured to store the aggregated KPI output data in a database. The database may be the distributed data lake [130] or the caching layer [122].
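The aggregation step can be sketched as follows, assuming (for illustration only) that aggregation means averaging each KPI across the normalized records collected for a network element; the disclosure does not fix a particular aggregation function:

```python
# Hypothetical sketch of the aggregation unit: combine normalized KPI
# records for one network element into a single aggregated output record
# by averaging each KPI. The averaging choice is an illustrative assumption.

def aggregate_kpis(normalized_records):
    """Average each KPI across the records collected for one element."""
    totals, counts = {}, {}
    for record in normalized_records:
        for kpi, value in record["kpis"].items():
            totals[kpi] = totals.get(kpi, 0.0) + value
            counts[kpi] = counts.get(kpi, 0) + 1
    return {kpi: totals[kpi] / counts[kpi] for kpi in totals}

records = [
    {"element": "gnb-1", "kpis": {"latency_ms": 10.0, "throughput_mbps": 50.0}},
    {"element": "gnb-1", "kpis": {"latency_ms": 14.0, "throughput_mbps": 70.0}},
]
aggregated = aggregate_kpis(records)
```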
[0091] In an exemplary aspect, the system [300] further comprises a workflow engine [308]. The workflow engine [308] may execute one or more tasks based on an analysis of the aggregated KPI output data. For example, a breach of a KPI may be determined based on the analysis of the KPI output data. After analysing the breach of the KPI data, the workflow engine [308] may execute one or more tasks for implementing or recommending a remedial action to resolve the issue causing the breach of the KPI data. In an exemplary aspect, the workflow engine [308] automatically runs certain tasks at predefined intervals. For instance, it might automatically run a network diagnostic every night at 2 am and save the report to the database.
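The breach-detection and remedial-task behaviour described above can be sketched as follows; the threshold values and remedial task names are illustrative assumptions, not values from the disclosure:

```python
# Hypothetical sketch of the workflow engine: compare aggregated KPI
# output against thresholds and execute a remedial task per breached KPI.
# Thresholds and task names are illustrative assumptions.

THRESHOLDS = {"latency_ms": 20.0, "packet_loss_pct": 1.0}

def detect_breaches(aggregated_kpis, thresholds=THRESHOLDS):
    """Return the names of KPIs whose value exceeds the threshold."""
    return [kpi for kpi, value in aggregated_kpis.items()
            if kpi in thresholds and value > thresholds[kpi]]

def run_workflow(aggregated_kpis, actions):
    """Execute one remedial task per breached KPI; return the tasks run."""
    executed = []
    for kpi in detect_breaches(aggregated_kpis):
        executed.append(actions[kpi](kpi))
    return executed

actions = {"latency_ms": lambda kpi: f"rebalance_traffic({kpi})"}
ran = run_workflow({"latency_ms": 25.0, "packet_loss_pct": 0.2}, actions)
```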
[0092] In an exemplary aspect, the normalization layer [108] transmits the standardized data to several other network entities or subsystems. These include the Analysis Engine [110] for detailed data examination, the Correlation Engine [142] for detecting relationships among various data elements, the Service Quality Manager [138] for maintaining and improving the quality of services, and the Streaming Engine [144] for processing real-time data streams.
[0093] In an exemplary aspect, the analysis engine [110] of the system [300] may troubleshoot the one or more network elements [302] based on the one or more key performance indicators. The analysis engine [110] analyses the one or more KPIs and, if it determines that the one or more network elements [302] are not performing well, the analysis engine [110] may perform troubleshooting for the affected one or more network elements [302]. The network performance data is analysed using the Analysis Engine [110], which can help to identify trends, anomalies, and potential issues. For example, if the analysis engine [110] determines that one or more network elements [302], such as, but not limited to, servers at a specific location, are facing an excessive load or bandwidth usage issue, the analysis engine [110] may communicate with the elastic load balancer [112] to balance the excessive load or resolve the bandwidth usage issue.
[0094] In an exemplary aspect, the elastic load balancer [112] of the system [300] may distribute one or more incoming requests to the one or more network elements [302] based on the one or more key performance indicators. In an exemplary aspect, the elastic load balancer [112], on receiving one or more requests for the one or more network elements [302], may distribute the requests based on the KPIs for maintaining and managing the performance of the network elements [302]. For example, if one or more incoming requests involve load balancing or managing bandwidth usage, the elastic load balancer [112] may distribute such requests to other network elements [302] (e.g., servers) so that the overall performance and service experience may be optimized without compromising the quality of service (QoS). This helps to ensure that no single server is overwhelmed with too much traffic, thereby improving the overall performance and reliability of the network.
[0095] In an exemplary aspect, the system [300] further comprises a reporting engine [116] configured to provide a visualization of the one or more performance parameters and the one or more key performance indicators of the one or more network elements [302] in real-time. The reporting engine [116] of the system [300] may provide capabilities for monitoring network performance in real time and visualizing this data in an easily understandable format. This could involve dashboards, graphs, charts, and other types of visualizations. Based on this data, the user or network administrator may make operational or management decisions for the one or more network elements [302], such that network services may run without any failures and in an optimized manner.
[0096] Referring to FIG. 4, an exemplary method flow diagram [400] for monitoring performance of network elements, in accordance with exemplary implementations of the present disclosure, is shown. In an implementation, the method [400] is performed by the system [300]. Further, in an implementation, the system [300] may be present in a server device to implement the features of the present disclosure. Also, as shown in FIG. 4, the method [400] starts at step [402].
[0097] At step [404], the method [400] implemented by the present disclosure comprises collecting, by a performance management engine (PME) [104], one or more performance parameters from one or more network elements [302]. The PME [104] of the system [300] may collect the one or more performance parameters from the one or more network elements [302]. The one or more performance parameters comprise at least one of a data radio bearer (DRB), a radio resource control (RRC), a radio resource utilization (RRU), a registration management (RM), a user equipment (UE) context, a session management (SM), a bandwidth usage, a latency, a packet loss, a connection speed, data transfer rates, and more. The one or more network elements [302] may be such as, but not limited to, servers, routers, gateways, and switches. In an exemplary aspect, the one or more network elements may be associated with a communication network such as a 5G network. In an implementation, the network nodes in a 5G network may be associated with an Access and Mobility Management Function (AMF), a Session Management Function (SMF), and the like. In an exemplary aspect, the one or more network elements may be associated with a communication network other than a 5G network, such as a 6G network and the like. After collecting the performance parameters, the PME [104] may process the collected one or more performance parameters for the one or more network elements [302].
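By way of a non-limiting illustration only (the disclosure does not prescribe any particular API or protocol), the collection step of the PME [104] could be sketched in Python as follows; the element names, parameter names, and the `poll` helper are hypothetical stand-ins for a real query mechanism such as SNMP or a REST interface:

```python
# Minimal sketch of the PME collection step; all names are hypothetical.
def poll(element):
    # Stand-in for a real query to a network element; returns fixed
    # sample values purely for illustration.
    return {"bandwidth_usage_mbps": 420.0, "latency_ms": 12.5, "packet_loss_pct": 0.2}

def collect_parameters(elements):
    """Collect raw performance parameters from each network element."""
    return {e: poll(e) for e in elements}

samples = collect_parameters(["server-1", "router-1"])
```

In a real deployment the PME would poll each element over the network; the sketch only shows the shape of the collected data.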
[0098] At step [406], the method [400] implemented by the present disclosure comprises processing, by the PME [104], the collected one or more performance parameters for the one or more network elements [302]. In operation, once the performance parameters or raw performance data are collected, the PME [104] may process them. This processing may involve at least one of cleaning the performance data, normalizing the performance data, and summarizing the performance data into useful formats for further analysis. The PME [104] may send the processed one or more performance parameters to a key performance indicators (KPIs) engine [106] for further processing.
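As a non-limiting sketch of the cleaning and summarizing described above (the specific cleaning rules are assumptions, not prescribed by the disclosure), invalid readings could be dropped and the remainder summarized per metric:

```python
# Sketch of the PME processing step: clean raw readings, then summarize.
def process_samples(raw):
    """Drop invalid readings (None or negative) and average each metric."""
    cleaned = {
        metric: [v for v in values if v is not None and v >= 0]
        for metric, values in raw.items()
    }
    return {metric: sum(vs) / len(vs) for metric, vs in cleaned.items() if vs}

summary = process_samples({"latency_ms": [10.0, None, 14.0],
                           "packet_loss_pct": [-1.0, 0.2]})
```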
[0099] In an exemplary aspect, the method [400] may further comprise implementing, by a scheduling layer [114], a technique for monitoring performance of the one or more network elements [302] at a predefined time interval. The system [300] comprises a scheduling layer [114]. The scheduling layer [114] may implement a technique for monitoring performance of the one or more network elements [302] at a predefined time interval. The user or network administrator may configure a time interval, such as, but not limited to, hours or days, for monitoring the performance of the one or more network elements [302]. Further, the user or network administrator may also configure aspects of the monitoring, such as the types of network elements [302], the location(s) of the network elements [302], and so on.
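The predefined-interval behaviour of the scheduling layer [114] could be illustrated, without limitation, as a simple periodic loop; a production system would typically delegate to a scheduler such as cron, and the bounded iteration count here exists only so the sketch terminates:

```python
# Sketch of a scheduling layer running a monitoring task at a fixed interval.
import time

def run_periodically(task, interval_s, iterations):
    """Invoke `task` every `interval_s` seconds, `iterations` times."""
    results = []
    for _ in range(iterations):
        results.append(task())
        time.sleep(interval_s)
    return results

reports = run_periodically(lambda: "diagnostic-ok", interval_s=0.01, iterations=3)
```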
[0100] At step [408], the method [400] implemented by the present disclosure comprises calculating, by a Key Performance Indicators (KPIs) engine [106], one or more key performance indicators for each of the one or more network elements [302] based on the processed one or more performance parameters. In operation, the KPIs engine [106] of the system [300] may calculate one or more KPIs for each of the one or more network elements [302] (e.g., a server, a server associated with an AMF, etc.) based on the processed one or more performance parameters (e.g., bandwidth usage, latency, etc.). These KPIs might include data throughput, latency, packet loss rate, and more. The KPIs are specific measures that indicate how well different parts/units/components of the network are performing.
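By way of illustration only, KPI derivation per element might look as follows; the KPI formulas shown are assumptions for the sketch, not the formulas of the disclosure:

```python
# Sketch of the KPI engine: derive indicative KPIs from processed parameters.
def calculate_kpis(processed):
    """Compute illustrative KPIs per network element; formulas are assumed."""
    kpis = {}
    for element, p in processed.items():
        kpis[element] = {
            "throughput_mbps": p["bandwidth_usage_mbps"],
            "latency_ms": p["latency_ms"],
            # Delivery ratio derived from packet loss, as an example KPI.
            "packet_delivery_pct": 100.0 - p["packet_loss_pct"],
        }
    return kpis

kpis = calculate_kpis({"server-1": {"bandwidth_usage_mbps": 420.0,
                                    "latency_ms": 12.5,
                                    "packet_loss_pct": 0.2}})
```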
[0101] At step [410], the method [400] implemented by the present disclosure comprises segregating, by the Key Performance Indicators (KPIs) engine [106], the calculated one or more key performance indicators based on one or more criteria. Further, the KPIs engine [106] may segregate the calculated one or more KPIs based on one or more criteria. Once the KPIs are computed, they are segregated based on the aggregation requirements, offering a multi-layered and detailed understanding of network performance. In an implementation, during operation, depending on the specific requirements of the network, the calculated KPIs are segregated or categorized based on different criteria. In an exemplary aspect, the one or more criteria comprise a time, a type of node, a number of aggregation levels, a node instance, and a location. The criteria are selected for analysing the performance of the one or more network elements [302]. As used herein, the time may be provided by a user or network administrator for a selected duration, such as, but not limited to, an hourly, daily, weekly, monthly, or yearly basis. Further, the type of node may be such as, but not limited to, a RAN node, a server, or an AMF-associated server. Furthermore, the location may be selected by the user or network administrator, such as a region, a cell, or an area boundary, for monitoring the performance of the network elements [302].
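Segregation by a chosen criterion could, as a non-limiting sketch, be implemented as a grouping operation; the record fields (`node`, `type`, `location`) are hypothetical:

```python
# Sketch of segregating KPI records by a criterion such as location or
# node type; record fields are hypothetical.
from collections import defaultdict

def segregate(records, criterion):
    """Group KPI records by the value of the given criterion field."""
    groups = defaultdict(list)
    for rec in records:
        groups[rec[criterion]].append(rec)
    return dict(groups)

records = [
    {"node": "server-1", "type": "AMF", "location": "region-a", "latency_ms": 12.5},
    {"node": "server-2", "type": "SMF", "location": "region-a", "latency_ms": 9.0},
    {"node": "router-1", "type": "RAN", "location": "region-b", "latency_ms": 4.0},
]
by_location = segregate(records, "location")
```

The same function can group by any of the other criteria (time bucket, node type, node instance) simply by passing a different field name.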
[0102] In an exemplary aspect, the method [400] implemented by the distributed data lake [130] may store the one or more performance parameters and the one or more key performance indicators (KPIs) for the one or more network elements [302]. The performance parameters and KPI data may be stored to ensure a highly accessible, centralized, and scalable data repository for further analysis and utilization.
[0103] At step [412], the method [400] implemented by the present disclosure comprises normalizing, by a normalization layer [108], the segregated one or more key performance indicators. The normalization layer [108] of the system [300] may normalize the segregated one or more KPIs. The normalization layer [108] may convert the one or more performance parameters into a predefined standardized format. The normalization layer [108] may normalize and enrich (e.g., by adding some information or context) the KPI data into a standard format that can be easily used for analysis by the one or more subsystems. This process of "normalization" reduces redundancy and improves data integrity. Upon completion of normalization, the data is stored in various databases like the distributed data lake [130], the Caching Layer [122], and the Graph Layer [120], depending on its intended use. The choice of storage determines how the data can be accessed and used in the future.
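A non-limiting sketch of the normalization step is given below; the standardized schema, field names, and the enrichment field are assumptions for illustration only:

```python
# Sketch of the normalization layer: map differently shaped KPI records
# into one predefined standardized format and enrich them with context.
def normalize(record, node_type):
    """Standardize field names and attach enrichment; schema is assumed."""
    return {
        "node": record.get("node") or record.get("element_id"),
        "kpi": record.get("kpi") or record.get("metric"),
        "value": float(record.get("value")),
        "node_type": node_type,  # enrichment: context added during normalization
    }

std = normalize({"element_id": "server-1", "metric": "latency_ms", "value": "12.5"},
                node_type="AMF")
```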
[0104] At step [414], the method [400] implemented by the present disclosure comprises transmitting, by the normalization layer [108], the normalized one or more key performance indicators to one or more subsystems. After normalizing the KPI data, the normalization layer [108] may transmit the normalized one or more key performance indicators to one or more subsystems, such as, but not limited to, the aggregation unit [304], the analysis engine [110], and the elastic load balancer [112], for further processing and analysis of the KPI data. In an exemplary aspect, the normalization layer [108] uses a publish-subscribe message broker [118] to transmit the normalized one or more key performance indicators (KPIs) to the one or more subsystems. The message broker [118] operates as a publish-subscribe messaging system. It orchestrates and maintains the real-time flow of data from various sources and applications and transmits the normalized one or more KPIs to the one or more subsystems.
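The publish-subscribe pattern of the message broker [118] can be illustrated, without limitation, by the following in-memory sketch; a production deployment would typically use a dedicated broker (e.g., Kafka), and the topic name here is hypothetical:

```python
# Sketch of a publish-subscribe broker carrying normalized KPIs to
# subscribing subsystems; in-memory and illustrative only.
class MessageBroker:
    def __init__(self):
        self.subscribers = {}  # topic -> list of callbacks

    def subscribe(self, topic, callback):
        self.subscribers.setdefault(topic, []).append(callback)

    def publish(self, topic, message):
        # Deliver the message to every subsystem subscribed to this topic.
        for cb in self.subscribers.get(topic, []):
            cb(message)

broker = MessageBroker()
received = []
broker.subscribe("normalized-kpis", received.append)
broker.publish("normalized-kpis", {"node": "server-1", "latency_ms": 12.5})
```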
[0105] In an exemplary aspect, the system [300] comprises an aggregation unit [304]. The aggregation unit [304] is configured to aggregate the normalized one or more key performance indicators associated with the one or more network elements [302] to form an aggregated KPI output data. The term aggregation refers to the compilation of the normalized one or more KPIs associated with the one or more network elements [302] to form compiled KPI output data. After the KPI data is normalized by the normalization layer [108], the aggregation unit [304] may aggregate the normalized one or more KPIs associated with the one or more network elements [302] (e.g., a server, an AMF associated with a server, etc.) to form an aggregated KPI output data. In an aspect, the aggregated KPI output data may comprise at least one of: performance measurements, KPIs of the network elements [302], and location and time information for monitoring of the network elements [302]. The system [300] comprises a storage unit [306]. The storage unit [306] is configured to store the aggregated KPI output data in a database. The database may be a distributed data lake [130] or a caching layer [122].
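As a non-limiting sketch, the aggregation unit [304] could combine normalized per-element KPI values into an aggregated output; averaging is chosen here purely as an example aggregation function:

```python
# Sketch of the aggregation unit: combine normalized per-element KPI
# values into an aggregated output (here, an average per KPI).
def aggregate(normalized):
    """normalized: list of {"kpi": ..., "value": ...} records."""
    totals, counts = {}, {}
    for rec in normalized:
        totals[rec["kpi"]] = totals.get(rec["kpi"], 0.0) + rec["value"]
        counts[rec["kpi"]] = counts.get(rec["kpi"], 0) + 1
    return {kpi: totals[kpi] / counts[kpi] for kpi in totals}

agg = aggregate([
    {"kpi": "latency_ms", "value": 10.0},
    {"kpi": "latency_ms", "value": 14.0},
])
```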
[0106] In an exemplary aspect, the system [300] further comprises a workflow engine [308]. The workflow engine [308] may execute one or more tasks based on an analysis of the aggregated KPI output data. For example, a breach of a KPI may be determined based on the analysis of the KPI output data. After analysing the KPI breach, the workflow engine [308] may execute one or more tasks for implementing or recommending a remedial action to resolve the breach. In an exemplary aspect, the workflow engine [308] automatically runs certain tasks at predefined intervals. For instance, it might automatically run a network diagnostic every night at 2 am and save the report to the database.
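The breach check that drives the workflow engine [308] could be sketched, without limitation, as a threshold comparison over the aggregated KPI data; the thresholds and KPI names are assumptions for illustration:

```python
# Sketch of the workflow engine's breach check: compare aggregated KPI
# values against thresholds; thresholds and KPI names are assumed.
def check_breaches(agg_kpis, thresholds):
    """Return the (kpi, value) pairs whose value exceeds the threshold."""
    return [(k, v) for k, v in agg_kpis.items()
            if k in thresholds and v > thresholds[k]]

breaches = check_breaches(
    {"latency_ms": 45.0, "packet_loss_pct": 0.1},
    thresholds={"latency_ms": 30.0, "packet_loss_pct": 1.0},
)
```

Each breach returned could then be mapped to a remedial task, such as invoking the elastic load balancer [112] or raising an operator alert.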
[0107] In an exemplary aspect, the normalization layer [108] transmits the standardized data to several other network entities or subsystems. These include the Analysis Engine [110] for detailed data examination, the Correlation Engine [142] for detecting relationships among various data elements, the Service Quality Manager [138] for maintaining and improving the quality of services, and the Streaming Engine [144] for processing real-time data streams.
[0108] In an exemplary aspect, the analysis engine [110] of the system [300] may troubleshoot the one or more network elements [302] based on the one or more key performance indicators. The analysis engine [110] analyses the one or more KPIs and, if it determines that one or more network elements [302] are not performing well, may perform troubleshooting for the affected one or more network elements [302]. The network performance data is analysed using the analysis engine [110], which can help to identify trends, anomalies, and potential issues. For example, if the analysis engine [110] determines that one or more network elements [302], such as, but not limited to, servers at a specific location, are facing an excessive load or bandwidth usage issue, the analysis engine [110] may communicate with the elastic load balancer [112] to resolve the excessive load or bandwidth usage issue.
[0109] In an exemplary aspect, the elastic load balancer [112] of the system [300] may distribute one or more incoming requests to the one or more network elements [302] based on the one or more key performance indicators. In an exemplary aspect, the elastic load balancer [112], on receiving one or more requests directed to the one or more network elements [302], may distribute the requests based on the KPIs for maintaining and managing the performance of the network elements [302]. For example, if one or more incoming requests involve load balancing or managing bandwidth usage, the elastic load balancer [112] may distribute such requests to other network elements [302] (e.g., servers) so that the overall performance and service experience may be optimized without compromising the quality of service (QoS). This helps to ensure that no single server is overwhelmed with too much traffic, thereby improving the overall performance and reliability of the network.
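A least-loaded dispatch policy is one non-limiting way the elastic load balancer [112] could distribute incoming requests; the load metric and element names below are assumptions:

```python
# Sketch of an elastic load balancer sending each incoming request to the
# element with the lowest current load; the load metric is assumed.
def pick_element(load_by_element):
    return min(load_by_element, key=load_by_element.get)

def dispatch(requests, load_by_element):
    assignments = []
    for req in requests:
        target = pick_element(load_by_element)
        load_by_element[target] += 1  # account for the newly assigned request
        assignments.append((req, target))
    return assignments

plan = dispatch(["r1", "r2", "r3"], {"server-1": 2, "server-2": 0})
```

In a KPI-driven variant, the load metric could itself be a normalized KPI (e.g., bandwidth usage) received from the normalization layer [108].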
[0110] In an exemplary aspect, the system [300] further comprises a reporting engine [116] configured to provide a visualization of the one or more performance parameters and the one or more key performance indicators of the one or more network elements [302] in real-time. The reporting engine [116] of the system [300] may provide capabilities for monitoring network performance in real time and visualizing this data in an easily understandable format. This could involve dashboards, graphs, charts, and other types of visualizations. Based on this data, the user or network administrator may make operational or management decisions for the one or more network elements [302], such that network services may run without any failures and in an optimized manner.
[0111] Thereafter, the method [400] terminates at step [416].
[0112] The present disclosure further discloses a non-transitory computer readable storage medium storing instructions for monitoring performance of network elements, the instructions including executable code which, when executed by one or more units of a system, causes a performance management engine (PME) [104] of the system to collect one or more performance parameters from one or more network elements [302], and process the collected one or more performance parameters for the one or more network elements [302]. Further, the instructions when executed cause a key performance indicators (KPIs) engine [106] of the system to calculate one or more key performance indicators for each of the one or more network elements [302] based on the processed one or more performance parameters and segregate the calculated one or more key performance indicators based on one or more criteria. Further, the instructions when executed cause a normalization layer [108] of the system to normalize the segregated one or more key performance indicators and transmit the normalized one or more key performance indicators to one or more subsystems.
[0113] Yet another aspect of the present disclosure comprises a user equipment (UE). The UE comprises a processor configured to receive normalized one or more key performance indicators, wherein the one or more key performance indicators are normalized based on: collecting one or more performance parameters from one or more network elements; processing the collected one or more performance parameters for the one or more network elements; calculating one or more key performance indicators for each of the one or more network elements based on the processed one or more performance parameters; segregating the calculated one or more key performance indicators based on one or more criteria; normalizing the segregated one or more key performance indicators; and transmitting the normalized one or more key performance indicators to one or more subsystems.
[0114] In an example, an Integrated Performance Management system is used for monitoring and analysing performance counters of network elements in a 5G or 6G telecommunications network. The Performance Management Engine gathers performance data from various nodes in the network, which could include devices like servers, routers, or antennas. For instance, it might collect information such as the amount of data being transferred, the speed of data transfer, the number of active connections, etc. This collected raw data is then processed, cleaned, and aggregated. For example, it might compile all the data transfer speeds from all the antennas into a single average speed for the entire network.
[0115] Based on the processed data, the KPI Engine calculates KPIs for each network element. For instance, it might calculate the average uptime for each server in the network, or the average latency for each router. The calculated KPIs are then categorized based on the required level of aggregation. For example, uptime KPIs could be segregated by geographical location, allowing network operators to quickly compare server performance across different regions. The system also receives other data, such as alarm signals or logs. These data are normalized and enriched to ensure they are in a standard format. For instance, an alarm signal might be enriched with additional information like the time it was triggered and the device that triggered it. This enriched data is then sent to different subsystems for further analysis. For example, the alarm signals might be sent to the Correlation Engine to identify if multiple alarms are related and indicate a larger network issue. The system displays real-time performance data on a dashboard. For instance, network operators can see a live graph of data transfer speeds across the network, or a map showing the status of all servers.
[0116] The system automatically runs certain tasks at predefined intervals. For instance, it might automatically run a network diagnostic every night at 2 am and save the report to a database. The message broker ensures smooth communication between different applications within the system. For instance, it might forward a command from the dashboard application to the Analysis Engine to run a specific analysis. Using the Analysis Engine, operators can identify issues or trends in the network. For example, they might identify a recurring issue where data transfer speeds drop significantly at the same time every day. Lastly, the Elastic Load Balancer ensures that incoming network traffic is efficiently distributed across all servers. This prevents any single server from being overwhelmed, improving the overall performance and reliability of the network.
ADVANTAGES OF THE PRESENT INVENTION
[0117] The present disclosure provides a method and system for monitoring and analysing performance counters of network elements.
[0118] The present disclosure provides a method and system for monitoring and analysing performance counters of network elements that aims to provide real-time monitoring of network performance across all nodes, enabling quicker identification and resolution of potential issues.
[0119] The present disclosure provides a method and system for monitoring and analysing performance counters of network elements that handles vast amounts of data efficiently, rapidly processing and analysing performance counter data from a variety of sources.
[0120] The present disclosure provides a method and system for monitoring and analysing performance counters of network elements that is designed to be highly scalable, capable of managing the data volume and variety associated with larger, more complex networks.
[0121] The present disclosure provides a method and system for monitoring and analysing performance counters of network elements that aims to offer more sophisticated data analysis, providing valuable insights into network performance based on features like the Analysis Engine and Parallel Computing Framework.
[0122] The present disclosure provides a method and system for monitoring and analysing performance counters of network elements that offers comprehensive KPI management. The 5G Key Performance Indicator (KPI) Engine is intended to manage all the KPIs of all network elements effectively, allowing for more detailed and flexible performance measurement.
[0123] The present disclosure provides a method and system for monitoring and analysing performance counters of network elements that aims to offer an integrated view of network performance, making it easier to understand the overall state of the network and to identify any potential issues.
[0124] The present disclosure provides a method and system for monitoring and analysing performance counters of network elements that is easy to maintain. By using a layered architecture and microservices approach, this system aims to be easier to maintain and update than traditional monolithic systems.
[0125] The present disclosure provides a method and system for monitoring and analysing performance counters of network elements that is designed to be flexible and adaptable, able to adjust to changing network operations and requirements.
[0126] The present disclosure provides a method and system for monitoring and analysing performance counters of network elements that, by automating various tasks like KPI calculations and scheduling, aims to reduce the workload of network operators and minimize the potential for human error.
[0127] The present disclosure provides a method and system for monitoring and analysing performance counters of network elements that, by storing data in a Distributed Data Lake and enabling cross-system communication, aims to prevent data silos, thereby improving the effectiveness of data analysis.
[0128] As is evident from the above, the present disclosure provides a technically advanced solution for monitoring the performance of network elements. The disclosed method and system execute a comprehensive process for collecting, processing, and analysing performance parameters and key performance indicators (KPIs) from network elements. The present disclosure allows for real-time visualization, efficient troubleshooting, and optimal performance management of network elements. The solution of the proposed disclosure significantly reduces the manual effort required for performance monitoring and ensures a more reliable and efficient management of network operations. Additionally, the integrated nature of the system facilitates seamless data normalization, aggregation, and storage, enhancing the overall performance management process. The present disclosure provides a less labour-intensive and more effective way to monitor and manage the performance of network elements across a network infrastructure.
[0129] Further, in accordance with the present disclosure, it is to be acknowledged that the functionality described for the various components/units can be implemented interchangeably. While specific embodiments may disclose a particular functionality of these units for clarity, it is recognized that various configurations and combinations thereof are within the scope of the disclosure. The functionality of specific units, as disclosed in the disclosure, should not be construed as limiting the scope of the present disclosure. Consequently, alternative arrangements and substitutions of units, provided they achieve the intended functionality described herein, are considered to be encompassed within the scope of the present disclosure.
[0130] While considerable emphasis has been placed herein on the disclosed embodiments, it will be appreciated that many embodiments can be made and that many changes can be made to the embodiments without departing from the principles of the present disclosure. These and other changes in the embodiments of the present disclosure will be apparent to those skilled in the art, whereby it is to be understood that the foregoing descriptive matter to be implemented is illustrative and non-limiting.

We Claim:
1. A method for monitoring performance of network elements, the method
comprising:
collecting, by a performance management engine (PME) [104], one or more performance parameters from one or more network elements [302];
processing, by the PME [104], the collected one or more performance parameters for the one or more network elements [302];
calculating, by a Key Performance Indicators (KPIs) engine [106], one or more key performance indicators for each of the one or more network elements [302] based on the processed one or more performance parameters;
segregating, by the Key Performance Indicators (KPIs) engine [106], the calculated one or more key performance indicators based on one or more criteria;
normalizing, by a normalization layer [108], the segregated one or more key performance indicators; and
transmitting, by the normalization layer [108], the normalized one or more key performance indicators to one or more subsystems.
2. The method as claimed in claim 1, further comprises aggregating, by an aggregation unit [304], the normalized one or more key performance indicators associated with one or more network elements [302] to form an aggregated KPI output data.
3. The method as claimed in claim 2 further comprises storing, by a storage unit [306], the aggregated KPI output data in a database.
4. The method as claimed in claim 2 further comprises executing, via a workflow engine [308], one or more tasks based on an analysis of the aggregated KPI output data.

5. The method as claimed in claim 1, the method further comprising providing, on a reporting engine [116], visualization of the one or more performance parameters and the one or more key performance indicators of the one or more network elements [302] in real-time.
6. The method as claimed in claim 1, the method further comprising implementing, by a scheduling layer [114], a technique for monitoring performance of the one or more network elements [302] at a predefined time interval.
7. The method as claimed in claim 1, the method further comprising troubleshooting, by an analysis engine [110], the one or more network elements [302] based on the one or more key performance indicators.
8. The method as claimed in claim 1, the method further comprising distributing, by an elastic load balancer [112], one or more incoming requests to the one or more network elements [302] based on the one or more key performance indicators.
9. The method as claimed in claim 1, wherein the normalization layer [108] uses a publish-subscribe message broker [118] to transmit the normalized one or more key performance indicators to one or more subsystems.
10. The method as claimed in claim 1, wherein the one or more performance parameters comprises at least one of a data radio bearer (DRB), a radio resource control (RRC), a radio resource utilization (RRU), a registration management (RM), a user equipment (UE) context, a session management (SM), a bandwidth usage, a latency, and a packet loss.

11. The method as claimed in claim 1, the method further comprising storing, in Distributed Data Lake [130], the processed one or more performance parameters for the one or more network elements [302].
12. The method as claimed in claim 1, wherein normalizing the segregated one or more key performance indicators further comprises converting the one or more performance parameters into a predefined standardized format.
13. The method as claimed in claim 1, wherein the one or more criteria comprises time, number of aggregation levels, a type of node, node instance, and a location.
14. A system for monitoring performance of network elements, the system comprising:
a performance management engine (PME) [104] configured to:
collect one or more performance parameters from one or more network elements [302], and
process the collected one or more performance parameters for the one or more network elements [302];
a key performance indicator (KPIs) engine [106] configured to:
calculate one or more key performance indicators for each of the one or more network elements [302] based on the processed one or more performance parameters, and
segregate the calculated one or more key performance indicators based on one or more criteria; and
a normalization layer [108] configured to:
normalize the segregated one or more key performance indicators, and
transmit the normalized one or more key performance indicators to one or more subsystems.

15. The system as claimed in claim 14, further comprising an aggregation unit [304] configured to aggregate the normalized one or more key performance indicators associated with one or more network elements [302] to form an aggregated KPI output data.
16. The system as claimed in claim 15 further comprising a storage unit [306] configured to store the aggregated KPI output data in a database.
17. The system as claimed in claim 15 further comprising a workflow engine [308] configured to execute one or more tasks based on an analysis of the aggregated KPI output data.
18. The system as claimed in claim 14, the system further comprising a reporting engine [116] configured to provide a visualization of the one or more performance parameters and the one or more key performance indicators of the one or more network elements [302] in real-time.
19. The system as claimed in claim 14, the system further comprising a scheduling layer [114] configured to implement a technique for monitoring performance of the one or more network elements [302] at a predefined time interval.
20. The system as claimed in claim 14, the system further comprising an analysis engine [110] configured to troubleshoot the one or more network elements [302] based on the one or more key performance indicators.
21. The system as claimed in claim 14, the system further comprising an elastic load balancer [112] configured to distribute one or more incoming requests to the one or more network elements [302] based on the one or more key performance indicators.
22. The system as claimed in claim 14, wherein the normalization layer [108] uses a publish-subscribe message broker [118] to transmit the normalized one or more key performance indicators to one or more subsystems.
23. The system as claimed in claim 14, wherein the one or more performance parameters comprises at least one of a data radio bearer (DRB), a radio resource control (RRC), a radio resource utilization (RRU), a registration management (RM), a user equipment (UE) context, a session management (SM), a bandwidth usage, a latency, and a packet loss.
24. The system as claimed in claim 14, the system further comprising a Distributed Data Lake [130] to store the one or more performance parameters and the one or more key performance indicators for the one or more network elements [302].
25. The system as claimed in claim 14, wherein the normalization layer [108] is configured to convert the one or more performance parameters into a predefined standardized format.
26. The system as claimed in claim 14, wherein the one or more criteria comprise a time, number of aggregation levels, a type of node, node instance and a location.
27. A user equipment (UE) comprising:
a processor configured to:
receive one or more normalized key performance indicators;
wherein the one or more key performance indicators are normalized based on:
collecting one or more performance parameters from one or more network elements [302];
processing the collected one or more performance parameters for the one or more network elements [302];
calculating one or more key performance indicators for each of the one or more network elements [302] based on the processed one or more performance parameters;
segregating the calculated one or more key performance indicators based on one or more criteria;
normalizing the segregated one or more key performance indicators; and
transmitting the normalized one or more key performance indicators to one or more subsystems.
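The collect–process–calculate–segregate–normalize–transmit flow recited in the claims can be sketched as below. All function names, field names, and data shapes here are illustrative assumptions: the claims do not fix any concrete data format, KPI formula, or segregation criterion, and the publish step to subsystems is stubbed out rather than wired to an actual message broker.

```python
# Hypothetical sketch of the claimed KPI monitoring pipeline.
# None of these names or structures come from the specification.
from collections import defaultdict


def collect(elements):
    # Collect raw performance parameters from each network element
    # (values fabricated for illustration).
    return [{"element": e, "latency_ms": 20.0 + i, "packet_loss": 0.01 * i}
            for i, e in enumerate(elements)]


def calculate_kpis(samples):
    # Derive one illustrative KPI per element from its raw parameters.
    return [{"element": s["element"],
             "kpi": {"quality": 1.0 - s["packet_loss"]}}
            for s in samples]


def segregate(kpis, criterion):
    # Group KPIs by a criterion such as node type, instance, or location.
    groups = defaultdict(list)
    for record in kpis:
        groups[criterion(record)].append(record)
    return dict(groups)


def normalize(groups):
    # Flatten each KPI record into a single standardized format
    # suitable for transmission to downstream subsystems.
    return [{"node": record["element"], "metric": name, "value": value}
            for records in groups.values()
            for record in records
            for name, value in record["kpi"].items()]


samples = collect(["gnb-1", "gnb-2"])
kpis = calculate_kpis(samples)
grouped = segregate(kpis, lambda r: r["element"].split("-")[0])
records = normalize(grouped)
```

In a full system the `records` list would be published to subscribing subsystems (for example via a publish-subscribe broker as in claim 22); here the pipeline stops at the standardized records.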

Documents

Application Documents

# Name Date
1 202321047792-STATEMENT OF UNDERTAKING (FORM 3) [15-07-2023(online)].pdf 2023-07-15
2 202321047792-PROVISIONAL SPECIFICATION [15-07-2023(online)].pdf 2023-07-15
3 202321047792-FORM 1 [15-07-2023(online)].pdf 2023-07-15
4 202321047792-FIGURE OF ABSTRACT [15-07-2023(online)].pdf 2023-07-15
5 202321047792-DRAWINGS [15-07-2023(online)].pdf 2023-07-15
6 202321047792-FORM-26 [18-09-2023(online)].pdf 2023-09-18
7 202321047792-Proof of Right [23-10-2023(online)].pdf 2023-10-23
8 202321047792-ORIGINAL UR 6(1A) FORM 1 & 26)-301123.pdf 2023-12-08
9 202321047792-FORM-5 [12-07-2024(online)].pdf 2024-07-12
10 202321047792-ENDORSEMENT BY INVENTORS [12-07-2024(online)].pdf 2024-07-12
11 202321047792-DRAWING [12-07-2024(online)].pdf 2024-07-12
12 202321047792-CORRESPONDENCE-OTHERS [12-07-2024(online)].pdf 2024-07-12
13 202321047792-COMPLETE SPECIFICATION [12-07-2024(online)].pdf 2024-07-12
14 202321047792-FORM 3 [02-08-2024(online)].pdf 2024-08-02
15 Abstract-1.jpg 2024-08-16
16 202321047792-Request Letter-Correspondence [16-08-2024(online)].pdf 2024-08-16
17 202321047792-Power of Attorney [16-08-2024(online)].pdf 2024-08-16
18 202321047792-Form 1 (Submitted on date of filing) [16-08-2024(online)].pdf 2024-08-16
19 202321047792-Covering Letter [16-08-2024(online)].pdf 2024-08-16
20 202321047792-CERTIFIED COPIES TRANSMISSION TO IB [16-08-2024(online)].pdf 2024-08-16
21 202321047792-FORM 18A [10-03-2025(online)].pdf 2025-03-10
22 202321047792-FER.pdf 2025-10-14

Search Strategy

1 202321047792_SearchStrategyNew_E_SearchStrategyE_10-10-2025.pdf