
Method And System For Creating A Network Area

Abstract: The present disclosure relates to a method and a system for creating a network area. The disclosure encompasses: receiving, at a User Interface (UI) [202], a request for creating the network area; transmitting, by a load balancer [100k], the request to an integrated performance management (IPM) [100a]; storing, by the IPM [100a], a data associated with the request at a Distributed Data Lake (DDL) [100u]; transmitting, by the IPM [100a], the data associated with the request to an Indexer (IN) [208]; analysing, by the Indexer [208], the data associated with the request to create the network area; creating, by the Indexer [208], the network area based on the analysis of the data associated with the request; enriching, by the indexer [208], a network data associated with the created network area based on a set of user input received from a user; and uploading, by the indexer [208], the enriched network data at the DDL [100u] for storage. [Fig. 3]


Patent Information

Application #
Filing Date
19 July 2023
Publication Number
04/2025
Publication Type
INA
Invention Field
COMPUTER SCIENCE
Status
Email
Parent Application

Applicants

Jio Platforms Limited
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India.

Inventors

1. Ankit Murarka
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India.
2. Aayush Bhatnagar
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India
3. Jugal Kishore
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India
4. Gaurav Kumar
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India
5. Kishan Sahu
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India
6. Rahul Kumar
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India
7. Sunil Meena
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India
8. Gourav Gurbani
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India
9. Sanjana Chaudhary
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India
10. Chandra Ganveer
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India
11. Supriya Kaushik De
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India
12. Debashish Kumar
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India
13. Mehul Tilala
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India
14. Dharmendra Kumar Vishwakarma
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India
15. Yogesh Kumar
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India
16. Niharika Patnam
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India
17. Harshita Garg
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India
18. Avinash Kushwaha
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India
19. Sajal Soni
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India
20. Kunal Telgote
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India
21. Manasvi Rajani
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India

Specification

FORM 2
THE PATENTS ACT, 1970 (39 OF 1970) & THE PATENT RULES, 2003
COMPLETE SPECIFICATION
(See section 10 and rule 13)
“METHOD AND SYSTEM FOR CREATING A NETWORK
AREA”
We, Jio Platforms Limited, an Indian National, of Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India.
The following specification particularly describes the invention and the manner in which it is to be performed.

METHOD AND SYSTEM FOR CREATING A NETWORK AREA
FIELD OF INVENTION
[0001] The present disclosure generally relates to a network performance management system. More particularly, the present disclosure relates to a method and system of creating a static network area and a dynamic network area.
BACKGROUND
[0002] The following description of the related art is intended to provide background information pertaining to the field of the disclosure. This section may include certain aspects of the art that may be related to various features of the present disclosure. However, it should be appreciated that this section is used only to enhance the understanding of the reader with respect to the present disclosure, and not as admissions of the prior art.
[0003] Network performance management systems typically track network elements and data from network monitoring tools and combine and process such data to determine key performance indicators (KPI) of the network. Integrated performance management systems provide the means to visualize the network performance data so that network operators and other relevant stakeholders are able to identify the service quality of the overall network, and individual/ grouped network elements. By having an overall as well as detailed view of the network performance, the network operators can detect, diagnose, and remedy actual service issues, as well as predict potential service issues or failures in the network and take precautionary measures accordingly.
[0004] In network performance management systems, a mix of different kinds of information generally gives a more complete understanding of a scenario. A user might require a new field in the documents and results, with the possibility of it being derived from two or more existing fields or from a sub-part of an existing field. Further, over time, various solutions have been developed to provide the user with different kinds of information, or with required information from various fields; however, there are certain challenges with the existing solutions. For instance, the existing solutions are not efficient in providing the user with the required information or a mix of different kinds of information, leading to a partial or vague understanding of a scenario in the network systems. Moreover, the existing solutions are inefficient in deriving a required new field in the documents and results from two or more existing fields/information or from a sub-part of an existing field/information available at disposal. Furthermore, such limitations also make the existing solutions unable to help operations roll up and drill down the monitoring of KPIs and counters for their troubleshooting.
[0005] Thus, there exists an imperative need in the art to provide a solution that can overcome these and other limitations of the existing solutions, which the present disclosure aims to address.
OBJECTS OF THE INVENTION
[0006] Some of the objects of the present disclosure, which at least one embodiment disclosed herein satisfies, are listed herein below.
[0007] It is an object of the present disclosure to provide a solution that creates a static network area and a dynamic network area from the existing information/fields, or from a sub-part of an existing field available at disposal in network systems, wherein the dynamic network area and the static network area are created at least to fulfill a user requirement of a new field in the documents and results.
[0008] It is another object of the present disclosure to provide a solution that helps operations roll up and drill down the monitoring of Key Performance Indicators (KPIs) and counters for their troubleshooting, by providing the dynamic network area and the static network area from the existing information/fields to fulfill a user requirement.
[0009] It is yet another object of the present disclosure to provide a solution that provides an enrichment facility for new fields that is completely autonomous, scheduled, follows user-defined rules, and takes effect as soon as the dynamic network areas (converged network areas (CNAs), and hierarchical network areas (HNAs)) and static network areas (SNAs) are created.
[0010] It is yet another object of the present disclosure to provide a solution that provides users with a facility for auto-enrichment.
[0011] It is yet another object of the present disclosure to provide a solution that decides values for the newly created network area based on the values of the old existing field(s), with the flexibility to provide a mapping between the two by either entering the values one by one manually or uploading them using a data file such as a spreadsheet.
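The manual-or-spreadsheet mapping described in this object can be sketched as follows. This is an illustrative sketch only; the function names (`load_mapping`, `apply_mapping`), the two-column CSV layout, and the `UNMAPPED` fallback are assumptions for the example, not part of the disclosure.

```python
import csv
import io

def load_mapping(csv_text):
    """Parse a two-column CSV mapping old-field values to new-area values."""
    mapping = {}
    for row in csv.DictReader(io.StringIO(csv_text)):
        mapping[row["old_value"]] = row["new_value"]
    return mapping

def apply_mapping(records, field, new_field, mapping):
    """Assign each record a value for the new network-area field."""
    for rec in records:
        # Values without a user-provided mapping are flagged rather than guessed.
        rec[new_field] = mapping.get(rec[field], "UNMAPPED")
    return records

# Simulated spreadsheet upload: old cell identifiers mapped to new zone names.
uploaded = "old_value,new_value\nCELL-001,ZONE-NORTH\nCELL-002,ZONE-SOUTH\n"
mapping = load_mapping(uploaded)
records = [{"cell": "CELL-001"}, {"cell": "CELL-003"}]
apply_mapping(records, "cell", "zone", mapping)
```

The same `mapping` dictionary could equally be built from values entered one by one, which is the other entry path the object describes.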
[0012] It is yet another object of the present disclosure to provide a solution that helps in drilling down the information at various levels in the network for enhanced analysis.
[0013] It is yet another object of the present disclosure to provide a solution that, when needed, provides a facility for a user to create network areas from the existing network areas in the same manner, and allows the user to modify the network logic in real time whilst observing the corresponding changes.
SUMMARY
[0014] This section is provided to introduce certain aspects of the present disclosure in a simplified form that are further described below in the detailed description. This summary is not intended to identify the key features or the scope of the claimed subject matter.
[0015] According to an aspect of the present disclosure, a method for creating a network area is disclosed. The method includes receiving, at a User Interface (UI), a request for creating the network area. Next, the method includes transmitting, by a load balancer, the request to an integrated performance management (IPM). Next, the method includes storing, by the IPM, a data associated with the request at a Distributed Data Lake (DDL). Next, the method includes transmitting, by the IPM, the data associated with the request to an Indexer (IN). Next, the method includes analysing, by the Indexer, the data associated with the request to create the network area. Next, the method includes creating, by the Indexer, the network area based on the analysis of the data associated with the request. Next, the method includes enriching, by the Indexer, a network data associated with the created network area based on a set of user input received from a user. Thereafter, the method includes uploading, by the Indexer, the enriched network data at the DDL for storage.
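The sequence of steps in this method can be illustrated with a minimal, self-contained sketch. The class and method names below are hypothetical, the DDL is stood in for by a plain dictionary, and the load-balancer routing is elided; this is a reading aid under those assumptions, not the claimed implementation.

```python
class CreateAreaPipeline:
    """Sketch of the flow: UI request -> load balancer -> IPM -> DDL/Indexer."""

    def __init__(self, ddl):
        self.ddl = ddl  # Distributed Data Lake stand-in: a dict of named stores

    def handle_request(self, request, user_inputs):
        # The load balancer forwards the UI request to the IPM (routing elided).
        self.ddl.setdefault("requests", []).append(request)  # IPM stores request data
        area = self.index(request)                           # IPM forwards to Indexer
        enriched = self.enrich(area, user_inputs)            # Indexer enriches the area
        self.ddl.setdefault("areas", []).append(enriched)    # enriched data uploaded
        return enriched

    def index(self, request):
        # The Indexer analyses the request: the selected nodes and
        # categories define the network area to be created.
        return {"name": request["area_name"],
                "nodes": request["nodes"],
                "categories": request["categories"]}

    def enrich(self, area, user_inputs):
        # User-supplied enrichment fields are merged into the created area.
        area.update(user_inputs)
        return area

pipeline = CreateAreaPipeline(ddl={})
result = pipeline.handle_request(
    {"area_name": "SNA-1", "nodes": ["gNB-01", "gNB-02"], "categories": ["radio"]},
    {"region": "west"})
```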
[0016] In an exemplary aspect of the present disclosure, the enrichment of the network data is performed in a predefined scheduled interval of time.
[0017] In an exemplary aspect of the present disclosure, the network area comprises at least one of a static network area or a dynamic network area.
[0018] In an exemplary aspect of the present disclosure, the request to create the network area comprises at least a selection of one or more nodes, and one or more categories for which the network area is to be created.
[0019] In an exemplary aspect of the present disclosure, the set of user input for the enrichment comprises: a first input from the user for selection of at least one existing field from which the new network area is to be derived; and a second input from the user for selection of an operation to be executed on the selected at least one existing field.
[0020] In an exemplary aspect of the present disclosure, the enrichment of the network data based on the set of user input comprises: generating, by the Indexer, a value corresponding to the executed operation on the selected at least one existing field; mapping, by the Indexer, the generated value to a pre-defined value provided within a data set; and assigning, by the Indexer, the mapped value to the created new network area.
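The generate-map-assign enrichment just described can be sketched as below, under the assumption that fields are plain dictionary keys and the selected operation is any callable. The names (`enrich_area`, `take_prefix`) and the prefix-to-region mapping are illustrative, not from the specification.

```python
def enrich_area(record, source_field, operation, value_map, new_field):
    """Derive a new network-area field from an existing field.

    1. Execute the selected operation on the existing field's value.
    2. Map the generated value to a pre-defined value from the data set.
    3. Assign the mapped value to the new network-area field.
    """
    generated = operation(record[source_field])
    mapped = value_map.get(generated, generated)  # fall back to the raw value
    record[new_field] = mapped
    return record

# Example: derive a region from the first token of a node identifier.
take_prefix = lambda node_id: node_id.split("-")[0]
value_map = {"MUM": "Mumbai-Region", "DEL": "Delhi-Region"}
rec = enrich_area({"node": "MUM-gNB-042"}, "node", take_prefix, value_map, "region")
```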
[0021] According to another aspect of the present disclosure, a system for creating a network area is disclosed. The system comprises: a User Interface (UI) configured to receive a request for creating the network area; and a load balancer configured to transmit the request to an integrated performance management (IPM). Further, the IPM is configured to: store a data associated with the request at a Distributed Data Lake (DDL); and transmit the data associated with the request to an Indexer (IN). Furthermore, the system comprises an Indexer configured to: analyse the data associated with the request to create the network area; create the network area based on the analysis of the data associated with the request; enrich a network data associated with the created network area based on a set of user input received from a user; and upload the enriched network data at the DDL for storage.
[0022] According to yet another aspect of the present disclosure, a user equipment (UE) for creating a network area is disclosed. The UE comprises a processor configured to: send, via a User Interface (UI), a request for creating the network area; transmit, via a load balancer, the request to an integrated performance management (IPM); store, via the IPM, the data associated with the request at a Distributed Data Lake (DDL); transmit, via the IPM, the data associated with the request to an Indexer (IN); analyse, via the Indexer, the data associated with the request to create the network area; create, via the Indexer, the network area based on the analysis of the data associated with the request; enrich, via the Indexer, a network data associated with the created network area based on a set of user input received from a user; and upload, via the Indexer, the enriched network data at the DDL for storage.
[0023] Yet another aspect of the present disclosure relates to a non-transitory computer-readable storage medium storing instructions for creating a network area, the storage medium comprising executable code which, when executed by one or more units of a system, causes: a User Interface (UI) [202] to receive a request for creating the network area; a load balancer [100k] to transmit the request to an integrated performance management (IPM) [100a]; the IPM [100a] to: store a data associated with the request at a Distributed Data Lake (DDL) [100u], and transmit the data associated with the request to an Indexer (IN) [208]; and the Indexer [208] to: analyse the data associated with the request to create the network area, create the network area based on the analysis of the data associated with the request, enrich a network data associated with the created network area based on a set of user input received from a user, and upload the enriched network data at the DDL [100u] for storage.
BRIEF DESCRIPTION OF THE DRAWINGS
[0024] The accompanying drawings, which are incorporated herein and constitute a part of this disclosure, illustrate exemplary embodiments of the disclosed methods and systems in which like reference numerals refer to the same parts throughout the different drawings. Components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present disclosure. Also, the embodiments shown in the figures are not to be construed as limiting the disclosure, but the possible variants of the method and system according to the disclosure are illustrated herein to highlight the advantages of the disclosure. It will be appreciated by those skilled in the art that disclosure of such drawings includes disclosure of electrical components or circuitry commonly used to implement such components.

[0025] Fig. 1 illustrates an exemplary block diagram of a network performance management system, in accordance with the exemplary embodiments of the present invention.
[0026] Fig. 2 illustrates an exemplary system for creating a network area, i.e., a static network area and a dynamic network area, in accordance with the exemplary embodiments of the present invention.
[0027] Fig. 3 illustrates an exemplary method flow diagram indicating the process for creating a network area, i.e., a static network area and a dynamic network area, in accordance with the exemplary embodiments of the present invention.
[0028] Fig. 4 illustrates an exemplary process for creating a network area, i.e., a static network area and a dynamic network area, in accordance with the exemplary embodiments of the present invention.
[0029] Fig. 5 illustrates an exemplary block diagram of a computing device upon which an embodiment of the present disclosure may be implemented.
[0030] The foregoing shall be more apparent from the following more detailed description of the disclosure.
DETAILED DESCRIPTION
[0031] In the following description, for the purposes of explanation, various specific details are set forth in order to provide a thorough understanding of embodiments of the present disclosure. It will be apparent, however, that embodiments of the present disclosure may be practiced without these specific details. Several features described hereafter may each be used independently of one another or with any combination of other features. An individual feature may not address any of the problems discussed above or might address only some of the problems discussed above.
[0032] The ensuing description provides exemplary embodiments only, and is not intended to limit the scope, applicability, or configuration of the disclosure. Rather, the ensuing description of the exemplary embodiments will provide those skilled in the art with an enabling description for implementing an exemplary embodiment. It should be understood that various changes may be made in the function and arrangement of elements without departing from the spirit and scope of the disclosure as set forth.
[0033] Specific details are given in the following description to provide a thorough understanding of the embodiments. However, it will be understood by one of ordinary skill in the art that the embodiments may be practiced without these specific details. For example, circuits, systems, processes, and other components may be shown as components in block diagram form in order not to obscure the embodiments in unnecessary detail.
[0034] Also, it is noted that individual embodiments may be described as a process which is depicted as a flowchart, a flow diagram, a data flow diagram, a structure diagram, or a block diagram. Although a flowchart may describe the operations as a sequential process, many of the operations may be performed in parallel or concurrently. In addition, the order of the operations may be re-arranged. A process is terminated when its operations are completed but could have additional steps not included in a figure.
[0035] The word “exemplary” and/or “demonstrative” is used herein to mean serving as an example, instance, or illustration. For the avoidance of doubt, the subject matter disclosed herein is not limited by such examples. In addition, any aspect or design described herein as “exemplary” and/or “demonstrative” is not necessarily to be construed as preferred or advantageous over other aspects or designs, nor is it meant to preclude equivalent exemplary structures and techniques known to those of ordinary skill in the art. Furthermore, to the extent that the terms “includes,” “has,” “contains,” and other similar words are used in either the detailed description or the claims, such terms are intended to be inclusive—in a manner similar to the term “comprising” as an open transition word—without precluding any additional or other elements.
[0036] As used herein, a “processing unit” or “processor” or “operating processor” includes one or more processors, wherein a processor refers to any logic circuitry for processing instructions. A processor may be a general-purpose processor, a special-purpose processor, a conventional processor, a digital signal processor, a plurality of microprocessors, one or more microprocessors in association with a DSP core, a controller, a microcontroller, Application Specific Integrated Circuits, Field Programmable Gate Array circuits, any other type of integrated circuit, etc. The processor may perform signal coding, data processing, input/output processing, and/or any other functionality that enables the working of the system according to the present disclosure. More specifically, the processor or processing unit is a hardware processor.
[0037] As used herein, “a user equipment”, “a user device”, “a smart-user-device”, “a smart-device”, “an electronic device”, “a mobile device”, “a handheld device”, “a wireless communication device”, “a mobile communication device”, and “a communication device” may be any electrical, electronic and/or computing device or equipment capable of implementing the features of the present disclosure. The user equipment/device may include, but is not limited to, a mobile phone, smart phone, laptop, a general-purpose computer, desktop, personal digital assistant, tablet computer, wearable device or any other computing device which is capable of implementing the features of the present disclosure. Also, the user device may contain at least one input means configured to receive an input from at least one of a transceiver unit, a processing unit, a storage unit, a detection unit and any other such unit(s) which are required to implement the features of the present disclosure.

[0038] As used herein, “storage unit” or “memory unit” refers to a machine or computer-readable medium including any mechanism for storing information in a form readable by a computer or similar machine. For example, a computer-readable medium includes read-only memory (“ROM”), random access memory (“RAM”), magnetic disk storage media, optical storage media, flash memory devices or other types of machine-accessible storage media. The storage unit stores at least the data that may be required by one or more units of the system to perform their respective functions.
[0039] As used herein, an indexer refers to a component within the network system that analyses data associated with a user's request to create and enrich network areas. The indexer performs data analysis, creates network areas based on the analysed data, enriches the network data by applying user-defined operations to existing fields, and assigns the resulting values to the new network areas. The enriched data is then stored in the Distributed Data Lake (DDL) for future use and retrieval.
[0040] As used herein, nodes refer to individual or multiple points within a network that can process or transfer data. These nodes can represent various entities, such as devices, servers, or virtual entities, and are essential components in the creation and management of network areas. The nodes serve as the building blocks for network configurations, allowing users to define and categorize different segments of the network based on specific criteria and operations.
[0041] As used herein, categories refer to classifications or groups within a network that organize nodes or data based on shared characteristics or attributes. These categories help in structuring the network by grouping similar types of data or nodes, facilitating more targeted analysis and management. Users can select these categories when creating network areas, enabling customized and efficient organization of network resources.

[0042] As used herein, a network area refers to a defined segment within a network created for specific analysis or management purposes. The network area can encompass static or dynamic configurations and includes selected nodes and categories that are grouped based on user-defined criteria. The segmentation allows for focused monitoring, performance assessment, and enrichment of network data, enhancing the ability to drill down or roll up information for comprehensive network management.
[0043] As discussed in the background section, the current known solutions have several shortcomings. The present disclosure aims to overcome the above-mentioned and other existing problems in this field of technology by providing a solution that can create a static network area and a dynamic network area from the existing information/fields, or from a sub-part of an existing field available at disposal in network systems, wherein the dynamic network area and the static network area are created at least to fulfill a user requirement of a new field in the documents and results. Moreover, based on the implementation of the features of the present disclosure, network areas, i.e., the dynamic network area and the static network area, can be created at different granularities: for one network node only, for multiple nodes in the network, for one category in a network node, or for selected categories in a network node. Hence, this helps in drilling down the information at various levels in the network for enhanced analysis. Also, the solution is executed on the stored values of the counters in the database before displaying the output, and it helps operations to roll up and drill down the monitoring of KPIs and counters for their troubleshooting.
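The granularity choices listed above (one node, multiple nodes, one category, or selected categories within a node) can be pictured as a simple filter over a node inventory. The function name `select_scope` and the inventory schema are illustrative assumptions, not part of the specification.

```python
def select_scope(inventory, nodes=None, categories=None):
    """Filter a node inventory down to the granularity a network area covers.

    Passing neither filter covers the whole network; passing one or both
    narrows the area to specific nodes and/or categories.
    """
    scope = inventory
    if nodes is not None:
        scope = [entry for entry in scope if entry["node"] in nodes]
    if categories is not None:
        scope = [entry for entry in scope if entry["category"] in categories]
    return scope

inventory = [
    {"node": "gNB-01", "category": "radio"},
    {"node": "gNB-01", "category": "transport"},
    {"node": "gNB-02", "category": "radio"},
]
one_node = select_scope(inventory, nodes={"gNB-01"})                        # one node
one_category = select_scope(inventory, nodes={"gNB-01"}, categories={"radio"})  # one category in a node
```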
[0044] Hereinafter, exemplary embodiments of the present disclosure will be described with reference to the accompanying drawings.
[0045] Fig. 1 illustrates an exemplary block diagram of a network performance management system [100], in accordance with the exemplary embodiments of the present invention. Referring to Fig. 1, the network performance management system [100] comprises various sub-systems such as: integrated performance management system [100a], normalization layer [100b], computation layer [100d], anomaly detection layer [100o], streaming engine [100l], load balancer [100k], operations and management system [100p], API gateway system [100r], analysis engine [100h], parallel computing framework [100i], forecasting engine [100t], distributed file system [100j], mapping layer [100s], distributed data lake [100u], scheduling layer [100g], reporting engine [100m], message broker [100e], graph layer [100f], caching layer [100c], service quality manager [100q] and correlation engine [100n]. Exemplary connections between these subsystems are also shown in Fig. 1. However, it will be appreciated by those skilled in the art that the present disclosure is not limited to the connections shown in the diagram, and any other connections between various subsystems that are needed to realise the effects are within the scope of this disclosure.
[0046] The various components of the system [100] may include the following:
[0047] The integrated performance management system [100a] comprises one or more 5G Performance Engines [100v] and one or more 5G Key Performance Indicator (KPI) Engines [100w].
[0048] Integrated performance management (IPM) system [100a]: The IPM collects performance counters to visualize the performance counters of a node, creates and analyses the KPIs, and creates counter/KPI reports consisting of single or multiple nodes with multiple levels of aggregation.
[0049] 5G Performance Management Engine [100v]: The 5G Performance Management engine [100v] is a crucial component of the integrated system, responsible for collecting, processing, and managing performance counter data from various data sources within the network. The gathered data includes metrics such as connection speed, latency, data transfer rates, and many others. This raw data is then processed and aggregated as required, forming a comprehensive overview of network performance. The processed information is then stored in a Distributed Data Lake [100u], a centralized, scalable, and flexible storage solution, allowing for easy access and further analysis. The 5G Performance Management engine [100v] also enables the reporting and visualization of this performance counter data, thus providing network administrators with a real-time, insightful view of the network's operation. Through these visualizations, operators can monitor the network's performance, identify potential issues, and make informed decisions to enhance network efficiency and reliability.
[0050] 5G Key Performance Indicator (KPI) Engine [100w]: The 5G Key Performance Indicator (KPI) Engine is a dedicated component tasked with managing the KPIs of all the network elements. It uses the performance counters, which are collected and processed by the 5G Performance Management engine from various data sources. These counters, encapsulating crucial performance data, are harnessed by the KPI engine [100w] to calculate essential KPIs. These KPIs might include data throughput, latency, packet loss rate, and more. Once the KPIs are computed, they are segregated based on the aggregation requirements, offering a multi-layered and detailed understanding of network performance. The processed KPI data is then stored in the Distributed Data Lake [100u], ensuring a highly accessible, centralized, and scalable data repository for further analysis and utilization. Similar to the Performance Management engine, the KPI engine [100w] is also responsible for reporting and visualization of KPI data. This functionality allows network administrators to gain a comprehensive, visual understanding of the network's performance, thus supporting informed decision-making and efficient network management.
[0051] Ingestion layer [not shown]: The Ingestion layer forms a key part of the Integrated Performance Management system. Its primary function is to establish an environment capable of handling diverse types of incoming data. This data may include Alarms, Counters, Configuration parameters, Call Detail Records (CDRs), Infrastructure metrics, Logs, and Inventory data, all of which are crucial for maintaining and optimizing the network's performance. Upon receiving this data, the Ingestion layer processes it by validating its integrity and correctness to ensure it is fit for further use. Following validation, the data is routed to various components of the system, including the Normalization layer, Streaming Engine, Streaming Analytics, and Message Brokers. The destination is chosen based on where the data is required for further analytics and processing. By serving as the first point of contact for incoming data, the Ingestion layer plays a vital role in managing the data flow within the system, thus supporting comprehensive and accurate network performance analysis.
[0052] Normalization layer [100b]: The Normalization Layer [100b] serves to standardize, enrich, and store data into the appropriate databases. It takes in data that has been ingested and adjusts it to a common standard, making it easier to compare and analyse. This process of "normalization" reduces redundancy and improves data integrity. Upon completion of normalization, the data is stored in various databases like the Distributed Data Lake [100u], Caching Layer [100c], and Graph Layer [100f], depending on its intended use. The choice of storage determines how the data can be accessed and used in the future. Additionally, the Normalization Layer [100b] produces data for the Message Broker, a system that enables communication between different parts of the performance management system through the exchange of data messages. Moreover, the Normalization Layer [100b] supplies the standardized data to several other subsystems. These include the Analysis Engine for detailed data examination, the Correlation Engine [100n] for detecting relationships among various data elements, the Service Quality Manager [100q] for maintaining and improving the quality of services, and the Streaming Engine [100l] for processing real-time data streams. These subsystems depend on the normalized data to perform their operations effectively and accurately, demonstrating the Normalization Layer's [100b] critical role in the entire system.
15

[0053] Caching layer [100c]: The Caching Layer [100c] in the Integrated Performance Management system plays a significant role in data management and optimization. During the initial phase, the Normalization Layer [100b] processes incoming raw data to create a standardized format, enhancing consistency and comparability. The Normalization Layer [100b] then inserts this normalized data into various databases. One such database is the Caching Layer [100c]. The Caching Layer [100c] is a high-speed data storage layer which temporarily holds data that is likely to be reused, to improve the speed and performance of data retrieval. By storing frequently accessed data in the Caching Layer [100c], the system significantly reduces the time taken to access this data, improving overall system efficiency and performance. Further, the Caching Layer [100c] serves as an intermediate layer between the data sources and the sub-systems, such as the Analysis Engine, Correlation Engine [100n], Service Quality Manager, and Streaming Engine. The Normalization Layer [100b] is responsible for providing these sub-systems with the necessary data from the Caching Layer [100c].
[0054] Computation layer [100d]: The Computation Layer [100d] in the Integrated Performance Management system serves as the main hub for complex data processing tasks. In the initial stages, raw data is gathered, normalized, and enriched by the Normalization Layer [100b]. The Normalization Layer [100b] then inserts this standardized data into multiple databases including the Distributed Data Lake [100u], Caching Layer [100c], and Graph Layer [100f], and also feeds it to the Message Broker [100e]. Within the Computation Layer [100d], several powerful sub-systems such as the Analysis Engine [100h], Correlation Engine [100n], Service Quality Manager, and Streaming Engine utilize the normalized data. These systems are designed to execute various data processing tasks. The Analysis Engine performs in-depth data analytics to generate insights from the data. The Correlation Engine [100n] identifies and understands the relations and patterns within the data. The Service Quality Manager assesses and ensures the quality of the services, and the Streaming Engine processes and analyses the real-time data feeds. In essence, the Computation Layer [100d] is where all major computation and data processing tasks occur. It uses the normalized data provided by the Normalization Layer [100b], processing it to generate useful insights, ensure service quality, understand data patterns, and facilitate real-time data analytics.
[0055] Message broker [100e]: The Message Broker [100e], an integral part of the Integrated Performance Management system, operates as a publish-subscribe messaging system. It orchestrates and maintains the real-time flow of data from various sources and applications. At its core, the Message Broker [100e] facilitates communication between data producers and consumers through message-based topics. This creates an advanced platform for contemporary distributed applications. With the ability to accommodate a large number of permanent or ad-hoc consumers, the Message Broker [100e] demonstrates immense flexibility in managing data streams. Moreover, it leverages the filesystem for storage and caching, boosting its speed and efficiency. The design of the Message Broker [100e] is centred around reliability. It is engineered to be fault-tolerant and to mitigate data loss, ensuring the integrity and consistency of the data. With its robust design and capabilities, the Message Broker [100e] forms a critical component in managing and delivering real-time data in the system.
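The topic-based producer/consumer decoupling described above can be illustrated with a minimal in-process sketch. The class and method names are assumptions; the disclosed broker is additionally fault-tolerant and filesystem-backed, which this sketch does not attempt to show.

```python
from collections import defaultdict

class MessageBroker:
    """Minimal publish-subscribe sketch (illustrative only).

    Producers publish to named topics; any number of permanent or
    ad-hoc consumers subscribe to a topic and receive its messages.
    """

    def __init__(self):
        self._subscribers = defaultdict(list)  # topic -> list of callbacks

    def subscribe(self, topic, callback):
        """Register a consumer callback for a topic."""
        self._subscribers[topic].append(callback)

    def publish(self, topic, message):
        """Deliver a message to every consumer of the topic."""
        for callback in self._subscribers[topic]:
            callback(message)
```

A consumer of an `"alarms"` topic, for instance, receives only messages published to that topic, leaving producers and consumers unaware of each other.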
[0056] Graph layer [100f]: The Graph Layer [100f], serving as the Relationship Modeler, plays a pivotal role in the Integrated Performance Management system. It can model a variety of data types, including alarm, counter, configuration, CDR data, Infra-metric data, 5G Probe Data, and Inventory data. Equipped with the capability to establish relationships among diverse types of data, the Relationship Modeler offers extensive modelling capabilities. For instance, it can model Alarm and Counter data, or Vprobe and Alarm data, elucidating their interrelationships. Moreover, the Modeler should be adept at processing the steps provided in the model and delivering the results to the requesting system, whether it be a Parallel Computing system, Workflow Engine, Query Engine, Correlation System [100n], 5G Performance Management Engine, or 5G KPI Engine [100u]. With its powerful modelling and processing capabilities, the Graph Layer [100f] forms an essential part of the system, enabling the processing and analysis of complex relationships between various types of network data.
[0057] Scheduling layer [100g]: The Scheduling Layer [100g] serves as a key element of the Integrated Performance Management System, endowed with the ability to execute tasks at predetermined intervals set according to user preferences. A task might be an activity performing a service call, an API call to another microservice, or the execution of an Elastic Search query whose output is stored in the Distributed Data Lake [100u] or Distributed File System or sent to another micro-service. The versatility of the Scheduling Layer [100g] extends to facilitating graph traversals via the Mapping Layer to execute tasks. This crucial capability enables seamless and automated operations within the system, ensuring that various tasks and services are performed on schedule, without manual intervention, enhancing the system's efficiency and performance. In sum, the Scheduling Layer [100g] orchestrates the systematic and periodic execution of tasks, making it an integral part of the efficient functioning of the entire system.
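The interval-based task execution described above can be sketched deterministically as follows. The `Scheduler` class and its method names are assumptions for illustration; the real layer additionally handles service calls, API calls, and graph traversals.

```python
# Hypothetical sketch of the Scheduling layer: tasks registered with a
# user-defined interval run whenever that interval has elapsed.

class Scheduler:
    def __init__(self):
        self._tasks = []  # each entry: [interval_seconds, last_run, callable]

    def register(self, interval, task):
        """Register a task to run every `interval` seconds."""
        self._tasks.append([interval, 0, task])

    def tick(self, now):
        """Run every task whose interval has elapsed at time `now`."""
        results = []
        for entry in self._tasks:
            interval, last_run, task = entry
            if now - last_run >= interval:
                results.append(task())
                entry[1] = now  # record the latest execution time
        return results
```

Driving `tick` from a clock reproduces the "execute at predetermined intervals, without manual intervention" behaviour while keeping the sketch testable.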
[0058] Analysis Engine [100h]: The Analysis Engine [100h] forms a crucial part of the Integrated Performance Management System, designed to provide an environment where users can configure and execute workflows for a wide array of use-cases. This facility aids in the debugging process and facilitates a better understanding of call flows. With the Analysis Engine [100h], users can perform queries on data sourced from various subsystems or external gateways. This capability allows for an in-depth overview of data and aids in pinpointing issues. The system's flexibility allows users to configure specific policies aimed at identifying anomalies within the data. When these policies detect abnormal behaviour or policy breaches, the system sends notifications, ensuring swift and responsive action. In essence, the Analysis Engine [100h] provides a robust analytical environment for systematic data interrogation, facilitating efficient problem identification and resolution, thereby contributing significantly to the system's overall performance management.
[0059] Parallel Computing Framework [100i]: The Parallel Computing Framework [100i] is a key aspect of the Integrated Performance Management System, providing a user-friendly yet advanced platform for executing computing tasks in parallel. This framework showcases both scalability and fault tolerance, crucial for managing vast amounts of data. Users can input data via Distributed File System (DFS) [100j] locations or Distributed Data Lake (DDL) indices. The framework supports the creation of task chains by interfacing with the Service Configuration Management (SCM) Sub-System. Each task in a workflow is executed sequentially, but multiple chains can be executed simultaneously, optimizing processing time. To accommodate varying task requirements, the service supports the allocation of specific host lists for different computing tasks. The Parallel Computing Framework [100i] is an essential tool for enhancing processing speeds and efficiently managing computing resources, significantly improving the system's performance management capabilities.
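The execution model described above (tasks within a chain run sequentially, while independent chains run simultaneously) can be sketched as follows; the function names and the thread-pool realization are assumptions for illustration.

```python
from concurrent.futures import ThreadPoolExecutor

def run_chain(tasks):
    """Execute the tasks of one chain sequentially, piping each result forward."""
    result = None
    for task in tasks:
        result = task(result)
    return result

def run_chains(chains, max_workers=4):
    """Execute multiple independent chains in parallel (illustrative sketch)."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(run_chain, chains))
```

Each chain preserves its internal ordering, while the pool overlaps the chains themselves, mirroring the "sequential within a workflow, simultaneous across chains" description.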
[0060] Distributed File System [100j]: The Distributed File System (DFS) [100j] is a critical component of the Integrated Performance Management System, enabling multiple clients to access and interact with data seamlessly. This file system is designed to manage data files that are partitioned into numerous segments known as chunks. In the context of a network with vast data, the DFS [100j] effectively allows for the distribution of data across multiple nodes. This architecture enhances both the scalability and redundancy of the system, ensuring optimal performance even with large data sets. DFS [100j] also supports diverse operations, facilitating the flexible interaction with and manipulation of data. This accessibility is paramount for a system that requires constant data input and output, as is the case in a robust performance management system.
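The chunk partitioning and distribution described above can be sketched minimally as follows; the chunk size, round-robin placement, and function names are assumptions, and replication (which underpins the redundancy mentioned above) is not shown.

```python
def split_into_chunks(data: bytes, chunk_size: int):
    """Partition a file's bytes into fixed-size chunks for distribution."""
    return [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]

def assign_chunks(chunks, nodes):
    """Assign chunk indices to storage nodes round-robin (illustrative only)."""
    return {i: nodes[i % len(nodes)] for i in range(len(chunks))}
```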
[0061] Load Balancer [100k]: The Load Balancer (LB) [100k] is a vital component of the Integrated Performance Management System, designed to efficiently distribute incoming network traffic across a multitude of backend servers or microservices. Its purpose is to ensure the even distribution of data requests, leading to optimized server resource utilization, reduced latency, and improved overall system performance. The LB [100k] implements various routing strategies to manage traffic. These include round-robin scheduling, header-based request dispatch, and context-based request dispatch. Round-robin scheduling is a simple method of rotating requests evenly across available servers. In contrast, header- and context-based dispatching allow for more intelligent, request-specific routing. Header-based dispatching routes requests based on data contained within the headers of the Hypertext Transfer Protocol (HTTP) requests. Context-based dispatching routes traffic based on contextual information about the incoming requests. For example, in an event-driven architecture, the LB [100k] manages events and event acknowledgments, forwarding requests or responses to the specific microservice that has requested the event. This system ensures efficient, reliable, and prompt handling of requests, contributing to the robustness and resilience of the overall performance management system.
[0062] Streaming Engine [100l]: The Streaming Engine [100l], also referred to as Stream Analytics, is a critical subsystem in the Integrated Performance Management System. This engine is specifically designed for high-speed data pipelining to the User Interface (UI). Its core objective is to ensure real-time data processing and delivery, enhancing the system's ability to respond promptly to dynamic changes. Data is received from various connected subsystems and processed in real-time by the Streaming Engine [100l]. After processing, the data is streamed to the UI, fostering rapid decision-making and responses. The Streaming Engine [100l] cooperates with the Distributed Data Lake [100u], Message Broker [100e], and Caching Layer [100c] to provide seamless, real-time data flow. Stream Analytics is designed to perform required computations on incoming data instantly, ensuring that the most relevant and up-to-date information is always available at the UI. Furthermore, this system can also retrieve data from the Distributed Data Lake [100u], Message Broker [100e], and Caching Layer [100c] as per the requirement and deliver it to the UI in real-time. The Streaming Engine's [100l] ultimate goal is to provide fast, reliable, and efficient data streaming, contributing to the overall performance of the management system.
[0063] Reporting Engine [100m]: The Reporting Engine [100m] is a key subsystem of the Integrated Performance Management System. The fundamental purpose of the Reporting Engine [100m] is to dynamically create report layouts of API data, catered to individual client requirements, and deliver these reports via the Notification Engine (not shown). The Reporting Engine [100m] serves as the primary interface for creating custom reports based on the data visualized through the client's dashboard. These custom dashboards, created by the client through the User Interface (UI), provide the basis for the Reporting Engine [100m] to process and compile data from various interfaces. The main output of the Reporting Engine [100m] is a detailed report generated in spreadsheet format. The Reporting Engine's [100m] unique capability to parse data from different subsystem interfaces, process it according to the client's specifications and requirements, and generate a comprehensive report makes it an essential component of this performance management system. Furthermore, the Reporting Engine [100m] integrates seamlessly with the Notification Engine (not shown) to ensure timely and efficient delivery of reports to clients via email, ensuring the information is readily accessible and usable, thereby improving overall client satisfaction and system usability.
[0064] The present invention focuses on the creation of network areas, i.e., a dynamic network area and a static network area, via a user interface (UI), an integrated performance management system (IPMS), an indexer (IN), and a distributed data lake (DDL). In order to create the network areas, in an implementation, the solution as disclosed by the present disclosure is implemented via an exemplary system [200] as shown in Fig. 2 for creating the static network area and the dynamic network area, in accordance with the exemplary embodiments of the present invention, wherein the system [200] works in conjunction with the system [100]. In an implementation, the dynamic network area refers to a flexible and changing network environment based on network conditions in real time, such as network IP address. In an implementation, the static network area refers to a fixed network environment set up for providing services in the network.
[0065] Referring now to Fig. 2, which illustrates an exemplary system for creating a network area, i.e., a static network area and a dynamic network area, in accordance with the exemplary embodiments of the present invention. As shown in Fig. 2, the system [200] comprises at least one user interface (UI) [202], at least one load balancer [100k], at least one integrated performance management (IPM)/integrated performance management system (IPMS) [100a], at least one indexer (IN) [208], and at least one distributed data lake (DDL) [100u]. The devices/components shown in Fig. 2 are for illustrative purposes only; the system [200] is not restricted to the shown devices/components, and more devices/components may be present in the system [200].
[0066] Further, the UI [202] of the system [200] is configured to receive a request for creating the network area. In an implementation, a user or a network administrator may request creation of the network area from the UI [202]. Further, the request to create the network area comprises at least a selection of one or more nodes, and one or more categories for which the network area is to be created, i.e., the user may provide with the request at least one of parameters such as, but not limited to, cluster, circle, a number of network node(s) (e.g., one or more), one or more category types of network nodes (e.g., customer service type, network establishing type), one or more types of network fields (e.g., static network area, dynamic network area), one or more network attributes (e.g., throughput, latency, packet loss rate, performance counter, etc.), and one or more geographic locations and boundary regions. In an implementation, the UI [202] may be a part of, or externally attached to, a computing device, smartphone, laptop, human machine interface (HMI), and the like. After the request for creating the network area is processed, the user or network administrator may save the created network area.
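The parameters listed above can be pictured as a single request payload assembled at the UI [202]. The field names and values below are hypothetical, chosen only to mirror the listed parameters, and the validity check reflects the minimum stated requirement (nodes and categories).

```python
# Hypothetical shape of a network-area creation request; every field name
# here is an assumption based on the parameters listed in the description.
request = {
    "area_type": "static",               # static or dynamic network area
    "nodes": ["server-1", "switch-3"],   # one or more selected nodes
    "categories": ["customer service"],  # category type(s) of the nodes
    "attributes": {                      # network attributes for the area
        "throughput": "1Gbps",
        "latency_ms": 20,
    },
    "geography": {"circle": "Gujarat", "boundary": "Ahmedabad"},
}

def is_valid_request(req: dict) -> bool:
    """Minimal check: a request must select nodes and categories."""
    return bool(req.get("nodes")) and bool(req.get("categories"))
```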
[0067] In an implementation, the network area comprises at least one of the static network area and/or the dynamic network area. In an implementation, the one or more nodes comprise at least one of servers, switches, databases, and gateways. In an aspect, the one or more nodes may be associated with a communication network. In an implementation, the one or more nodes may be associated in the communication network with network functions, such as the access and mobility management function (AMF) and the session management function (SMF). In an implementation, the one or more nodes comprise servers or databases associated with the AMF and SMF. Further, in an implementation, the one or more categories may be at least one of, but not limited to, a customer service type, a network service establishing type, and a premium service type.
[0068] The system [200] further comprises the load balancer [100k], which may distribute the incoming traffic from the UI [202] or other network components/devices. The load balancer [100k] efficiently routes the traffic to other network components so that network operation is optimally maintained and performance is not affected. The load balancer [100k] is configured to transmit the request to an integrated performance management (IPM) [100a]. In an implementation, the load balancer [100k] may transmit the network traffic from the UI [202] to one of the IPM [100a] instances (hereinafter also referred to as IPMS unit [100a]) which has a low network load.
[0069] The system [200] further comprises the IPM/IPMS unit [100a], which is configured to store data associated with the request at a Distributed Data Lake (DDL) [100u], i.e., the data related to the user's created or configured network area and the defined set of parameters received from the UI [202] via the load balancer [100k]. The IPM/IPMS unit [100a] stores the received network area creation request data and the set of parameters into the DDL [100u]. The IPM [100a] is further configured to transmit the data associated with the request to the Indexer (IN) [208], i.e., to send the received network area request data and the set of parameters to the indexer for analysing and creating the network area.
[0070] The system [200] further comprises the Indexer [208], which is configured to analyse the data associated with the request to create the network area. Further, the indexer [208] may be configured to analyse the set of parameters associated with the network area. Further, the indexer [208] may be configured to create the network area based on the analysis of the data associated with the request, i.e., the indexer [208] analyses the user's requested network area data with the set of parameters and creates the network area based on the analysis of the network data associated with the request. Further, the indexer [208] is configured to enrich network data associated with the created network area based on a set of user input received from a user. Further, the enrichment of the network data is performed at a predefined scheduled interval of time. The indexer [208] may provide one or more option(s) to receive a set of inputs from the user to enrich the network data associated with the created network area. Thereafter, the indexer [208] is configured to upload the enriched network data at the DDL [100u] for storage, i.e., it stores the enriched network data into the DDL [100u].
option(s) to receive a set of inputs from the user to enrich a network data associated with the created network area. Thereafter, the indexer [208] is configured to upload the enriched network data at the DDL [100u] for storage i.e., to stores the enriched network data into the DDL [100u].
20
[0071] Further, as disclosed by the present disclosure, the set of user input for the enrichment comprises a first input from the user for selection of at least one existing field from which the new network area is to be derived, and a second input from the user for selection of an operation to be executed on the selected at least one existing field. Further, to enrich the network data based on the set of user input, the indexer [208] is configured to generate a value corresponding to the executed operation on the selected at least one existing field. Further, to enrich the network data based on the set of user input, the indexer [208] is configured to map the generated value to a pre-defined value provided within a data set. Thereafter, to enrich the network data based on the set of user input, the indexer [208] is configured to assign the mapped value to the created new network area.
[0072] In an implementation, the user may select, via the UI [202], at least one existing field such as a 'static network area field' or a 'dynamic network area field' from which a new network area is to be derived via the indexer [208]. Further, the user may provide one or more inputs for selection of an operation to be executed on the selected at least one existing field. In an implementation, the user may perform one or more operations such as, but not limited to, concatenating, splitting, or otherwise transforming the data. The operation helps transform or manipulate the data within the existing field. Based on the applied operation on the selected existing field, the indexer [208] generates a value and then maps this generated value to a corresponding pre-defined value within a data set. In an implementation, the user or network administrator may define a pre-defined data set and value in a spreadsheet format. Further, the user or network administrator may define predefined mappings or rules that specify how certain or exemplary values should be translated or interpreted in the spreadsheet. Subsequently, the indexer [208] assigns the mapped value to the created new network area. In an implementation, the mapped value obtained from the spreadsheet is assigned to the newly created network area. This value represents the desired outcome or characteristic of the network area based on the selected node, category, existing field, and applied operation.
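The enrichment flow described above (apply an operation to an existing field, map the generated value through a user-provided lookup such as a spreadsheet, and assign the mapped value to the new network area) can be sketched as follows. All names, the example operations, and the sample field value are assumptions for illustration only.

```python
# Hypothetical enrichment sketch. The two operations stand in for the
# "splitting" and "concatenating" operations named in the description.
OPERATIONS = {
    "split_prefix": lambda v: v.split("-")[0],  # keep prefix before '-'
    "concat": lambda v: "-".join(v) if isinstance(v, list) else v,
}

def enrich(area: dict, existing_field: str, operation: str, mapping: dict) -> dict:
    """Derive and attach an enriched value for the created network area.

    `mapping` models the user-defined pre-defined data set (e.g. loaded
    from a spreadsheet) that translates generated values.
    """
    generated = OPERATIONS[operation](area[existing_field])
    # Map the generated value to the pre-defined value from the data set;
    # fall back to the generated value when no mapping rule exists.
    area["enriched_value"] = mapping.get(generated, generated)
    return area
```

For instance, splitting an assumed field value `"GUJ-0042"` yields `"GUJ"`, which the mapping translates to the value finally assigned to the new network area.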
[0073] In an implementation, the indexer [208] stores the data associated with the created new network area into the DDL [100u].
[0074] Furthermore, based on the implementation of the features of the present disclosure, the user selects the node and category for which the user wants to create the network area. Then the existing field (e.g., SNA/HNA/CNA, etc.) is selected from which the new network area needs to be derived. Thereafter, the operation is selected whose application on the existing field gives a value, which is then mapped to a value in a provided file (e.g., a spreadsheet). These values are then assigned to the created network area.
[0075] It is pertinent to note that the system [200] is capable of implementing the features that are obvious to a person skilled in the art in light of the disclosure as disclosed above, and the implementation of the system is not limited to the above disclosure.
[0076] Referring to Fig. 3, an exemplary method flow diagram [300] for creating a network area, i.e., a static network area and a dynamic network area, in accordance with exemplary embodiments of the present invention is shown. In an implementation, the method [300] is performed by the system [200]. As shown in Fig. 3, the method [300] starts at step [302].
[0077] At step [304], the method [300] as disclosed by the present disclosure comprises receiving, at a user interface (UI) [202], a request for creating the network area. In an implementation of the present disclosure, the network area comprises at least one of a static network area or a dynamic network area. In an implementation of the present disclosure, the request to create the network area comprises at least a selection of one or more nodes, and one or more categories for which the network area is to be created. In an implementation, the user may submit, via the UI [202], the request for creating the network area based on one or more network nodes or one or more category types. In an implementation, the category type may be, such as, but not limited to, a customer service type, a service level, and the like.
[0078] Next, at step [306], the method [300] as disclosed by the present disclosure comprises transmitting, by a load balancer [100k], the request to an integrated performance management (IPM) [100a]. The method [300] implemented by the system [200] comprises transmitting, by the load balancer [100k], the incoming request from the UI [202] to the IPM [100a]. In an implementation, the load balancer [100k] efficiently routes the traffic to the IPM [100a] or other network components so that network operation is optimally maintained and performance is not affected. In an implementation, the load balancer [100k] may transmit the network traffic from the UI [202] to one of the IPM/IPMS units [100a] which has a low network load.
[0079] Next, at step [308], the method [300] as disclosed by the present disclosure comprises storing, by the IPM [100a], data associated with the request at a Distributed Data Lake (DDL) [100u]. The method [300] implemented by the system [200] comprises the IPM [100a], which stores data associated with the request at the DDL [100u]. In an implementation, the IPM/IPMS unit [100a] may receive request data related to the user's created or configured network area and the defined set of parameters from the UI [202] via the load balancer [100k].
[0080] Next, at step [310], the method [300] as disclosed by the present disclosure comprises transmitting, by the IPM [100a], the data associated with the request to an Indexer (IN) [208]. The method [300] implemented by the system [200] comprises the IPM unit [100a] transmitting the data associated with the request to the Indexer (IN) [208]. In an implementation, the IPM [100a] receives request data related to the user-created and/or configured network area and the defined set of parameters from the UI [202] via the load balancer [100k]. The IPM/IPMS unit [100a] transmits the received network area request data and the set of parameters to the indexer [208] for analysing and creating the network area.
[0081] Next, at step [312], the method [300] as disclosed by the present disclosure comprises analysing, by the Indexer [208], the data associated with the request to create the network area. The method [300] further comprises the indexer [208], wherein the indexer [208] analyses the data associated with the request to create the network area. The indexer [208] performs one or more pre-processing or processing operations on the incoming data associated with the request to create the network area with the user-defined set of parameters, number of nodes, types of category, and the like.
[0082] Next, at step [314], the method [300] as disclosed by the present disclosure comprises creating, by the Indexer [208], the network area based on the analysis of the data associated with the request. The method [300] comprises the indexer [208] creating the network area based on the analysis of the data associated with the request. In an implementation, the indexer [208] may create the network area of one of the types, such as the static network area and the dynamic network area.
[0083] Next, at step [316], the method [300] as disclosed by the present disclosure comprises enriching, by the indexer [208], network data associated with the created network area based on a set of user input received from a user. The method [300] comprises the indexer [208] enriching the network data associated with the created network area based on a set of user input received from a user. In an implementation, the indexer [208] may provide one or more option(s) to receive the set of inputs from the user to enrich the network data associated with the created network area. Further, the set of user input received by the indexer [208] for the enrichment comprises a first input from the user for selection of at least one existing field from which the new network area is to be derived and a second input from the user for selection of an operation to be executed on the selected at least one existing field. In an implementation, the user may select, via the UI [202], at least one existing field such as a 'static network area field' or a 'dynamic network area field' from which a new network area is to be derived via the indexer [208]. Further, the user may provide one or more inputs for selection of an operation to be executed on the selected at least one existing field. In an implementation, the user may perform one or more operations such as, but not limited to, concatenating, splitting, or otherwise transforming the data. The operation helps transform or manipulate the data within the existing field.
[0084] In an implementation, the enrichment of the network data by the indexer [208] based on the set of user input comprises generating, by the indexer [208], a value corresponding to the executed operation on the selected at least one existing field. Further, the enrichment of the network data based on the set of user input comprises mapping, by the indexer [208], the generated value to a pre-defined value provided within a data set. Thereafter, the enrichment of the network data based on the set of user input comprises assigning, by the indexer [208], the mapped value to the created new network area. Further, the indexer [208] may generate the value corresponding to the executed operation (e.g., splitting, concatenating) on the selected at least one existing field (e.g., SNA/CNA/HNA). Further, the indexer [208] maps the generated value to a pre-defined value provided within a data set and assigns the mapped value to the created new network area. In an exemplary implementation, based on the applied operation on the selected existing field, the indexer [208] generates a value and then maps this generated value to a corresponding pre-defined value within a data set. In an implementation, the user or network administrator may define a pre-defined data set and value in a spreadsheet format. Further, the user or network administrator may define predefined mappings or rules that specify how certain or exemplary values should be translated or interpreted in the spreadsheet. Subsequently, the indexer [208] assigns the mapped value to the created new network area. In an implementation, the mapped value obtained from the spreadsheet is assigned to the newly created network area. This value represents the desired outcome or characteristic of the network area based on the selected node, category, existing field, and applied operation.
[0085] In an implementation, the enrichment of the network data is performed at a predefined scheduled interval of time via the indexer [208]. The user or network administrator may define the interval time and at least one of a number of network nodes, category types, geographic location, boundary region, and the like to perform the enrichment of the network data.
[0086] Next, at step [318], the method [300] as disclosed by the present disclosure comprises uploading, by the indexer [208], the enriched network data at the DDL [100u] for storage. The method [300] comprises uploading, by the indexer [208], the enriched network data at the DDL [100u] for storage after processing. In an implementation, the indexer [208] may store data associated with the created new network area into the DDL [100u].
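The data flow of steps [304] through [318] can be condensed into a plain end-to-end sketch. Every function name below is an assumption, the in-memory `ddl` dict merely stands in for the Distributed Data Lake [100u], and the pure transport steps [306] and [310] (load balancer and IPM forwarding) are folded into the direct function calls.

```python
# Illustrative end-to-end sketch of method [300], reduced to plain functions
# so the UI -> IPM -> Indexer -> DDL data flow is visible at a glance.

ddl = {}

def receive_request(params):            # step [304]: request received at the UI
    return {"request": params}

def ipm_store(data):                    # step [308]: IPM stores the raw request
    ddl["raw_request"] = data
    return data

def indexer_create_area(data):          # steps [312]-[314]: analyse and create
    return {"area": data["request"]["area_type"],
            "nodes": data["request"]["nodes"]}

def indexer_enrich(area, user_value):   # step [316]: enrich with user input
    area["enriched"] = user_value
    return area

def indexer_upload(area):               # step [318]: upload the enriched area
    ddl["network_area"] = area
    return area

data = receive_request({"area_type": "static", "nodes": ["server-1"]})
area = indexer_upload(indexer_enrich(indexer_create_area(ipm_store(data)), "Gujarat"))
```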
[0087] Thereafter, the method [300] terminates at step [320].
[0088] Further referring to Fig. 4, it illustrates an exemplary process [400] for creating a network area, i.e., a static network area and a dynamic network area, in accordance with the exemplary embodiments of the present invention. In an implementation, the process as depicted in Fig. 4 is executed by the system [200] in conjunction with the system [100] to create a network area, i.e., a dynamic network area and a static network area, from existing information/fields or from a sub-part of an existing field available in the network systems, wherein the dynamic network area and the static network area are created at least to fulfill a user requirement of a new field or value in the data sets and results.
[0089] For example, at step S1, the user [402] sends a request to the UI server (such as UI [202]) for network area creation. In an exemplary aspect, the request may comprise a number of nodes, a type of category, and a geographic location for network area creation.
[0090] Next, at step S2, the request for network area creation is forwarded to the load balancer [100k].
[0091] Next, at step S3, the load balancer [100k] checks for an available instance of the IPM [100a] for sending the request for network area creation. The load balancer [100k] hits the available IPM [100a] instance for sending the request for network area creation.

[0092] Next, at step S4, the IPM [100a] saves the data associated with the request for network area creation into the distributed data lake [100u].
[0093] Further, at step S5, the IPM [100a] forwards the data associated with the request for network area creation to the indexer (IN) [208] and subsequently, at step S6, the indexer [208] analyses the received data and stores the analysed network area data into the database [100u]. The indexer [208] is configured to analyse the data associated with the request to create the network area (SNA/CNA/HNA) with a set of parameters such as one or more network nodes (e.g., servers) and category type (e.g., customer service type).
[0094] Further, in an implementation, at step S6, the indexer [208] first performs enrichment of the network data associated with the created network area based on a set of user input received from the user [402], such as existing fields (e.g., SNA/CNA/HNA) and operations (e.g., splitting, concatenating) from which the new network area is to be derived. The indexer [208] may generate a new field or value, map the generated field or value with a pre-defined data set value, and assign the mapped value to the created new network area.

[0095] Secondly, the indexer [208] uploads the enriched network data, or the data associated with the created new network area, at the DDL [100u] for storage.
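The step-S6 enrichment — deriving a new value from an existing field via an operation such as splitting or concatenating, mapping it to a pre-defined value, and assigning it to the new network area before upload — can be sketched as below. The field names, the operations table, the sample mapping, and the in-memory stand-in for the DDL are all illustrative assumptions.

```python
# Illustrative enrichment operations applicable to an existing field.
OPERATIONS = {
    "split": lambda value, sep="-", index=0: value.split(sep)[index],
    "concat": lambda value, suffix="": value + suffix,
}

def enrich_network_area(area, existing_field, operation, mappings, **kwargs):
    """Derive a new value from an existing field, map it to a
    user-defined pre-defined value, and assign it to the area."""
    generated = OPERATIONS[operation](area[existing_field], **kwargs)
    area["derived_value"] = mappings.get(generated, generated)
    return area

ddl = []  # in-memory stand-in for the Distributed Data Lake [100u]

area = {"name": "SNA-1", "cna": "MUM-NORTH-01"}
enriched = enrich_network_area(area, "cna", "split",
                               mappings={"MUM": "Zone-West"}, sep="-", index=0)
ddl.append(enriched)  # upload the enriched network data for storage
print(enriched["derived_value"])  # → Zone-West
```

Here `"MUM-NORTH-01"` is split on `-` to yield `"MUM"`, which the user-defined mapping translates to `"Zone-West"` before the record is stored.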
[0096] Thereafter, at step S7, the indexer [208] may perform scheduled enrichment on the network data associated with the created network based on a set of user input received from a user [402]. The user [402] (such as a network administrator) may define the interval time and at least one of a number of network nodes, category types, geographic location, boundary region and the like to perform the enrichment of the network data associated with the created network.

[0097] Finally, at steps S8 and S9, the IPM [100a] sends, via the UI [202], a response to the user [402] indicating successful handling of the network area creation request, along with the stored information of the enrichment data and the created new network area.
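The S1–S9 exchange of Fig. 4 can be summarized as a minimal orchestration sketch. Every class and function name below is an illustrative stand-in for the UI [202], load balancer [100k], IPM [100a], indexer [208], and DDL [100u], not an actual implementation of the disclosed system.

```python
class DDL:
    """In-memory stand-in for the Distributed Data Lake [100u]."""
    def __init__(self):
        self.records = []

    def store(self, record):
        self.records.append(record)

def create_network_area(request, ipm_instances, ddl):
    """S2-S9: route the request, persist it, analyse, enrich, store."""
    # S3: the load balancer picks an available IPM instance.
    ipm = next(i for i in ipm_instances if i["available"])
    # S4: the IPM saves the request data into the data lake.
    ddl.store({"type": "request", "data": request})
    # S5-S6: the indexer analyses the request and creates the area.
    area = {"nodes": request["nodes"], "category": request["category"]}
    # S6: enrichment derives a new value from an existing field.
    area["derived_value"] = request["existing_field"].split("-")[0]
    ddl.store({"type": "area", "data": area})  # uploaded for storage
    # S8-S9: response returned to the user via the UI.
    return {"status": "created", "area": area}

ddl = DDL()
response = create_network_area(
    {"nodes": ["server-1"], "category": "customer-service",
     "existing_field": "MUM-NORTH"},
    ipm_instances=[{"available": True}], ddl=ddl)
print(response["status"])  # → created
```

The sketch stores two records (the raw request and the created area), mirroring the two writes to the DDL [100u] in steps S4 and S6.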
[0098] Referring to Fig. 5, which illustrates an exemplary block diagram of a computing device [500] (also referred to herein as a computer system [500]) upon which an embodiment of the present disclosure may be implemented. In an implementation, the computing device [500] implements the method for creating a network area, i.e., a dynamic and static network area, using the system [200]. In another implementation, the computing device [500] itself implements the method for creating a network area, i.e., a dynamic and static network area, using one or more units configured within the computing device [500], wherein said one or more units are capable of implementing the features as disclosed in the present disclosure.
[0099] The computing device [500] may include a bus [502] or other communication mechanism for communicating information, and a processor [504] coupled with the bus [502] for processing information. The processor [504] may be, for example, a general-purpose microprocessor. The computing device [500] may also include a main memory [506], such as a random access memory (RAM) or other dynamic storage device, coupled to the bus [502] for storing information and instructions to be executed by the processor [504]. The main memory [506] also may be used for storing temporary variables or other intermediate information during execution of the instructions to be executed by the processor [504]. Such instructions, when stored in non-transitory storage media accessible to the processor [504], render the computing device [500] into a special-purpose machine that is customized to perform the operations specified in the instructions. The computing device [500] further includes a read only memory (ROM) [508] or other static storage device coupled to the bus [502] for storing static information and instructions for the processor [504].

[0100] A storage device [510], such as a magnetic disk, optical disk, or solid-state drive, is provided and coupled to the bus [502] for storing information and instructions. The computing device [500] may be coupled via the bus [502] to a display [512], such as a cathode ray tube (CRT), for displaying information to a computer user. An input device [514], including alphanumeric and other keys, may be coupled to the bus [502] for communicating information and command selections to the processor [504]. Another type of user input device may be a cursor controller [516], such as a mouse, a trackball, or cursor direction keys, for communicating direction information and command selections to the processor [504], and for controlling cursor movement on the display [512]. This input device typically has two degrees of freedom in two axes, a first axis (e.g., x) and a second axis (e.g., y), that allow the device to specify positions in a plane.
[0101] The computing device [500] may implement the techniques described herein using customized hard-wired logic, one or more ASICs or FPGAs, firmware and/or program logic which, in combination with the computing device [500], causes or programs the computing device [500] to be a special-purpose machine. According to one embodiment, the techniques herein are performed by the computing device [500] in response to the processor [504] executing one or more sequences of one or more instructions contained in the main memory [506]. Such instructions may be read into the main memory [506] from another storage medium, such as the storage device [510]. Execution of the sequences of instructions contained in the main memory [506] causes the processor [504] to perform the process steps described herein. In alternative embodiments, hard-wired circuitry may be used in place of or in combination with software instructions.
[0102] The computing device [500] also may include a communication interface [518] coupled to the bus [502]. The communication interface [518] provides a two-way data communication coupling to a network link [520] that is connected to a local network [522]. For example, the communication interface [518] may be an integrated services digital network (ISDN) card, cable modem, satellite modem, or a modem to provide a data communication connection to a corresponding type of telephone line. As another example, the communication interface [518] may be a local area network (LAN) card to provide a data communication connection to a compatible LAN. Wireless links may also be implemented. In any such implementation, the communication interface [518] sends and receives electrical, electromagnetic, or optical signals that carry digital data streams representing various types of information.
[0103] The computing device [500] can send messages and receive data, including program code, through the network(s), the network link [520] and the communication interface [518]. In the Internet example, a server [530] might transmit a requested code for an application program through the Internet [528], the ISP [526], the local network [522], the host [524] and the communication interface [518]. The received code may be executed by the processor [504] as it is received, and/or stored in the storage device [510] or other non-volatile storage for later execution.
[0104] Further, a telecommunications organization may implement the method and system as encompassed by this disclosure in its network performance management system, which involves configuring one or more APIs to collect data from various network equipment vendors, fetching real-time performance data, standardizing it, and storing it in a distributed data lake [100u]. The method and system for creating a network area, i.e., a dynamic and static network area, within a network performance management system enables the company to efficiently monitor and optimize network performance across diverse equipment, reducing downtime and ensuring a seamless experience for its customers.
[0105] Yet another aspect of the present disclosure relates to a non-transitory computer-readable storage medium storing instructions for creating a network area, the storage medium comprising executable code which, when executed by one or more units of a system, causes: a User Interface (UI) [202] to receive a request for creating the network area; a load balancer [100k] to transmit the request to an integrated performance management (IPM) [100a]; the IPM [100a] to: store a data associated with the request at a Distributed Data Lake (DDL) [100u]; transmit the data associated with the request to an Indexer (IN) [208]; and an Indexer [208] to: analyse the data associated with the request to create the network area, create the network area based on the analysis of the data associated with the request, enrich a network data associated with the created network based on a set of user input received from a user, and upload the enriched network data at the DDL [100u] for storage.
[0106] Yet another aspect of the present disclosure relates to a User Equipment (UE) for creating a network area, comprising a processor configured to: send, via a User Interface (UI) [202], a request for creating the network area; transmit, via a load balancer [100k], the request to an integrated performance management (IPM) [100a]; store, via the IPM [100a], a data associated with the request at a Distributed Data Lake (DDL) [100u]; transmit, via the IPM [100a], the data associated with the request to an Indexer (IN) [208]; analyse, via the Indexer [208], the data associated with the request to create the network area; create, via the Indexer [208], the network area based on the analysis of the data associated with the request; enrich, via the Indexer [208], a network data associated with the created network based on a set of user input received from a user; and upload, via the indexer [208], the enriched network data at the DDL [100u] for storage.
[0107] As is evident from the above, the present disclosure provides a technically advanced solution for creating a dynamic network area and a static network area from existing information/fields or from a sub-part of an existing field available in network systems, wherein the dynamic network area and the static network area are created at least to fulfill a user requirement of a new field in the documents and results. The enrichment facility mentioned in the present disclosure for new fields is completely autonomous, scheduled, follows user-defined rules, and takes effect as soon as CNAs, HNAs and SNAs are created. The values for the newly created network area are decided based on the values of the old existing field. Furthermore, the present disclosure provides a mapping between these two by either entering them one by one manually or uploading them using a spreadsheet. Furthermore, the present disclosure facilitates the user to modify their network logic in real time whilst observing the corresponding changes. Moreover, based on the implementation of the features of the present disclosure, network areas, i.e., the dynamic network area and the static network area, are created at different granularities: for one network node only, for multiple nodes in the network, for one category in a network node, and for selected categories in a network node, etc. Hence, this helps in drilling down the information at various levels in the network for enhanced analysis. Also, the solution helps operations to roll up and drill down monitoring of KPIs and counters for their troubleshooting.
[0108] Further, in accordance with the present disclosure, it is to be acknowledged that the functionality described for the various components/units can be implemented interchangeably. While specific embodiments may disclose a particular functionality of these units for clarity, it is recognized that various configurations and combinations thereof are within the scope of the disclosure. The functionality of specific units, as disclosed in the disclosure, should not be construed as limiting the scope of the present disclosure. Consequently, alternative arrangements and substitutions of units, provided they achieve the intended functionality described herein, are considered to be encompassed within the scope of the present disclosure.
[0109] While considerable emphasis has been placed herein on the disclosed embodiments, it will be appreciated that many embodiments can be made and that many changes can be made to the embodiments without departing from the principles of the present disclosure. These and other changes in the embodiments of the present disclosure will be apparent to those skilled in the art, whereby it is to be understood that the foregoing descriptive matter to be implemented is illustrative and non-limiting.

We Claim
1. A method for creating a network area, comprising:
- receiving, at a User Interface (UI) [202], a request for creating the network area;
- transmitting, by a load balancer [100k], the request to an integrated performance management (IPM) [100a];
- storing, by the IPM [100a], a data associated with the request at a Distributed Data Lake (DDL) [100u];
- transmitting, by the IPM [100a], the data associated with the request to an Indexer (IN) [208];
- analysing, by the Indexer [208], the data associated with the request to create the network area;
- creating, by the Indexer [208], the network area based on the analysis of the data associated with the request;
- enriching, by the indexer [208], a network data associated with the created network area based on a set of user input received from a user; and
- uploading, by the indexer [208], the enriched network data at the DDL [100u] for storage.

2. The method as claimed in claim 1, wherein the enrichment of the network data is performed in a predefined scheduled interval of time.
3. The method as claimed in claim 1, wherein the network area comprises at least one of a static network area or a dynamic network area.
4. The method as claimed in claim 1, wherein the request to create the network area comprises at least selection of one or more nodes, and one or more categories for which the network area is to be created.

5. The method as claimed in claim 1, wherein the set of user input for the
enrichment comprises:
a first input from the user for selection of at least one existing field from which a new network area is to be derived, and
a second input from the user for selection of an operation to be executed on the selected at least one existing field.
6. The method as claimed in claim 5, wherein the enrichment of the network
data based on the set of user input comprises:
generating, by the indexer [208], a value corresponding to the executed operation
on the selected at least one existing field,
mapping, by the indexer [208], the generated value to a pre-defined value provided
within a data set, and
assigning, by the indexer [208], the mapped value to the created new network area.
7. A system for creating a network area, comprising:
- a User Interface (UI) [202], configured to receive a request for creating the network area;
- a load balancer [100k], configured to transmit the request to an integrated performance management (IPM) [100a];
- the IPM [100a], configured to:
store a data associated with the request at a Distributed Data Lake (DDL) [100u]; transmit the data associated with the request to an Indexer (IN) [208]; and
- an Indexer [208], configured to:
analyse the data associated with the request to create the network area,
create the network area based on the analysis of the data associated with the request,
enrich a network data associated with the created network based on a set of user
input received from a user, and
upload the enriched network data at the DDL [100u] for storage.

8. The system as claimed in claim 7, wherein the enrichment of the network data is performed in a predefined scheduled interval of time.
9. The system as claimed in claim 7, wherein the network area comprises at least one of a static network area or a dynamic network area.
10. The system as claimed in claim 7, wherein the request to create the network area comprises at least selection of one or more nodes, and one or more categories for which the network area is to be created.
11. The system as claimed in claim 7, wherein the set of user input for the enrichment comprises:
a first input from the user for selection of at least one existing field from which a new network area is to be derived, and
a second input from the user for selection of an operation to be executed on the selected at least one existing field.
12. The system as claimed in claim 11, wherein to enrich the network data based
on the set of user input, the indexer [208] is configured to:
generate a value corresponding to the executed operation on the selected at least one existing field,
map the generated value to a pre-defined value provided within a data set, and assign the mapped value to the created new network area.
13. A User Equipment (UE) for creating a network area, comprising a processor configured to:
send, via a User Interface (UI) [202], a request for creating the network area; transmit, via a load balancer [100k], the request to an integrated performance management (IPM) [100a];
store, via the IPM [100a], a data associated with the request at a Distributed Data Lake (DDL) [100u];

transmit, via the IPM [100a], the data associated with the request to an Indexer (IN)
[208];
analyse, via the Indexer [208], the data associated with the request to create the
network area;
create, via the Indexer [208], the network area based on the analysis of the data
associated with the request;
enrich, via the Indexer [208], a network data associated with the created network
based on a set of user input received from a user; and
upload, via the indexer [208], the enriched network data at the DDL [100u] for
storage.
