
Method And System For Automatically Detecting A New Network Node Associated With A Network

Abstract: The present disclosure relates to a method and system for automatically detecting a new network node associated with a network. The method comprises receiving, by a transceiver unit [302], a load balance request. The method further comprises transmitting, by the transceiver unit [302], a KPI data request. The method further comprises fetching, by a fetching unit [304], a KPI data. The method further comprises identifying, by an identification unit [306], a target counter from the one or more counters. The method further comprises fetching, by the fetching unit [304] from a storage unit [312], a target counter data. The method further comprises generating, by a generation unit [308], a computed data based on the target counter data and a notification, and automatically detecting, by a detection unit [310], the new network node associated with the network based on generating at least one of the computed data and the notification. [FIG. 3]


Patent Information

Application #
Filing Date
01 August 2023
Publication Number
07/2025
Publication Type
INA
Invention Field
COMMUNICATION
Status
Parent Application

Applicants

Jio Platforms Limited
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India.

Inventors

1. Ankit Murarka
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India.
2. Aayush Bhatnagar
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India
3. Jugal Kishore
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India
4. Gaurav Kumar
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India
5. Kishan Sahu
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India
6. Rahul Kumar
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India
7. Gourav Gurbani
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India
8. Sanjana Chaudhary
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India
9. Chandra Ganveer
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India
10. Supriya Kaushik De
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India
11. Debashish Kumar
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India
12. Mehul Tilala
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India
13. Yogesh Kumar
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India
14. Niharika Patnam
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India
15. Harshita Garg
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India
16. Avinash Kushwaha
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India
17. Sajal Soni
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India
18. Srinath Kalikivayi
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India
19. Vitap Pandey
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India
20. Manasvi Rajani
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India
21. Sunil Meena
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India
22. Dharmendra Kumar Vishwakarma
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India

Specification

FORM 2
THE PATENTS ACT, 1970 (39 OF 1970) & THE PATENT RULES, 2003
COMPLETE SPECIFICATION
(See section 10 and rule 13)
“METHOD AND SYSTEM FOR AUTOMATICALLY DETECTING A NEW NETWORK NODE ASSOCIATED WITH A NETWORK”
We, Jio Platforms Limited, an Indian National, of Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India.
The following specification particularly describes the invention and the manner in which it is to be performed.

METHOD AND SYSTEM FOR AUTOMATICALLY DETECTING A NEW NETWORK NODE ASSOCIATED WITH A NETWORK
TECHNICAL FIELD
[0001] Embodiments of the present disclosure generally relate to network management systems. More particularly, embodiments of the present disclosure relate to a method and a system for automatically detecting a new network node associated with a network.
BACKGROUND
[0002] The following description of the related art is intended to provide background
information pertaining to the field of the disclosure. This section may include certain aspects
of the art that may be related to various features of the present disclosure. However, it should
be appreciated that this section is used only to enhance the understanding of the reader with
respect to the present disclosure, and not as admissions of the prior art.
[0003] In the field of telecommunication, a network performance management (NPM) is a
process of managing, enabling, and ensuring a plurality of performance levels in a network.
The NPM performs continuous monitoring of a quality and a performance service level of the
network. Additionally, the NPM monitors one or more performance metrics, such as an error rate, a network delay, a packet loss, a packet transmission, and a throughput across the network.
[0004] Further, network performance management systems typically track network elements
and data from network monitoring tools and combine and process such data to determine key
performance indicators (KPIs) of the network. Integrated performance management systems
provide the means to visualize a network performance data so that network operators and other
relevant stakeholders are able to identify the service quality of the overall network and
individual/ grouped network elements. By having an overall as well as detailed view of the
network performance, a network operator may detect, diagnose, and remedy actual service
issues, as well as predict potential service issues or failures in the network and take precautionary measures accordingly.
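As an illustration of how such a KPI may be derived from raw element counters tracked by a network performance management system, the following sketch computes a packet loss rate; the counter names and formula are illustrative assumptions, not taken from the disclosure.

```python
# How a KPI such as a packet loss rate might be derived from two raw
# element counters; the counter names are illustrative, not taken from
# the disclosure.

def packet_loss_rate(tx_packets, rx_packets):
    # Fraction of transmitted packets that never arrived.
    if tx_packets == 0:
        return 0.0
    return (tx_packets - rx_packets) / tx_packets
```

Other KPIs named above (error rate, delay, throughput) would be derived analogously from their own counters.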

[0005] However, several challenges arise during an integration of a new node or new instance
into the network. A network administrator faces delays and complexity during the integration
of the new node or new instance in a conventional network architecture. Further, this delay and
complexity hinder network management and monitoring processes as the network
administration has to wait until the next scheduled configuration cycle to observe the newly
added node or instance. Further, the network administrator is required to manually update an
existing dashboard. The manual process of updating the existing dashboard is time-consuming
and prone to one or more errors, as the manual process involves the selection and configuration
of the new node within the existing dashboard. Further, an inability to immediately monitor the
new addition may also lead to potential service disruptions. Further, a report generated via a
network monitoring tool (such as network performance management systems) needs to be rescheduled or reconfigured whenever there is the integration of the new node in the network, which disrupts regular reporting schedules.
[0006] Hence, there is a need to provide an improved method and system for automatically
detecting a new network node associated with a network.
SUMMARY
[0007] This section is provided to introduce certain aspects of the present disclosure in a
simplified form that are further described below in the detailed description. This summary is not intended to identify the key features or the scope of the claimed subject matter.
[0008] An aspect of the present disclosure may relate to a method for automatically detecting
a new network node associated with a network. The method comprises receiving, by a
transceiver unit via an Integrated Performance Management (IPM) module from a load
balancer, a load balance request associated with at least a first network node in the network,
wherein the first network node is associated with at least a set of counter data. The method
further comprises transmitting, by the transceiver unit from the IPM module to a computational
layer, a key performance indicator (KPI) data request based on the load balance request. The
method comprises fetching, by a fetching unit via the IPM module from the computational layer, a KPI data based on the KPI data request, wherein the KPI data is associated with one or more counters of the network. The method comprises identifying, by an identification unit at the IPM module, a target counter from the one or more counters, based on the KPI data. The

method comprises fetching, by the fetching unit via the IPM module from a storage unit, a
target counter data associated with the target counter. The method further comprises
generating, by a generation unit at the IPM module, at least one of a computed data based on
the target counter data and a notification associated with the computed data. The method further
comprises automatically detecting, by a detection unit at the IPM module, the new network
node associated with the network based on generating at least one of the computed data and the notification associated with the computed data.
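For illustration only, the claimed steps can be sketched as follows; the data shapes, the target-counter selection criterion, and the computation are hypothetical, since the disclosure does not prescribe an implementation.

```python
from dataclasses import dataclass

# Hypothetical sketch of the claimed method. The disclosure does not
# specify data formats, so plain dicts and floats stand in for the load
# balance request, the KPI data, and the counter data.

@dataclass
class LoadBalanceRequest:
    node_id: str          # the first network node
    counter_data: dict    # the set of counter data for that node

def detect_new_node(request, computational_layer, storage):
    # Transmit a KPI data request and fetch the KPI data, which is
    # associated with one or more counters of the network.
    kpi_data = computational_layer.fetch_kpis(request.node_id)

    # Identify a target counter from the one or more counters. The
    # selection criterion (largest KPI value) is an assumption.
    target_counter = max(kpi_data, key=kpi_data.get)

    # Fetch the target counter data from the storage unit.
    samples = storage.get(target_counter, [])

    # Generate computed data and an associated notification; a simple
    # average is used here purely for illustration.
    computed = sum(samples) / len(samples) if samples else 0.0
    notification = {"counter": target_counter, "computed": computed}

    # Detect the new node when the computed data deviates from the
    # node's own reported counter value (again, an assumption).
    reported = request.counter_data.get(target_counter, 0.0)
    return abs(computed - reported) > 0.0, notification
```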
[0009] In an exemplary aspect of the present disclosure, the KPI data is fetched from a pre-processed data stored in a database.
[0010] In an exemplary aspect of the present disclosure, the notification associated with the computed data is generated in the event the computed data is generated via the computational layer.
[0011] In an exemplary aspect of the present disclosure, the fetching the KPI data via the IPM module from the computational layer further comprises transmitting, by the transceiver unit via the computational layer in the network to a server, a pre-stored data access request associated with one or more sets of pre-stored data. The fetching the KPI data via the IPM module from
the computational layer further comprises receiving, by the transceiver unit via the
computational layer in the network from the server, at least one set of pre-stored data from the one or more sets of pre-stored data based on the pre-stored data access request. The fetching the KPI data via the IPM module from the computational layer further comprises generating, by the generation unit via the computational layer in the network, the KPI data based on the at
least one set of pre-stored data.
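A minimal sketch of this KPI-fetching flow is given below, under the assumption (purely illustrative) that the one or more sets of pre-stored data are modelled as a server-side mapping from counter names to sample lists.

```python
# Minimal sketch of fetching KPI data through the computational layer.
# "server" models the one or more sets of pre-stored data as a mapping
# from counter names to sample lists -- an illustrative assumption.

def fetch_kpi_data(server, counters):
    # Transmit a pre-stored data access request and receive at least
    # one matching set of pre-stored data.
    pre_stored = {c: server[c] for c in counters if c in server}

    # Generate the KPI data from the received sets; averaging each
    # counter's samples is one possible computation.
    return {c: sum(v) / len(v) for c, v in pre_stored.items() if v}
```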
[0012] In an exemplary aspect of the present disclosure, the receiving the at least one set of
pre-stored data further comprises generating, by the generation unit, the one or more sets of
pre-stored data, wherein generating the one or more sets of pre-stored data further comprises
receiving, by the transceiver unit, at a normalization layer from an ingestion layer of the
network, a processed data based on an input data received at the ingestion layer. The generation of the one or more sets of pre-stored data further comprises generating, by the generation unit, from the normalization layer of the network at the server, the one or more sets of normalized data based on the processed data. The generation of the one or more sets of pre-stored data

further comprises storing, at the storage unit from the normalization layer of the network, the one or more sets of normalized data, wherein the stored one or more sets of normalized data corresponds to the one or more sets of pre-stored data.
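The ingestion, normalization, and storage flow above may be sketched as follows; min-max scaling is one possible normalization, chosen here only for illustration, and the disclosure does not mandate a particular scheme.

```python
# Sketch of the ingestion -> normalization -> storage flow described
# above. Min-max scaling is an illustrative choice only.

def ingest(raw_records):
    # Ingestion layer: produce processed data, e.g. by dropping
    # malformed (non-numeric) records.
    return [r for r in raw_records if isinstance(r, (int, float))]

def normalize(processed):
    # Normalization layer: scale the processed samples into [0, 1].
    lo, hi = min(processed), max(processed)
    span = (hi - lo) or 1.0
    return [(x - lo) / span for x in processed]

def store(storage, counter, normalized):
    # The stored normalized sets correspond to the pre-stored data.
    storage[counter] = normalized
```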
[0013] In an exemplary aspect of the present disclosure, the method further comprises updating a set of existing dashboards based on the one or more sets of normalized data.
[0014] In an exemplary aspect of the present disclosure, each pre-stored data from the one or
more sets of pre-stored data is associated with at least one counter from the one or more
counters of the network.
[0015] In an exemplary aspect of the present disclosure, the target counter data is fetched, by the fetching unit, via the IPM module from the storage unit based on the at least one or more sets of pre-stored data stored at the storage unit.
[0016] In an exemplary aspect of the present disclosure, automatically detecting the new network node further comprises detecting, by the detection unit, at the IPM module, a success status associated with the notification. The automatic detection of the new network node further comprises receiving, by the transceiver unit, at the IPM module, from the load balancer in the
network, a fetch KPI data request based on the success status associated with the notification.
The automatic detection of the new network node further comprises computing, by a computation unit, at the IPM module, a new network node KPI data associated with the new network node based on the fetch KPI data request. The automatic detection of the new network node further comprises transmitting, by the transceiver unit, from the IPM module to the load
balancer, the new network node KPI data.
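This detection handshake might be sketched as below; the "status" field, the contents of the fetch KPI data request, and the KPI computation are all illustrative assumptions.

```python
# Illustrative sketch of the detection handshake: check the success
# status on the notification, accept a fetch-KPI-data request from the
# load balancer, compute the new node's KPI data, and transmit it back.

def handle_new_node(notification, load_balancer, counters):
    # Detect a success status associated with the notification.
    if notification.get("status") != "success":
        return None

    # Receive the fetch KPI data request from the load balancer.
    request = load_balancer.fetch_kpi_request()

    # Compute the new network node KPI data from its counters.
    new_node_kpis = {c: v * request.get("weight", 1.0)
                     for c, v in counters.items()}

    # Transmit the new network node KPI data to the load balancer.
    load_balancer.receive(new_node_kpis)
    return new_node_kpis
```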
[0017] Another aspect of the present disclosure may relate to a system for automatically
detecting a new network node associated with a network. The system comprises a transceiver
unit via an Integrated Performance Management (IPM) module, configured to receive, from a
load balancer, a load balance request associated with at least a first network node in the
network, wherein the first network node is associated with at least a set of counter data. The transceiver unit is further configured to transmit to a computational layer in the network, a key performance indicator (KPI) data request based on the load balance request. Further, the system comprises a fetching unit connected at least to the transceiver unit, wherein the fetching unit is

configured to fetch via the IPM module, from the computational layer in the network, a KPI data
based on the KPI data request, wherein the KPI data is associated with one or more counters
of the network. Further, the system comprises an identification unit connected to at least the
fetching unit, wherein the identification unit is configured to identify via the IPM module, a
target counter from the one or more counters, based on the KPI data. Further, the fetching unit
of the system is configured to fetch, from a storage unit, a target counter data associated with
the target counter. Further, the system comprises a generation unit connected at least to the
identification unit, wherein the generation unit is configured to generate via the IPM module,
at least one of a computed data based on the target counter data and a notification associated
with the computed data. Further, the system comprises a detection unit connected to at least
the generation unit, wherein the detection unit is configured to automatically detect via the IPM module, the new network node associated with the network based on generating at least one of the computed data and the notification associated with the computed data.
[0018] Another aspect of the present disclosure may relate to a user equipment (UE) for automatically detecting a new network node associated with a network. The UE comprises a memory and a processor coupled to the memory, wherein the processor is configured to automatically detect the new network node associated with the network via a system. The automatic detection of the new network node associated with the network is done by receiving, by a transceiver unit of the
system, from a load balancer, a load balance request associated with at least a first network
node in the network, wherein the first network node is associated with at least a set of counter data. The automatic detection of the new network node associated with the network is further done by transmitting, by the transceiver unit of the system, from an Integrated Performance Management (IPM) module to a computational layer, a key performance indicator (KPI) data
request based on the load balance request. The automatic detection of the new network node associated with the network is done by fetching, by a fetching unit of the system, a KPI data based on the KPI data request, wherein the KPI data is associated with one or more counters of the network. The automatic detection of the new network node associated with the network is done by identifying, by an identification unit of the system, a target counter from the one or more
counters, based on the KPI data. The automatic detection of the new network node associated with the network is done by fetching, by the fetching unit of the system, from a storage unit, a target counter data associated with the target counter. The automatic detection of the new network node associated with the network is done by generating, by a generation unit of the system, at least one of a computed data based on the target counter data and a notification associated with the

computed data. The automatic detection of the new network node associated with the network is done by automatically detecting, by a detection unit of the system, the new network node associated with the network based on generating at least one of the computed data and the notification associated with the computed data.
[0019] Yet another aspect of the present disclosure may relate to a non-transitory computer readable storage medium storing instructions for automatically detecting a new network node associated with a network, the instructions include executable code which, when executed by one or more units of a system, causes a transceiver unit of the system to receive, from a load
balancer, a load balance request associated with at least a first network node in the network,
wherein the first network node is associated with at least a set of counter data and to transmit, to a computational layer in the network, a key performance indicator (KPI) data request based on the load balance request. The executable code when executed further causes a fetching unit of the system to fetch from the computational layer in the network, a KPI data based on the
KPI data request, wherein the KPI data is associated with one or more counters of the network.
The executable code when executed further causes an identification unit of the system to identify a target counter from the one or more counters, based on the KPI data. The executable code when executed further causes the fetching unit of the system to fetch, from a storage unit, a target counter data associated with the target counter. The executable code when executed
further causes a generation unit of the system to generate at least one of a computed data based
on the target counter data and a notification associated with the computed data. The executable code when executed further causes a detection unit of the system to automatically detect the new network node associated with the network based on generating at least one of the computed data and the notification associated with the computed data.
OBJECTS OF THE INVENTION
[0020] Some of the objects of the present disclosure, which at least one embodiment disclosed herein satisfies, are listed herein below.
[0021] It is an object of the present disclosure to provide a method and a system for automatically detecting a new network node associated with a network.

[0022] It is another object of the present disclosure to provide a solution that eliminates a need to carry out an amendment in an existing node integration workflow in the network to integrate a new network node.
[0023] It is another object of the present disclosure to provide a solution for automatically detecting a new network node associated with a network with zero downtime, as none of the microservices need to be restarted.
[0024] It is another object of the present disclosure to provide a solution for expanding a load on a node in the network without any amendment in an execution flow.
DESCRIPTION OF THE DRAWINGS
[0025] The accompanying drawings, which are incorporated herein, and constitute a part of
this disclosure, illustrate exemplary embodiments of the disclosed methods and systems in
which like reference numerals refer to the same parts throughout the different drawings.
Components in the drawings are not necessarily to scale, emphasis instead being placed upon
clearly illustrating the principles of the present disclosure. Also, the embodiments shown in the
figures are not to be construed as limiting the disclosure, but the possible variants of the method
and system according to the disclosure are illustrated herein to highlight the advantages of the
disclosure. It will be appreciated by those skilled in the art that disclosure of such drawings includes disclosure of electrical components or circuitry commonly used to implement such components.
[0026] FIG. 1 illustrates an exemplary block diagram of a computing device upon which the
features of the present disclosure may be implemented in accordance with exemplary implementation of the present disclosure.
[0027] FIG. 2 illustrates an exemplary block diagram of a network performance management
system, in accordance with exemplary implementations of the present disclosure.
[0028] FIG. 3 illustrates an exemplary block diagram of a system for automatically detecting a new network node associated with a network, in accordance with exemplary implementations of the present disclosure.

[0029] FIG. 4 illustrates a flow diagram of a method for automatically detecting a new network node associated with a network, in accordance with exemplary implementations of the present disclosure.
[0030] FIG. 5 illustrates an exemplary signal flow diagram of a method for storing a data into a distributed file system, in accordance with exemplary implementations of the present disclosure.
[0031] FIG. 6 illustrates an exemplary signal flow diagram of a method for automatically
detecting a new network node associated with a network, in accordance with exemplary implementations of the present disclosure.
[0032] FIG. 7 illustrates an exemplary architecture of a system architecture for automatically
detecting a new network node associated with a network, in accordance with exemplary
implementations of the present disclosure.
DETAILED DESCRIPTION
[0033] In the following description, for the purposes of explanation, various specific details
are set forth in order to provide a thorough understanding of embodiments of the present disclosure. It will be apparent, however, that embodiments of the present disclosure may be practiced without these specific details. Several features described hereafter may each be used independently of one another or with any combination of other features. An individual feature
may not address any of the problems discussed above or might address only some of the
problems discussed above.
[0034] The ensuing description provides exemplary embodiments only, and is not intended to
limit the scope, applicability, or configuration of the disclosure. Rather, the ensuing description
of the exemplary embodiments will provide those skilled in the art with an enabling description
for implementing an exemplary embodiment. It should be understood that various changes may be made in the function and arrangement of elements without departing from the spirit and scope of the disclosure as set forth.

[0035] Specific details are given in the following description to provide a thorough
understanding of the embodiments. However, it will be understood by one of ordinary skill in
the art that the embodiments may be practiced without these specific details. For example,
circuits, systems, processes, and other components may be shown as components in block
diagram form in order not to obscure the embodiments in unnecessary detail.
[0036] Also, it is noted that individual embodiments may be described as a process which is
depicted as a flowchart, a flow diagram, a data flow diagram, a structure diagram, or a block
diagram. Although a flowchart may describe the operations as a sequential process, many of
the operations may be performed in parallel or concurrently. In addition, the order of the
operations may be re-arranged. A process is terminated when its operations are completed but could have additional steps not included in a figure.
[0037] The word “exemplary” and/or “demonstrative” is used herein to mean serving as an example, instance, or illustration. For the avoidance of doubt, the subject matter disclosed herein is not limited by such examples. In addition, any aspect or design described herein as “exemplary” and/or “demonstrative” is not necessarily to be construed as preferred or advantageous over other aspects or designs, nor is it meant to preclude equivalent exemplary structures and techniques known to those of ordinary skill in the art. Furthermore, to the extent that the terms “includes,” “has,” “contains,” and other similar words are used in either the detailed description or the claims, such terms are intended to be inclusive—in a manner similar to the term “comprising” as an open transition word—without precluding any additional or other elements.
[0038] As used herein, a “processing unit” or “processor” or “operating processor” includes
one or more processors, wherein processor refers to any logic circuitry for processing instructions. A processor may be a general-purpose processor, a special purpose processor, a conventional processor, a digital signal processor, a plurality of microprocessors, one or more microprocessors in association with a (Digital Signal Processing) DSP core, a controller, a
microcontroller, Application Specific Integrated Circuits, Field Programmable Gate Array
circuits, any other type of integrated circuits, etc. The processor may perform signal coding data processing, input/output processing, and/or any other functionality that enables the working of the system according to the present disclosure. More specifically, the processor or processing unit is a hardware processor.

[0039] As used herein, “a user equipment”, “a user device”, “a smart-user-device”, “a smart-
device”, “an electronic device”, “a mobile device”, “a handheld device”, “a wireless
communication device”, “a mobile communication device”, “a communication device” may
be any electrical, electronic and/or computing device or equipment, capable of implementing
the features of the present disclosure. The user equipment/device may include, but is not limited
to, a mobile phone, smart phone, laptop, a general-purpose computer, desktop, personal digital
assistant, tablet computer, wearable device or any other computing device which is capable of
implementing the features of the present disclosure. Also, the user device may contain at least
one input means configured to receive an input from at least one of a transceiver unit, a
processing unit, a storage unit, a detection unit and any other such unit(s) which are required to implement the features of the present disclosure.
[0040] As used herein, “storage unit” or “memory unit” refers to a machine or computer-
readable medium including any mechanism for storing information in a form readable by a
computer or similar machine. For example, a computer-readable medium includes read-only
memory (“ROM”), random access memory (“RAM”), magnetic disk storage media, optical
storage media, flash memory devices or other types of machine-accessible storage media. The
storage unit stores at least the data that may be required by one or more units of the system to
perform their respective functions.
[0041] As used herein, “interface” or “user interface” refers to a shared boundary across which
two or more separate components of a system exchange information or data. The interface may
also be referred to a set of rules or protocols that define communication or interaction of one
or more modules or one or more units with each other, which also includes the methods,
functions, or procedures that may be called.
[0042] All modules, units, components used herein, unless explicitly excluded herein, may be
software modules or hardware processors, the processors being a general-purpose processor, a
special purpose processor, a conventional processor, a digital signal processor (DSP), a
plurality of microprocessors, one or more microprocessors in association with a DSP core, a controller, a microcontroller, Application Specific Integrated Circuits (ASIC), Field Programmable Gate Array circuits (FPGA), any other type of integrated circuits, etc.

[0043] As used herein, the transceiver unit includes at least one receiver and at least one transmitter configured respectively for receiving and transmitting data, signals, information or a combination thereof between units/components within the system and/or connected with the system.
[0044] As discussed in the background section, several challenges arise during the integration of the new node or new instance into the network. Further, as discussed above, the network administrator faces delays and complexity while integrating the new node or new instance in a conventional network architecture. Further, this delay and complexity hinder the network
10 management and monitoring processes, as the network administration has to wait until the next
scheduled configuration cycle to observe the newly added node or instance. Further, the network administrator is required to manually update an existing dashboard. The manual process of updating the existing dashboard is time-consuming and prone to one or more errors, as the manual process involves the selection and configuration of the new node within the
existing dashboard. Further, an inability to immediately monitor the new addition may also
lead to potential service disruptions. Further, a report generated via a network monitoring tool (such as network performance management systems) needs to be rescheduled or reconfigured whenever there is the addition of a new node in the network. This process disrupts regular reporting schedules. Hence, current known solutions have several shortcomings. The
present disclosure aims to overcome the above-mentioned and other existing problems in this
field of technology by providing a novel solution that automatically detects a new network node associated with a network. Further, the novel solution involves receiving a load balance request, which may indicate that the network is under load or might require any adjustment. Further, a Key Performance Indicator (KPI) data request is transmitted, and in response to the
request, the KPI data is collected, which may include detailed performance metrics that help
assess the current state of the network. Further, a target counter is identified from the KPI data, which indicates a specific performance indicator that is relevant to detecting new network activity. Further, data associated with the target counter is retrieved, and based on this data, computed data and a notification are generated. Further, the computed data is analysed to detect
the presence of the new node in the network. If the computed data shows any deviations from
a general pattern or standard data of the network, then the presence of the new node in the network is detected.

[0045] FIG. 1 illustrates an exemplary block diagram of a computing device [100] upon which
the features of the present disclosure may be implemented in accordance with exemplary
implementations of the present disclosure. In an implementation, the computing device [100]
may also implement a method for automatically detecting a new network node associated with
a network by utilising the system [200]. In another implementation, the computing device [100]
itself implements the method for automatically detecting the new network node associated with the network using one or more units configured within the computing device [100], wherein said one or more units are capable of implementing the features as disclosed in the present disclosure.
[0046] The computing device [100] may include a bus [102] or other communication mechanism for communicating information, and a processor [104] coupled with the bus [102] for processing information. The processor [104] may be, for example, a general-purpose microprocessor. The computing device [100] may also include a main memory [106], such as
a random-access memory (RAM), or other dynamic storage device, coupled to the bus [102]
for storing information and instructions to be executed by the processor [104]. The main memory [106] also may be used for storing temporary variables or other intermediate information during execution of the instructions to be executed by the processor [104]. Such instructions, when stored in non-transitory storage media accessible to the processor [104],
render the computing device [100] into a special-purpose machine that is customized to
perform the operations specified in the instructions. The computing device [100] further includes a Read Only Memory (ROM) [108] or other static storage device coupled to the bus [102] for storing static information and instructions for the processor [104].
[0047] A storage device [110], such as a magnetic disk, optical disk, or solid-state drive is
provided and coupled to the bus [102] for storing information and instructions. The computing device [100] may be coupled via the bus [102] to a display [112], such as a Cathode Ray Tube (CRT), Liquid Crystal Display (LCD), Light Emitting Diode (LED) display, Organic LED (OLED) display, etc. for displaying information to a computer user. An input device [114],
including alphanumeric and other keys, touch screen input means, etc. may be coupled to the
bus [102] for communicating information and command selections to the processor [104]. Another type of user input device may be a cursor controller [116], such as a mouse, a trackball, or cursor direction keys, for communicating direction information and command selections to the processor [104], and for controlling cursor movement on the display [112]. The input device
typically has two degrees of freedom in two axes, a first axis (e.g., x) and a second axis (e.g., y), that allow the device to specify positions in a plane.
[0048] The computing device [100] may implement the techniques described herein using
customized hard-wired logic, one or more ASICs or FPGAs, firmware and/or program logic
which in combination with the computing device [100] causes or programs the computing device [100] to be a special-purpose machine. According to one implementation, the techniques herein are performed by the computing device [100] in response to the processor [104] executing one or more sequences of one or more instructions contained in the main memory
[106]. Such instructions may be read into the main memory [106] from another storage
medium, such as the storage device [110]. Execution of the sequences of instructions contained in the main memory [106] causes the processor [104] to perform the process steps described herein. In alternative implementations of the present disclosure, hard-wired circuitry may be used in place of or in combination with software instructions.
[0049] The computing device [100] also may include a communication interface [118] coupled to the bus [102]. The communication interface [118] provides a two-way data communication coupling to a network link [120] that is connected to a local network [122]. For example, the communication interface [118] may be an integrated services digital network (ISDN) card,
cable modem, satellite modem, or a modem to provide a data communication connection to a
corresponding type of telephone line. As another example, the communication interface [118] may be a local area network (LAN) card to provide a data communication connection to a compatible LAN. Wireless links may also be implemented. In any such implementation, the communication interface [118] sends and receives electrical, electromagnetic or optical signals
that carry digital data streams representing various types of information.
[0050] The computing device [100] can send messages and receive data, including program
code, through the network(s), the network link [120] and the communication interface [118].
In the Internet example, a server [130] might transmit a requested code for an application
program through the Internet [128], the ISP [126], the local network [122], the host [124] and
the communication interface [118]. The received code may be executed by the processor [104] as it is received, and/or stored in the storage device [110], or other non-volatile storage for later execution.
[0051] FIG. 2 illustrates an exemplary block diagram [200] of a network performance
management system [200], in accordance with the exemplary embodiments of the present
invention. Referring to FIG. 2, the network performance management system [200] comprises
various sub-systems such as: an integrated performance management module [202], a
normalization layer [204], a computation layer [206], an anomaly detection layer [208], a
streaming engine [210], a load balancer [212], an operations and management system [214],
an API gateway system [216], an analysis engine [218], a parallel computing framework [220],
a forecasting engine [222], a distributed file system [224], a mapping layer [226], a distributed
data lake [228], a scheduling layer [230], a reporting engine [232], a message broker [234], a
graph layer [236], a caching layer [238], a service quality manager [240], a correlation engine [242], and an ingestion layer [244].
[0052] Exemplary connections between these subsystems are also shown in FIG. 2. However, it will be appreciated by those skilled in the art that the present disclosure is not limited to the connections shown in the diagram, and any other connections between the various subsystems that are needed to realise the disclosed effects are within the scope of this disclosure.
[0053] The various components of the system [200] may include the following. The Integrated Performance Management module [202] comprises a Performance Management Engine [246] and a Key Performance Indicator (KPI) Engine [248].
[0054] Performance Management Engine [246]: The Performance Management engine [246] is a crucial component, responsible for collecting, processing, and managing performance counter data from various data sources within the network. The gathered data includes metrics
such as connection speed, latency, data transfer rates, and many others. This raw data is then
processed and aggregated as required, forming a comprehensive overview of network performance. The processed information is then stored in a Distributed Data Lake [228], a centralized, scalable, and flexible storage solution, allowing for easy access and further analysis. The Performance Management engine [246] also enables the reporting and
visualization of this performance counter data, thus providing network administrators with a
real-time, insightful view of the network's operation. Through these visualizations, operators can monitor the network's performance, identify potential issues, and make informed decisions to enhance network efficiency and reliability.
[0055] Key Performance Indicator (KPI) Engine [248]: The Key Performance Indicator (KPI)
Engine [248] is a dedicated component tasked with managing the KPIs of all the network
elements. It uses the performance counters, which are collected and processed by the
Performance Management engine from various data sources. These counters, encapsulating
crucial performance data, are harnessed by the KPI engine [248] to calculate essential KPIs.
These KPIs might include data throughput, latency, packet loss rate, and more. Once the KPIs are computed, they are segregated based on the aggregation requirements, offering a multi-layered and detailed understanding of network performance. The processed KPI data is then stored in the Distributed Data Lake [228], ensuring a highly accessible, centralized, and
scalable data repository for further analysis and utilization. Similar to the Performance
Management engine, the KPI engine [248] is also responsible for reporting and visualization of KPI data. This functionality allows network administrators to gain a comprehensive, visual understanding of the network's performance, thus supporting informed decision-making and efficient network management.
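As described above, each KPI is computed from the performance counters collected by the Performance Management engine [246]. The following is a minimal sketch of that derivation, assuming illustrative counter names and formulas that are not specified in the disclosure.

```python
# Minimal sketch: deriving example KPIs from raw performance counters.
# Counter names and KPI formulas are illustrative assumptions.

def compute_kpis(counters):
    """Aggregate raw counters into example KPIs."""
    sent = counters["packets_sent"]
    lost = counters["packets_lost"]
    samples = counters["latency_samples_ms"]
    return {
        # Packet loss rate as a percentage of packets sent.
        "packet_loss_rate": 100.0 * lost / sent if sent else 0.0,
        # Average latency across the collected samples (milliseconds).
        "avg_latency_ms": sum(samples) / len(samples),
    }
```

For instance, 2 lost packets out of 200 sent yields a packet loss rate of 1.0%, and latency samples of 10, 20, and 30 ms yield an average latency of 20.0 ms.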
[0056] Ingestion layer [244]: Its primary function is to establish an environment capable of handling diverse types of incoming data. This data may include Alarms, Counters, Configuration parameters, Call Detail Records (CDRs), Infrastructure metrics, Logs, and Inventory data, all of which are crucial for maintaining and optimizing the network's
performance. Upon receiving this data, the Ingestion layer [244] processes it by validating its
integrity and correctness to ensure it is fit for further use. Following validation, the data is routed to various components of the system, including the normalization layer [204], Streaming Engine [210], Streaming Analytics, and Message Brokers. The destination is chosen based on where the data is required for further analytics and processing. By serving as the first point of
contact for incoming data, the Ingestion layer [244] plays a vital role in managing the data flow
within the system, thus supporting comprehensive and accurate network performance analysis.
[0057] Normalization layer [204]: The normalization Layer [204] serves to standardize, enrich,
and store data into the appropriate databases. It takes in data that has been ingested and adjusts
it to a common standard, making it easier to compare and analyse. This process of
"normalization" reduces redundancy and improves data integrity. Upon completion of normalization, the data is stored in various databases like the Distributed Data Lake [228], Caching Layer, and Graph Layer, depending on its intended use. The choice of storage determines how the data can be accessed and used in the future. Additionally, the normalization
Layer [204] produces data for the Message Broker, a system that enables communication
between different parts of the performance management system through the exchange of data
messages. Moreover, the normalization Layer [204] supplies the standardized data to several
other subsystems. These include the Analysis Engine [218] for detailed data examination, the
Correlation Engine [242] for detecting relationships among various data elements, the Service
Quality Manager for maintaining and improving the quality of services, and the Streaming Engine [210] for processing real-time data streams. These subsystems depend on the normalized data to perform their operations effectively and accurately, demonstrating the normalization Layer's [204] critical role in the entire system.
[0058] Caching layer [238]: The Caching Layer [238] plays a significant role in data management and optimization. During the initial phase, the normalization Layer [204] processes incoming raw data to create a standardized format, enhancing consistency and comparability. The normalization Layer [204] then inserts this normalized data into various databases.
One such database is the Caching Layer [238]. The Caching Layer [238] is a high-speed data
storage layer which temporarily holds data that is likely to be reused, to improve speed and performance of data retrieval. By storing frequently accessed data in the Caching Layer [238], the system significantly reduces the time taken to access this data, improving overall system efficiency and performance. Further, the Caching Layer [238] serves as an intermediate layer
between the data sources and the sub-systems, such as the Analysis Engine [218], Correlation
Engine [242], Service Quality Manager, and Streaming Engine [210]. The normalization Layer [204] is responsible for providing these sub-systems with the necessary data from the Caching Layer [238].
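The caching behaviour described above (hold frequently accessed data in fast storage, fall through to the slower store on a miss) can be sketched as a simple read-through cache; the class and the backing-store interface are illustrative assumptions.

```python
# Read-through cache sketch: frequently accessed data is served from the
# in-memory cache; misses fall through to the slower backing store.
# The interfaces here are illustrative assumptions, not the disclosure.

class CachingLayer:
    def __init__(self, backing_store):
        self._store = backing_store   # e.g. the distributed data lake
        self._cache = {}

    def get(self, key):
        if key in self._cache:        # cache hit: fast path
            return self._cache[key]
        value = self._store[key]      # cache miss: fetch and remember
        self._cache[key] = value
        return value
```

After the first lookup of a key, subsequent lookups are served from the cache without touching the backing store, which is what reduces retrieval time for frequently accessed data.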
[0059] Computation layer [206]: The Computation Layer [206] serves as the main hub for complex data processing tasks. In the initial stages, raw data is gathered, normalized, and enriched by the normalization Layer [204]. The normalization Layer [204] then inserts this standardized data into multiple databases including the Distributed Data Lake [228], Caching Layer [238], and Graph Layer [236], and also feeds it to the Message Broker [234]. Within the Computation Layer [206],
several powerful sub-systems such as the Analysis Engine [218], Correlation Engine [242],
Service Quality Manager, and Streaming Engine [210], utilize the normalized data. These systems are designed to execute various data processing tasks. The Analysis Engine [218] performs in-depth data analytics to generate insights from the data. The Correlation Engine [242] identifies and understands the relations and patterns within the data. The Service Quality
Manager assesses and ensures the quality of the services. And the Streaming Engine [210]
processes and analyses the real-time data feeds. In essence, the Computation Layer [206] is
where all major computation and data processing tasks occur. It uses the normalized data
provided by the normalization Layer [204], processing it to generate useful insights, ensure
service quality, understand data patterns, and facilitate real-time data analytics.
[0060] Message broker [234]: The Message Broker [234] operates as a publish-subscribe messaging system. It orchestrates and maintains the real-time flow of data from various sources and applications. At its core, the Message Broker [234] facilitates communication between data
producers and consumers through message-based topics. This creates an advanced platform for
contemporary distributed applications. With the ability to accommodate a large number of permanent or ad-hoc consumers, the Message Broker [234] demonstrates immense flexibility in managing data streams. Moreover, it leverages the filesystem for storage and caching, boosting its speed and efficiency. The design of the Message Broker [234] is centred around
reliability. It is engineered to be fault-tolerant and mitigate data loss, ensuring the integrity and
consistency of the data. With its robust design and capabilities, the Message Broker [234] forms a critical component in managing and delivering real-time data in the system.
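The publish-subscribe pattern described above can be sketched as a minimal in-memory model; the topic names and callback interface are illustrative assumptions, not the disclosed broker.

```python
# Minimal publish-subscribe sketch with message-based topics.
# An illustrative in-memory model, not the disclosed Message Broker [234].
from collections import defaultdict

class MessageBroker:
    def __init__(self):
        # topic -> list of consumer callbacks
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        # Register a permanent or ad-hoc consumer for a topic.
        self._subscribers[topic].append(callback)

    def publish(self, topic, message):
        # Deliver the message to every consumer of the topic.
        for callback in self._subscribers[topic]:
            callback(message)
```

A producer publishes to a topic without knowing its consumers, which is the decoupling that lets many permanent or ad-hoc consumers share the same data stream.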
[0061] Graph layer [236]: The Graph Layer [236], serving as the Relationship Modeler, plays
a pivotal role in the Integrated Performance Management system. It can model a variety of data
types, including alarm, counter, configuration, CDR data, Infra-metric data, Probe Data, and
Inventory data. Equipped with the capability to establish relationships among diverse types of
data, the Relationship Modeler offers extensive modelling capabilities. For instance, it can
model Alarm and Counter data, Probing and Alarm data, elucidating their interrelationships.
Moreover, the Modeler is adept at processing the steps provided in a model and delivering the results to the requesting system, whether it be a Parallel Computing system, Workflow Engine, Query Engine, Correlation Engine [242], Performance Management Engine [246], or KPI Engine [248]. With its powerful modelling and processing capabilities, the
Graph Layer [236] forms an essential part of the system, enabling the processing and analysis
of complex relationships between various types of network data.
[0062] Scheduling layer [230]: The Scheduling Layer [230] is endowed with the ability to execute tasks at predetermined intervals set according to user preferences. A task might be an activity performing a service call, an API call to another microservice, the execution of an
Elastic Search query, and storing its output in the Distributed Data Lake [228] or Distributed
File System or sending it to another micro-service. The versatility of the Scheduling Layer
[230] extends to facilitating graph traversals via the Mapping Layer to execute tasks. This
crucial capability enables seamless and automated operations within the system, ensuring that
various tasks and services are performed on schedule, without manual intervention, enhancing
the system's efficiency and performance. In sum, the Scheduling Layer [230] orchestrates the systematic and periodic execution of tasks, making it an integral part of the efficient functioning of the entire system.
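The interval-driven execution described above can be sketched with a tick-based loop; this is an illustrative simplification of the Scheduling Layer [230], with assumed task and interval representations.

```python
# Sketch of interval-based task execution. The tick-driven loop is an
# illustrative simplification, not the disclosed Scheduling Layer [230].

class Scheduler:
    def __init__(self):
        self._tasks = []   # (interval, task) pairs

    def every(self, interval, task):
        # Register a task to run at the given predetermined interval.
        self._tasks.append((interval, task))

    def run(self, ticks):
        # Run each task whenever its interval divides the current tick,
        # so scheduled work proceeds without manual intervention.
        for t in range(1, ticks + 1):
            for interval, task in self._tasks:
                if t % interval == 0:
                    task(t)
```

A task registered with `every(2, ...)` fires on ticks 2, 4, 6, and so on; a real scheduler would use wall-clock timers rather than ticks.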
[0063] Analysis Engine [218]: The Analysis Engine [218] is designed to provide an
environment where users can configure and execute workflows for a wide array of use-cases. This facility aids in the debugging process and facilitates a better understanding of call flows. With the Analysis Engine [218], users can perform queries on data sourced from various subsystems or external gateways. This capability allows for an in-depth overview of data and
aids in pinpointing issues. The system's flexibility allows users to configure specific policies
aimed at identifying anomalies within the data. When these policies detect abnormal behaviour or policy breaches, the system sends notifications, ensuring swift and responsive action. In essence, the Analysis Engine [218] provides a robust analytical environment for systematic data interrogation, facilitating efficient problem identification and resolution, thereby
contributing significantly to the system's overall performance management.
[0064] Parallel Computing Framework [220]: The Parallel Computing Framework [220] provides a user-friendly yet advanced platform for executing computing tasks in parallel. This framework showcases both scalability and fault tolerance, crucial for managing vast amounts
of data. Users can input data via Distributed File System (DFS) [224] locations or Distributed
Data Lake (DDL) indices. The framework supports the creation of task chains by interfacing with the Service Configuration Management (SCM) Sub-System. Each task in a workflow is executed sequentially, but multiple chains can be executed simultaneously, optimizing processing time. To accommodate varying task requirements, the service supports the
allocation of specific host lists for different computing tasks. The Parallel Computing
Framework [220] is an essential tool for enhancing processing speeds and efficiently managing computing resources, significantly improving the system's performance management capabilities.
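The chain semantics described above (tasks within a chain run sequentially, while multiple chains run simultaneously) can be sketched as follows, assuming plain Python callables as tasks.

```python
# Sketch: each task chain executes sequentially, while independent chains
# execute in parallel threads. Task contents are illustrative assumptions.
from concurrent.futures import ThreadPoolExecutor

def run_chain(chain, data):
    # Tasks within a chain execute one after another, each consuming
    # the previous task's output.
    for task in chain:
        data = task(data)
    return data

def run_chains(chains, data):
    # Independent chains execute simultaneously, optimizing processing time.
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(run_chain, chain, data) for chain in chains]
        return [f.result() for f in futures]
```

For example, running the chains `[+1, ×2]` and `[−3]` on the input 5 yields 12 and 2 respectively, with the two chains free to run concurrently.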
[0065] Distributed File System [224]: The Distributed File System (DFS) [224] enables multiple clients to access and interact with data seamlessly. This file system is designed to manage data files that are partitioned into numerous segments known as chunks. In the context of a network with vast data, the DFS [224] effectively allows for the distribution of data across multiple nodes. This architecture enhances both the scalability and redundancy of the system, ensuring optimal performance even with large data sets. The DFS [224] also supports diverse operations, facilitating flexible interaction with and manipulation of data. This accessibility is paramount for a system that requires constant data input and output, as is the case in a robust performance management system.
[0066] Load Balancer [212]: The Load Balancer (LB) [212] is a vital component of the Integrated Performance Management System, designed to efficiently distribute incoming network traffic across a multitude of backend servers or microservices. Its purpose is to ensure the even distribution of data requests, leading to optimized server resource utilization, reduced
latency, and improved overall system performance. The LB [212] implements various routing strategies to manage traffic. These include round-robin scheduling, header-based request dispatch, and context-based request dispatch. Round-robin scheduling is a simple method of rotating requests evenly across available servers. In contrast, header- and context-based dispatching allow for more intelligent, request-specific routing. Header-based dispatching routes requests based on data contained within the headers of the Hypertext Transfer Protocol (HTTP) requests. Context-based dispatching routes traffic based on contextual information about the incoming requests. For example, in an event-driven architecture, the LB [212] manages events and event acknowledgments, forwarding requests or responses to the specific microservice that has requested the event. This system ensures efficient, reliable, and prompt handling of requests, contributing to the robustness and resilience of the overall performance management system.
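Of the routing strategies above, round-robin scheduling is the simplest to illustrate. The following sketch assumes opaque server handles and requests; it is an illustrative model, not the disclosed load balancer.

```python
# Round-robin dispatch sketch: rotate requests evenly across servers.
# Server handles and request objects are illustrative assumptions.
from itertools import cycle

class RoundRobinBalancer:
    def __init__(self, servers):
        self._servers = cycle(servers)   # endless rotation over the pool

    def route(self, request):
        # Each request goes to the next server in the rotation, so load
        # is spread evenly across the backend pool.
        return next(self._servers), request
```

With two servers, consecutive requests alternate between them; header- or context-based dispatch would instead inspect the request before choosing a target.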
[0067] Streaming Engine [210]: The Streaming Engine [210], also referred to as Stream
Analytics, is a critical subsystem in the Integrated Performance Management System. This
engine is specifically designed for high-speed data pipelining to the User Interface (UI). Its
core objective is to ensure real-time data processing and delivery, enhancing the system's ability to respond promptly to dynamic changes. Data is received from various connected subsystems and processed in real-time by the Streaming Engine [210]. After processing, the data is streamed to the UI, fostering rapid decision-making and responses. The Streaming
Engine [210] cooperates with the Distributed Data Lake [228], Message Broker [234], and
Caching Layer [238] to provide seamless, real-time data flow. Stream Analytics is designed to
perform required computations on incoming data instantly, ensuring that the most relevant and
up-to-date information is always available at the UI. Furthermore, this system can also retrieve
data from the Distributed Data Lake [228], Message Broker [234], and Caching Layer [238] as
per the requirement and deliver it to the UI in real-time. The streaming engine's [210] ultimate goal is to provide fast, reliable, and efficient data streaming, contributing to the overall performance of the management system.
[0068] Reporting Engine [232]: The Reporting Engine [232] is a key subsystem of the Integrated Performance Management System. The fundamental purpose of designing the Reporting Engine [232] is to dynamically create report layouts of API data, tailored to individual client requirements, and deliver these reports via the Notification Engine. The Reporting Engine [232] serves as the primary interface for creating custom reports based on the data visualized through the client's dashboard. These custom dashboards, created by the client through the User
Interface (UI), provide the basis for the Reporting Engine [232] to process and compile data from various interfaces. The main output of the Reporting Engine [232] is a detailed report generated in Excel format. The Reporting Engine’s [232] unique capability to parse data from different subsystem interfaces, process it according to the client's specifications and
requirements, and generate a comprehensive report makes it an essential component of this
performance management system. Furthermore, the Reporting Engine [232] integrates seamlessly with the Notification Engine to ensure timely and efficient delivery of reports to clients via email, ensuring the information is readily accessible and usable, thereby improving overall client satisfaction and system usability.
[0069] Correlation Engine [242]: The Correlation Engine [242] provides an interactive user interface to correlate one or more user-defined workflow steps. The one or more user-defined workflow steps are then executed based on data received over the message broker [234], by correlating it with the network node data inserted by the normalization layer [204].
[0070] Anomaly Detection Layer [208]: The Anomaly Detection Layer [208] identifies one or more deviations or one or more irregularities in a network behaviour. The one or more deviations or one or more irregularities may indicate one or more potential issues or one or more threats in the network. The Anomaly Detection Layer [208] analyses an incoming data
stream for identifying one or more deviations or one or more irregularities in a network behaviour.
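One common way to flag such deviations in an incoming data stream is a standard-deviation test against historical values. The following sketch is an illustrative assumption, not the disclosed detection logic.

```python
# Sketch of a deviation check: flag counter values that stray more than
# k standard deviations from the historical mean. The threshold k is an
# illustrative assumption, not part of the disclosure.
import statistics

def is_anomalous(history, value, k=3.0):
    """Return True if `value` deviates from the historical pattern."""
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history)   # population standard deviation
    if stdev == 0:
        # Constant history: any different value is a deviation.
        return value != mean
    return abs(value - mean) > k * stdev
```

Values close to the historical mean pass unflagged, while a sudden jump (such as traffic appearing on a previously quiet counter) is reported as a potential issue or threat.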
[0071] Operations and management system [214]: The operations and management system [214] ensures seamless operation and optimization of the infrastructure of the network. The operations and management system [214] may include one or more tools and may operate according to one or more processes for monitoring, managing, and maintaining the performance of the network.
[0072] Service quality manager [240]: The service quality manager [240] is configured to observe and maintain the quality of service delivered across the network. The service quality manager [240] performs monitoring, analysing, and optimizing of one or more aspects of service delivery for ensuring consistent performance and customer satisfaction.
[0073] API gateway system [216]: The Application Programming Interface (API) gateway system [216] ensures secured and efficient communication between a plurality of microservices, devices, and systems associated with the network. The API gateway system [216] is configured to manage and optimize the lifecycle of APIs, and to handle traffic routing, authentication, and authorization for ensuring the secured and efficient communication.
[0074] Mapping layer [226]: The mapping layer [226] may facilitate the transformation of data into a plurality of formats, structures, and schemas within the network. The mapping layer [226] ensures interoperability and compatibility by mapping one or more data elements from one representation to another representation.
[0075] Forecasting engine [222]: The forecasting engine [222] may predict one or more future data trends, the behaviour of the network, and an outcome of the network based on one or more historical data patterns. The forecasting engine [222] may utilize one or more machine learning techniques and/or one or more artificial intelligence techniques for predicting the one or more data trends, the behaviour of the network, and the outcome of the network.
[0076] Referring to FIG. 3, an exemplary block diagram of a system [300] for automatically detecting a new network node associated with a network is shown, in accordance with the exemplary implementations of the present disclosure. The system [300] comprises at least one
transceiver unit [302], at least one fetching unit [304], at least one identification unit [306], at
least one generation unit [308], at least one detection unit [310], at least one storage unit [312],
and at least one computing unit [314]. Also, all of the components/units of the system [300] are assumed to be connected to each other unless otherwise indicated below. As shown in the figures, all units shown within the system [300] should also be assumed to be connected to each other. Also, in FIG. 3 only a few units are shown; however, the system [300] may comprise multiple such units, or the system [300] may comprise any number of said units, as required to implement the features of the present disclosure. Further, in an implementation, the system [300] may be present in a user device/user equipment to implement the features of the present disclosure. The system [300] may be a part of the user device or may be independent of the user device, but in communication with the user device (may also be referred to herein as a UE). In another implementation, the system [300] may reside in a server or a network entity. In yet another implementation, the system [300] may reside partly in the server/network entity and partly in the user device.
[0077] The system [300] is configured for automatically detecting the new network node associated with the network, with the help of the interconnection between the components/units of the system [300].
[0078] Further, in accordance with the present disclosure, it is to be acknowledged that the
functionality described for the various components/units can be implemented interchangeably. While specific embodiments may disclose a particular functionality of these units for clarity, it is recognized that various configurations and combinations thereof are within the scope of the disclosure. The functionality of specific units as disclosed in the disclosure should not be
construed as limiting the scope of the present disclosure. Consequently, alternative
arrangements and substitutions of units, provided they achieve the intended functionality described herein, are considered to be encompassed within the scope of the present disclosure.
[0079] In order to automatically detect the new network node associated with the network, the
transceiver unit [302] is configured to receive, via an Integrated Performance Management (IPM) module [202] from a load balancer [212], a load balance request associated with at least a first network node in the network, wherein the first network node is associated with at least a set of counter data. The transceiver unit [302] is further configured to transmit, to a computation layer
[206] in the network, a key performance indicator (KPI) data request based on the load balance request.
[0080] The present disclosure encompasses that the load balance request may refer to a signal which may be received from a user for distributing incoming traffic among one or more resources such as a server, a node, or any other such resource. The load balance request is associated with an existing network node (i.e., the first network node). Further, the first network node is associated with the set of counter data such as a bandwidth usage, a latency, a packet loss, and any other such counter data as may be appreciated by a person skilled in the art. Further, the load balance request may act as a prompt in determining a presence of the new network node in the network.
[0081] Further, the bandwidth usage may refer to an amount of data transmitted over the network within a pre-defined amount of time. The latency refers to the time consumed for transmission of data from a source to a destination in the network. The packet loss refers to an event wherein one or more data packets transmitted from the source to the destination in the network fail to reach the destination.
[0082] The present disclosure encompasses that the KPI data request refers to a request for
gathering a data associated with one or more performance metrics of the network such as the
latency and the bandwidth usage. Further, the data provides information about a current state or an operational state of the network.
[0083] Further, the fetching unit [304] is configured to fetch via the IPM module [202] from
the computation layer [206] in the network, a KPI data based on the KPI data request, wherein
the KPI data is associated with one or more counters of the network.
[0084] The present disclosure encompasses that the KPI data is fetched from a pre-processed
data stored in a database. Further, the pre-processed data may be a set of data stored by a
30 network administrator.
[0085] The present disclosure encompasses that the fetching unit may utilize one or more fetching techniques for fetching the KPI data. The one or more fetching techniques may include a set of instructions to fetch the KPI data from the pre-processed data which is stored in the
database. Further, the one or more fetching techniques may be pre-stored in the database and/or may be pre-defined by the network administrator. Further, the one or more fetching techniques may include any such technique that may be appreciated by a person skilled in the art to implement the solution of the present disclosure.
[0086] The present disclosure encompasses that the KPI data refers to a data associated with one or more performance metrics of the network such as an availability data, a response time, the latency, and the bandwidth usage. Further, the data provides an information about a current state or an operational state of the network.
[0087] The availability data may refer to a data that indicates a percentage of time that a network node or the entire network is operational and accessible for use. The latency data refers to a time consumed for transmission of a data from the source to the destination in the network. The packet loss data refers to an event wherein one or more data packets which are transmitted from the source to the destination in the network fail to reach the destination. The response time data refers to a measurement of time elapsed between sending a request to an entity such as Access and Mobility Management Function (AMF) in the network and receiving a response to the corresponding request. For example, a request to allocate a resource (R1) to a user device is transmitted to the AMF and a corresponding response (such as allocation of R1) is received from the AMF; hence, the response time in this scenario refers to a time elapsed between the transmission of the request and the receipt of the response.
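By way of a non-limiting illustration only, measuring such a response time may be sketched as follows. The function names and the simulated request/response callables are illustrative assumptions, not part of the disclosure:

```python
import time

def measure_response_time(send_request, receive_response):
    """Response time: time elapsed between transmitting a request
    (e.g., a resource-allocation request to the AMF) and receiving
    the corresponding response. Illustrative sketch only."""
    start = time.monotonic()
    send_request()        # e.g., transmit a request to allocate resource R1
    receive_response()    # e.g., receive the response allocating R1
    return time.monotonic() - start

# Hypothetical stand-ins for the AMF exchange described above.
elapsed = measure_response_time(lambda: None, lambda: time.sleep(0.01))
assert elapsed >= 0.01
```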
[0088] The present disclosure encompasses that the KPI data may be derived from aggregating, analysing and interpreting the one or more counters. For instance, the latency (KPI) may be calculated based on a latency counter. Also, for instance, a counter may monitor a resource usage rate every 5 minutes and, through the monitored data, the corresponding resource utilization KPI is derived. Hence, each KPI is associated with a corresponding counter in the network.
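As a non-limiting illustration of deriving a KPI from periodic counter samples, a resource utilization KPI may be sketched as a simple aggregate of usage-rate readings; the averaging formula and sample values below are illustrative assumptions, not prescribed by the disclosure:

```python
def utilization_kpi(counter_samples):
    """Derive a resource-utilization KPI by aggregating periodic counter
    samples (e.g., a usage rate monitored every 5 minutes).
    Illustrative sketch: here the KPI is the mean of the samples."""
    if not counter_samples:
        return 0.0
    return sum(counter_samples) / len(counter_samples)

# Hypothetical usage rates (%) sampled every 5 minutes over 30 minutes.
samples = [40.0, 55.0, 60.0, 50.0, 45.0, 50.0]
assert utilization_kpi(samples) == 50.0
```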
[0089] The present disclosure encompasses that for fetching the KPI data via the IPM module [202] from the computation layer [206], the transceiver unit [302] is further configured to transmit, via the computation layer [206] in the network to a server, a pre-stored data access request associated with one or more sets of pre-stored data. The transceiver unit [302] is further configured to receive, via the computation layer [206] in the network from the server, at least
one set of pre-stored data from the one or more sets of pre-stored data based on the pre-stored data access request. Thereafter, the generation unit [308] is further configured to generate, via the computation layer [206] in the network, the KPI data based on the at least one set of pre-stored data.
[0090] In other words, the transceiver unit [302] transmits the pre-stored data access request (i.e., a request or a query for accessing the pre-stored data stored in the database) to a server. Further, the database may be present in the server or associated with the server. In response to the pre-stored data access request, the server provides the required data (i.e., a target pre-stored data). Further, the generation unit [308] may further process the pre-stored data to obtain the KPI data. The generation unit [308] may utilize one or more data processing techniques for generating the KPI data. The data processing techniques may include, but are not limited to, a data collection technique, a data preparation technique, and a data sorting technique. Further, the one or more data processing techniques may be pre-stored in the database and/or may be pre-defined by the network administrator. Further, the one or more data processing techniques may include any such technique that may be appreciated by a person skilled in the art to implement the solution of the present disclosure.
[0091] The present disclosure encompasses that for receiving the at least one set of pre-stored data, the generation unit [308] is further configured to generate the one or more sets of pre-stored data. Further, to generate the one or more sets of pre-stored data, the transceiver unit [302] is configured to receive, at a normalization layer [204] from an ingestion layer [244] of the network, a processed data based on an input data received at the ingestion layer [244]. The generation unit [308] is configured to generate, from the normalization layer [204] of the network at the server, one or more sets of normalized data based on the processed data. Further, the storage unit [312] is configured to store, from the normalization layer [204] of the network, the one or more sets of normalized data, wherein the stored one or more sets of normalized data corresponds to the one or more sets of pre-stored data.
[0092] In other words, for generating the one or more sets of pre-stored data, the transceiver unit [302] is configured to receive the processed data based on the input data, wherein the input data is received from the ingestion layer [244]. Further, the one or more sets of normalized data are generated based on the processed data. Further, the one or more sets of normalized data are stored in the storage unit [312]. Also, the database may be present in the storage unit [312].
[0093] Further, the one or more sets of normalized data refers to a set of data in which unstructured data and/or a redundant data is removed. The normalized data refers to a data which is processed in accordance with a pre-defined format to reduce redundancy and to improve a data integrity. For example, the data may be processed by removing one or more repeating groups, removing one or more partial dependencies, and removing one or more transitive dependencies.
[0094] Further, the normalization layer may utilize one or more data normalization techniques such as a Z-score Normalization, a Min-Max Normalization, and a Normalization by decimal scaling, for generating the one or more sets of the normalized data. Further, the one or more data normalization techniques may be pre-stored in the database and/or may be pre-defined by the network administrator. Further, the data normalization techniques may include any such technique that may be appreciated by a person skilled in the art to implement the solution of the present disclosure.
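The three named normalization techniques may be sketched, purely as a non-limiting illustration, as follows; the sample data and function names are illustrative assumptions, not part of the claimed normalization layer:

```python
def z_score(values):
    """Z-score normalization: (x - mean) / standard deviation."""
    mean = sum(values) / len(values)
    std = (sum((x - mean) ** 2 for x in values) / len(values)) ** 0.5
    return [(x - mean) / std for x in values]

def min_max(values):
    """Min-Max normalization: rescale values into the range [0, 1]."""
    lo, hi = min(values), max(values)
    return [(x - lo) / (hi - lo) for x in values]

def decimal_scaling(values):
    """Normalization by decimal scaling: divide by 10^j so that |x| < 1."""
    j = len(str(int(max(abs(x) for x in values))))
    return [x / (10 ** j) for x in values]

data = [10.0, 20.0, 30.0, 40.0]
assert min_max(data) == [0.0, 1/3, 2/3, 1.0]
assert decimal_scaling(data) == [0.1, 0.2, 0.3, 0.4]
```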
[0095] The present disclosure encompasses that the system [300] is further configured to update a set of existing dashboards based on the one or more sets of pre-stored data.
[0096] The present disclosure encompasses that the set of existing dashboards refers to a collection of graphical user interfaces (GUIs) or any visual representations which displays the one or more KPIs and a data related to the KPIs to the network administrator or any other concerned authority such as an operator, a network manager, etc. Further, the set of existing dashboards may be updated in real time based on the one or more sets of pre-stored data. Further, the set of existing dashboards may be updated in pre-defined time intervals.
[0097] The present disclosure encompasses that each pre-stored data from the one or more sets of pre-stored data is associated with at least one counter from the one or more counters of the network. For example, a pre-stored data such as bandwidth usage data (i.e., pre-stored data) is associated with a bandwidth usage counter (i.e., counter) in the network.
[0098] The fetching unit [304] is configured to fetch the target counter data from the storage unit [312] based on the one or more sets of pre-stored data stored at the storage unit
[312]. For example, if the pre-stored data relates to the bandwidth usage data, then the fetching unit [304] may fetch the bandwidth counter data from the storage unit [312].
[0099] The identification unit [306] via the IPM module [202], is configured to identify a target counter from the one or more counters, based on the KPI data. The present disclosure encompasses that the identification unit [306] may utilize one or more identification techniques such as a counter identification technique and a deep packet inspection (DPI) technique for identifying the target counter from the one or more counters. Further, the one or more identification techniques may be pre-stored in the database and/or may be pre-defined by the network administrator. Further, the one or more identification techniques may include any such technique that may be appreciated by a person skilled in the art to implement the solution of the present disclosure.
[0100] Also, the fetching unit [304] is further configured to fetch, from a storage unit [312], a target counter data associated with the target counter. For example, in case of the bandwidth counter (i.e., target counter), the fetching unit [304] may fetch the target counter data such as a bandwidth counter data, for example, bandwidth of the network is 40 Mbps.
[0101] Further, the generation unit [308] via the IPM module [202], is configured to generate at least one of a computed data based on the target counter data and a notification associated with the computed data (i.e., a dashboard data which may include an information related to a new hardware (i.e., new node) installation). The present disclosure encompasses that the computed data may include, but is not limited to, a data which indicates an availability of the new network node, a location of the new network node, and one or more patterns associated with the new network node such as a data transmission pattern. Further, the computed data may include any relevant data which may indicate a presence of the new network node in the network.
[0102] As used herein, the “computed data” may refer to a processed and analysed data generated from the target counter data, to detect patterns associated with the new network node. Further, the computed data may be generated by utilising a statistical model and machine learning techniques, wherein said statistical model and said machine learning techniques may utilise at least one of the target counter data and the KPI data to identify an information associated with the new network node such as the new network node performance, usage, and
trends. The computed data may include indicators of new network node availability, a location data, and patterns associated with data transmission, network usage, and performance metrics. For example, a target counter data may indicate that 100,000 data packets were transmitted through Cell Tower X in the past 24 hours (timestamps 2024-07-21 00:00:00 to 2024-07-22 00:00:00), with a network protocol split of 50% TCP, 30% UDP, and 20% ICMP, and a video streaming application usage of 40% video streaming service A, 30% video streaming service B, and 30% other. Then, a computed data for the new network node may indicate that Cell Tower X has experienced a 25% increase in data transmission volume over the past 24 hours, with a peak usage of 500 Mbps during the evening hours of 7-10 PM. Further, a data transmission pattern associated with the computed data indicates a high usage of video streaming services, with a 30% increase in a video streaming service A traffic and a 20% increase in a video streaming service B traffic.
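The percentage-increase figures in the example above may be derived, as a non-limiting illustration, by comparing counter totals across periods; the function names, field names, and sample packet counts below are illustrative assumptions only:

```python
def percent_increase(previous, current):
    """Percentage increase of a counter value relative to a previous period."""
    return (current - previous) / previous * 100.0

def compute_dashboard_data(packets_prev_24h, packets_last_24h, peak_mbps):
    """Sketch of generating 'computed data' from target counter data:
    compare per-period counter totals and derive a percentage change.
    Names and values are illustrative, not prescribed by the disclosure."""
    return {
        "traffic_increase_pct": percent_increase(packets_prev_24h, packets_last_24h),
        "peak_usage_mbps": peak_mbps,
    }

# E.g., 80,000 packets in the prior day vs 100,000 in the past 24 hours.
computed = compute_dashboard_data(80_000, 100_000, peak_mbps=500)
assert computed["traffic_increase_pct"] == 25.0
```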
[0103] The present disclosure encompasses that the generation unit [308] may utilize one or more data computation techniques for generating the computed data based on the target counter data. The one or more data computation techniques may include, but are not limited to, a distributed data computation technique, a real time data computation technique, and a batch data computing technique. Further, the one or more data computation techniques may be pre-stored in the database and/or may be pre-defined by the network administrator. Further, the one or more data computation techniques may include any such technique that may be appreciated by a person skilled in the art to implement the solution of the present disclosure.
[0104] The present disclosure encompasses that the notification associated with the computed data is generated in an event the computed data is generated via the computation layer [206]. The notification may include a message that provides an alert about completion of the generation of the computed data. The notification may be displayed on a display interface of the user equipment. For ease of understanding, continuing from the example above, a notification may comprise an alert such as: the Cell Tower X has experienced a significant increase in data transmission volume. In another implementation the alert may also comprise a recommendation to increase a bandwidth allocation to ensure optimal network performance. Further, such alert may be associated with the computed data, wherein the computed data is generated based on grouping data packets transmitted per hour through Cell Tower X in the past 24 hours, recognising a pattern associated with said transmitted data packets to identify peak usage hours and the usage of video streaming applications, and thereafter
utilising statistical models to calculate percentage increases in data transmission volume and application usage.
[0105] Further, the detection unit [310] via the IPM module [202], is configured to automatically detect the new network node associated with the network based on generating at least one of the computed data and the notification associated with the computed data. As the computed data may include the data which indicates the availability of the new network node, such as the one or more patterns associated with the new network node (e.g., a data transmission pattern), by processing the computed data and the notification, which is associated with the computed data, the new network node is detected. For example, the computed data may include a pattern which indicates an abnormal utilization of resources, an availability of a new network node (Node X), or a heartbeat signal from the Node X, and the detection unit [310] processes the pattern to detect the new network node which is associated with the network. Further, the heartbeat signal is a signal which periodically indicates an operational status of the node in the network.
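As a non-limiting illustration of such detection, a heartbeat signal carrying a node identifier that is absent from a known-node inventory may be treated as indicating a newly installed node; the field names and identifiers below are illustrative assumptions, not the claimed detection unit:

```python
def detect_new_nodes(known_nodes, heartbeat_signals):
    """Sketch of automatic new-node detection: a heartbeat signal from a
    node identifier not present in the known-node inventory indicates a
    newly installed network node. Field names are illustrative."""
    seen = set()
    new_nodes = []
    for signal in heartbeat_signals:
        node_id = signal["node_id"]
        if node_id not in known_nodes and node_id not in seen:
            seen.add(node_id)       # report each new node only once
            new_nodes.append(node_id)
    return new_nodes

known = {"NodeA", "NodeB"}
heartbeats = [
    {"node_id": "NodeA", "status": "operational"},
    {"node_id": "NodeX", "status": "operational"},  # unknown -> new node
    {"node_id": "NodeX", "status": "operational"},
]
assert detect_new_nodes(known, heartbeats) == ["NodeX"]
```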
[0106] Further, to automatically detect the new network node, the detection unit [310] is configured to detect a success status associated with the notification. The transceiver unit [302] is further configured to receive, from the load balancer [212] in the network, a fetch KPI data request based on the success status associated with the notification. The present disclosure encompasses the computation unit [314], wherein the computation unit [314] is configured to compute a new network node KPI data associated with the new network node based on the fetched KPI data request. The transceiver unit [302] is further configured to transmit to the load balancer [212], the new network node KPI data.
[0107] The present disclosure encompasses that the computation unit [314] may utilize one or more detection techniques such as a classification technique and a regression technique, for automatically detecting the new network node. Further, the one or more detection techniques may be pre-stored in the database and/or may be pre-defined by the network administrator. Further, the one or more detection techniques may include any such technique that may be appreciated by a person skilled in the art to implement the solution of the present disclosure.
[0108] In other words, the detection unit [310] detects the success status which is associated with the notification and thereafter the transceiver unit [302] receives the fetch KPI data request.
Further, the computation unit [314] computes the new network node KPI data (such as KPI data of a new node X) that is associated with the new network node based on the fetched KPI data request. Thereafter, the new network node KPI data is transmitted to the load balancer [212].
[0109] Referring to FIG. 4, a method flow diagram [400] for automatically detecting a new network node associated with a network, in accordance with exemplary implementations of the present disclosure is shown. In an implementation the method [400] is performed by the system [300]. Further, in an implementation, the system [300] may be present in a server device to implement the features of the present disclosure. Also, as shown in FIG. 4, the method [400] starts at step [402].
[0110] At step [404], the method [400] comprises receiving, by a transceiver unit [302] via an Integrated Performance Management (IPM) module [202] from a load balancer [212], a load balance request associated with at least a first network node in the network, wherein the first network node is associated with at least a set of counter data.
[0111] The present disclosure encompasses that the load balance request may refer to a signal which may be received from a user for distributing an incoming traffic among one or more resources such as a server, a node, and any other such source. The load balance request is associated with an existing network node (i.e., the first network node). Further, the first network node is associated with the set of counter data such as a bandwidth usage, a latency, a packet loss, and any other such counter data that may be appreciated by a person skilled in the art. Further, the load balance request may act as a prompt in determining a presence of the new network node in the network.
[0112] Further, the bandwidth usage may refer to an amount of data transmitted over the network within a pre-defined amount of time. The latency refers to a time consumed for transmission of a data from a source to a destination in the network. The packet loss refers to an event wherein one or more data packets which are transmitted from the source to the destination in the network fail to reach the destination.
[0113] The present disclosure encompasses that the KPI data request refers to a request for gathering a data associated with one or more performance metrics of the network such as the
latency and the bandwidth usage. Further, the data provides an information about a current state or an operational state of the network.
[0114] At step [406], the method [400] comprises transmitting, by the transceiver unit [302] from the IPM module [202] to a computation layer [206], a key performance indicator (KPI) data request based on the load balance request.
[0115] At step [408], the method [400] comprises fetching, by a fetching unit [304] via the IPM module [202] from the computation layer [206], a KPI data based on the KPI data request, wherein the KPI data is associated with one or more counters of the network.
[0116] The present disclosure encompasses that the KPI data is fetched from a pre-processed data stored in a database. Further, the pre-processed data may be a set of data stored by a network administrator.
[0117] The present disclosure encompasses that the fetching unit [304] may utilize one or more fetching techniques for fetching the KPI data. The one or more fetching techniques may include a set of instructions to fetch the KPI data from the pre-processed data which is stored in the database. Further, the one or more fetching techniques may be pre-stored in the database and/or may be pre-defined by the network administrator. Further, the one or more fetching techniques may include any such technique that may be appreciated by a person skilled in the art to implement the solution of the present disclosure.
[0118] The present disclosure encompasses that the KPI data refers to a data associated with one or more performance metrics of the network such as an availability data, a response time, the latency, and the bandwidth usage. Further, the data provides an information about a current state or an operational state of the network.
[0119] The availability data may refer to a data that indicates a percentage of time that a network node or the entire network is operational and accessible for use. The latency data refers to a time consumed for transmission of a data from the source to the destination in the network. The packet loss data refers to an event wherein one or more data packets which are transmitted from the source to the destination in the network fail to reach the destination. The response time data refers to a measurement of time elapsed between sending a request to an entity such
as Access and Mobility Management Function (AMF) in the network and receiving a response to the corresponding request. For example, a request to allocate a resource (R1) to a user device is transmitted to the AMF and a corresponding response (such as allocation of R1) is received from the AMF; hence, the response time in this scenario refers to a time elapsed between the transmission of the request and the receipt of the response.
[0120] The present disclosure encompasses that the fetching the KPI data via the IPM module [202] from the computation layer [206] further comprises transmitting, by the transceiver unit [302] via the computation layer [206] in the network to a server, a pre-stored data access request associated with one or more sets of pre-stored data.
[0121] The fetching the KPI data via the IPM module [202] from the computation layer [206] further comprises receiving, by the transceiver unit [302] via the computation layer [206] in the network from the server, at least one set of pre-stored data from the one or more sets of pre-stored data based on the pre-stored data access request.
[0122] The fetching the KPI data via the IPM module [202] from the computation layer [206] further comprises generating, by the generation unit [308] via the computation layer [206] in the network, the KPI data based on the at least one set of pre-stored data.
[0123] In other words, the transceiver unit [302] transmits the pre-stored data access request (i.e., a request or a query for accessing the pre-stored data stored in the database) to a server. Further, the database may be present in the server or associated with the server. In response to the pre-stored data access request, the server provides the required data (i.e., a pre-stored data). Further, the generation unit [308] may further process the pre-stored data to obtain the KPI data. The generation unit [308] may utilize one or more data processing techniques for generating the KPI data. The data processing techniques may include, but are not limited to, a data collection technique, a data preparation technique, and a data sorting technique. Further, the one or more data processing techniques may be pre-stored in the database and/or may be pre-defined by the network administrator. Further, the one or more data processing techniques may include any such technique that may be appreciated by a person skilled in the art to implement the solution of the present disclosure.
[0124] The present disclosure encompasses that the receiving of at least one set of pre-stored data further comprises generating, by the generation unit [308], the one or more sets of pre-stored data, wherein generating the one or more sets of pre-stored data further comprises receiving, by the transceiver unit [302], at a normalization layer [204] from an ingestion layer [244] of the network, a processed data based on an input data received at the ingestion layer [244].
[0125] Further, generating the one or more sets of pre-stored data comprises generating, by the generation unit [308], from the normalization layer [204] of the network at the server, the one or more sets of normalized data based on the processed data.
[0126] Further, generating the one or more sets of pre-stored data comprises storing, at the storage unit [312] from the normalization layer [204] of the network, the one or more sets of normalized data, wherein the stored one or more sets of normalized data corresponds to the one or more sets of pre-stored data.
[0127] In other words, for generating the one or more sets of pre-stored data, the transceiver unit [302] is configured to receive the processed data based on the input data. Further, the input data is received from the ingestion layer [244]. Further, the one or more sets of normalized data are generated based on the processed data. Further, the one or more sets of normalized data are stored in the storage unit [312]. Also, the database may be present in the storage unit [312].
[0128] Further, the one or more sets of normalized data refers to a set of data in which unstructured data and/or a redundant data is removed. The normalized data refers to a data which is processed in accordance with a pre-defined format to reduce redundancy and to improve a data integrity. For example, the data may be processed by removing one or more repeating groups, removing one or more partial dependencies, and removing one or more transitive dependencies. Further, the normalization layer may utilize one or more data normalization techniques such as a Z-score Normalization, a Min-Max Normalization, and a Normalization by decimal scaling, for generating the one or more sets of the normalized data. Further, the one or more data normalization techniques may be pre-stored in the database and/or may be pre-defined by the network administrator. Further, the data normalization techniques may include any such technique that may be appreciated by a person skilled in the art to implement the solution of the present disclosure.
[0129] The present disclosure encompasses that the method [400] further comprises updating a set of existing dashboards based on the one or more sets of normalized data.
[0130] The present disclosure encompasses that the set of existing dashboards refers to a collection of graphical user interfaces (GUIs) or any visual representations which displays the one or more KPIs and a data related to the KPIs to the network administrator or any other concerned authority such as an operator, a network manager, etc. Further, the set of existing dashboards may be updated in real time based on the one or more sets of pre-stored data. Further, the set of existing dashboards may be updated in pre-defined time intervals.
[0131] The present disclosure encompasses that each pre-stored data from the one or more sets of pre-stored data is associated with at least one counter from the one or more counters of the network. For example, a pre-stored data such as bandwidth usage data (i.e., pre-stored data) is associated with a bandwidth usage counter (i.e., counter) in the network.
[0132] The present disclosure encompasses that the target counter data is fetched, by the fetching unit [304], via the IPM module [202] from the storage unit [312] based on the one or more sets of pre-stored data stored at the storage unit [312]. For example, if the pre-stored data relates to a bandwidth usage data, then the fetching unit [304] may fetch a bandwidth counter data from the storage unit [312].
[0133] At step [410], the method [400] comprises identifying, by an identification unit [306] via the IPM module [202], a target counter from the one or more counters, based on the KPI data. The present disclosure encompasses that the identification unit [306] may utilize one or more identification techniques such as a counter identification technique and a deep packet inspection (DPI) technique for identifying the target counter from the one or more counters. Further, the one or more identification techniques may be pre-stored in the database and/or may be pre-defined by the network administrator. Further, the one or more identification techniques may include any such technique that may be appreciated by a person skilled in the art to implement the solution of the present disclosure.
[0134] At step [412], the method [400] comprises fetching, by the fetching unit [304] via the IPM module [202] from a storage unit [312], a target counter data associated with the target counter. For example, in case of the bandwidth counter (i.e., target counter), the fetching unit
[304] may fetch the target counter data such as a bandwidth counter data, for example, bandwidth of the network is 40 Mbps.
[0135] At step [414], the method [400] comprises generating, by a generation unit [308] via the IPM module [202], at least one of a computed data based on the target counter data and a notification associated with the computed data. The present disclosure encompasses that the computed data may include, but is not limited to, a data which indicates an availability of the new network node, a location of the new network node, and one or more patterns associated with the new network node such as a data transmission pattern. Further, the computed data may include any relevant data which may indicate the presence of the new network node in the network.
[0136] The present disclosure encompasses that the generation unit [308] may utilize one or more data computation techniques for generating the computed data based on the target counter data. Further, the one or more data computation techniques may be pre-stored in the database and/or may be pre-defined by the network administrator. Further, the one or more data computation techniques may include any such technique that may be appreciated by a person skilled in the art to implement the solution of the present disclosure.
[0137] The present disclosure encompasses that the notification associated with the computed data is generated in an event the computed data is generated via the computation layer [206]. The notification may include a message that provides an alert about completion of the generation of the computed data. The notification may be displayed on a display interface of the user equipment.
[0138] At step [416], the method [400] comprises automatically detecting, by a detection unit [310] via the IPM module [202], the new network node associated with the network based on generating at least one of the computed data and the notification associated with the computed data.
[0139] As the computed data may include the data which indicates the availability of the new network node, such as the one or more patterns associated with the new network node (e.g., a data transmission pattern), by processing the computed data and the notification, which is associated with the computed data, the new network node is detected. For example, the computed data may include a pattern which indicates an abnormal utilization of resources, an availability of a new network node (Node X), or a heartbeat signal from the Node X, and the detection unit [310] processes the pattern to detect the new network node which is associated with the network. Further, the heartbeat signal is a signal which periodically indicates an operational status of the node in the network.
[0140] The method terminates at step [418].
[0141] The present disclosure encompasses that the automatically detecting the new network node further comprises detecting, by the detection unit [310], via the IPM module [202], a success status associated with the notification. The automatically detecting the new network node further comprises receiving, by the transceiver unit [302], via the IPM module [202], from the load balancer [212] in the network, a fetch KPI data request based on the success status associated with the notification. The automatically detecting the new network node further comprises computing, by a computation unit [314], via the IPM module [202], a new network node KPI data associated with the new network node based on the fetch KPI data request. The automatically detecting the new network node further comprises transmitting, by the transceiver unit [302], from the IPM module [202], to the load balancer [212], the new network node KPI data.
[0142] The present disclosure encompasses that the computation unit may utilize one or more detection techniques, such as a classification technique or a regression technique, for automatically detecting the new network node. Further, the one or more detection techniques may be pre-stored in the database and/or may be pre-defined by the network administrator. Further, the one or more detection techniques may include any such technique that may be appreciated by a person skilled in the art to implement the solution of the present disclosure.
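As one possible instance of such a detection technique, a simple rule-based classification over counter utilization samples could be used. The function, feature, and threshold below are purely illustrative assumptions and not taken from the disclosure:

```python
def classify_pattern(counter_samples, utilization_threshold=0.9):
    """Classify a sequence of resource-utilization samples (0.0-1.0) as
    'abnormal' (a possible indicator of a new node) or 'normal'.
    A hypothetical stand-in for the classification or regression
    techniques the computation unit may employ."""
    if not counter_samples:
        return "normal"  # no samples, nothing to flag
    mean_util = sum(counter_samples) / len(counter_samples)
    return "abnormal" if mean_util > utilization_threshold else "normal"
```

A deployment could equally substitute a trained regression model here; the disclosure leaves the choice of technique to the implementer.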
[0143] In other words, the detection unit [310] detects the success status which is associated with the notification, and thereafter the transceiver unit [302] receives the fetch KPI data request. Further, the computation unit computes the new network node KPI data (such as KPI data of a new node X) that is associated with the new network node based on the fetch KPI data request. Thereafter, the new network node KPI data is transmitted to the load balancer [212].
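The sequence in the preceding paragraphs, detect the success status, receive the fetch KPI data request, compute the new node's KPI data, and return it to the load balancer, can be sketched as follows. All function names, dictionary keys, and values are illustrative assumptions:

```python
def handle_notification(notification, compute_kpi, send_to_load_balancer):
    """Hypothetical IPM-side flow: on a successful notification, compute the
    new network node's KPI data and transmit it to the load balancer."""
    if notification.get("status") != "success":
        return None  # no success status detected; nothing to do
    fetch_request = {"type": "fetch_kpi", "node": notification["node"]}
    kpi_data = compute_kpi(fetch_request)    # role of the computation unit [314]
    send_to_load_balancer(kpi_data)          # role of the transceiver unit [302]
    return kpi_data

sent = []
result = handle_notification(
    {"status": "success", "node": "node-X"},
    compute_kpi=lambda req: {"node": req["node"], "throughput_mbps": 120},
    send_to_load_balancer=sent.append,
)
```

Passing the collaborating units in as callables keeps the sketch self-contained while mirroring the unit boundaries named in the disclosure.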

[0144] Referring to FIG. 5, an exemplary signal flow diagram of a method [500] for storing a data into a distributed file system [224], in accordance with exemplary implementations of the present disclosure, is shown.
[0145] The method [500] initiates at step S1, wherein a data (which may be related to Key Performance Indicator (KPI) data of a network) is transmitted from a data record [502] to an ingestion layer [244].
[0146] At step S2, the ingestion layer [244] receives the data and further forwards the data to a normalization layer [204] for processing of the data.
[0147] At step S3, the data is transmitted from the normalization layer [204] to a distributed data lake [228], and further the data is transmitted to a distributed file system [224] at step S4.
[0148] The method [500] terminates after completion of step S4.
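The four-step storage flow of method [500], from the data record through the ingestion and normalization layers into the distributed data lake and the distributed file system, can be sketched as a simple pipeline. The layer names come from the disclosure; the dict- and list-based stand-ins for the stores are assumptions:

```python
def store_record(record):
    """Hypothetical sketch of method [500]: S1/S2 ingest and forward,
    S3 write to the data lake, S4 write to the file system."""
    ingested = dict(record)                                   # S1/S2: ingestion layer [244]
    normalized = {k.lower(): v for k, v in ingested.items()}  # normalization layer [204]
    data_lake.append(normalized)                              # S3: distributed data lake [228]
    file_system[normalized["kpi"]] = normalized               # S4: distributed file system [224]
    return normalized

data_lake, file_system = [], {}
normalized = store_record({"KPI": "throughput", "Value": 120})
```

The lowercase-key step stands in for whatever normalization the normalization layer [204] actually applies before persisting the data.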
[0149] Referring to FIG. 6, an exemplary signal flow diagram of a method [600] for automatically detecting a new network node associated with a network, in accordance with exemplary implementations of the present disclosure, is shown. In an implementation, the method [600] is performed by the system [300]. Further, in an implementation, the system [300] may be present in a server device to implement the features of the present disclosure. Also, as shown in FIG. 6, the method [600] starts at step S1.
[0150] At step S1, a user sends a user request, i.e., a load balance request, to a user interface (UI) server [602]. Further, at step S2, the UI server [602] forwards the request to a Load Balancer (LB) [212], wherein the request may be associated with a first network node in the network.
[0151] Thereafter, at step S3, the LB [212] forwards the request to an Integrated Performance Management (IPM) module [202].
[0152] Further, at step S4, a data request (i.e., a key performance indicator (KPI) data request) is forwarded to a computation layer [206]. At step S5, an acknowledgment message in response to the data request may be received from the computation layer [206].

[0153] At step S6, the computation layer [206] requests a stored data access, i.e., a pre-stored data access request, from a distributed file system [224].
[0154] At step S7, the distributed file system [224] sends a required data, i.e., a set of pre-stored data, to the computation layer [206] based on the request in step S6.
[0155] Upon receiving the required data, at step S8, a Key Performance Indicator (KPI) data is computed on the computation layer [206].
[0156] Further, at step S9, the computation layer [206] sends the Key Performance Indicator (KPI) data after computing the KPI data (at step S8). Thereafter, at step S10, the IPM module [202] fetches a required counter, i.e., a target counter data, from a distributed data lake [228]. At step S11, the distributed data lake [228] sends a data (i.e., the target counter data associated with the required counter) to the IPM module [202].
[0157] At step S12, the IPM module [202] performs a calculation, which results in a dashboard computed data, i.e., the computed data.
[0158] At step S13, the dashboard computed data (i.e., the computed data) is transmitted to the LB [212] along with a notification.
[0159] At step S14, the dashboard computed data along with the notification is forwarded to the UI server [602], as an output to the user on the UI server (at step S15).
[0160] At step S16, if the user clicks on the notification, a result request, i.e., the fetch KPI data request, is forwarded to the UI server [602], and further, the result is fetched from the load balancer [212] via the UI server [602] at step S17.
[0161] At step S18, a request from the load balancer [212] is transmitted to the IPM module [202]. Further, at step S19, in response to the request, a corresponding data (i.e., the KPI data) is calculated by the IPM module [202].
[0162] At step S20, the calculated KPI data, i.e., the new network node KPI data, is forwarded to the LB [212] and further, at step S21, the calculated KPI data is forwarded to the UI server [602], and further the same is presented to the user as a result in response to the result request (at step S22).
[0163] The method [600] terminates after step S22.
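Steps S1 through S13 of method [600] amount to a request/response pipeline on the IPM side: a load balance request flows in, KPI data is computed from pre-stored data, a target counter is identified and fetched, and a dashboard computed data plus notification flow back out. A condensed sketch, with all names, data shapes, and the max-based counter selection being illustrative assumptions:

```python
def process_load_balance_request(request, computation_layer, data_lake):
    """Condensed, hypothetical sketch of steps S4-S13 of method [600]."""
    kpi_data = computation_layer(request["node"])         # S4-S9: compute KPI data
    target_counter = max(kpi_data, key=kpi_data.get)      # identify the target counter
    counter_data = data_lake[target_counter]              # S10-S11: fetch target counter data
    computed = {"counter": target_counter, "values": counter_data}       # S12: computed data
    notification = {"status": "success", "counter": target_counter}      # S13: notification
    return computed, notification

computed, notification = process_load_balance_request(
    {"node": "node-1"},
    computation_layer=lambda node: {"cpu": 0.4, "throughput": 0.9},
    data_lake={"throughput": [0.88, 0.91, 0.9]},
)
```

The notification-click path (steps S16 onward) would then be handled separately, as described in paragraphs [0141] and [0143].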
[0164] Referring to FIG. 7, an exemplary architecture of a system [700] for automatically detecting a new network node associated with a network, in accordance with exemplary implementations of the present disclosure is shown. The system [300] may be implemented in conjunction with the system [700] as depicted in FIG. 7 to implement the present disclosure.
[0165] The system [700] comprises a data record [502], an ingestion layer [244], a normalization layer [204], a distributed data lake [228], a distributed file system [224], a computation layer [206], an integrated performance management (IPM) module [202], a load balancer [212] and a user interface [702]. Further, all the layers and the load balancer [212] are defined in the description of FIG. 2.
[0166] Further, the data record [502] is connected with the ingestion layer [244], preferably via a Hypertext Transfer Protocol (HTTP). The ingestion layer [244] is connected with the normalization layer [204]. The ingestion layer [244] communicates with the normalization layer [204] preferably by the HTTP. The normalization layer [204] is further connected with the distributed data lake [228] and the distributed file system [224]. The normalization layer [204] communicates with the distributed data lake [228] via a Transmission Control Protocol (TCP). The normalization layer [204] performs File I/O (Input/Output) operations on the distributed file system [224]. The File I/O (Input/Output) operations refer to one or more operations (such as reading data and writing data) performed on the distributed file system [224].
[0167] Further, the IPM module [202] is connected with the computation layer [206], and the IPM module [202] communicates with the computation layer [206] via the HTTP. Further, the IPM module [202] is connected with the load balancer [212], and the IPM module [202] communicates with the load balancer [212] via the HTTP. Further, the load balancer [212] is connected with the user interface [702]. The load balancer [212] communicates with the user interface [702] via the HTTP.
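The component links and transport protocols described in paragraphs [0166] and [0167] can be summarized in a small mapping. This is a documentation aid only, not part of the disclosed system:

```python
# Each entry: (source component, destination component) -> transport used,
# as described in paragraphs [0166]-[0167] for the architecture of FIG. 7.
LINKS = {
    ("data record [502]", "ingestion layer [244]"): "HTTP",
    ("ingestion layer [244]", "normalization layer [204]"): "HTTP",
    ("normalization layer [204]", "distributed data lake [228]"): "TCP",
    ("normalization layer [204]", "distributed file system [224]"): "File I/O",
    ("IPM module [202]", "computation layer [206]"): "HTTP",
    ("IPM module [202]", "load balancer [212]"): "HTTP",
    ("load balancer [212]", "user interface [702]"): "HTTP",
}

def transport(src: str, dst: str) -> str:
    """Return the transport used on a directed link of the FIG. 7 architecture."""
    return LINKS[(src, dst)]
```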

[0168] The HTTP refers to an application layer protocol that is configured to transfer information or data between one or more networked devices. The TCP may refer to a communication standard for delivering the data between one or more networked devices. Further, the TCP is a basic standard that defines one or more rules of the internet and is a protocol used to deliver the data in a digital network communication.
[0169] The present disclosure may provide a user equipment (UE) for automatically detecting a new network node associated with a network. The UE comprises a memory and a processor coupled to the memory, and the processor is configured to automatically detect the new network node associated with the network via a system [300]. The automatic detection of the new network node associated with the network is done by receiving, by a transceiver unit [302] of the system [300], from a load balancer [212], a load balance request associated with at least a first network node in the network, wherein the first network node is associated with at least a set of counter data. The automatic detection is further done by transmitting, by the transceiver unit [302] of the system [300], from an Integrated Performance Management (IPM) module [202] to a computation layer [206], a key performance indicator (KPI) data request based on the load balance request. The automatic detection is further done by fetching, by a fetching unit [304] of the system [300], a KPI data based on the KPI data request, wherein the KPI data is associated with one or more counters of the network. The automatic detection is further done by identifying, by an identification unit [306] of the system [300], a target counter from the one or more counters, based on the KPI data. The automatic detection is further done by fetching, by the fetching unit [304] of the system [300], from a storage unit [312], a target counter data associated with the target counter. The automatic detection is further done by generating, by a generation unit [308] of the system [300], at least one of a computed data based on the target counter data and a notification associated with the computed data. The automatic detection is further done by automatically detecting, by a detection unit [310] of the system [300], the new network node associated with the network based on generating at least one of the computed data and the notification associated with the computed data.
[0170] The present disclosure may provide a non-transitory computer readable storage medium storing instructions for automatically detecting a new network node associated with a network, the instructions including executable code which, when executed by one or more units of a system [300], causes a transceiver unit [302] of the system [300] to receive, from a load balancer [212], a load balance request associated with at least a first network node in the network, wherein the first network node is associated with at least a set of counter data, and to transmit, to a computation layer [206] in the network, a key performance indicator (KPI) data request based on the load balance request. The executable code, when executed, further causes a fetching unit [304] of the system [300] to fetch, from the computation layer [206] in the network, a KPI data based on the KPI data request, wherein the KPI data is associated with one or more counters of the network. The executable code, when executed, further causes an identification unit [306] of the system [300] to identify a target counter from the one or more counters, based on the KPI data. The executable code, when executed, further causes the fetching unit [304] of the system [300] to fetch, from a storage unit [312], a target counter data associated with the target counter. The executable code, when executed, further causes a generation unit [308] of the system [300] to generate at least one of a computed data based on the target counter data and a notification associated with the computed data. The executable code, when executed, further causes a detection unit [310] of the system [300] to automatically detect the new network node associated with the network based on generating at least one of the computed data and the notification associated with the computed data.
[0171] As is evident from the above, the present disclosure provides a technically advanced solution for automatically detecting a new network node associated with a network. The present solution eliminates the need to manually update a dashboard with new instances. The present solution ensures that no microservice needs to be restarted when new network nodes are added. Further, the present solution provides an expansion of a load on a node in the network in real time without affecting an execution flow. Hence, the present solution effectively detects the new network node which is associated with the network without any manual intervention and without any manual update in the existing flow.
[0172] While considerable emphasis has been placed herein on the disclosed implementations, it will be appreciated that many implementations can be made and that many changes can be made to the implementations without departing from the principles of the present disclosure. These and other changes in the implementations of the present disclosure will be apparent to those skilled in the art, whereby it is to be understood that the foregoing descriptive matter is illustrative and non-limiting.

We claim:
1. A method [400] for automatically detecting a new network node associated with a
network, the method [400] comprising:
- receiving, by a transceiver unit [302] via an Integrated Performance Management (IPM)
module [202] from a load balancer [212], a load balance request associated with at least a first network node in the network, wherein the first network node is associated with at least a set of counter data;
- transmitting, by the transceiver unit [302] from the IPM module [202] to a computation
layer [206], a key performance indicator (KPI) data request based on the load balance
request;
- fetching, by a fetching unit [304] via the IPM module [202] from the computation layer
[206], a KPI data based on the KPI data request, wherein the KPI data is associated
with one or more counters of the network;
- identifying, by an identification unit [306] via the IPM module [202], a target counter
from the one or more counters, based on the KPI data;
- fetching, by the fetching unit [304] via the IPM module [202] from a storage unit [312],
a target counter data associated with the target counter;
- generating, by a generation unit [308] via the IPM module [202], at least one of a
computed data based on the target counter data and a notification associated with the
computed data; and
- automatically detecting, by a detection unit [310] via the IPM module [202], the new
network node associated with the network based on generating at least one of the
computed data and the notification associated with the computed data.
2. The method as claimed in claim 1, wherein the KPI data is fetched from a pre-processed
data stored in a database.
3. The method as claimed in claim 1, wherein the notification associated with the
computed data is generated in an event the computed data is generated via the
computation layer [206].
4. The method [400] as claimed in claim 1, wherein fetching the KPI data via the IPM
module [202] from the computation layer [206] further comprises:
- transmitting, by the transceiver unit [302] via the computation layer [206] in the
network to a server, a pre-stored data access request associated with one or more sets
of pre-stored data;
- receiving, by the transceiver unit [302] via the computation layer [206] in the network
from the server, at least one set of pre-stored data from the one or more sets of pre-
stored data based on the pre-stored data access request; and
- generating, by the generation unit [308] via the computation layer [206] in the network,
the KPI data based on the at least one set of pre-stored data.
5. The method [400] as claimed in claim 2, wherein receiving the at least one set of pre-
stored data further comprises generating, by the generation unit [308], the one or more
sets of pre-stored data, wherein generating the one or more sets of pre-stored data
further comprises:
- receiving, by the transceiver unit [302], at a normalization layer [204] from an ingestion
layer [244] of the network, a processed data based on an input data received at the
ingestion layer [244];
- generating, by the generation unit [308], from the normalization layer [204] of the
network at the server, the one or more sets of normalized data based on the processed
data; and
- storing, at the storage unit [312] from the normalization layer [204] of the network, the
one or more sets of normalized data, wherein the stored one or more sets of normalized
data corresponds to the one or more sets of pre-stored data.
6. The method [400] as claimed in claim 5, wherein the method [400] further comprises updating a set of existing dashboards based on the one or more sets of normalized data.
7. The method [400] as claimed in claim 5, wherein each pre-stored data from the one or
more sets of pre-stored data is associated with at least one counter from the one or more
counters of the network.

8. The method [400] as claimed in claim 1, wherein the target counter data is fetched, by the fetching unit [304], via the IPM module [202] from the storage unit [312] based on the at least one or more sets of pre-stored data stored at the storage unit [312].
9. The method [400] as claimed in claim 1, wherein automatically detecting the new
network node further comprises:
- detecting, by the detection unit [310], via the IPM module [202], a success status
associated with the notification;
- receiving, by the transceiver unit [302], via the IPM module [202], from the load
balancer [212] in the network, a fetch KPI data request based on the success status
associated with the notification;
- computing, by a computation unit [314], via the IPM module [202], a new network
node KPI data associated with the new network node based on the fetch KPI data
request; and
- transmitting, by the transceiver unit [302], from the IPM module [202], to the load
balancer [212], the new network node KPI data.
10. A system [300] for automatically detecting a new network node associated with a network, the system [300] comprising:
• a transceiver unit [302] via an Integrated Performance Management (IPM) module
[202], configured to:
o receive, from a load balancer [212], a load balance request associated with
at least a first network node in the network, wherein the first network node is associated with at least a set of counter data;
o transmit, to a computation layer [206] in the network, a key performance indicator (KPI) data request based on the load balance request;
• a fetching unit [304] connected at least to the transceiver unit [302], wherein the
fetching unit [304] is configured to fetch via the IPM module [202], from the
computation layer [206] in the network, a KPI data based on the KPI data request,
wherein the KPI data is associated with one or more counters of the network;


• an identification unit [306] connected to at least the fetching unit [304], wherein
the identification unit [306] is configured to identify via the IPM module [202], a
target counter from the one or more counters, based on the KPI data;
• the fetching unit [304] further configured to fetch, from a storage unit [312], a target counter data associated with the target counter;
• a generation unit [308] connected at least to the identification unit [306], wherein the generation unit [308] is configured to generate via the IPM module [202], at least one of a computed data based on the target counter data and a notification associated with the computed data; and
• a detection unit [310] connected to at least the generation unit [308], wherein the detection unit [310] is configured to automatically detect via the IPM module [202], the new network node associated with the network based on generating at least one of the computed data and the notification associated with the computed data.
11. The system [300] as claimed in claim 10, wherein the KPI data is fetched from a pre-processed data stored in a database.

12. The system [300] as claimed in claim 10, wherein the notification associated with the computed data is generated in an event the computed data is generated via the
computation layer [206].


13. The system [300] as claimed in claim 10, wherein for fetching the KPI data via the IPM module [202] from the computation layer [206]:
- the transceiver unit [302] is further configured to:
o transmit, via the computation layer [206] in the network to a server, a pre-stored data access request associated with one or more sets of pre-stored data,
o receive, via the computation layer [206] in the network from the server, at least one set of pre-stored data from the one or more sets of pre-stored data based on the pre-stored data access request; and
- the generation unit [308] is further configured to generate, via the computation layer
[206] in the network, the KPI data based on the at least one set of pre-stored data.


14. The system [300] as claimed in claim 13, for receiving the at least one set of pre-stored
data, the generation unit [308] is further configured to generate the one or more sets of
pre-stored data, and wherein to generate the one or more sets of pre-stored data:
- the transceiver unit [302] is configured to receive, via a normalization layer [204] from an ingestion layer [244] of the network, a processed data based on an input data received
at the ingestion layer [244];
- the generation unit [308] is configured to generate, from the normalization layer [204]
of the network at the server, the one or more sets of normalized data based on the
processed data; and
- the storage unit [312] configured to store, from the normalization layer [204] of the
network, the one or more sets of normalized data, wherein the stored one or more sets of normalized data corresponds to the one or more sets of pre-stored data.
15. The system [300] as claimed in claim 14, the system [300] is further configured to update a set of existing dashboards based on the one or more sets of pre-stored data.
16. The system [300] as claimed in claim 14, wherein each pre-stored data from the one or
more sets of pre-stored data is associated with at least one counter from the one or more
counters of the network.
17. The system [300] as claimed in claim 10, wherein the fetching unit [304] is configured
to fetch the target counter data from the storage unit [312] based on the at least one or
more sets of pre-stored data stored at the storage unit [312].
18. The system [300] as claimed in claim 10, the system [300] further comprising a
computation unit [314], wherein to automatically detect the new network node:
- the detection unit [310] is further configured to detect, a success status associated with
the notification;
- the transceiver unit [302] is further configured to receive, from the load balancer [212] in the network, a fetch KPI data request based on the success status associated with the
notification;
- the computation unit [314] is configured to compute, a new network node KPI data
associated with the new network node based on the fetch KPI data request; and

- the transceiver unit [302] is further configured to transmit to the load balancer [212], the new network node KPI data.


19. A user equipment (UE) for automatically detecting a new network node associated with a network, the UE comprising:
a memory; and
a processor coupled to the memory, wherein the processor is configured to automatically detect the new network node associated with the network via a system [300], and wherein the automatic detection of the new network node associated with the network is done by:
• receiving, by a transceiver unit [302] of the system [300], from a load balancer [212], a load balance request associated with at least a first network node in the network, wherein the first network node is associated with at least a set of counter data;
• transmitting, by the transceiver unit [302] of the system [300], from an Integrated Performance Management (IPM) module [202] to a computation layer [206], a key performance indicator (KPI) data request based on the load balance request;
• fetching, by a fetching unit [304] of the system [300], a KPI data based on the KPI data request, wherein the KPI data is associated with one or more counters of the network;
• identifying, by an identification unit [306] of the system [300], a target counter from the one or more counters, based on the KPI data;
• fetching, by the fetching unit [304] of the system [300], from a storage unit [312], a target counter data associated with the target counter;
• generating, by a generation unit [308] of the system [300], at least one of a computed data based on the target counter data and a notification associated with the computed data; and
• automatically detecting, by a detection unit [310] of the system [300], the new network node associated with the network based on generating at least one of the computed data and the notification associated with the computed data.

Documents

Application Documents

# Name Date
1 202321051742-STATEMENT OF UNDERTAKING (FORM 3) [01-08-2023(online)].pdf 2023-08-01
2 202321051742-PROVISIONAL SPECIFICATION [01-08-2023(online)].pdf 2023-08-01
3 202321051742-FORM 1 [01-08-2023(online)].pdf 2023-08-01
4 202321051742-FIGURE OF ABSTRACT [01-08-2023(online)].pdf 2023-08-01
5 202321051742-DRAWINGS [01-08-2023(online)].pdf 2023-08-01
6 202321051742-FORM-26 [21-09-2023(online)].pdf 2023-09-21
7 202321051742-Proof of Right [14-12-2023(online)].pdf 2023-12-14
8 202321051742-ORIGINAL UR 6(1A) FORM 1 & 26-300124.pdf 2024-02-03
9 202321051742-FORM-5 [31-07-2024(online)].pdf 2024-07-31
10 202321051742-ENDORSEMENT BY INVENTORS [31-07-2024(online)].pdf 2024-07-31
11 202321051742-DRAWING [31-07-2024(online)].pdf 2024-07-31
12 202321051742-CORRESPONDENCE-OTHERS [31-07-2024(online)].pdf 2024-07-31
13 202321051742-COMPLETE SPECIFICATION [31-07-2024(online)].pdf 2024-07-31
14 202321051742-FORM 3 [02-08-2024(online)].pdf 2024-08-02
15 202321051742-Request Letter-Correspondence [20-08-2024(online)].pdf 2024-08-20
16 202321051742-Power of Attorney [20-08-2024(online)].pdf 2024-08-20
17 202321051742-Form 1 (Submitted on date of filing) [20-08-2024(online)].pdf 2024-08-20
18 202321051742-Covering Letter [20-08-2024(online)].pdf 2024-08-20
19 202321051742-CERTIFIED COPIES TRANSMISSION TO IB [20-08-2024(online)].pdf 2024-08-20
20 Abstract-1.jpg 2024-10-10