
Method And System For Dynamically Assigning Network Counters

Abstract: The present disclosure relates to a method [300] and a system [200] for dynamically assigning network counters. The method [300] comprises: receiving [304], by a transceiver unit [202], a counter assign request associated with one or more user groups. The method [300] further comprises identifying [306], by a processing unit [204] at a Network performance management system, one or more network counters based on the counter assign request. Thereafter, the method [300] comprises dynamically assigning [308], by the processing unit [204] from the network performance management system, the one or more network counters to the one or more user groups associated with the counter assign request. [FIG. 2]


Patent Information

Application #
Filing Date
22 August 2023
Publication Number
09/2025
Publication Type
INA
Invention Field
COMPUTER SCIENCE
Status
Parent Application

Applicants

Jio Platforms Limited
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India.

Inventors

1. Aayush Bhatnagar
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India.
2. Ankit Murarka
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India.
3. Jugal Kishore
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India.
4. Gaurav Kumar
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India.
5. Kishan Sahu
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India.
6. Rahul Kumar
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India.
7. Sunil Meena
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India.
8. Gourav Gurbani
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India.
9. Sanjana Chaudhary
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India.
10. Chandra Ganveer
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India.
11. Supriya Kaushik De
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India.
12. Debashish Kumar
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India.
13. Mehul Tilala
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India.
14. Dharmendra Kumar Vishwakarma
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India.
15. Yogesh Kumar
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India.
16. Niharika Patnam
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India.
17. Harshita Garg
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India.
18. Avinash Kushwaha
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India.
19. Sajal Soni
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India.
20. Kunal Telgote
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India.
21. Manasvi Rajani
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India.

Specification

FORM 2
THE PATENTS ACT, 1970
(39 OF 1970)
&
THE PATENT RULES, 2003
COMPLETE SPECIFICATION
(See section 10 and rule 13)
“METHOD AND SYSTEM FOR DYNAMICALLY ASSIGNING
NETWORK COUNTERS”
We, Jio Platforms Limited, an Indian National, of Office - 101, Saffron, Nr. Centre
Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India.
The following specification particularly describes the invention and the manner in
which it is to be performed.
METHOD AND SYSTEM FOR DYNAMICALLY ASSIGNING
NETWORK COUNTERS
FIELD OF INVENTION
[0001] Embodiments of the present disclosure relate to a method and a system for
dynamic counter assignment, i.e., for dynamically assigning network counters.
BACKGROUND
[0002] The following description of the related art is intended to provide
background information pertaining to the field of the disclosure. This section may
include certain aspects of the art that may be related to various features of the
present disclosure. However, it should be appreciated that this section is used only
to enhance the understanding of the reader with respect to the present disclosure,
and not as admissions of the prior art.
[0003] Network performance management systems typically track network
elements and data from network monitoring tools and then combine and process
such data to determine key performance indicators (KPI) of the network.
[0004] In many organizations, dashboards are used to monitor and analyze the
performance of systems, networks, or other operations. These dashboards rely on
specific metrics, called counters, to generate insights and measure performance.
However, in many cases, users are overwhelmed with too many counters that may
not be relevant to their needs. This makes it difficult to focus on the most important
data and can lead to less accurate or meaningful analysis.
[0005] Thus, there exists an imperative need in the art for dynamic counter
assignment, i.e., dynamically assigning network counters, which the present
disclosure aims to address.
SUMMARY
[0006] This section is provided to introduce certain aspects of the present disclosure
in a simplified form that are further described below in the detailed description.
This summary is not intended to identify the key features or the scope of the claimed
subject matter.
[0007] An aspect of the present disclosure relates to a method for dynamically
assigning network counters. The method comprises receiving, by a transceiver unit,
a counter assign request associated with one or more user groups. The method
further comprises identifying, by a processing unit at a Network performance
management system, one or more network counters based on the counter assign
request. The method furthermore comprises dynamically assigning, by the
processing unit from the network performance management system, the one or
more network counters to the one or more user groups associated with the counter
assign request.
[0008] In an exemplary aspect of the present disclosure, the method further
comprises transmitting, by the transceiver unit from the network performance
management system, a request completion status associated with the counter assign
request based on dynamically assigning at least the one or more network counters
to the one or more user groups.
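By way of illustration only, the receive, identify, assign, and report-status steps described above may be sketched as follows. All names here (CounterAssignRequest, COUNTER_REGISTRY, the status strings) are hypothetical and not part of the claimed method:

```python
from dataclasses import dataclass

# Hypothetical registry: counters known for each network node. The
# disclosure does not fix a registry layout or status strings.
COUNTER_REGISTRY = {
    "gNB": ["rrc_setup_attempts", "rrc_setup_failures", "dl_throughput"],
    "AMF": ["registration_attempts", "registration_failures"],
}

@dataclass
class CounterAssignRequest:
    user_groups: list   # one or more user groups named in the request
    node: str           # network node whose counters are requested

def handle_counter_assign(request, assignments):
    """Receive -> identify -> assign -> report a request completion status."""
    counters = COUNTER_REGISTRY.get(request.node, [])   # identify counters
    if not counters:
        return "FAILED"
    for group in request.user_groups:                   # dynamically assign
        assignments.setdefault(group, set()).update(counters)
    return "COMPLETED"   # completion status returned via the transceiver

assignments = {}
status = handle_counter_assign(
    CounterAssignRequest(user_groups=["noc-team"], node="gNB"), assignments)
```

In this sketch the completion status plays the role of the transmitted request completion status of paragraph [0008].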
[0009] In an exemplary aspect of the present disclosure, the one or more network
counters are identified from a list of counters based on a network node and a
category of the one or more user groups associated with the counter assign request.
[0010] In an exemplary aspect of the present disclosure, the method further
comprises recommending, by the processing unit using a trained model, at least one
counter based on a user profile data, a user preference data, and a historical usage
data of the one or more user groups.
[0011] Another aspect of the present disclosure relates to a system for dynamically
assigning network counters. The system comprises a transceiver unit, wherein the
transceiver unit is configured to receive a counter assign request associated with
one or more user groups. The system further comprises a processing unit connected
to at least the transceiver unit, wherein the processing unit is configured to identify,
at a Network performance management system, one or more network counters
based on the counter assign request. The processing unit is further configured to
dynamically assign, from the network performance management system, the one or
more network counters to the one or more user groups associated with the counter
assign request.
[0012] Yet another aspect of the present disclosure relates to a non-transitory
computer readable storage medium storing instructions for dynamically assigning
network counters, the instructions include executable code which, when executed
by one or more units of a system, causes a transceiver unit of the system to receive
a counter assign request associated with one or more user groups. Further, the
instructions include executable code which, when executed causes a processing unit
at a Network performance management system to identify one or more network
counters based on the counter assign request. Further, the instructions include
executable code which, when executed causes the processing unit from the Network
performance management system to dynamically assign the one or more network
counters to the one or more user groups associated with the counter assign request.
OBJECTS OF THE DISCLOSURE
[0013] Some of the objects of the present disclosure, which at least one
embodiment disclosed herein satisfies are listed herein below.
[0014] It is an object of the present disclosure to provide a system and a method for
dynamically assigning network counters.
[0015] It is another object of the present disclosure to provide a solution that
receives a target category from a set of categories associated with an assign counter
request.
[0016] It is yet another object of the present disclosure to provide a solution that
retrieves a list of counters based on at least one of the target categories from the
set of categories and a node.
[0017] It is yet another object of the present disclosure to provide a solution to
assign one or more target counters to at least one of the user groups and the target
user group received in the assign counter request.
BRIEF DESCRIPTION OF THE DRAWINGS
[0018] The accompanying drawings, which are incorporated herein, and constitute
a part of this disclosure, illustrate exemplary embodiments of the disclosed methods
and systems in which like reference numerals refer to the same parts throughout the
different drawings. Components in the drawings are not necessarily to scale,
emphasis instead being placed upon clearly illustrating the principles of the present
disclosure. Also, the embodiments shown in the figures are not to be construed as
limiting the disclosure, but the possible variants of the method and system
according to the disclosure are illustrated herein to highlight the advantages of the
disclosure. It will be appreciated by those skilled in the art that disclosure of such
drawings includes disclosure of electrical components or circuitry commonly used
to implement such components.
[0019] FIG. 1 illustrates an exemplary block diagram of a computing device upon
which the features of the present disclosure may be implemented in accordance with
exemplary implementation of the present disclosure.
[0020] FIG. 2 illustrates an exemplary block diagram of a system for dynamically
assigning network counters, in accordance with exemplary implementations of the
present disclosure.
[0021] FIG. 3 illustrates a method flow diagram for dynamically assigning network
counters, in accordance with exemplary implementations of the present disclosure.
[0022] FIG. 4 illustrates an exemplary block diagram of a network performance
management system, in accordance with the exemplary embodiments of the present
disclosure.
[0023] FIG. 5 illustrates a system architecture diagram for dynamically assigning
network counters, in accordance with exemplary implementations of the present
disclosure.
[0024] FIG. 6 illustrates a signalling flow diagram for dynamically assigning
network counters, in accordance with exemplary implementations of the present
disclosure.
[0025] The foregoing shall be more apparent from the following more detailed
description of the disclosure.
DETAILED DESCRIPTION
[0026] In the following description, for the purposes of explanation, various
specific details are set forth in order to provide a thorough understanding of
embodiments of the present disclosure. It will be apparent, however, that
embodiments of the present disclosure may be practiced without these specific
details. Several features described hereafter may each be used independently of one
another or with any combination of other features. An individual feature may not
address any of the problems discussed above or might address only some of the
problems discussed above.
[0027] The ensuing description provides exemplary embodiments only, and is not
intended to limit the scope, applicability, or configuration of the disclosure. Rather,
the ensuing description of the exemplary embodiments will provide those skilled in
the art with an enabling description for implementing an exemplary embodiment.
It should be understood that various changes may be made in the function and
arrangement of elements without departing from the spirit and scope of the
disclosure as set forth.
[0028] Specific details are given in the following description to provide a thorough
understanding of the embodiments. However, it will be understood by one of
ordinary skill in the art that the embodiments may be practiced without these
specific details. For example, circuits, systems, processes, and other components
may be shown as components in block diagram form in order not to obscure the
embodiments in unnecessary detail.
[0029] It should be noted that the terms "first", "second", "primary", "secondary",
"target" and the like, herein do not denote any order, ranking, quantity, or
importance, but rather are used to distinguish one element from another.
[0030] Also, it is noted that individual embodiments may be described as a process
which is depicted as a flowchart, a flow diagram, a data flow diagram, a structure
diagram, or a block diagram. Although a flowchart may describe the operations as
a sequential process, many of the operations may be performed in parallel or
concurrently. In addition, the order of the operations may be re-arranged. A process
is terminated when its operations are completed but could have additional steps not
included in a figure.
[0031] The word “exemplary” and/or “demonstrative” is used herein to mean
serving as an example, instance, or illustration. For the avoidance of doubt, the
subject matter disclosed herein is not limited by such examples. In addition, any
aspect or design described herein as “exemplary” and/or “demonstrative” is not
necessarily to be construed as preferred or advantageous over other aspects or
designs, nor is it meant to preclude equivalent exemplary structures and techniques
known to those of ordinary skill in the art. Furthermore, to the extent that the terms
“includes,” “has,” “contains,” and other similar words are used in either the detailed
description or the claims, such terms are intended to be inclusive—in a manner
similar to the term “comprising” as an open transition word—without precluding
any additional or other elements.
[0032] As used herein, a “processing unit” or “processor” or “operating processor”
includes one or more processors, wherein processor refers to any logic circuitry for
processing instructions. A processor may be a general-purpose processor, a special
purpose processor, a conventional processor, a digital signal processor, a plurality
of microprocessors, one or more microprocessors in association with a Digital
Signal Processing (DSP) core, a controller, a microcontroller, Application Specific
Integrated Circuits, Field Programmable Gate Array circuits, any other type of
integrated circuits, etc. The processor may perform signal coding, data processing,
input/output processing, and/or any other functionality that enables the working of
the system according to the present disclosure. More specifically, the processor or
processing unit is a hardware processor.
[0033] As used herein, “a user equipment”, “a user device”, “a smart-user-device”,
“a smart-device”, “an electronic device”, “a mobile device”, “a handheld device”,
“a wireless communication device”, “a mobile communication device”, “a
communication device” may be any electrical, electronic and/or computing device
or equipment, capable of implementing the features of the present disclosure. The
user equipment/device may include, but is not limited to, a mobile phone, smart
phone, laptop, a general-purpose computer, desktop, personal digital assistant,
tablet computer, wearable device or any other computing device which is capable
of implementing the features of the present disclosure. Also, the user device may
contain at least one input means configured to receive an input from unit(s) which
are required to implement the features of the present disclosure.
[0034] As used herein, “storage unit” or “memory unit” refers to a machine or
computer-readable medium including any mechanism for storing information in a
form readable by a computer or similar machine. For example, a computer-readable
medium includes read-only memory (“ROM”), random access memory (“RAM”),
magnetic disk storage media, optical storage media, flash memory devices or other
types of machine-accessible storage media. The storage unit stores at least the data
that may be required by one or more units of the system to perform their respective
functions.
[0035] As used herein, “interface” or “user interface” refers to a shared boundary
across which two or more separate components of a system exchange information
or data. The interface may also refer to a set of rules or protocols that define
communication or interaction of one or more modules or one or more units with
each other, which also includes the methods, functions, or procedures that may be
called.
[0036] All modules, units, components used herein, unless explicitly excluded
herein, may be software modules or hardware processors, the processors being a
general-purpose processor, a special purpose processor, a conventional processor, a
digital signal processor (DSP), a plurality of microprocessors, one or more
microprocessors in association with a DSP core, a controller, a microcontroller,
Application Specific Integrated Circuits (ASIC), Field Programmable Gate Array
circuits (FPGA), any other type of integrated circuits, etc.
[0037] As used herein, the transceiver unit includes at least one receiver and at least
one transmitter configured respectively for receiving and transmitting data, signals,
information or a combination thereof between units/components within the system
and/or connected with the system.
[0038] As used herein, “User Interface” (UI) refers to the point of interaction
between a user and a computer system. It allows users to communicate with and
control the system, typically through graphical elements such as windows, buttons,
and menus.
[0039] As used herein, “network performance management system” refers to a
system or component that manages and coordinates various processes or tasks
within an environment.
[0040] As discussed in the background section, the current known solutions for
dynamic counter assignment have several shortcomings, such as the lack of a
method that incorporates a dynamic counter assignment feature, which is
essential for efficient network performance monitoring and management. The
existing prior art fails to provide a solution that allows for the dynamic allocation
of counters, thereby hindering the ability to analyse and troubleshoot network issues
effectively. Additionally, the prior art does not offer the flexibility to configure
counters into the application for monitoring specific network nodes as desired by
the user. This limitation restricts the customization and granularity of network
monitoring, impeding the ability to focus on specific areas of interest. Another
technical challenge is the absence of adaptability to changing network
requirements. As networks constantly evolve and expand, it is crucial to have a
monitoring application that can readily adjust to new demands. The previous
solution lacks configurability and adaptability, which hampers effective network
management and monitoring in dynamic environments.
[0041] The present disclosure aims to overcome the above-mentioned and other
existing problems in this field of technology by disclosing a novel solution centred
around the dynamic counter assignment feature, which tackles the challenge of
analysing dashboards based on specific counters configured for each user group.
This solution stands out due to its unique approach and methodology employed to
address this problem effectively. It also addresses the dependency of Key
Performance Indicators (KPIs) on the counters by dynamically assigning counters
to users. As a result, KPIs are composed only of relevant counters assigned to the
user, eliminating any irrelevant information. This leads to better and optimized
metrics, enhancing the efficiency of monitoring network issues for different user
groups. Another distinctive aspect of this feature is its ability to recommend
counters to users based on their profile and history, further improving the efficiency
of monitoring processes. Overall, this innovative solution offers a comprehensive
and efficient approach to analysing dashboards and monitoring network issues.
[0042] KPI (Key Performance Indicator): indicators that reflect the network
performance. KPIs are collected from the network or are calculated from the
network measurements. KPIs include Accessibility KPI, Integrity KPI,
Utilization KPI, Retainability KPI, Mobility KPI, and Energy Efficiency (EE) KPI.
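For illustration only, the KPI families named above can be enumerated, and a KPI calculated from network measurements can be sketched as a simple formula; the identifiers and the accessibility formula below are assumptions, not definitions from the disclosure:

```python
from enum import Enum

# Illustrative identifiers for the KPI families named in the disclosure.
class KpiFamily(Enum):
    ACCESSIBILITY = "Accessibility"
    INTEGRITY = "Integrity"
    UTILIZATION = "Utilization"
    RETAINABILITY = "Retainability"
    MOBILITY = "Mobility"
    ENERGY_EFFICIENCY = "Energy Efficiency"

# A KPI calculated from measurements, e.g. an accessibility-style
# success ratio over setup attempts (hypothetical formula).
def accessibility_kpi(setup_successes, setup_attempts):
    return 100.0 * setup_successes / setup_attempts if setup_attempts else 0.0
```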
[0043] As would be further noted, Dynamic Counter Assignment allows users to
select only the counters that are relevant to their specific group or task. As a result,
the dashboards are customized to show only the most relevant information, making
it easier to analyze data and create more accurate KPIs. This feature also ensures
that KPI calculations are based only on the selected counters, leading to more
precise and useful performance measurements. Additionally, the system may
suggest counters based on the user’s preferences and past behavior, saving time and
improving the user experience.
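The idea that KPI calculations are based only on the counters a group has selected can be sketched as follows; the KPI formulas and counter names are hypothetical placeholders:

```python
# Hypothetical KPI formulas keyed by the counters they require.
KPI_FORMULAS = {
    "drop_rate": (["calls_dropped", "calls_established"],
                  lambda c: 100.0 * c["calls_dropped"] / c["calls_established"]),
    "dl_throughput_mbps": (["dl_bytes", "interval_s"],
                  lambda c: c["dl_bytes"] * 8 / 1e6 / c["interval_s"]),
}

def kpis_for_group(selected_counters, samples):
    """Evaluate only the KPIs whose required counters the group selected."""
    results = {}
    for name, (needed, formula) in KPI_FORMULAS.items():
        if all(counter in selected_counters for counter in needed):
            results[name] = formula(samples)
    return results

# A group that selected only the call counters gets only the drop-rate KPI;
# the throughput KPI is never computed for it.
out = kpis_for_group({"calls_dropped", "calls_established"},
                     {"calls_dropped": 2, "calls_established": 100})
```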
[0044] This invention simplifies the management of large sets of counters, allowing
users to upload counter details via an Excel file for quick updates and adjustments,
making the system both scalable and flexible.
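The disclosure mentions uploading counter details via an Excel file; as a self-contained sketch, the same tabular row layout is parsed here from CSV text with the standard library (a real implementation might read .xlsx with a library such as openpyxl). Column names are assumptions:

```python
import csv
import io

# Hypothetical uploaded sheet: one row per counter, with its node and category.
SHEET = """node,counter,category
gNB,rrc_setup_attempts,radio
gNB,dl_throughput,radio
AMF,registration_attempts,core
"""

def load_counter_sheet(text):
    """Build a node -> list-of-counters registry from an uploaded sheet."""
    registry = {}
    for row in csv.DictReader(io.StringIO(text)):
        registry.setdefault(row["node"], []).append(row["counter"])
    return registry

registry = load_counter_sheet(SHEET)
```

Re-uploading a revised sheet would rebuild the registry, which is how the quick-update flexibility described above could be realized.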
[0045] Hereinafter, exemplary embodiments of the present disclosure will be
described with reference to the accompanying drawings.
[0046] FIG. 1 illustrates an exemplary block diagram of a computing device [100]
upon which the features of the present disclosure may be implemented in
accordance with exemplary implementation of the present disclosure. In an
implementation, the computing device [100] may also implement a method for
managing performance data of a node in a network utilising the system. In another
implementation, the computing device [100] itself implements the method for
managing performance data of a node in a network using one or more units
configured within the computing device [100], wherein said one or more units are
capable of implementing the features as disclosed in the present disclosure.
[0047] The computing device [100] may include a bus [102] or other
communication mechanism for communicating information, and a hardware
processor [104] coupled with bus [102] for processing information. The hardware
processor [104] may be, for example, a general-purpose microprocessor. The
computing device [100] may also include a main memory [106], such as a random-access
memory (RAM), or other dynamic storage device, coupled to the bus [102]
for storing information and instructions to be executed by the processor [104]. The
main memory [106] also may be used for storing temporary variables or other
intermediate information during execution of the instructions to be executed by the
processor [104]. Such instructions, when stored in non-transitory storage media
accessible to the processor [104], render the computing device [100] into a special-purpose
machine that is customized to perform the operations specified in the
instructions. The computing device [100] further includes a read only memory
(ROM) [108] or other static storage device coupled to the bus [102] for storing static
information and instructions for the processor [104].
[0048] A storage device [110], such as a magnetic disk, optical disk, or solid-state
drive is provided and coupled to the bus [102] for storing information and
instructions. The computing device [100] may be coupled via the bus [102] to a
display [112], such as a cathode ray tube (CRT), Liquid crystal Display (LCD),
Light Emitting Diode (LED) display, Organic LED (OLED) display, etc. for
displaying information to a computer user. An input device [114], including
alphanumeric and other keys, touch screen input means, etc. may be coupled to the
bus [102] for communicating information and command selections to the processor
[104]. Another type of user input device may be a cursor controller [116], such as a
mouse, a trackball, or cursor direction keys, for communicating direction
information and command selections to the processor [104], and for controlling
cursor movement on the display [112]. This input device typically has two degrees
of freedom in two axes, a first axis (e.g., x) and a second axis (e.g., y), that allow
the device to specify positions in a plane.
[0049] The computing device [100] may implement the techniques described
herein using customized hard-wired logic, one or more ASICs or FPGAs, firmware
and/or program logic which in combination with the computing device [100] causes
or programs the computing device [100] to be a special-purpose machine.
According to one implementation, the techniques herein are performed by the
computing device [100] in response to the processor [104] executing one or more
sequences of one or more instructions contained in the main memory [106]. Such
instructions may be read into the main memory [106] from another storage medium,
such as the storage device [110]. Execution of the sequences of instructions
contained in the main memory [106] causes the processor [104] to perform the
process steps described herein. In alternative implementations of the present
disclosure, hard-wired circuitry may be used in place of or in combination with
software instructions.
[0050] The computing device [100] also may include a communication interface
[118] coupled to the bus [102]. The communication interface [118] provides a two-way
data communication coupling to a network link [120] that is connected to a
local network [122]. For example, the communication interface [118] may be an
integrated services digital network (ISDN) card, cable modem, satellite modem, or
a modem to provide a data communication connection to a corresponding type of
telephone line. As another example, the communication interface [118] may be a
local area network (LAN) card to provide a data communication connection to a
compatible LAN. Wireless links may also be implemented. In any such
implementation, the communication interface [118] sends and receives electrical,
electromagnetic or optical signals that carry digital data streams representing
various types of information.
[0051] The computing device [100] can send messages and receive data, including
program code, through the network(s), the network link [120] and the
communication interface [118]. In the Internet example, a server [130] might
transmit a requested code for an application program through the Internet [128], the
ISP [126], the local network [122], the host [124] and the communication interface
[118]. The received code may be executed by the processor [104] as it is received,
and/or stored in the storage device [110], or other non-volatile storage for later
execution.
[0052] Referring to FIG. 2, an exemplary block diagram of a system [200] for
dynamically assigning network counters, is shown, in accordance with the
exemplary implementations of the present disclosure. As depicted in FIG. 2, the
system [200] may include at least one transceiver unit [202] and at least one
processing unit [204]. Also, all of the components/units of the system [200] are
assumed to be connected to each other unless otherwise indicated below. As shown
in the figures, all units shown within the system [200] should also be assumed to be
connected to each other. Also, in FIG. 2, only a few units are shown, however, the
system [200] may include multiple such units or the system [200] may include any
such numbers of said units, as required to implement the features of the present
disclosure. Further, in an implementation, the system [200] may be present in a user
device/user equipment [102] to implement the features of the present disclosure.
[0053] In one example, the system [200] may be implemented as or within a
network performance management system (not depicted in FIG. 2). In such cases,
the units of the system [200], as depicted in FIG. 2, may be in communication with
other entities and/or functions of the network performance management system.
Such entities and/or functions have not been depicted and explained here for the
sake of brevity, and would be well understood by a person skilled in the art.
[0054] The system [200] is configured for dynamically assigning network counters,
with the help of the interconnection between the components/units of the system
[200].
[0055] In one example, the transceiver unit [202] is configured to receive a counter
assign request associated with one or more user groups.
[0056] In an implementation of the present disclosure, dynamically means that the
assignment of network counters is done in real-time or near real-time based on
current conditions or requirements. The network counters are metrics or identifiers
used to track various parameters or performance indicators in a network. Further,
the transceiver unit [202] is designed to accept requests for the assignment of
network counters.
[0057] In an example, the system's transceiver unit [202] receives the request,
which tells it to assign certain counters to one or more specific user groups. This
allows the system to customize which counters are used for monitoring and analysis
based on the needs of different user groups.
[0058] Thereafter, a processing unit [204], connected to at least the transceiver unit
[202], may identify, at a Network performance management system, one or more
network counters based on the counter assign request.
[0059] In an implementation of the present disclosure, the processing unit [204]
identifies the appropriate network counters using a network performance
management system. This network performance management system acts
as a centralized platform that manages and coordinates the various processes
involved in network counter assignment. The identification process involves
selecting one or more network counters specified in the counter assign request.
Once identified, these network counters are dynamically assigned to the user groups
associated with the request.
[0060] In an example, the one or more network counters are identified from a list
of counters based on a network node and a category of the one or more user groups
associated with the counter assign request. The one or more network counters are
chosen from a pre-existing list based on the network node and the category of user
group. A list of counters refers to a predefined collection of network counters, each
representing a specific metric, parameter, or identifier used to monitor and measure
various aspects of network performance. These counters can include data points
such as bandwidth usage, latency, packet loss, error rates, throughput, connection
durations, and other relevant indicators. The selection criteria include the specific
network node (which can be a device or location within the network) and the
category of the user group.
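For illustration only, the selection just described can be sketched as a lookup keyed by network node and user-group category; the counter names, node identifiers, and category labels below are hypothetical:

```python
# Hypothetical predefined list of counters, keyed by (network node, user-group
# category). All names here are illustrative only, not part of the disclosure.
COUNTER_CATALOG = {
    ("node-A", "operations"): ["bandwidth_usage", "latency", "packet_loss"],
    ("node-A", "planning"): ["throughput", "connection_duration"],
    ("node-B", "operations"): ["error_rate", "latency"],
}

def identify_counters(network_node, user_group_category):
    """Return the network counters matching the node and user-group category."""
    return COUNTER_CATALOG.get((network_node, user_group_category), [])
```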
[0061] In an example, the processing unit [204] is further configured to
recommend, using a trained model, at least one counter based on a user profile data,
a user preference data, and a historical usage data of the one or more user groups.
The processing unit [204] utilizes a trained model (likely a machine learning or AI
model) to make recommendations based on the user profile data, the user preference
data, and the historical usage data.
[0062] In another example, the recommendation of counters is informed by three
types of data.
[0063] The user profile data includes information about the user or the user group,
such as their role, responsibilities, and the types of tasks they typically perform.
[0064] The user preference data captures the specific preferences or choices of the
user or user group. It may include preferences for certain types of metrics, the
format in which data is displayed, or even past selections of counters.
[0065] The historical usage data is based on the user's or user group's past
interactions with the system. It includes the counters they have used or selected in
the past, the frequency of use, and the contexts in which those counters were
applied. By analysing this historical data, the system can identify patterns and make
more informed recommendations.
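Purely as a sketch, and not the trained model itself, a recommendation that weighs the three data sources described above might look like this (the weighting scheme and field names are assumptions):

```python
# Illustrative stand-in for the trained recommendation model described above.
# The weights and field names are hypothetical assumptions.
def recommend_counters(profile_counters, preferred_counters, usage_history, top_n=1):
    """Score candidate counters from the three data sources and return the best.

    profile_counters:   counters implied by the group's role (user profile data)
    preferred_counters: counters the group has chosen before (user preference data)
    usage_history:      mapping counter -> past usage count (historical usage data)
    """
    scores = {}
    for c in profile_counters:
        scores[c] = scores.get(c, 0) + 1.0
    for c in preferred_counters:
        scores[c] = scores.get(c, 0) + 2.0
    for c, uses in usage_history.items():
        scores[c] = scores.get(c, 0) + 0.5 * uses
    ranked = sorted(scores, key=lambda c: scores[c], reverse=True)
    return ranked[:top_n]
```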
[0066] After that, the processing unit [204] is configured to dynamically assign,
from the network performance management system, the one or more network
counters to the one or more user groups associated with the counter assign request.
[0067] In an implementation of the present disclosure, the system dynamically
assigns the identified network counters to the user groups associated with the
counter assign request, leveraging the Network performance management system.
This dynamic assignment process means that the allocation of network
counters happens in real-time or near real-time, allowing the system to respond
quickly to current network conditions and requirements.
[0068] In an example, the transceiver unit [202] is further configured to transmit,
from the network performance management system, a request completion status
associated with the counter assign request based on dynamically assigning at least
the one or more network counters to the one or more user groups.
[0069] After dynamically assigning the network counters, the transceiver unit [202]
sends a status message indicating the completion of the request. This is done from
the network performance management system and is based on the assignment of
the network counters to the user groups.
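A minimal sketch of the assignment and completion-status steps described above (the data structures and the status string are assumptions):

```python
# Illustrative in-memory assignment of counters to user groups; the completion
# status format is a hypothetical stand-in for the transmitted status message.
def assign_counters(assignments, user_groups, counters):
    """Assign each counter to each requesting user group and report completion."""
    for group in user_groups:
        assignments.setdefault(group, set()).update(counters)
    return {"status": "COMPLETED", "groups": list(user_groups), "counters": list(counters)}
```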
[0070] Referring to FIG. 3, an exemplary method flow diagram [300] for
dynamically assigning network counters, in accordance with exemplary
implementations of the present disclosure is shown. In an implementation, the
method [300] is performed by the system [200]. Further, in an implementation, the
system [200] may be present in a server device to implement the features of the
present disclosure. Also, as shown in FIG. 3, the method [300] starts at step [302].
[0071] At step [304], the method comprises receiving, by a transceiver unit [202], a
counter assign request associated with one or more user groups.
[0072] In an implementation of the present disclosure, "dynamically" means that the
assignment of network counters is done in real-time or near real-time based on
current conditions or requirements. The network counters are metrics or identifiers
used to track various parameters or performance indicators in a network. Further,
the transceiver unit [202] is designed to accept requests for the assignment of
network counters.
[0073] In an example, the system's transceiver unit [202] receives the request,
which tells it to assign certain counters to one or more specific user groups. This
allows the system to customize which counters are used for monitoring and analysis
based on the needs of different user groups.
[0074] At step [306], the method comprises identifying, by a processing unit [204]
at a Network performance management system, one or more network counters
based on the counter assign request.
[0075] In an implementation of the present disclosure, the processing unit [204]
identifies the appropriate network counters using a Network performance
management system. This network performance management system acts
as a centralized platform that manages and coordinates the various processes
involved in network counter assignment. The identification process involves
selecting one or more network counters specified in the counter assign request.
Once identified, these network counters are dynamically assigned to the user groups
associated with the request.
[0076] In an example, the one or more network counters are identified from a list
of counters based on a network node and a category of the one or more user groups
associated with the counter assign request. The one or more network counters are
chosen from a pre-existing list based on the network node and the category of user
group. A list of counters refers to a predefined collection of network counters, each
representing a specific metric, parameter, or identifier used to monitor and measure
various aspects of network performance. These counters can include data points
such as bandwidth usage, latency, packet loss, error rates, throughput, connection
durations, and other relevant indicators. The selection criteria include the specific
network node (which can be a device or location within the network) and the
category of the user group.
[0077] In an example, the processing unit [204], using a trained model, may
recommend at least one counter based on a user profile data, a user preference data,
and a historical usage data of the one or more user groups.
[0078] The processing unit [204] utilizes a trained model (likely a machine learning
or AI model) to make recommendations based on the user profile data, the user
preference data, and the historical usage data.
[0079] The recommendation of counters is informed by three types of data.
[0080] The user profile data includes information about the user or the user group,
such as their role, responsibilities, and the types of tasks they typically perform.
[0081] The user preference data captures the specific preferences or choices of the
user or user group. It might include preferences for certain types of metrics, the
format in which data is displayed, or even past selections of counters.
[0082] The historical usage data is based on the user's or user group's past
interactions with the system. It includes the counters they have used or selected in
the past, the frequency of use, and the contexts in which those counters were
applied. By analysing this historical data, the system can identify patterns and make
more informed recommendations.
[0083] At step [308], the method comprises dynamically assigning, by the
processing unit [204] from the network performance management system, the one
or more network counters to the one or more user groups associated with the counter
assign request.
[0084] In an implementation of the present disclosure, the system dynamically
assigns the identified network counters to the user groups associated with the
counter assign request, leveraging the Network performance management system.
This dynamic assignment process means that the allocation of network
counters happens in real-time or near real-time, allowing the system to respond
quickly to current network conditions and requirements.
[0085] The method further comprises transmitting, by the transceiver unit [202]
from the network performance management system, a request completion status
associated with the counter assign request based on dynamically assigning at least
the one or more network counters to the one or more user groups.
[0086] After dynamically assigning the network counters, the transceiver unit [202]
sends a status message indicating the completion of the request. This is done from
the network performance management system and is based on the assignment of
the network counters to the user groups.
[0087] Thereafter, at step [310], the method [300] is terminated.
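The steps [302] to [310] above can be condensed, for illustration only, into the following sketch; the request shape, catalog structure, and status values are assumptions:

```python
# Illustrative end-to-end sketch of method [300]: receive a counter assign
# request, identify the counters, dynamically assign them, and terminate.
# All names and data shapes here are hypothetical.
def run_method_300(request, catalog):
    # Step [304]: receive the counter assign request.
    groups = request["user_groups"]
    key = (request["network_node"], request["category"])
    # Step [306]: identify counters based on the request.
    counters = catalog.get(key, [])
    # Step [308]: dynamically assign the counters to the user groups.
    assignments = {g: list(counters) for g in groups}
    # Step [310]: terminate, returning the completion status.
    return {"status": "COMPLETED" if counters else "FAILED", "assignments": assignments}
```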
[0088] Referring to FIG. 4, an exemplary block diagram of a network performance
management system [400], in accordance with the exemplary embodiments of the
present disclosure is illustrated. In one example, the network performance
management system [400] may be implemented as system [200], as explained in
conjunction with FIGs. 2-3.
[0089] As depicted in FIG. 4, the network performance management system [400]
may include various sub-systems such as: performance management system [400a],
normalization layer [400b], computation layer [400d], anomaly detection layer
[400o], streaming engine [400l], load balancer [400k], operations and management
system [400p], API gateway system [400r], analysis engine [400h], parallel
computing framework [400i], forecasting engine [400t], distributed file system [400j],
mapping layer [400s], distributed data lake [400w], scheduling layer [400g],
reporting engine [400m], message broker [400e], graph layer [400f], caching layer
[400c], service quality manager [400q] and correlation engine [400n]. Exemplary
connections between these subsystems are also as shown in FIG. 4. However, it will
be appreciated by those skilled in the art that the present disclosure is not limited to
the connections shown in the diagram, and any other connections between various
subsystems that are needed to realise the effects are within the scope of this
disclosure.
[0090] The various components of the system [400] may include:
[0091] Performance management system [400a] comprises a performance
engine [400v] and a Key Performance Indicator (KPI) Engine [400u].
[0092] Performance Management Engine [400v]: The Performance
Management engine [400v] is a crucial component of the system, responsible for
collecting, processing, and managing performance counter data from various data
sources within the network. The gathered data includes metrics such as connection
speed, latency, data transfer rates, and many others. This raw data is then processed
and aggregated as required, forming a comprehensive overview of network
performance. The processed information is then stored in a Distributed Data Lake
[400w], a centralized, scalable, and flexible storage solution, allowing for easy
access and further analysis. The Performance Management engine [400v] also
enables the reporting and visualization of this performance counter data, thus
providing network administrators with a real-time, insightful view of the network's
operation. Through these visualizations, operators can monitor the network's
performance, identify potential issues, and make informed decisions to enhance
network efficiency and reliability.
[0093] Key Performance Indicator (KPI) Engine [400u]: The Key Performance
Indicator (KPI) Engine is a dedicated component tasked with managing the KPIs of
all the network elements. It uses the performance counters, which are collected and
processed by the Performance Management engine from various data sources.
These counters, encapsulating crucial performance data, are harnessed by the KPI
engine [400u] to calculate essential KPIs. These KPIs might include data
throughput, latency, packet loss rate, and more. Once the KPIs are computed, they
are segregated based on the aggregation requirements, offering a multi-layered and
detailed understanding of network performance. The processed KPI data is then
stored in the Distributed Data Lake [400w], ensuring a highly accessible,
centralized, and scalable data repository for further analysis and utilization. Similar
to the Performance Management engine, the KPI engine [400u] is also responsible
for reporting and visualization of KPI data. This functionality allows network
administrators to gain a comprehensive, visual understanding of the network's
performance, thus supporting informed decision-making and efficient network
management.
[0094] Ingestion layer: The Ingestion layer forms a key part of the Performance
Management system. Its primary function is to establish an environment capable of
handling diverse types of incoming data. This data may include Alarms, Counters,
Configuration parameters, Call Detail Records (CDRs), Infrastructure metrics,
Logs, and Inventory data, all of which are crucial for maintaining and optimizing
the network's performance. Upon receiving this data, the Ingestion layer processes
it by validating its integrity and correctness to ensure it is fit for further use.
Following validation, the data is routed to various components of the system,
including the Normalization layer, Streaming Engine, Streaming Analytics, and
Message Brokers. The destination is chosen based on where the data is required for
further analytics and processing. By serving as the first point of contact for
incoming data, the Ingestion layer plays a vital role in managing the data flow
within the system, thus supporting comprehensive and accurate network
performance analysis.
[0095] Normalization layer [400b]: The Normalization Layer [400b] serves to
standardize, enrich, and store data into the appropriate databases. It takes in data
that's been ingested and adjusts it to a common standard, making it easier to
compare and analyse. This process of "normalization" reduces redundancy and
improves data integrity. Upon completion of normalization, the data is stored in
various databases like the Distributed Data Lake [400w], Caching Layer, and Graph
Layer, depending on its intended use. The choice of storage determines how the
data can be accessed and used in the future. Additionally, the Normalization Layer
[400b] produces data for the Message Broker, a system that enables communication
between different parts of the performance management system through the
exchange of data messages. Moreover, the Normalization Layer [400b] supplies the
standardized data to several other subsystems. These include the Analysis Engine
for detailed data examination, the Correlation Engine [400n] for detecting
relationships among various data elements, the Service Quality Manager for
maintaining and improving the quality of services, and the Streaming Engine for
processing real-time data streams. These subsystems depend on the normalized data
to perform their operations effectively and accurately, demonstrating the
Normalization Layer's [400b] critical role in the entire system.
[0096] Caching layer [400c]: The Caching Layer [400c] in the Performance
Management system plays a significant role in data management and optimization.
During the initial phase, the Normalization Layer [400b] processes incoming raw
data to create a standardized format, enhancing consistency and comparability. The
Normalization Layer [400b] then inserts this normalized data into various databases.
One such database is the Caching Layer [400c]. The Caching Layer [400c] is a high-speed
data storage layer which temporarily holds data that is likely to be reused, to
improve speed and performance of data retrieval. By storing frequently accessed
data in the Caching Layer [400c], the system significantly reduces the time taken
to access this data, improving overall system efficiency and performance. Further,
the Caching Layer [400c] serves as an intermediate layer between the data sources
and the sub-systems, such as the Analysis Engine, Correlation Engine [400n],
Service Quality Manager, and Streaming Engine. The Normalization Layer [400b]
is responsible for providing these sub-systems with the necessary data from the
Caching Layer [400c].
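For illustration, the Caching Layer's [400c] fast-path behaviour can be sketched as follows; the backing store, capacity, and eviction behaviour are assumptions:

```python
# Minimal illustrative cache: frequently accessed data is served from memory
# instead of the slower backing store (e.g. the Distributed Data Lake).
# Capacity handling is hypothetical; no eviction policy is modelled.
class CachingLayer:
    def __init__(self, backing_store, capacity=128):
        self.backing_store = backing_store
        self.capacity = capacity
        self.cache = {}

    def get(self, key):
        if key in self.cache:                # cache hit: fast path
            return self.cache[key]
        value = self.backing_store.get(key)  # cache miss: fetch and remember
        if value is not None and len(self.cache) < self.capacity:
            self.cache[key] = value
        return value
```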
[0097] Computation layer [400d]: The Computation Layer [400d] in the
Performance Management system serves as the main hub for complex data
processing tasks. In the initial stages, raw data is gathered, normalized, and enriched
by the Normalization Layer [400b]. The Normalization Layer [400b] then inserts this
standardized data into multiple databases including the Distributed Data Lake
[400w], Caching Layer [400c], and Graph Layer, and also feeds it to the Message
Broker. Within the Computation Layer [400d], several powerful sub-systems such
as the Analysis Engine, Correlation Engine [400n], Service Quality Manager, and
Streaming Engine, utilize the normalized data. These systems are designed to
execute various data processing tasks. The Analysis Engine performs in-depth data
analytics to generate insights from the data. The Correlation Engine [400n]
identifies and understands the relations and patterns within the data. The Service
Quality Manager assesses and ensures the quality of the services. And the Streaming
Engine processes and analyses the real-time data feeds. In essence, the Computation
Layer [400d] is where all major computation and data processing tasks occur. It
uses the normalized data provided by the Normalization Layer [400b], processing
it to generate useful insights, ensure service quality, understand data patterns, and
facilitate real-time data analytics.
[0098] Message broker [400e]: The Message Broker [400e], an integral part of the
Performance Management system, operates as a publish-subscribe messaging
system. It orchestrates and maintains the real-time flow of data from various sources
and applications. At its core, the Message Broker [400e] facilitates communication
between data producers and consumers through message-based topics. This creates
an advanced platform for contemporary distributed applications. With the ability to
accommodate a large number of permanent or ad-hoc consumers, the Message
Broker [400e] demonstrates immense flexibility in managing data streams.
Moreover, it leverages the filesystem for storage and caching, boosting its speed
and efficiency. The design of the Message Broker [400e] is centred around
reliability. It is engineered to be fault-tolerant and mitigate data loss, ensuring the
integrity and consistency of the data. With its robust design and capabilities, the
Message Broker [400e] forms a critical component in managing and delivering
real-time data in the system.
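A publish-subscribe broker of the kind described can be sketched minimally as follows; this in-memory version is an assumption and omits the filesystem-backed storage the actual broker uses:

```python
# Minimal illustrative publish-subscribe broker: producers publish messages to
# topics, and every subscriber of a topic receives each message. In-memory
# only; the real broker also leverages the filesystem for storage and caching.
from collections import defaultdict

class MessageBroker:
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self.subscribers[topic].append(handler)

    def publish(self, topic, message):
        for handler in self.subscribers[topic]:
            handler(message)
```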
[0099] Graph layer [400f]: The Graph Layer [400f], serving as the Relationship
Modeler, plays a pivotal role in the Performance Management system. It can model
a variety of data types, including alarm, counter, configuration, CDR data, Infra-metric
data, 5G Probe Data, and Inventory data. Equipped with the capability to
establish relationships among diverse types of data, the Relationship Modeler offers
extensive modelling capabilities. For instance, it can model Alarm and Counter
data, Vprobe and Alarm data, elucidating their interrelationships. Moreover, the
Modeler should be adept at processing steps provided in the model and delivering
the results to the system requested, whether it be a Parallel Computing system,
Workflow Engine, Query Engine, Correlation System [400n], 5G Performance
Management Engine, or 5G KPI Engine [400u]. With its powerful modeling and
processing capabilities, the Graph Layer [400f] forms an essential part of the
system, enabling the processing and analysis of complex relationships between
various types of network data.
[0100] Scheduling layer [400g]: The Scheduling Layer [400g] serves as a key
element of the Performance Management System, endowed with the ability to
execute tasks at predetermined intervals set according to user preferences. A task
might be an activity performing a service call, an API call to another microservice,
the execution of an Elastic Search query, and storing its output in the Distributed
Data Lake [400w] or Distributed File System or sending it to another micro-service.
The versatility of the Scheduling Layer [400g] extends to facilitating graph
traversals via the Mapping Layer to execute tasks. This crucial capability enables
seamless and automated operations within the system, ensuring that various tasks
and services are performed on schedule, without manual intervention, enhancing
the system's efficiency and performance. In sum, the Scheduling Layer [400g]
orchestrates the systematic and periodic execution of tasks, making it an integral
part of the efficient functioning of the entire system.
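As an illustrative sketch only, a fixed-interval scheduler of the kind described might decide which registered tasks are due as follows (intervals and task signatures are assumptions; a production scheduler would run continuously):

```python
# Illustrative fixed-interval scheduler: determines which registered tasks are
# due at a given elapsed time. Intervals and task bodies are hypothetical.
class SchedulingLayer:
    def __init__(self):
        self.tasks = []  # list of (interval_seconds, callable)

    def register(self, interval_seconds, task):
        self.tasks.append((interval_seconds, task))

    def run_due(self, elapsed_seconds):
        """Run every task whose interval evenly divides the elapsed time."""
        ran = []
        for interval, task in self.tasks:
            if elapsed_seconds % interval == 0:
                task()
                ran.append(interval)
        return ran
```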
[0101] Analysis Engine [400h]: The Analysis Engine [400h] forms a crucial part
of the Performance Management System, designed to provide an environment
where users can configure and execute workflows for a wide array of use-cases.
This facility aids in the debugging process and facilitates a better understanding of
call flows. With the Analysis Engine [400h], users can perform queries on data
sourced from various subsystems or external gateways. This capability allows for
an in-depth overview of data and aids in pinpointing issues. The system's flexibility
allows users to configure specific policies aimed at identifying anomalies within
the data. When these policies detect abnormal behaviour or policy breaches, the
system sends notifications, ensuring swift and responsive action. In essence, the
Analysis Engine [400h] provides a robust analytical environment for systematic
data interrogation, facilitating efficient problem identification and resolution,
thereby contributing significantly to the system's overall performance management.
[0102] Parallel Computing Framework [400i]: The Parallel Computing
Framework [400i] is a key aspect of the Performance Management System,
providing a user-friendly yet advanced platform for executing computing tasks in
parallel. This framework showcases both scalability and fault tolerance, crucial for
managing vast amounts of data. Users can input data via Distributed File System
(DFS) [400j] locations or Distributed Data Lake (DDL) indices. The framework
supports the creation of task chains by interfacing with the Service Configuration
Management (SCM) Sub-System. Each task in a workflow is executed sequentially,
but multiple chains can be executed simultaneously, optimizing processing time. To
accommodate varying task requirements, the service supports the allocation of
specific host lists for different computing tasks. The Parallel Computing
Framework [400i] is an essential tool for enhancing processing speeds and
efficiently managing computing resources, significantly improving the system's
performance management capabilities.
[0103] Distributed File System [400j]: The Distributed File System (DFS) [400j]
is a critical component of the Performance Management System, enabling multiple
clients to access and interact with data seamlessly. This file system is designed to
manage data files that are partitioned into numerous segments known as chunks. In
the context of a network with vast data, the DFS [400j] effectively allows for the
distribution of data across multiple nodes. This architecture enhances both the
scalability and redundancy of the system, ensuring optimal performance even with
large data sets. DFS [400j] also supports diverse operations, facilitating the flexible
interaction with and manipulation of data. This accessibility is paramount for a
system that requires constant data input and output, as is the case in a robust
performance management system.
[0104] Load Balancer [400k]: The Load Balancer (LB) [400k] is a vital
component of the Performance Management System, designed to efficiently
distribute incoming network traffic across a multitude of backend servers or
microservices. Its purpose is to ensure the even distribution of data requests, leading
to optimized server resource utilization, reduced latency, and improved overall
system performance. The LB [400k] implements various routing strategies to
manage traffic. These include round-robin scheduling, header-based request
dispatch, and context-based request dispatch. Round-robin scheduling is a simple
method of rotating requests evenly across available servers. In contrast, header and
context-based dispatching allow for more intelligent, request-specific routing.
Header-based dispatching routes requests based on data contained within the
headers of the Hypertext Transfer Protocol (HTTP) requests. Context-based
dispatching routes traffic based on the contextual information about the incoming
requests. For example, in an event-driven architecture, the LB [400k] manages
event and event acknowledgments, forwarding requests or responses to the specific
microservice that has requested the event. This system ensures efficient, reliable,
and prompt handling of requests, contributing to the robustness and resilience of
the overall performance management system.
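Of the routing strategies mentioned, round-robin and header-based dispatch can be sketched as follows; the server names, header name, and route table are hypothetical:

```python
from itertools import cycle

# Illustrative round-robin dispatch: requests rotate evenly across the
# available backend servers. Server names are hypothetical.
class RoundRobinBalancer:
    def __init__(self, servers):
        self._servers = cycle(servers)

    def next_server(self):
        return next(self._servers)

# Illustrative header-based dispatch: the route is chosen from an HTTP header
# value. The header name and route table are assumptions.
def dispatch_by_header(headers, routes, default):
    return routes.get(headers.get("X-Target-Service"), default)
```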
[0105] Streaming Engine [400l]: The Streaming Engine [400l], also referred to as
Stream Analytics, is a critical subsystem in the Performance Management System.
This engine is specifically designed for high-speed data pipelining to the User
Interface (UI). Its core objective is to ensure real-time data processing and delivery,
enhancing the system's ability to respond promptly to dynamic changes. Data is
received from various connected subsystems and processed in real-time by the
Streaming Engine [400l]. After processing, the data is streamed to the UI, fostering
rapid decision-making and responses. The Streaming Engine [400l] cooperates with
the Distributed Data Lake [400w], Message Broker [400e], and Caching Layer
[400c] to provide seamless, real-time data flow. Stream Analytics is designed to
perform required computations on incoming data instantly, ensuring that the most
relevant and up-to-date information is always available at the UI. Furthermore, this
system can also retrieve data from the Distributed Data Lake [400w], Message
Broker [400e], and Caching Layer [400c] as per the requirement and deliver it to
the UI in real-time. The streaming engine's [400l] goal is to provide fast, reliable,
and efficient data streaming, contributing to the overall performance of the
management system.
[0106] Reporting Engine [400m]: The Reporting Engine [400m] is a key
subsystem of the Performance Management System. The fundamental purpose of
designing the Reporting Engine [400m] is to dynamically create report layouts of
API data, catered to individual client requirements, and deliver these reports via the
Notification Engine. The Reporting Engine [400m] serves as the primary interface
for creating custom
reports based on the data visualized through the client's dashboard. These custom
dashboards, created by the client through the User Interface (UI), provide the basis
for the Reporting Engine [400m] to process and compile data from various
interfaces. The main output of the Reporting Engine [400m] is a detailed report
generated in Excel format. The Reporting Engine's [400m] unique capability to
parse data from different subsystem interfaces, process it according to the client's
specifications and requirements, and generate a comprehensive report makes it an
essential component of this performance management system. Furthermore, the
Reporting Engine [400m] integrates seamlessly with the Notification Engine to
ensure timely and efficient delivery of reports to clients via email, ensuring the
information is readily accessible and usable, thereby improving overall client
satisfaction and system usability.
[0107] In the preferred embodiment as illustrated in FIG. 5, the connections
between the various components of a system [500] are established using different
protocols and mechanisms, as well known in the art. For example:
[0108] UI interface to network performance management system: The
connection between the User Interface (UI) [502] and the network performance
management system is established using an HTTP connection. HTTP (Hypertext
Transfer Protocol) is a widely used protocol for communication between web
browsers and servers. It allows the UI [502] to send requests and configurations to
the network performance management system, and also receive responses or
acknowledgments.
[0109] PM to DDL: The connection between the network performance
management system and the Distributed Data Lake (DDL) [506] is established
using a TCP (Transmission Control Protocol) connection. TCP is a reliable and
connection-oriented protocol that ensures the integrity and ordered delivery of data
packets. By using TCP, the network performance management system can save and
retrieve relevant data from the DDL [506] for computations, ensuring data
consistency and reliability.
[0110] In some embodiments, the system [500] may include a load balancer [508]
for managing connections. The load balancer [508] is adapted to distribute the
incoming network traffic across multiple servers or components to ensure optimal
resource utilization and high availability. Particularly, the load balancer [508] is
commonly employed to evenly distribute incoming requests across multiple
instances of the network performance management system, providing scalability
and fault tolerance to the system [500]. Overall, these connections and the inclusion
of the load balancer [508] help to facilitate effective communication, data transfer, and
resource management within the system [500], enhancing its performance and
reliability.
[0111] Referring to FIG. 6, a signalling flow diagram [600] for dynamically
assigning network counters, in accordance with exemplary implementations of the
present disclosure, is illustrated.
[0112] FIG. 6 illustrates the interactions between the various components involved
in processing a counter assignment request.
[0113] User (600): The initiator of the request to assign network counters to specific
user groups.
[0114] UI Server: The interface that receives the request from the user and interacts
with other components to process the request.
[0115] Load Balancer (508): Distributes the processing load across different
components to ensure efficient handling of the request.
[0116] Network Performance Management System (504): A central system
responsible for validating the request, managing network counters, and
coordinating the assignment process.
[0117] Distributed Data Lake (506): A storage system that holds the data related to
network counters, including performance metrics and user group associations.
[0118] At step S1: The user initiates the process by submitting a counter assignment
request. The process begins with the User (600) submitting a request to assign a
specific counter to a user group. This request is sent to the UI Server, which acts as
the interface between the user and the rest of the system.
[0119] At step S2: In one example, the UI Server forwards the user's request to the
Load Balancer (508). The role of the Load Balancer is to manage and distribute the
request load across the system's components.
[0120] At step S3: This network performance management system [504] acts as a
centralized platform that manages and coordinates the various processes involved
in network counter assignment. The identification process involves selecting one or
more network counters specified in the counter assign request. Once identified,
these network counters are dynamically assigned to the user groups associated with
the request. After the network performance management system [504] updates the
counter's data, this information is stored in the distributed data lake [506], ensuring
it is accessible for future reference and analysis.
[0121] At step S4: Once the counter's data is successfully updated, the Network
Performance Management System [504] sends a confirmation message back to the
UI Server, indicating that the request validation was successful. This validation may
include verifying the validity of the user group making the request, ensuring that
the requested network counters are available and not already in use, and confirming
that the request data is consistent and complete.
[0122] At step S6: If the request fails validation (e.g., due to incorrect data or unmet
criteria), the Network Performance Management System [504] sends a failure
message to the UI Server.
[0123] At step S7: The UI Server then informs the User [600] that the request has
failed, providing an appropriate error message.
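The request-handling flow of steps S1 through S7 can be sketched as a single validate-then-assign function. This is a minimal sketch under assumed names and validation rules (known groups, availability, no double assignment); it is not the disclosed implementation.

```python
# Hypothetical sketch of the counter-assignment flow (steps S1-S7).
# Group names, counter IDs, and validation rules are illustrative assumptions.

AVAILABLE_COUNTERS = {"ctr-001", "ctr-002", "ctr-003"}
KNOWN_GROUPS = {"ops-team", "radio-team"}
ASSIGNMENTS = {}  # counter_id -> group_id; stand-in for the data lake record

def handle_counter_assign_request(group_id, counter_ids):
    """NPM-system stand-in: validate the request, then assign or report failure."""
    # Validation (steps S4/S6): known group, counters available and not in use.
    if group_id not in KNOWN_GROUPS:
        return {"status": "failure", "reason": "unknown user group"}
    for cid in counter_ids:
        if cid not in AVAILABLE_COUNTERS:
            return {"status": "failure", "reason": f"counter {cid} unavailable"}
        if cid in ASSIGNMENTS:
            return {"status": "failure", "reason": f"counter {cid} already in use"}
    # Dynamic assignment (step S3), persisted to the data-lake stand-in.
    for cid in counter_ids:
        ASSIGNMENTS[cid] = group_id
    # Confirmation back to the UI Server (step S4) or failure message (S6/S7).
    return {"status": "success", "assigned": sorted(counter_ids)}
```

Note that validation is performed in full before any assignment is written, so a failed request leaves the assignment state untouched, which matches the success/failure branching described above.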
[0124] The present disclosure further discloses a non-transitory computer readable
storage medium storing instructions for dynamically assigning network counters,
the instructions including executable code which, when executed by one or more
units of a system [200], causes a transceiver unit [202] of the system [200] to receive
a counter assign request associated with one or more user groups. Further, the
instructions include executable code which, when executed, causes a processing
unit [204] at a Network performance management system to identify one or more
network counters based on the counter assign request. Further, the instructions
include executable code which, when executed, causes the processing unit [204]
from the Network performance management system to dynamically assign the one
or more network counters to the one or more user groups associated with the counter
assign request.
[0125] As is evident from the above, the present disclosure provides a technically
advanced solution for dynamically assigning network counters. The novel solution
disclosed in the present disclosure provides several technical advantages that make
it a valuable innovation in the field. Firstly, it enables users to analyse tailored
dashboards that cater to their specific needs, resulting in more accurate and
actionable insights. By performing computations based only on relevant counters,
the metrics derived from these dashboards become more meaningful, allowing
network issues to be promptly addressed. Secondly, the assignment of user groups
to their respective counters holds significant importance in the creation of Key
Performance Indicators (KPIs). This feature allows for the development of
meaningful KPIs that accurately measure and track performance. Thirdly, the
solution eliminates the inclusion of irrelevant or unnecessary counters in KPI
calculations, leading to more reliable and meaningful KPI metrics. This ensures that
the derived metrics reflect the true performance of the network without any
misleading or extraneous data. Fourthly, leveraging AI capabilities, the feature goes
beyond static assignment and can suggest relevant counters to users based on their
profiles, preferences, and historical usage. This dynamic recommendation system
enhances computation and analysis efficiency, resulting in a better user experience.
Lastly, the feature offers scalability and flexibility by allowing users to upload
counter details via an Excel file. This facilitates the efficient management of large
sets of counter data, enabling quick updates and adjustments as needed. This
scalability ensures that the solution can accommodate growing data volumes and
evolving network requirements. In conclusion, the technical advantages provided
by this novel solution offer improved accuracy, meaningful insights, reliable
metrics, efficient computation, and scalability. These advantages position the
invention as a valuable contribution to the field, addressing the challenges
associated with analysing dashboards and monitoring network issues in a more
effective and user-friendly manner.
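The dynamic recommendation described above can be illustrated with a simple frequency heuristic over a group's historical usage. This is a deliberately simplified stand-in for the trained model of the disclosure; the function name, data shape, and ranking rule are assumptions for illustration only.

```python
from collections import Counter

# Illustrative stand-in for the AI-based counter recommendation: rank the
# counters a user group has used most often within its preferred category.
# This frequency heuristic is NOT the trained model of the disclosure.

def recommend_counters(historical_usage, preferred_category, top_n=2):
    """historical_usage: list of (counter_id, category) tuples from past sessions."""
    usage = Counter(
        cid for cid, category in historical_usage if category == preferred_category
    )
    return [cid for cid, _ in usage.most_common(top_n)]

history = [
    ("ctr-001", "radio"), ("ctr-001", "radio"),
    ("ctr-002", "core"),  ("ctr-003", "radio"),
]
print(recommend_counters(history, "radio"))  # ['ctr-001', 'ctr-003']
```

In a production system this ranking would be produced by the trained model from user profile, preference, and historical usage data, as recited in claims 4 and 8; the heuristic above only shows where such a component plugs into the assignment flow.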
[0126] While considerable emphasis has been placed herein on the disclosed
implementations, it will be appreciated that many implementations can be made and
that many changes can be made to the implementations without departing from the
principles of the present disclosure. These and other changes in the implementations
of the present disclosure will be apparent to those skilled in the art, whereby it is to
be understood that the foregoing descriptive matter is illustrative and non-limiting.
[0127] Further, in accordance with the present disclosure, it is to be acknowledged
that the functionality described for the various components/units can be
implemented interchangeably. While specific embodiments may disclose a
particular functionality of these units for clarity, it is recognized that various
configurations and combinations thereof are within the scope of the disclosure. The
functionality of specific units as disclosed in the disclosure should not be construed
as limiting the scope of the present disclosure. Consequently, alternative
arrangements and substitutions of units, provided they achieve the intended
functionality described herein, are considered to be encompassed within the scope
of the present disclosure.

We Claim:

1. A method [300] for dynamically assigning network counters, the method [300]
comprising:
- receiving [304], by a transceiver unit [202], a counter assign request
associated with one or more user groups;
- identifying [306], by a processing unit [204] at a Network performance
management system, one or more network counters based on the counter
assign request; and
- dynamically assigning [308], by the processing unit [204] from the network
performance management system, the one or more network counters to the
one or more user groups associated with the counter assign request.

2. The method [300] as claimed in claim 1, further comprising transmitting, by
the transceiver unit [202] from the network performance management system,
a request completion status associated with the counter assign request based
on dynamically assigning at least the one or more network counters to the one
or more user groups.

3. The method [300] as claimed in claim 1, wherein the one or more network
counters are identified from a list of counters based on a network node and a
category of the one or more user groups associated with the counter assign
request.

4. The method [300] as claimed in claim 1, further comprising: recommending,
by the processing unit [204] using a trained model, at least one counter based
on user profile data, user preference data, and historical usage data of the
one or more user groups.

5. A system [200] for dynamically assigning network counters, the system [200]
comprises:
- a transceiver unit [202], wherein the transceiver unit [202] is configured to:
 receive a counter assign request associated with one or more user
groups,
- a processing unit [204] connected to at least the transceiver unit [202],
wherein the processing unit [204] is configured to:
 identify, at a Network performance management system, one or more
network counters based on the counter assign request,
 dynamically assign, from the network performance management
system, the one or more network counters to the one or more user groups
associated with the counter assign request.

6. The system [200] as claimed in claim 5, wherein the transceiver unit [202] is
further configured to transmit, from the network performance management
system, a request completion status associated with the counter assign request
based on dynamically assigning at least the one or more network counters to
the one or more user groups.

7. The system [200] as claimed in claim 5, wherein the one or more network
counters are identified from a list of counters based on a network node and a
category of the one or more user groups associated with the counter assign
request.

8. The system [200] as claimed in claim 5, wherein the processing unit [204] is
further configured to recommend, using a trained model, at least one counter
based on user profile data, user preference data, and historical usage data
of the one or more user groups.

Dated this the 22nd Day of August, 2023

Documents

Application Documents

# Name Date
1 202321056270-STATEMENT OF UNDERTAKING (FORM 3) [22-08-2023(online)].pdf 2023-08-22
2 202321056270-PROVISIONAL SPECIFICATION [22-08-2023(online)].pdf 2023-08-22
3 202321056270-FORM 1 [22-08-2023(online)].pdf 2023-08-22
4 202321056270-FIGURE OF ABSTRACT [22-08-2023(online)].pdf 2023-08-22
5 202321056270-DRAWINGS [22-08-2023(online)].pdf 2023-08-22
6 202321056270-FORM-26 [05-09-2023(online)].pdf 2023-09-05
7 202321056270-Proof of Right [10-01-2024(online)].pdf 2024-01-10
8 202321056270-ORIGINAL UR 6(1A) FORM 1 & 26-300124.pdf 2024-02-03
9 202321056270-FORM-5 [20-08-2024(online)].pdf 2024-08-20
10 202321056270-ENDORSEMENT BY INVENTORS [20-08-2024(online)].pdf 2024-08-20
11 202321056270-DRAWING [20-08-2024(online)].pdf 2024-08-20
12 202321056270-CORRESPONDENCE-OTHERS [20-08-2024(online)].pdf 2024-08-20
13 202321056270-COMPLETE SPECIFICATION [20-08-2024(online)].pdf 2024-08-20
14 202321056270-FORM 3 [21-08-2024(online)].pdf 2024-08-21
15 Abstract 1.jpg 2024-08-29
16 202321056270-Request Letter-Correspondence [30-08-2024(online)].pdf 2024-08-30
17 202321056270-Power of Attorney [30-08-2024(online)].pdf 2024-08-30
18 202321056270-Form 1 (Submitted on date of filing) [30-08-2024(online)].pdf 2024-08-30
19 202321056270-Covering Letter [30-08-2024(online)].pdf 2024-08-30
20 202321056270-CERTIFIED COPIES TRANSMISSION TO IB [30-08-2024(online)].pdf 2024-08-30