
Method And System For Network Performance Management

Abstract: The present disclosure relates to a method and a system for network performance management. The disclosure encompasses configuring, by a configuring unit [302], a counter reset scheduler and an Input/Output (IO) Cache in a Network Management System (NMS) to initiate performance data collection from a plurality of network nodes; periodically auditing, by an auditing unit [304], the IO Cache to check a set of counter reset request entries; retrieving, by a retrieving unit [306], the set of counter reset request entries from the IO Cache based on the audit; and sending, by a processing unit [308], a set of performance management (PM) data requests associated with the retrieved set of counter reset request entries to at least one network node of the plurality of network nodes for retrieval of missing set of PM data. [FIG. 5]


Patent Information

Filing Date: 13 July 2023
Publication Number: 03/2025
Publication Type: INA
Invention Field: COMMUNICATION

Applicants

Jio Platforms Limited
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India.

Inventors

1. Sandeep Bisht
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India.
2. Lokesh Poonia
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India.
3. Aayush Bhatnagar
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India
4. Shubham Kumar Naik
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India
5. Vishal Oak
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India

Specification

FORM 2
THE PATENTS ACT, 1970 (39 OF 1970) & THE PATENT RULES, 2003
COMPLETE SPECIFICATION
(See section 10 and rule 13)
“METHOD AND SYSTEM FOR NETWORK PERFORMANCE
MANAGEMENT”
We, Jio Platforms Limited, an Indian National, of Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India.
The following specification particularly describes the invention and the manner in which it is to be performed.

METHOD AND SYSTEM FOR NETWORK PERFORMANCE
MANAGEMENT
FIELD OF INVENTION
[0001] The present disclosure relates to the field of wireless communication
systems. In particular, the present disclosure relates to the retrieval of missing performance management (PM) data. More particularly, embodiments of the present disclosure relate to a method and a system for network performance management.
BACKGROUND
[0002] The following description of the related art is intended to provide
background information pertaining to the field of the disclosure. This section may include certain aspects of the art that may be related to various features of the present disclosure. However, it should be appreciated that this section is used only to enhance the understanding of the reader with respect to the present disclosure, and not as admissions of the prior art.
[0003] Wireless communication technology has rapidly evolved over the past
few decades, with each generation bringing significant improvements and advancements. The first generation of wireless communication technology was based on analog technology and offered only voice services. However, with the advent of the second-generation (2G) technology, digital communication and data services became possible, and text messaging was introduced. 3G technology marked the introduction of high-speed internet access, mobile video calling, and location-based services. The fourth generation (4G) technology revolutionized wireless communication with faster data speeds, better network coverage, and improved security. Currently, the fifth generation (5G) technology is being deployed, promising even faster data speeds, low latency, and the ability to connect
multiple devices simultaneously. With each generation, wireless communication technology has become more advanced, sophisticated, and capable of delivering more services to its users.
[0004] Existing network performance monitoring systems had a significant
problem with performance management (PM) data loss. This loss could occur due to a variety of reasons, including network connectivity issues, hardware failures, network element failures, database failures, and faults in the network management system itself. Prior systems often lacked a robust tracking system for every counter reset request sent by the Network Management System. This lack of detailed tracking made it difficult to understand the flow and progress of requests, as well as to identify and address issues when they arose. Further, earlier systems failed to notify users about missing performance management data in a timely manner. This lack of communication could lead to problems going unnoticed and unresolved for extended periods of time. Prior systems also lacked flexibility in allowing customization for various aspects, such as audit intervals, expiry time of counter reset requests, batch size, and batch intervals for auditing counter reset requests. Furthermore, in the previous systems, managing missing performance management data was often inefficient and manual. They lacked a systematic approach to identify, request, and receive missing data. The absence of an automatic resending mechanism in case of non-receipt of performance management data was another limitation of the prior art.
[0005] Therefore, in light of the foregoing discussion, there exists a need to
overcome the aforementioned drawbacks.
[0006] Thus, there exists an imperative need in the art to provide a method and
system for network performance management. The present invention significantly improves upon previous methods by providing a more robust, reliable, flexible, and user-friendly network performance monitoring solution.

OBJECTS OF THE INVENTION
[0007] Some of the objects of the present disclosure, which at least one
embodiment disclosed herein satisfies are listed herein below.
[0008] It is an object of the present disclosure to provide a method and system
for network performance management.
[0009] It is another object of the present disclosure to provide a method and
system for network performance management that is designed to minimize or eliminate data loss, thereby enhancing the integrity and completeness of the set of PM data collected. This allows for more accurate and reliable network performance monitoring and analysis.
[0010] It is another object of the present disclosure to provide a method and
system for network performance management that includes a mechanism for tracking each counter reset request sent by the Network Management System, using unique flow-ids. This detailed tracking is aimed at providing better visibility into the status and progression of each request.
[0011] It is another object of the present disclosure to provide a method and
system for network performance management that is developed to ensure that users are promptly notified about a missing set of PM data. This alarm and notification feature is expected to enable quicker response and resolution times for any issues affecting the data collection process.
[0012] It is another object of the present disclosure to provide a method and
system for network performance management that allows for multiple aspects of the system to be customized, including the audit intervals, expiry time of counter reset requests, batch size, and batch intervals for auditing counter reset requests.
This increased flexibility is intended to make the system more adaptable to various usage scenarios and user requirements.
[0013] It is another object of the present disclosure to provide a method and
system for network performance management that streamlines and optimizes the procedure for identifying, requesting, and receiving missing data.
[0014] It is yet another object of the present disclosure to provide a method and
system for network performance management that includes a queuing mechanism for resending requests that have not been responded to, thereby increasing the chances of successful data retrieval even in case of temporary issues or delays.
SUMMARY
[0015] This section is provided to introduce certain aspects of the present
disclosure in a simplified form that are further described below in the detailed description. This summary is not intended to identify the key features or the scope of the claimed subject matter.
[0016] According to an aspect of the present disclosure, a method for network
performance management is disclosed. The method includes configuring, by a configuring unit, a counter reset scheduler and an Input/Output (IO) Cache in a Network Management System (NMS) to initiate a set of performance management (PM) data collection from a plurality of network nodes. The method further includes periodically auditing, by an auditing unit, the IO Cache to check a set of counter reset request entries. The method further includes retrieving, by a retrieving unit, the set of counter reset request entries from the IO Cache based on the audit. Thereafter, the method includes sending, by a processing unit, a set of performance management (PM) data requests associated with the retrieved set of counter reset request entries to at least one network node of the plurality of network nodes for retrieval of missing set of PM data.

[0017] In an aspect, the method comprises removing, by the processing unit, the
set of counter reset request entries from the IO Cache for which the set of PM data is received during the audit.
[0018] In an aspect, the method comprises allowing, by the processing unit, the
set of counter reset request entries to persist in the IO Cache until a predefined expiry time period is reached if the set of PM data is again not received, for a subsequent audit, from the plurality of network nodes.
[0019] In an aspect, the method includes tracking, by the processing unit, each
of the set of counter reset request entries by storing the set of counter reset request entries in the Input/Output (IO) Cache, wherein each of the set of counter reset request entries is associated with a unique flow-identifier (ID). The method further includes receiving, by the processing unit, the set of PM data from the plurality of network nodes using the unique flow-ID to correlate the received set of PM data with a corresponding counter reset request entry. Thereafter, the method further includes removing, by the processing unit, a counter reset request entry of the set of counter reset request entries, from the IO Cache, for which the set of PM data is received.
[0020] In an aspect, the method comprises raising, by the processing unit, an
alarm if the missing set of PM data is detected based on the retrieved set of counter reset request entries.
[0021] In an aspect, the method comprises validating, by the processing unit,
time period of the set of counter reset request entries in the IO Cache, wherein entries that have exceeded the predefined expiry time period are removed from the IO Cache.

[0022] In an aspect, the method comprises configuring, by the configuring unit,
a set of parameters associated with the plurality of nodes for auditing the set of counter reset requests.
[0023] In an aspect, the set of parameters comprises at least one of a counter
reset scheduler interval, an expiry time of counter reset requests, a batch size, and a batch interval.
[0024] Another aspect of the present disclosure provides a system for network
performance management. The system comprises a configuring unit configured to configure a counter reset scheduler and an Input/Output (IO) Cache in a Network Management System (NMS) to initiate a set of performance management (PM) data collection from a plurality of network nodes. The system further includes an auditing unit configured to periodically audit the IO Cache to check a set of counter reset request entries. The system further includes a retrieving unit configured to retrieve the set of counter reset request entries from the IO Cache based on the audit. The system further includes a processing unit configured to send a set of performance management (PM) data requests associated with the retrieved set of counter reset request entries to at least one network node of the plurality of network nodes for retrieval of missing set of PM data.
[0025] Yet another aspect of the present disclosure may relate to a non-transitory
computer readable storage medium storing instructions for network performance management, the instructions include executable code which, when executed by one or more units of a system, causes: a configuring unit to configure a counter reset scheduler and an Input/Output (IO) Cache in a Network Management System (NMS) to initiate a set of performance management (PM) data collection from a plurality of network nodes; an auditing unit to periodically audit the IO Cache to check a set of counter reset request entries; a retrieving unit to retrieve the set of counter reset request entries from the IO Cache based on the audit; and a processing unit configured to send a set of performance management (PM) data requests
associated with the retrieved set of counter reset request entries to at least one network node of the plurality of network nodes for retrieval of missing set of PM data.
DESCRIPTION OF THE DRAWINGS
[0026] The accompanying drawings, which are incorporated herein, and
constitute a part of this disclosure, illustrate exemplary embodiments of the disclosed methods and systems in which like reference numerals refer to the same parts throughout the different drawings. Components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present disclosure. Also, the embodiments shown in the figures are not to be construed as limiting the disclosure, but the possible variants of the method and system according to the disclosure are illustrated herein to highlight the advantages of the disclosure. It will be appreciated by those skilled in the art that disclosure of such drawings includes disclosure of electrical components or circuitry commonly used to implement such components.
[0027] FIG. 1 illustrates an exemplary block diagram representation of 5th
generation core (5GC) network architecture.
[0028] FIG. 2 illustrates an exemplary block diagram of a computing device
upon which the features of the present disclosure may be implemented in accordance with exemplary implementation of the present disclosure.
[0029] FIG. 3 illustrates an exemplary block diagram of a system for network
performance management, in accordance with exemplary implementations of the present disclosure.

[0030] FIG. 4 illustrates an exemplary block diagram of a system architecture
for network performance management, in accordance with exemplary embodiments of the present disclosure.
[0031] FIG. 5 illustrates a method flow diagram for network performance
management in accordance with exemplary implementations of the present disclosure.
[0032] FIG. 6 illustrates an exemplary method flow diagram indicating the
process for network performance management, in accordance with exemplary
embodiments of the present disclosure.
[0033] The foregoing shall be more apparent from the following more detailed
description of the disclosure.
DETAILED DESCRIPTION
[0034] In the following description, for the purposes of explanation, various specific details are set forth in order to provide a thorough understanding of
embodiments of the present disclosure. It will be apparent, however, that
embodiments of the present disclosure may be practiced without these specific details. Several features described hereafter may each be used independently of one another or with any combination of other features. An individual feature may not address any of the problems discussed above or might address only some of the
problems discussed above.
[0035] The ensuing description provides exemplary embodiments only, and is not
intended to limit the scope, applicability, or configuration of the disclosure. Rather,
the ensuing description of the exemplary embodiments will provide those skilled in
the art with an enabling description for implementing an exemplary embodiment.
It should be understood that various changes may be made in the function and
arrangement of elements without departing from the spirit and scope of the disclosure as set forth.
[0036] Specific details are given in the following description to provide a thorough
understanding of the embodiments. However, it will be understood by one of
ordinary skill in the art that the embodiments may be practiced without these specific details. For example, circuits, systems, processes, and other components may be shown as components in block diagram form in order not to obscure the embodiments in unnecessary detail.
[0037] Also, it is noted that individual embodiments may be described as a process which is depicted as a flowchart, a flow diagram, a data flow diagram, a structure diagram, or a block diagram. Although a flowchart may describe the operations as a sequential process, many of the operations may be performed in parallel or
concurrently. In addition, the order of the operations may be re-arranged. A process
is terminated when its operations are completed but could have additional steps not included in a figure.
[0038] The word “exemplary” and/or “demonstrative” is used herein to mean
serving as an example, instance, or illustration. For the avoidance of doubt, the
subject matter disclosed herein is not limited by such examples. In addition, any
aspect or design described herein as “exemplary” and/or “demonstrative” is not
necessarily to be construed as preferred or advantageous over other aspects or
designs, nor is it meant to preclude equivalent exemplary structures and techniques
known to those of ordinary skill in the art. Furthermore, to the extent that the terms
“includes,” “has,” “contains,” and other similar words are used in either the detailed
description or the claims, such terms are intended to be inclusive—in a manner
similar to the term “comprising” as an open transition word—without precluding
any additional or other elements.

[0039] As used herein, a “processing unit” or “processor” or “operating processor”
includes one or more processors, wherein processor refers to any logic circuitry for
processing instructions. A processor may be a general-purpose processor, a special
purpose processor, a conventional processor, a digital signal processor, a plurality
of microprocessors, one or more microprocessors in association with a Digital
Signal Processing (DSP) core, a controller, a microcontroller, Application Specific
Integrated Circuits, Field Programmable Gate Array circuits, any other type of
integrated circuits, etc. The processor may perform signal coding, data processing,
input/output processing, and/or any other functionality that enables the working of
the system according to the present disclosure. More specifically, the processor or
processing unit is a hardware processor.
[0040] As used herein, “a user equipment”, “a user device”, “a smart-user-device”, “a smart-device”, “an electronic device”, “a mobile device”, “a handheld device”,
“a wireless communication device”, “a mobile communication device”, “a
communication device” may be any electrical, electronic and/or computing device or equipment, capable of implementing the features of the present disclosure. The user equipment/device may include, but is not limited to, a mobile phone, smart phone, laptop, a general-purpose computer, desktop, personal digital assistant,
tablet computer, wearable device or any other computing device which is capable
of implementing the features of the present disclosure. Also, the user device may contain at least one input means configured to receive an input from at least one of a transceiver unit, a processing unit, a storage unit, a detection unit and any other such unit(s) which are required to implement the features of the present disclosure.
[0041] As used herein, “storage unit” or “memory unit” refers to a machine or computer-readable medium including any mechanism for storing information in a form readable by a computer or similar machine. For example, a computer-readable medium includes read-only memory (“ROM”), random access memory (“RAM”),
magnetic disk storage media, optical storage media, flash memory devices or other
types of machine-accessible storage media. The storage unit stores at least the data
that may be required by one or more units of the system to perform their respective functions.
[0042] As used herein, “interface” or “user interface” refers to a shared boundary
across which two or more separate components of a system exchange information
or data. The interface may also be referred to a set of rules or protocols that define communication or interaction of one or more modules or one or more units with each other, which also includes the methods, functions, or procedures that may be called.
[0043] All modules, units, components used herein, unless explicitly excluded herein, may be software modules or hardware processors, the processors being a general-purpose processor, a special purpose processor, a conventional processor, a digital signal processor (DSP), a plurality of microprocessors, one or more
microprocessors in association with a DSP core, a controller, a microcontroller,
Application Specific Integrated Circuits (ASIC), Field Programmable Gate Array circuits (FPGA), any other type of integrated circuits, etc.
[0044] As used herein, the transceiver unit includes at least one receiver and at least
one transmitter configured respectively for receiving and transmitting data, signals,
information, or a combination thereof between units/components within the system and/or connected with the system.
[0045] As used herein, PM data refers to Performance Management data, which
encompasses various metrics and statistics related to the performance and health of
network elements such as base stations, routers, switches, servers, and other
infrastructure components. The PM data includes, but is not limited to, information
on traffic volume, error rates, latency, uptime, and other critical performance
indicators. The PM data is collected at regular intervals to monitor the network's
functionality, diagnose issues, optimize performance, and ensure reliable and
efficient operation.

[0046] As used herein, counter reset scheduler refers to a mechanism within the
Network Management System (NMS) that periodically initiates and manages the
resetting of performance counters across various network elements. The counter
reset scheduler facilitates sending out counter reset requests at configured
intervals to network devices such as base stations, routers, switches, and servers. By resetting the counters, the system ensures that the performance data collected is accurate and up-to-date.
[0047] As used herein, flow-identifier (ID) refers to a unique identifier assigned to
each counter reset request within the network management system (NMS). The flow-identifier (ID) serves as a distinct tag that allows the system to accurately track and correlate the performance management (PM) data received from various network elements with the corresponding requests initially sent out.
[0048] As used herein, counter reset request entries refer to specific records stored in the Input/Output (IO) Cache, each uniquely identified by a flow-identifier (ID). These entries are created when the Collector component of the Network Management System (NMS) sends requests to network elements, such as routers or
switches, to reset their performance counters and initiate the collection of
performance management (PM) data. The purpose of these entries is to track the status and progress of each counter reset request, ensuring that the NMS can accurately monitor and retrieve the required PM data. The entries remain in the IO Cache until the corresponding PM data is received, after which they are removed,
or until they expire if the data is not received within a predefined time frame.
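
By way of a non-limiting illustration only, such an entry may be sketched as a small record keyed by its flow-identifier (ID); the field names used below (node_id, flow_id, created_at) are assumptions introduced purely for illustration and do not form part of the claimed subject matter.

```python
import time
import uuid
from dataclasses import dataclass, field

@dataclass
class CounterResetEntry:
    """Illustrative record for one counter reset request held in the IO Cache."""
    node_id: str                                     # network element the request was sent to
    flow_id: str = field(default_factory=lambda: str(uuid.uuid4()))  # unique flow-identifier (ID)
    created_at: float = field(default_factory=time.time)            # used later for expiry checks

# The IO Cache may be modelled as a mapping from flow-ID to entry.
io_cache = {}
entry = CounterResetEntry(node_id="gnb-0001")        # "gnb-0001" is a placeholder node name
io_cache[entry.flow_id] = entry                      # persists until PM data arrives or the entry expires
```
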
[0049] As used herein, network node refers to any active electronic device that is
connected to a network and is capable of sending, receiving, or forwarding
information. The network node encompasses devices including, but not limited to,
base stations, routers, switches, servers, hubs, modems, and network interface
cards. The network nodes are the primary sources and destinations of performance
data, which is collected, monitored, and analysed to optimize network efficiency and detect any potential issues.
[0050] As used herein, cache refers to a high-speed storage layer that temporarily
holds data to quickly serve future requests. This storage mechanism allows for the
retrieval of frequently accessed data faster than retrieving it from the primary storage location, thereby improving overall system performance and efficiency. The IO Cache specifically stores counter reset request entries with unique identifiers, enabling efficient tracking and auditing of these requests.
[0051] As used herein, a network management system, or NMS, refers to a set of applications, tools, and protocols designed to monitor, manage, and optimize the performance, reliability, and security of a network. The NMS encompasses various functions including fault management, configuration management, performance
management, and security management, ensuring the seamless operation of
networking devices such as routers, switches, servers, and other infrastructure components.
[0052] As used herein, a load balancer directs incoming traffic to one or more
backend servers, known as "targets" or "nodes." The load balancer distributes
incoming traffic across multiple targets.
[0053] The load balancer serves requests to the targets with the fewest active connections. Load Balancer maps client internet protocol (IP) addresses to specific
targets based on a hash function. Load balancers often maintain session
information, such as cookies or HTTP headers, to ensure that subsequent requests from a client are directed to the same backend server. Load balancers perform health checks on each target to detect and remove any unresponsive or faulty servers from the rotation. Load balancers typically provide real-time monitoring and reporting
capabilities, allowing administrators to track performance metrics, such as response
times, error rates, and server utilization.
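
A minimal sketch of the two selection policies mentioned above (fewest active connections and IP-hash affinity) is given below; the target names and connection counts are hypothetical and serve only to illustrate the idea.

```python
import hashlib

# Hypothetical backend targets with their current active-connection counts.
targets = {"collector-1": 12, "collector-2": 7, "collector-3": 9}

def least_connections(active_counts):
    """Pick the target currently serving the fewest active connections."""
    return min(active_counts, key=active_counts.get)

def ip_hash(client_ip, target_names):
    """Map a client IP to a fixed target via a hash so that subsequent
    requests from that client reach the same backend (session affinity)."""
    digest = int(hashlib.md5(client_ip.encode()).hexdigest(), 16)
    return target_names[digest % len(target_names)]

print(least_connections(targets))                    # -> collector-2
print(ip_hash("10.20.30.40", sorted(targets)))       # deterministic per client IP
```
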

[0054] As discussed in the background section, the current known solutions have several shortcomings. The present disclosure aims to overcome the above-mentioned and other existing problems in this field of technology by providing a method and system for network performance management.
[0055] FIG. 1 illustrates an exemplary block diagram representation of 5th generation core (5GC) network architecture, in accordance with exemplary implementation of the present disclosure. As shown in FIG. 1, the 5GC network architecture [100] includes a user equipment (UE) [102], a radio access network
(RAN) [104], an access and mobility management function (AMF) [106], a Session
Management Function (SMF) [108], a Service Communication Proxy (SCP) [110], an Authentication Server Function (AUSF) [112], a Network Slice Specific Authentication and Authorization Function (NSSAAF) [114], a Network Slice Selection Function (NSSF) [116], a Network Exposure Function (NEF) [118], a
Network Repository Function (NRF) [120], a Policy Control Function (PCF) [122],
a Unified Data Management (UDM) [124], an application function (AF) [126], a User Plane Function (UPF) [128], a data network (DN) [130], wherein all the components are assumed to be connected to each other in a manner as obvious to the person skilled in the art for implementing features of the present disclosure.
[0056] Radio Access Network (RAN) [104] is the part of a mobile telecommunications system that connects user equipment (UE) [102] to the core network (CN) and provides access to different types of networks (e.g., 5G network). It consists of radio base stations and the radio access technologies that enable
wireless communication.
[0057] Access and Mobility Management Function (AMF) [106] is a 5G core
network function responsible for managing access and mobility aspects, such as UE
registration, connection, and reachability. It also handles mobility management
procedures like handovers and paging.

[0058] Session Management Function (SMF) [108] is a 5G core network function responsible for managing session-related aspects, such as establishing, modifying, and releasing sessions. It coordinates with the User Plane Function (UPF) for data forwarding and handles IP address allocation and QoS enforcement.
[0059] Service Communication Proxy (SCP) [110] is a network function in the 5G core network that facilitates communication between other network functions by providing a secure and efficient messaging service. It acts as a mediator for service-based interfaces.
[0060] Authentication Server Function (AUSF) [112] is a network function in the 5G core responsible for authenticating UEs during registration and providing security services. It generates and verifies authentication vectors and tokens.
[0061] Network Slice Specific Authentication and Authorization Function
(NSSAAF) [114] is a network function that provides authentication and authorization services specific to network slices. It ensures that UEs can access only the slices for which they are authorized.
[0062] Network Slice Selection Function (NSSF) [116] is a network function
responsible for selecting the appropriate network slice for a UE based on factors such as subscription, requested services, and network policies.
[0063] Network Exposure Function (NEF) [118] is a network function that
exposes capabilities and services of the 5G network to external applications,
enabling integration with third-party services and applications.
[0064] Network Repository Function (NRF) [120] is a network function that acts
as a central repository for information about available network functions and
services. It facilitates the discovery and dynamic registration of network functions.

[0065] Policy Control Function (PCF) [122] is a network function responsible for policy control decisions, such as QoS, charging, and access control, based on subscriber information and network policies.
[0066] Unified Data Management (UDM) [124] is a network function that
centralizes the management of subscriber data, including authentication, authorization, and subscription information.
[0067] Application Function (AF) [126] is a network function that represents
external applications interfacing with the 5G core network to access network
capabilities and services.
[0068] User Plane Function (UPF) [128] is a network function responsible for
handling user data traffic, including packet routing, forwarding, and QoS
enforcement.
[0069] Data Network (DN) [130] refers to a network that provides data services to user equipment (UE) in a telecommunications system. The data services may include but are not limited to Internet services, private data network related services.
[0070] FIG. 2 illustrates an exemplary block diagram of a computing device [200] (also referred to herein as a computer system [200]) upon which the features of the present disclosure may be implemented in accordance with exemplary implementation of the present disclosure. In an implementation, the computing
device [200] may also implement a method for network performance management
utilising the system. In another implementation, the computing device [200] itself implements the method for network performance management using one or more units configured within the computing device [200], wherein said one or more units are capable of implementing the features as disclosed in the present disclosure.

[0071] The computing device [200] may include a bus [202] or other
communication mechanism for communicating information, and a hardware
processor [204] coupled with bus [202] for processing information. The hardware
processor [204] may be, for example, a general-purpose microprocessor. The
computing device [200] may also include a main memory [206], such as a random-
access memory (RAM), or other dynamic storage device, coupled to the bus [202] for storing information and instructions to be executed by the processor [204]. The main memory [206] also may be used for storing temporary variables or other intermediate information during execution of the instructions to be executed by the
processor [204]. Such instructions, when stored in non-transitory storage media
accessible to the processor [204], render the computing device [200] into a special-purpose machine that is customized to perform the operations specified in the instructions. The computing device [200] further includes a read only memory (ROM) [208] or other static storage device coupled to the bus [202] for storing static
information and instructions for the processor [204].
[0072] A storage device [210], such as a magnetic disk, optical disk, or solid-state drive is provided and coupled to the bus [202] for storing information and instructions. The computing device [200] may be coupled via the bus [202] to a
display [212], such as a cathode ray tube (CRT), Liquid Crystal Display (LCD),
Light Emitting Diode (LED) display, Organic LED (OLED) display, etc. for displaying information to a computer user. An input device [214], including alphanumeric and other keys, touch screen input means, etc. may be coupled to the bus [202] for communicating information and command selections to the processor
[204]. Another type of user input device may be a cursor controller [216], such as
a mouse, a trackball, or cursor direction keys, for communicating direction information and command selections to the processor [204], and for controlling cursor movement on the display [212]. This input device typically has two degrees of freedom in two axes, a first axis (e.g., x) and a second axis (e.g., y), that allow
the device to specify positions in a plane.

[0073] The computing device [200] may implement the techniques described
herein using customized hard-wired logic, one or more ASICs or FPGAs, firmware,
and/or program logic which in combination with the computing device [200] causes
or programs the computing device [200] to be a special-purpose machine.
According to one implementation, the techniques herein are performed by the
computing device [200] in response to the processor [204] executing one or more
sequences of one or more instructions contained in the main memory [206]. Such
instructions may be read into the main memory [206] from another storage medium,
such as the storage device [210]. Execution of the sequences of instructions
contained in the main memory [206] causes the processor [204] to perform the
process steps described herein. In alternative implementations of the present disclosure, hard-wired circuitry may be used in place of or in combination with software instructions.
[0074] The computing device [200] also may include a communication interface
[218] coupled to the bus [202]. The communication interface [218] provides a two-way data communication coupling to a network link [220] that is connected to a local network [222]. For example, the communication interface [218] may be an integrated services digital network (ISDN) card, cable modem, satellite modem, or
a modem to provide a data communication connection to a corresponding type of
telephone line. As another example, the communication interface [218] may be a local area network (LAN) card to provide a data communication connection to a compatible LAN. Wireless links may also be implemented. In any such implementation, the communication interface [218] sends and receives electrical,
electromagnetic, or optical signals that carry digital data streams representing
various types of information.
[0075] The computing device [200] can send messages and receive data, including
program code, through the network(s), the network link [220] and the
communication interface [218]. In the Internet example, a server [230] might
transmit a requested code for an application program through the Internet [228], the
ISP [226], the local network [222], host [224], and the communication interface
[218]. The received code may be executed by the processor [204] as it is received,
and/or stored in the storage device [210], or other non-volatile storage for later
execution.
[0076] The computing device [200] encompasses a wide range of electronic
devices capable of processing data and performing computations. Examples of
computing device [200] include, but are not limited to, personal computers,
laptops, tablets, smartphones, servers, and embedded systems. The devices may
operate independently or as part of a network and can perform a variety of tasks
such as data storage, retrieval, and analysis. Additionally, computing device [200] may include peripheral devices, such as monitors, keyboards, and printers, as well as integrated components within larger electronic systems, showcasing their versatility in various technological applications.
[0077] Referring to FIG. 3, an exemplary block diagram of a system [300] for network performance management is shown, in accordance with the exemplary implementations of the present disclosure. The system [300] comprises a configuring unit [302], an auditing unit [304], a retrieving unit [306], and a
processing unit [308], wherein all the components are assumed to be connected to
each other in a manner as obvious to the person skilled in the art for implementing features of the present disclosure. Also, in FIG. 3 only a few units are shown, however, the system [300] may comprise multiple such units or the system [300] may comprise any such numbers of said units, as required to implement the features
of the present disclosure.
[0078] The system [300] is configured for network performance management, with the help of the interconnection between the components/units of the system [300].
[0079] The system [300] includes the configuring unit [302] configured to
configure a counter reset scheduler and an Input/Output (IO) Cache in a Network
Management System (NMS) to initiate a set of performance management (PM) data
collection from a plurality of network nodes. The counter reset scheduler operates
at predefined intervals (such as hourly), thus facilitating the periodic collection of
the set of PM data from the plurality of network nodes. For example, if the NMS is
configured for collecting PM data every hour, the configuring unit [302] will set
the counter reset scheduler accordingly to trigger the operation each hour.
[0080] Additionally, the configuring unit [302] is configured to set up the IO Cache to store all counter reset requests, associating each with a unique flow-identifier
10 (ID). This configuration allows for efficient tracking and retrieval of PM data
requests. For example, if a network node fails to send the required PM data, the corresponding request entry remains in the IO Cache, enabling subsequent audits to identify and address the missing data. This process ensures that the NMS can maintain comprehensive and accurate performance monitoring across all network
nodes, even in the event of data transmission failures or network issues.
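
A minimal sketch of such a configuration step is shown below, assuming an hourly interval and an in-memory cache; the node names and the helper send_counter_reset are illustrative stand-ins for the Collector's real transmission path, not elements defined by the disclosure.

```python
import sched
import time
import uuid

COUNTER_RESET_INTERVAL_S = 3600                      # assumed hourly collection interval
NETWORK_NODES = ["gnb-0001", "router-07"]            # placeholder node identifiers
io_cache = {}                                        # flow-ID -> request metadata

def send_counter_reset(node):
    """Record a counter reset request in the IO Cache under a fresh flow-ID.
    A real system would also transmit the request to the node here."""
    flow_id = str(uuid.uuid4())
    io_cache[flow_id] = {"node": node, "sent_at": time.time()}
    return flow_id

def counter_reset_job(scheduler):
    """Periodic job installed by the configuring unit; reschedules itself."""
    for node in NETWORK_NODES:
        send_counter_reset(node)
    scheduler.enter(COUNTER_RESET_INTERVAL_S, 1, counter_reset_job, (scheduler,))

scheduler = sched.scheduler(time.time, time.sleep)
scheduler.enter(0, 1, counter_reset_job, (scheduler,))
# scheduler.run()  # would run the hourly job loop; left commented in this sketch
```
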
[0081] The system [300] includes the auditing unit [304] communicatively coupled to the configuring unit [302]. The auditing unit [304] is configured to periodically audit the IO Cache to check a set of counter reset request entries. The auditing unit
[304] performs the audits at regular intervals (such as hourly), to ensure that the set
of counter reset request entries are accounted for and any missing PM data is identified. For example, if the auditing unit [304] is set to perform audits every hour, it will audit the IO Cache at the specified times to locate and assess the set of counter reset request entries that have not yet received the set of PM data from the
plurality of network nodes. During each audit, the auditing unit [304] retrieves the
set of counter reset request entries and determines their status, identifying which of the set of counter reset request entries have been fulfilled and which remain outstanding.
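
One way to picture this audit step is the sketch below, which, as a simplifying assumption, takes the set of already-fulfilled flow-IDs as an input and partitions the cached entries accordingly.

```python
def audit_io_cache(io_cache, received_flow_ids):
    """Split cached counter reset entries into fulfilled entries (PM data seen)
    and outstanding entries (still waiting for PM data)."""
    fulfilled = {fid: e for fid, e in io_cache.items() if fid in received_flow_ids}
    outstanding = {fid: e for fid, e in io_cache.items() if fid not in received_flow_ids}
    return fulfilled, outstanding

# Illustrative run: two requests cached, PM data received for only one of them.
cache = {"flow-a": {"node": "gnb-0001"}, "flow-b": {"node": "gnb-0002"}}
done, pending = audit_io_cache(cache, received_flow_ids={"flow-a"})
print(sorted(done), sorted(pending))                 # ['flow-a'] ['flow-b']
```
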
[0082] The system [300] includes the retrieving unit [306] communicatively
coupled to the auditing unit [304]. The retrieving unit [306] is configured to retrieve
the set of counter reset request entries from the IO Cache based on the audit. Upon
receiving the audit results from the auditing unit [304], the retrieving unit [306]
accesses the IO Cache to extract the set of counter reset request entries identified
during the audit. For example, if the auditing unit [304] determines that certain
counter reset request entries have not received the corresponding PM data, the
retrieving unit [306] will locate the specific entries within the IO Cache.
[0083] The system [300] includes the processing unit [308] communicatively coupled to the retrieving unit [306]. The processing unit [308] is configured to send
a set of performance management (PM) data requests associated with the retrieved
set of counter reset request entries to at least one network node of the plurality of network nodes for retrieval of missing set of PM data. Upon receiving the set of counter reset request entries from the retrieving unit [306], the processing unit [308] generates and sends the set of PM data requests that correspond to the retrieved set
of counter reset request entries. For example, if the retrieved counter reset request
entries indicate that PM data was not received from specific network nodes due to a prior network issue, the processing unit [308] will create requests directed at those particular nodes. The set of PM data requests are sent out to the at least one network node of the plurality of network nodes to collect the missing PM data.
[0084] The processing unit [308] is further configured to remove the set of counter reset request entries from the IO Cache for which the set of PM data is received during the audit. During the auditing process, when the auditing unit [304] identifies that the set of PM data has been received for the set of counter reset request entries,
the processing unit [308] clears the set of counter reset request entries from the IO
Cache for which the set of PM data is received during the audit. For example, if the auditing unit [304] identifies that the set of PM data corresponding to the counter reset request entries has been delivered by the network nodes, the processing unit [308] will then proceed to remove those specific entries from the IO Cache such
that only pending or unresolved counter reset request entries are retained. It would
be appreciated by the person skilled in the art that the processing unit [308]
optimizes the management of the IO Cache, preventing unnecessary accumulation of fulfilled requests and thereby enhancing the overall performance and responsiveness of the network management system.
[0085] The processing unit [308] is further configured to allow the set of counter
reset request entries to persist in the IO Cache until a predefined expiry time period is reached if the set of PM data is again not received for subsequent audit, from the plurality of network nodes. If, during a subsequent audit, the set of PM data corresponding to the set of counter reset request entries is still not received from
the plurality of network nodes, the processing unit [308] allows the set of
counter reset request entries to remain in the IO Cache. For example, if a predefined expiry time period of six hours is set, and the PM data is not received within this period, the processing unit [308] will keep these counter reset request entries in the IO Cache for up to six hours.
[0086] The processing unit [308] is further configured to track each of the counter reset request entries by storing the set of counter reset request entries in the Input/Output (IO) Cache, wherein each of the set of counter reset request entries is associated with a unique flow-identifier (ID). Further, the processing unit [308] is
configured to receive the set of PM data from the plurality of network nodes using
the unique flow-ID to correlate the received set of PM data with a corresponding counter reset request entry. Thereafter, the processing unit is configured to remove a counter reset request entry of the set of counter reset request entries, from the IO Cache, for which the set of PM data is received. For example, when a counter reset
request entry is generated, it is stored in the IO Cache with a unique flow-ID. The
unique flow-ID is then used to match the received set of PM data to correlate the set of counter reset request entry. When the set of PM data is received from the plurality of network nodes, the processing unit [308] uses the unique flow-ID to identify which counter reset request entry in the IO Cache corresponds to the
received set of PM data. Upon successful correlation, the processing unit [308]
removes the matched counter reset request entry from the IO Cache, thereby ensuring that the cache is kept current and only contains outstanding requests.
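
A minimal sketch of this correlate-and-remove step follows; the report layout (a dict carrying flow_id and counters) is an assumption made purely for illustration.

```python
def on_pm_data_received(io_cache, pm_report):
    """Correlate an incoming PM report with its counter reset entry via the
    unique flow-ID and drop the entry so only outstanding requests remain."""
    entry = io_cache.pop(pm_report.get("flow_id"), None)   # None if nothing matches
    if entry is None:
        return None                                        # unsolicited or already-expired report
    return {"node": entry["node"], "counters": pm_report.get("counters", {})}

cache = {"flow-42": {"node": "gnb-0001"}}
print(on_pm_data_received(cache, {"flow_id": "flow-42", "counters": {"rrc_attempts": 120}}))
print(cache)                                               # {} -- the matched entry was removed
```
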
[0087] The processing unit [308] is further configured to raise an alarm if the
missing set of PM data is detected based on the retrieved set of counter reset request
entries. For example, during an audit, if the processing unit [308] identifies that the
set of counter reset request entries in the IO Cache have not received their
corresponding PM data from the network nodes, it will trigger an alarm. This alarm
serves as an immediate notification of missing PM data, allowing administrators to
take timely action to investigate and resolve the underlying issues. The alarm can
be configured to include specific details about the missing data, such as the affected network nodes and the time period of the missing data, providing valuable information for troubleshooting.
[0088] The processing unit [308] is further configured to validate the time period of
the set of counter reset request entries in the IO Cache, wherein entries that have exceeded the predefined expiry time period are removed from the IO Cache. The validation process involves continuously monitoring the timestamps associated with each of the set of counter reset request entries stored in the IO Cache. For
example, each counter reset request entry is initially assigned a timestamp when it
is stored in the IO Cache. The processing unit [308] regularly checks the timestamps to determine how long each entry has been in the cache. If an entry's time period exceeds the predefined expiry time, such as six hours, the processing unit [308] will identify it as expired. Consequently, the expired set of entries are automatically
removed from the IO Cache.
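
The expiry validation can be pictured as the following sketch, assuming a six-hour window and entries that carry a sent_at timestamp; both are illustrative assumptions rather than prescribed values.

```python
import time

EXPIRY_S = 6 * 3600                                  # assumed six-hour expiry window

def purge_expired(io_cache, now=None):
    """Remove counter reset entries whose age exceeds the predefined expiry
    time period and return the flow-IDs that were dropped."""
    now = time.time() if now is None else now
    expired = [fid for fid, e in io_cache.items() if now - e["sent_at"] > EXPIRY_S]
    for fid in expired:
        del io_cache[fid]
    return expired

cache = {"old": {"sent_at": time.time() - 7 * 3600}, "new": {"sent_at": time.time()}}
print(purge_expired(cache), list(cache))             # ['old'] ['new']
```
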
[0089] The configuring unit [302] is further configured to configure a set of
parameters associated with the plurality of nodes for auditing the set of counter
reset requests. The set of parameters comprises at least one of a counter reset
scheduler interval, an expiry time of counter reset requests, a batch size, and a batch
interval. For example, the counter reset scheduler interval determines how
frequently the counter reset operations are scheduled, which can be adjusted based
on the network's data collection requirement. The expiry time of counter reset
requests specifies the duration for which these requests remain valid in the IO Cache
before being automatically removed if the PM data is not received. The batch size
parameter defines the number of counter reset request entries to be processed in a
single audit cycle, optimizing the load and performance of the auditing process. Similarly, the batch interval parameter sets the time delay between successive batches of counter reset requests, allowing for controlled and efficient processing.
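
These four parameters can be grouped as a single configuration object, as in the sketch below; the default values are placeholders, not values prescribed by the disclosure.

```python
from dataclasses import dataclass

@dataclass
class AuditConfig:
    """Illustrative bundle of the tunable parameters described above."""
    counter_reset_interval_s: int = 3600             # counter reset scheduler interval
    request_expiry_s: int = 6 * 3600                 # expiry time of counter reset requests
    batch_size: int = 50                             # entries re-requested per audit batch
    batch_interval_s: int = 10                       # delay between successive batches

config = AuditConfig(batch_size=100)                 # an operator could override per deployment
print(config)
```
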
[0090] Referring to FIG. 4, an exemplary block diagram of a system architecture
[400] for network performance management is shown. The system architecture [400] comprises a Collector [402], IO cache [404], a network element [406], a load balancer [408], a PM Auditor [410], a Fault Management (FM) System [412], a stream [414], a performance manager (PM) [416], and a database [418], wherein
all the components are assumed to be connected to each other in a manner as
obvious to the person skilled in the art for implementing features of the present disclosure. Also, in FIG. 4 only a few units are shown, however, the system architecture [400] may comprise multiple such units or the system architecture [400] may comprise any such numbers of said units, as required to implement the
features of the present disclosure.
[0091] The system architecture [400] is configured for network performance management, with the help of the interconnection between the components/units of the system architecture [400].
[0092] The Configuring Unit [302] configures a counter reset scheduler and an Input/Output (IO) Cache [404] within the NMS to initiate the set of PM data collection from various network elements such as routers, switches, servers, and other networking devices.

[0093] Each of the set of counter reset request entries sent by the Collector [402] is
assigned a unique flow-identifier (ID) and stored in the IO Cache [404]. The unique
flow-ID allows the system to track each request. The IO Cache [404] holds the set
of counter reset request entries until the corresponding set of PM data is received
from the network elements [406].
[0094] Upon receiving the set of counter reset request entries, the network elements [406] generate and send the required PM data back to the Collector [402]. The Load Balancer [408] distributes incoming traffic across multiple targets. The load
balancer serves requests to the targets with the fewest active connections. Load
Balancer maps client internet protocol (IP) addresses to specific targets based on a hash function. Load balancers often maintain session information, such as cookies or HTTP headers, to ensure that subsequent requests from a client are directed to the same backend server. Load balancers perform health checks on each target to
detect and remove any unresponsive or faulty servers from the rotation. Load
balancers typically provide real-time monitoring and reporting capabilities, allowing administrators to track performance metrics, such as response times, error rates, and server utilization.
[0095] When the set of PM data arrives, the Collector [402] uses the flow-ID to
identify the corresponding request in the IO Cache [404] and removes the entry, indicating that the set of PM data has been successfully collected.
[0096] The PM Auditor [410] periodically audits the IO Cache [404] to check for
any missing set of PM data. The auditing process is repeated after a predefined time
period (such as every hour), as configured, to ensure timely detection of any
discrepancies. The PM Auditor [410] retrieves the set of counter reset request
entries from the IO Cache [404] and checks if any data is missing. If the set of PM
data is received, no further action is required. However, if some data is missing, the
system takes additional steps to address the issue.

[0097] When the PM Auditor [410] identifies a missing set of PM data, it validates
the time period of the set of counter reset request entries. If the requests have
expired based on a predefined expiry time period (such as six hours), the set of
counter reset request entries are removed from the IO Cache [404] without any
further action. For non-expired requests, the PM Auditor [410] sends an alarm
request to the Fault Management (FM) System [412] to notify users of the missing set of PM data. Additionally, the set of counter reset request entries are added to a batched queue system, and the system attempts to resend the requests to the network elements based on configured batch sizes and delays.
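
A sketch of this expiry-check, alarm and batched-resend handling is given below; raise_alarm and resend_request are injected stand-ins for the FM system interface and the Collector's send path, assumed here only for illustration.

```python
import time

def handle_missing_pm_data(io_cache, expiry_s, batch_size, batch_interval_s,
                           raise_alarm, resend_request):
    """Drop expired entries, alarm on the remaining ones, and resend them in
    batches separated by the configured delay."""
    now = time.time()
    pending = []
    for flow_id, entry in list(io_cache.items()):
        if now - entry["sent_at"] > expiry_s:
            del io_cache[flow_id]                    # expired: remove without further action
        else:
            raise_alarm(flow_id, entry["node"])      # notify the FM system of missing PM data
            pending.append((flow_id, entry))
    for i in range(0, len(pending), batch_size):     # resend in configured batch sizes
        for flow_id, entry in pending[i:i + batch_size]:
            resend_request(flow_id, entry["node"])
        if i + batch_size < len(pending):
            time.sleep(batch_interval_s)             # pause between batches

cache = {"flow-1": {"node": "gnb-0001", "sent_at": time.time() - 100}}
handle_missing_pm_data(cache, expiry_s=6 * 3600, batch_size=10, batch_interval_s=0,
                       raise_alarm=lambda f, n: print("ALARM", f, n),
                       resend_request=lambda f, n: print("RESEND", f, n))
```
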
[0098] The FM System [412] facilitates raising alarms based on the requests (such as a missing set of PM data identified based on the retrieved set of counter reset request entries) received from the PM Auditor [410]. The alarms alert users to the missing PM data, allowing them to take necessary actions to resolve the issue.
[0099] Once the missing PM data is received, the Collector [402] removes the corresponding entries from the IO Cache [404]. The PM data is then uploaded to a data Stream [414] for further processing by the Performance Manager (PM) [416].
[00100] The PM Auditor [410] regularly audits the IO Cache [404] and
manages the set of counter reset request entries. If PM data is not received within the configured time frame, the set of counter reset request entries will persist in the IO Cache [404] until they expire.
[00101] Referring to FIG. 5, an exemplary method [500] flow diagram for
network performance management, in accordance with exemplary implementations of the present disclosure is shown. In an implementation the method [500] is performed by the system [300]. Also, as shown in FIG. 5, the method [500] starts at step [502].

[00102] Next, at step [504], the method [500] as disclosed by the present
disclosure comprises configuring, by a configuring unit [302], a counter reset
scheduler and an Input/Output (IO) Cache in a Network Management System
(NMS) to initiate a set of performance management (PM) data collection from a
plurality of network nodes. The counter reset scheduler operates at predefined
intervals (such as hourly), thus facilitating the periodic collection of the set of PM data from the plurality of network nodes. For example, if the NMS is configured for collecting PM data every hour, the configuring unit [302] will set the counter reset scheduler accordingly to trigger the operation each hour.
[00103] Next, at step [506], the method [500] as disclosed by the present
disclosure comprises periodically auditing, by an auditing unit [304], the IO Cache to check a set of counter reset request entries. The auditing unit [304] performs the audits at regular intervals (such as hourly), to ensure that the set of counter reset
15 request entries are accounted for and any missing PM data is identified. For
example, if the auditing unit [304] is set to perform audits every hour, it will audit the IO Cache at the specified times to locate and assess the set of counter reset request entries that have not yet received the set of PM data from the plurality of network nodes. During each audit, the auditing unit [304] retrieves the set of counter
reset request entries and determines their status, identifying which of the set of counter
reset request entries have been fulfilled and which remain outstanding.
[00104] Next, at step [508], the method [500] as disclosed by the present
disclosure comprises retrieving, by a retrieving unit [306], the set of counter reset
request entries from the IO Cache based on the audit. Upon receiving the audit
results from the auditing unit [304], the retrieving unit [306] accesses the IO Cache to extract the set of counter reset request entries identified during the audit. For example, if the auditing unit [304] determines that certain counter reset request entries have not received the corresponding PM data, the retrieving unit [306] will
locate the specific entries within the IO Cache.

[00105] Next, at step [510], the method [500] as disclosed by the present
disclosure comprises sending, by a processing unit [308], a set of performance
management (PM) data requests associated with the retrieved set of counter reset
request entries to at least one network node of the plurality of network nodes for
retrieval of missing set of PM data. Upon receiving the set of counter reset request
entries from the retrieving unit [306], the processing unit [308] generates and sends
the set of PM data requests that correspond to the retrieved set of counter reset
request entries. For example, if the retrieved counter reset request entries indicate
that PM data was not received from specific network nodes due to a prior network
issue, the processing unit [308] will create requests directed at those particular
nodes. The set of PM data requests are sent out to the at least one network node of the plurality of network nodes to collect the missing PM data.
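A possible sketch of this dispatch step is shown below; the `transport` object and its `send` method are placeholders for whichever southbound interface the NMS actually exposes, and are not part of the disclosure.

```python
from collections import defaultdict


def build_pm_data_requests(outstanding_entries: list) -> dict:
    """Group outstanding flow-IDs by the network node they were sent to."""
    per_node = defaultdict(list)
    for entry in outstanding_entries:
        per_node[entry["node_id"]].append(entry["flow_id"])
    return dict(per_node)


def send_pm_data_requests(requests_per_node: dict, transport) -> None:
    """Dispatch one PM data request per node via a placeholder transport."""
    for node_id, flow_ids in requests_per_node.items():
        transport.send(node_id, {"type": "pm_data_request", "flow_ids": flow_ids})
```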
[00106] The method [500] terminates at step [512].
[00107] FIG. 6 illustrates an exemplary process for network performance
management, in accordance with exemplary embodiments of the present disclosure.
[00108] The process [600] starts at step [602].
[00109] At step [604], the process begins with the collector running a counter
reset job, in which the collector sends out counter reset requests to the various network nodes.
[00110] At step [606], the requests are tracked by inserting the request
details, including a unique flow-identifier (ID), into the Input/Output (IO) Cache. The unique flow-ID ensures that each counter reset request can be accurately monitored and identified throughout the process.
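As an illustration only, inserting such an entry into the IO Cache may look like the following sketch; the use of a UUID as the flow-ID and the dictionary field names are assumptions of the sketch.

```python
import time
import uuid


def track_counter_reset_request(io_cache: dict, node_id: str) -> str:
    """Insert a counter reset request entry into the IO Cache under a flow-ID.

    A UUID serves as the unique flow-identifier purely for illustration; the
    disclosure does not prescribe how the flow-ID is generated.
    """
    flow_id = str(uuid.uuid4())
    io_cache[flow_id] = {
        "node_id": node_id,
        "inserted_at": time.time(),
        "pm_data_received": False,
    }
    return flow_id
```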
[00111] At step [608], the PM Auditor runs hourly to check the IO Cache
for counter reset request entries.

[00112] At step [610], the PM Auditor checks whether the IO Cache contains
any counter reset request entries.
[00113] At step [612], if the IO Cache does not contain any counter reset
request entries, the PM Auditor assumes that the PM data has been received for every request and takes no further action.
[00114] At step [614], if the IO Cache does contain counter reset request
entries, the PM Auditor checks if these entries have expired. Entries are considered
expired if they have been present in the IO Cache for longer than the predefined
expiry time period.
[00115] At step [616], if it is found that the request entries are expired, the
PM Auditor removes the entries from the IO Cache.
[00116] At step [618], if the request entries are not expired, the PM Auditor
adds the counter reset requests for the found entries to a bulk queue.
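Steps [608] to [618] together describe a single audit pass. One possible sketch of that pass, under the same hypothetical dictionary-based cache and a deque-based bulk queue, is shown below; the field names and queue structure are assumptions of the sketch.

```python
import time
from collections import deque


def run_pm_audit(io_cache: dict, bulk_queue: deque, expiry_s: float) -> None:
    """One audit pass over the IO Cache following steps [608] to [618]."""
    if not io_cache:
        # Step [612]: no entries, so PM data is assumed received for every request.
        return
    now = time.time()
    for flow_id in list(io_cache):                 # copy keys; entries may be removed
        entry = io_cache[flow_id]
        if now - entry["inserted_at"] > expiry_s:
            del io_cache[flow_id]                  # step [616]: drop expired entries
        elif not entry["pm_data_received"]:
            # Step [618]: still-valid, unfulfilled entries are queued for resend.
            bulk_queue.append((flow_id, entry["node_id"]))
```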
[00117] At step [620], if the missing set of PM data is detected based on the
retrieved set of counter reset request entries, the processing unit raises an alarm to
the Fault Management system if configured.
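A minimal sketch of this optional alarm hook is given below; `fm_client` and its `raise_alarm` method are hypothetical names standing in for the Fault Management interface.

```python
def raise_missing_pm_alarm(outstanding_flow_ids: list, fm_client=None) -> None:
    """Raise an alarm towards Fault Management when missing PM data is detected.

    The alarm is only raised when a Fault Management integration is configured,
    i.e. when a client object has been supplied.
    """
    if outstanding_flow_ids and fm_client is not None:
        fm_client.raise_alarm({
            "type": "PM_DATA_MISSING",
            "flow_ids": outstanding_flow_ids,
        })
```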
[00118] At step [622], the counter reset requests are then sent from the bulk
queue based on the configured batch size and delay.
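One way to sketch this batched dispatch, assuming the deque-based bulk queue used above and a placeholder transport, is the following; the default batch size and delay are illustrative only.

```python
import time
from collections import deque


def drain_bulk_queue(bulk_queue: deque, transport,
                     batch_size: int = 50, batch_delay_s: float = 10.0) -> None:
    """Send queued counter reset requests in batches of the configured size.

    `transport.send_counter_reset(node_id, flow_id)` is a stand-in for the
    actual southbound call; batch size and delay mirror the configured values.
    """
    while bulk_queue:
        for _ in range(min(batch_size, len(bulk_queue))):
            flow_id, node_id = bulk_queue.popleft()
            transport.send_counter_reset(node_id, flow_id)
        if bulk_queue:
            time.sleep(batch_delay_s)              # configured delay between batches
```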
[00119] At step [624], when the collector receives the PM data from the
network nodes using the flow-ID, it correlates the received data with the corresponding counter reset request entry. Once the PM data is successfully received, the collector removes the respective request entry from the IO Cache,
ensuring that only outstanding requests remain in the cache.
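Under the same assumptions, the correlation and clean-up at this step may be sketched as:

```python
def on_pm_data_received(io_cache: dict, flow_id: str, pm_data: dict) -> bool:
    """Correlate received PM data with its counter reset request via the flow-ID.

    Returns True when the flow-ID matched a cached entry, in which case the
    entry is removed so that only outstanding requests remain in the IO Cache.
    Persisting `pm_data` itself is outside the scope of this sketch.
    """
    return io_cache.pop(flow_id, None) is not None
```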

[00120] Thereafter, the process [600] terminates at step [624].
[00121] The present disclosure further discloses a non-transitory computer readable storage medium storing instructions for network performance management, the instructions include executable code which, when executed by one or more units of a system, causes: a configuring unit to configure a counter reset scheduler and an Input/Output (IO) Cache in a Network Management System (NMS) to initiate a set of performance management (PM) data collection from a plurality of network nodes; an auditing unit to periodically audit the IO Cache to check a set of counter reset request entries; a retrieving unit to retrieve the set of counter reset request entries from the IO Cache based on the audit; and a processing unit configured to send a set of performance management (PM) data requests associated with the retrieved set of counter reset request entries to at least one network node of the plurality of network nodes for retrieval of missing set of PM data.
[00122] As is evident from the above, the present disclosure provides a
technically advanced solution for a method and a system for network performance management. By highlighting the mechanisms for tracking, validation, alarm generation, queuing, and data retrieval, the present solution ensures accurate and reliable collection of performance data in the NMS.
[00123] Further, in accordance with the present disclosure, it is to be
acknowledged that the functionality described for the various components/units can be implemented interchangeably. While specific embodiments may disclose a particular functionality of these units for clarity, it is recognized that various configurations and combinations thereof are within the scope of the disclosure. The functionality of specific units as disclosed herein should not be construed as limiting the scope of the present disclosure. Consequently, alternative arrangements and substitutions of units, provided they achieve the intended

functionality described herein, are considered to be encompassed within the scope of the present disclosure.
[00124] While considerable emphasis has been placed herein on the
disclosed implementations, it will be appreciated that other implementations are possible and that many changes can be made to the implementations described without departing from the principles of the present disclosure. These and other changes in the implementations of the present disclosure will be apparent to those skilled in the art, whereby it is to be understood that the foregoing descriptive matter is to be regarded as illustrative and non-limiting.

I/We Claim:
1. A method for network performance management, comprising:
configuring, by a configuring unit [302], a counter reset scheduler and an Input/Output (IO) Cache in a Network Management System (NMS) to initiate a set of performance management (PM) data collection from a plurality of network nodes;
periodically auditing, by an auditing unit [304], the IO Cache to check a set of counter reset request entries;
retrieving, by a retrieving unit [306], the set of counter reset request entries from the IO Cache based on the audit; and
sending, by a processing unit [308], a set of performance management (PM) data requests associated with the retrieved set of counter reset request entries to at least one network node of the plurality of network nodes for retrieval of missing set of PM data.
2. The method as claimed in claim 1, wherein the method comprises removing, by the processing unit [308], the set of counter reset request entries from the IO Cache for which the set of PM data is received during the audit.
3. The method as claimed in claim 1, wherein the method comprises, allowing, by the processing unit [308], the set of counter reset request entries to persist in the IO Cache until a predefined expiry time period is reached if the set of PM data is again not received, after subsequent audit, from the plurality of network nodes.
4. The method as claimed in claim 1, wherein the method comprises:
tracking, by the processing unit [308], each of the set of counter reset request entries by storing the set of counter reset request entries in the Input/Output (IO) Cache, wherein each of the set of counter reset request entries is associated with a unique flow-identifier (ID);

receiving, by the processing unit [308], the set of PM data from the plurality of network nodes using the unique flow-ID to correlate the received set of PM data with a corresponding counter reset request entry; and
removing, by the processing unit [308], a counter reset request entry of the set of counter reset request entries, from the IO Cache, for which the set of PM data is received.
5. The method as claimed in claim 1, wherein the method comprises raising, by the processing unit [308], an alarm if the missing set of PM data is detected based on the retrieved set of counter reset request entries.
6. The method as claimed in claim 3, wherein the method comprises validating, by the processing unit [308], time period of the set of counter reset request entries in the IO Cache, wherein the set of counter reset request entries that have exceeded the predefined expiry time period are removed from the IO Cache.
7. The method as claimed in claim 1, wherein the method comprises configuring, by the configuring unit [302], a set of parameters associated with the plurality of network nodes for auditing the set of counter reset requests.
8. The method as claimed in claim 7, wherein the set of parameters comprises at least one of a counter reset scheduler interval, an expiry time of counter reset requests, a batch size, and a batch interval.
9. A system [300] for network performance management, comprising:
a configuring unit [302] configured to configure a counter reset scheduler and an Input/Output (IO) Cache in a Network Management System (NMS) to initiate a set of performance management (PM) data collection from a plurality of network nodes;

an auditing unit [304] configured to periodically audit the IO Cache to check a set of counter reset request entries;
a retrieving unit [306] configured to retrieve the set of counter reset request entries from the IO Cache based on the audit; and
a processing unit [308] configured to send a set of performance management (PM) data requests associated with the retrieved set of counter reset request entries to at least one network node of the plurality of network nodes for retrieval of missing set of PM data.
10. The system [300] as claimed in claim 9, wherein the processing unit [308] is further configured to remove the set of counter reset request entries from the IO Cache for which the set of PM data is received during the audit.
11. The system [300] as claimed in claim 9, wherein the processing unit [308] is further configured to allow the set of counter reset request entries to persist in the IO Cache until a predefined expiry time period is reached if the set of PM data is again not received after subsequent audit, from the plurality of network nodes.
12. The system [300] as claimed in claim 9, wherein the processing unit [308] is further configured to:
track each of the set of counter reset request entries by storing the set of counter reset request entries in the Input/Output (IO) Cache, wherein each of the set of counter reset request entries is associated with a unique flow-identifier (ID);
receive the set of PM data from the plurality of network nodes using the unique flow-ID to correlate the received set of PM data with a corresponding counter reset request entry; and
remove a counter reset request entry of the set of counter reset request entries, from the IO Cache, for which the set of PM data is received.

13. The system [300] as claimed in claim 9, wherein the processing unit [308] is further configured to raise an alarm if the missing set of PM data is detected based on the retrieved set of counter reset request entries.
14. The system [300] as claimed in claim 11, wherein the processing unit [308] is further configured to validate time period of the set of counter reset request entries in the IO Cache, wherein the set of counter reset request entries that have exceeded the predefined expiry time period are removed from the IO Cache.
15. The system [300] as claimed in claim 9, wherein the configuring unit [302] is further configured to configure a set of parameters associated with the plurality of network nodes for auditing the set of counter reset requests.
16. The system [300] as claimed in claim 15, wherein the set of parameters comprises at least one of a counter reset scheduler interval, an expiry time of counter reset requests, a batch size, and a batch interval.

Documents

Application Documents

# Name Date
1 202321047308-STATEMENT OF UNDERTAKING (FORM 3) [13-07-2023(online)].pdf 2023-07-13
2 202321047308-PROVISIONAL SPECIFICATION [13-07-2023(online)].pdf 2023-07-13
3 202321047308-FORM 1 [13-07-2023(online)].pdf 2023-07-13
4 202321047308-FIGURE OF ABSTRACT [13-07-2023(online)].pdf 2023-07-13
5 202321047308-DRAWINGS [13-07-2023(online)].pdf 2023-07-13
6 202321047308-FORM-26 [14-09-2023(online)].pdf 2023-09-14
7 202321047308-Proof of Right [06-10-2023(online)].pdf 2023-10-06
8 202321047308-ORIGINAL UR 6(1A) FORM 1 & 26)-231023.pdf 2023-11-06
9 202321047308-FORM-5 [10-07-2024(online)].pdf 2024-07-10
10 202321047308-ENDORSEMENT BY INVENTORS [10-07-2024(online)].pdf 2024-07-10
11 202321047308-DRAWING [10-07-2024(online)].pdf 2024-07-10
12 202321047308-CORRESPONDENCE-OTHERS [10-07-2024(online)].pdf 2024-07-10
13 202321047308-COMPLETE SPECIFICATION [10-07-2024(online)].pdf 2024-07-10
14 202321047308-FORM 3 [01-08-2024(online)].pdf 2024-08-01
15 Abstract-1.jpg 2024-08-13
16 202321047308-Request Letter-Correspondence [14-08-2024(online)].pdf 2024-08-14
17 202321047308-Power of Attorney [14-08-2024(online)].pdf 2024-08-14
18 202321047308-Form 1 (Submitted on date of filing) [14-08-2024(online)].pdf 2024-08-14
19 202321047308-Covering Letter [14-08-2024(online)].pdf 2024-08-14
20 202321047308-CERTIFIED COPIES TRANSMISSION TO IB [14-08-2024(online)].pdf 2024-08-14