Abstract: The present disclosure relates to a method [400] and a system [300] for extracting latency metrics. The method comprises receiving, by a transceiver unit [301] from a node server [302], an event mapping. The method [400] further comprises determining, by a determination unit [303], one of a positive result and a negative result based on an analysis of the event mapping, wherein the positive result is determined in an event of a presence of a key in the received event mapping, and the negative result is determined in an event of an absence of a key in the received event mapping. Thereafter, the method [400] comprises performing, by a latency metrics module [304], one of: a new key procedure in an event of determination of the positive result, and an old key procedure in an event of determination of the negative result. [FIG. 3]
FORM 2
THE PATENTS ACT, 1970 (39 OF 1970) & THE PATENT RULES, 2003
COMPLETE SPECIFICATION
(See section 10 and rule 13)
“METHOD AND SYSTEM FOR EXTRACTING LATENCY METRICS”
We, Jio Platforms Limited, an Indian National, of Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India.
The following specification particularly describes the invention and the manner in which it is to be performed.
METHOD AND SYSTEM FOR EXTRACTING LATENCY METRICS
FIELD OF INVENTION
[0001] Embodiments of the present disclosure generally relate to network management systems. More particularly, embodiments of the present disclosure relate to a solution for extracting latency metrics in order to facilitate enhancing the network management process.
BACKGROUND
[0002] The following description of the related art is intended to provide background information pertaining to the field of the disclosure. This section may include certain aspects of the art that may be related to various features of the present disclosure. However, it should be appreciated that this section is used only to enhance the understanding of the reader with respect to the present disclosure, and not as admissions of the prior art.
[0003] Wireless communication technology has rapidly evolved over the past few decades, with each generation bringing significant improvements and advancements. The first generation of wireless communication technology was based on analog technology and offered only voice services. However, with the advent of the second-generation (2G) technology, digital communication and data services became possible, and text messaging was introduced. The third-generation (3G) technology marked the introduction of high-speed internet access, mobile video calling, and location-based services. The fourth-generation (4G) technology revolutionized wireless communication with faster data speeds, better network coverage, and improved security. Currently, the fifth-generation (5G) technology is being deployed, promising even faster data speeds, low latency, and the ability to connect multiple devices simultaneously. With each generation, wireless communication technology has become more advanced, sophisticated, and capable of delivering more services to its users.
[0004] Recent advancements in telecommunications have led to the development and deployment of 5G technology, which promises significantly lower latency, higher data transfer rates, and increased connectivity compared to its predecessors. Traditional latency metrics and technologies have become outdated with the advent of 5G, necessitating new approaches and innovations to fully leverage the potential of this cutting-edge technology. Latency metrics ensure efficient application operation by proactively managing latency issues. Latency metrics enable network operators to identify bottlenecks and areas of high latency within an application or network. Further, latency metrics can pinpoint specific modules or processes causing delays, thus helping the network operator take the necessary actions to optimize network performance. Monitoring latency metrics during peak hours helps to assess the impact of increased load on latency, thus providing information about the capacity of a server in the network. However, in conventional implementations of networks, latency metrics are not utilized or analyzed by network operators.
[0005] Thus, there exists an imperative need in the art for latency metrics extraction, which the present disclosure aims to address.
SUMMARY
[0006] This section is provided to introduce certain aspects of the present disclosure in a simplified form that are further described below in the detailed description. This summary is not intended to identify the key features or the scope of the claimed subject matter.
[0007] An aspect of the present disclosure may relate to a method for extracting latency metrics. The method comprises receiving, by a transceiver unit from a node server, an event mapping. The method further comprises determining, by a determination unit, one of a positive result and a negative result based on an analysis of the event mapping, wherein the positive result is determined in an event of a presence of a key in the received event mapping, and the negative result is determined in an event of an absence of a key in the received event mapping. The method further comprises performing, by a latency metrics module, one of: a new key procedure in an event of determination of the positive result, and an old key procedure in an event of determination of the negative result.
[0008] In an exemplary aspect of the present disclosure, the new key procedure comprises creating, by the latency metrics module, a new bucket array; incrementing, by the latency metrics module, a value of a corresponding slot index; and storing, by the latency metrics module in a storage unit, the new bucket array and an associated key.
[0009] In an exemplary aspect of the present disclosure, the creating a new bucket array comprises retrieving, by the transceiver unit from the node server, a set of data for creating the new bucket array; determining, by the latency metrics module, one or more latency values based on a current time and a request time from the set of data; and creating, by the latency metrics module, one or more slots based on the one or more latency values.
[0010] In an exemplary aspect of the present disclosure, the old key procedure comprises retrieving, by the transceiver unit, a bucket array related to an existing module key entry; and incrementing, by the latency metrics module, a value of a corresponding slot index.
[0011] In an exemplary aspect of the present disclosure, the method further comprises setting, by the latency metrics module, a bucket size parameter during runtime using an interface module.
[0012] In an exemplary aspect of the present disclosure, the method further comprises maintaining, by the node server, a mapping of one or more module names and a set of one or more values associated with one or more modules.
[0013] In an exemplary aspect of the present disclosure, the method further comprises sending, by the transceiver unit to a network management system (NMS), latency metrics data based on the performance of one of the new key procedure and the old key procedure.
[0014] In an exemplary aspect of the present disclosure, the method further comprises resetting, by the latency metrics module, the latency metrics at a predefined time interval.
[0015] Another aspect of the present disclosure may relate to a system for extracting latency metrics, the system comprising a transceiver unit configured to receive from a node server, an event mapping. The system further comprises a determination unit connected at least to the transceiver unit, wherein the determination unit is configured to determine one of a positive result and a negative result based on an analysis of the event mapping, wherein the positive result is determined in an event of a presence of a key in the received event mapping, and the negative result is determined in an event of an absence of a key in the received event mapping. The system further comprises a latency metrics module connected at least to the determination unit, wherein the latency metrics module is configured to perform one of: a new key procedure
in an event of determination of the positive result, and an old key procedure in an event of determination of the negative result.
[0016] Yet another aspect of the present disclosure may relate to a non-transitory computer readable storage medium storing instructions for extracting latency metrics, the instructions including executable code which, when executed by one or more units of a system, causes: a transceiver unit of the system to receive from a node server, an event mapping. Further, the instructions include executable code which, when executed by one or more units of a system, causes a determination unit of the system to determine one of a positive result and a negative result based on an analysis of the event mapping, wherein the positive result is determined in an event of a presence of a key in the received event mapping, and the negative result is determined in an event of an absence of a key in the received event mapping. Further, the instructions include executable code which, when executed by one or more units of a system, causes a latency metrics module of the system to perform one of: a new key procedure in an event of determination of the positive result, and an old key procedure in an event of determination of the negative result.
OBJECTS OF THE INVENTION
[0017] Some of the objects of the present disclosure, which at least one embodiment disclosed herein satisfies, are listed herein below.
[0018] It is an object of the present disclosure to provide a system and a method for latency metrics extraction.
[0019] It is another object of the present disclosure to provide a solution that provides capacity estimation and scalability assessment.
[0020] It is yet another object of the present disclosure to provide a solution that enables root cause analysis, wherein use of latency metrics along with application logs helps to identify underlying cause of latency issue.
[0021] It is yet another object of the present disclosure to provide a solution that provides performance insights.
DESCRIPTION OF THE DRAWINGS
[0022] The accompanying drawings, which are incorporated herein, and constitute a part of this disclosure, illustrate exemplary embodiments of the disclosed methods and systems in which like reference numerals refer to the same parts throughout the different drawings. Components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present disclosure. Also, the embodiments shown in the figures are not to be construed as limiting the disclosure, but the possible variants of the method and system according to the disclosure are illustrated herein to highlight the advantages of the disclosure. It will be appreciated by those skilled in the art that disclosure of such drawings includes disclosure of electrical components or circuitry commonly used to implement such components.
[0023] FIG. 1 illustrates an exemplary block diagram representation of 5th generation core (5GC) network architecture.
[0024] FIG. 2 illustrates an exemplary block diagram of a computing device upon which the features of the present disclosure may be implemented in accordance with exemplary implementation of the present disclosure.
[0025] FIG. 3 illustrates an exemplary block diagram of a system for extracting latency metrics, in accordance with exemplary implementations of the present disclosure.
[0026] FIG. 4 illustrates an exemplary flow diagram of a method for extracting latency metrics, in accordance with exemplary implementations of the present disclosure.
[0027] FIG. 5 illustrates an exemplary signaling flow diagram for extracting latency metrics, in accordance with exemplary embodiments of the present disclosure.
[0028] The foregoing shall be more apparent from the following more detailed description of the disclosure.
DETAILED DESCRIPTION
[0029] In the following description, for the purposes of explanation, various specific details are set forth in order to provide a thorough understanding of embodiments of the present disclosure. It will be apparent, however, that embodiments of the present disclosure may be practiced without these specific details. Several features described hereafter may each be used independently of one another or with any combination of other features. An individual feature may not address any of the problems discussed above or might address only some of the problems discussed above.
[0030] The ensuing description provides exemplary embodiments only, and is not intended to limit the scope, applicability, or configuration of the disclosure. Rather, the ensuing description of the exemplary embodiments will provide those skilled in the art with an enabling description for implementing an exemplary embodiment. It should be understood that various changes may be made in the function and arrangement of elements without departing from the spirit and scope of the disclosure as set forth.
[0031] Specific details are given in the following description to provide a thorough understanding of the embodiments. However, it will be understood by one of ordinary skill in the art that the embodiments may be practiced without these specific details. For example, circuits, systems, processes, and other components may be shown as components in block diagram form in order not to obscure the embodiments in unnecessary detail.
[0032] Also, it is noted that individual embodiments may be described as a process which is depicted as a flowchart, a flow diagram, a data flow diagram, a structure diagram, or a block diagram. Although a flowchart may describe the operations as a sequential process, many of the operations may be performed in parallel or concurrently. In addition, the order of the operations may be re-arranged. A process is terminated when its operations are completed but could have additional steps not included in a figure.
[0033] The word “exemplary” and/or “demonstrative” is used herein to mean serving as an example, instance, or illustration. For the avoidance of doubt, the subject matter disclosed herein is not limited by such examples. In addition, any aspect or design described herein as “exemplary” and/or “demonstrative” is not necessarily to be construed as preferred or advantageous over other aspects or designs, nor is it meant to preclude equivalent exemplary structures and techniques known to those of ordinary skill in the art. Furthermore, to the extent that the terms “includes”, “has”, “contains” and other similar words are used in either the detailed description or the claims, such terms are intended to be inclusive in a manner similar to the term “comprising” as an open transition word without precluding any additional or other elements.
[0034] As used herein, a “processing unit” or “processor” or “operating processor” includes one or more processors, wherein a processor refers to any logic circuitry for processing instructions. A processor may be a general-purpose processor, a special purpose processor, a conventional processor, a digital signal processor, a plurality of microprocessors, one or more microprocessors in association with a Digital Signal Processing (DSP) core, a controller, a microcontroller, Application Specific Integrated Circuits, Field Programmable Gate Array circuits, any other type of integrated circuits, etc. The processor may perform signal coding, data processing, input/output processing, and/or any other functionality that enables the working of the system according to the present disclosure. More specifically, the processor or processing unit is a hardware processor.
[0035] As used herein, “a user equipment”, “a user device”, “a smart-user-device”, “a smart-device”, “an electronic device”, “a mobile device”, “a handheld device”, “a wireless communication device”, “a mobile communication device”, “a communication device” may be any electrical, electronic and/or computing device or equipment, capable of implementing the features of the present disclosure. The user equipment/device may include, but is not limited to, a mobile phone, smart phone, laptop, a general-purpose computer, desktop, personal digital assistant, tablet computer, wearable device or any other computing device which is capable of implementing the features of the present disclosure. Also, the user device may contain at least one input means configured to receive an input from at least one of a transceiver unit, a processing unit, a storage unit, a detection unit and any other such unit(s) which are required to implement the features of the present disclosure.
[0036] As used herein, “storage unit” or “memory unit” refers to a machine or computer-readable medium including any mechanism for storing information in a form readable by a computer or similar machine. For example, a computer-readable medium includes read-only memory (“ROM”), random access memory (“RAM”), magnetic disk storage media, optical storage media, flash memory devices or other types of machine-accessible storage media. The storage unit stores at least the data that may be required by one or more units of the system to perform their respective functions.
[0037] As used herein, “interface” or “user interface” refers to a shared boundary across which two or more separate components of a system exchange information or data. The interface may also refer to a set of rules or protocols that define the communication or interaction of one or more modules or one or more units with each other, which also includes the methods, functions, or procedures that may be called.
[0038] Latency metrics signify the time duration for which the peer end has to wait for a response after sending a request. Slots are categories used to differentiate all counters in latency metrics. In an exemplary embodiment, each slot has a bucket size of a pre-defined time interval, such as 1000 ms. A node server is a server in a network implementing a specific set of instructions. A node server can have one or more interfaces for communicating with other node servers in the network.
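By way of a non-limiting illustration, the slot into which a latency measurement falls may be derived from the bucket size by integer division. The sketch below assumes the exemplary 1000 ms bucket size; the function name and constant are illustrative and not part of the disclosure.

```python
# Illustrative sketch only: mapping a latency value to a slot index,
# assuming each slot covers a pre-defined bucket of 1000 ms.
BUCKET_SIZE_MS = 1000  # assumed bucket size per slot

def slot_index(latency_ms: int) -> int:
    """Return the slot whose counter corresponds to this latency value."""
    return latency_ms // BUCKET_SIZE_MS

# A latency of 2350 ms falls in slot 2, i.e. the 2000-2999 ms bucket.
print(slot_index(2350))
```

Under this assumption, every counter in slot 0 represents responses within the first 1000 ms, slot 1 the next 1000 ms, and so on.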
[0039] All modules, units, components used herein, unless explicitly excluded herein, may be software modules or hardware processors, the processors being a general-purpose processor, a special purpose processor, a conventional processor, a digital signal processor (DSP), a plurality of microprocessors, one or more microprocessors in association with a DSP core, a controller, a microcontroller, Application Specific Integrated Circuits (ASIC), Field Programmable Gate Array circuits (FPGA), any other type of integrated circuits, etc.
[0040] As used herein, the transceiver unit includes at least one receiver and at least one transmitter configured respectively for receiving and transmitting data, signals, information or a combination thereof between units/components within the system and/or connected with the system.
[0041] The present invention is a new technology that works directly with 5G nodes to improve 5G network performance. Unlike existing technologies that depend on existing latency measurements, the present invention is specifically designed for the unique needs of 5G networks. It integrates smoothly with 5G nodes to process data and communicate in real-time, ensuring the network runs efficiently and effectively. It uses advanced techniques to reduce latency and improve user experience. The present invention is also scalable, meaning it can handle more devices and higher data demands as they grow. Additionally, it includes strong security measures to protect against cyber threats, ensuring the network remains reliable and secure.
[0042] The present invention provides a method to manage and analyse latency metrics. It starts by receiving detailed event mappings from node servers through a transceiver unit. These event mappings allow the system to monitor and respond to specific occurrences effectively. A determination unit analyses these event mappings to see if specific keys are present or absent. The method helps to identify areas with high latency and the specific parts or processes causing the delays. Based on whether these keys are found, the method performs different actions to address the issues. If a positive result (new key) is found, the method creates new data structures, like bucket arrays, to organize and manage latency data. If a negative result (old key) is found, it retrieves and updates existing data structures. This flexible approach ensures the network can adapt to changing conditions and maintain good performance.
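The two branches described above can be summarized in a minimal sketch. This is an illustration under stated assumptions, not the claimed implementation: the names (latency_store, record_event), the fixed slot count, and the modelling of a bucket array as a list of counters keyed by module name are all assumptions, and the disclosure's "new key" and "old key" procedures map onto the two branches of the key check.

```python
# Minimal sketch of the key check described above (all names are assumptions).
BUCKET_SIZE_MS = 1000   # assumed per-slot bucket size
SLOT_COUNT = 10         # assumed number of slots per bucket array

latency_store: dict[str, list[int]] = {}  # module key -> bucket array

def record_event(module_key: str, request_time_ms: int, current_time_ms: int) -> None:
    """Determine whether the key already has a bucket array, then count the latency."""
    latency_ms = current_time_ms - request_time_ms
    slot = min(latency_ms // BUCKET_SIZE_MS, SLOT_COUNT - 1)
    if module_key not in latency_store:
        # One branch: the key has no entry yet, so a new bucket array
        # is created and stored against the key.
        latency_store[module_key] = [0] * SLOT_COUNT
    # Other branch (and the tail of the first): the counter at the
    # corresponding slot index is incremented.
    latency_store[module_key][slot] += 1
```

For example, record_event("SMF", 1000, 3350) would create a bucket array for the hypothetical key "SMF" on first use and increment the counter for the 2000-2999 ms slot.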
[0043] Additionally, the present invention allows for dynamic adjustments, such as changing bucket sizes during operation. This helps the system respond to different network loads in real time. It also keeps a detailed mapping of module names and their values, regularly sending latency data to a network management server for continuous monitoring and optimization. The system also resets latency metrics at set intervals to keep the data accurate and relevant.
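These dynamic behaviours, a bucket size adjustable at runtime and a periodic reset of the collected metrics, might be sketched as follows. The class and method names are assumptions for illustration; the disclosure does not prescribe this structure, and the scheduling of the periodic reset is left to a caller.

```python
import threading

class LatencyMetrics:
    """Illustrative container for latency counters (names are assumptions)."""

    def __init__(self, bucket_size_ms: int = 1000, slot_count: int = 10):
        self.bucket_size_ms = bucket_size_ms
        self.slot_count = slot_count
        self.store: dict[str, list[int]] = {}  # module key -> bucket array
        self._lock = threading.Lock()

    def set_bucket_size(self, bucket_size_ms: int) -> None:
        # Adjust the bucket size parameter during runtime (e.g. via an
        # interface module) to respond to changing network loads.
        with self._lock:
            self.bucket_size_ms = bucket_size_ms

    def record(self, module_key: str, latency_ms: int) -> None:
        # Count one latency sample against the slot it falls into.
        with self._lock:
            slot = min(latency_ms // self.bucket_size_ms, self.slot_count - 1)
            buckets = self.store.setdefault(module_key, [0] * self.slot_count)
            buckets[slot] += 1

    def reset(self) -> None:
        # Clear the metrics; a scheduler would invoke this at the
        # predefined time interval so the data stays current.
        with self._lock:
            self.store.clear()
```

A lock is used here on the assumption that events may be recorded concurrently while the bucket size is being adjusted or the metrics are being reset.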
[0044] FIG. 1 illustrates an exemplary block diagram representation of 5th generation core (5GC) network architecture, in accordance with exemplary implementation of the present disclosure. As shown in FIG. 1, the 5GC network architecture [100] includes a user equipment (UE) [102], a radio access network (RAN) [104], an access and mobility management function (AMF) [106], a Session Management Function (SMF) [108], a Service Communication Proxy (SCP) [110], an Authentication Server Function (AUSF) [112], a Network Slice Specific Authentication and Authorization Function (NSSAAF) [114], a Network Slice Selection Function (NSSF) [116], a Network Exposure Function (NEF) [118], a Network Repository Function (NRF) [120], a Policy Control Function (PCF) [122], a Unified Data Management (UDM) [124], an application function (AF) [126], a User Plane Function (UPF) [128], and a data network (DN) [130], wherein all the components are assumed to be connected to each other in a manner as obvious to the person skilled in the art for implementing features of the present disclosure.
[0045] Radio Access Network (RAN) [104] is the part of a mobile telecommunications system that connects user equipment (UE) [102] to the core network (CN) and provides access to different types of networks (e.g., 5G network). It consists of radio base stations and the radio access technologies that enable wireless communication.
[0046] Access and Mobility Management Function (AMF) [106] is a 5G core network function responsible for managing access and mobility aspects, such as UE registration, connection, and reachability. It also handles mobility management procedures like handovers and paging.
[0047] Session Management Function (SMF) [108] is a 5G core network function responsible for managing session-related aspects, such as establishing, modifying, and releasing sessions. It coordinates with the User Plane Function (UPF) for data forwarding and handles IP address allocation and QoS enforcement.
[0048] Service Communication Proxy (SCP) [110] is a network function in the 5G core network that facilitates communication between other network functions by providing a secure and efficient messaging service. It acts as a mediator for service-based interfaces.
[0049] Authentication Server Function (AUSF) [112] is a network function in the 5G core responsible for authenticating UEs during registration and providing security services. It generates and verifies authentication vectors and tokens.
[0050] Network Slice Specific Authentication and Authorization Function (NSSAAF) [114] is a network function that provides authentication and authorization services specific to network slices. It ensures that UEs can access only the slices for which they are authorized.
[0051] Network Slice Selection Function (NSSF) [116] is a network function responsible for selecting the appropriate network slice for a UE based on factors such as subscription, requested services, and network policies.
[0052] Network Exposure Function (NEF) [118] is a network function that exposes capabilities and services of the 5G network to external applications, enabling integration with third-party services and applications.
[0053] Network Repository Function (NRF) [120] is a network function that acts as a central repository for information about available network functions and services. It facilitates the discovery and dynamic registration of network functions.
[0054] Policy Control Function (PCF) [122] is a network function responsible for policy control decisions, such as QoS, charging, and access control, based on subscriber information and network policies.
[0055] Unified Data Management (UDM) [124] is a network function that centralizes the management of subscriber data, including authentication, authorization, and subscription information.
[0056] Application Function (AF) [126] is a network function that represents external applications interfacing with the 5G core network to access network capabilities and services.
[0057] User Plane Function (UPF) [128] is a network function responsible for handling user data traffic, including packet routing, forwarding, and QoS enforcement.
[0058] Data Network (DN) [130] refers to a network that provides data services to user equipment (UE) in a telecommunications system. The data services may include, but are not limited to, Internet services and private data network related services.
[0059] FIG. 2 illustrates an exemplary block diagram of a computing device [200] upon which the features of the present disclosure may be implemented in accordance with exemplary implementation of the present disclosure. In an implementation, the computing device [200] may also implement a method for extracting latency metrics, utilising the system. In another implementation, the computing device [200] itself implements the method for extracting the latency metrics using one or more units configured within the computing device [200], wherein said one or more units are capable of implementing the features as disclosed in the present disclosure.
[0060] The computing device [200] may include a bus [202] or other communication mechanism for communicating information, and the processor [204] coupled with the bus [202] for processing information. The processor [204] may be, for example, a general-purpose microprocessor. The computing device [200] may also include a main memory [206], such as a random-access memory (RAM), or other dynamic storage device, coupled to the bus [202] for storing information and instructions to be executed by the processor [204]. The main memory [206] also may be used for storing temporary variables or other intermediate information during execution of the instructions to be executed by the processor [204]. Such instructions, when stored in non-transitory storage media accessible to the processor [204], render the computing device [200] into a special-purpose machine that is customized to perform the operations specified in the instructions. The computing device [200] further includes a read only memory (ROM) [208] or other static storage device coupled to the bus [202] for storing static information and instructions for the processor [204].
[0061] A storage device [210], such as a magnetic disk, optical disk, or solid-state drive is provided and coupled to the bus [202] for storing information and instructions. The computing device [200] may be coupled via the bus [202] to a display [212], such as a cathode ray tube (CRT), Liquid Crystal Display (LCD), Light Emitting Diode (LED) display, Organic LED (OLED) display, etc. for displaying information to a computer user. An input device [214], including alphanumeric and other keys, touch screen input means, etc. may be coupled to the bus [202] for communicating information and command selections to the processor [204]. Another type of user input device may be a cursor controller [216], such as a mouse, a trackball, or cursor direction keys, for communicating direction information and command selections to the processor [204], and for controlling cursor movement on the display [212]. This input device typically has two degrees of freedom in two axes, a first axis (e.g., x) and a second axis (e.g., y), that allow the device to specify positions in a plane.
[0062] The computing device [200] may implement the techniques described herein using customized hard-wired logic, one or more ASICs or FPGAs, firmware and/or program logic which in combination with the computing device [200] causes or programs the computing device [200] to be a special-purpose machine. According to one implementation, the techniques herein are performed by the computing device [200] in response to the processor [204] executing one or more sequences of one or more instructions contained in the main memory [206]. Such instructions may be read into the main memory [206] from another storage medium, such as the storage device [210]. Execution of the sequences of instructions contained in the main memory [206] causes the processor [204] to perform the process steps described herein. In alternative implementations of the present disclosure, hard-wired circuitry may be used in place of or in combination with software instructions.
[0063] The computing device [200] also may include a communication interface [218] coupled to the bus [202]. The communication interface [218] provides a two-way data communication coupling to a network link [220] that is connected to a local network [222]. For example, the communication interface [218] may be an integrated services digital network (ISDN) card, cable modem, satellite modem, or a modem to provide a data communication connection to a corresponding type of telephone line. As another example, the communication interface [218] may be a local area network (LAN) card to provide a data communication connection to a compatible LAN. Wireless links may also be implemented. In any such implementation, the communication interface [218] sends and receives electrical, electromagnetic or optical signals that carry digital data streams representing various types of information.
[0064] The computing device [200] can send messages and receive data, including program code, through the network(s), the network link [220] and the communication interface [218]. In the Internet example, a server [230] might transmit a requested code for an application program through the Internet [228], the ISP [226], the local network [222], the host [224] and the communication interface [218]. The received code may be executed by the processor [204] as it is received, and/or stored in the storage device [210], or other non-volatile storage for later execution.
[0065] Referring to FIG. 3, an exemplary block diagram of a system [300] for extracting latency metrics is shown, in accordance with the exemplary implementations of the present disclosure. The system [300] comprises at least one transceiver unit [301], at least one determination unit [303], at least one node server [302], at least one interface module [306] and at least one latency metrics module [304]. Further, in an implementation of the present disclosure, the system [300] may be connected to a Network Management System (NMS) [307]. Also, all of the components/units of the system [300] are assumed to be connected to each other unless otherwise indicated below. As shown in the figures, all units shown within the system should also be assumed to be connected to each other. Also, in FIG. 3 only a few units are shown; however, the system [300] may comprise multiple such units or the system [300] may comprise any such numbers of said units, as required to implement the features of the present disclosure. Further, in an implementation, the system [300] may be present in a user device to implement the features of the present disclosure. The system [300] may be a part of the user device or may be independent of but in communication with the user device (also referred to herein as a UE). In another implementation, the system [300] may reside in a server or a network entity. In yet another implementation, the system [300] may reside partly in the server/network entity and partly in the user device.
[0066] The system [300] is configured for extracting the latency metrics, with the help of the
interconnection between the components/units of the system [300].
[0067] Further, in accordance with the present disclosure, it is to be acknowledged that the functionality described for the various components/units can be implemented interchangeably. While specific embodiments may disclose a particular functionality of these units for clarity, it is recognized that various configurations and combinations thereof are within the scope of the disclosure. The functionality of specific units as disclosed in the disclosure should not be construed as limiting the scope of the present disclosure. Consequently, alternative arrangements and substitutions of units, provided they achieve the intended functionality described herein, are considered to be encompassed within the scope of the present disclosure.
[0068] The system [300] is configured to extract the latency metrics, with the help of the interconnection between the components/units of the system [300].
[0069] Further, for extracting latency metrics, the transceiver unit [301] is configured to receive from a node server [302], an event mapping.
[0070] As used herein, event mapping refers to the association of specific events or triggers within a telecommunications network with their corresponding latency measurements. The term latency metrics refers to the data and measurements that quantify the time delays and performance characteristics of events or data packets within the system.
[0071] The node server [302] is configured to maintain a mapping of one or more module
names and a set of one or more values associated with one or more modules. The node server
[302] is tasked with maintaining the mapping of one or more module names along with a set
of values associated with these modules, wherein the set of one or more values refers to a
collection of data points or parameters that are linked to specific modules within the system.
Each module in the system can have one or more associated values that represent various metrics or attributes relevant to the module’s performance or state.
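The mapping maintained by the node server [302] may be sketched, for illustration only, as a plain dictionary of module names to their associated values; the module names and values below are hypothetical examples, not taken from the disclosure:

```python
# Hypothetical shape of the node server [302] mapping: each module name is
# associated with a set of one or more values (metrics or attributes).
module_mapping = {
    "diameter_interface": {"request_time": 1720000000.120, "status": "active"},
    "radius_interface":   {"request_time": 1720000000.480, "status": "active"},
}
```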
[0072] The transceiver unit [301] is further configured to send to the network management system (NMS) [307], a latency metrics data based on the performance of one of the new key procedure and the old key procedure. Further, in an implementation of the present disclosure as disclosed herein, the latency metrics module [304] is further configured to reset the latency metrics at a predefined time interval. The NMS [307] is a centralized system responsible for monitoring and managing various aspects of the network infrastructure. The metrics data is based on the performance of either the new key procedure or the old key procedure. The latency metrics module [304] is configured to reset the latency metrics at the predefined time interval at which the latency metrics are reset and/or recalculated. The interval could be defined in terms of seconds, minutes, hours, or any other suitable unit of time depending on the system's requirements, ensuring that the system can provide updated and accurate metrics over time.
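The snapshot-and-reset behaviour described above can be sketched as follows. This is a minimal illustration, not the disclosed implementation: the class name, the `publish` callback standing in for the hand-off to the NMS [307], and the default interval are all assumptions.

```python
import threading
from collections import defaultdict

class LatencyMetricsStore:
    """Holds per-key bucket arrays and resets them at a predefined interval.

    `publish` is a hypothetical callback standing in for sending the
    snapshot to the NMS [307]; a scheduler would call flush() once per
    collection interval.
    """

    def __init__(self, num_slots=11, interval_s=60.0, publish=None):
        self.num_slots = num_slots
        self.interval_s = interval_s              # predefined collection interval
        self.publish = publish or (lambda snapshot: None)
        self._lock = threading.Lock()
        self._buckets = defaultdict(lambda: [0] * num_slots)

    def increment(self, key, slot_index):
        with self._lock:
            self._buckets[key][slot_index] += 1

    def flush(self):
        """Snapshot the counters, reset them, and publish to the NMS."""
        with self._lock:
            snapshot = {k: list(v) for k, v in self._buckets.items()}
            for v in self._buckets.values():
                v[:] = [0] * self.num_slots       # reset at the interval
        self.publish(snapshot)
        return snapshot
```

In use, a timer or scheduler would invoke `flush()` every `interval_s` seconds, so the NMS always receives counters accumulated over one collection window.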
[0073] Further, in an implementation of the present solution, for the performance of the old key procedure by the latency metrics module [304], the transceiver unit [301] is further configured to retrieve a bucket array related to an existing module key entry. Further, for the performance of the old key procedure, the latency metrics module [304] is further configured to increment a value of a corresponding slot index. Further, in an implementation of the present disclosure as disclosed herein, the latency metrics module [304] is configured to set a bucket size parameter during runtime using an interface module [306]. During the performance of the old key procedure, the latency metrics module [304] increments the value of the corresponding slot index, which may be based on data retrieved by the transceiver unit [301] related to the existing module key entry. The latency metrics module [304] is also configured to set the bucket size parameter during runtime, where the bucket size in the latency metrics module [304] can be changed through a CLI command at runtime of the process. The latency metrics module
[304] is configured to perform key functions: retrieving and updating latency data using module key entries, dynamically adjusting bucket size parameters during runtime via an interface module [306], and organizing latency metrics into structured bucket arrays. The existing module key entry is a key that the system uses to identify and access previously stored data related to a specific module. When the system performs the old key procedure, it retrieves the bucket array associated with this existing module key entry to update the latency metrics. The incrementing of the value of the corresponding slot index is performed in an event the value falls under an existing module key entry; further, to increment the value, the solution as disclosed herein retrieves the corresponding bucket array and increments the value of the appropriate slot index. The bucket array is a data structure used to store and categorize latency metrics. It consists of multiple slots, each representing a range of latency values, and the slot index is the position or identifier for each slot within the bucket array. The time intervals/slots can be as follows:
Slot 0 for 0 to 1000ms,
Slot 1 for 1001 to 2000ms,
Slot 2 for 2001 to 3000ms,
Slot 3 for 3001 to 4000 ms,
Slot 4 for 4001 to 5000 ms,
Slot 5 for 5001 to 6000 ms,
Slot 6 for 6001 to 7000 ms,
Slot 7 for 7001 to 8000 ms,
Slot 8 for 8001 to 9000 ms,
Slot 9 for 9001 to 10000 ms,
Slot 10 for more than 10000 ms.
[0074] Each slot may have a bucket size of 1000 ms as default. It will be appreciated by those skilled in the art that the bucket size may be configured based on user requirements. It is noted that the bucket size represents the size of the time intervals in milliseconds for which latency metrics are collected. The bucket size parameter is a configurable setting that determines the capacity of each bucket within the bucket array. This parameter is set by the latency metrics module during runtime using an interface module.
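The slot layout tabulated above (slot 0 for 0 to 1000 ms, slot 10 as the overflow slot) can be sketched as a small mapping function. This is an illustrative reading of the table, assuming integer millisecond latencies and the default 1000 ms bucket size; the function name is an assumption:

```python
def slot_index(latency_ms, bucket_size_ms=1000, num_slots=11):
    """Map a latency value (in ms) to its slot in the bucket array.

    Slot 0 covers 0 to bucket_size_ms, slot 1 the next range, and the
    last slot collects everything beyond the final boundary (the
    "more than 10000 ms" overflow slot in the table above).
    bucket_size_ms stands for the runtime-configurable bucket size
    parameter set via the interface module [306].
    """
    if latency_ms <= 0:
        return 0
    index = (latency_ms - 1) // bucket_size_ms   # 1000 -> slot 0, 1001 -> slot 1
    return min(index, num_slots - 1)             # clamp into the overflow slot
```

Because the boundaries in the table are inclusive (1001 to 2000 ms is slot 1), the function subtracts one before dividing, so 1000 ms still lands in slot 0.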
[0075] Each slot corresponds to a specific range of latency values. For example, if the latency values range from 0 to 100 milliseconds, the system may define slots in increments of, for example, 10 milliseconds. This means there would be slots representing latency ranges like 0-10 ms, 10-20 ms, and so on.
[0076] Further, a determination unit [303] connected at least to the transceiver unit [301], is
configured to determine one of a positive result and a negative result based on an analysis of
the event mapping, wherein the positive result is determined in an event of a presence of a key
in the received event mapping, and the negative result is determined in an event of an absence
of a key in the received event mapping.
[0077] The key as used herein refers to a mapping of an interface/module in the network infrastructure. The present disclosure encompasses that the determination unit [303] is designed to analyse the event mapping received by the transceiver unit [301] and make a decision based on this analysis. Specifically, it evaluates whether a particular key is present or absent within the received event mapping. The absence of a key indicates that the received event mapping does not contain the specific key that the system is looking for. If the key is present, the determination unit [303] yields the positive result; and, if the key is absent, it produces the negative result. This process enables the system to distinguish between events where certain keys are included and those where they are not, providing valuable insights into the composition and characteristics of the received event mappings. The event refers to any occurrence that is mapped and transmitted by the node server to the transceiver unit. This may include data requests, network communications, system updates, or any other activity that the system needs to track and respond to.
[0078] Further, a latency metrics module [304] connected at least to the determination unit
[303], the latency metrics module [304] configured to perform one of a new key procedure in
an event of determination of the positive result, and an old key procedure in an event of
determination of the negative result. In cases where the determination yields the positive result,
indicating the presence of the key within the event mapping, the latency metrics module [304]
initiates a new key procedure. The new key procedure is initiated when the system detects the presence of a key in the received event mapping. The system creates a new bucket array,
increments the value of the corresponding slot index, and stores the new bucket array along
with the associated key in the storage unit. This process essentially sets up a new data structure
to handle future events related to the new key. On the other hand, the old key procedure is used
when the key is absent in the event mapping. The system retrieves an existing bucket array
5 associated with the key and increments the value of the corresponding slot index. This means
the old key procedure updates an existing data structure rather than creating a new one.
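The old key procedure described above (retrieve the stored bucket array, then increment the matching slot) can be sketched as below. The dictionary `storage` is a stand-in for the storage unit [305], and the function name is an assumption:

```python
def old_key_procedure(storage, key, slot_idx):
    """Sketch of the old key procedure: retrieve the bucket array stored
    under an existing module key entry and increment the slot matching
    the observed latency. `storage` stands in for the storage unit [305]."""
    bucket = storage[key]       # retrieve the existing bucket array
    bucket[slot_idx] += 1       # increment the corresponding slot index
    return bucket
```

Unlike the new key procedure, no new data structure is allocated; the existing array is updated in place.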
[0079] The latency metrics module [304] for performing the new key procedure, is configured to create a new bucket array. The latency metrics module [304] for performing the new key procedure, is further configured to increment a value of the corresponding slot index. Thereafter, the latency metrics module [304] for performing the new key procedure, is configured to store, in a storage unit [305], the new bucket array and an associated key. An associated key refers to a unique identifier used to track and manage data within the system. The incrementing refers to the process of increasing the value of a corresponding slot index within the bucket array. This is done by the latency metrics module, which adds to the count or value of the slot index each time a relevant event is processed.
[0080] Further, for creation of the new bucket array by the latency metrics module [304], the transceiver unit [301] is further configured to retrieve, from the node server [302], a set of data for creating the new bucket array. Further, for creation of the new bucket array, the latency metrics module [304] is further configured to determine one or more latency values based on a current time and a request time from the set of data. Furthermore, for creation of the new bucket array, the latency metrics module [304] is configured to create one or more slots based on the one or more latency values. The latency value is the duration between the timestamp at which the request is received, and the current time.
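The new key procedure, including derivation of the latency value from the request time and the current time, can be sketched as follows. This is a simplified illustration: the function name and parameters are assumptions, `storage` stands in for the storage unit [305], and timestamps are taken as seconds since the epoch:

```python
import time

def new_key_procedure(storage, key, request_time,
                      num_slots=11, bucket_size_ms=1000, now=None):
    """Sketch of the new key procedure: create a new bucket array, derive
    the latency value from the current time and the request time,
    increment the corresponding slot, and store the array under its
    associated key in `storage` (a stand-in for the storage unit [305])."""
    current = time.time() if now is None else now
    latency_ms = int((current - request_time) * 1000)   # latency value
    bucket = [0] * num_slots                            # new bucket array
    index = min(max(latency_ms - 1, 0) // bucket_size_ms, num_slots - 1)
    bucket[index] += 1                                  # increment slot index
    storage[key] = bucket                               # store with the key
    return index
```

The `now` parameter is only there so the sketch can be exercised deterministically; a live system would use the wall clock.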
[0081] In other words, the process of creating the new bucket array involves steps facilitated by both the transceiver unit [301] and the latency metrics module [304]. The transceiver unit [301] retrieves the set of data required for generating the new bucket array from the node server [302]. Later, the latency metrics module [304] analyses this data to determine one or more latency values, utilizing both the current time and the request time provided within the data set. The latency value is the duration between a timestamp at which a request is sent, and a timestamp at
which a response is received. The set of data refers to a collection of information retrieved from
the node server [302] by the transceiver unit [301] to facilitate the creation of a new bucket
array within the latency metrics module [304]. This set of data typically includes relevant
parameters necessary for configuring the new bucket array, such as timestamps indicating the
current time and the request time associated with the received event mapping. Based on these
latency values, the module then proceeds to create one or more slots within the new bucket array, ensuring that the data is organized effectively to accurately capture and analyse latency metrics.
[0082] Referring to FIG. 4, an exemplary flow diagram of a method [400] for extracting
latency metrics, in accordance with exemplary implementations of the present disclosure is shown. In an implementation the method [400] is performed by the system [300]. Further, in an implementation, the system [300] may be present in a server device to implement the features of the present disclosure. Also, as shown in FIG. 4, the method [400] starts at step
[402].
[0083] At step 404, the method [400] comprises receiving, by a transceiver unit [301] from a node server [302], an event mapping. As used herein, event mapping refers to the association of specific events or triggers within a telecommunications network with their corresponding latency measurements. The term latency metrics refers to the data and measurements that quantify the time delays and performance characteristics of events or data packets within the system. Further, in an implementation of the present disclosure as disclosed herein, the method comprises maintaining, by the node server [302], a mapping of one or more module names and a set of one or more values associated with one or more modules. The node server [302] is tasked with maintaining the mapping of one or more module names along with a set of values associated with these modules, wherein the set of one or more values refers to a collection of data points or parameters that are linked to specific modules within the system. Each module in the system can have one or more associated values that represent various metrics or attributes relevant to the module's performance or state. The method further comprises sending, by the transceiver unit [301] to a network management system (NMS) [307], a latency metrics data based on the performance of the one of the new key procedure and the old key procedure. Further, the method [400] as disclosed by the present disclosure further comprises resetting, by the latency metrics module [304], the latency metrics at a predefined time interval. The NMS
[307] is a centralized system responsible for monitoring and managing various aspects of the network infrastructure. The metrics data is based on the performance of either the new key procedure or the old key procedure. The latency metrics module [304] is configured to reset the latency metrics at the predefined time intervals, at which the latency metrics are reset and/or recalculated. The interval could be defined in terms of seconds, minutes, hours, or any other suitable unit of time depending on the system's requirements, ensuring that the system can provide updated and accurate metrics over time.
[0084] Further, in an implementation of the present solution, the old key procedure comprises retrieving, by the transceiver unit [301], a bucket array related to an existing module key entry. Further, the old key procedure further comprises incrementing, by the latency metrics module [304], a value of a corresponding slot index. Further, the method as disclosed by the present disclosure further comprises setting, by the latency metrics module [304], a bucket size parameter during runtime using an interface module [306]. During the performance of the old key procedure, the latency metrics module [304] increments the value of the corresponding slot index, which may be based on data retrieved by the transceiver unit [301] related to the existing module key entry. The latency metrics module [304] is also configured to set the bucket size parameter during runtime, and the bucket size in the latency metrics module [304] can be changed through a CLI command at runtime of the process. The existing module key entry is a key that the system uses to identify and access previously stored data related to a specific module. When the method [400] performs the old key procedure, it retrieves the bucket array associated with this existing module key entry to update the latency metrics. The incrementing of the value of the corresponding slot index is performed in an event the value falls under an existing module key entry; further, to increment the value, the solution as disclosed herein retrieves the corresponding bucket array and increments the value of the appropriate slot index. The bucket array is a data structure used to store and categorize latency metrics. It consists of multiple slots, each representing a range of latency values. The time intervals/slots can be as follows:
Slot 0 for 0 to 1000ms,
Slot 1 for 1001 to 2000ms,
Slot 2 for 2001 to 3000ms,
Slot 3 for 3001 to 4000 ms,
Slot 4 for 4001 to 5000 ms,
Slot 5 for 5001 to 6000 ms,
Slot 6 for 6001 to 7000 ms,
Slot 7 for 7001 to 8000 ms,
Slot 8 for 8001 to 9000 ms,
Slot 9 for 9001 to 10000 ms,
Slot 10 for more than 10000 ms.
[0085] Each slot may have a bucket size of 1000 ms as default. It will be appreciated by those skilled in the art that the bucket size may be configured based on user requirements. It is noted that the bucket size represents the size of the time intervals in milliseconds for which latency metrics are collected. The bucket size parameter is a configurable setting that determines the capacity of each bucket within the bucket array. This parameter is set by the latency metrics module during runtime using an interface module.
[0086] Each slot corresponds to a specific range of latency values. For example, if the latency
values range from 0 to 100 milliseconds, the system may define slots in increments of, for
example, 10 milliseconds. This means there would be slots representing latency ranges like 0-10 ms, 10-20 ms, and so on.
[0087] At step 406, the method [400] comprises determining, by the determination unit [303], one of a positive result and a negative result based on an analysis of the event mapping, wherein the positive result is determined in an event of a presence of a key in the received event mapping, and the negative result is determined in an event of an absence of a key in the received event mapping. The key as used herein refers to a mapping of an interface/module in the network infrastructure. The present disclosure encompasses that the determination unit [303] is designed to analyse the event mapping received by the transceiver unit [301] and make a decision based on this analysis. Specifically, it evaluates whether a particular key is present or absent within the received event mapping. The absence of a key indicates that the received event mapping does not contain the specific key that the system is looking for. If the key is present, the determination unit [303] yields the positive result; and, if the key is absent, it produces the negative result. This process enables the system to distinguish between events where certain keys are included and those where they are not, providing valuable insights into the composition and characteristics of the received event mappings. The event refers to any occurrence that is mapped and transmitted by the node server to the transceiver unit. This may include data requests, network communications, system updates, or any other activity that the system needs to track and respond to.
[0088] At step 408, the method [400] comprises performing, by a latency metrics module [304], one of a new key procedure in an event of determination of the positive result, and an old key procedure in an event of determination of the negative result. In cases where the determination yields a positive result, indicating the presence of the key within the event mapping, the latency metrics module [304] initiates a new key procedure. The new key procedure is initiated when the system detects the presence of a key in the received event mapping. The system creates a new bucket array, increments the value of the corresponding slot index, and stores the new bucket array along with the associated key in the storage unit. This process essentially sets up a new data structure to handle future events related to the new key. On the other hand, the old key procedure is used when the key is absent in the event mapping. The system retrieves an existing bucket array associated with the key and increments the value of the corresponding slot index. This means the old key procedure updates an existing data structure rather than creating a new one.
[0089] Further, the new key procedure comprises creating, by the latency metrics module [304], a new bucket array. Further, the new key procedure further comprises incrementing, by the latency metrics module [304], a value of a corresponding slot index. Thereafter, the new key procedure comprises storing, by the latency metrics module [304] in a storage unit [305], the new bucket array and an associated key. An associated key refers to a unique identifier used to track and manage data within the system. The incrementing refers to the process of increasing the value of a corresponding slot index within the bucket array. This is done by the latency metrics module, which adds to the count or value of the slot index each time a relevant event is processed.
[0090] Further, creating a new bucket array comprises retrieving, by the transceiver unit [301] from the node server [302], a set of data for creating the new bucket array. Further, the creating the new bucket array comprises determining, by the latency metrics module [304], one or more latency values based on a current time and a request time from the set of data. Further, the creating the new bucket array comprises creating, by the latency metrics module [304], one or more slots based on the one or more latency values. The latency value is a duration between a timestamp at which the request is received, and the current time.
[0091] In other words, the process of creating the new bucket array involves steps facilitated by both the transceiver unit [301] and the latency metrics module [304]. The transceiver unit [301] retrieves the set of data required for generating the new bucket array from the node server [302]. Later, the latency metrics module [304] analyses this data to determine one or more latency values, utilizing both the current time and the request time provided within the data set. The latency value is the duration between a timestamp at which a request is sent, and a timestamp at which a response is received. The set of data refers to a collection of information retrieved from the node server [302] by the transceiver unit [301] to facilitate the creation of a new bucket array within the latency metrics module [304]. This set of data typically includes relevant parameters necessary for configuring the new bucket array, such as timestamps indicating the current time and the request time associated with the received event mapping. Based on these latency values, the module then proceeds to create one or more slots within the new bucket array, ensuring that the data is organized effectively to accurately capture and analyse latency metrics.
[0092] Thereafter, the method [400] terminates at step [410].
[0093] Referring to FIG. 5, an exemplary method flow diagram [500] for extracting latency metrics is shown, in accordance with exemplary embodiments of the present disclosure. In an implementation the method [500] is performed by the system [300]. Also, as shown in FIG. 5, the method flow diagram [500] starts at step 1. At step 1: A request is received from a node server [302]. The method [500] thereafter checks if the request is associated with a new module/interface/key entry.
[0094] At step 2: If yes, then the method [500] creates a new bucket array, increments the value of the appropriate slot/index, and stores the bucket array in a map with the key.
[0095] At step 3: If no, the method [500] retrieves the bucket array for the key and increments the value of the appropriate index/slot in the bucket array.
[0096] At step 4: The method [500] sends the latency metrics data/counter to a Network Management System (NMS) [307].
[0097] At step 5: The method [500] resets the latency metrics value at the collection interval.
[0098] Thereafter, the method [500] stops.
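Steps 1 to 3 of the FIG. 5 flow above can be condensed into one sketch. This is an illustrative reading, not the disclosed implementation: `metrics` stands in for the map held in the storage unit [305], the function name is an assumption, and latencies are taken as integer milliseconds:

```python
def process_request(metrics, key, latency_ms,
                    num_slots=11, bucket_size_ms=1000):
    """One pass through the FIG. 5 flow (steps 1-3): if the request carries
    a new module/interface/key entry, create a bucket array for it;
    otherwise reuse the stored one; then increment the slot matching the
    request's latency. `metrics` is the key -> bucket-array map."""
    if key not in metrics:                      # step 2: new key entry
        metrics[key] = [0] * num_slots          # create new bucket array
    bucket = metrics[key]                       # step 3: existing entry reused
    index = min(max(latency_ms - 1, 0) // bucket_size_ms, num_slots - 1)
    bucket[index] += 1
    return index
```

Steps 4 and 5 (sending the counters to the NMS [307] and resetting at the collection interval) would then operate on the `metrics` map as a whole.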
[0099] The present disclosure further discloses a non-transitory computer readable storage medium storing instructions for extracting latency metrics, the instructions including executable code which, when executed by one or more units of a system, causes: a transceiver unit [301] of the system to receive from a node server [302], an event mapping. Further, the instructions include executable code which, when executed by one or more units of a system, causes a determination unit [303] of the system to determine one of a positive result and a negative result based on an analysis of the event mapping, wherein the positive result is determined in an event of a presence of a key in the received event mapping, and the negative result is determined in an event of an absence of a key in the received event mapping. Further, the instructions include executable code which, when executed by one or more units of a system, causes a latency metrics module [304] of the system to perform one of a new key procedure in an event of determination of the positive result, and an old key procedure in an event of determination of the negative result.
[0100] As is evident from the above, the present disclosure provides a technically advanced solution for extracting latency metrics. The key advantages for the solution disclosed by the present disclosure are as follows:
[0101] Capacity estimation and scalability: It helps to evaluate the impact of increased load on latency during peak hours, thus helping to identify potential scalability concerns and aiding in ensuring capacity estimation without compromising latency and performance. In an example, during peak usage times, the present solution can analyse how increased traffic affects response times, allowing for proactive adjustments to maintain optimal performance.
[0102] Root Cause Analysis: In scenarios where a network node is taking a longer time to answer and the network node has a dependency on multiple interfaces before sending an answer, latency metrics can help identify the interface which is responsible for the delay. Use of latency metrics along with network node logs helps to identify the underlying cause of a latency issue.
[0103] Provides Performance Insights: Since latency metrics can be monitored in real time, they provide information about the responsiveness of a network node, its components and networks. This continuous monitoring enables the detection of performance degradation as it happens, allowing for immediate corrective actions. For example, if a particular node or component starts to slow down, the system can raise an alert so that the issue can be rectified before it impacts users. Thus, latency metrics help to identify performance bottlenecks and delays in an application.
[0104] While considerable emphasis has been placed herein on the disclosed implementations, it will be appreciated that many implementations can be made and that many changes can be made to the implementations without departing from the principles of the present disclosure. These and other changes in the implementations of the present disclosure will be apparent to
those skilled in the art, whereby it is to be understood that the foregoing descriptive matter to
be implemented is illustrative and non-limiting.
WE CLAIM:
1. A method [400] for extracting latency metrics, the method comprising:
- receiving, by a transceiver unit [301] from a node server [302], an event mapping;
- determining, by a determination unit [303], one of a positive result and a negative result based on an analysis of the event mapping, wherein the positive result is determined in an event of a presence of a key in the received event mapping, and the negative result is determined in an event of an absence of a key in the received event mapping; and
- performing, by a latency metrics module [304], one of: a new key procedure in an event of determination of the positive result, and an old key procedure in an event of determination of the negative result.
2. The method [400] as claimed in claim 1, wherein the new key procedure comprises:
- creating, by the latency metrics module [304], a new bucket array;
- incrementing, by the latency metrics module [304], a value of a corresponding slot index; and
- storing, by the latency metrics module [304] in a storage unit [305], the new bucket array and an associated key.
3. The method [400] as claimed in claim 2, wherein the creating a new bucket array
comprises:
- retrieving, by the transceiver unit [301] from the node server [302], a set of data for creating the new bucket array;
- determining, by the latency metrics module [304], one or more latency values based on a current time and a request time from the set of data; and
- creating, by the latency metrics module [304], one or more slots based on the one or more latency values.
4. The method [400] as claimed in claim 1, wherein the old key procedure comprises:
- retrieving, by the transceiver unit [301], a bucket array related to an existing module
key entry; and
- incrementing, by the latency metrics module [304], a value of a corresponding slot
index.
5. The method [400] as claimed in claim 1, wherein the method further comprises setting, by the latency metrics module [304], a bucket size parameter during runtime using an interface module [306].
6. The method [400] as claimed in claim 1, wherein the method further comprises maintaining, by the node server [302], a mapping of one or more module names and a set of one or more values associated with one or more modules.
7. The method [400] as claimed in claim 1, wherein the method further comprises sending, by the transceiver unit [301] to a network management system (NMS) [307], a latency metrics data based on the performance of the one of the new key procedure and the old key procedure.
8. The method [400] as claimed in claim 1, wherein the method further comprises resetting, by the latency metrics module [304], the latency metrics at a predefined time interval.
9. A system [300] for extracting latency metrics, the system comprising:
- a transceiver unit [301] configured to receive from a node server [302], an event mapping;
- a determination unit [303] connected at least to the transceiver unit [301], the determination unit [303] configured to determine one of a positive result and a negative result based on an analysis of the event mapping, wherein the positive result is determined in an event of a presence of a key in the received event mapping, and the negative result is determined in an event of an absence of a key in the received event mapping; and
- a latency metrics module [304] connected at least to the determination unit [303], the latency metrics module [304] configured to perform one of: a new key procedure in an event of determination of the positive result, and an old key procedure in an event of determination of the negative result.
10. The system [300] as claimed in claim 9, wherein the latency metrics module [304] for
performing the new key procedure, is configured to:
- create a new bucket array;
- increment a value of a corresponding slot index; and
- store, in a storage unit [305], the new bucket array and an associated key.
11. The system [300] as claimed in claim 10, wherein for creation of the new bucket array by the latency metrics module [304]:
- the transceiver unit [301] is further configured to retrieve, from the node server [302], a set of data for creating the new bucket array; and
- the latency metrics module [304] is further configured to:
o determine one or more latency values based on a current time and a request time from the set of data; and
o create one or more slots based on the one or more latency values.
12. The system [300] as claimed in claim 9, wherein for the performance of the old key procedure by the latency metrics module [304]:
- the transceiver unit [301] is further configured to retrieve a bucket array related to an existing module key entry; and
- the latency metrics module [304] is further configured to increment a value of a corresponding slot index.
13. The system [300] as claimed in claim 9, wherein the latency metrics module [304] is configured to set a bucket size parameter during runtime using an interface module [306].
14. The system [300] as claimed in claim 9, wherein the node server [302] is configured to maintain a mapping of one or more module names and a set of one or more values associated with one or more modules.
15. The system [300] as claimed in claim 9, wherein the transceiver unit [301] is further configured to send, to a network management system (NMS) [307], latency metrics data based on the performance of the one of the new key procedure and the old key procedure.
16. The system [300] as claimed in claim 9, wherein the latency metrics module [304] is further configured to reset the latency metrics at a predefined time interval.
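For illustration only, the bucket-array mechanism recited in the claims can be sketched as below. All identifiers (`LatencyMetrics`, `bucket_size_ms`, `record`, and so on) are hypothetical and not prescribed by the specification; the sketch applies the new key procedure (create a bucket array, increment the slot, and store the array against the key, per claims 2 and 10) the first time a module key is seen, and the old key procedure (retrieve the existing bucket array and increment the corresponding slot index, per claims 4 and 12) thereafter.

```python
import time


class LatencyMetrics:
    """Minimal sketch of the claimed latency-metrics flow (hypothetical API)."""

    def __init__(self, bucket_size_ms=100, num_buckets=10):
        self.bucket_size_ms = bucket_size_ms  # runtime-settable (claims 5, 13)
        self.num_buckets = num_buckets
        self.buckets = {}  # module key -> bucket array (claims 6, 14)

    def set_bucket_size(self, bucket_size_ms):
        # Bucket size parameter set during runtime via an interface (claims 5, 13).
        self.bucket_size_ms = bucket_size_ms

    def _slot_index(self, latency_ms):
        # Map a latency value to a histogram slot, clamped to the last slot.
        return min(int(latency_ms // self.bucket_size_ms), self.num_buckets - 1)

    def record(self, module_key, request_time_ms, current_time_ms=None):
        if current_time_ms is None:
            current_time_ms = time.time() * 1000
        # Latency determined from a current time and a request time (claims 3, 11).
        latency_ms = current_time_ms - request_time_ms
        idx = self._slot_index(latency_ms)
        if module_key not in self.buckets:
            # New key procedure: create and store a new bucket array (claims 2, 10).
            self.buckets[module_key] = [0] * self.num_buckets
        # Increment the value of the corresponding slot index (claims 2, 4, 10, 12).
        self.buckets[module_key][idx] += 1

    def reset(self):
        # Reset the latency metrics at a predefined time interval (claims 8, 16).
        self.buckets.clear()
```

The resulting per-module bucket arrays are the latency metrics data that, per claims 7 and 15, would be sent onward to a network management system.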