Abstract: The present disclosure discloses a system (106) for generating performance metrics reports. The system (106) includes a first layer (120) to: receive a report generation request from a UI (116); determine availability of precomputed network data for a received time period in a hot cache (124); compare the received time period with a retention period of the precomputed network data available in a first database (126) when the precomputed network data for the received time period is absent in the hot cache (124); determine whether the received time period is less than or equal to the retention period and, if so, query the first database (126) for fetching the precomputed network data of the received time period for generating the performance metrics report; and query a second layer (122) for computed network data for generating the performance metrics report if the received time period is greater than the retention period. Figure 1B
FORM 2
THE PATENTS ACT, 1970
(39 of 1970)
THE PATENTS RULES, 2003
COMPLETE SPECIFICATION
SYSTEM AND METHOD FOR GENERATING PERFORMANCE METRICS REPORTS
APPLICANT
JIO PLATFORMS LIMITED, of Office-101, Saffron, 380006, Gujarat, India; Nationality: India
The following specification particularly describes the invention and the manner in which it is to be performed.
RESERVATION OF RIGHTS
[0001] A portion of the disclosure of this patent document contains material,
which is subject to intellectual property rights such as, but not limited to, copyright, design, trademark, Integrated Circuit (IC) layout design, and/or trade dress protection, belonging to Jio Platforms Limited (JPL) or its affiliates (hereinafter referred to as the owner). The owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure, as it appears in the Patent and Trademark Office patent files or records, but otherwise reserves all rights whatsoever. All rights to such intellectual property are fully reserved by the owner.
FIELD OF THE DISCLOSURE
[0002] The present disclosure relates to network coverage platforms, and
specifically to a system and a method for network performance analysis and reporting.
DEFINITION
[0003] As used in the present disclosure, the following terms are generally
intended to have the meanings set forth below, except to the extent that the context in which they are used indicates otherwise.
[0004] The term “Performance metrics” as used herein, refers to specific
measurements derived from network data that provide insights into network performance and health. The performance metrics are, but not limited to, throughput, latency, packet loss, jitter, error rates, bandwidth, traffic patterns, Quality of Service (QoS) metrics, busiest periods (e.g. peak usage time such as, busiest hour of a day or busiest quarter of a year), and so forth.
[0005] The term “retention period” refers to a predefined duration for which
the data is stored and maintained in a database before the data is archived or deleted.
The retention period is determined based on relevance, usage frequency and
regulatory requirements of the data. The retention period is applied to raw network data and precomputed network data.
[0006] The term “time period” refers to a specific period for which network
data is requested to be analysed in a network performance report. The time period defines a start date and an end date of a data collection period, which allows the network data within that period to be fetched and processed.
[0007] The term “raw data” refers to unprocessed data collected directly from network sources. The network sources are, but not limited to, the network devices, network interfaces, network traffic, and so forth.
[0008] The term “precomputed network data” refers to data that has been
processed, aggregated, or analysed to some extent based on defined criteria or a reporting template.
[0009] The term “performance metrics reports” refers to documents presenting information about the operational performance of a computer network over a specified period of time. The performance metrics reports include various metrics and Key Performance Indicators (KPIs) that reflect the quality, efficiency, and reliability of a network.
BACKGROUND OF THE DISCLOSURE
[0010] The following description of related art is intended to provide
background information pertaining to the field of the disclosure. This section may include certain aspects of the art that may be related to various features of the present disclosure. However, this section should be used only to enhance the understanding of the reader with respect to the present disclosure, and not as an admission of prior art.
[0011] In modern telecommunications and network management, analysing
network performance is crucial for maintaining optimal service quality and efficiency. Network operators often need to monitor various Key Performance
Indicators (KPIs) such as throughput, latency, and packet loss across numerous
network nodes like base stations, routers, and switches. This analysis helps in
identifying bottlenecks, planning capacity upgrades, and ensuring compliance with
Service Level Agreements (SLAs).
[0012] Conventionally, network performance analysis has been a manual
and a time-consuming process. Network engineers would collect data from the
network nodes, compile it into spreadsheets, and analyze it to derive insights. This
method is prone to human error and inconsistency, leading to inaccurate or delayed
decisions.
[0013] Several standalone tools and software solutions also exist for
network performance monitoring. These tools collect the data and provide basic
analysis capabilities. However, they often lack integration with other network
management systems and do not offer automated reporting or advanced analytical
features. Some systems provide basic automated reporting capabilities, allowing
users to schedule regular reports. These systems often have limited flexibility in
terms of customization and scalability. They may not support complex data analysis
or advanced visualization features.
[0014] Moreover, conventional systems may struggle with handling large
volumes of the data efficiently. They may not be designed to process and analyze
the data in real-time or near-real-time, leading to delays in identifying and resolving
network issues.
[0015] Thus, there is a need to provide a comprehensive and efficient
solution for network performance analysis, enhancing an ability of network
operators to maintain optimal service quality and make informed decisions.
SUMMARY OF THE DISCLOSURE
[0016] In an exemplary embodiment, the present invention discloses a
system for generating performance metrics reports. The system includes a first layer connected to a User Interface (UI). The first layer is configured to: receive a report
generation request from the UI, where the report generation request comprises a time period for which information is needed. The first layer is further configured to determine an availability of precomputed network data for the time period in a hot cache. The first layer is further configured to compare the received time period with a retention period of the precomputed network data available in a first database when the precomputed network data for the received time period is absent in the hot cache. The first layer is further configured to determine whether the received time period is less than or equal to the retention period; if so, the first layer is configured to query the first database for fetching the precomputed network data of the received time period. If the received time period is greater than the retention period, then the first layer is configured to query a second layer for computed network data for generating the performance metrics report.
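By way of a non-limiting illustration only, the routing logic summarised above may be sketched as follows. The helper objects (hot_cache, first_db, second_layer), their method names, and the fixed retention value are hypothetical stand-ins and do not form part of the claimed system:

```python
from datetime import date, timedelta

RETENTION = timedelta(days=180)   # assumed retention period of the first database
hot_cache = {}                    # {(start_date, end_date): precomputed network data}

def fetch_report_data(start: date, end: date, first_db, second_layer):
    """Mirror the claimed flow: hot cache, then first database, then second layer."""
    # 1. Use precomputed network data if it is already in the hot cache.
    if (start, end) in hot_cache:
        return hot_cache[(start, end)]
    # 2. Cache miss: if the requested period falls within the retention
    #    period of the first database, fetch the precomputed data from it.
    if date.today() - start <= RETENTION:
        return first_db.fetch_precomputed(start, end)
    # 3. Otherwise query the second (computation) layer, which computes
    #    the network data for the requested period.
    return second_layer.compute(start, end)
```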
[0017] In some embodiments, the first layer and the second layer are an
Integrated Performance Management (IPM) layer and a computation layer, respectively.
[0018] In some embodiments, the report generation request comprises
reporting parameters and a reporting template.
[0019] In some embodiments, the reporting template is one of a user-
created reporting template or an automatically generated reporting template.
[0020] In some embodiments, a computing model is configured to generate
the reporting template based on an analysis of report generation requests.
[0021] In some embodiments, the second layer is configured to generate the
computed network data by performing one or more computations on raw data based
on reporting parameters and a reporting template.
[0022] In some embodiments, the second layer is configured to store the
computed network data in the first database.
[0023] In some embodiments, the first layer is configured to generate the
performance metrics report using a reporting template, the computed data received from the second layer and insights received from a computing model.
[0024] In some embodiments, the first layer is configured to store the
generated performance metrics reports in a second database.
[0025] In some embodiments, the second database is a Distributed File
System (DFS).
[0026] In some embodiments, the first database is a Distributed Data Lake
(DDL) configured to store reporting templates, reporting parameters, the
precomputed network data, or a combination thereof.
[0027] In some embodiments, a computing model is configured to generate
the performance metrics reports on a scheduled time interval based on an analysis of report generation requests.
[0028] In some embodiments, the hot cache is configured to store the
precomputed network data for a predefined time duration.
[0029] In another exemplary embodiment, the present invention discloses a
method for generating performance metrics reports. The method includes a step of
receiving, by a first layer, a report generation request from a User Interface (UI), wherein the report generation request comprises a time period for which information is needed. The method includes a step of determining, by the first layer, an availability of precomputed network data for the time period received in the report generation request in a hot cache. The method includes a step of comparing,
by the first layer, the received time period with a retention period of the precomputed network data available in a first database when the precomputed network data for the received time period is absent in the hot cache. The method includes a step of querying, by the first layer, the first database for fetching the precomputed network data of the received time period for generating the performance metrics
report if the received time period is less than or equal to the retention period. The method further includes a step of querying, by the first layer, the second layer for computed network data for generating the performance metrics report if the received time period is greater than the retention period.
[0030] In some embodiments, the first layer and the second layer are an
Integrated Performance Management (IPM) layer and a computation layer, respectively.
[0031] In some embodiments, the report generation request comprises
reporting parameters and a reporting template.
[0032] In some embodiments, the reporting template is one of a user-
created reporting template or an automatically generated reporting template.
[0033] In some embodiments, the method includes a step of generating, by
a computing model, the reporting template based on an analysis of report generation requests.
[0034] In some embodiments, the method includes a step of generating, by
the second layer, the computed network data by performing one or more
computations on raw data based on reporting parameters and a reporting template.
[0035] In some embodiments, the method includes a step of storing, by the
second layer, the computed network data in the first database.
[0036] In some embodiments, the method includes a step of generating, by
the first layer, the performance metrics report using a reporting template, the
computed data received from the second layer and insights received from a
computing model.
[0037] In some embodiments, the method includes a step of storing, by the
first layer, the generated performance metrics reports in a second database.
[0038] In some embodiments, the second database is a Distributed File
System (DFS).
[0039] In some embodiments, the first database is a Distributed Data Lake
(DDL) to store reporting templates, reporting parameters, the precomputed network data, or a combination thereof.
[0040] In some embodiments, the method includes a step of generating, by
a computing model, the performance metrics reports on a scheduled time interval based on an analysis of report generation requests.
[0041] In some embodiments, the hot cache stores the precomputed network
data for a predefined time duration.
[0042] In an exemplary embodiment, the present invention discloses a User
Equipment (UE) configured for interacting with a system for generating performance metrics reports. The UE includes: a main processor. The UE further includes a computer readable storage medium storing one or more instructions for execution by the main processor to receive one or more inputs of a user through a
User Interface (UI) for creating a reporting template. The one or more inputs include one or more reporting parameters and a time period for which information is needed. The main processor is further configured to transmit the reporting template including the one or more reporting parameters to the system for report generation. The main processor is further configured to receive a performance
metrics report from the system. The performance metrics report is generated based on the reporting template and the one or more reporting parameters. The main processor is further configured to display the performance metrics report on the User Interface (UI).
[0043] In some embodiments, the main processor is configured to receive
one or more status updates associated with a report generation process from the system.
[0044] In some embodiments, the main processor is configured to display
the one or more received status updates on the User Interface (UI).
[0045] The foregoing general description of the illustrative embodiments
and the following detailed description thereof are merely exemplary aspects of the teachings of this disclosure and are not restrictive.
OBJECTS OF THE DISCLOSURE
[0046] Some of the objects of the present disclosure, which at least one
embodiment herein satisfies are as listed herein below.
[0047] It is an object of the present disclosure to provide a system and a
method for analysing network performance using analysis techniques such as Artificial Intelligence/Machine Learning (AI/ML) techniques.
[0048] It is an object of the present disclosure to provide a system and a
method that includes an AI/ML model to generate performance metrics out of large volumes of network data in a particular format.
[0049] It is an object of the present disclosure to provide a system and a
method that allows a user to schedule the viewing of different parameters, the creation of different views, and the computation of various metrics out of network data at different times, for example, hourly, daily, weekly, etc.
[0050] It is an object of the present disclosure to provide a system and a
method that collects all parameters or metrics of network data in one place and generates reports using the same templates at a time frequency set by the user for network performance analysis.
[0051] It is an object of the present disclosure to provide a system and a
method to assess overall performance of a network infrastructure to identify areas of improvement and ensure optimal network functioning.
[0052] It is an object of the present disclosure to provide a system and a
method to detect and identify performance issues, bottlenecks, and anomalies in the network that may be causing degradation or disruptions.
[0053] It is an object of the present disclosure to provide a system and a
method to optimize network resources, configurations, and capacity based on
[0054] It is an object of the present disclosure to provide a system and a
method to track performance trends over time to identify patterns, predict future performance, and make proactive decisions to avoid potential performance problems.
[0055] It is an object of the present disclosure to assess network
performance against defined Service Level Agreements (SLAs) or compliance standards to ensure compliance and meet performance targets.
[0056] It is an object of the present disclosure to establish a continuous
monitoring system to track network performance in real-time or at regular intervals, enabling proactive identification of performance issues and prompt remediation.
BRIEF DESCRIPTION OF DRAWINGS
[0057] The accompanying drawings, which are incorporated herein, and
constitute a part of this disclosure, illustrate exemplary embodiments of the disclosed methods and systems in which like reference numerals refer to the same
parts throughout the different drawings. Components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present disclosure. Some drawings may indicate the components
using block diagrams and may not represent the internal circuitry of each component. It will be appreciated by those skilled in the art that disclosure of such drawings includes the disclosure of electrical components, electronic components or circuitry commonly used to implement such components.
[0058] FIG. 1A illustrates an exemplary network architecture in which or
with which embodiments of the present disclosure may be implemented.
[0059] FIG. 1B illustrates an exemplary architecture of a system depicting
a flow of data and a relationship between different components, in accordance with an embodiment of the disclosure.
[0060] FIG. 1C illustrates an exemplary block diagram of the system, in
accordance with an embodiment of the present disclosure.
[0061] FIG. 2 illustrates an exemplary flow diagram of a process depicting
a performance metrics report generation, in accordance with an embodiment of the present disclosure.
[0062] FIG. 3 illustrates an exemplary computer system in which or with
which embodiments of the present disclosure may be implemented.
[0063] FIG. 4 illustrates a flowchart of a method for generating performance
metrics reports, in accordance with an embodiment of the present disclosure.
[0064] The foregoing shall be more apparent from the following more
detailed description of the disclosure.
LIST OF REFERENCE NUMERALS
100 – Network architecture
102-1, 102-2…102-N – User equipment
104-1, 104-2…104-N – Users
106 – System
108 – Network
110-1, 110-2… 110-N – Main processors
112-1, 112-2…112-N – Computer readable storage mediums
114 – Computing model
116 – User Interface (UI)
118 – Load balancer
120 – First layer
122 – Second layer
124 – Hot cache
126 – First database
128 – Second database
130 – Legends
132 – Receiving unit
134 – Memory
136 – Interfacing unit
138 – Database
140 – Processing unit
142 – Training module
144 – Allocation module
146 – Report generation module
148 – Computation module
200 – Process
300 – Computer system
310 – External storage device
320 – Bus
330 – Main memory
340 – Read only memory
350 – Mass storage device
360 – Communication port(s)
370 – Processor
400 – Method
DETAILED DESCRIPTION OF THE DISCLOSURE
[0065] In the following description, for the purposes of explanation, various
specific details are set forth in order to provide a thorough understanding of embodiments of the present disclosure. It will be apparent, however, that embodiments of the present disclosure may be practiced without these specific details. Several features described hereafter can each be used independently of one another or with any combination of other features. An individual feature may not address all of the problems discussed above or might address only some of the problems discussed above. Some of the problems discussed above might not be fully addressed by any of the features described herein.
[0066] The ensuing description provides exemplary embodiments only, and
is not intended to limit the scope, applicability, or configuration of the disclosure.
Rather, the ensuing description of the exemplary embodiments will provide those
skilled in the art with an enabling description for implementing an exemplary
embodiment. It should be understood that various changes may be made in the
function and arrangement of elements without departing from the spirit and scope
of the disclosure as set forth.
[0067] Specific details are given in the following description to provide a
thorough understanding of the embodiments. However, it will be understood by one
of ordinary skill in the art that the embodiments may be practiced without these
specific details. For example, circuits, systems, networks, processes, and other
components may be shown as components in block diagram form in order not to
obscure the embodiments in unnecessary detail. In other instances, well-known
circuits, processes, algorithms, structures, and techniques may be shown without
unnecessary detail in order to avoid obscuring the embodiments.
[0068] Also, it is noted that individual embodiments may be described as a
process which is depicted as a flowchart, a flow diagram, a data flow diagram, a structure diagram, or a block diagram. Although a flowchart may describe the
operations as a sequential process, many of the operations can be performed in parallel or concurrently. In addition, the order of the operations may be re-arranged. A process is terminated when its operations are completed but could have additional steps not included in a figure. A process may correspond to a method, a function, a procedure, a subroutine, a subprogram, etc. When a process corresponds to a function, its termination can correspond to a return of the function to the calling function or the main function.
[0069] The word “exemplary” and/or “demonstrative” is used herein to
mean serving as an example, instance, or illustration. For the avoidance of doubt,
the subject matter disclosed herein is not limited by such examples. In addition, any
aspect or design described herein as “exemplary” and/or “demonstrative” is not
necessarily to be construed as preferred or advantageous over other aspects or
designs, nor is it meant to preclude equivalent exemplary structures and techniques
known to those of ordinary skill in the art. Furthermore, to the extent that the terms
“includes,” “has,” “contains,” and other similar words are used in either the detailed
description or the claims, such terms are intended to be inclusive in a manner similar
to the term “comprising” as an open transition word without precluding any
additional or other elements.
[0070] Reference throughout this specification to “one embodiment” or “an
embodiment” or “an instance” or “one instance” means that a particular feature,
structure, or characteristic described in connection with the embodiment is included
in at least one embodiment of the present disclosure. Thus, the appearances of the
phrases “in one embodiment” or “in an embodiment” in various places throughout
this specification are not necessarily all referring to the same embodiment.
Furthermore, the particular features, structures, or characteristics may be combined
in any suitable manner in one or more embodiments.
[0071] The terminology used herein is for the purpose of describing
particular embodiments only and is not intended to be limiting of the disclosure. As used herein, the singular forms “a”, “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further
understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. As used herein, the term “and/or” includes any and all combinations of one or more of the associated listed items.
[0072] Typically, analysing and deriving insights from network performance data may be a time-consuming and tedious task. Also, bringing different network nodes, Key Performance Indicators (KPIs), attributes, time hierarchy, network hierarchy, and their different views together may be very difficult. It is challenging to create a report again and again after a certain interval of time such as, 30 minutes, hourly, daily, or weekly. This feature not only allows a user to view different parameters, create different views, or extract computation of various important metrics such as the busy hour of a day, the busiest quarter of a year, and so forth out of network data in various ways, but also allows the user to schedule the same for different times, say, hourly, weekly, daily, etc. This feature brings all these parameters to one place for performance analysis and automated report generation at different time intervals as per the user for continuous monitoring. An entire template to generate important performance metrics out of large volumes of the network data in a particular format is completely unique. This unique step involves designing and implementing an algorithm that can efficiently process the large volumes of the network data, extract relevant information, and calculate metrics such as the busiest hours or quarters of the day. The algorithm also uses Machine Learning (ML) algorithms and other analytical methods to analyse the network data and generate insights for the generated reports.
[0073] The various embodiments throughout the disclosure will be
explained in more detail with reference to FIGS. 1-4.
[0074] FIG. 1A illustrates an exemplary network architecture (100) in
which or with which embodiments of the present disclosure may be implemented.
[0075] Referring to the FIG. 1A, the network architecture (100) may include one or more computing devices or one or more User Equipment (UE) (102-1, 102-2…102-N) that may be associated with one or more users (104-1, 104-2…104-N) and a system (106) in an environment. In an embodiment, the one or more UE (102-1, 102-2…102-N) may communicate with the system (106) through a network (108). A person of ordinary skill in the art will understand that the one or more UE (102-1, 102-2…102-N) may be individually referred to as the UE (102) and collectively referred to as the UEs (102). A person of ordinary skill in the art will appreciate that the terms “computing device(s)” and “UE” may be used interchangeably throughout the disclosure. Although three UE (102) are depicted in the FIG. 1A, any number of the UE (102) may be included without departing from the scope of the ongoing description. Similarly, a person of ordinary skill in the art will understand that the one or more users (104-1, 104-2…104-N) may be individually referred to as the user (104) and collectively referred to as the users (104).
[0076] In an embodiment, the UE (102) may include smart devices
operating in a smart environment, for example, an Internet of Things (IoT) system. In such embodiment, the UE (102) may include, but not limited to, smart phones, smart watches, smart sensors (e.g., mechanical, thermal, electrical, magnetic, etc.), networked appliances, networked peripheral devices, networked lighting system, communication devices, networked vehicle accessories, networked vehicular devices, smart accessories, tablets, Smart Television (TV), computers, a smart security system, a smart home system, other devices for monitoring or interacting with or for the users (104) and/or entities, or any combination thereof. A person of ordinary skill in the art will appreciate that the UE (102) may include, but not limited to, intelligent multi-sensing, network-connected devices that can integrate seamlessly with each other and/or with a central server or a cloud-computing system or any other device that is network-connected.
[0077] In an embodiment, the UE (102) may include, but not limited to, a
handheld wireless communication device (e.g., a mobile phone, a smart phone, a phablet device, and so on), a wearable computer device (e.g., a head-mounted display computer device, a head-mounted camera device, a wristwatch computer device, and so on), a Global Positioning System (GPS) device, a laptop, a tablet computer, or another type of portable computer, a media playing device, a portable gaming system, and/or any other type of computer device with wireless communication capabilities, and the like.
[0078] In an embodiment, the UE (102) may include, but is not limited to,
any electrical, electronic, or electro-mechanical equipment, or a combination of one or more of the above devices, such as Virtual Reality (VR) devices, Augmented Reality (AR) devices, a general-purpose computer, a desktop, a personal digital assistant, a mainframe computer, or any other computing device. In another
embodiment, the UE (102) may include one or more in-built or externally coupled accessories including, but not limited to, a visual aid device such as a camera, an audio aid, a microphone, a keyboard, and input devices for receiving input from the user (104) or the entity such as a touch pad, a touch enabled screen, an electronic pen, and the like. A person of ordinary skill in the art will appreciate that the UE
(102) may not be restricted to the mentioned devices and various other devices may be used.
[0079] Further, each of the UE (102) may include main processors (110-1,
110-2…110-N) (hereinafter collectively referred to as the main processors (110)
and individually referred to as the main processor (110)) that refers to any logic circuitry for processing instructions. The main processor (110) may be, but not limited to, a conventional processor, a Digital Signal Processor (DSP), a plurality of microprocessors, one or more microprocessors in association with the DSP, a controller, a microcontroller, Application Specific Integrated Circuits (ASIC), Field Programmable Gate Array (FPGA) circuits, any other type of integrated circuits, and so forth. More specifically, the main processor (110) is a hardware processor. The main processor (110) may perform signal coding, data processing, input/output processing, and/or any other functionality that enables a working of the system (106), according to the present disclosure.
[0080] Further, each of the UE (102) may include computer readable
storage mediums (112-1, 112-2…112-N) (hereinafter collectively referred to as the computer readable storage mediums (112) and individually referred to as the computer readable storage medium (112)) to store the instructions. As used herein, the term “instructions” may refer to a sequence of commands that are written in a programming language and may be executed by the main processor (110) to perform tasks associated with the UE (102). In an exemplary embodiment, the computer readable storage mediums (112) may be, but not limited to, Hard Disk Drives (HDDs), Solid State Drives (SSD), a flash memory, a Random-Access Memory (RAM), and so forth.
[0081] Referring to the FIG. 1A, the UE (102) may communicate with the system (106) via a set of executable instructions residing on any operating system. The system (106) may be, for example, the system (106) for generating performance metrics reports.
[0082] In an embodiment, the system (106) may include a framework for
analysis of a network performance. The system (106) may ensure that the user (104) has the same format throughout for analysis of the network performance so that reading network performance reports is easy, consistent, and less prone to human errors. The system (106) may also be configured to allow the user (104) to view the performance metrics reports that may be generated based on performance metrics. As used herein, the term “performance metrics” may refer to specific measurements derived from network data that provide insights into the network performance and health. The performance metrics may be, but not limited to, throughput, latency, packet loss, jitter, error rates, bandwidth, traffic patterns, Quality of Service (QoS) metrics, busiest periods (e.g. peak usage time such as, busiest hour of a day or busiest quarter of a year), and so forth. Embodiments of the present invention are intended to include or otherwise cover any type of the performance metrics including known related art and/or later developed metrics.
[0083] The system (106) may be configured to provide flexibility to select certain performance metrics, analyze the performance metrics at regular intervals, and compute other significant performance metrics of the network data. Further, the system (106) may be configured to analyse issues quickly and provide a solution to the user (104).
[0084] In an embodiment, the system (106) may include a computing model
(114) that may be executed for enhancing efficiency and accuracy of report
generation and data management. In a preferred embodiment, the computing model (114) may be an Artificial Intelligence (AI)/Machine Learning (ML) model. In an embodiment, the computing model (114) may automatically generate reporting templates based on frequently requested performance metrics reports. For example, if multiple users (104) consistently request performance metrics reports with similar reporting parameters, then the computing model (114) may automatically create the reporting template to streamline future requests.
[0085] Further, in an embodiment, the computing model (114) may also be
executed to identify the performance metrics reports that are frequently requested
and accordingly schedule the performance metrics reports for automatic generation at a scheduled time interval. For example, if a weekly performance metrics report is consistently requested by the users (104), then the computing model (114) may automate generation of the performance metrics reports every week without user intervention. Further, in an embodiment, the computing model (114) may also be executed to perform predictive analysis to forecast future network performance
issues. In an embodiment, the computing model (114) may also be executed to
provide different visualization of the performance metrics and generate insights on
the generated reports. The components of the system (106) are explained in detail in conjunction with FIG. 1B and FIG. 1C.
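As a non-limiting sketch of the scheduling behaviour described above, the computing model (114) may be imagined as a simple frequency counter over request signatures; the threshold and field names below are assumptions for illustration, not part of the disclosure:

```python
from collections import Counter

AUTO_TEMPLATE_THRESHOLD = 5        # assumed: requests seen before auto-templating
request_counts = Counter()

def observe_request(kpis: tuple, nodes: tuple, interval: str):
    """Count a request signature; emit an auto-generated template once frequent."""
    signature = (kpis, nodes, interval)
    request_counts[signature] += 1
    if request_counts[signature] == AUTO_TEMPLATE_THRESHOLD:
        # A frequently requested report: create a reusable template and
        # schedule it for automatic generation at the observed interval.
        return {
            "kpi_names": list(kpis),
            "network_nodes": list(nodes),
            "schedule": interval,               # e.g. "weekly"
            "report_generation_mode": "scheduled",
        }
    return None
```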
[0086] In an embodiment, the network (108) may include, at least one of a
4G network, a 5G network, a 6G network, or the like. The network (108) may
enable the UE (102) to communicate with other devices in the network architecture
(100) and/or with the system (106). The network (108) may include a wireless card or some other transceiver connection to facilitate this communication. In another embodiment, the network (108) may be implemented as, or include any of a variety of different communication technologies such as a Wide Area Network (WAN), a Local Area Network (LAN), a wireless network, a mobile network, a Virtual Private Network (VPN), the Internet, a Public Switched Telephone Network (PSTN), or the like.
[0087] Although the FIG. 1A shows exemplary components of the network
architecture (100) may include fewer components, different components, differently arranged components, or additional functional components than depicted in the FIG. 1A. Additionally, or alternatively, one or more components of the network architecture (100) may perform functions described as being performed by one or more other components of the network architecture (100).
[0088] FIG. 1B illustrates an exemplary architecture of the system (106),
depicting a flow of data and a relationship between different components, in accordance with an embodiment of the present disclosure. The components may be
a User Interface (UI) (116), a load balancer (118), a first layer (120), a second layer (122), a hot cache (124), a first database (126) and a second database (128). In an embodiment, the components may interact with each other via the legends (130). The legends (130) may be protocol interfaces such as, but not limited to, a Hypertext Transfer Protocol (HTTP) interface, a Transmission Control Protocol (TCP) interface, a File Input-Output (IO) interface, and so forth.
[0089] As shown in the FIG. 1B, the system (106) may be configured to
streamline a process of monitoring and analysing the network data. In an exemplary embodiment, the network data may be raw data or precomputed network data that encompasses a wide range of the performance metrics and information related to performance and operation of the network (108). In an embodiment, the network data may include data related to the performance metrics such as, but not limited to, the throughput, the latency, the packet loss, the jitter, the error rates, the
bandwidth, the traffic patterns, the Quality of Service (QoS) metrics, the busiest periods, and so forth. In another embodiment, the network data may include data related to a setup of the network (108), operational status of network devices, and so forth. Embodiments of the present invention are intended to include or otherwise cover any type of the network data including known related art and/or later developed data.
[0090] As used herein, the term “raw data” may refer to unprocessed data
collected directly from network sources. The network sources may be, but not limited to, the network devices, network interfaces, network traffic, and so forth. Also, as used herein, the term “precomputed network data” may refer to data that has been processed, aggregated, or analysed to some extent based on previously defined criteria or the reporting template.
[0091] In an embodiment, the UI (116) may be an intuitive interface for the
users (104) to interact with the system (106). In an embodiment, the users (104) may interact with the UI (116) for performing various tasks such as, but not limited to, creating the reporting template, requesting generation of the network performance reports, scheduling the network performance reports, and so forth. The UI (116)
may be designed to streamline a process of creating analysis templates, defining analysis parameters, and managing analysis workflow.
[0092] In an embodiment, the UI (116) may allow the user (104) to provide
a first set of inputs and a second set of inputs for creating the reporting template.
In an exemplary embodiment, the user (104) may create the reporting template using preconfigured design tools. In an exemplary embodiment, the preconfigured design tools may be, but not limited to, graphic design tools, visual content creation tools, cloud-based presentation tools, design and layout tools, and so forth. As used herein, the term “preconfigured design tools” may refer to software that may come with built-in features, templates, and functionalities tailored for specific tasks. In another embodiment, the user (104) may create the reporting template via a Command Line Interface (CLI).
[0093] The first set of inputs may be, but not limited to, a structure, a layout,
components, and so forth. In an exemplary embodiment, the components may be, but not limited to, headers, footers, tables, charts, text sections, and so forth. Further, in an embodiment, the second set of inputs may be the reporting parameters that may help in determining how the network data should be analyzed. In an exemplary embodiment, the reporting parameters may be, but not limited to, a start date, an end date, network nodes, a topological hierarchy, a categorical hierarchy, a time bucket (e.g. how often the report should be generated), BusyHourMax, BusyHourValue, Key Performance Indicators (KPI) Names, threshold values for each KPI, a username, a user group, a domain group, a report name, a report generation mode, views, configuration settings, and so forth.
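For illustration only, the reporting parameters listed above may be grouped into a single structure such as the following; the field names and defaults are assumptions, since the disclosure does not fix a schema:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ReportingParameters:
    report_name: str
    start_date: date
    end_date: date
    network_nodes: list
    kpi_names: list
    time_bucket: str = "hourly"                  # aggregation interval
    kpi_thresholds: dict = field(default_factory=dict)
    views: list = field(default_factory=lambda: ["table"])
    report_generation_mode: str = "on demand"    # or "scheduled"

params = ReportingParameters(
    report_name="Report A",
    start_date=date(2024, 3, 1),
    end_date=date(2024, 6, 22),
    network_nodes=["Node A", "Node B"],
    kpi_names=["latency", "throughput"],
)
```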
[0094] In an exemplary embodiment, the start date may be a starting point
in time for the data to be included in the performance metrics report and the end date may be an ending point in time for the data to be included in the performance metrics report. For example, if the user (104) wants to generate the performance metrics report for network performance data beginning from March 1, 2024, and ending on June 22, 2024, then the start date and the end date may be set to the respective dates.
[0095] Further, the network nodes may be network elements such as, but
not limited to, base stations, routers, switches, and so forth, that need to be included
in the performance metrics report. For example, the user (104) may specify that the
performance metrics report needs to include the data from the base station located
at X city. Further, the topological hierarchy may represent a structural arrangement
of the network elements, representing their connections and relationships. The
categorical hierarchy may represent a classification of the network elements or data
points into categories for better analysis.
[0096] The time bucket may specify time intervals for which the data needs
to be aggregated in the performance metrics report. For example, the data may be
aggregated on hourly, daily, or weekly intervals. Further, in an exemplary
embodiment, the BusyHourMax may represent a maximum value of the KPI observed during the busiest hour. For example, if the KPI is network traffic, then the BusyHourMax may show traffic volume recorded during the busiest hour.
[0097] Also, the BusyHourValue may represent a specific KPI value that may be recorded during the busiest hour. For example, for a busy hour from 12 PM to 1 PM, the BusyHourValue may show the traffic volume during that hour.
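A non-limiting worked example of these busy-hour metrics, assuming hourly traffic samples (the figures are illustrative only):

```python
hourly_traffic = {"11:00": 820.0, "12:00": 1430.5, "13:00": 1210.3}  # GB per hour

# The busy hour is the interval with the highest KPI value; BusyHourMax /
# BusyHourValue then report the KPI value recorded during that hour.
busy_hour = max(hourly_traffic, key=hourly_traffic.get)
busy_hour_value = hourly_traffic[busy_hour]

print(busy_hour, busy_hour_value)   # 12:00 1430.5
```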
[0098] In an exemplary embodiment, the KPI names may be names of the
performance metrics that need to be calculated and reported. The KPIs may be, but
not limited to, the throughput, the latency, the packet loss, the busiest hour of the
day, the busiest quarter of the year, and so forth. The threshold values for each KPI
may represent predefined limits for the KPIs that help in identifying whether the
network performance is within acceptable ranges.
[0099] The username may be an identifier of the user (104) creating or
requesting the performance metrics report. The user group may be a group to which
the user (104) belongs. The domain group may represent a domain (i.e. a subset of
network or organization) within which the performance metrics report needs to be
generated.
[00100] In an exemplary embodiment, the report name may be used to enable
the users (104) to easily identify different performance metrics reports in the system (106). Further, the report generation mode may be on demand or on the scheduled time interval. In an exemplary embodiment, the scheduled time interval may be, but not limited to, daily, weekly, monthly, hourly, and so forth.
[00101] In another exemplary embodiment, the views may define different
visualizations for the performance metrics report. The views may be, but not limited to, graphs, charts, tables, summaries, and so forth. Embodiments of the present disclosure are intended to include or otherwise cover any type of the views that may help in understanding the data better.
[00102] In an exemplary embodiment, the configuration settings may be, but
not limited to, a format of the performance metrics report, alerts, and so forth. In an exemplary embodiment, the format of the performance metrics report may be, but not limited to, Portable Document Format (PDF), Comma Separated Values (CSV), and so forth. Similarly, the alerts may be, for example, highlighting any network node with latency > 100ms.
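The latency alert mentioned above may, for example, be realised as a simple threshold check of the following kind; the node names and values are illustrative only:

```python
LATENCY_THRESHOLD_MS = 100.0

node_latency_ms = {"Node A": 42.0, "Node B": 135.7, "Node C": 98.1}

# Highlight any network node whose latency exceeds the configured threshold.
alerts = [
    f"ALERT: {node} latency {latency:.1f} ms exceeds {LATENCY_THRESHOLD_MS:.0f} ms"
    for node, latency in node_latency_ms.items()
    if latency > LATENCY_THRESHOLD_MS
]

for alert in alerts:
    print(alert)    # ALERT: Node B latency 135.7 ms exceeds 100 ms
```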
[00103] In an exemplary embodiment, the user (104) may need to fill in fields
of the reporting parameters in the reporting template, for example: the report name: Report A, the KPI names: latency and throughput, the network nodes: Node A, Node B, the view: line chart, bar graph, the report generation mode: on demand.
[00104] In an embodiment, the user (104) may customize the created
reporting template based on a requirement through the UI (116). For example, the user (104) may provide additional sections, metrics, graphs, and so forth as inputs to be included in the created reporting template.
[00105] In another embodiment, the user (104) may customize an
automatically created reporting template provided by the computing model (114) (as shown in the FIG. 1A) of the system (106) based on the first set of inputs and the second set of inputs.
[00106] In such embodiment, the user (104) may receive suggestions on a
template creation process from the computing model (114) of the system (106). For example, if the user (104) provides the input that the user (104) wants to analyze the network performance, then the system (106) may suggest relevant KPIs such as, but not limited to, the latency, the packet loss and the throughput that need to be included in the reporting template for analysing the network performance.
[00107] Further, the UI (116) may be used by the user (104) to send the
reporting template along with the report generation request to the system (106). In an embodiment, the report generation request may be an HTTP request that may be transmitted via the HTTP interface. In another embodiment, the UI (116) may be
used by the user (104) to transmit only the report generation request to the system (106). In such embodiment, the system (106) may utilize the automatically generated reporting template. In an embodiment, the report generation request may include a time period. The time period may define a period (for example, a start date and an end date) for which the network data is requested to be analyzed in the performance metrics report. In another embodiment, the time period may be one of the reporting parameters that may be included in the reporting template.
[00108] The time period may refer to a specific period for which the network data is requested to be analyzed in the performance metrics report. The time period defines the start date and the end date of a data collection period, which allows the system (106) to fetch and process the network data within such period. For example, the user (104) may want to generate the performance metrics report on reporting parameters such as the throughput and the latency of the network data for a time period such as the past week. In such a case, the user (104) may provide the throughput and the latency as the reporting parameters in the reporting template, along with the past week as the time period in the report generation request. In an embodiment, the system (106) may be configured to transmit the report generation request received from the UI (116) to the load balancer (118).
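By way of a non-limiting illustration, such an HTTP report generation request might carry a JSON body of the following shape; the endpoint and field names are assumptions, as the disclosure does not define a wire format:

```python
import json

report_generation_request = {
    "template": "Report A",
    "reporting_parameters": {
        "kpi_names": ["throughput", "latency"],
        "network_nodes": ["Node A", "Node B"],
    },
    # Time period for which the network data is to be analysed.
    "time_period": {"start_date": "2024-06-15", "end_date": "2024-06-22"},
}

body = json.dumps(report_generation_request)
# e.g. POST body sent to the system over the HTTP interface:
# requests.post("https://<system-host>/reports", data=body,
#               headers={"Content-Type": "application/json"})
```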
[00109] In an embodiment, the load balancer (118) may be connected to the
UI (116). In an embodiment, the load balancer (118) may be configured to receive
the report generation request and assign the received report generation request to a suitable instance of the first layer (120). In an exemplary embodiment, the load
balancer (118) may determine which instance of the first layer (120) is best suited
to handle the corresponding report generation request based on a current load and
availability. The load balancer (118) may further transmit the report generation
request to the selected instance of the first layer (120).
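A minimal sketch of this selection rule, assuming each first-layer instance reports its availability and a normalised load figure (real deployments would also weigh health checks, affinity, and so forth):

```python
instances = [
    {"name": "ipm-1", "available": True,  "current_load": 0.72},
    {"name": "ipm-2", "available": True,  "current_load": 0.31},
    {"name": "ipm-3", "available": False, "current_load": 0.10},
]

def pick_instance(instances):
    """Pick the available first-layer instance with the lowest current load."""
    candidates = [i for i in instances if i["available"]]
    if not candidates:
        raise RuntimeError("no first-layer instance available")
    return min(candidates, key=lambda i: i["current_load"])

print(pick_instance(instances)["name"])   # ipm-2
```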
[00110] In an embodiment, the first layer (120) may be connected to the load
balancer (118) and configured to receive the report generation request from the load
balancer (118). The first layer (120) may be an Integrated Performance Management (IPM) layer that may extract the reporting parameters that may be included in the reporting template and may interact with the first database (126) to store the reporting parameters and the reporting template (hereinafter, the reporting template and the reporting parameters are collectively referred to as report generation information) in the first database (126).
[00111] In an embodiment, the first layer (120) may determine an availability
of the precomputed network data in the hot cache (124) based on the report generation information.
[00112] In an embodiment, the first layer (120) may fetch the precomputed
network data from the hot cache (124) to generate the performance metrics report if the precomputed network data for the received time period is available in the hot cache (124).
[00113] In another embodiment, the first layer (120) may interact with the
first database (126) for the precomputed network data if the precomputed network data for the received time period is not available in the hot cache (124). In an
embodiment, the first layer (120) may compare the received time period with a retention period of the precomputed network data available in the first database (126). As used herein, the term “retention period” refers to a predefined duration for which the data is stored and maintained in a storage before the data is archived or deleted. In an embodiment, the retention period may be determined based on relevance, usage frequency, and regulatory requirements of the data. The retention period may be applied to the raw data and/or the precomputed network data. In an embodiment, the retention period may be a short-term retention period or a long-term retention period. The short-term retention period may be used for the data that is frequently accessed or needed for immediate reporting. The long-term retention period may be used for the data that is less frequently used but may be required for historical analysis. In an exemplary embodiment, the hot cache (124) may store the precomputed network data for a predefined time duration such as a shorter duration (e.g. 3 days) to allow for the report generation. In another embodiment, a cold storage, such as the first database (126), may store the less frequently accessed or historical data for longer periods (e.g. 6 months).
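The short-lived hot cache described above may be pictured as a time-to-live (TTL) store of the following kind; the 3-day figure is taken from the example above, and the class is a toy stand-in rather than the disclosed component:

```python
import time

HOT_CACHE_TTL_SECONDS = 3 * 24 * 3600    # e.g. a 3-day hot-cache retention

class HotCache:
    """Toy TTL cache: entries expire after a predefined duration."""

    def __init__(self, ttl: float = HOT_CACHE_TTL_SECONDS):
        self._ttl = ttl
        self._entries = {}               # key -> (stored_at, data)

    def put(self, key, data):
        self._entries[key] = (time.time(), data)

    def get(self, key):
        entry = self._entries.get(key)
        if entry is None:
            return None                  # miss: caller falls back to cold storage
        stored_at, data = entry
        if time.time() - stored_at > self._ttl:
            del self._entries[key]       # expired: evict and report a miss
            return None
        return data
```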
[00114] In an embodiment, the first layer (120) may determine if the received
time period is less than or equal to the retention period based on the comparison result.
[00115] The first layer (120) may fetch the precomputed network data from
the first database (126) if the precomputed network data for the requested time period is available in the first database (126) and falls within the retention period.
In another embodiment, the first layer (120) may fetch the raw data from the first database (126) if the precomputed network data for the received time period is not available in the first database (126). In an embodiment, the raw data that may be stored in the first database (126) may be collected from various sources. In an embodiment, the first database (126) may be integrated with sources such as, but
not limited to, network devices, monitoring tools, logs, external APIs, and so forth. In an exemplary embodiment, an integration may involve configuring data collection agents, establishing network connections, and implementing data retrieval. The data collection may be performed in real-time to capture the most up-to-date network performance information. The raw data may be continuously
monitored and collected from the network (108), allowing for immediate analysis and reporting.
[00116] In such embodiment, the first layer (120) may transmit a
computation request containing the fetched raw data and the report generation information to the second layer (122). In an embodiment, the first layer (120) may transmit the computation request to the second layer (122) via the TCP interface, as the TCP interface ensures error-checked delivery of the data.
[00117] The second layer (122) may be connected to the first layer (120) and
may receive the computation request from the first layer (120). The second layer (122)
may be a computation layer that may perform computations on the raw data based
on the report generation information for generating the computed network data. In
an exemplary embodiment, the computations may be, but not limited to, simple aggregations, nested aggregations, complex graph aggregations, and so forth. The second layer (122) may transmit the computed network data as output data to the first database (126) via the File IO interface. The File IO interface may be used to write/read the computed network data in/from the first database (126). In an embodiment, the second layer (122) may format the computed network data into a structured format suitable for storage. The structured format may include converting the computed network data into file formats that may be optimized for storage and querying in the first database (126). In an embodiment, the second layer (122) may also be connected to the second database (128) for storing the computed network data. In another embodiment, the second layer (122) may be connected to the second database (128) for fetching the raw data.
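As a non-limiting illustration of the "simple aggregation" case, the computation layer might reduce raw per-node KPI samples as follows; the record layout and values are assumed for the example:

```python
from statistics import mean

raw_records = [
    {"node": "Node A", "kpi": "latency_ms", "value": 41.0},
    {"node": "Node A", "kpi": "latency_ms", "value": 47.5},
    {"node": "Node B", "kpi": "latency_ms", "value": 120.2},
]

def aggregate_mean(records, kpi):
    """Group raw samples of one KPI by node and average them."""
    grouped = {}
    for record in records:
        if record["kpi"] == kpi:
            grouped.setdefault(record["node"], []).append(record["value"])
    return {node: mean(values) for node, values in grouped.items()}

print(aggregate_mean(raw_records, "latency_ms"))
# {'Node A': 44.25, 'Node B': 120.2}
```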
[00118] In an embodiment, the first layer (120) may fetch the computed
network data from the first database (126) via the File IO interface. In an
embodiment, the first layer (120) may store the computed network data in the hot
cache (124) for quick access. The first layer (120) may generate the performance
metrics report based on the computed network data and the report generation
information that is stored in the first database (126). Further, the first layer (120)
may store the generated performance metrics report(s) in the second database (128).
In an embodiment, the first layer (120) may format the performance metrics report
into the structured format. Further, the first layer (120) may establish a connection
with the second database (128) using APIs or storage connectors to store the
performance metrics report in the second database (128).
[00119] Further, in an embodiment, the first layer (120) may retrieve the
stored performance metrics report(s) from the second database (128) when a request
for further analysis is received from the user (104) via the UI (116). In another
embodiment, the first layer (120) may automatically retrieve the stored
performance metrics report(s) from the second database (128) for further analysis.
In an embodiment, the first layer (120) may transmit requests and parameters
associated with the generated performance metrics report(s) to the computing
model (114) for performing various tasks. The parameters may include, but not limited to, specific performance metrics to analyze, anomalies to detect, patterns to identify, future trends to predict, and so forth within the data of the generated performance metrics report(s). In an embodiment, the first layer (120) may transmit the complete performance metrics report to the computing model (114) for analysis. In another embodiment, the first layer (120) may transmit specific reporting parameters and corresponding values to the computing model (114) for analysis.
[00120] Further, in an embodiment, based on the received request, the
computing model (114) may process the data and the parameters received from the
first layer (120). Based on the analysis, the computing model (114) may send
insights and results back to the first layer (120). The insights may be in a form of
updated reports, visualizations, structured data, and so forth. In an embodiment, the
first layer (120) may integrate the insights received from the computing model
(114) into the generated performance metrics report. In another embodiment, the
first layer (120) may use the insights received from the computing model (114) for
further analysis. In an embodiment, the insights may be presented to the user (104)
via the UI (116). In another embodiment, the insights may be used internally by the
system (106) for decision-making.
[00121] In an embodiment, the hot cache (124) may be connected to the first
layer (120) and may be a temporary storage that holds frequently accessed network
data to speed up retrieval and improve performance. The hot cache (124) may hold the precomputed or processed network data, which may be kept for a shorter retention period. The hot cache (124) may be, but not limited to, in-memory cache,
distributed cache, persistent cache, and so forth.
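Purely as an illustrative sketch of the hot-cache behaviour described above, a minimal in-memory cache with a short retention period might look as follows; the class name, key shape, and retention values are assumptions, not part of the disclosure.

```python
# Minimal hot-cache sketch with time-based expiry (all names hypothetical).
import time

class HotCache:
    def __init__(self, retention_seconds=3600):
        self.retention = retention_seconds
        self._store = {}  # key -> (inserted_at, value)

    def put(self, key, value):
        self._store[key] = (time.time(), value)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None  # cache miss: fall back to the first database (126)
        inserted_at, value = entry
        if time.time() - inserted_at > self.retention:
            del self._store[key]  # retention expired: evict, treat as a miss
            return None
        return value

cache = HotCache(retention_seconds=600)
cache.put(("kpi_report", "2024-07-02", "2024-07-03"), {"avg_latency_ms": 12.4})
hit = cache.get(("kpi_report", "2024-07-02", "2024-07-03"))
```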
[00122] The first database (126) may also be connected to the first layer (120) and may be capable of storing the computed network data received from the second layer (122). In an embodiment, the first database (126) may also be capable of storing the report generation information, the raw data, the precomputed network data, and so forth. In a preferred embodiment, the first database (126) may be a Distributed Data Lake (DDL) that provides a centralized repository for storing all types of data such as, structured, semi-structured and unstructured data. As used herein, the DDL may refer to a centralized data repository and analytics platform that ingests, stores, and allows for processing of large volumes of data in their original form.
[00123] The second database (128) may be capable of storing the generated performance metrics reports received from the first layer (120). In a preferred embodiment, the second database (128) may be a Distributed File System (DFS) such as, but not limited to, a Hadoop Distributed File System (HDFS), a Google File System (GFS), Ceph, and so forth. As used herein, the DFS may refer to a file storage and management system that primarily stores files and is used with distributed computing frameworks.
[00124] FIG. 1C illustrates an exemplary block diagram of the system (106), in accordance with an embodiment of the present disclosure. In an embodiment, the system (106) may include a receiving unit (132), a memory (134), an interfacing unit (136), a database (138) and a processing unit (140). In an embodiment, the processing unit (140) may include a training module (142), an allocation module (144), a report generation module (146) and a computation module (148).
[00125] In an embodiment, the receiving unit (132) may be configured to receive the report generation request from the UE (102) (as shown in the FIG. 1A). In such embodiment, the receiving unit (132) may be configured to receive the report generation information along with the report generation request from the UE (102). In another embodiment, the receiving unit (132) may be configured to receive only the report generation request when the user (104) presses an execute button via the UI (116) (as shown in the FIG. 1B).
[00126] The memory (134) may be configured to store instructions or routines in a non-transitory computer readable storage medium. In an aspect, the memory (134) may be configured to store the instructions that may be executed to perform tasks associated with the system (106). The memory (134) may include any non-transitory storage device including, for example, but not limited to, a volatile memory such as a Random-Access Memory (RAM), or a non-volatile memory such as an Erasable Programmable Read Only Memory (EPROM), a flash memory, and the like. Embodiments of the present invention are intended to include or otherwise cover any type of the memory (134), including known related art and/or later developed technologies.
[00127] In an embodiment, the interfacing unit (136) may comprise a variety
of interfaces, for example, interfaces for data input and output devices (I/O), storage devices, and the like. The interfacing unit (136) may facilitate communication through the system (106). The interfacing unit (136) may also provide a communication pathway for various other units/modules of the system (106).
[00128] In an embodiment, the database (138) may be provided to store, manage, and retrieve the network data and the performance metrics reports. In an embodiment, the database (138) may be the first database (126) (as shown in the FIG. 1B) that may be configured for serving as a centralized repository for storing the raw data, the precomputed network data, the report generation information, the computed data and so forth. In another embodiment, the database (138) may be the hot cache (124) (as shown in the FIG. 1B) that may store the precomputed network data for a shorter duration of time. In yet another embodiment, the database (138) may be the second database (128) (as shown in the FIG. 1B) that may be configured for storing the generated performance metrics reports. The database (138) is designed to interact seamlessly with other components of the system (106), such as the training module (142), the allocation module (144), the report generation module (146) and the computation module (148), to support a functionality of the system (106) effectively. The database (138) may store the data that may be either stored or generated as a result of functionalities implemented by any of the components of the processing unit (140). In an embodiment, the database (138) may be separate from the system (106).
[00129] The modules are controlled by the processing unit (140), which executes the instructions retrieved from the memory (134). The processing unit (140) further interacts with the interfacing unit (136) to facilitate a user interaction and to provide options for managing and configuring the system (106). The processing unit (140) may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, logic circuitries, and/or any devices that process data based on operational instructions.
[00130] In an embodiment, the training module (142) may be configured to collect the data of the report generation requests made by the users (104) (as shown in the FIG. 1A) over a period of time. The data may include the reporting parameters, frequency of requests, specific KPIs being monitored, timing and frequency of each request, and so forth. Further, in an embodiment, the training module (142) may be configured to preprocess the collected data to remove errors and inconsistencies. Further, the data may be structured in a suitable format for analysis such as, a dataset where entries may represent the report generation requests with associated reporting parameters. The training module (142) may be configured to feed the dataset (trained data) into the computing model (114) (as shown in the FIG. 1A) to train the computing model (114) for generating the reporting templates and scheduling the automatic generation of the reports on the scheduled time interval.
[00131] Further, in an embodiment, the computing model (114) may be configured to identify common patterns and trends in the report generation requests based on historical data analysis (i.e. trained data). The computing model (114) may also determine the frequency and specific times at which certain reports are requested. Based on the identified patterns and trends, the computing model (114) may be configured to generate the reporting templates that may capture combinations of the reporting parameters. In an embodiment, the computing model (114) may also be configured to schedule the reports for automatic generation based on the identified patterns.
[00132] In an embodiment, the allocation module (144) may be communicatively coupled to the receiving unit (132). The allocation module (144) may be configured to receive the report generation request from the receiving unit (132).
[00133] Based on the received request, the allocation module (144) may be
configured to enable the load balancer (118) (as shown in the FIG. 1B) to
communicate with the first layer (120) (as shown in the FIG. 1B) to determine the
current load and availability on each of the instances of the first layer (120). In an
exemplary embodiment, the current load may be compared with a predefined load
value stored in the database (138) to determine whether the corresponding instance
of the first layer (120) is underloaded, normal, or overloaded. In an exemplary
embodiment, the predefined load value may be a limit that every instance of the
first layer (120) can handle. The predefined load may be defined in terms of a
number of requests, data volume or computational tasks. The allocation module
(144) may be configured to select one of the instances of the first layer (120) based
on the determined current load and the availability. Further, the allocation module
(144) may be configured to allocate the received request of the report generation to
the selected instance of the first layer (120).
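As a non-limiting illustration of this allocation step, the following sketch classifies each first-layer instance against a predefined load value and selects the least-loaded available instance; the threshold split and the instance records are illustrative assumptions only.

```python
# Hedged sketch of load-based instance allocation (names hypothetical).
PREDEFINED_LOAD = 100  # assumed maximum requests an instance can handle

def classify(current_load):
    if current_load < 0.5 * PREDEFINED_LOAD:
        return "underloaded"
    if current_load <= PREDEFINED_LOAD:
        return "normal"
    return "overloaded"

def allocate(instances):
    """Pick the available, non-overloaded instance with the lowest load."""
    candidates = [
        i for i in instances
        if i["available"] and classify(i["load"]) != "overloaded"
    ]
    if not candidates:
        raise RuntimeError("no first-layer instance can accept the request")
    return min(candidates, key=lambda i: i["load"])

instances = [
    {"name": "ipm-1", "load": 80, "available": True},
    {"name": "ipm-2", "load": 30, "available": True},
    {"name": "ipm-3", "load": 120, "available": True},
]
target = allocate(instances)  # selects "ipm-2", the least-loaded instance
```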
[00134] Further, in an embodiment, the report generation module (146) may
be communicatively coupled to the allocation module (144). The report generation
module (146) may be configured to parse the reporting template received from the
user (104) at the first layer (120) for extracting the reporting parameters and the
corresponding values from the reporting template. Further, in an embodiment, the
embodiment, the validation checks may include format validation, range validation, completeness check, business logic validation, conditional check, and so forth.
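The following is a minimal sketch of such a validation pass, covering the completeness, format, and range/business-logic checks named above; the required field names and the accepted output formats are assumptions for illustration only.

```python
# Illustrative validation sketch (required fields and rules are assumed).
from datetime import date

REQUIRED = ("kpi", "start_date", "end_date", "output_format")

def validate(params):
    errors = []
    # completeness check: every required parameter must be present
    for field in REQUIRED:
        if field not in params:
            errors.append(f"missing parameter: {field}")
    if errors:
        return errors
    # format validation: dates must parse
    try:
        start = date.fromisoformat(params["start_date"])
        end = date.fromisoformat(params["end_date"])
    except ValueError:
        return ["dates must be in YYYY-MM-DD format"]
    # range validation / business logic
    if end < start:
        errors.append("end_date must not precede start_date")
    if params["output_format"] not in ("PDF", "CSV"):
        errors.append("unsupported output format")
    return errors  # empty list means validation succeeded

assert validate({"kpi": "latency", "start_date": "2024-07-02",
                 "end_date": "2024-07-10", "output_format": "PDF"}) == []
```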
[00135] In an embodiment, if all the reporting parameters pass the validation
checks, then the report generation module (146) may be configured to transmit an acknowledgement as a request validation successful message from the first layer (120) to the UI (116) via the load balancer (118). In another embodiment, if any of the reporting parameters fail the validation checks, then the report generation
module (146) may be configured to transmit the acknowledgement as a validation failed message from the first layer (120) to the UI (116) via the load balancer (118).
[00136] Further, upon successful validation checks, the report generation module (146) may be configured to structure data such as, the extracted reporting parameters and the corresponding values in a format that may be suitable for storage in the database (138). In an exemplary embodiment, structuring the data may include creating key-value pairs, formatting the data as records, and so forth. Further, in an embodiment, the report generation module (146) may be configured to store the reporting template and the reporting parameters in the database (138) using an Application Programming Interface (API). The API may be, but not limited to, a Restful API, a Database API, a Streaming API, a cloud storage API, and so forth.
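Purely by way of illustration, structuring the validated parameters as key-value records before an API-based store might look as follows; the record layout and template identifier are hypothetical.

```python
# Small sketch of key-value structuring prior to storage (layout assumed).
import json

def to_record(template_id, params):
    return {
        "template_id": template_id,
        "parameters": [{"key": k, "value": v} for k, v in params.items()],
    }

record = to_record("tpl-42", {"kpi": "latency", "start_date": "2024-07-02"})
payload = json.dumps(record)  # body of a hypothetical RESTful storage call
```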
[00137] In an embodiment, the report generation module (146) may be
configured to determine the availability of the precomputed network data for the received time period in the database (138) based on the report generation information. In an embodiment, the report generation module (146) may be configured to interact with the database (138) to check if the precomputed network
data corresponding to the reporting parameters are available in the database (138). As discussed above, the database (138) may be the hot cache (124) (as shown in the FIG. 1B).
[00138] Further, in an embodiment, the report generation module (146) may be configured to compare the time period provided in the report generation request with the retention period of the precomputed network data if available in the database (138) such as, the first database (126). In an exemplary embodiment, the time period such as, the start date and the end date may be converted into a standard format (e.g., specific date format). Similarly, the retention period may also be converted into the same format, typically as an end date indicating the last day of the retention period. For example, suppose the requested time period is July 2nd to July 10th, the current date is July 3rd, and the retention period of the precomputed network data available in the database (138) is 3 days; as per this conversion, the retention end date is July 5th. The report generation module (146) may be configured to compare the end date (July 10th) of the time period with the end date (July 5th) of the retention period to determine whether the time period provided by the user (104) falls within the retention period or not.
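The example above can be worked directly in code, as a sketch: with a current date of July 3rd and a 3-day retention period counted inclusively, the retention window ends on July 5th, so a request ending July 10th falls outside it.

```python
# Worked version of the retention comparison (standard library only).
from datetime import date, timedelta

current_date = date(2024, 7, 3)
retention_days = 3
# inclusive window: July 3rd, 4th, 5th -> retention end date July 5th
retention_end = current_date + timedelta(days=retention_days - 1)

requested_end = date(2024, 7, 10)
within_retention = requested_end <= retention_end
# False here: the request exceeds retention, so the system falls back to
# on-demand computation by the second layer instead of precomputed data.
```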
[00139] In an embodiment, the report generation module (146) may be
configured to transmit a query to the database (138) for determining whether the precomputed network data for the received time period is available or not. Upon
receiving the query, the database (138) may check for the precomputed network data that falls within the received time period. In an embodiment, the database (138) may transmit a positive response with the precomputed network data to the report generation module (146) when the precomputed network data is available in the database (138) and the retention period of the available precomputed network data in the database (138) is more than or equal to the time period. In another embodiment, the database (138) may transmit a negative response with the raw data to the report generation module (146) when the precomputed network data is not available in the database (138) or the retention period of the available precomputed network data in the database (138) is less than the time period. Based on the positive response, the report generation module (146) may be configured to generate the performance metrics report based on the received precomputed network data and the report generation information. In another embodiment, the report generation module (146) may be configured to transmit the computation request with the raw data and the report generation information to the computation module (148).
[00140] In an embodiment, the computation module (148) may be communicatively coupled to the report generation module (146) and configured to receive the computation request from the report generation module (146). The computation module (148) may be configured to perform the computations on the raw data based on the report generation information for generating the computed network data. In an embodiment, the computation module (148) may be configured to perform the computations on the raw data by executing a series of processing steps such as, data preparation, data aggregation, data analysis, data processing, result formatting, and so forth.
[00141] In an embodiment, the data preparation may be performed by the second layer (122) (as shown in the FIG. 1B) for cleaning the raw data to remove the errors, inconsistencies and duplicates. Further, the raw data may be transformed into a format suitable for analysis. The transformation of the raw data may include normalizing or aggregating the data. In another embodiment, the data preparation may be performed by the first layer (120), which may then transmit the prepared network data to the second layer (122) for computation.
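As a hedged, non-limiting sketch of this data-preparation step, the following drops malformed rows, removes duplicates on a natural key, and normalises units before computation; the field names are assumed for illustration.

```python
# Illustrative data-preparation sketch (field names are assumptions).
def prepare(raw_records):
    seen = set()
    cleaned = []
    for rec in raw_records:
        # remove errors/inconsistencies: skip rows missing mandatory fields
        if (rec.get("cell_id") is None or rec.get("timestamp") is None
                or rec.get("latency_ms") is None):
            continue
        # remove duplicates on a natural key
        key = (rec["cell_id"], rec["timestamp"])
        if key in seen:
            continue
        seen.add(key)
        # normalise: keep latency as float milliseconds
        cleaned.append({**rec, "latency_ms": float(rec["latency_ms"])})
    return cleaned
```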
[00142] Further, in an embodiment, the computation module (148) may be configured to perform the simple aggregations such as, but not limited to, sum, average, count and so forth for the specific KPIs. In another embodiment, the computation module (148) may be configured to perform the nested aggregations. In yet another embodiment, the computation module (148) may be configured to perform the complex aggregations such as, weighted averages or multidimensional aggregations. Further, in an embodiment, the computation module (148) may be configured to filter and process the raw data based on the reporting parameters specified in the reporting template to generate the computed network data. In an embodiment, the computation module (148) may be configured to store the computed network data in the database (138).
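By way of illustration only, the three aggregation styles named above might be realized as follows on assumed sample fields; none of the identifiers form part of the disclosure.

```python
# Sketch of simple, nested, and weighted (complex) aggregations.
from collections import defaultdict

samples = [
    {"region": "R1", "cell": "C1", "throughput": 40.0, "users": 100},
    {"region": "R1", "cell": "C2", "throughput": 60.0, "users": 300},
]

# simple aggregation: overall average throughput
simple_avg = sum(s["throughput"] for s in samples) / len(samples)  # 50.0

# nested aggregation: per-cell averages rolled up into per-region averages
per_cell = defaultdict(list)
for s in samples:
    per_cell[(s["region"], s["cell"])].append(s["throughput"])
per_region = defaultdict(list)
for (region, _cell), vals in per_cell.items():
    per_region[region].append(sum(vals) / len(vals))
nested = {r: sum(v) / len(v) for r, v in per_region.items()}  # {"R1": 50.0}

# complex aggregation: user-weighted average throughput
weighted = (sum(s["throughput"] * s["users"] for s in samples)
            / sum(s["users"] for s in samples))  # 55.0
```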
[00143] Upon storage of the computed network data into the database (138), the report generation module (146) may be configured to transmit the query to the database (138) for fetching the computed network data and the report generation information. In an embodiment, the computed network data may be structured and formatted based on requirements specified in the report generation information. In an embodiment, the report generation module (146) may be configured to apply the reporting template to the computed network data for generating the performance metrics report in a predefined format. The predefined format may be, but not limited to, the PDF, the CSV, and so forth.
[00144] In another embodiment, the report generation module (146) may be
configured to store the computed network data in the database (138) such as, the hot cache (124). In an embodiment, the report generation module (146) may be configured to store the generated performance metrics report in the second database (128) (as shown in the FIG. 1B).
[00145] Although the FIG. 1C shows an exemplary block diagram of the
system (106); however, in other embodiments, the system (106) may include fewer components, different components, differently arranged components, or additional functional components than depicted in the FIG. 1C. Additionally, or alternatively, one or more components of the system (106) may perform functions described as being performed by one or more other components of the system (106).
[00146] FIG. 2 illustrates an exemplary flow diagram of a process (200)
depicting the performance metrics report generation, in accordance with an embodiment of the present disclosure.
[00147] At step (202), the process (200) includes receiving a request to create
and store the performance metrics report by the load balancer (118) from the user (104) via the UI (116).
[00148] At step (204), the process (200) includes transmitting the request in
a predefined file format to the first layer (120) by the load balancer (118). The predefined file format may be, but not limited to, a JavaScript Object Notation (JSON) format, the CSV format, and so forth.
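Purely as a hypothetical illustration of what such a JSON-formatted request could look like, every field name below is an assumption and not part of the disclosure.

```python
# Hypothetical shape of the JSON request forwarded to the first layer (120).
import json

request = {
    "report_name": "weekly_kpi_report",
    "time_period": {"start_date": "2024-07-02", "end_date": "2024-07-10"},
    "reporting_parameters": {"kpis": ["throughput", "latency"],
                             "granularity": "hourly"},
    "output_format": "PDF",
}
body = json.dumps(request)  # serialized payload transmitted downstream
```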
[00149] At step (206), the process (200) includes transmitting the request
validation successful message from the first layer (120) to the load balancer (118)
when the reporting parameters pass the validation checks.
[00150] At step (208), the process (200) includes transmitting the request
validation successful message from the load balancer (118) to the UI (116).
[00151] At step (210), the process (200) includes transmitting the validation
failed message from the first layer (120) to the load balancer (118) when any of the reporting parameters fail the validation checks.
[00152] At step (212), the process (200) includes transmitting the validation
failed message from the load balancer (118) to the UI (116).
[00153] At step (214), the process (200) includes determining an availability
of the precomputed network data for the received time period in the hot cache (124) by the first layer (120). The precomputed network data may be fetched from the hot cache (124) if the precomputed network data for the received time period is available in the hot cache (124).
[00154] At step (216), the process (200) includes comparing the time period
with the retention period of the precomputed network data available in the first database (126) if the precomputed network data for the received time period is not available in the hot cache (124). The process (200) further includes fetching the precomputed network data from the first database (126) by the first layer (120) if the time period provided in the request is less than or equal to the retention period of the precomputed network data in the first database (126).
[00155] At step (218), the process (200) includes creating the performance
metrics report based on the precomputed network data by the first layer (120).
[00156] At step (220), the process (200) includes fetching the computed
network data from the second layer (122) when the precomputed network data of the received time period is not available in the first database (126). The computed network data may further be stored in the first database (126).
[00157] At step (222), the process (200) includes generating the performance
metrics report by the first layer (120) using the computed network data that may be stored in the first database (126) by the second layer (122).
[00158] At step (224), the process (200) includes storing the generated
performance metrics report by the first layer (120) into the second database (128).
[00159] At step (226), the process (200) includes transmitting a request
successful message from the first layer (120) to the load balancer (118) when the request is successfully processed, and the reporting parameters are accepted by the system (106).
[00160] At step (228), the process (200) includes transmitting the request
successful message from the load balancer (118) to the UI (116).
[00161] At step (230), the process (200) includes transmitting a request failed
message from the first layer (120) to the load balancer (118) when either the request is not processed successfully or any of the reporting parameters are rejected by the system (106).
[00162] At step (232), the process (200) includes transmitting the request
failed message from the load balancer (118) to the UI (116).
[00163] At step (234), the process (200) includes transmitting the parameters
associated with the generated performance metrics report to the computing model
(114). The parameters may include, but not limited to, the specific metrics to
analyze, the anomalies to detect, the patterns to identify, the future trends to predict,
and so forth within the data of the generated network performance report.
[00164] At step (236), the process (200) includes transmitting a request of
insights associated with the generated performance metrics report to the computing
model (114).
[00165] At step (238), the process (200) includes using the training data by
the computing model (114) for analysing and processing the data and the parameters to generate the insights on the performance metrics report.
[00166] At step (240), the process (200) includes sending the insights and
results back to the first layer (120) based on the analysis. The insights may be in the form of updated reports, visualizations, structured data, and so forth.
[00167] FIG. 3 illustrates an exemplary computer system (300) in which or with which embodiments of the present disclosure may be implemented. As shown in the FIG. 3, the computer system (300) may include an external storage device (310), a bus (320), a main memory (330), a read only memory (340), a mass storage device (350), a communication port (360), and a processor (370). A person skilled in the art will appreciate that the computer system (300) may include more than one processor (370) and communication port (360). The processor (370) may include various modules associated with embodiments of the present disclosure.
[00168] In an embodiment, the external storage device (310) may be any
device that is commonly known in the art such as, but not limited to, a memory card, a memory stick, a solid-state drive, a hard disk drive (HDD), and so forth.
[00169] In an embodiment, the bus (320) may communicatively couple the processor(s) (370) with the other memory, storage, and communication blocks. The bus (320) may be, e.g., a Peripheral Component Interconnect (PCI)/PCI Extended (PCI-X) bus, a Small Computer System Interface (SCSI), a Universal Serial Bus (USB) or the like, for connecting expansion cards, drives and other subsystems as well as other buses, such as a front side bus (FSB), which connects the processor (370) to the computer system (300).
[00170] In an embodiment, the main memory (330) may be a Random-Access Memory (RAM), or any other dynamic storage device commonly known in the art. The read only memory (340) may be any static storage device(s) e.g., but not limited to, a Programmable Read Only Memory (PROM) chip for storing static information e.g., start-up or Basic Input/Output System (BIOS) instructions for the processor (370).
[00171] In an embodiment, the mass storage device (350) may be any current or future mass storage solution, which may be used to store information and/or instructions. Exemplary mass storage solutions include, but are not limited to, Parallel Advanced Technology Attachment (PATA) or Serial Advanced Technology Attachment (SATA) hard disk drives or solid-state drives (internal or external, e.g., having Universal Serial Bus (USB) and/or Firewire interfaces), one or more optical discs, Redundant Array of Independent Disks (RAID) storage, e.g., an array of disks (e.g., SATA arrays).
[00172] Further, the communication port (360) may be any of an RS-232 port for use with a modem-based dialup connection, a 10/100 Ethernet port, a Gigabit or 10 Gigabit port using copper or fiber, a serial port, a parallel port, or other existing or future ports. The communication port (360) may be chosen depending on the network (108), such as a Local Area Network (LAN), a Wide Area Network (WAN), or any network to which the computer system (300) connects.
[00173] Optionally, operator and administrative interfaces, e.g., a display, a
keyboard, a joystick, and a cursor control device, may also be coupled to the bus
(320) to support a direct operator interaction with the computer system (300). Other
operator and administrative interfaces may be provided through network
connections connected through the communication port (360). Components
described above are meant only to exemplify various possibilities. In no way should
the aforementioned exemplary computer system (300) limit the scope of the present
disclosure.
[00174] FIG. 4 illustrates a flowchart of a method (400) for generating
performance metrics reports, in accordance with an embodiment of the present
disclosure.
[00175] At step (402), the method (400) includes a step of receiving a report generation request from a UI (116) by a first layer (120). The report generation request includes a time period. In some embodiments, a reporting template and reporting parameters may be received along with or as a part of the report
generation request from the UI (116). In some embodiments, the reporting template may be a user-created reporting template or an automatically generated reporting template. In such embodiment, the automatically generated reporting template may be the reporting template that may be generated by a computing model (114) based on an analysis of report generation requests received in the past. In a preferred embodiment, the computing model (114) may be an AI/ML model. In an embodiment, the first layer (120) may be an Integrated Performance Management (IPM).
[00176] At step (404), the method (400) includes a step of determining, by the first layer (120), an availability of precomputed network data for a received time period in a hot cache (124). The hot cache (124) may be capable of storing the precomputed network data for a predefined time duration. In a preferred embodiment, the predefined time duration may be a shorter duration of time that may depend on relevance of the data stored in the hot cache (124). If the precomputed network data for the requested time period is available in the hot cache (124), then the first layer (120) generates the performance metrics report using the precomputed network data fetched from the hot cache (124) and the reporting template.
[00177] At step (406), the method (400) includes a step of comparing the
time period received in the report generation request with a retention period of the precomputed network data available in a first database (126) by the first layer (120).
[00178] At step (408), the method (400) includes a step of querying, by the
first layer (120), the first database (126) for fetching the precomputed network data of the received time period if the received time period is less than or equal to the retention period based on a compared result. In some embodiments, the first database (126) is a Distributed Data Lake (DDL) to store the reporting templates, the reporting parameters, the precomputed network data, or a combination thereof. In some embodiments, the first layer (120) may generate the performance metrics report using the precomputed network data fetched from the first database (126), the reporting template, and insights received from a computing model (114). Further, the first layer (120) may store the generated performance metrics report in
a second database (128). The second database (128) is a Distributed File System (DFS). In an embodiment, the computing model (114) may generate the performance metrics reports on a scheduled time interval based on the analysis of the report generation requests.
[00179] At step (410), the method (400) includes a step of querying, by the
first layer (120), a second database (128) for computed network data for generating the performance metrics report if the received time period is greater than the retention period. The first layer (120) may generate the performance metrics report
using the reporting template, the computed data received from the second layer (122), and the insights received from the computing model (114). In some embodiments, the second layer (122) may first receive the report generation request along with the reporting parameters, the reporting template, and raw data from the first layer (120). Further, the second layer (122) may perform computations on the
raw data based on the reporting parameters and the reporting template to generate the computed network data. The computations on the raw data may be done by performing simple aggregations, nested aggregations, complex aggregations, and so forth. In some embodiments, the method (400) includes a step of storing the computed network data in the first database (126) by the second layer (122).
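As a non-limiting end-to-end sketch of the decision flow in steps (404) to (410), the following function tries the hot cache first, then the first database when the requested period is within retention, and otherwise falls back to the computation path via the second layer as described above; every helper name is a hypothetical stand-in for the component named in its comment.

```python
# Hedged sketch of the tiered retrieval decision (all names hypothetical).
def fetch_report_data(request, hot_cache, first_db, second_layer):
    key = (request["start_date"], request["end_date"])

    data = hot_cache.get(key)            # step (404): hot cache (124)
    if data is not None:
        return data

    if request["end_date"] <= first_db.retention_end():   # step (406)
        return first_db.query(key)       # step (408): first database (126)

    computed = second_layer.compute(request)              # step (410)
    first_db.store(key, computed)        # persist for subsequent requests
    hot_cache.put(key, computed)         # warm the cache for quick access
    return computed
```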
[00180] In another exemplary embodiment, the present disclosure discloses
a User Equipment (UE) (102) configured for interacting with a system (106) for generating performance metrics reports. The UE (102) includes: a main processor (110). The UE (102) further includes a computer readable storage medium (112)
storing one or more instructions for execution by the main processor (110) to receive one or more inputs of a user (104) through a User Interface (UI) (116) for creating a reporting template. The one or more inputs include one or more reporting parameters. The main processor (110) is further configured to transmit the reporting template comprising the one or more reporting parameters to the
system (106) for report generation. The main processor (110) is further configured to receive a performance metrics report from the system (106). The performance
metrics report is generated based on the reporting template and the one or more reporting parameters. The main processor (110) is further configured to display the performance metrics report on the User Interface (UI) (116).
[00181] In some embodiments, the main processor (110) of the UE (102) is
configured to receive status updates associated with a report generation process from the system (106). The status updates may include a request validation successful message, a validation failed message, a request failed message, a request successful message, and so forth.
[00182] In some embodiments, the main processor (110) of the UE (102) is
configured to display the one or more received status updates on the User Interface (UI) (116).
[00183] The present disclosure provides a technical advancement in the
field of network performance analysis. This advancement addresses the limitations of existing solutions by introducing a comprehensive system and method for automated report generation based on user-defined templates. The disclosure provides innovative aspects such as, an integration of various network nodes,
performance metrics, and attributes into a unified framework, allowing for flexible scheduling and detailed performance insights. By implementing the invention, the disclosed system enhances efficiency and accuracy of network performance monitoring, reduces human error and significantly improves the decision-making process. The result is a more reliable and efficient network management system that
continuously monitors and reports on network health, leading to optimized network operations.
[00184] While considerable emphasis has been placed herein on the preferred
embodiments, it will be appreciated that many embodiments can be made and that many changes can be made in the preferred embodiments without departing from the principles of the disclosure. These and other changes in the preferred embodiments of the disclosure will be apparent to those skilled in the art from the
disclosure herein, whereby it is to be distinctly understood that the foregoing descriptive matter is to be implemented merely as illustrative of the disclosure and not as a limitation.
ADVANTAGES OF THE PRESENT DISCLOSURE
[00185] The present disclosure provides a system and a method for analysing
network performance using Artificial Intelligence/Machine Learning (AI/ML) techniques.
[00186] The present disclosure provides a system and a method that includes
an AI/ML model to generate performance metrics out of large volumes of network data in a particular format.
[00187] The present disclosure provides a system and a method that allows a
user to schedule viewing of different parameters, creation of different views, and extraction of computed metrics from network data at different times, for example, hourly, daily, weekly, etc.
[00188] The present disclosure provides a system and a method that collects
all parameters or metrics of network data in one place and generates reports using the same templates at a time frequency as per user requirement for network performance analysis.
[00189] The present disclosure provides a system and a method to assess
overall performance of a network infrastructure to identify areas of improvement and ensure optimal network functioning.
[00190] The present disclosure provides a system and a method to detect and
identify performance issues, bottlenecks, and anomalies in the network that may be causing degradation or disruptions.
[00191] The present disclosure provides a system and a method to optimize
network resources, configurations, and capacity based on performance analysis to ensure efficient and scalable network operations.
[00192] The present disclosure provides a system and a method to track
performance trends over time to identify patterns, predict future performance, and make proactive decisions to avoid potential performance problems.
[00193] The present disclosure assesses network performance against
defined Service Level Agreements (SLAs) or compliance standards to ensure compliance and meet performance targets.
[00194] The present disclosure establishes a continuous monitoring system
to track network performance in real-time or at regular intervals, enabling proactive identification of performance issues and prompt remediation.
We claim:
1. A system (106) for generating performance metrics reports, the system
(106) comprising:
a first layer (120) connected to a User Interface (UI) (116), wherein
the first layer (120) is configured to:
receive a report generation request from the UI (116),
wherein the report generation request comprises a time period for
which information is needed;
determine an availability of precomputed network data for
the time period received in the report generation request in a hot cache (124);
compare the received time period with a retention period of
the precomputed network data available in a first database (126)
when the precomputed network data for the received time period is
absent in the hot cache (124);
determine if the received time period is less than or equal to the retention period, then the first layer (120) is configured to:
query the first database (126) for fetching the
precomputed network data of the received time period for
generating the performance metrics report; and
if the received time period is greater than the
retention period, then the first layer (120) is configured to
query a second database (128) for computed network data
for generating the performance metrics report.
2. The system (106) of claim 1, wherein the first layer (120) and the second
layer (122) are an Integrated Performance Management (IPM) and a
computation layer respectively.
3. The system (106) of claim 1, wherein the report generation request
comprises reporting parameters and a reporting template.
4. The system (106) of claim 3, wherein the reporting template is one of, a
user-created reporting template or an automatically generated reporting
template.
5. The system (106) of claim 4, wherein a computing model (114) is
configured to generate the reporting template based on an analysis of report generation requests.
6. The system (106) of claim 1, wherein the second layer (122) is configured
to generate the computed network data by performing one or more computations on raw data based on reporting parameters and a reporting template.
7. The system (106) of claim 6, wherein the second layer (122) is configured
to store the computed network data in the first database (126).
8. The system (106) of claim 1, wherein the first layer (120) is configured to
generate the performance metrics report using a reporting template, the
computed data received from the second layer (122) and insights received
from a computing model (114).
9. The system (106) of claim 1, wherein the first layer (120) is configured to
store the generated performance metrics reports in a second database (128).
10. The system (106) of claim 9, wherein the second database (128) is a
Distributed File System (DFS).
11. The system (106) of claim 1, wherein the first database (126) is a Distributed
Data Lake (DDL) configured to store reporting templates, reporting
parameters, the precomputed network data, or a combination thereof.
12. The system (106) of claim 1, wherein a computing model (114) is
configured to generate the performance metrics reports on a scheduled time
interval based on an analysis of report generation requests.
13. The system (106) of claim 1, wherein the hot cache (124) is configured to
store the precomputed network data for a predefined time duration.
14. A method (400) for generating performance metrics reports, the method (400) comprising steps of:
receiving (402), by a first layer (120), a report generation request from a User Interface (UI) (116); wherein the report generation request comprises a time period for which information is needed;
determining (404), by the first layer (120), an availability of
precomputed network data for the time period received in the report
generation request in a hot cache (124);
comparing (406), by the first layer (120), the received time period
with a retention period of precomputed network data available in a first
database (126) when the precomputed network data for the received time
period is absent in the hot cache (124);
querying (408), by the first layer (120), the first database (126) for fetching the precomputed network data of the received time period if the received time period is less than or equal to the retention period; and
querying (410), by the first layer (120), a second database (128) for
computed network data for generating the performance metrics report if the
received time period is greater than the retention period.
15. The method (400) of claim 14, wherein the first layer (120) and the second
layer (122) are an Integrated Performance Management (IPM) and a
computation layer respectively.
16. The method (400) of claim 15, wherein the report generation request
comprises reporting parameters and a reporting template.
17. The method (400) of claim 16, wherein the reporting template is one of, a
user-created reporting template or an automatically generated reporting
template.
18. The method (400) of claim 17, comprising a step of generating, by a
computing model (114), the reporting template based on an analysis of report generation requests.
19. The method (400) of claim 14, comprising a step of generating, by the
second layer (122), the computed network data by performing one or more computations on raw data based on reporting parameters and a reporting template.
20. The method (400) of claim 19, comprising a step of storing, by the second
layer (122), the computed network data in the first database (126).
21. The method (400) of claim 14, comprising a step of generating, by the first
layer (120), the performance metrics report using a reporting template, the
computed data received from the second layer (122) and insights received
from a computing model (114).
22. The method (400) of claim 14, comprising a step of storing, by the first layer
(120), the generated performance metrics reports in a second database (128).
23. The method (400) of claim 22, wherein the second database (128) is a
Distributed File System (DFS).
24. The method (400) of claim 14, wherein the first database (126) is a Distributed Data Lake (DDL) to store reporting templates, reporting parameters, the precomputed network data, or a combination thereof.
25. The method (400) of claim 14, further comprising a step of generating, by a computing model (114), performance metrics reports on a scheduled time interval based on an analysis of report generation requests.
26. The method (400) of claim 14, wherein the hot cache (124) stores the
precomputed network data for a predefined time duration.
27. A User Equipment (UE) (102) configured for interacting with a system (106) for generating performance metrics reports, the UE (102) comprising: a main processor (110);
a computer readable storage medium (112) storing one or more instructions for execution by the main processor (110) to:
receive one or more inputs of a user (104) through a User Interface (UI) (116) for creating a reporting template, wherein the one or more inputs comprises one or more reporting parameters and a time period for which information is needed;
transmit the reporting template comprising the one or more reporting parameters to the system (106) for report generation;
receive a performance metrics report from the system (106), wherein the performance metrics report is generated based on the reporting template and the one or more reporting parameters; and
display the performance metrics report on the User Interface (UI) (116).
28. The UE (102) as claimed in claim 27, wherein the main processor (110) is
configured to receive one or more status updates associated with a report generation process from the system (106).
29. The UE (102) as claimed in claim 27, wherein the main processor (110) is
configured to display the one or more received status updates on the User
Interface (UI) (116).