
A System And Method For Pre-Computation Of Network Performance Data

Abstract: The present disclosure may relate to a system (108) for pre-computation of network performance data. The system may comprise a memory (204) and one or more processors (202). The one or more processors (202) may be configured to execute instructions stored in the memory (204) to: receive, by a data collection engine (212), a request for network performance data from a user (102) via a graphical user interface (GUI) (402); process, by a computation engine (214), the received request to determine whether corresponding output data is present in a data lake (220); retrieve, by the computation engine (214), the corresponding output data from the data lake (220) when present; calculate, by the computation engine (214), new output data when the corresponding output data is not present; and store the calculated new output data in the data lake (220). FIG. 3


Patent Information

Filing Date: 02 August 2023
Publication Number: 06/2025
Publication Type: INA
Invention Field: COMPUTER SCIENCE

Applicants

JIO PLATFORMS LIMITED
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India.

Inventors

1. BHATNAGAR, Aayush
Tower-7, 15B, Beverly Park, Sector-14 Koper Khairane, Navi Mumbai - 400701, Maharashtra, India.
2. MURARKA, Ankit
W-16, F-1603, Lodha Amara, Kolshet Road, Thane West - 400607, Maharashtra, India.
3. SAXENA, Gaurav
B1603, Platina Cooperative Housing Society, Casa Bella Gold, Kalyan Shilphata Road, Near Xperia Mall Palava City, Dombivli, Kalyan, Thane - 421204, Maharashtra, India.
4. SHOBHARAM, Meenakshi
2B-62, Narmada, Kalpataru, Riverside, Takka, Panvel, Raigargh - 410206, Maharashtra, India.
5. BHANWRIA, Mohit
39, Behind Honda Showroom, Jobner Road, Phulera, Jaipur - 303338, Rajasthan, India.
6. GAYKI, Vinay
259, Bajag Road, Gadasarai, District -Dindori - 481882, Madhya Pradesh, India.
7. KUMAR, Durgesh
Mohalla Ramanpur, Near Prabhat Junior High School, Hathras, Uttar Pradesh -204101, India.
8. BHUSHAN, Shashank
Fairfield 1604, Bharat Ecovistas, Shilphata, NH48, Thane - 421204, Maharashtra, India.
9. KHADE, Aniket Anil
X-29/9, Godrej Creek Side Colony, Phirojshanagar, Vikhroli East - 400078, Mumbai, Maharashtra, India.
10. KOLARIYA, Jugal Kishore
C 302, Mediterranea CHS Ltd, Casa Rio, Palava, Dombivli - 421204, Maharashtra, India.
11. VERMA, Rahul
A-154, Shradha Puri Phase-2, Kanker Khera, Meerut - 250001, Uttar Pradesh, India.
12. KUMAR, Gaurav
1617, Gali No. 1A, Lajjapuri, Ramleela Ground, Hapur - 245101, Uttar Pradesh, India.
13. MEENA, Sunil
D-29/1, Chitresh Nagar, Borkhera District-Kota, Rajasthan - 324001, India.
14. SAHU, Kishan
Ajay Villa, Gali No. 2 Ambedkar Colony, Bikaner, Rajasthan - 334003, India.
15. DE, Supriya
G2202, Sheth Avalon, Near Jupiter Hospital Majiwada, Thane West - 400601, Maharashtra, India.
16. KUMAR, Debashish
Bhairaav Goldcrest Residency, E-1304, Sector 11, Ghansoli, Navi Mumbai - 400701, Maharashtra, India.
17. TILALA, Mehul
64/11, Manekshaw Marg, Manekshaw Enclave, Delhi Cantonment, New Delhi - 110010, India.
18. GANVEER, Chandra Kumar
Village - Gotulmunda, Post - Narratola, Dist. - Balod - 491228, Chhattisgarh, India.
19. CHAUDHARY, Sanjana
Jawaharlal Road, Muzaffarpur - 842001, Bihar, India.
20. KUSHWAHA, Avinash
SA 18/127, Mauza Hall, Varanasi - 221007, Uttar Pradesh, India.
21. GARG, Harshita
37A, Ananta Lifestyle, Airport Road, Zirakpur, Mohali, Punjab - 140603, India.
22. KUMAR, Yogesh
Village-Gatol, Post-Dabla, Tahsil-Ghumarwin, District-Bilaspur, Himachal Pradesh - 174021, India.
23. TALGOTE, Kunal
29, Nityanand Nagar, Nr. Tukaram Hosp., Gaurakshan Road, Akola - 444004, Maharashtra, India.
24. GURBANI, Gourav
I-1601, Casa Adriana, Downtown, Palava Phase 2, Dombivli, Maharashtra - 421204, India.
25. VISHWAKARMA, Dharmendra Kumar
Ramnagar, Sarai Kansarai, Bhadohi - 221404, Uttar Pradesh, India.
26. SONI, Sajal
K. P. Nayak Market Mauranipur, Jhansi, Uttar Pradesh - 284204, India.

Specification

FORM 2
THE PATENTS ACT, 1970
(39 of 1970)
THE PATENTS RULES, 2003
COMPLETE SPECIFICATION

A SYSTEM AND METHOD FOR PRE-COMPUTATION OF NETWORK PERFORMANCE DATA

APPLICANT

JIO PLATFORMS LIMITED
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India; Nationality: India

The following specification particularly describes the invention and the manner in which it is to be performed.

RESERVATION OF RIGHTS
[0001] A portion of the disclosure of this patent document contains material which is subject to intellectual property rights such as, but not limited to, copyright, design, trademark, Integrated Circuit (IC) layout design, and/or trade dress protection, belonging to Jio Platforms Limited (Jio) or its affiliates (hereinafter referred to as the owner). The owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure, as it appears in the Patent and Trademark Office patent files or records, but otherwise reserves all rights whatsoever. All rights to such intellectual property are fully reserved by the owner.
FIELD OF DISCLOSURE
[0002] The present invention relates, in general, to the field of wireless communication and, more particularly, to a system and a method for pre-computation of network performance data.
DEFINITION
[0003] As used in the present disclosure, the following terms are generally intended to have the meanings set forth below, except to the extent that the context in which they are used indicates otherwise.
[0004] The expression "network performance data" used hereinafter in the specification refers to any quantitative or qualitative information related to the operation, efficiency, or effectiveness of a computer network or telecommunications system.

[0005] The expression "data lake" used hereinafter in the specification refers to a centralized repository that allows storage of structured and unstructured data at any scale.

[0006] The expression "artificial intelligence/machine learning (AI/ML) engine" used hereinafter in the specification refers to a software component that utilizes artificial intelligence and machine learning algorithms to analyze data, make predictions, or automate decision-making processes.

[0007] The expression "computation engine" used hereinafter in the specification refers to a software or hardware component designed to perform complex calculations or data processing tasks.

[0008] The expression "distributed manner" used hereinafter in the specification refers to a method of processing where a task is divided into smaller sub-tasks that are executed simultaneously across multiple computing nodes or devices.

[0009] The expression "flow identifier (ID)" used hereinafter in the specification refers to a unique identifier associated with a specific set of output data or computational process.

[0010] The expression "graphical user interface (GUI)" used hereinafter in the specification refers to a visual way of interacting with a computer using items such as windows, icons, and menus, used by most modern operating systems.

[0011] The expression "user equipment" used hereinafter in the specification refers to any device used directly by an end-user to communicate. It can include a wide range of devices such as mobile phones, tablets, laptops, and desktop computers.

[0012] The expression "pre-computation" used hereinafter in the specification refers to the process of performing calculations or generating data in advance of when it is actually needed, typically to improve system response times.

[0013] The expression "data collection engine" used hereinafter in the specification refers to a software component designed to gather and process input data from various sources.

[0014] These definitions are in addition to those expressed in the art.
BACKGROUND OF DISCLOSURE
[0015] The following description of related art is intended to provide background information pertaining to the field of the disclosure. This section may include certain aspects of the art that may be related to various features of the present disclosure. However, it should be appreciated that this section is to be used only to enhance the understanding of the reader with respect to the present disclosure, and not as an admission of prior art.
[0016] In the field of distributed computing and network performance management, executing the same user request multiple times can lead to several inefficiencies and resource consumption issues. When a request is repeatedly executed in a distributed computing environment, it consumes additional computing resources, potentially resulting in slower response times for other concurrent requests. Moreover, if the same request is executed multiple times without any changes in the input or conditions, it leads to redundant work being performed, wasting valuable computing power and time, thereby impacting the overall efficiency of the system.
[0017] Conventional systems and methods in the prior art suffer from several drawbacks and limitations when it comes to handling repeated user requests. These systems often spend additional time repeating the same calculations or operations, which reduces the overall efficiency of the distributed system. This redundancy not only wastes computing resources but also leads to increased latency and slower response times for users.
[0018] Furthermore, existing solutions lack the capability to intelligently detect and handle repeated requests. They do not have mechanisms in place to store and retrieve previously computed results, leading to unnecessary re-computation of the same output data. This lack of optimization results in increased resource consumption and reduced performance of the distributed computing environment.
[0019] Another issue faced by users and companies in this domain is the absence of a centralized storage system for storing the output data of executed requests. Without such centralized storage, the system is unable to reuse previously computed results, leading to redundant computations and increased processing time. This not only affects the efficiency of the system but also hinders the ability to provide quick and accurate responses to user requests.
[0020] Moreover, current systems do not leverage advanced techniques such as artificial intelligence and machine learning to intelligently determine whether a request has been previously executed. The lack of such intelligent mechanisms results in inefficient handling of repeated requests and missed opportunities for optimization.
[0021] There is, therefore, a pressing need for a system and a method for pre-computation of network performance data.
SUMMARY
[0022] In an exemplary embodiment, a system for pre-computation of network performance data is described. The system comprises a memory and one or more processors configured to execute instructions stored in the memory. The one or more processors are configured to receive, by a data collection engine, a request for network performance data from a user via a graphical user interface (GUI). The one or more processors are further configured to process, by a computation engine, the received request for network performance data to determine whether corresponding output data is present in a data lake. The one or more processors are configured to retrieve, by the computation engine, the corresponding output data from the data lake when the corresponding output data is present in the data lake. The one or more processors are configured to calculate, by the computation engine, new output data for the received request for network performance data when the corresponding output data is not present in the data lake. The one or more processors are configured to store the calculated new output data in the data lake. The one or more processors are configured to display, via the graphical user interface (GUI), either the retrieved corresponding output data or the calculated new output data to the user.

[0023] In some embodiments, the computation engine is further configured to extract request parameters from the received request for network performance data. The extracted request parameters comprise at least one of: a time range, network segments, performance metrics, and device identifiers. The computation engine is configured to generate a request identifier for the received request for network performance data based on the extracted request parameters. The computation engine is configured to search the data lake for the generated request identifier. The computation engine is configured to determine that the corresponding output data is present in the data lake if the generated request identifier is found in the data lake.

[0024] In some embodiments, the computation engine is configured to calculate the new output data when the generated request identifier is not found in the data lake.
[0025] In some embodiments, the data lake is configured to serve as a centralized storage system for storing the calculated new output data. The data lake enables retrieval of the calculated new output data as the corresponding output data when a subsequent identical request for network performance data is received.
[0026] In some embodiments, the system is further configured to generate a
flow ID associated with the calculated new output data stored in the data lake.
[0027] In some embodiments, the computation engine is further configured to generate the request identifier for the received request for network performance data. The computation engine is configured to compare the generated request identifier with stored flow IDs in the data lake. The computation engine is configured to determine that the corresponding output data is present in the data lake if a matching flow ID is found.

[0028] In some embodiments, the computation engine is further configured to generate a flow ID for the calculated new output data. The computation engine is configured to store the generated flow ID along with the calculated new output data in the data lake.
[0029] In some embodiments, the computation engine is a distributed computation engine configured to divide the received request for network performance data into a plurality of sub-tasks. The distributed computation engine is configured to distribute the plurality of sub-tasks across multiple computing nodes. The distributed computation engine is configured to execute the plurality of sub-tasks in parallel across the multiple computing nodes to calculate partial output data for each sub-task of the plurality of sub-tasks. The distributed computation engine is configured to aggregate the calculated partial output data from the multiple computing nodes to obtain the calculated new output data.
[0030] In another exemplary embodiment, a method for pre-computation of network performance data is described. The method comprises receiving, by a data collection engine, a request for network performance data from a user via a graphical user interface (GUI). The method further comprises processing, by a computation engine, the received request for network performance data to determine whether corresponding output data is present in a data lake. The method comprises retrieving, by the computation engine, the corresponding output data from the data lake when the corresponding output data is present in the data lake. The method comprises calculating, by the computation engine, new output data for the received request for network performance data when the corresponding output data is not present in the data lake. The method comprises storing the calculated new output data in the data lake. The method comprises displaying, via the graphical user interface (GUI), either the retrieved corresponding output data or the calculated new output data to the user.

[0031] In some embodiments, processing the received request for network performance data comprises extracting, by the computation engine, request parameters from the received request for network performance data. The extracted request parameters comprise at least one of: a time range, network segments, performance metrics, and device identifiers. Processing the received request for network performance data further comprises generating, by the computation engine, a request identifier for the received request for network performance data based on the extracted request parameters. Processing the received request for network performance data comprises searching the data lake for the generated request identifier. Processing the received request for network performance data comprises determining that the corresponding output data is present in the data lake if the generated request identifier is found in the data lake.
[0032] In some embodiments, calculating the new output data comprises using the computation engine to calculate the new output data when the generated request identifier is not found in the data lake.
[0033] In some embodiments, the data lake serves as a centralized storage system for storing the calculated new output data. The data lake enables retrieval of the calculated new output data as the corresponding output data when a subsequent identical request for network performance data is received.
[0034] In some embodiments, the method further comprises generating a flow ID associated with the calculated new output data stored in the data lake.
[0035] In some embodiments, the method further comprises generating, by the computation engine, the request identifier for the received request for network performance data. The method comprises comparing the generated request identifier with stored flow IDs in the data lake. The method comprises determining that the corresponding output data is present in the data lake if a matching flow ID is found.

[0036] In some embodiments, the method further comprises generating, by the computation engine, a flow ID for the calculated new output data. The method comprises storing the generated flow ID along with the calculated new output data in the data lake.
[0037] In some embodiments, calculating the new output data for the received request for network performance data comprises dividing the received request for network performance data into a plurality of sub-tasks. Calculating the new output data comprises distributing the plurality of sub-tasks across multiple computing nodes. Calculating the new output data comprises executing the plurality of sub-tasks in parallel across the multiple computing nodes to calculate partial output data for each sub-task of the plurality of sub-tasks. Calculating the new output data comprises aggregating the calculated partial output data from the multiple computing nodes to obtain the calculated new output data.
[0038] In a further exemplary embodiment, a computing device communicatively coupled to a system for pre-computation of network performance data via a network is described. The system comprises a memory and one or more processors configured to fetch and execute computer-readable instructions stored in the memory to perform the method for pre-computation of network performance data. The method comprises receiving, by a data collection engine, a request for network performance data from a user via a graphical user interface (GUI). The method further comprises processing, by a computation engine, the received request for network performance data to determine whether corresponding output data is present in a data lake. The method comprises retrieving, by the computation engine, the corresponding output data from the data lake when the corresponding output data is present in the data lake. The method comprises calculating, by the computation engine, new output data for the received request for network performance data when the corresponding output data is not present in the data lake. The method comprises storing the calculated new output data in the data lake. The method comprises displaying, via the graphical user interface (GUI), either the retrieved corresponding output data or the calculated new output data to the user.
[0039] The foregoing general description of the illustrative embodiments and the following detailed description thereof are merely exemplary aspects of the teachings of this disclosure and are not restrictive.
OBJECTS OF THE PRESENT DISCLOSURE
[0040] Some of the objects of the present disclosure, which at least one embodiment herein satisfies, are listed herein below.
[0041] An object of the present disclosure is to provide a system for pre-computation of network performance data, where a request for network performance data from a user is received by a data collection engine via a graphical user interface (GUI), and a computation engine processes the received request to determine whether corresponding output data is present in a data lake.
[0042] An object of the present disclosure is to provide a system where the computation engine retrieves the corresponding output data from the data lake when it is present, and calculates new output data when it is not present, thereby optimizing the processing of network performance data requests.
[0043] An object of the present disclosure is to provide a system that stores
calculated new output data in a data lake, serving as a centralized storage system, enabling efficient retrieval of data for subsequent identical requests without re-execution.
[0044] An object of the present disclosure is to provide a system that generates a flow ID associated with the calculated new output data stored in the data lake, and uses this flow ID to efficiently determine whether a received request was previously executed.
[0045] An object of the present disclosure is to provide a system where the computation engine functions as a distributed computation engine, dividing received requests into sub-tasks, distributing them across multiple computing nodes, and executing them in parallel to enhance processing efficiency.
[0046] An object of the present disclosure is to provide a method for pre-computation of network performance data that mirrors the functionality of the system, including receiving requests, processing them to check for existing data, retrieving or calculating data as needed, and displaying results to the user via a GUI.
[0047] An object of the present disclosure is to provide a non-transitory computer-readable medium storing instructions that, when executed, perform the pre-computation of network performance data, ensuring consistent functionality across different implementations of the invention.
[0048] An object of the present disclosure is to provide a computing device that can be communicatively coupled to the system for pre-computation of network performance data, allowing for distributed access and utilization of the invention's capabilities.
BRIEF DESCRIPTION OF DRAWINGS
[0049] The accompanying drawings, which are incorporated herein, and constitute a part of this disclosure, illustrate exemplary embodiments of the disclosed methods and systems in which like reference numerals refer to the same parts throughout the different drawings. Components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present disclosure. Some drawings may indicate the components using block diagrams and may not represent the internal circuitry of each component. It will be appreciated by those skilled in the art that disclosure of such drawings includes the disclosure of electrical components, electronic components or circuitry commonly used to implement such components.
[0050] FIG. 1 illustrates an exemplary network architecture for implementing a system, in accordance with an embodiment of the present disclosure.

[0051] FIG. 2 illustrates an exemplary architecture of a system, in accordance with an embodiment of the present disclosure.

[0052] FIG. 3 illustrates an exemplary flow diagram for pre-computation of network performance data, in accordance with an embodiment of the present disclosure.

[0053] FIG. 4 illustrates an exemplary architecture of a system for pre-computation of network performance data, in accordance with an embodiment of the present disclosure.

[0054] FIG. 5 illustrates an exemplary flowchart of a method for pre-computation of network performance data, in accordance with an embodiment of the present disclosure.

[0055] FIG. 6 illustrates a computer system in which or with which the embodiments of the present disclosure may be implemented.

[0056] The foregoing shall be more apparent from the following more detailed description of the disclosure.
LIST OF REFERENCE NUMERALS
100 – Network Architecture
102-1, 102-2…102-N – User(s)
104-1, 104-2…104-N – User Equipment(s) / Computing Device(s)
108 – System
106 – Network
202 – One or More Processor(s)
204 – Memory
206 – I/O Interfaces
208 – Processing Engine
210 – Database
212 – Data Collection Engine
214 – Computation Engine
216 – Other Engine(s)
220 – Data Lake
300 – Flowchart
402 – Graphical User Interface (GUI)
500 – Flowchart
610 – External Storage Device
620 – Bus
630 – Main Memory
640 – Read Only Memory
650 – Mass Storage Device
660 – Communication Port
670 – Processor
BRIEF DESCRIPTION OF THE INVENTION
[0057] In the following description, for the purposes of explanation, various specific details are set forth in order to provide a thorough understanding of embodiments of the present disclosure. It will be apparent, however, that embodiments of the present disclosure may be practiced without these specific details. Several features described hereafter can each be used independently of one another or with any combination of other features. An individual feature may not address any of the problems discussed above or might address only some of the problems discussed above. Some of the problems discussed above might not be fully addressed by any of the features described herein. Example embodiments of the present disclosure are described below, as illustrated in various drawings in which like reference numerals refer to the same parts throughout the different drawings.
[0058] The ensuing description provides exemplary embodiments only, and is not intended to limit the scope, applicability, or configuration of the disclosure. Rather, the ensuing description of the exemplary embodiments will provide those skilled in the art with an enabling description for implementing an exemplary embodiment. It should be understood that various changes may be made in the function and arrangement of elements without departing from the spirit and scope of the disclosure as set forth.

[0059] Specific details are given in the following description to provide a thorough understanding of the embodiments. However, it will be understood by one of ordinary skill in the art that the embodiments may be practiced without these specific details. For example, circuits, systems, networks, processes, and other components may be shown as components in block diagram form in order not to obscure the embodiments in unnecessary detail. In other instances, well-known circuits, processes, algorithms, structures, and techniques may be shown without unnecessary detail in order to avoid obscuring the embodiments.
[0060] Also, it is noted that individual embodiments may be described as a process that is depicted as a flowchart, a flow diagram, a data flow diagram, a structure diagram, or a block diagram. Although a flowchart may describe the operations as a sequential process, many of the operations can be performed in parallel or concurrently. In addition, the order of the operations may be re-arranged. A process is terminated when its operations are completed but could have additional steps not included in a figure. A process may correspond to a method, a function, a procedure, a subroutine, a subprogram, etc. When a process corresponds to a function, its termination can correspond to a return of the function to the calling function or the main function.
[0061] The word “exemplary” and/or “demonstrative” is used herein to mean serving as an example, instance, or illustration. For the avoidance of doubt, the subject matter disclosed herein is not limited by such examples. In addition, any aspect or design described herein as “exemplary” and/or “demonstrative” is not necessarily to be construed as preferred or advantageous over other aspects or designs, nor is it meant to preclude equivalent exemplary structures and techniques known to those of ordinary skill in the art. Furthermore, to the extent that the terms “includes,” “has,” “contains,” and other similar words are used in either the detailed description or the claims, such terms are intended to be inclusive, like the term “comprising,” as an open transition word, without precluding any additional or other elements.

[0062] Reference throughout this specification to “one embodiment” or “an embodiment” or “an instance” or “one instance” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present disclosure. Thus, the appearances of the phrases “in one embodiment” or “in an embodiment” in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
[0063] The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. As used herein, the singular forms “a”, “an”, and “the” are intended to include the plural forms as well, unless the context indicates otherwise. It will be further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. As used herein, the term “and/or” includes any combinations of one or more of the associated listed items. It should be noted that the terms “mobile device”, “user equipment”, “user device”, “communication device”, “device” and similar terms are used interchangeably for the purpose of describing the invention. These terms are not intended to limit the scope of the invention or imply any specific functionality or limitations on the described embodiments. The use of these terms is solely for convenience and clarity of description. The invention is not limited to any particular type of device or equipment, and it should be understood that other equivalent terms or variations thereof may be used interchangeably without departing from the scope of the invention as defined herein.
[0064] As used herein, an “electronic device”, or “portable electronic device”, or “user device” or “communication device” or “user equipment” or “device” refers to any electrical, electronic, electromechanical, and computing device. The user device is capable of receiving and/or transmitting one or more parameters, performing functions, communicating with other user devices, and transmitting data to the other user devices. The user equipment may have a processor, a display, a memory, a battery, and an input means such as a hard keypad and/or a soft keypad. The user equipment may be capable of operating on any radio access technology including but not limited to IP-enabled communication, ZigBee, Bluetooth, Bluetooth Low Energy, Near Field Communication, Z-Wave, Wi-Fi, Wi-Fi Direct, etc. For instance, the user equipment may include, but not be limited to, a mobile phone, smartphone, virtual reality (VR) devices, augmented reality (AR) devices, laptop, a general-purpose computer, desktop, personal digital assistant, tablet computer, mainframe computer, or any other device as may be obvious to a person skilled in the art for implementation of the features of the present disclosure.
[0065] Further, the user device may also comprise a “processor” or “processing unit”, wherein the processor refers to any logic circuitry for processing instructions. The processor may be a general-purpose processor, a special-purpose processor, a conventional processor, a digital signal processor, a plurality of microprocessors, one or more microprocessors in association with a DSP core, a controller, a microcontroller, Application Specific Integrated Circuits, Field Programmable Gate Array circuits, any other type of integrated circuit, etc. The processor may perform signal coding, data processing, input/output processing, and/or any other functionality that enables the working of the system according to the present disclosure. More specifically, the processor is a hardware processor.
[0066] As portable electronic devices and wireless technologies continue to improve and grow in popularity, the advancing wireless technologies for data transfer are also expected to evolve and replace the older generations of technologies. In the field of wireless data communications, the dynamic advancement of various generations of cellular technology is also seen. The development, in this respect, has been incremental in the order of second generation (2G), third generation (3G), fourth generation (4G), and now fifth generation (5G), and more such generations are expected to continue in the forthcoming time.
[0067] While considerable emphasis has been placed herein on the components and component parts of the preferred embodiments, it will be appreciated that many embodiments can be made and that many changes can be made in the preferred embodiments without departing from the principles of the disclosure. These and other changes in the preferred embodiments as well as other embodiments of the disclosure will be apparent to those skilled in the art from the disclosure herein, whereby it is to be distinctly understood that the foregoing descriptive matter is to be interpreted merely as illustrative of the disclosure and not as a limitation.
[0068] At present, executing the same user request multiple times in a distributed computing environment can lead to increased resource consumption and inefficiencies. Conventional systems often face the challenge of redundant work being performed when the same request is executed repeatedly without any changes in input or conditions. This redundancy wastes computing power and time, impacting the overall efficiency of the system. The present disclosure addresses these challenges by providing a system and a method for pre-computation of network performance data that intelligently detects and handles repeated user requests, stores and retrieves previously computed output data, and utilizes advanced techniques such as artificial intelligence and machine learning to optimize request processing.
[0069] The present disclosure serves the purpose of enhancing the efficiency and effectiveness of handling network performance data requests in a distributed computing environment. The system and method provided by the present disclosure enable the optimization of resource utilization, improvement of response times, and enhancement of the overall efficiency of the system. By leveraging a data lake as a centralized storage system for storing output data of previously executed requests and employing a computation engine to determine whether a request has been previously executed, the present disclosure reduces redundant computations and enables quick retrieval of previously computed results. This ultimately leads to faster response times, improved user experience, and more efficient utilization of computing resources in a distributed computing environment.
[0070] The present disclosure relates to a system and a method for pre-computation of network performance data. The system comprises a memory and one or more processors configured to execute computer-readable instructions stored in the memory. The system receives a request for network performance data from a user through a data collection engine via a graphical user interface (GUI) and processes the received request using a computation engine to determine whether corresponding output data is present in a data lake. If the corresponding output data is present, the computation engine retrieves it from the data lake. If the corresponding output data is not present, the computation engine calculates new output data for the request and stores it in the data lake. The retrieved corresponding output data or the calculated new output data is then displayed to the user via the GUI. The method involves the steps of receiving a request for network performance data, processing the request to determine the presence of corresponding output data in the data lake, retrieving or calculating the output data accordingly, storing newly calculated output data, and displaying the output data to the user.
[0071] The various embodiments throughout the disclosure will be explained in more detail with reference to FIG. 1 to FIG. 6.

[0072] FIG. 1 illustrates an example network architecture (100) for
implementing a system (108) for pre-computation of network performance data, in accordance with an embodiment of the present disclosure.
[0073] As illustrated in FIG. 1, one or more computing devices (104-1, 104-2…104-N) may be connected to a proposed system (108) through a network (106). A person of ordinary skill in the art will understand that the one or more computing devices (104-1, 104-2…104-N) may be collectively referred to as computing devices (104) and individually referred to as a computing device (104). One or more users (102-1, 102-2…102-N) may provide one or more requests to the system (108). A person of ordinary skill in the art will understand that the one or more users (102-1, 102-2…102-N) may be collectively referred to as users (102) and individually referred to as a user (102). Further, the computing devices (104) may also be referred to as user equipment (UE) (104) or as UEs (104) throughout the disclosure.
[0074] In an embodiment, the computing device (104) may include, but not be limited to, a mobile, a laptop, etc. Further, the computing device (104) may include one or more in-built or externally coupled accessories including, but not limited to, a visual aid device such as a camera, audio aid, microphone, or keyboard. Furthermore, the computing device (104) may include a mobile phone, smartphone, virtual reality (VR) devices, augmented reality (AR) devices, a laptop, a general-purpose computer, a desktop, a personal digital assistant, a tablet computer, and a mainframe computer. Additionally, input devices for receiving input from the user (102) such as a touchpad, touch-enabled screen, electronic pen, and the like may be used.
[0075] In an embodiment, the network (106) may include, by way of example but not limitation, at least a portion of one or more networks having one or more nodes that transmit, receive, forward, generate, buffer, store, route, switch, process, or a combination thereof, one or more messages, packets, signals, waves, voltage or current levels, some combination thereof, or so forth. The network (106) may also include, by way of example but not limitation, one or more of a wireless network, a wired network, an internet, an intranet, a public network, a private network, a packet-switched network, a circuit-switched network, an ad hoc network, an infrastructure network, a Public-Switched Telephone Network (PSTN), a cable network, a cellular network, a satellite network, a fiber optic network, or some combination thereof.
[0076] In an embodiment, a user (102) may send a request to the system (108) through a graphical user interface (GUI). Further, the system (108) may check if output data for the request is present in a data lake configured in the system (108). If the output data is already present in the data lake, the output data may be retrieved by the system (108) from the data lake and sent directly to the user (102). If the output data is not present in the data lake, then the output data may be calculated by the system (108), stored in the data lake, and finally sent to the user (102).
[0077] Although FIG. 1 shows exemplary components of the network architecture (100), in other embodiments, the network architecture (100) may include fewer components, different components, differently arranged components, or additional functional components than depicted in FIG. 1. Additionally, or alternatively, one or more components of the network architecture (100) may perform functions described as being performed by one or more other components of the network architecture (100).
[0078] FIG. 2 illustrates an example block diagram (200) of a proposed system (108), in accordance with an embodiment of the present disclosure.
[0079] Referring to FIG. 2, in an embodiment, the system (108) may include one or more processor(s) (202). The one or more processor(s) (202) may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, logic circuitries, and/or any devices that process data based on operational instructions. Among other capabilities, the one or more processor(s) (202) may be configured to fetch and execute computer-readable instructions stored in a memory (204) of the system (108). The memory (204) may be configured to store one or more computer-readable instructions or routines in a non-transitory computer-readable storage medium, which may be fetched and executed to create or share data packets over a network service. The memory (204) may comprise any non-transitory storage device including, for example, volatile memory such as random-access memory (RAM), or non-volatile memory such as erasable programmable read-only memory (EPROM), flash memory, and the like.
[0080] In an embodiment, the system (108) may include an interface(s) (206). The interface(s) (206) may comprise a variety of interfaces, for example, interfaces for data input and output devices (I/O), storage devices, and the like. The interface(s) (206) may facilitate communication through the system (108). The interface(s) (206) may also provide a communication pathway for one or more components of the system (108). Examples of such components include, but are not limited to, processing engine(s) (208), a database (210), and a data lake (220). Further, the processing engine(s) (208) may include a data collection engine (212), a computation engine (214), and other engine(s) (216). In an embodiment, the other engine(s) (216) may include, but not be limited to, a data ingestion engine, an input/output engine, and a notification engine.
[0081] The data collection engine (212) receives a request for network performance data from a user (102) through a graphical user interface (GUI). The computation engine (214) processes the received request to determine if corresponding output data for the request is present in the data lake (220). If the corresponding output data is already present in the data lake (220), the computation engine (214) retrieves the corresponding output data from the data lake (220) and sends it directly to the user (102). If the corresponding output data is not present in the data lake (220), the computation engine (214) calculates new output data for the received request in a distributed manner, stores the calculated new output data in the data lake (220), and finally sends the calculated new output data to the user (102) via the GUI.
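By way of illustration only, the retrieve-or-compute behaviour described in this paragraph can be sketched in a few lines of Python. The names used below (DataLake, serve_request) are hypothetical stand-ins rather than components disclosed herein, and an in-memory dictionary stands in for the persistent data lake (220).

```python
# Minimal sketch of the retrieve-or-compute flow described above.
# All names here are illustrative assumptions, not the disclosed design.
from typing import Callable, Optional


class DataLake:
    """Toy in-memory stand-in for the centralized data lake (220)."""

    def __init__(self) -> None:
        self._store = {}

    def get(self, request_id: str) -> Optional[dict]:
        return self._store.get(request_id)

    def put(self, request_id: str, output: dict) -> None:
        self._store[request_id] = output


def serve_request(request_id: str, lake: DataLake,
                  compute: Callable[[], dict]) -> dict:
    cached = lake.get(request_id)
    if cached is not None:
        return cached             # output data already present: reuse it
    output = compute()            # not present: calculate new output data
    lake.put(request_id, output)  # store for subsequent identical requests
    return output
```

A second call to serve_request with the same request_id would then return the stored result without invoking compute again, which is the behaviour described above for repeated requests.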
[0082] In an embodiment, the processing engine(s) (208) may be implemented as a combination of hardware and programming (for example, programmable instructions) to implement one or more functionalities of the processing engine(s) (208). In examples described herein, such combinations of hardware and programming may be implemented in several different ways. For example, the programming for the processing engine(s) (208) may be processor-executable instructions stored on a non-transitory machine-readable storage medium and the hardware for the processing engine(s) (208) may comprise a processing resource (for example, one or more processors), to execute such instructions. In the present examples, the machine-readable storage medium may store instructions that, when executed by the processing resource, implement the processing engine(s) (208). In such examples, the system may comprise the machine-readable storage medium storing the instructions and the processing resource to execute the instructions, or the machine-readable storage medium may be separate but accessible to the system and the processing resource. In other examples, the processing engine(s) (208) may be implemented by electronic circuitry.
[0083] Although FIG. 2 shows exemplary components of the system (108), in other embodiments, the system (108) may include fewer components, different components, differently arranged components, or additional functional components than depicted in FIG. 2. Additionally, or alternatively, one or more components of the system (108) may perform functions described as being performed by one or more other components of the system (108). The details of the system architecture (108) may be described with reference to FIG. 2 in subsequent paragraphs.

[0084] The present disclosure may relate to a system (108) for pre-computation of network performance data. Network performance data may refer to quantitative and qualitative metrics that describe the operational efficiency and effectiveness of a computer network. These metrics may include, but are not limited to, throughput (the amount of data transferred in a given time period), latency (the delay between sending and receiving data), packet loss (the percentage of data packets that fail to reach their destination), jitter (variations in the delay of received packets), and bandwidth utilization (the percentage of available network capacity being used).
[0085] The system (108) may comprise a memory (204) and one or more processors (202). The memory (204) may include various types of computer-readable storage media, such as random-access memory (RAM), read-only memory (ROM), solid-state drives, or hard disk drives. The one or more processors (202) may be central processing units (CPUs), graphics processing units (GPUs), or specialized network processors, capable of executing complex calculations and data processing tasks.
[0086] The one or more processors (202) may be configured to execute instructions stored in the memory (204) to perform various operations related to pre-computation of network performance data. Pre-computation, in this context, refers to the process of calculating and storing network performance metrics in advance, anticipating future requests for this data. This approach can significantly reduce response times for common queries and optimize system resources.
[0087] The system (108) may include a data collection engine (212) that may be configured to receive a request for network performance data from a user (102). The data collection engine (212) may act as the initial point of contact between the user and the system, responsible for parsing and validating incoming requests. It may employ various data validation techniques to ensure the integrity and completeness of the received requests.

[0088] The request for network performance data may be received via a graphical user interface (GUI) (402). The GUI (402) may be a web-based interface, a desktop application, or a mobile app, designed with user experience principles in mind. It may feature intuitive controls such as dropdown menus, sliders, and toggle switches that allow users to easily specify their data requirements.
[0089] The graphical user interface (402) may provide a user-friendly interface for the user (102) to input their request and view the results. For example, the GUI might offer a dashboard where users can select network segments from a network topology map, choose performance metrics from a predefined list, and specify time ranges using a calendar widget. The results might be displayed as interactive charts, heat maps, or tabular data, depending on the nature of the requested information.
[0090] The request for network performance data may include various parameters such as time ranges, network segments, performance metrics, and device identifiers. A time range might be specified as "last 24 hours", "previous month", or a custom range like "March 1, 2023, to April 15, 2023". Network segments could be identified by their IP address ranges or logical names like "East Coast Data Center" or "European Sales Office Network". Performance metrics might include "average throughput", "peak latency", or "95th percentile packet loss". Device identifiers could be MAC addresses, hostnames, or unique identifiers assigned by the network management system.
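For illustration, such a request might be carried as a small structured object. The field names below are assumptions chosen to mirror the parameters just described, not a defined interface of the system (108).

```python
# Hypothetical structure for a network-performance-data request carrying
# the parameter types described above; all field names are illustrative.
from dataclasses import dataclass


@dataclass(frozen=True)
class PerformanceRequest:
    time_range: tuple        # e.g. ISO-8601 (start, end) strings
    network_segments: tuple  # logical names or IP address ranges
    metrics: tuple           # e.g. "average throughput", "peak latency"
    device_ids: tuple = ()   # MAC addresses, hostnames, or system IDs


request = PerformanceRequest(
    time_range=("2023-03-01T00:00:00Z", "2023-04-15T23:59:59Z"),
    network_segments=("East Coast Data Center",),
    metrics=("average throughput", "peak latency"),
    device_ids=("router-ec-01", "router-ec-02"),
)
```

Because the dataclass is frozen and built from tuples, an instance is hashable, which dovetails with the request-identifier generation discussed later in this specification.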
[0091] The system (108) may further comprise a computation engine (214) that may be configured to process the received request for network performance data. The computation engine (214) may be the core component of the system, responsible for interpreting the request, determining the most efficient way to fulfill it, and either retrieving pre-computed data or initiating new calculations as needed.

[0092] The computation engine (214) may determine whether corresponding output data for the received request is present in a data lake (220). A data lake, in this context, refers to a centralized repository that allows storage of both structured and unstructured data at any scale. It is designed to store raw data in its native format until it is needed, allowing for flexible data processing and analysis.
[0093] The data lake (220) may serve as a centralized storage system for storing calculated output data. This centralized approach to data storage offers several advantages. It provides a single source of truth for all network performance data, ensures data consistency, and facilitates easier data governance and access control.
[0094] The centralized nature of the data lake (220) may enable efficient retrieval of previously calculated output data, potentially reducing redundant computations and improving system response times. For instance, if multiple users from different departments request the same network performance data for a specific time period, the system can retrieve this data once from the data lake and serve it to all requesters, rather than recalculating it each time.
[0095] When the computation engine (214) determines that the corresponding output data is present in the data lake (220), the computation engine (214) may retrieve the corresponding output data from the data lake (220). This retrieval process may involve querying the data lake using optimized search algorithms, such as binary search for sorted data or hash-based lookups for keyed data.
[0096] This retrieval process may be more efficient than recalculating the output data, particularly for complex or time-consuming computations. For example, calculating the 99th percentile latency across a global network over a month-long period might take several minutes of processing time. If this data has already been computed and stored in the data lake, it can be retrieved in a fraction of a second.
[0097] The ability to retrieve pre-computed data may significantly reduce the response time for repeated or similar requests. This can be particularly beneficial in scenarios where multiple users or automated systems frequently request the same or similar network performance data, such as during a daily network health check routine or when generating monthly performance reports.
[0098] In cases where the computation engine (214) determines that the corresponding output data is not present in the data lake (220), the computation engine (214) may calculate new output data for the received request for network performance data. This calculation process may involve complex computations based on the parameters specified in the request.
[0099] The calculation of new output data may ensure that the system (108) can handle novel requests or requests for which pre-computed data is not available. For instance, if a user requests a unique combination of metrics or a time period that has not been analyzed before, the system can perform the necessary calculations on demand.
[00100] After calculating the new output data, the system (108) may store the calculated new output data in the data lake (220). This storage process may enable future retrieval of the calculated data, potentially improving the system's efficiency for subsequent similar requests. The system may employ various data storage optimization techniques, such as compression, partitioning, or indexing, to ensure fast retrieval of stored data.
[00101] The storage of calculated data may contribute to the system's ability to learn and improve its performance over time. By analyzing patterns in the stored data and user requests, the system can optimize its pre-computation strategies, anticipating which data is likely to be requested and pre-computing it during off-peak hours.
[00102] The system (108) may then display either the retrieved corresponding output data or the calculated new output data to the user (102) via the graphical user interface (GUI) (402). This display process may provide the user (102) with the requested network performance data in a visually accessible format. The GUI might offer various visualization options, such as line graphs for time-series data, bar charts for comparative analysis, or network diagrams for topology-based metrics.
[00103] The use of a graphical user interface may enhance the user experience and make the system more user-friendly. It allows users to interact with complex network performance data in an intuitive manner, potentially uncovering insights that might not be apparent from raw numbers alone. For example, a heat map of network latency across different geographical locations can quickly highlight problematic areas that might require attention.
[00104] The computation engine (214) of the system (108) may be further configured to perform additional operations to enhance the efficiency and accuracy of the data retrieval and calculation processes. These additional operations may include data validation, error handling, and optimization of computational resources.
[00105] The computation engine (214) may extract request parameters from the received request for network performance data. These extracted request parameters may comprise at least one of: a time range, network segments, performance metrics, and device identifiers. For example, a request might include parameters like "time range: last 7 days", "network segment: corporate headquarters", "performance metric: average throughput", and "device identifier: all routers".
[00106] The extraction of these parameters may enable more precise matching of requests to stored data and more accurate calculations when new data is required. By breaking down the request into its constituent parameters, the system can efficiently search for matching pre-computed data or determine the exact calculations needed to fulfill the request.
[00107] Based on the extracted request parameters, the computation engine (214) may generate a request identifier for the received request for network performance data. This request identifier may serve as a unique tag for the specific combination of parameters in the request. It could be implemented as a hash value computed from the concatenated parameter values, ensuring that identical requests always produce the same identifier.
[00108] The generation of a request identifier may facilitate efficient searching and matching of requests to stored data. Instead of comparing the full set of parameters for each request against all stored data, the system can simply compare the request identifiers, which is a much faster operation.
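One plausible realization of such an identifier, shown purely as a sketch, is a cryptographic hash over a canonical serialization of the extracted parameters. The specific choices below (sorted-key JSON fed to SHA-256) are assumptions, not a scheme mandated by this disclosure.

```python
# Sketch of deriving a deterministic request identifier from extracted
# parameters; the canonicalization and hash choices are assumptions.
import hashlib
import json


def request_identifier(params: dict) -> str:
    # Sorting keys canonicalizes the serialization, so logically
    # identical requests always hash to the same identifier.
    canonical = json.dumps(params, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()


a = request_identifier({"metric": "latency", "range": "last 7 days"})
b = request_identifier({"range": "last 7 days", "metric": "latency"})
assert a == b  # the same parameters in any order yield the same identifier
```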
[00109] The computation engine (214) may search the data lake (220) for the generated request identifier. This search process may involve comparing the generated request identifier with identifiers associated with previously stored output data. The search might be implemented using efficient data structures like hash tables or binary search trees, allowing for rapid lookups even in large datasets.
[00110] The use of request identifiers for searching may potentially improve
the speed and accuracy of data retrieval. It reduces the complexity of the search
30 process from comparing multiple parameters to comparing a single identifier,
significantly reducing the computational overhead of data retrieval.
28

[00111] If the generated request identifier is found in the data lake (220), the computation engine (214) may determine that the corresponding output data is present in the data lake (220). This determination may trigger the retrieval process, potentially saving computational resources that would otherwise be used for recalculation.

[00112] In cases where the generated request identifier is not found in the data lake (220), the computation engine (214) may be configured to calculate the new output data. This calculation process may ensure that the system can handle unique or previously unseen combinations of request parameters. The system might employ various computational techniques depending on the nature of the request, such as statistical analysis for aggregating performance data or graph algorithms for analyzing network topology.
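A minimal sketch of this retrieve-or-calculate decision is given below, reusing the generate_request_identifier sketch above and standing in for the data lake (220) with an in-memory dictionary; the DataLake class and compute_output callback are hypothetical simplifications, not the claimed storage system.

    class DataLake:
        """Toy stand-in for the data lake (220): maps identifiers to output data."""
        def __init__(self):
            self._store = {}

        def get(self, request_id):
            return self._store.get(request_id)

        def put(self, request_id, output):
            self._store[request_id] = output

    def handle_request(lake, params, compute_output):
        request_id = generate_request_identifier(params)
        cached = lake.get(request_id)       # search the data lake for the identifier
        if cached is not None:              # identifier found: retrieve ([00111])
            return cached
        output = compute_output(params)     # identifier absent: calculate ([00112])
        lake.put(request_id, output)        # store for future identical requests
        return output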
[00113] The data lake (220) in the system (108) may be configured to serve as a centralized storage system for storing the calculated new output data. This centralized storage may enable retrieval of the calculated new output data as the corresponding output data when a subsequent identical request for network performance data is received. The centralized nature of the data lake allows for efficient data management, including version control, access logging, and data lifecycle management.

[00114] The ability to retrieve pre-computed data for identical requests may significantly reduce response times and computational load for repeated queries. This can be particularly beneficial in scenarios where the same network performance reports are generated periodically, such as daily health checks or monthly performance reviews.

[00115] The system (108) may be further configured to generate a flow ID associated with the calculated new output data stored in the data lake (220). The flow ID may serve as an additional identifier for the stored data, potentially enabling more flexible and efficient data retrieval processes. Unlike the request identifier, which is based solely on input parameters, the flow ID might incorporate information about the computational process used to generate the output data.
[00116] The use of flow IDs may allow for more nuanced matching of requests to stored data, potentially improving the system's ability to handle similar but not identical requests. For example, if a user requests data for a specific network segment over the last 30 days, and the system has stored data for the last 60 days, the flow ID could help identify that the stored data can partially fulfill the new request without requiring a full recalculation.
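Purely as an illustration of this partial-fulfilment idea, the sketch below (assuming time ranges expressed as whole days counted back from the same end point) computes how much of a new request remains to be calculated; real flow-ID matching as contemplated here would be considerably richer.

    def missing_days(stored_days: int, requested_days: int) -> int:
        # Days still needing fresh calculation after reusing stored data.
        # 0 means the request can be served entirely from the data lake.
        return max(0, requested_days - stored_days)

    print(missing_days(stored_days=60, requested_days=30))  # 0: fully pre-computed
    print(missing_days(stored_days=24, requested_days=48))  # 24: compute only the gap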
[00117] The computation engine (214) may be further configured to generate the request identifier for the received request for network performance data and compare the generated request identifier with stored flow IDs in the data lake (220). This comparison process may enable the system to identify not just identical requests, but also similar requests that may have relevant pre-computed data.

[00118] The computation engine (214) may determine that the corresponding output data is present in the data lake (220) if a matching flow ID is found. This matching process may potentially extend the benefits of pre-computation to a wider range of requests. It allows the system to leverage partially matching data, potentially reducing computation time even for requests that are not exactly identical to previous ones.

[00119] In addition to generating a flow ID for the calculated new output data, the computation engine (214) may be configured to store the generated flow ID along with the calculated new output data in the data lake (220). This storage of both the data and its associated identifiers may facilitate future retrieval and matching processes. It enables a multi-faceted approach to data retrieval, where the system can match requests based on input parameters, output characteristics, or both.
[00120] The computation engine (214) in the system (108) may be implemented as a distributed computation engine. This distributed architecture may enable the system to handle large volumes of requests and perform complex calculations more efficiently. It allows the system to scale horizontally, adding more computational nodes as the demand for network performance analysis increases.

[00121] The distributed computation engine may be configured to divide the received request for network performance data into a plurality of sub-tasks. This division of tasks may allow for parallel processing, potentially reducing the overall time required for complex calculations. For example, a request to calculate average latency across a global network might be divided into sub-tasks for each geographic region, with each sub-task processed independently.

[00122] The distributed computation engine may distribute the plurality of sub-tasks across multiple computing nodes. These computing nodes may be separate processors, separate machines, or virtual instances in a cloud computing environment. The distribution of tasks across multiple nodes may enable the system to leverage greater computational resources than would be available on a single machine. This can be particularly beneficial for handling peak loads or particularly complex network analysis tasks.

[00123] The distributed computation engine may execute the plurality of sub-tasks in parallel across the multiple computing nodes to calculate partial output data for each sub-task of the plurality of sub-tasks. This parallel execution may significantly reduce the time required for complex calculations, potentially improving the system's ability to handle real-time or near-real-time requests for network performance data. For instance, a task that might take an hour to process sequentially could potentially be completed in minutes when divided across multiple nodes.

[00124] After the parallel execution of sub-tasks, the distributed computation engine may aggregate the calculated partial output data from the multiple computing nodes to obtain the calculated new output data. This aggregation process may combine the results from all sub-tasks into a comprehensive output that addresses the original request for network performance data. The aggregation might involve simple operations like summing or averaging results, or more complex processes like combining partial network graphs or merging time-series data.
[00125] The system (108) for pre-computation of network performance data may provide several benefits. By storing and retrieving pre-computed data, the system may reduce the computational load and response time for repeated or similar requests. This can lead to more responsive network management tools and faster decision-making processes based on network performance data.

[00126] The use of a centralized data lake may enable efficient storage and retrieval of large volumes of data. This centralized approach ensures data consistency across different parts of the system and facilitates easier data governance and access control. It also allows for more effective data lifecycle management, including archiving of older data and refreshing of frequently accessed data.

[00127] The distributed computation engine may allow the system to handle complex calculations efficiently. This capability enables the system to perform sophisticated network analysis tasks that might be impractical on a single-machine system. For example, it could enable real-time analysis of network traffic patterns across a global enterprise network, or predictive modeling of network performance under various hypothetical scenarios.

[00128] These features may combine to create a system that can handle a high volume of diverse requests for network performance data with improved efficiency and reduced latency. The system's ability to balance pre-computation, on-demand calculation, and distributed processing allows it to provide fast responses to common queries while still maintaining the flexibility to handle unique or complex requests.

[00129] The system (108) for pre-computation of network performance data may be designed to handle various types of data related to network performance. The term "network performance data" may encompass a wide range of metrics and information that describe the operation and efficiency of a network. This may include, but is not limited to, data on network throughput, latency, packet loss, jitter, bandwidth utilization, error rates, and connection stability.
[00130] For example, network throughput data might measure the amount of data successfully transferred between two points on the network over a specific time period. Latency data could represent the time delay for a data packet to travel from its source to its destination. Packet loss data might track the percentage of data packets that fail to reach their destination. Jitter data could measure the variation in latency over time. Bandwidth utilization data might show how much of the network's capacity is being used at any given time.
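For concreteness, simplified formulas for a few of these metrics are sketched below in Python; the sample figures are invented and the definitions are deliberately reduced (jitter, for instance, is taken here as the standard deviation of latency).

    from statistics import mean, pstdev

    latencies_ms = [12.1, 11.8, 35.0, 12.3]        # invented per-packet delays
    packets_sent, packets_received = 1000, 987
    bytes_transferred, interval_s = 250_000_000, 60

    throughput_mbps = bytes_transferred * 8 / interval_s / 1_000_000
    average_latency_ms = mean(latencies_ms)
    packet_loss_pct = 100 * (packets_sent - packets_received) / packets_sent
    jitter_ms = pstdev(latencies_ms)               # variation in latency over time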
[00131] By focusing on network performance data, the system may provide valuable insights for network administrators, IT professionals, and businesses relying on robust network infrastructure. These insights can be used for various purposes, such as identifying network bottlenecks, planning network upgrades, troubleshooting connectivity issues, or ensuring compliance with service level agreements (SLAs).

[00132] The request received by the data collection engine (212) may comprise several components that specify the exact nature of the network performance data being sought. This request may include parameters such as the time range for which data is needed, specific network segments to be analyzed, particular performance metrics of interest, and identifiers for devices or nodes within the network.

[00133] For example, a user might request data on the average latency and packet loss rate for a specific network segment over the past 24 hours. This request might include parameters like "time range: last 24 hours", "network segment: New York to London link", "metrics: average latency, packet loss rate", and "device identifiers: all routers on the path". The system's ability to handle such detailed and varied requests may enable it to provide highly specific and relevant network performance data.
[00134] The graphical user interface (GUI) (402) may serve as the primary point of interaction between the user (102) and the system (108). Through this interface, users may not only submit their requests for network performance data but also view the results in a visually intuitive format. The GUI may offer features such as dropdown menus for selecting time ranges and metrics, input fields for specifying network segments or device identifiers, and options for choosing the format of the output display.

[00135] For instance, the GUI might provide a network topology map where users can click on specific nodes or links to select them for analysis. It could offer a calendar widget for selecting date ranges, and checkboxes or multi-select dropdowns for choosing metrics. The results might be displayed as interactive line graphs for time-series data, bar charts for comparative analysis, or heat maps for visualizing performance across a network topology.

[00136] This user-friendly interface may make the system accessible to users with varying levels of technical expertise, from network engineers who need detailed performance data to business managers who want high-level network health overviews.
[00137] The computation engine (214) in the system (108) may indeed function as an artificial intelligence/machine learning (AI/ML) engine, incorporating advanced AI and ML techniques to enhance its performance and capabilities over time. This AI/ML-enabled computation engine may continuously learn from the data it processes and the requests it receives, allowing it to adapt and improve its operations.

[00138] The computation engine (214) incorporates advanced artificial intelligence and machine learning (AI/ML) capabilities to enhance the system's efficiency in processing network performance data requests. These capabilities directly support the core invention in several key ways:
[00139] Pattern Recognition: The AI/ML engine analyses patterns in user requests to optimize data storage and retrieval strategies. For example, it may identify frequently requested metrics for specific network segments and pre-emptively calculate and cache this data, reducing response times for common queries.
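One plausible (and deliberately simple) reading of this pattern-recognition step is sketched below: a frequency counter over request identifiers whose most common entries become candidates for pre-computation during off-peak hours. A production engine would presumably rely on learned models rather than a fixed top-N rule.

    from collections import Counter

    request_history = Counter()

    def record_request(request_id: str) -> None:
        # Track how often each identifier is requested.
        request_history[request_id] += 1

    def precompute_candidates(top_n: int = 5) -> list:
        # Identifiers requested most often become candidates for
        # pre-emptive calculation and caching during off-peak hours.
        return [rid for rid, _ in request_history.most_common(top_n)]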
[00140] Predictive Analytics: By forecasting future network performance based on historical data, the system can proactively calculate and store data likely to be requested soon. This capability aligns with the pre-computation aspect of the invention, further improving response times.

[00141] Anomaly Detection: The AI/ML engine can identify unusual patterns in network performance data, potentially flagging issues before they impact users. This feature enhances the value of the pre-computed data by adding proactive monitoring capabilities.

[00142] Request Classification: By classifying incoming requests (e.g., as "real-time monitoring" or "historical analysis"), the system can optimize its processing strategy for each request type, improving overall efficiency.

[00143] Natural Language Processing (NLP): If implemented, NLP capabilities allow the system to interpret plain language queries, extracting relevant parameters to match with pre-computed data or initiate new calculations as needed.

[00144] Optimization Techniques: The engine employs various optimization methods, including reinforcement learning for storage decisions and adaptive thresholding for performance metrics. These techniques help maintain an efficient balance between data pre-computation, storage, and real-time processing.

[00145] The AI/ML engine continuously refines its models and strategies based on new data and requests, enabling the system to adapt to changing network conditions and user needs over time. This ongoing learning process enhances the core invention's ability to provide fast, accurate, and insightful network performance data.

[00146] By integrating these AI/ML capabilities, the system not only responds to user requests more efficiently but also provides proactive insights. For instance, it might alert users to potential future issues based on predicted trends or suggest optimal times for planned maintenance based on historical performance patterns.
[00147] In an exemplary embodiment, when a new request is received, the AI/ML engine checks if the received request is mapped with a flow ID of previously executed requests stored in the data lake (220). The flow ID is a unique identifier that encapsulates key characteristics of a request. For example, a flow ID might be structured as follows:
"NET_SEG_001_METRIC_LATENCY_TIMERANGE_24H_DEVICE_ALL_TIMESTAMP_20240620"

[00148] The flow ID represents a request for latency data (METRIC_LATENCY) for network segment 001 (NET_SEG_001), covering the last 24 hours (TIMERANGE_24H), for all devices (DEVICE_ALL), generated on June 20, 2024 (TIMESTAMP_20240620).
[00149] The AI/ML engine breaks down the new request into similar components and constructs a comparable identifier. It then compares this newly constructed identifier with the stored flow IDs in the data lake (220). The comparison process involves:
a. Parsing the components of both the new and stored flow IDs.
b. Comparing each component (e.g., network segment, metric, time range) for exact or partial matches.
c. Using machine learning algorithms, such as similarity scoring or fuzzy matching, to identify close matches even if there are slight variations.

[00150] For instance, if a new request comes in for latency data on the same network segment for the last 48 hours, the AI/ML engine might construct an identifier like:
"NET_SEG_001_METRIC_LATENCY_TIMERANGE_48H_DEVICE_ALL_TIMESTAMP_20240622"
The engine would then:
a. Recognize the matching network segment and metric.
b. Identify the difference in time range.
c. Determine if the existing data can partially fulfill the new request.
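A minimal sketch of this component-wise comparison is given below, assuming the flow-ID layout shown in paragraph [00147]; the similarity measure is reduced here to the fraction of exactly matching components, whereas the specification contemplates richer fuzzy matching.

    import re

    FLOW_ID_PATTERN = re.compile(
        r"NET_SEG_(?P<segment>\w+?)_METRIC_(?P<metric>\w+?)"
        r"_TIMERANGE_(?P<timerange>\w+?)_DEVICE_(?P<device>\w+?)"
        r"_TIMESTAMP_(?P<timestamp>\d+)"
    )

    def parse_flow_id(flow_id: str) -> dict:
        match = FLOW_ID_PATTERN.fullmatch(flow_id)
        return match.groupdict() if match else {}

    def similarity(new_id: str, stored_id: str) -> float:
        # Fraction of components matching exactly; the timestamp is
        # ignored because it records generation time, not request content.
        new, stored = parse_flow_id(new_id), parse_flow_id(stored_id)
        keys = ("segment", "metric", "timerange", "device")
        return sum(new.get(k) == stored.get(k) for k in keys) / len(keys)

    new_request = "NET_SEG_001_METRIC_LATENCY_TIMERANGE_48H_DEVICE_ALL_TIMESTAMP_20240622"
    stored = "NET_SEG_001_METRIC_LATENCY_TIMERANGE_24H_DEVICE_ALL_TIMESTAMP_20240620"
    print(similarity(new_request, stored))  # 0.75: only the time range differs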
[00151] If an exact or sufficiently close match is found, it indicates that the output data for the received request is already present in the data lake (220) and can be retrieved directly. In cases of partial matches, the AI/ML engine may decide to retrieve the existing data and supplement it with additional calculations only for the missing time range, optimizing the response time and computational resources.

[00152] This AI/ML-driven approach allows for intelligent and flexible matching, improving the system's ability to leverage pre-computed data even when requests are not identical but significantly similar to previous ones. In exemplary embodiments, some specific examples of how the system processes different types of network performance data requests are given below:
[00153] Example 1: Real-time Latency Monitoring
a. Request: A network administrator requests real-time latency data for a critical network segment over the last hour, updated every minute.
b. System Process:
c. The data collection engine (212) receives the request via the GUI (402).
d. The computation engine (214) generates a request identifier: "NET_SEG_CRITICAL_METRIC_LATENCY_TIMERANGE_1H_INTERVAL_1MIN_REALTIME".
e. The engine searches the data lake (220) for matching or similar flow IDs.
f. If found, it retrieves the pre-computed data for the last hour.
g. The engine then initiates a real-time data collection process for the most recent minute.
h. It combines the historical and real-time data, updating the GUI every minute with the latest information.
i. The new data is continuously stored in the data lake (220) for future use.
[00154] Example 2: Historical Bandwidth Utilization Analysis
a. Request: A capacity planning team requests bandwidth utilization data for all network segments over the past month, aggregated by day.
b. System Process:
c. The request is received and a request identifier is generated: "NET_SEG_ALL_METRIC_BANDWIDTH_TIMERANGE_1MONTH_AGGREGATE_DAILY".
d. The computation engine (214) searches for pre-computed data in the data lake (220).
e. If complete data is not available, the engine: a. Retrieves any available pre-computed daily aggregates. b. Calculates missing daily aggregates from raw data stored in the data lake. c. Combines pre-computed and newly calculated data.
f. The engine generates visualizations (e.g., line graphs) showing daily bandwidth utilization trends for each network segment.
g. The newly calculated daily aggregates are stored in the data lake (220) for future requests.
[00155] Example 3: Predictive Analysis of Network Congestion
a. Request: An operations team requests a prediction of potential network congestion points for the next 24 hours.
b. System Process:
c. The request identifier is generated: "NET_SEG_ALL_METRIC_CONGESTION_TIMERANGE_NEXT24H_PREDICTIVE".
d. The AI/ML component of the computation engine (214): a. Retrieves historical congestion data from the data lake (220). b. Analyzes patterns using machine learning models (e.g., LSTM networks). c. Incorporates current network status and known future events (e.g., scheduled backups). d. Generates predictions for each network segment.
e. The engine creates a heat map of the network, highlighting potential congestion points.
f. The predictions and visualization are presented to the user via the GUI (402).
g. The prediction data is stored in the data lake (220) and compared against actual results for model improvement.
[00156] Example 4: Cross-metric Performance Analysis
a. Request: A network optimization team requests a correlation analysis between latency, packet loss, and application performance for a specific business-critical application over the past week.
b. System Process:
c. The request identifier is generated: "APP_CRITICAL1_METRICS_LATENCY_PACKETLOSS_APPPERF_TIMERANGE_1WEEK_CORRELATION".
d. The computation engine (214): a. Retrieves relevant pre-computed data for each metric from the data lake (220). b. Performs correlation analysis using statistical methods. c. Identifies any strong correlations or anomalies.
e. The engine generates a correlation matrix and scatter plots visualizing the relationships between metrics.
f. It also provides a summary of key findings, such as periods of high correlation between latency spikes and application performance degradation.
g. The analysis results are presented via the GUI (402) and stored in the data lake (220) for future reference.
[00157] These examples demonstrate how the system handles various types of network performance data requests, from real-time monitoring to historical analysis and predictive modelling. They showcase the system's ability to combine pre-computed data with real-time processing, leverage AI/ML capabilities for advanced analytics, and provide meaningful visualizations and insights to users.

[00158] It is important to note that the embodiments described above are merely exemplary, and various modifications and variations may be made to the disclosed system (108) without departing from the scope of the present disclosure. The specific components, algorithms, and techniques mentioned in the claims and description may be replaced or combined with other suitable equivalents or alternatives, as deemed appropriate by those skilled in the art. A method of the present subject matter is described further with reference to FIG. 3.
[00159] FIG. 3 illustrates an example flow diagram (300) for pre-computation of network performance data, in accordance with an embodiment of the present disclosure.

[00160] A method for pre-computation of network performance data is disclosed. The method may be implemented by a system (108) comprising a memory (204) and one or more processors (202) configured to fetch and execute computer-readable instructions stored in the memory (204).
[00161] In step 302, a data collection engine (212) of the system (108) may receive a request for network performance data from a user (102) via a graphical user interface (GUI) (402). The request may pertain to specific network performance metrics, time ranges, network segments, or device identifiers that the user (102) seeks to analyze. The data collection engine (212) may be responsible for gathering and processing the user request before forwarding it to other components of the system (108) for further analysis and computation.
[00162] In step 304, a computation engine (214) of the system (108) may process the received request for network performance data to determine whether corresponding output data is already present in a data lake (220). The data lake (220) may serve as a centralized storage system for storing output data of previously executed requests. By maintaining a repository of previously computed results, the data lake (220) may enable efficient retrieval of output data when the same or similar requests are received again, without the need to re-execute the request.

[00163] The computation engine (214) may employ advanced algorithms and data processing techniques to accurately determine whether the received request was previously executed. These techniques may include pattern matching, hashing of request parameters, or other efficient search methods to quickly identify potential matches in the data lake (220).
[00164] In step 306, if the computation engine (214) determines that the corresponding output data for the received request is present in the data lake (220), it may retrieve the output data from the data lake (220). This retrieval process involves accessing the appropriate storage location within the data lake (220) and extracting the relevant output data associated with the received request. The relevant data typically includes:
a. Network performance metrics: This could include specific measurements such as latency, throughput, packet loss rates, jitter, or bandwidth utilization, depending on the metrics requested.
b. Time-series data: The data points corresponding to the time range specified in the request, which could span hours, days, weeks, or longer periods.
c. Network segment information: Data pertaining to the specific network segments or devices identified in the request.
d. Aggregated statistics: Pre-computed summaries such as averages, medians, percentiles, or other statistical measures relevant to the requested metrics.
e. Anomaly flags: Any pre-identified abnormal patterns or threshold breaches within the requested data set.
f. Metadata: Additional contextual information such as data collection timestamps, data quality indicators, or processing annotations.
[00165] For example, if the original request was for average latency data of a specific network segment over the past 24 hours, the relevant data would include the time-stamped latency measurements for that segment, the pre-calculated average, and possibly additional statistics like minimum and maximum values for the specified time period.

[00166] The computation engine (214) ensures that only the data directly relevant to fulfilling the specific request is extracted and prepared for presentation to the user, optimizing both retrieval speed and the relevance of the information provided.
[00167] In step 308, if the computation engine (214) determines that the corresponding output data for the received request is not present in the data lake (220), it may calculate new output data for the request. The computation engine (214) may be responsible for executing the necessary computations and processing steps to generate the requested output data.
[00168] The computation engine (214) may perform the calculations in a distributed manner, leveraging the capabilities of multiple computing nodes to process the request efficiently. Distributed computing may involve dividing the request into smaller sub-tasks, distributing these sub-tasks across multiple computing nodes, and executing them in parallel. This parallel execution may allow for faster processing and improved overall performance.

[00169] When executing the request in a distributed manner, the computation engine (214) may first divide the request into a plurality of sub-tasks. Each sub-task may represent a portion of the overall computation required to generate the output data. The division of the request into sub-tasks may be based on various factors, such as the complexity of the request, the available computing resources, and the desired level of parallelism.
[00170] Once the request is divided into sub-tasks, the computation engine (214) may distribute these sub-tasks across multiple computing nodes. The computing nodes may be separate physical machines or virtual instances capable of performing computations independently. The distribution of sub-tasks may be done in a balanced manner to ensure optimal utilization of the available computing resources.

[00171] After distributing the sub-tasks, the computation engine (214) may initiate the execution of these sub-tasks in parallel across the multiple computing nodes. Each computing node may process its assigned sub-task independently, performing the necessary calculations and generating partial output data specific to that sub-task.

[00172] Once all the sub-tasks have been executed, and the partial output data has been generated by each computing node, the computation engine (214) may aggregate the partial output data to obtain the final calculated new output data. The aggregation process may involve collecting and combining the partial results from all the computing nodes to form a coherent and complete set of output data.
[00173] In step 310, after the new output data has been calculated by the computation engine (214), it may be stored in the data lake (220) for future reference. Storing the calculated new output data in the data lake (220) may allow for efficient retrieval and reuse of the results when the same or similar requests are received again in the future.

[00174] In step 312, the retrieved corresponding output data or the calculated new output data may be displayed to the user (102) via the graphical user interface (GUI) (402). The GUI may present the output data in a user-friendly and intuitive manner, allowing the user to easily understand and interpret the results. The GUI may include various visual elements, such as charts, graphs, tables, or other relevant representations, to effectively convey the network performance data to the user.
[00175] To facilitate efficient retrieval and mapping of requests to their corresponding output data, the method may further include a step of generating a flow identifier (ID) associated with each set of output data stored in the data lake (220). The flow ID may serve as a unique identifier that links a specific request to its corresponding output data.

[00176] When a new request is received, the computation engine (214) may generate a request identifier for the received request for network performance data. It may then compare this generated request identifier with stored flow IDs in the data lake (220). By comparing the characteristics and parameters of the received request with the flow IDs, the computation engine (214) may determine whether the received request matches any of the previously executed requests. If a matching flow ID is found, it indicates that the corresponding output data for the received request is already present in the data lake (220) and can be retrieved directly.
[00177] The computation engine (214) may also be responsible for generating a flow ID for each newly calculated set of output data. When the computation engine (214) calculates new output data for a request that was not previously executed, it may generate a unique flow ID and store it along with the calculated new output data in the data lake (220). This flow ID may be used for future reference and mapping of similar requests.

[00178] The method may provide several benefits and advantages in the context of network performance data computation and analysis. One potential benefit is the efficient utilization of computing resources. By leveraging the data lake (220) to store previously computed output data, the method may avoid redundant calculations and save significant computational time and resources when the same or similar requests are received again.

[00179] Another potential advantage of the method is improved response time and enhanced user experience. With the ability to retrieve previously computed output data from the data lake (220), the method may provide faster results to the user (102). Instead of executing the request from scratch, the method may quickly fetch the relevant output data from the data lake (220) and present it to the user (102), reducing the overall response time.
[00180] The use of advanced algorithms and data processing techniques in the computation engine (214) may bring additional benefits in terms of accurate determination of previously executed requests. By employing efficient search and matching methods, the method may effectively identify similar or identical requests and retrieve their corresponding output data from the data lake (220). This may minimize the need for unnecessary computations and further optimize the performance of the method.

[00181] The distributed computation approach employed by the computation engine (214) may also contribute to the efficiency and scalability of the method. By dividing the request into sub-tasks and executing them in parallel across multiple computing nodes, the method may achieve faster processing and handle complex requests more effectively. The distributed computation may allow for better utilization of available computing resources and may enable the method to scale seamlessly as the volume and complexity of requests increase.
[00182] Furthermore, the generation and utilization of flow IDs in the method may provide a structured and organized approach to managing and retrieving output data. By associating each set of output data with a unique flow ID, the method may efficiently map requests to their corresponding results, enabling quick retrieval and reuse of previously computed data.

[00183] It is important to note that the steps and features described above are merely exemplary, and various modifications and variations may be made to the disclosed method without departing from the scope of the present disclosure. The specific algorithms, techniques, and implementation details mentioned in the claims and description may be adapted or combined with other suitable approaches, as deemed appropriate by those skilled in the art.

[00184] FIG. 4 illustrates an example block diagram (400) of a system architecture for pre-computation of network performance data, in accordance with an embodiment of the present disclosure.
[00185] As illustrated in FIG. 4, in an embodiment, the system (108) may initialize itself. Further, the system (108) may receive a request for network performance data from the user (102) via a graphical user interface (GUI) (402). The computation engine (214) may process the received request to determine if corresponding output data is present in a data lake (220). Based on a positive determination, the system (108) may retrieve the corresponding output data from the data lake (220). Further, the system (108) may display the retrieved output data to the user (102) via the GUI (402) and terminate the process. Based on a negative determination, the computation engine (214) may calculate new output data for the request and store the calculated new output data in the data lake (220). Further, the system (108) may display the calculated new output data to the user (102) via the GUI (402) and terminate the process.

[00186] In an embodiment, the system (108) may generate a flow identifier (ID) associated with the calculated new output data stored in the data lake (220). Further, the computation engine (214) may map an incoming request with the flow IDs stored in the data lake (220) to identify if the received request was executed previously.

[00187] In an embodiment, to identify whether the request was previously executed, the computation engine (214) may generate a request identifier for the received request and compare it with the stored flow IDs in the data lake (220). Based on this comparison, the computation engine (214) determines whether the request was previously executed. The computation engine (214) may employ advanced algorithms and data processing techniques to provide accurate results. Additionally, based on the runtime feedback, the computation engine (214) may further improve the accuracy of its results over time.
[00188] Based on a positive match, i.e., upon determination that the request was previously executed, the system (108) may retrieve the corresponding output data from the data lake (220) and display it to the user (102) via the GUI (402). Further, based on a negative match, i.e., upon determination that the request was not previously executed, the computation engine (214) may calculate new output data, generate a flow ID for the calculated new output data, and store the calculated new output data along with the generated flow ID in the data lake (220). The system (108) may then display the calculated new output data to the user (102) via the GUI (402). Thus, by pre-computing the network performance data, resource utilization may be optimized, troubleshooting efficiency may be improved, and optimal performance and reliability may be ensured.

[00189] Thus, with the deployment of the advanced computation engine (214), the same request does not need to be executed again, and the output corresponding to the initial execution of the request can be used in such instances. In this manner, the overall computational load to execute the request is reduced, and the execution becomes faster compared to conventional systems, thereby improving the performance of the network performance data analysis.
[00190] FIG. 5 illustrates an exemplary flow diagram of a method (500) for pre-computation of network performance data, in accordance with embodiments of the present disclosure.

[00191] At step (502), the method (500) includes receiving, by a data collection engine (212), a request for network performance data from a user (102) via a graphical user interface (GUI) (402). This step involves the initial interaction between the user and the system, where the user inputs their request for specific network performance data. The GUI (402) may provide various input options such as dropdown menus, text fields, or interactive network diagrams to allow the user to specify the desired network performance metrics, time ranges, network segments, or device identifiers. For example, a user might request average latency data for a specific network segment over the past 24 hours.
[00192] At step (504), the method (500) includes processing, by a computation engine (214), the received request for network performance data to determine whether corresponding output data is present in a data lake (220). This step involves analyzing the received request to extract key parameters and searching the data lake for matching data. The processing step may further comprise:
a. Extracting request parameters from the received request, which may include time ranges, network segments, performance metrics, and device identifiers.
b. Generating a request identifier based on these extracted parameters, which serves as a unique tag for the specific combination of parameters in the request.
c. Searching the data lake for this generated request identifier.
d. Determining that corresponding output data is present if the generated request identifier is found in the data lake.
[00193] This approach allows for efficient matching of incoming requests with previously computed data, potentially saving significant computation time.

[00194] At step (506), the method (500) includes retrieving, by the computation engine (214), the corresponding output data from the data lake (220) when the corresponding output data is present in the data lake (220). This step is executed when a match is found in the data lake, allowing the system to quickly provide pre-computed results without the need for new calculations. The data lake serves as a centralized storage system, enabling quick retrieval of previously calculated output data for identical requests, thus improving system efficiency.

[00195] At step (508), the method (500) includes calculating, by the computation engine (214), new output data for the received request for network performance data when the corresponding output data is not present in the data lake (220). This step is executed when no matching data is found in the data lake, requiring fresh computation of the requested network performance data. The calculation is triggered when the generated request identifier is not found in the data lake.
[00196] Furthermore, the calculation of new output data may involve a distributed computing approach:
a. Dividing the received request into a plurality of sub-tasks.
b. Distributing these sub-tasks across multiple computing nodes.
c. Executing the sub-tasks in parallel across these nodes to calculate partial output data for each sub-task.
d. Aggregating the calculated partial output data from all nodes to obtain the final calculated new output data.
[00197] This distributed approach allows for efficient processing of complex
requests and scalability of the system.
[00198] At step (510), the method (500) includes storing the calculated new output data in the data lake (220). This step ensures that newly computed data is saved for potential future use, contributing to the system's efficiency over time. This step may also involve:
a. Generating a flow ID associated with the calculated new output data.
b. Storing this generated flow ID along with the calculated new output data in the data lake.
[00199] The flow ID serves as an additional identifier that can be used in
future requests to quickly match and retrieve relevant data.
[00200] At step (512), the method (500) includes displaying, via the graphical user interface (GUI) (402), either the retrieved corresponding output data or the calculated new output data to the user (102). This final step presents the requested network performance data to the user in a visually accessible format, completing the request-response cycle.
[00201] In another exemplary embodiment, a computing device (104) communicatively coupled to a system (108) for pre-computation of network performance data via a network (106) is described. This computing device comprises a memory (204) and one or more processors (202) configured to fetch and execute computer-readable instructions stored in the memory (204) to perform the method (500) as described above. This embodiment allows for the implementation of the pre-computation method on various user devices, extending the benefits of efficient network performance data retrieval and calculation to end-users.

[00202] The present disclosure provides technical advancement related to network performance analysis and data retrieval systems. This advancement addresses the limitations of existing solutions by implementing a pre-computation and efficient retrieval mechanism for network performance data. The disclosure involves a sophisticated request processing system, distributed computation capabilities, and an intelligent data storage and retrieval mechanism, which offer significant improvements in response time and computational efficiency. By implementing a data lake with flow ID mapping and distributed calculation methods, the disclosed invention enhances the speed and accuracy of network performance data analysis, resulting in improved network monitoring capabilities, faster troubleshooting, and more efficient resource utilization in network management scenarios.

[00203] FIG. 6 illustrates an example computer system (600) in which or with which the embodiments of the present disclosure may be implemented.
[00204] As shown in FIG. 6, the computer system (600) may include an external storage device (610), a bus (620), a main memory (630), a read-only memory (640), a mass storage device (650), a communication port(s) (660), and a processor (670). A person skilled in the art will appreciate that the computer system (600) may include more than one processor and communication ports. The processor (670) may include various modules associated with embodiments of the present disclosure. The communication port(s) (660) may be any of an RS-232 port for use with a modem-based dialup connection, a 10/100 Ethernet port, a Gigabit or 10 Gigabit port using copper or fibre, a serial port, a parallel port, or other existing or future ports. The communication port(s) (660) may be chosen depending on a network, such as a Local Area Network (LAN), Wide Area Network (WAN), or any network to which the computer system (600) connects.
[00205] In an embodiment, the main memory (630) may be Random Access Memory (RAM), or any other dynamic storage device commonly known in the art. The read-only memory (640) may be any static storage device(s) e.g., but not limited to, a Programmable Read Only Memory (PROM) chip for storing static information e.g., start-up or basic input/output system (BIOS) instructions for the processor (670). The mass storage device (650) may be any current or future mass storage solution, which can be used to store information and/or instructions. Exemplary mass storage solutions include, but are not limited to, Parallel Advanced Technology Attachment (PATA) or Serial Advanced Technology Attachment (SATA) hard disk drives or solid-state drives (internal or external, e.g., having Universal Serial Bus (USB) and/or Firewire interfaces).
[00206] In an embodiment, the bus (620) may communicatively couple the processor(s) (670) with the other memory, storage, and communication blocks. The bus (620) may be, e.g., a Peripheral Component Interconnect (PCI) / PCI Extended (PCI-X) bus, Small Computer System Interface (SCSI), Universal Serial Bus (USB), or the like, for connecting expansion cards, drives, and other subsystems as well as other buses, such as a front side bus (FSB), which connects the processor (670) to the computer system (600).
[00207] In another embodiment, operator and administrative interfaces, e.g., a display, keyboard, and cursor control device may also be coupled to the bus (620) to support direct operator interaction with the computer system (600). Other operator and administrative interfaces can be provided through network connections connected through the communication port(s) (660). Components described above are meant only to exemplify various possibilities. In no way should the aforementioned exemplary computer system (600) limit the scope of the present disclosure.

[00208] The method and system of the present disclosure may be implemented in a number of ways. For example, the methods and systems of the present disclosure may be implemented by software, hardware, firmware, or any combination of software, hardware, and firmware. The above-described order for the steps of the method is for illustration only, and the steps of the method of the present disclosure are not limited to the order specifically described above unless specifically stated otherwise. Further, in some embodiments, the present disclosure may also be embodied as programs recorded in a recording medium, the programs including machine-readable instructions for implementing the methods according to the present disclosure. Thus, the present disclosure also covers a recording medium storing a program for executing the method according to the present disclosure.
[00209] While considerable emphasis has been placed herein on the preferred embodiments, it will be appreciated that many embodiments can be made and that many changes can be made in the preferred embodiments without departing from the principles of the disclosure. These and other changes in the preferred embodiments of the disclosure will be apparent to those skilled in the art from the disclosure herein, whereby it is to be distinctly understood that the foregoing descriptive matter is to be interpreted merely as illustrative of the disclosure and not as a limitation.
ADVANTAGES OF THE PRESENT DISCLOSURE
[00210] The present disclosure provides technical advancement related to a system and a method where a request for network performance data from a user is received by a data collection engine via a graphical user interface (GUI), and a computation engine processes the received request to determine whether corresponding output data is present in a data lake. If the corresponding output data is present in the data lake, the computation engine retrieves the output data from the data lake rather than executing the request again, significantly reducing response time and computational load.

[00211] The present disclosure provides technical advancement related to a system and a method where the computation engine is configured to efficiently determine if corresponding output data for the received request is present in the data lake. This determination involves extracting request parameters, generating a unique request identifier, and searching the data lake for this identifier. If the corresponding output data is already present in the data lake, it is retrieved and displayed to the user via the GUI, enhancing system efficiency and user experience.

[00212] The present disclosure provides technical advancement related to a system and a method where the system generates a flow identifier (ID) associated with the calculated new output data stored in the data lake. The computation engine maps the received request with stored flow IDs to determine whether the received request was previously executed. This approach allows for more flexible and efficient data retrieval, potentially improving the system's ability to handle similar but not identical requests.

[00213] The present disclosure provides technical advancement related to a system and a method where, upon determining that the corresponding output data is not present in the data lake, the computation engine calculates new output data for the request in a distributed manner. This involves dividing the request into sub-tasks, distributing them across multiple computing nodes, and executing them in parallel. The calculated new output data is then stored in the data lake along with a generated flow ID and displayed to the user via the GUI. This distributed approach allows for efficient handling of complex requests and improves the scalability of the system.
WE CLAIM:
1. A system (108) for pre-computation of network performance data, comprising:
a memory (204);
one or more processors (202) configured to execute instructions stored in the memory (204) to:
receive, by a data collection engine (212), a request for network performance data from a user (102) via a graphical user interface (GUI) (402);
process, by a computation engine (214), the received request for network performance data to determine whether corresponding output data is present in a data lake (220);
retrieve, by the computation engine (214), the corresponding output data from the data lake (220) when the corresponding output data is present in the data lake (220);
calculate, by the computation engine (214), new output data for the received request for network performance data when the corresponding output data is not present in the data lake (220);
store the calculated new output data in the data lake (220); and
display, via the graphical user interface (GUI) (402), either the retrieved corresponding output data or the calculated new output data to the user (102).
2. The system (108) as claimed in claim 1, wherein the computation engine (214) is further configured to:
extract request parameters from the received request for network performance data, wherein the extracted request parameters comprise at least one of: a time range, network segments, performance metrics, and device identifiers;
generate a request identifier for the received request for network performance data based on the extracted request parameters;
search the data lake (220) for the generated request identifier; and
determine that the corresponding output data is present in the data lake (220) if the generated request identifier is found in the data lake (220).

3. The system (108) as claimed in claim 2, wherein the computation engine (214) is configured to calculate the new output data when the generated request identifier is not found in the data lake (220).

4. The system (108) as claimed in claim 1, wherein the data lake (220) is configured to serve as a centralized storage system for storing the calculated new output data, thereby enabling retrieval of the calculated new output data as the corresponding output data when a subsequent identical request for network performance data is received.
5. The system (108) as claimed in claim 1, wherein the system (108) is further configured to generate a flow ID associated with the calculated new output data stored in the data lake (220).
6. The system (108) as claimed in claim 5, wherein the computation engine (214) is further configured to:
generate the request identifier for the received request for network performance data;
compare the generated request identifier with stored flow IDs in the data lake (220); and
determine that the corresponding output data is present in the data lake (220) if a matching flow ID is found.

7. The system (108) as claimed in claim 1, wherein the computation engine (214) is further configured to:
generate a flow ID for the calculated new output data; and
store the generated flow ID along with the calculated new output data in the data lake (220).

8. The system (108) as claimed in claim 1, wherein the computation engine (214) is a distributed computation engine configured to:
divide the received request for network performance data into a plurality of sub-tasks;
distribute the plurality of sub-tasks across multiple computing nodes;
execute the plurality of sub-tasks in parallel across the multiple computing nodes to calculate partial output data for each sub-task of the plurality of sub-tasks; and
aggregate the calculated partial output data from the multiple computing nodes to obtain the calculated new output data.
9. A method (500) for pre-computation of network performance data, comprising:
receiving (502), by a data collection engine (212), a request for network performance data from a user (102) via a graphical user interface (GUI) (402);
processing (504), by a computation engine (214), the received request for network performance data to determine whether corresponding output data is present in a data lake (220);
retrieving (506), by the computation engine (214), the corresponding output data from the data lake (220) when the corresponding output data is present in the data lake (220);
calculating (508), by the computation engine (214), new output data for the received request for network performance data when the corresponding output data is not present in the data lake (220);
storing (510) the calculated new output data in the data lake (220); and
displaying (512), via the graphical user interface (GUI) (402), either the retrieved corresponding output data or the calculated new output data to the user (102).
10. The method (500) as claimed in claim 9, wherein processing (504) the received request for network performance data comprises:
extracting, by the computation engine (214), request parameters from the received request for network performance data, wherein the extracted request parameters comprise at least one of: a time range, network segments, performance metrics, and device identifiers;
generating, by the computation engine (214), a request identifier for the received request for network performance data based on the extracted request parameters;
searching the data lake (220) for the generated request identifier; and
determining that the corresponding output data is present in the data lake (220) if the generated request identifier is found in the data lake (220).
11. The method (500) as claimed in claim 9, wherein calculating the new output data comprises using the computation engine (214) to calculate the new output data when the generated request identifier is not found in the data lake (220).

12. The method (500) as claimed in claim 9, wherein the data lake (220) serves as a centralized storage system for storing the calculated new output data, thereby enabling retrieval of the calculated new output data as the corresponding output data when a subsequent identical request for network performance data is received.

13. The method (500) as claimed in claim 9, further comprising generating a flow ID associated with the calculated new output data stored in the data lake (220).

14. The method (500) as claimed in claim 13, further comprising:
generating, by the computation engine (214), the request identifier for the received request for network performance data;
comparing the generated request identifier with stored flow IDs in the data lake (220); and
determining that the corresponding output data is present in the data lake (220) if a matching flow ID is found.

15. The method (500) as claimed in claim 9, further comprising:
generating, by the computation engine (214), a flow ID for the calculated new output data; and
storing the generated flow ID along with the calculated new output data in the data lake (220).
16. The method (500) as claimed in claim 9, wherein calculating (508) the new output data for the received request for network performance data comprises:
dividing the received request for network performance data into a plurality of sub-tasks;
distributing the plurality of sub-tasks across multiple computing nodes;
executing the plurality of sub-tasks in parallel across the multiple computing nodes to calculate partial output data for each sub-task of the plurality of sub-tasks; and
aggregating the calculated partial output data from the multiple computing nodes to obtain the calculated new output data.
17. A computing device (104) communicatively coupled to a system (108) for pre-computation of network performance data via a network (106), wherein the system (108) comprises:
a memory (204); and
one or more processors (202) configured to fetch and execute computer-readable instructions stored in the memory (204) to perform the method (500) as claimed in claim 9.

Documents

Application Documents

# Name Date
1 202321051989-STATEMENT OF UNDERTAKING (FORM 3) [02-08-2023(online)].pdf 2023-08-02
2 202321051989-PROVISIONAL SPECIFICATION [02-08-2023(online)].pdf 2023-08-02
3 202321051989-FORM 1 [02-08-2023(online)].pdf 2023-08-02
4 202321051989-DRAWINGS [02-08-2023(online)].pdf 2023-08-02
5 202321051989-DECLARATION OF INVENTORSHIP (FORM 5) [02-08-2023(online)].pdf 2023-08-02
6 202321051989-FORM-26 [28-10-2023(online)].pdf 2023-10-28
7 202321051989-FORM-26 [03-06-2024(online)].pdf 2024-06-03
8 202321051989-FORM 13 [03-06-2024(online)].pdf 2024-06-03
9 202321051989-AMENDED DOCUMENTS [03-06-2024(online)].pdf 2024-06-03
10 202321051989-Request Letter-Correspondence [04-06-2024(online)].pdf 2024-06-04
11 202321051989-Power of Attorney [04-06-2024(online)].pdf 2024-06-04
12 202321051989-Covering Letter [04-06-2024(online)].pdf 2024-06-04
13 202321051989-CORRESPONDENCE(IPO)-(WIPO DAS)-12-07-2024.pdf 2024-07-12
14 202321051989-FORM-5 [31-07-2024(online)].pdf 2024-07-31
15 202321051989-DRAWING [31-07-2024(online)].pdf 2024-07-31
16 202321051989-CORRESPONDENCE-OTHERS [31-07-2024(online)].pdf 2024-07-31
17 202321051989-COMPLETE SPECIFICATION [31-07-2024(online)].pdf 2024-07-31
18 202321051989-ORIGINAL UR 6(1A) FORM 26-190924.pdf 2024-09-23
19 202321051989-FORM 18 [07-10-2024(online)].pdf 2024-10-07
20 Abstract-1.jpg 2024-10-10
21 202321051989-FORM 3 [11-11-2024(online)].pdf 2024-11-11
22 202321051989-FORM 3 [13-11-2024(online)].pdf 2024-11-13