
System And Method For Intelligent Distributed Computing Of Network Performance Data

Abstract: The present disclosure provides a system (108) and a method (500) for performing on-demand network performance management. The method (500) includes receiving (502) at least one request comprising one or more parameters and one or more tasks from a user equipment (UE) (104). The method (500) includes collecting (504) the network performance data from one or more data sources. The method (500) includes splitting (506) each of the received one or more tasks into one or more sub-tasks. The method (500) includes assigning (508) the one or more sub-tasks along with the collected network performance data across one or more computing nodes (316). Each computing node (316) is configured to perform the one or more assigned sub-tasks to generate a modified network performance data. The method (500) includes analysing (512) the received modified network performance data to generate a view of the received modified network performance data. FIGURE 5


Patent Information

Application #:
Filing Date: 04 September 2023
Publication Number: 10/2025
Publication Type: INA
Invention Field: COMPUTER SCIENCE
Status:
Parent Application:

Applicants

JIO PLATFORMS LIMITED
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India.

Inventors

1. BHATNAGAR, Aayush
Tower-7, 15B, Beverly Park, Sector-14 Koper Khairane, Navi Mumbai - 400701, Maharashtra, India.
2. BHATNAGAR, Pradeep Kumar
Tower-7, 15B, Beverly Park, Sector-14 Koper Khairane, Navi Mumbai - 400701, Maharashtra, India.
3. MURARKA, Ankit
W-16, F-1603, Lodha Amara, Kolshet Road, Thane West - 400607, Maharashtra, India.
4. SAROHI, Meenakshi
2B-62, Narmada, Kalpataru, Riverside, Takka, Panvel, Raigargh - 410206, Maharashtra, India.
5. BHANWRIA, Mohit
39, Behind Honda Showroom, Jobner Road, Phulera, Jaipur - 303338, Rajasthan, India.
6. GAYKI, Vinay
259, Bajag Road, Gadasarai, District -Dindori - 481882, Madhya Pradesh, India.
7. KUMAR, Durgesh
Mohalla Ramanpur, Near Prabhat Junior High School, Hathras, Uttar Pradesh -204101, India.
8. BHUSHAN, Shashank
Fairfield 1604, Bharat Ecovistas, Shilphata, NH48, Thane - 421204, Maharashtra, India.
9. KUMAR, Kothagundla Vinay
2-81/A, Vijaya Bhaskara Fancy and Metal Stores, Opp Vinayaka Temple, Sai Krishna Theater Road, Kodad, Suryapet Dist, Telangana - 508206, India.
10. KHADE, Aniket
X-29/9, Godrej Creek Side Colony, Phirojshanagar, Vikhroli East - 400078, Mumbai, Maharashtra, India.
11. KISHORE, Jugal
C 302, Mediterranea CHS Ltd, Casa Rio, Palava, Dombivli - 421204, Maharashtra, India.
12. KUMAR, Rahul
A-154, Shradha Puri Phase-2, Kanker Khera, Meerut - 250001, Uttar Pradesh, India.
13. KUMAR, Gaurav
1617, Gali No. 1A, Lajjapuri, Ramleela Ground, Hapur - 245101, Uttar Pradesh, India.
14. MEENA, Sunil
D-29/1, Chitresh Nagar, Borkhera District-Kota, Rajasthan - 324001, India
15. SAHU, Kishan
Ajay Villa, Gali No. 2 Ambedkar Colony, Bikaner, Rajasthan - 334003, India.
16. RAJANI, Manasvi
C-22, Old Jawahar Nagar, Kota, Rajasthan - 324005, India.
17. GANVEER, Chandra
Village - Gotulmunda, Post - Narratola, Dist. - Balod - 491228, Chhattisgarh, India.
18. KUMAR, Yogesh
Village-Gatol, Post-Dabla, Tahsil-Ghumarwin, District-Bilaspur, Himachal Pradesh - 174021, India.
19. TELGOTE, Kunal
29, Nityanand Nagar, Nr. Tukaram Hosp., Gaurakshan Road, Akola - 444004, Maharashtra, India.
20. GURBANI, Gourav
I-1601, Casa Adriana, Downtown, Palava Phase 2, Dombivli, Maharashtra - 421204, India
21. VISHWAKARMA, Dharmendra Kumar
Ramnagar, Sarai Kansarai, Bhadohi - 221404, Uttar Pradesh, India.
22. SONI, Sajal
K. P. Nayak Market Mauranipur, Jhansi, Uttar Pradesh - 284204, India.
23. CHAUDHARY, Sanjana
Jawaharlal Road, Muzaffarpur - 842001, Bihar, India.
24. KUSHWAHA, Avinash
SA 18/127, Mauza Hall, Varanasi - 221007, Uttar Pradesh, India.
25. SAXENA, Gaurav
B1603, Platina Cooperative Housing Society, Casa Bella Gold, Kalyan Shilphata Road, Near Xperia Mall Palava City, Dombivli, Kalyan, Thane - 421204, Maharashtra, India.
26. INGLE, Shubham
Dr. Baliga Nagar, Jasmine Mill Road, Bldg No 7, B Wing, Flat No 15, 3rd Floor, Mahim(E), Mumbai - 400017, Maharashtra, India.
27. PODDAR, Harsh
C/O Prabhat Poddar, Lal Bazar, Bettiah, West Champaran - 845438, Bihar, India
28. SAHU, Suraj
502, Shahid Bhagat Singh Building, Dumping Road, Mulund West, Mumbai 400080, Maharashtra, India.
29. DE, Supriya Kaushik
Reliance Corporate Park, Thane-Belapur Road, Ghansoli, Navi-Mumbai, Maharashtra 400701 India
30. DEBASHISH, Kumar
Reliance Corporate Park, Thane-Belapur Road, Ghansoli, Navi-Mumbai, Maharashtra 400701 India
31. GARG, Harshita
Reliance Corporate Park, Thane-Belapur Road, Ghansoli, Navi-Mumbai, Maharashtra 400701 India

Specification

FORM 2
THE PATENTS ACT, 1970 (39 of 1970)
THE PATENTS RULES, 2003
COMPLETE SPECIFICATION
(See section 10; rule 13)
TITLE OF THE INVENTION
SYSTEM AND METHOD FOR INTELLIGENT DISTRIBUTED COMPUTING OF NETWORK
PERFORMANCE DATA
APPLICANT
JIO PLATFORMS LIMITED
of Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad -
380006, Gujarat, India; Nationality : India
The following specification particularly describes
the invention and the manner in which
it is to be performed

RESERVATION OF RIGHTS
[0001] A portion of the disclosure of this patent document contains material,
which is subject to intellectual property rights such as, but not limited to, copyright, design, trademark, integrated circuit (IC) layout design, and/or trade dress protection, belonging to Jio Platforms Limited (JPL) or its affiliates (hereinafter referred to as the owner). The owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure, as it appears in the Patent and Trademark Office patent files or records, but otherwise reserves all rights whatsoever. All rights to such intellectual property are fully reserved by the owner.
FIELD OF INVENTION
[0002] The present disclosure relates to the field of network performance
monitoring and analysis. More precisely, the present disclosure relates to a system and a method for on-demand network performance management.
DEFINITION
[0003] As used in the present disclosure, the following terms are generally
intended to have the meanings set forth below, except to the extent that the context in which they are used indicates otherwise.
[0004] The term “distributed network” as used herein, refers to a type of
computer network where components and resources are dispersed across multiple locations, interconnected to enable communication and resource sharing.
[0005] The term “intelligent distributed computing” as used herein, refers
to the application of advanced algorithms, automation, and smart decision-making techniques within a distributed computing environment. This approach aims to optimize the use of computing resources, improve performance, and enhance scalability and reliability.
[0006] The term “data lake” as used herein, refers to a centralized repository that stores vast amounts of raw, unstructured, or semi-structured data in its native format until needed. The data lake allows for scalable data storage and efficient querying, often supporting big data and analytics use cases.
[0007] These definitions are in addition to those expressed in the art.
BACKGROUND OF THE INVENTION
[0008] The following description of related art is intended to provide
background information pertaining to the field of the disclosure. This section may include certain aspects of the art that may be related to various features of the present disclosure. However, it should be appreciated that this section is to be used only to enhance the understanding of the reader with respect to the present disclosure, and not as an admission of prior art.
[0009] Wireless communication technology has rapidly evolved over the
past few decades. The first generation of wireless communication technology was based on analog technology that offered only voice services. Further, when the second-generation (2G) technology was introduced, text messaging and data services became possible. The 3G technology marked the introduction of high-speed internet access, mobile video calling, and location-based services. The fourth generation (4G) technology revolutionized the wireless communication with faster data speeds, improved network coverage, and security. Currently, the fifth generation (5G) technology is being deployed, with even faster data speeds, low latency, and the ability to connect multiple devices simultaneously.
[0010] As wireless technologies advance, there is a need to cope with 5G requirements and deliver an elevated level of service to subscribers. As networks continue to expand, organizations face challenges in monitoring and managing network performance in order to ensure efficient and reliable operations. Traditional approaches to network performance monitoring often fall short in handling the increasing volume and velocity of network performance data. In traditional systems, execution of a task/request depends on the output of predefined tasks which run periodically. In such periodic execution, tasks are pushed into a queue and executed sequentially. A request at the end of the queue can therefore take a long time to execute even when resources are available for it, resulting in poor request execution and underutilization of resources. Moreover, such periodic execution introduces delays into the system and degrades the performance of the network.
[0011] There is, therefore, a need in the art to provide a system and a method
that can mitigate the problems associated with the prior arts.
SUMMARY
[0012] In an exemplary embodiment, the present disclosure discloses a
method for performing on-demand network performance management. The method includes receiving at least one request comprising one or more parameters and one or more tasks from a user equipment (UE). The method includes collecting network performance data from one or more data sources based on the one or more parameters. The method includes splitting each of the received one or more tasks into one or more sub-tasks. The method includes assigning the one or more sub-tasks along with the collected network performance data across one or more computing nodes. Each computing node is selected based on one or more defined conditions and is configured to perform the one or more assigned sub-tasks to generate a modified network performance data. The method includes receiving the generated modified network performance data from each of the computing nodes. The method includes analysing the received modified network performance data to generate a view of the received modified network performance data.
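For illustration only, the receive/split/assign/analyse flow described in the preceding paragraph may be sketched as follows. All names are hypothetical, the even four-way split and round-robin assignment are simplifying assumptions, and nothing here limits the disclosure, which selects nodes by defined conditions such as capability and workload:

```python
from dataclasses import dataclass

@dataclass
class Request:
    parameters: dict      # e.g. {"metric": "latency", "window": "7d"}
    tasks: list           # task identifiers supplied by the UE

def split_task(task, parts=4):
    """Split one task into independent sub-tasks (even split assumed)."""
    return [f"{task}:part{i}" for i in range(parts)]

def handle_request(request, nodes, collect, analyse):
    data = collect(request.parameters)                            # collect data
    subtasks = [s for t in request.tasks for s in split_task(t)]  # split tasks
    results = [nodes[i % len(nodes)].run(sub, data)               # assign to nodes
               for i, sub in enumerate(subtasks)]
    return analyse(results)                                       # generate the view
```

Here `collect` and `analyse` stand in for the data-collection and analysis stages, and each node object is assumed to expose a `run(sub_task, data)` method returning its modified network performance data.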
[0013] In some embodiments, the method further includes generating one
or more notifications associated with the received network performance data and communicating the one or more generated notifications to the UE.
[0014] In some embodiments, the method further includes displaying the
generated view of the analysed modified network performance data over a graphical user interface (GUI).

[0015] In some embodiments, the analysing includes performing at least one
of data transformation, aggregation, and statistical calculations on the received modified network performance data.
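A minimal sketch of such analysis, assuming a record layout ({"cell", "latency_ms"}) that the disclosure does not prescribe, grouping per-node results (aggregation) and summarizing them (statistical calculations):

```python
import statistics

def analyse(records):
    """Aggregate per-node results into a summary view (illustrative)."""
    by_cell = {}
    for r in records:                                   # aggregation: group by cell
        by_cell.setdefault(r["cell"], []).append(r["latency_ms"])
    return {cell: {"mean_ms": statistics.mean(vals),    # statistical calculations
                   "max_ms": max(vals),
                   "samples": len(vals)}
            for cell, vals in by_cell.items()}
```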
[0016] In some embodiments, the one or more defined conditions include
at least one of the processing capabilities and the workload of the one or more computing nodes.
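One plausible reading of selecting a node by its processing capabilities and workload is to prefer the node with the most spare capacity; the exact scoring is not fixed by the disclosure, and the node layout below is assumed for illustration:

```python
def select_node(nodes):
    """Return the node with the most spare capacity, or None if all are saturated.

    Each node is assumed to be {"id": str, "capacity": int, "load": int},
    where capacity is total processing units and load is units in use.
    """
    eligible = [n for n in nodes if n["load"] < n["capacity"]]
    if not eligible:
        return None                        # no node can take more work
    return max(eligible, key=lambda n: n["capacity"] - n["load"])
```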
[0017] In an exemplary embodiment, the present disclosure discloses a
system for performing on-demand network performance management. The system includes a receiving unit configured to receive at least one request comprising one or more parameters and one or more tasks from a user equipment (UE). The system includes a processing engine coupled with the receiving unit to receive the at least one request and further coupled with a memory to execute a set of instructions stored in the memory. The processing engine is configured to collect network performance data from one or more data sources based on the one or more parameters. The processing engine is configured to split each of the received one or more tasks into one or more sub-tasks. The processing engine is configured to assign the one or more sub-tasks along with the collected network performance data across one or more computing nodes. Each computing node is selected based on one or more defined conditions and is configured to perform the one or more assigned sub-tasks to generate a modified network performance data. The processing engine is configured to receive the generated modified network performance data from each of the computing nodes. The processing engine is configured to analyse the received modified network performance data to generate a view of the received modified network performance data.
[0018] In some embodiments, the system is further configured to generate
one or more notifications associated with the received modified network performance data and communicate the one or more generated notifications to the UE.
[0019] In some embodiments, the system is further configured to display the generated view of the analysed modified network performance data over a graphical user interface (GUI).
[0020] In an exemplary embodiment, the present disclosure discloses a user
equipment (UE) communicatively coupled with a network. The coupling comprises the steps of receiving, by the network, a connection request from the UE, sending, by the network, an acknowledgment of the connection request to the UE, and transmitting a plurality of signals in response to the connection request. The on-demand network performance management is performed by a method that includes receiving at least one request comprising one or more parameters and one or more tasks from a user equipment (UE). The method includes collecting network performance data from one or more data sources based on the one or more parameters. The method includes splitting each of the received one or more tasks into one or more sub-tasks. The method includes assigning the one or more sub-tasks along with the collected network performance data across one or more computing nodes. Each computing node is selected based on one or more defined conditions and is configured to perform the one or more assigned sub-tasks to generate a modified network performance data. The method includes receiving the generated modified network performance data from each of the computing nodes. The method includes analysing the received modified network performance data to generate a view of the received modified network performance data.
[0021] The foregoing general description of the illustrative embodiments
and the following detailed description thereof are merely exemplary aspects of the teachings of this disclosure and are not restrictive.
OBJECTS OF THE DISCLOSURE
[0022] Some of the objects of the present disclosure, which at least one
embodiment herein satisfies, are listed herein below.
[0023] It is an object of the present disclosure to overcome the drawbacks and limitations of the existing systems for intelligent distributed computing of network performance data.
[0024] It is an object of the present disclosure to provide an intelligent distributed computing environment to enable a selection of the request based on the amount of the resources available for execution to achieve efficient utilization of resources.
[0025] It is an object of the present disclosure to provide scalability by
leveraging distributed computing techniques to handle increasing volumes of network performance data and scale resources dynamically.
[0026] It is an object of the present disclosure to enable real-time or near-real-time analysis of network performance data to provide timely insights into network behavior and performance to help detect and address issues associated with the network performance promptly. For example, the issues may be associated with network congestion, latency, packet loss, bandwidth utilization, outages, security threats, and quality of service (QoS) degradation.
[0027] It is an object of the present disclosure to optimize the utilization of computing resources by dynamically allocating and deallocating resources based on the workload to make efficient use of available resources while maintaining high processing speed and accuracy.
[0028] It is an object of the present disclosure to be user-friendly, providing a seamless interface for users to interact with and manage the distributed computing resources, and to include monitoring and management capabilities to track the status of distributed tasks and resource utilization.
[0029] It is an object of the present disclosure to optimize costs by efficiently utilizing resources and minimizing unnecessary overheads, providing cost-effective solutions for processing and analysing network performance data.
BRIEF DESCRIPTION OF DRAWINGS
[0030] The accompanying drawings, which are incorporated herein and constitute a part of this disclosure, illustrate exemplary embodiments of the disclosed methods and systems, in which like reference numerals refer to the same parts throughout the different drawings. Components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present disclosure. Some drawings may indicate the components using block diagrams and may not represent the internal circuitry of each component. It will be appreciated by those skilled in the art that disclosure of such drawings includes the disclosure of electrical components, electronic components, or circuitry commonly used to implement such components.
[0031] FIG. 1 illustrates an exemplary network architecture for implementing a system for performing on-demand network performance management, in accordance with an embodiment of the present disclosure.
[0032] FIG. 2 illustrates an exemplary block diagram of the system for performing the on-demand network performance management, in accordance with an embodiment of the present disclosure.
[0033] FIG. 3A illustrates an exemplary system architecture for performing the on-demand network performance management, in accordance with an embodiment of the present disclosure.
[0034] FIG. 3B illustrates an exemplary flow diagram illustrating a method for performing the on-demand network performance management, in accordance with an embodiment of the present disclosure.
[0035] FIG. 4 illustrates an exemplary computer system in which or with which embodiments of the present invention can be utilized, in accordance with an embodiment of the present disclosure.
[0036] FIG. 5 illustrates another exemplary flow diagram of the method for performing the on-demand network performance management, in accordance with an embodiment of the present disclosure.
[0037] The foregoing shall be more apparent from the following detailed description of the disclosure.
LIST OF REFERENCE NUMERALS
100 – Network architecture
102-1, 102-2…102-N – A plurality of users
104-1, 104-2…104-N – A plurality of computing devices
106 – Network
108 – System
200 – Block diagram
202 – Processor(s)
204 – Memory
206 – Interface(s)
208 – Processing engine
210 – Data collection module
212 – Data analysis module
214 – Aggregation module
216 – Other modules
218 – Database
220 – Receiving unit
300A – System architecture
302 – Graphical user interface (GUI)
304 – Data lake
306 – Distributed computation engine
314 – Distributed compute cluster
316 – Computing nodes
318 – Compute master
320 – Cluster manager
300B – Flow diagram
400 – Computer system
402 – Input devices
404 – Central processing unit (CPU)
408 – Output devices
410 – Secondary storage devices
412 – Control unit
414 – Arithmetic and logical unit
416 – Memory unit
500 – Flow diagram
DETAILED DESCRIPTION
[0038] In the following description, for explanation, various specific details are outlined in order to provide a thorough understanding of embodiments of the present disclosure. It will be apparent, however, that embodiments of the present disclosure may be practiced without these specific details. Several features described hereafter can each be used independently of one another or with any combination of other features. An individual feature may not address all of the problems discussed above or might address only some of them. Some of the problems discussed above might not be fully addressed by any of the features described herein.
[0039] The ensuing description provides exemplary embodiments only and is not intended to limit the scope, applicability, or configuration of the disclosure. Rather, the ensuing description of the exemplary embodiments will provide those skilled in the art with an enabling description for implementing an exemplary embodiment. It should be understood that various changes may be made in the function and arrangement of elements without departing from the spirit and scope of the disclosure as set forth.
[0040] Specific details are given in the following description to provide a thorough understanding of the embodiments. However, it will be understood by one of ordinary skill in the art that the embodiments may be practiced without these specific details. For example, circuits, systems, networks, processes, and other components may be shown in block diagram form in order not to obscure the embodiments in unnecessary detail. In other instances, well-known circuits, processes, algorithms, structures, and techniques may be shown without unnecessary detail to avoid obscuring the embodiments.
[0041] Also, it is noted that individual embodiments may be described as a process that is depicted as a flowchart, a flow diagram, a data flow diagram, a structure diagram, or a block diagram. Although a flowchart may describe the operations as a sequential process, many of the operations can be performed in parallel or concurrently. In addition, the order of the operations may be re-arranged. A process is terminated when its operations are completed but could have additional steps not included in a figure. A process may correspond to a method, a function, a procedure, a subroutine, a subprogram, etc. When a process corresponds to a function, its termination can correspond to a return of the function to the calling function or the main function.
[0042] The word “exemplary” and/or “demonstrative” is used herein to mean serving as an example, instance, or illustration. For the avoidance of doubt, the subject matter disclosed herein is not limited by such examples. In addition, any aspect or design described herein as “exemplary” and/or “demonstrative” is not necessarily to be construed as preferred or advantageous over other aspects or designs, nor is it meant to preclude equivalent exemplary structures and techniques known to those of ordinary skill in the art. Furthermore, to the extent that the terms “includes,” “has,” “contains,” and other similar words are used in either the detailed description or the claims, such terms are intended to be inclusive, like the term “comprising,” as an open transition word without precluding any additional or other elements.
[0043] Reference throughout this specification to “one embodiment” or “an embodiment” or “an instance” or “one instance” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present disclosure. Thus, the appearances of the phrases “in one embodiment” or “in an embodiment” in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
[0044] The terminology used herein is to describe particular embodiments only and is not intended to limit the disclosure. As used herein, the singular forms “a”, “an”, and “the” are intended to include the plural forms as well, unless the context indicates otherwise. It will be further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. As used herein, the term “and/or” includes any combinations of one or more of the associated listed items.
[0045] Within the scope of this application, it is expressly envisaged that the various aspects, embodiments, examples, and alternatives set out in the preceding paragraphs, in the claims and/or in the following description and drawings, and in particular the individual features thereof, may be taken independently or in any combination. Features described in connection with one embodiment are applicable to all embodiments unless such features are incompatible.
[0046] A system and a method are described for intelligent distributed computing of network performance data. The system includes an artificial intelligence or machine learning (AI/ML) engine to achieve efficient utilization of the resources. The AI/ML engine receives information about the current resource utilization and the total resource capacity. Based on the received information, the AI/ML engine identifies a number of requests to be executed by selecting one or more requests from the queue and then pushes the selected one or more requests to the front of the queue so that they execute with the available resources. In particular, through the AI/ML engine, one or more worker nodes (computing nodes) are assigned the one or more requests based on their current and total capacity. The AI/ML engine continuously receives inputs from the worker nodes in order to select the one or more requests for execution. The selection of the one or more requests may be based on user-defined parameters, as and when required by the user, for metrics over a distributed network system. In this manner, an intelligent distributed computing environment is achieved for the network performance data through parallel execution of user requests, thereby facilitating effective utilization of resources in the distributed computing environment.
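A greatly simplified stand-in for this queue reordering is shown below: requests whose resource demand fits the currently free capacity are promoted to the front of the queue. In the disclosed system an AI/ML engine would estimate each request's demand from continuous worker-node feedback; here the demand is assumed to be carried on the request itself:

```python
from collections import deque

def promote_runnable(queue, free_units):
    """Reorder the queue so requests that fit the free resources run first.

    `queue` holds (request_id, required_units) pairs; `free_units` is the
    spare capacity reported by the worker nodes.
    """
    runnable, waiting = [], []
    for req_id, need in queue:
        if need <= free_units:
            runnable.append((req_id, need))
            free_units -= need            # reserve capacity for this request
        else:
            waiting.append((req_id, need))
    return deque(runnable + waiting)
```

With 5 units free, a request needing 8 units at the head of the queue no longer blocks smaller requests behind it, which addresses the head-of-line problem described in the background section.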
[0047] Embodiments herein relate to a system and a method for on-demand distributed computing of network performance data. On-demand distributed computing (also referred to as “intelligent distributed computing”) includes parallel calculation of network performance data on user-defined parameters, as and when required by the user, for metrics over a distributed network system. In an embodiment, the network performance data is collected from the data lake and displayed on a graphical user interface (GUI). The data lake is a repository that integrates various types of data. For example, the data lake may integrate raw data from diverse sources such as logs and sensors, structured data organized into tabular formats from relational databases, semi-structured data like JavaScript Object Notation (JSON) or extensible markup language (XML) files, and unstructured data including emails, social media content, and multimedia files. The data lake may include metadata, which provides information about the data’s origin, format, and structure. When a user requests specific information related to network performance, the system selects relevant data based on performance metrics like latency (delay in data transmission) and bandwidth utilization (amount of data transmitted), historical performance trends, real-time statistics, alerts for detected issues or anomalies, and customized reports. For instance, if the user requests a report on slow internet speeds over the past week, the system will compile latency spikes, bandwidth usage data, historical trends, and any alerts for issues like congestion or hardware failures into a comprehensive, customized report. This report helps the user understand the factors contributing to the performance issues and provides actionable insights for troubleshooting and optimization. Additionally, notifications are generated based on network performance to alert users. Further, the system may also perform data transformation, aggregation, and statistical calculations in order to summarize the performance metrics.
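The “slow speeds over the past week” report above can be sketched as a query over the data lake. The record layout and the anomaly rule (values 50% above the window mean) are illustrative assumptions, not part of the disclosure:

```python
from datetime import datetime, timedelta

def compile_report(lake, metric, days=7, now=None):
    """Summarize one metric over a trailing window and flag anomalies.

    Each lake record is assumed to be
    {"ts": datetime, "metric": str, "value": float}.
    """
    now = now or datetime.now()
    cutoff = now - timedelta(days=days)
    rows = [r for r in lake if r["metric"] == metric and r["ts"] >= cutoff]
    values = [r["value"] for r in rows]
    mean = sum(values) / len(values) if values else 0.0
    return {"metric": metric,
            "samples": len(rows),
            "mean": mean,
            # simple alert rule: flag anything 50% above the window mean
            "alerts": [r for r in rows if r["value"] > 1.5 * mean]}
```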
[0048] The present disclosure allows the users to create new tasks as per user requirements through the GUI and execute them on the distributed computation engine at any time. The GUI may be situated on a web-based platform, allowing users to access and interact with network data through a standard web browser. Alternatively, the GUI may be implemented as a desktop or mobile application, providing flexibility based on user needs and preferences. It facilitates effective utilization of resources such as, but not limited to, random access memory (RAM), memory, and disk space, as the resources are allocated dynamically, i.e., resources are allocated only when the user raises a request. In this manner, large amounts of data can be processed in the distributed system. The concept of on-demand distributed computing of network performance data leverages the power of distributed computing, where tasks are divided and executed across multiple nodes or machines in a network to efficiently process and analyse large volumes of network performance data. By adopting on-demand distributed computing, computing resources are dynamically allocated as needed and scaled up or down based on the demand for network performance analysis. This approach enables faster processing and analysis of network performance data, facilitating real-time or near-real-time insights into the performance of network infrastructure.
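One simple scale-up/scale-down policy consistent with this paragraph sizes the cluster from queued demand. The per-node capacity and the floor/ceiling bounds are assumptions for illustration; the disclosure states only that resources are allocated when a user raises a request and scaled with demand:

```python
def rescale(queued_units, per_node_units=4, min_nodes=1, max_nodes=16):
    """Pick a cluster size from queued demand (illustrative policy)."""
    needed = -(-queued_units // per_node_units)   # ceiling division
    return max(min_nodes, min(max_nodes, needed))
```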
[0049] In an aspect, the system for on-demand distributed computing of network performance data is designed to analyse and process large volumes of network performance data in a scalable and efficient manner. The system aims to provide real-time or near-real-time analysis of network behavior and performance, enabling timely detection and resolution of issues. It also optimizes the utilization of computing resources by dynamically allocating and deallocating resources based on workload demands.
[0050] In an embodiment, the disclosed system is fault-tolerant, ensuring reliability by handling failures gracefully and minimizing disruptions in data analysis.
[0051] Various objects, features, aspects, and advantages of the inventive subject matter will become apparent from the following detailed description of preferred embodiments, along with the accompanying drawing figures in which like numerals represent like components.
[0052] The various embodiments throughout the disclosure will be
explained in more detail with reference to FIGS. 1-5.
[0053] FIG. 1 illustrates an exemplary network architecture (100) for implementing a system (108) for performing network performance management, in accordance with an embodiment of the present disclosure.
[0054] As illustrated in FIG. 1, one or more computing devices (104-1, 104-2…104-N) may be connected to the system (108) through the network (106). A person of ordinary skill in the art will understand that the one or more computing devices (104-1, 104-2…104-N) may be collectively referred to as computing devices (104) and individually referred to as a computing device (104). One or more users (102-1, 102-2…102-N) may provide one or more requests to the system (108). A person of ordinary skill in the art will understand that the one or more users (102-1, 102-2…102-N) may be collectively referred to as users (102) and individually referred to as a user (102). Further, the computing devices (104) may also be referred to as user equipment (UE) (104) or as UEs (104) throughout the disclosure.
[0055] Referring to FIG. 1, the UE (104) may communicate with the system
(108), for example, a system for performing on-demand network performance
management. In an embodiment, the computing device (104) may include, but not be limited to, a mobile, a laptop, etc. Further, the computing device (104) may include one or more in-built or externally coupled accessories including, but not limited to, a visual aid device such as a camera, an audio aid, a microphone, or a keyboard. Furthermore, the computing device (104) may include a mobile phone, a smartphone, virtual reality (VR) devices, augmented reality (AR) devices, a laptop, a general-purpose computer, a desktop, a personal digital assistant, a tablet computer, and a mainframe computer. Additionally, input devices for receiving input from the user (102), such as a touchpad, a touch-enabled screen, an electronic pen, and the like, may be used.
[0056] In an embodiment, the network (106) may include, by way of example but not limitation, at least a portion of one or more networks having one or more nodes that transmit, receive, forward, generate, buffer, store, route, switch, process, or a combination thereof, one or more messages, packets, signals, waves, voltage or current levels, some combination thereof, or so forth. In an embodiment, the network (106) may include a distributed network. The network (106) may also include, by way of example but not limitation, 4G, 5G, and 6G networks, and one or more of a wireless network, a wired network, an internet, an intranet, a public network, a private network, a packet-switched network, a circuit-switched network, an ad hoc network, an infrastructure network, a Public-Switched Telephone Network (PSTN), a cable network, a cellular network, a satellite network, a fiber optic network, or some combination thereof. The UE (104) may be communicatively coupled with the network (106). The communicative coupling comprises receiving, from the UE (104), a connection request by the network (106), sending an acknowledgment of the connection request to the UE (104), and transmitting a plurality of signals in response to the connection request. The network architecture plays a pivotal role in revolutionizing how data is processed, transmitted, and managed across mobile networks. Unlike traditional centralized architectures, which concentrate computing and processing tasks in a few centralized locations, distributed networks in 5G distribute these functions across multiple edge nodes and data centers. This decentralization brings several advantages, primarily aimed at enhancing performance, scalability, and user experience.
[0057] FIG. 2 illustrates an exemplary block diagram of the system (108)
for performing the on-demand network performance management, in accordance with an embodiment of the present disclosure.
[0058] Referring to FIG. 2, a processing engine (208) includes a data
collection module (210), a data analysis module (212), an aggregation module (214)
and other modules (216). The processing engine (208) is accompanied with a
processor (202), a memory (204), an interface (206), a database (218) and a
receiving unit (220) for executing programming instructions to perform network performance data analysis. These modules work collaboratively to create a robust system that enables on-demand distributed computing of network performance data, providing valuable insights, and facilitating efficient network management.
[0059] In an embodiment, the processing engine (208) is configured to
receive at least one request comprising one or more parameters and one or more tasks from the UE (104). The at least one request may be transmitted in a structured data format such as JSON or XML. The processing engine (208) uses an appropriate parser to decode the at least one request into an internal format that can be processed. For example, if the request is in JSON format, the engine employs a JSON parser to interpret the data, extracting relevant parameters and tasks. Similarly, if the request is in XML format, an XML parser is used. The at least one request encompasses diverse needs, ranging from real-time monitoring to historical data analysis and proactive alerting. The at least one request may include real-time monitoring requests to enable immediate insights into current network conditions, crucial for detecting and responding to sudden performance issues. On the other hand, the at least one request may include historical data requests to facilitate trend analysis and long-term planning by providing a comprehensive view of network performance over extended periods. In an embodiment, the one or more parameters may include a time interval, for which the user can specify a specific time range or interval within which network data should be analysed. The one or more parameters may include a network segment, for which the user may filter data based on specific network segments or zones within their infrastructure. This could include analysing data from particular geographical locations, departments, VLANs (Virtual Local Area Networks), or subnet ranges. The one or more parameters may include a type of metric, for which the user can select which network performance metrics the user wants to analyse. Examples of metrics may include latency (response time), bandwidth utilization, packet loss rate, error rates, throughput (data transfer rate), and jitter (variation in packet delay). The one or more tasks are specific actions or operations requested by the UE (104), which may include data retrieval, data processing, data visualization, and alert generation. In an embodiment, the at least one request is received through communication channels, such as application programming interfaces (APIs), web interfaces, or command-line interfaces, each tailored to accommodate different user needs and technical capabilities.
[0060] In an embodiment, the processing engine (208) is configured to
collect the network performance data from one or more data sources based on the one or more parameters. The network performance data includes various measurements that define how well a network is working. The network performance data may include latency, bandwidth utilization, packet loss, throughput, and response time. The latency measures how long it takes for data to travel from one point to another. The bandwidth utilization shows how much of the network’s capacity is being used. The packet loss tracks the percentage of data packets that do not make it to their destination. The throughput shows the actual rate at which data is successfully transferred. The response time shows how quickly a system or application reacts to a request. These metrics help in understanding and improving the overall performance of the network. In an embodiment, the data collection module (210) is responsible for gathering/collecting network performance data from various sources and preparing it for further analysis and processing. The one or more data sources encompass a wide array of network components and systems, including routers, switches, firewalls, servers, client devices, and cloud services. Each of the components generates data points related to network traffic, latency, bandwidth usage, packet loss, and other performance metrics. For instance, routers and switches provide data on data packet routing and traffic flows within the network. Firewalls contribute information about security events and traffic filtering. The data collection module (210) retrieves the network performance data from the one or more data sources. The retrieving of the network performance data can be achieved through various mechanisms such as Simple Network Management Protocol (SNMP) polling, NetFlow or sFlow data collection, API integrations with monitoring tools, packet capturing, log file parsing, or other custom data acquisition methods. NetFlow is a network protocol for collecting IP traffic information and monitoring network traffic flow. Sampled flow (sFlow) is a standard network monitoring protocol used for monitoring and analysing network traffic in real time. Once the network performance data is collected from different sources, the data collection module (210) aggregates it into a cohesive dataset. This involves organizing and merging the collected data to create a comprehensive view of network performance metrics. The aggregation may include combining data from multiple devices, consolidating data from different time intervals, or merging data from various monitoring tools. The collected data may be in different formats, structures, or units.
[0061] In an embodiment, the processing engine (208) is configured to split
the one or more tasks into one or more sub-tasks. When the at least one request comprising one or more tasks is received from the UE (104), the processing engine (208) breaks down/splits the one or more tasks into smaller, manageable sub-tasks that can be concurrently processed across the one or more computing nodes. The decomposition process begins with a thorough analysis of the task’s requirements and dependencies. Each sub-task is carefully designed to encapsulate a specific portion of the overall workload, ensuring that computational resources are allocated optimally based on the nature and complexity of the sub-task. Furthermore, the system considers factors such as data locality and node capabilities during sub-task allocation to minimize data transfer latency and maximize processing efficiency. The one or more sub-tasks may include a data retrieval sub-task, a statistical analysis sub-task, an anomaly detection sub-task, a data filtering sub-task, an alert generation sub-task, and a data aggregation sub-task.
[0062] In an embodiment, the processing engine (208) is configured to
assign the one or more sub-tasks along with the collected network performance data across the one or more computing nodes. The one or more computing nodes may include server nodes, virtual machines and containerized nodes, and edge computing nodes. Each computing node is selected based on one or more defined conditions and is configured to perform the one or more assigned sub-tasks to generate a modified network performance data. The one or more defined conditions include at least one of computing node processing power, memory capacity, network proximity to data sources, processing capabilities, and workload. The process begins with a detailed assessment of each sub-task’s computational requirements and the capabilities of available computing nodes. Sub-tasks are then intelligently allocated based on factors such as computing node processing power, memory capacity, network proximity to data sources, processing capabilities, and workload. Furthermore, the processing engine (208) employs algorithms for load balancing to ensure an equitable distribution of workload among computing nodes, thereby optimizing resource utilization and minimizing processing times. The load balancing algorithms may include a round robin algorithm, a least connection algorithm, and a weighted round robin algorithm. The modified network performance data refers to the processed and transformed network performance data generated by the one or more computing nodes after executing the assigned one or more sub-tasks on the collected network performance data. When the processing engine (208) assigns sub-tasks to various computing nodes for analysing network performance data, the resulting modified network performance data refers to data that has been processed, refined, or enhanced based on specific analytical tasks. For instance, if the network performance data includes metrics such as latency, bandwidth utilization, and packet loss, the processing engine (208) may assign tasks like calculating average latency, identifying bandwidth bottlenecks, and detecting packet loss patterns to different nodes. Each node performs its assigned analysis and produces modified data, such as detailed reports showing trends in latency, visual graphs of bandwidth usage over time with identified peak periods, and maps highlighting areas with significant packet loss. The modified data typically reflects insights, optimizations, or enhancements made through computations, analyses, or transformations performed by the computing nodes. The modified data may include aggregated statistics, derived metrics, visualizations, or other forms of processed data that provide deeper insights into network behavior, performance trends, or operational conditions.
[0063] In an embodiment, the processing engine (208) is configured to
receive the generated modified network performance data from each of the computing nodes.
[0064] In an embodiment, the processing engine (208) is configured to
analyse the received modified network performance data to generate a view of the received modified network performance data. Analysing the collected network performance data involves an approach that integrates data transformation, aggregation, statistical calculations, and trend analysis to derive actionable insights and optimize network operations. In an embodiment, the analysing comprises performing at least one of data transformation, aggregation, statistical calculations, and trend analysis on the received modified network performance data. Aggregation consolidates data from various sources or time intervals, enabling a holistic view of network performance. Statistical calculations involve applying quantitative methods to assess network metrics. Measures such as averages, deviations, and correlations provide statistical context, revealing performance trends, anomalies, or areas requiring attention. Trend analysis examines data over time to uncover patterns or fluctuations in network performance. In an embodiment, the data analysis module (212) processes and analyses the received modified data to extract meaningful insights and derive actionable conclusions. In an embodiment, the data analysis module (212) includes an artificial intelligence or machine learning engine to efficiently utilize the resources. The data analysis module (212) applies statistical techniques to the network performance data to identify statistical properties, such as mean, median, variance, and distribution of performance metrics. The data analysis module (212) conducts diagnostic analysis to investigate the causes of network performance issues or anomalies. It employs techniques such as root cause analysis, troubleshooting, and error diagnostics to identify the underlying factors affecting network performance.

[0065] In an embodiment, the aggregation module (214) aggregates the
received modified network performance data to provide a consolidated view of the
network’s overall performance. It combines data from multiple sources, time
intervals, or network segments to generate meaningful summaries, such as average
latency, total bandwidth usage, or overall network availability.
[0066] The other modules (216) have functions that may include but are not limited to testing, storage, and peripheral functions, such as wireless communication units for remote operation, audio units for alerts, and the like. The other modules (216) may include various functionalities that support the overall operation of the system (108). These may include testing modules to validate the performance and accuracy of the AI/ML models, storage modules for maintaining historical data, and peripheral modules for communication and alerting purposes. These modules enhance the capability of the system (108) to manage and distribute data loads efficiently.
[0067] In an embodiment, the processing engine (208) is configured to visualize the generated view of the analysed modified network performance data over a graphical user interface (GUI). Visualizing the generated view of the analysed modified network performance data through the GUI involves creating a user-friendly and informative presentation of complex metrics and insights. In an embodiment, the visualization may be done through interactive charts and graphs. Various types of visual elements, such as line charts, bar graphs, pie charts, and scatter plots, may be utilized to depict different aspects of network performance. These visualizations provide a clear representation of metrics such as latency trends, throughput variations, and error rates, enabling quick comprehension of data patterns. In an embodiment, the visualization may be done through customizable dashboards. Dashboards may be designed that aggregate multiple visualizations into cohesive displays. Dashboards can be customized to display specific key performance indicators (KPIs), performance trends over time, or comparative analyses between different network segments.

[0068] In an embodiment, the processing engine (208) is configured to
generate one or more notifications associated with the received modified network
performance data and communicate the one or more generated notifications to the
UE (104). In an embodiment, the notifications are triggered under specific conditions that indicate network issues or significant changes. For example, if the latency exceeds a predefined threshold, such as 100 milliseconds, or if packet loss rates rise above 2%, the processing engine (208) may generate an alert to inform administrators of potential performance degradation. The notifications may be triggered by the detection of anomalies, such as unexpected spikes in bandwidth usage or abnormal patterns in throughput. Once the network performance returns to normal levels or the issues are resolved, the notifications are terminated or updated to reflect the resolution. The notifications may include descriptive messages, severity levels (e.g., critical, warning), and recommended actions to address the identified issue. In an embodiment, the visualization may be done through alerts and notifications. Visual cues or alert mechanisms may be integrated within the GUI to notify users of significant events or deviations from predefined criteria. Alerts may be displayed alongside visualizations, prompting immediate action or further investigation to maintain optimal network performance. The notifications are communicated to the UE (104) through appropriate channels such as mobile apps, web interfaces, email alerts, or short message service (SMS) notifications.
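The threshold-triggered alerting described above, using the example limits of 100 milliseconds latency and 2% packet loss, may be sketched as follows; the severity labels and metric keys are illustrative assumptions:

```python
def check_thresholds(metrics: dict,
                     latency_limit_ms: float = 100.0,
                     loss_limit_pct: float = 2.0) -> list:
    """Generate alert notifications when metrics cross the example
    thresholds (100 ms latency, 2% packet loss)."""
    alerts = []
    if metrics.get("latency_ms", 0.0) > latency_limit_ms:
        alerts.append({
            "severity": "critical",
            "message": f"latency {metrics['latency_ms']} ms exceeds "
                       f"{latency_limit_ms} ms",
        })
    if metrics.get("packet_loss_pct", 0.0) > loss_limit_pct:
        alerts.append({
            "severity": "warning",
            "message": f"packet loss {metrics['packet_loss_pct']}% exceeds "
                       f"{loss_limit_pct}%",
        })
    return alerts

alerts = check_thresholds({"latency_ms": 130.0, "packet_loss_pct": 0.5})
```

An empty alert list corresponds to the notifications being terminated or updated once performance returns to normal levels.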
[0069] In an embodiment, the UE (104) performs at least one of creating,
modifying, and terminating the at least one request through the GUI. The UE (104) interacts with the system (108) by creating, modifying, and terminating requests through the GUI, facilitating dynamic management of tasks and data processing
within the distributed computing environment. In an embodiment, the UE (104) may initiate new requests through the GUI to perform specific tasks or analyses on the network performance data. The UE (104) may input the one or more parameters, select analysis criteria, and define objectives such as monitoring latency trends, analysing throughput metrics, or detecting anomalies. In an embodiment, the UE (104) may modify the requests. The UE (104) may adjust existing requests based on evolving requirements or changing operational priorities. The UE (104) may revise the one or more parameters, update analysis methods, or modify task scopes through interactive controls provided by the GUI. In an embodiment, the UE (104) may terminate the requests. The UE (104) may terminate ongoing requests that are no longer relevant or necessary for current operations. The UE (104) may initiate termination commands via the GUI, halting ongoing data processing or analysis tasks promptly.
[0070] FIG. 3A illustrates an exemplary system architecture (300A) for
performing the on-demand network performance management, in accordance with an embodiment of the present disclosure.
[0071] As illustrated in FIG. 3A, the system architecture (300A) includes a
distributed computation engine (306), a distributed compute cluster (314), a data
lake (304), and the GUI (302) for the user (102) to interact with the system (108).
The system (108) facilitates the accurate calculation of metrics of the network
performance data according to the requirements and inputs of the user.
[0072] In an embodiment, the distributed computation engine (306) analyses user requests to analyse network performance data in order to provide an efficient and scalable analysis of network performance. The network performance data may include specific metrics, time intervals, or filters. At step 308, the user requests are calculated by managing and prioritizing these requests based on factors like user priorities, system resources, and service level agreements. It also splits the user requests into smaller tasks that can be executed in parallel across a distributed computing infrastructure. Upon splitting the user requests into smaller tasks, the smaller tasks are distributed to available computing nodes to maximize computational efficiency. To optimize resource utilization, the distributed computation engine (306) employs load-balancing techniques to evenly distribute tasks across the available computing resources. It monitors the workload of each computing node (316) and dynamically assigns tasks based on their processing capabilities and current workload. It also incorporates fault tolerance mechanisms to ensure the reliability of task execution. It monitors the health of the computing nodes (316) and detects failures or slowdowns. In case of a failure, it automatically redistributes the tasks to other available nodes to maintain system performance and minimize disruptions.
[0073] In an embodiment, the distributed computation engine (306) is coupled to the distributed compute cluster (314). The distributed compute cluster (314) analyses the user requests to further analyse the network performance data to generate a modified network performance data. The distributed compute cluster (314) comprises a compute master (318) and a cluster manager (320) to efficiently manage and coordinate computational tasks across multiple nodes in the cluster. The compute master (318) is responsible for overseeing the execution of tasks and managing the overall computational workload within the cluster. The compute master (318) directs which nodes should perform specific tasks and monitors their progress, ensuring that tasks are completed efficiently and accurately. The cluster manager (320) handles the operational aspects of the cluster, such as resource allocation, node management, and load balancing. The cluster manager (320) ensures that resources are distributed effectively across the cluster, manages node availability, and addresses any issues related to node performance or failures.
[0074] The distributed compute cluster (314) comprises multiple computing nodes (316) working together to handle the computational workload and provide scalable analysis capabilities. The distributed compute cluster (314) consists of multiple computing nodes (316), which can be physical machines or virtual instances deployed across the network. These computing nodes (316) are equipped with processing power and memory resources to perform the data analysis tasks. The distributed compute cluster (314) receives the user requests for network performance data analysis and distributes the tasks associated with these requests across the available computing nodes for an even workload distribution to generate modified network performance data. In this manner, the tasks are executed faster in parallel execution, as each computing node (316) independently processes a subset of the data, employing its computational resources to perform the required calculations and transformations. To optimize resource utilization and ensure fair distribution of tasks, the distributed compute cluster (314) employs load balancing techniques. These techniques monitor the workload and performance of each computing node, dynamically redistributing tasks based on their availability and capacity. In an embodiment, at step 310, after analysing the user requests, a response is sent to the UE (104). In an embodiment, at step 312, the response may include alerts and notifications regarding the network performance issues.
[0075] In an embodiment, the system includes an artificial intelligence or
machine learning engine to achieve efficient utilization of the resources. The
artificial intelligence or machine learning (AI/ML) engine receives information about the current resource utilization and total resource capacity.
[0076] In an embodiment, the AI/ML engine determines the amount of
available resources and, based on the determined available resources and user-predefined parameters, the AI/ML engine selects one or more user requests for execution with the available resources instead of sequentially executing the requests.
[0077] In an embodiment, the AI/ML engine identifies a number of requests
to be executed by selecting one or more requests from the queue and then pushes the selected one or more requests to the front of the queue in order to execute them with the available resources. In this manner, an intelligent distributed computing environment is achieved for the network performance data, thereby facilitating effective utilization of resources in the distributed computing environment.
[0078] In an embodiment, the AI/ML engine can process and analyse the
modified network performance data to produce a detailed view or interpretation of the modified network performance data. The AI/ML engine identifies patterns, anomalies, and trends that offer a deeper understanding of the network performance.
[0079] FIG. 3B illustrates an exemplary flow diagram of a method (300B) for performing the on-demand network performance management, in accordance with an embodiment of the present disclosure.
[0080] At step 322, the user can create at least one request from the GUI for
calculating metrics over the network performance data and accessing network
performance as and when required. The network performance data is collected from the data lake and displayed on the GUI. In an embodiment, the GUI may typically be situated on a web-based platform, a desktop application, or a mobile application, depending on user needs and system architecture. In an embodiment, the user can add one or more filters and parameters to the request as required. The user interacts with the GUI to create and manage the requests for calculating network performance metrics and accessing related data on demand. The process begins when the user logs into the GUI, which provides a user-friendly environment for network management. Within the GUI, the user can specify needs by creating a request for performance metrics. The performance metrics creation involves selecting various criteria and parameters, such as which network components (e.g., routers, switches, servers) to analyse, what specific metrics to calculate (such as latency, bandwidth usage, packet loss), and the time frame for the analysis (e.g., the past hour, day, or week). For example, if a user wants to assess the latency performance of a particular network segment over the last 24 hours, the user may input the parameters into the GUI. The system processes the request by querying the relevant network performance data stored in a central repository or data lake. The processing engine calculates the metrics based on the user’s specifications, such as average latency, peak latency, and any anomalies detected. Once the calculations are complete, the results are presented to the user through the GUI, typically in the form of detailed reports, charts, or graphs. This allows the user to visualize the network performance data and gain insights into how the network is performing. The GUI also provides the flexibility to access this information at any time, whether for real-time monitoring or historical analysis, enabling users to make informed decisions and respond to network issues as they arise.
[0081] At step 324, the user request is then calculated by the system in a
distributed manner. The request is divided into several parts, which are simultaneously calculated, and the output is finally merged. The calculation may involve data transformation, aggregation, and statistical calculations to summarize the performance metrics. In an embodiment, when the user submits a request through the GUI for calculating the network performance metrics, the system processes the request using a distributed approach. The request is decomposed into multiple smaller tasks or sub-tasks, each focusing on a specific aspect of the performance data. These sub-tasks are then executed concurrently across various computing nodes or servers within the distributed system. For example, one node might handle data transformation, which involves converting raw performance data into a standardized format. Another node might perform aggregation, summarizing metrics like average latency or total bandwidth usage across different network segments. Yet another node might handle statistical calculations, such as determining the variance or identifying trends in the data. Once all the sub-tasks are completed, the results are collected and merged to produce a comprehensive output. This distributed processing approach ensures that the calculation is performed efficiently and quickly, even when dealing with large volumes of network performance data. The final output, which integrates the results from all sub-tasks, is then presented to the user through the GUI, providing a detailed and accurate summary of the requested performance metrics.
[0082] At step 326, once the result is formed, notifications are generated to alert the user. In an embodiment, once the system (108) completes the calculation of network performance metrics, the system (108) generates notifications to inform the user about the availability of the results. After the distributed calculation tasks are finished, the notifications are triggered to alert the user that the requested performance data is ready for review. The notifications may take various forms, such as email alerts, system messages, or updates within the GUI. The notifications typically include key information about the results, such as summaries of the metrics calculated, any notable findings or anomalies, and possibly a link or prompt to view the detailed results. The notifications ensure that the user is promptly informed about the completion of their request, allowing them to access and analyse the performance data without delay. The notifications may terminate once the user has acknowledged the alert or when the underlying issue or condition prompting the notification has been resolved, ensuring that the system reflects the updated status. Thus, the users stay updated on network performance, which enables them to take timely actions or decisions based on the latest metrics.
[0083] FIG. 4 illustrates an exemplary computer system in which or with
which embodiments of the present invention can be utilized, in accordance with an embodiment of the present disclosure.
[0084] Referring to FIG. 4, a block diagram (400) of an exemplary computer
system is disclosed. The computer system includes input devices (402) connected through I/O peripherals. The system also includes a Central Processing Unit (CPU) (404) and Output Devices (408), connected through the I/O peripherals. The CPU (404) is also attached to a memory unit (416) along with an Arithmetic and Logical Unit (ALU) (414), a control unit (412), and secondary storage devices (410) such as Hard Disks and a Secure Digital (SD) Card. The data flow and control flow (406) are indicated by a straight arrow and a dashed arrow, respectively. The CPU (404) consists of data registers that hold the data bits, pointers, cache, Random Access Memory (RAM) (204), and a main processing unit containing the processing engine. The computer system (400) also consists of communication buses that are used to transport the data internally in the system. In an embodiment, a processor of the system is used for conducting on-demand distributed computing of network performance data. A person skilled in the art will appreciate that the system may include more than one processor (202) and communication ports for ease of function. The processor (202) may include various modules associated with embodiments of the present invention. The input component can also include communication ports, ethernet ports, gigabit ports, a parallel port, or a Universal Serial Bus (USB) port. The communication port can also be chosen depending on a specific network such as a Wide Area Network (WAN), a Local Area Network (LAN), or a Personal Area Network (PAN). The communication port can be an RS-232 port that can be used with the remote dialling and internet connection options of the system. A Gigabit port can be used to connect the system to the internet at all times, and the Gigabit port can use copper or fibre for the connection.
[0085] FIG. 5 illustrates an exemplary flow diagram of the method (500) for performing on-demand network performance management, in accordance with an embodiment of the present disclosure.
[0086] At step 502: The method (500) includes receiving at least one request comprising one or more parameters and one or more tasks from a user equipment (UE) (104). The at least one request encompasses diverse needs, ranging from real-time monitoring to historical data analysis and proactive alerting. The at least one request may include real-time monitoring requests, which enable immediate insights into current network conditions, crucial for detecting and responding to sudden performance issues. On the other hand, the at least one request may include historical data requests, which facilitate trend analysis and long-term planning by providing a comprehensive view of network performance over extended periods. In an embodiment, the one or more parameters may include a time interval, for which the user can specify a specific time range or interval within which data should be analysed. The one or more parameters may include a network segment, for which the user may filter data based on specific network segments or zones within their infrastructure. This could include analysing data from particular geographical locations, departments, VLANs, or subnet ranges. The one or more parameters may include a type of metric, for which the user can select which network performance metrics they want to analyse. Examples of metrics include latency (response time), bandwidth utilization, packet loss rate, error rates, throughput (data transfer rate), and jitter (variation in packet delay). The one or more parameters may include QoS. The one or more tasks are specific actions or operations requested by the UE (104), which may include data retrieval, data processing, data visualization, and alert generation. In an embodiment, the at least one request is received through well-defined communication channels, such as APIs, web interfaces, or command-line interfaces, each tailored to accommodate different user needs and technical capabilities.
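The shape of such a request can be sketched as follows. This is an illustrative assumption only: the disclosure does not fix a data format, and every field name below (time_interval, network_segment, metrics, tasks) is hypothetical.

```python
# Illustrative sketch of a request carrying one or more parameters and
# one or more tasks, as described in step 502. All field names are
# assumptions for illustration, not part of the disclosure.
from dataclasses import dataclass, field


@dataclass
class PerformanceRequest:
    time_interval: tuple                           # (start, end) of the analysis window
    network_segment: str                           # e.g. a VLAN, subnet range, or location
    metrics: list = field(default_factory=list)    # latency, jitter, packet loss, ...
    tasks: list = field(default_factory=list)      # retrieval, processing, alerting, ...


request = PerformanceRequest(
    time_interval=("2024-01-01T00:00", "2024-01-02T00:00"),
    network_segment="vlan-42",
    metrics=["latency", "packet_loss", "throughput"],
    tasks=["data_retrieval", "statistical_analysis", "alert_generation"],
)
```

A request of this shape could arrive through any of the channels named above (an API, a web form, or a command-line invocation) before being handed to the processing engine.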
[0087] At step 504: The method (500) includes collecting the network performance data from one or more data sources based on the one or more parameters. In an embodiment, the one or more data sources encompass a wide array of network components and systems, including routers, switches, firewalls, servers, client devices, and cloud services. Each of these components generates data points related to network traffic, latency, bandwidth usage, packet loss, and other performance metrics. For instance, routers and switches provide data on data packet routing and traffic flows within the network. Firewalls contribute information about security events and traffic filtering. The data collection module (210) retrieves the network performance data from the identified one or more data sources. The retrieval of the network performance data can be achieved through various mechanisms such as SNMP polling, NetFlow or sFlow data collection, API integrations with monitoring tools, packet capturing, log file parsing, or other custom data acquisition methods. Once the raw data is collected from the different sources, the data collection module (210) aggregates it into a cohesive dataset. This involves organizing and merging the collected data to create a comprehensive view of network performance metrics.
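The aggregation step above can be sketched minimally as follows. The source names and the per-source sample format are illustrative assumptions; the disclosure only requires that raw data from several sources be merged into one cohesive dataset.

```python
# Minimal sketch of aggregating raw samples from several data sources
# (routers, switches, firewalls, ...) into one dataset keyed by metric
# name. Source names and the (metric, value) sample format are
# illustrative assumptions.
from collections import defaultdict


def aggregate(sources):
    """Merge {source: [(metric, value), ...]} into {metric: [values]}."""
    dataset = defaultdict(list)
    for samples in sources.values():
        for metric, value in samples:
            dataset[metric].append(value)
    return dict(dataset)


raw = {
    "router-1": [("latency_ms", 12.0), ("packet_loss", 0.1)],
    "switch-7": [("latency_ms", 9.5)],
    "firewall": [("packet_loss", 0.0)],
}
dataset = aggregate(raw)
```

In a real deployment the per-source samples would come from SNMP polling, NetFlow/sFlow collection, or log parsing rather than literals.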
[0088] At step 506: The method (500) includes splitting each of the received one or more tasks into one or more sub-tasks. When the at least one request comprising one or more tasks is received from the UE (104), the one or more tasks are split into smaller, manageable sub-tasks that can be concurrently processed across the one or more computing nodes. This decomposition process begins with a thorough analysis of the task’s requirements and dependencies. Each sub-task is carefully designed to encapsulate a specific portion of the overall workload, ensuring that computational resources are allocated optimally based on the nature and complexity of the sub-task. Furthermore, the system considers factors such as data locality and node capabilities during sub-task allocation to minimize data transfer latency and maximize processing efficiency. The one or more sub-tasks may include a data retrieval sub-task, a statistical analysis sub-task, an anomaly detection sub-task, a data filtering sub-task, an alert generation sub-task, and a data aggregation sub-task.
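One simple way to realize the decomposition above is to split a task's data range into fixed-size chunks, one per sub-task. The chunking strategy is an assumption for illustration; the disclosure only requires that tasks become smaller, concurrently processable units.

```python
# Illustrative sketch of splitting a task into sub-tasks by chunking
# its input records. The "#n" naming and fixed chunk size are
# assumptions, not part of the disclosure.
def split_task(task_name, records, chunk_size):
    """Yield (sub_task_name, slice_of_records) pairs."""
    for i in range(0, len(records), chunk_size):
        yield f"{task_name}#{i // chunk_size}", records[i:i + chunk_size]


records = list(range(10))                 # stand-in for collected data points
sub_tasks = list(split_task("statistical_analysis", records, chunk_size=4))
```

Each resulting (name, chunk) pair is then a candidate for independent assignment to a computing node in the next step.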
[0089] At step 508: The method (500) includes assigning the one or more sub-tasks along with the collected network performance data across one or more computing nodes (316). The one or more computing nodes may include server nodes, virtual machines and containerized nodes, and edge computing nodes. Each computing node is selected based on one or more defined conditions and is configured to perform the one or more assigned sub-tasks to generate a modified network performance data. The one or more defined conditions include at least one of computing node processing power, memory capacity, network proximity to data sources, processing capabilities, and workload. This process begins with a detailed assessment of each sub-task’s computational requirements and the capabilities of the available computing nodes. Sub-tasks are then intelligently allocated based on factors such as computing node processing power, memory capacity, network proximity to data sources, processing capabilities, and workload. Furthermore, the system employs load balancing algorithms to ensure an equitable distribution of workload among computing nodes, thereby optimizing resource utilization and minimizing processing times. The load balancing algorithms may include round robin, least connection, and weighted round robin algorithms. The modified network performance data refers to the processed and transformed network performance data generated by the one or more computing nodes after executing the assigned one or more sub-tasks on the collected network performance data. This modified data typically reflects insights, optimizations, or enhancements made through computations, analyses, or transformations performed by the computing nodes. It may include aggregated statistics, derived metrics, visualizations, or other forms of processed data that provide deeper insights into network behavior, performance trends, or operational conditions.
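The weighted round robin policy named above can be sketched as follows. The node names and weights are illustrative assumptions; a weight here stands in for any of the defined conditions (processing power, memory capacity, current workload).

```python
# Minimal sketch of weighted round-robin assignment of sub-tasks to
# computing nodes, one of the load-balancing algorithms named above.
# Node names and weights are illustrative assumptions.
from itertools import cycle


def weighted_round_robin(nodes):
    """Expand {node: weight} into a repeating assignment order."""
    order = [name for name, weight in nodes.items() for _ in range(weight)]
    return cycle(order)


nodes = {"node-a": 2, "node-b": 1}        # node-a assumed to have twice the capacity
rotation = weighted_round_robin(nodes)
assignment = {f"sub-task-{i}": next(rotation) for i in range(6)}
```

With these weights, node-a receives two sub-tasks for every one given to node-b; a plain round robin is the special case where all weights are equal, and least-connection would instead consult each node's live workload.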
[0090] At step 510: The method (500) includes receiving the generated modified network performance data from each of the computing nodes (316).
[0091] At step 512: The method (500) includes analysing the received modified network performance data to generate a view of the received modified network performance data. Analysing the collected network performance data involves an approach that integrates data transformation, aggregation, statistical calculations, and trend analysis to derive actionable insights and optimize network operations. In an embodiment, the analysing comprises performing at least one of data transformation, aggregation, statistical calculations, and trend analysis on the received modified network performance data. Aggregation consolidates data from various sources or time intervals, enabling a holistic view of network performance. Statistical calculations involve applying quantitative methods to assess network metrics. Measures such as averages, deviations, and correlations provide statistical context, revealing performance trends, anomalies, or areas requiring attention. Trend analysis examines data over time to uncover patterns or fluctuations in network performance. In an embodiment, the data analysis module (212) processes and analyses the received modified data to extract meaningful insights and derive actionable conclusions. In an embodiment, the data analysis module (212) includes an artificial intelligence or machine learning engine to efficiently utilize the resources. The data analysis module (212) applies statistical techniques to the network performance data to identify statistical properties, such as the mean, median, variance, and distribution of performance metrics. The data analysis module (212) conducts diagnostic analysis to investigate the causes of network performance issues or anomalies. It employs techniques such as root cause analysis, troubleshooting, and error diagnostics to identify the underlying factors affecting network performance.
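The statistical pass described above can be sketched with the standard library alone. The latency series and the outlier rule (values above twice the median) are illustrative assumptions, not a method fixed by the disclosure.

```python
# Sketch of the statistical calculations named above: mean, median, and
# variance of a latency series. Sample values and the anomaly rule are
# illustrative assumptions.
import statistics

latency_ms = [10.0, 12.0, 11.0, 45.0, 12.5]   # one spike at 45.0

summary = {
    "mean": statistics.mean(latency_ms),
    "median": statistics.median(latency_ms),
    "variance": statistics.pvariance(latency_ms),
}

# A mean well above the median hints at outliers worth diagnosing;
# flag any sample more than twice the median as a candidate anomaly.
anomalies = [v for v in latency_ms if v > summary["median"] * 2]
```

Here the single spike pulls the mean above the median, and the simple threshold rule flags it for the diagnostic analysis described above.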
[0092] In another embodiment, the data analysis module (212) performs various analytical tasks on the network performance data to uncover patterns, trends, anomalies, and correlations. To perform the analysis, the module performs preprocessing steps to ensure data quality, consistency, and readiness for analysis. This may involve data cleaning, outlier detection and handling, data normalization, and feature engineering. The data analysis module (212) applies statistical techniques to the network performance data to identify statistical properties, such as the mean, median, variance, and distribution of performance metrics. The data analysis module (212) conducts diagnostic analysis to investigate the causes of network performance issues or anomalies. It employs techniques such as root cause analysis, troubleshooting, and error diagnostics to identify the underlying factors affecting network performance. By utilizing machine learning and pattern recognition algorithms, the data analysis module (212) also identifies recurring patterns, trends, or anomalies in the network performance data.
[0093] In some embodiments, the method further includes displaying the generated view of the analysed modified network performance data over a graphical user interface (GUI). Visualizing the generated view of the analysed modified network performance data through the GUI involves creating a user-friendly and informative presentation of complex metrics and insights. In an embodiment, the visualization may be done through interactive charts and graphs, utilizing various types of visual elements such as line charts, bar graphs, pie charts, and scatter plots to depict different aspects of network performance. These visualizations provide a clear representation of metrics such as latency trends, throughput variations, and error rates, enabling quick comprehension of data patterns. In an embodiment, the visualization may be done through customizable dashboards, designed to aggregate multiple visualizations into cohesive displays. Dashboards can be customized to display specific key performance indicators (KPIs), performance trends over time, or comparative analyses between different network segments.
[0094] In some embodiments, the method further includes generating one or more notifications associated with the received modified network performance data and communicating the one or more generated notifications to the UE (104). In an embodiment, the visualization may be done through alerts and notifications, integrating visual cues or alert mechanisms within the GUI to notify users of significant events or deviations from predefined criteria. Alerts may be displayed alongside visualizations, prompting immediate action or further investigation to maintain optimal network performance. Notifications are communicated to the UE (104) through appropriate channels such as mobile apps, web interfaces, email alerts, or SMS notifications. Notifications may include descriptive messages, severity levels (e.g., critical, warning), and recommended actions to address the identified issue.
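A notification carrying a severity level and a recommended action, as described above, can be sketched as follows. The thresholds, metric name, and message text are all assumptions for illustration.

```python
# Illustrative sketch of building a notification with a severity level
# (warning/critical) and a recommended action. Thresholds and message
# wording are assumptions, not part of the disclosure.
def build_notification(metric, value, warn_at, critical_at):
    if value >= critical_at:
        severity = "critical"
    elif value >= warn_at:
        severity = "warning"
    else:
        return None   # within normal range: no notification generated
    return {
        "severity": severity,
        "message": f"{metric} is {value}, warning threshold {warn_at}",
        "action": "Review the affected network segment in the dashboard",
    }


note = build_notification("packet_loss_pct", 3.2, warn_at=1.0, critical_at=5.0)
```

The resulting dictionary would then be dispatched to the UE over whichever channel applies (mobile app, web interface, email, or SMS).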
[0095] In an exemplary embodiment, the present disclosure discloses a user equipment (UE) (104) communicatively coupled with a network (106). The coupling comprises the steps of receiving, by the network (106), a connection request from the UE (104), sending, by the network (106), an acknowledgment of the connection request to the UE (104), and transmitting a plurality of signals in response to the connection request. The on-demand network performance management is performed by a method that includes receiving at least one request comprising one or more parameters and one or more tasks from a user equipment (UE). The method includes collecting network performance data from one or more data sources based on the one or more parameters. The method includes splitting each of the received one or more tasks into one or more sub-tasks. The method includes assigning the one or more sub-tasks along with the collected network performance data across one or more computing nodes. Each computing node is selected based on one or more defined conditions and is configured to perform the one or more assigned sub-tasks to generate a modified network performance data. The method includes receiving the generated modified network performance data from each of the computing nodes. The method includes analysing the received modified network performance data to generate a view of the received modified network performance data.
[0096] It is to be appreciated by a person skilled in the art that while various embodiments of the present disclosure have been elaborated for the on-demand distributed computing of network performance data, the teachings of the present disclosure are also applicable to other types of applications, and all such embodiments are well within the scope of the present disclosure. Likewise, the system and method for the on-demand distributed computing of network performance data are equally implementable in other industries, and all such embodiments are well within the scope of the present disclosure without any limitation.
[0097] Moreover, in interpreting the specification, all terms should be interpreted in the broadest possible manner consistent with the context. In particular, the terms “comprises” and “comprising” should be interpreted as referring to elements, components, or steps in a non-exclusive manner, indicating that the referenced elements, components, or steps may be present, or utilized, or combined with other elements, components, or steps that are not expressly referenced. Where the specification or claims refer to at least one of something selected from the group consisting of A, B, C, ..., and N, the text should be interpreted as requiring only one element from the group, not A plus N, or B plus N, etc.
[0098] The present disclosure provides a technical advancement related to improving the performance of the network. This advancement addresses the inherent limitations of existing network performance analysis systems by introducing capabilities that empower users to dynamically create and execute tasks through the GUI on distributed computation engines. Traditionally, analysing network performance data has been constrained by static task configurations and predefined workflows, which hindered adaptability to evolving network conditions and user needs. By enabling users to define and initiate tasks in real time via intuitive GUI interactions, the system facilitates an agile response to changing requirements and operational demands. This approach allows distributed computation engines to efficiently allocate computing resources, optimize task execution through parallel processing, and ensure timely delivery of insights.
[0099] While considerable emphasis has been placed herein on the preferred embodiments, it will be appreciated that many embodiments can be made and that many changes can be made in the preferred embodiments without departing from the principles of the disclosure. These and other changes in the preferred embodiments of the disclosure will be apparent to those skilled in the art from the disclosure herein, whereby it is to be distinctly understood that the foregoing descriptive matter is to be implemented merely as illustrative of the disclosure and not as a limitation.
ADVANTAGES OF THE INVENTION
[00100] The present disclosure provides a system for efficiently executing on-demand distributed computing of network performance data.
[00101] The present disclosure provides an intelligent distributed computing environment to enable selection of the request based on the amount of resources available for execution, to achieve efficient utilization of resources.
[00102] The present disclosure provides a system where requests are processed and executed immediately upon user demand, ensuring that the execution of a request occurs promptly and specifically in response to a user’s request.
[00103] The present disclosure provides a system where users can obtain results or perform actions in real time or near-real time, enabling quick decision-making or immediate access to the required information.
[00104] The present disclosure provides a system where resources are provisioned only when a request is made, thus reducing overall resource usage; to avoid keeping resources idle during periods of inactivity, resources are dynamically allocated and released based on user demand, optimizing resource utilization.
[00105] The present disclosure provides a system that enables handling large volumes of network performance data and accommodating increasing processing demands without sacrificing performance.
[00106] The present disclosure provides a system where the distributed nature of the system enables parallel execution of analysis tasks across multiple compute nodes, significantly reducing processing time and improving overall system performance, allowing for efficient analysis of network performance data.
[00107] The present disclosure provides a system that supports real-time
monitoring and analysis of network performance data to continuously process streaming data and provide immediate insights into the network's performance, allowing for proactive troubleshooting, anomaly detection, and quick decision-making.
[00108] The present disclosure provides a system that allows users to
customize their analysis according to their specific needs and obtain relevant insights from the network performance data.
[00109] The present disclosure provides a system that utilizes a data lake or
distributed storage infrastructure to store network performance data as centralization of data enables easy access, efficient data management, and seamless integration with other data sources or analytics tools.

We Claim:
1. A method (500) for performing on-demand network performance
management, the method (500) comprising:
receiving (502), by a receiving unit (220), at least one request comprising one or more parameters and one or more tasks from a user equipment (UE) (104);
collecting (504), by a processing engine (208), network performance data from one or more data sources based on the one or more parameters;
splitting (506), by the processing engine (208), each of the received one or more tasks into one or more sub-tasks;
assigning (508), by the processing engine (208), the one or more sub-tasks along with the collected network performance data across one or more computing nodes (316), wherein each computing node (316) is selected based on one or more defined conditions and is configured to perform the one or more assigned sub-tasks to generate a modified network performance data;
receiving (510), by the processing engine (208), the generated modified network performance data from each of the computing nodes (316); and
analysing (512), by the processing engine (208), the received modified network performance data to generate a view of the received modified network performance data.
2. The method (500) as claimed in claim 1, further comprising generating one
or more notifications associated with the received modified network
performance data and communicating the one or more generated
notifications to the UE (104).

3. The method (500) as claimed in claim 1, further comprising displaying the generated view of the analysed modified network performance data over a graphical user interface (GUI).
4. The method (500) as claimed in claim 1, wherein the analysing comprises performing at least one of data transformation, aggregation, and statistical calculations on the received modified network performance data.
5. The method (500) as claimed in claim 1, wherein the one or more defined conditions include at least one of processing capabilities and workload of the one or more computing nodes (316).
6. A system (108) for performing on-demand network performance management, the system (108) comprising:
a receiving unit (220) configured to receive at least one request comprising one or more parameters and one or more tasks from a user equipment (UE) (104);
a processing engine (208) coupled with the receiving unit (220) to receive the at least one request and is further coupled with a memory (204) to execute a set of instructions stored in the memory (204), the processing engine (208) is configured to:
collect network performance data from one or more data sources based on the one or more parameters;
split each of the received one or more tasks into one or more sub-tasks;
assign the one or more sub-tasks along with the collected network performance data across one or more computing nodes (316), wherein each computing node (316) is selected based on one or more defined conditions and is configured to perform the one or more assigned sub-tasks to generate a modified network performance data;

receive the generated modified network performance data from each of the computing nodes (316); and
analyse the received modified network performance data to generate a view of the received modified network performance data.
7. The system (108) as claimed in claim 6, wherein the system (108) is further configured to generate one or more notifications associated with the received modified network performance data and communicate the one or more generated notifications to the UE (104).
8. The system (108) as claimed in claim 6, wherein the system (108) is further configured to display the generated view of the analysed modified network performance data over a graphical user interface (GUI).
9. The system (108) as claimed in claim 6, wherein the processing engine (208) is configured to perform at least one of data transformation, aggregation, and statistical calculations on the received modified network performance data.
10. The system (108) as claimed in claim 6, wherein the one or more defined conditions include at least one of processing capabilities and workload of the one or more computing nodes (316).
11. A user equipment (UE) (104) communicatively coupled with a network (106), the coupling comprises steps of:
receiving, by the network (106), a connection request from the UE (104);
sending, by the network (106), an acknowledgment of the connection request to the UE (104); and

transmitting a plurality of signals in response to the connection request, wherein on-demand network performance management is performed by a method (500) as claimed in claim 1.

Documents

Application Documents

# Name Date
1 202321059356-STATEMENT OF UNDERTAKING (FORM 3) [04-09-2023(online)].pdf 2023-09-04
2 202321059356-PROVISIONAL SPECIFICATION [04-09-2023(online)].pdf 2023-09-04
3 202321059356-FORM 1 [04-09-2023(online)].pdf 2023-09-04
4 202321059356-DRAWINGS [04-09-2023(online)].pdf 2023-09-04
5 202321059356-DECLARATION OF INVENTORSHIP (FORM 5) [04-09-2023(online)].pdf 2023-09-04
6 202321059356-FORM-26 [01-12-2023(online)].pdf 2023-12-01
7 202321059356-Proof of Right [04-03-2024(online)].pdf 2024-03-04
8 202321059356-FORM-26 [03-06-2024(online)].pdf 2024-06-03
9 202321059356-FORM 13 [03-06-2024(online)].pdf 2024-06-03
10 202321059356-AMENDED DOCUMENTS [03-06-2024(online)].pdf 2024-06-03
11 202321059356-Request Letter-Correspondence [04-06-2024(online)].pdf 2024-06-04
12 202321059356-Power of Attorney [04-06-2024(online)].pdf 2024-06-04
13 202321059356-Covering Letter [04-06-2024(online)].pdf 2024-06-04
14 202321059356-CORRESPONDENCE(IPO)-(WIPO DAS)-12-07-2024.pdf 2024-07-12
15 202321059356-FORM-5 [03-09-2024(online)].pdf 2024-09-03
16 202321059356-DRAWING [03-09-2024(online)].pdf 2024-09-03
17 202321059356-CORRESPONDENCE-OTHERS [03-09-2024(online)].pdf 2024-09-03
18 202321059356-COMPLETE SPECIFICATION [03-09-2024(online)].pdf 2024-09-03
19 202321059356-ORIGINAL UR 6(1A) FORM 26-190924.pdf 2024-09-23
20 Abstract 1.jpg 2024-09-25
21 202321059356-FORM 18 [07-10-2024(online)].pdf 2024-10-07