
Method And System For Query Management In Network Databases

Abstract: The present disclosure relates to a method and system for query management in network databases. The system includes a transceiver unit [302] configured to receive queries scheduled for execution in a plurality of database clusters. The system [300] comprises a collecting unit [304] to collect performance metrics data and resource utilization metrics data and a validating unit [306] to validate each of the performance metrics data and the resource utilization metrics data. The system comprises an identifying unit [308] to identify, using a learning model, a set of resource intensive queries from the received queries. The system comprises an estimation unit [310] configured to estimate resource usage for each of the set of resource intensive queries based upon a correlation between query performance data and the resource utilization metrics data. Further, the system comprises a generating unit [312] configured to generate a set of insights based on the correlation. [FIG. 3]


Patent Information

Application #
Filing Date
12 September 2023
Publication Number
11/2025
Publication Type
INA
Invention Field
COMPUTER SCIENCE
Status
Parent Application

Applicants

Jio Platforms Limited
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India.

Inventors

1. Aayush Bhatnagar
Reliance Corporate Park, Thane-Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
2. Sumit Thakur
Reliance Corporate Park, Thane-Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
3. Pritam Nath
Reliance Corporate Park, Thane-Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
4. Puspesh Prakash
Reliance Corporate Park, Thane-Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
5. Mohit Chaudhary
Reliance Corporate Park, Thane-Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
6. Tejesh Dakhinkar
Reliance Corporate Park, Thane-Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.

Specification

FORM 2
THE PATENTS ACT, 1970 (39 OF 1970) & THE PATENT RULES, 2003
COMPLETE SPECIFICATION
(See section 10 and rule 13)
“METHOD AND SYSTEM FOR QUERY MANAGEMENT IN NETWORK DATABASES”
We, Jio Platforms Limited, an Indian National, of Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India.
The following specification particularly describes the invention and the manner in which it is to be performed.

METHOD AND SYSTEM FOR QUERY MANAGEMENT IN NETWORK DATABASES
FIELD OF INVENTION
[0001] Embodiments of the present disclosure generally relate to network performance management systems. More particularly, embodiments of the present disclosure relate to methods and systems for query management in network databases.
BACKGROUND
[0002] The following description of the related art is intended to provide background information pertaining to the field of the disclosure. This section may include certain aspects of the art that may be related to various features of the present disclosure. However, it should be appreciated that this section is used only to enhance the understanding of the reader with respect to the present disclosure, and not as admissions of the prior art.
[0003] Wireless communication technology has rapidly evolved over the past few decades, with each generation bringing significant improvements and advancements. The first generation of wireless communication technology was based on analog technology and offered only voice services. However, with the advent of the second generation (2G) technology, digital communication and data services became possible, and text messaging was introduced. The third generation (3G) technology marked the introduction of high-speed internet access, mobile video calling, and location-based services. The fourth generation (4G) technology revolutionized wireless communication with faster data speeds, better network coverage, and improved security. Currently, the fifth generation (5G) technology is being deployed, promising even faster data speeds, low latency, and the ability to connect multiple devices simultaneously. With each generation, wireless communication technology has become more advanced, sophisticated, and capable of delivering more services to its users.
[0004] In the 5G Core and 5G Cloud ecosystem, a wide range of databases are deployed which span numerous clusters. An increase in the number of databases makes it difficult to monitor and get an overall picture of the health status and performance of all databases. Also, an increase in the number of users accessing 5G services can indeed lead to a higher network load, which, in turn, results in a sudden surge in the number of queries related to network parameters. This surge in queries is a direct consequence of the increased demand and usage of the 5G network. Additionally, the current systems lack the ability to promptly monitor and handle queries, and then deliver timely resolutions of the queries within the specified timeframe to manage the performance of the network. The conventional methods and systems prove inadequate in effectively managing the various queries within network databases, primarily due to a substantial surge in query volume.
[0005] Therefore, inefficient monitoring and handling of queries in the network databases impact the overall health and performance of the network and database and lead to various challenges such as query performance bottlenecks, dynamic workload handling, resource wastage, database slowdowns, scalability challenges, cost optimization, and the like.
[0006] Thus, in order to improve the radio access network capacity, network performance, and database performance, there exists an imperative need in the art to reliably monitor and manage the queries in the network databases, which the present disclosure aims to address.
OBJECTS OF THE DISCLOSURE
[0007] Some of the objects of the present disclosure, which at least one embodiment disclosed herein satisfies are listed herein below.

[0008] It is an object of the present disclosure to provide a system and a method for query management in network databases.
[0009] It is another object of the present disclosure to minimise resource wastage by analysing the resource utilization pattern related to the raised queries.
[0010] It is yet another object of the present disclosure to provide real-time insights into query performance and resource utilization.
[0011] It is yet another object of the present disclosure to provide recommendations for optimizing resource allocation for specific query types, or suggestions to modify certain queries for better performance.
[0012] It is yet another object of the present disclosure to monitor the network performance parameters.
[0013] It is yet another object of the present disclosure to optimize queries for better execution times to further enhance the network performance.
[0014] It is yet another object of the present disclosure to efficiently handle challenges in handling network-related queries such as challenges related to query performance bottlenecks, dynamic workload handling, resource wastage, database slowdowns, scalability challenges, cost optimization, and the like.
SUMMARY
[0015] This section is provided to introduce certain aspects of the present disclosure in a simplified form that are further described below in the detailed description. This summary is not intended to identify the key features or the scope of the claimed subject matter.

[0016] An aspect of the present disclosure may relate to a method for query management in network databases. The method comprises receiving, by a transceiver unit, one or more queries scheduled for execution in a plurality of database clusters. Further, the method comprises collecting, by a collecting unit, at least one of performance metrics data and resource utilization metrics data from the plurality of database clusters based on the received one or more queries. Next, the method comprises validating, by a validating unit, each of the collected performance metrics data and the collected resource utilization metrics data. Thereafter, the method comprises identifying, by an identifying unit using a learning model, a set of resource intensive queries from the received one or more queries. Furthermore, the method comprises estimating, by an estimation unit, resource usage for each of the set of resource intensive queries based upon a correlation between query performance data and the resource utilization metrics data, and further the method comprises generating, by a generating unit, a set of insights based on the correlation. In addition, the method comprises displaying, by a display unit via a user interface, the generated set of insights.
[0017] In an exemplary aspect of the present disclosure, the performance metrics data is related to efficiency, throughput, and effectiveness associated with the one or more queries.
[0018] In an exemplary aspect of the present disclosure, the set of insights comprises at least: anomalies detection, query optimization recommendation and resource management.
[0019] In an exemplary aspect of the present disclosure, the one or more queries are scheduled for execution on the plurality of database clusters.

[0020] In an exemplary aspect of the present disclosure, the estimation of resource usage comprises at least one of CPU usage, I/O usage, elapsed time, and execution time.
[0021] In an exemplary aspect of the present disclosure, the learning model is configured to analyse query performance metrics data including execution times, query plans, and response times.
[0022] Another aspect of the present disclosure may relate to a system for query management in network databases. The system comprises a transceiver unit configured to receive one or more queries scheduled for execution in a plurality of database clusters. Further, the system comprises a collecting unit configured to collect at least one of performance metrics data and resource utilization metrics data from the plurality of database clusters based on the received one or more queries. Further, the system comprises a validating unit configured to validate each of the collected performance metrics data and the collected resource utilization metrics data. Further, the system comprises an identifying unit configured to identify, using a learning model, a set of resource intensive queries from the received one or more queries. Further, the system comprises an estimation unit configured to estimate resource usage for each of the set of resource intensive queries based upon a correlation between query performance data and the resource utilization metrics data. Further, the system comprises a generating unit configured to generate a set of insights based on the correlation. Further, the system comprises a display unit configured to display, via a user interface, the generated set of insights.
[0023] Yet another aspect of the present disclosure may relate to a non-transitory computer readable storage medium, storing instructions for query management in network databases, the instructions including executable code which, when executed by one or more units of a system, causes: a transceiver unit to receive one or more queries scheduled for execution in a plurality of database clusters. Further, executable code which, when executed by one or more units of a system, causes a collecting unit to collect at least one of performance metrics data and resource utilization metrics data from the plurality of database clusters based on the received one or more queries. Further, executable code which, when executed by one or more units of a system, causes a validating unit to validate each of the collected performance metrics data and the collected resource utilization metrics data. Further, executable code which, when executed by one or more units of a system, causes an identifying unit to identify, using a learning model, a set of resource intensive queries from the received one or more queries. Further, executable code which, when executed by one or more units of a system, causes an estimation unit to estimate resource usage for each of the set of resource intensive queries based upon a correlation between query performance data and the resource utilization metrics data. Further, executable code which, when executed by one or more units of a system, causes a generating unit to generate a set of insights based on the correlation. Further, executable code which, when executed by one or more units of a system, causes a display unit to display, via a user interface, the generated set of insights.
DESCRIPTION OF THE DRAWINGS
[0024] The accompanying drawings, which are incorporated herein, and constitute a part of this disclosure, illustrate exemplary embodiments of the disclosed methods and systems in which like reference numerals refer to the same parts throughout the different drawings. Components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present disclosure. Also, the embodiments shown in the figures are not to be construed as limiting the disclosure, but the possible variants of the method and system according to the disclosure are illustrated herein to highlight the advantages of the disclosure. It will be appreciated by those skilled in the art that disclosure of such drawings includes disclosure of electrical components or circuitry commonly used to implement such components.

[0025] FIG. 1 illustrates an exemplary block diagram representation of a 5th generation core (5GC) network architecture.
[0026] FIG. 2 illustrates an exemplary block diagram of a computing device upon which the features of the present disclosure may be implemented in accordance with exemplary implementation of the present disclosure.
[0027] FIG. 3 illustrates an exemplary block diagram of a system for query management in network databases, in accordance with exemplary implementations of the present disclosure.
[0028] FIG. 4 illustrates a method flow diagram for query management in network databases in accordance with exemplary implementations of the present disclosure.
[0029] FIG. 5 illustrates an exemplary system architecture diagram, for query management in network databases in accordance with exemplary implementations of the present disclosure.
[0030] FIG. 6 illustrates an exemplary method flow for providing access to a user, in accordance with the exemplary embodiments of the present disclosure.
[0031] The foregoing shall be more apparent from the following more detailed description of the disclosure.
DETAILED DESCRIPTION
[0032] In the following description, for the purposes of explanation, various specific details are set forth in order to provide a thorough understanding of embodiments of the present disclosure. It will be apparent, however, that embodiments of the present disclosure may be practiced without these specific details. Several features described hereafter may each be used independently of one another or with any combination of other features. An individual feature may not address any of the problems discussed above or might address only some of the problems discussed above.
[0033] The ensuing description provides exemplary embodiments only, and is not intended to limit the scope, applicability, or configuration of the disclosure. Rather, the ensuing description of the exemplary embodiments will provide those skilled in the art with an enabling description for implementing an exemplary embodiment. It should be understood that various changes may be made in the function and arrangement of elements without departing from the spirit and scope of the disclosure as set forth.

[0034] Specific details are given in the following description to provide a thorough understanding of the embodiments. However, it will be understood by one of ordinary skill in the art that the embodiments may be practiced without these specific details. For example, circuits, systems, processes, and other components may be shown as components in block diagram form in order not to obscure the embodiments in unnecessary detail.

[0035] Also, it is noted that individual embodiments may be described as a process which is depicted as a flowchart, a flow diagram, a data flow diagram, a structure diagram, or a block diagram. Although a flowchart may describe the operations as a sequential process, many of the operations may be performed in parallel or concurrently. In addition, the order of the operations may be re-arranged. A process is terminated when its operations are completed but could have additional steps not included in a figure.

[0036] The word “exemplary” and/or “demonstrative” is used herein to mean serving as an example, instance, or illustration. For the avoidance of doubt, the subject matter disclosed herein is not limited by such examples. In addition, any aspect or design described herein as “exemplary” and/or “demonstrative” is not necessarily to be construed as preferred or advantageous over other aspects or designs, nor is it meant to preclude equivalent exemplary structures and techniques known to those of ordinary skill in the art. Furthermore, to the extent that the terms “includes,” “has,” “contains,” and other similar words are used in either the detailed description or the claims, such terms are intended to be inclusive, in a manner similar to the term “comprising” as an open transition word, without precluding any additional or other elements.
[0037] As used herein, a “processing unit” or “processor” or “operating processor” includes one or more processors, wherein processor refers to any logic circuitry for processing instructions. A processor may be a general-purpose processor, a special purpose processor, a conventional processor, a digital signal processor, a plurality of microprocessors, one or more microprocessors in association with a Digital Signal Processing (DSP) core, a controller, a microcontroller, Application Specific Integrated Circuits, Field Programmable Gate Array circuits, any other type of integrated circuits, etc. The processor may perform signal coding, data processing, input/output processing, and/or any other functionality that enables the working of the system according to the present disclosure. More specifically, the processor or processing unit is a hardware processor.
[0038] As used herein, “a user equipment”, “a user device”, “a smart-user-device”, “a smart-device”, “an electronic device”, “a mobile device”, “a handheld device”, “a wireless communication device”, “a mobile communication device”, “a communication device” may be any electrical, electronic and/or computing device or equipment, capable of implementing the features of the present disclosure. The user equipment/device may include, but is not limited to, a mobile phone, smart phone, laptop, a general-purpose computer, desktop, personal digital assistant, tablet computer, wearable device or any other computing device which is capable of implementing the features of the present disclosure. Also, the user device may contain at least one input means configured to receive an input from unit(s) which are required to implement the features of the present disclosure.

[0039] As used herein, “storage unit” or “memory unit” refers to a machine or computer-readable medium including any mechanism for storing information in a form readable by a computer or similar machine. For example, a computer-readable medium includes read-only memory (“ROM”), random access memory (“RAM”), magnetic disk storage media, optical storage media, flash memory devices or other types of machine-accessible storage media. The storage unit stores at least the data that may be required by one or more units of the system to perform their respective functions.
[0040] As used herein, “interface” or “user interface” refers to a shared boundary across which two or more separate components of a system exchange information or data. The interface may also refer to a set of rules or protocols that define communication or interaction of one or more modules or one or more units with each other, which also includes the methods, functions, or procedures that may be called.

[0041] All modules, units, components used herein, unless explicitly excluded herein, may be software modules or hardware processors, the processors being a general-purpose processor, a special purpose processor, a conventional processor, a digital signal processor (DSP), a plurality of microprocessors, one or more microprocessors in association with a DSP core, a controller, a microcontroller, Application Specific Integrated Circuits (ASIC), Field Programmable Gate Array circuits (FPGA), any other type of integrated circuits, etc.
[0042] As used herein, the transceiver unit includes at least one receiver and at least one transmitter configured respectively for receiving and transmitting data, signals, information or a combination thereof between units/components within the system and/or connected with the system.

[0043] As discussed in the background section, the current known solutions have several shortcomings. The present disclosure aims to overcome the above-mentioned and other existing problems in this field of technology by providing a method and system for query management in network databases, that is, a method and system to monitor and manage various queries in the network databases. In the present disclosure, the application, through a pull-based mechanism, gathers data on query performance metrics and resource utilization metrics (such as CPU, memory, storage, disk IO etc.) from the database clusters. The gathered data is streamed in real-time or near-real-time from the database clusters to the application’s worker service. Next, the collected data is stored in the application’s centralized data repository. Next, data aggregation is performed to organize the data for analysis. Next, advanced algorithm techniques are used to analyze query performance metrics, including execution times, query plans, and response times. Accordingly, patterns and trends in query behavior are identified. In the present disclosure, the advanced algorithms are primarily part of an unsupervised learning model, which helps in detecting anomalies in query performance metrics without the need for labeled training data. Next, slow and resource-intensive queries are flagged for further investigation. Next, the resource utilization metrics are processed and analysed to understand patterns of CPU, memory, and storage usage. In an embodiment, anomalies in resource consumption are detected, such as sudden spikes or sustained high usage. Next, the present disclosure involves correlating query performance with resource utilization. Next, the system generates insights based on query and resource data correlations. The generated insights may include recommendations to optimize resource allocation for specific query types, or suggestions to modify certain queries for better performance. Finally, the users access the dashboard to view real-time insights into query performance and resource utilization.
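For illustration only, the following is a minimal Python sketch of the pull-based collection flow described above, under the assumption that each cluster exposes its metrics through some monitoring endpoint. The cluster names, field names and the fetch_metrics helper are hypothetical and are not part of the disclosure.

```python
import time
from typing import List

# Hypothetical cluster names; a real deployment would read these from configuration.
CLUSTERS = ["mongodb-cluster-1", "redis-cluster-1", "cassandra-cluster-1"]

def fetch_metrics(cluster: str) -> dict:
    """Pull query performance and resource utilization metrics from one cluster.

    In practice this would call the cluster's monitoring API; here it returns a
    placeholder record so that only the pipeline structure is shown.
    """
    return {
        "cluster": cluster,
        "avg_execution_ms": 120.0,   # query performance metric
        "cpu_percent": 63.0,         # resource utilization metrics
        "memory_percent": 71.0,
        "disk_io_mbps": 40.0,
    }

def collect_once(repository: List[dict]) -> None:
    """One pull cycle: gather metrics from every cluster and append them to the
    centralized data repository (modelled here as an in-memory list)."""
    for cluster in CLUSTERS:
        repository.append(fetch_metrics(cluster))

if __name__ == "__main__":
    repo: List[dict] = []
    for _ in range(3):       # a few polling cycles, streamed near-real-time
        collect_once(repo)
        time.sleep(1)        # the polling interval is an assumed value
    print(f"collected {len(repo)} metric records")
```

In a full implementation the in-memory list would be replaced by the application’s centralized data repository and a worker service would run the polling loop continuously.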
[0044] Therefore, the present disclosure facilitates identification of abrupt patterns, trends, or significant deviations in metrics, surpassing predefined thresholds. The key aspect of the invention lies in gathering, assessing, and exhibiting query and resource data to furnish valuable insights. The application feeds on metrics data from all database clusters, filtering out irrelevant information and focusing on pivotal data aspects at the backend server.

[0045] The present disclosure provides various technical advancements. The present disclosure provides integration of performance and resource metrics to analyze query behavior. The present disclosure provides a dynamic and data-driven approach to predicting resource needs, as opposed to simple heuristics or predefined limits. In addition, the present disclosure includes detailed analysis of query and resources that further makes management of queries easy. The present disclosure facilitates identification of abrupt patterns, trends, or significant deviations in metrics, surpassing predefined thresholds. The key aspect of the invention lies in gathering, assessing, and exhibiting query and resource data to furnish valuable insights. The application feeds on metrics data from all database clusters, filtering out irrelevant information and focusing on pivotal data aspects at the backend server.
[0046] FIG. 1 illustrates an exemplary block diagram representation of 5th generation core (5GC) network architecture [100], in accordance with exemplary implementation of the present disclosure. As shown in figure 1, the 5GC network architecture [100] includes a user equipment (UE) [102], a radio access network (RAN) [104], an access and mobility management function (AMF) [106], a Session Management Function (SMF) [108], a Service Communication Proxy (SCP) [110], an Authentication Server Function (AUSF) [112], a Network Slice Specific Authentication and Authorization Function (NSSAAF) [114], a Network Slice Selection Function (NSSF) [116], a Network Exposure Function (NEF) [118], a Network Repository Function (NRF) [120], a Policy Control Function (PCF) [122], a Unified Data Management (UDM) [124], an application function (AF) [126], a User Plane Function (UPF) [128], a data network (DN) [130], wherein all the components are assumed to be connected to each other in a manner as obvious to the person skilled in the art for implementing features of the present disclosure.

[0047] The RAN [104] is the part of a mobile telecommunications system that connects user equipment (UE) [102] to the core network (CN) and provides access to different types of networks (e.g., 5G network). It consists of radio base stations and the radio access technologies that enable wireless communication.

[0048] The AMF [106] is a 5G core network function responsible for managing access and mobility aspects, such as UE registration, connection, and reachability. It also handles mobility management procedures like handovers and paging.

[0049] The SMF [108] is a 5G core network function responsible for managing session-related aspects, such as establishing, modifying, and releasing sessions. It coordinates with the User Plane Function (UPF) for data forwarding and handles IP address allocation and QoS enforcement.

[0050] The SCP [110] is a network function in the 5G core network that facilitates communication between other network functions by providing a secure and efficient messaging service. It acts as a mediator for service-based interfaces.

[0051] The AUSF [112] is a network function in the 5G core responsible for authenticating UEs during registration and providing security services. It generates and verifies authentication vectors and tokens.

[0052] The NSSAAF [114] is a network function that provides authentication and authorization services specific to network slices. It ensures that UEs can access only the slices for which they are authorized.

[0053] The NSSF [116] is a network function responsible for selecting the appropriate network slice for a UE based on factors such as subscription, requested services, and network policies.

[0054] The NEF [118] is a network function that exposes capabilities and services of the 5G network to external applications, enabling integration with third-party services and applications.

[0055] The NRF [120] is a network function that acts as a central repository for information about available network functions and services. It facilitates the discovery and dynamic registration of network functions.

[0056] The PCF [122] is a network function responsible for policy control decisions, such as QoS, charging, and access control, based on subscriber information and network policies.

[0057] The UDM [124] is a network function that centralizes the management of subscriber data, including authentication, authorization, and subscription information.

[0058] The AF [126] is a network function that represents external applications interfacing with the 5G core network to access network capabilities and services.

[0059] The UPF [128] is a network function responsible for handling user data traffic, including packet routing, forwarding, and QoS enforcement.

[0060] The DN [130] refers to a network that provides data services to user equipment (UE) in a telecommunications system. The data services may include but are not limited to Internet services, private data network related services.
[0061] The 5GC network architecture [100] also comprises a plurality of interfaces for connecting the network functions with a network entity for performing the network functions. The NSSF [116] is connected with the network entity via the interface denoted as (Nnssf) interface in the figure. The NEF [118] is connected with the network entity via the interface denoted as (Nnef) interface in the figure. The NRF [120] is connected with the network entity via the interface denoted as (Nnrf) interface in the figure. The PCF [122] is connected with the network entity via the interface denoted as (Npcf) interface in the figure. The UDM [124] is connected with the network entity via the interface denoted as (Nudm) interface in the figure. The AF [126] is connected with the network entity via the interface denoted as (Naf) interface in the figure. The NSSAAF [114] is connected with the network entity via the interface denoted as (Nnssaaf) interface in the figure. The AUSF [112] is connected with the network entity via the interface denoted as (Nausf) interface in the figure.

[0062] The AMF [106] is connected with the network entity via the interface denoted as (Namf) interface in the figure. The SMF [108] is connected with the network entity via the interface denoted as (Nsmf) interface in the figure. The SMF [108] is connected with the UPF [128] via the interface denoted as (N4) interface in the figure. The UPF [128] is connected with the RAN [104] via the interface denoted as (N3) interface in the figure. The UPF [128] is connected with the DN [130] via the interface denoted as (N6) interface in the figure. The RAN [104] is connected with the AMF [106] via the interface denoted as (N2). The AMF [106] is connected with the RAN [104] via the interface denoted as (N1). The UPF [128] is connected with other UPF [128] via the interface denoted as (N9). The interfaces such as Nnssf, Nnef, Nnrf, Npcf, Nudm, Naf, Nnssaaf, Nausf, Namf, Nsmf, N9, N6, N4, N3, N2, and N1 can be referred to as a communication channel between one or more functions or modules for enabling exchange of data or information between such functions or modules, and network entities.
[0063] FIG. 2 illustrates an exemplary block diagram of a computing device [200] (herein, also referred to as a computer system [200]) upon which one or more features of the present disclosure may be implemented in accordance with an exemplary implementation of the present disclosure. In an implementation, the computing device [200] may also implement a method for query management in network databases, utilising a system, or one or more sub-systems, provided in the network. In another implementation, the computing device [200] itself implements the method for query management in network databases, using one or more units configured within the computing device [200], wherein said one or more units are capable of implementing the features as disclosed in the present disclosure.

[0064] The computing device [200] may include a bus [202] or other communication mechanism(s) for communicating information, and a hardware processor [204] coupled with bus [202] for processing said information. The hardware processor [204] may be, for example, a general-purpose microprocessor. The computing device [200] may also include a main memory [206], such as a random-access memory (RAM), or other dynamic storage device, coupled to the bus [202], for storing information and instructions to be executed by the processor [204]. The main memory [206] also may be used for storing temporary variables or other intermediate information during execution of the instructions to be executed by the processor [204]. Such instructions, when stored in a non-transitory storage media accessible to the processor [204], render the computing device [200] into a special purpose device that is customized to perform operations according to the instructions. The computing device [200] further includes a read only memory (ROM) [208] or other static storage device coupled to the bus [202] for storing static information and instructions for the processor [204].

[0065] A storage device [210], such as a magnetic disk, optical disk, or solid-state drive is provided and coupled to the bus [202] for storing information and instructions. The computing device [200] may be coupled via the bus [202] to a display [212], such as a cathode ray tube (CRT), Liquid Crystal Display (LCD), Light Emitting Diode (LED) display, Organic LED (OLED) display, etc., for displaying information to a user of the computing device [200]. An input device [214], including alphanumeric and other keys, touch screen input means, etc. may be coupled to the bus [202] for communicating information and command selections to the processor [204]. Another type of user input device may be a cursor controller [216], such as a mouse, a trackball, or cursor direction keys, for communicating direction information and command selections to the processor [204], and for controlling cursor movement on the display [212]. The cursor controller [216] typically has two degrees of freedom in two axes, a first axis (e.g., x) and a second axis (e.g., y), that allows the cursor controller [216] to specify positions in a plane.

[0066] The computing device [200] may implement the techniques described herein using customized hard-wired logic, one or more ASICs or FPGAs, firmware and/or program logic which, in combination with the computing device [200], causes or programs the computing device [200] to be a special-purpose device. According to one implementation, the techniques herein are performed by the computing device [200] in response to the processor [204] executing one or more sequences of one or more instructions contained in the main memory [206]. The one or more instructions may be read into the main memory [206] from another storage medium, such as the storage device [210]. Execution of the one or more sequences of the one or more instructions contained in the main memory [206] causes the processor [204] to perform the process steps described herein. In alternative implementations of the present disclosure, hard-wired circuitry may be used in place of, or in combination with, software instructions.

[0067] The computing device [200] also may include a communication interface [218] coupled to the bus [202]. The communication interface [218] provides two-way data communication coupling to a network link [220] that is connected to a local network [222]. For example, the communication interface [218] may be an integrated service digital network (ISDN) card, cable modem, satellite modem, or a modem to provide a data communication connection to a corresponding type of telecommunication line. In another example, the communication interface [218] may be a local area network (LAN) card to provide a data communication connection to a compatible LAN. Wireless links may also be implemented. In any such implementation, the communication interface [218] sends and receives electrical, electromagnetic or optical signals that carry digital data streams representing different types of information.

[0068] The computing device [200] can send and receive data, including program code, messages, etc. through the network(s), the network link [220] and the communication interface [218]. In an example, a server [230] might transmit a requested code for an application program through the Internet [228], the ISP [226], the local network [222], a host [224] and the communication interface [218]. The received code may be executed by the processor [204] as it is received, and/or stored in the storage device [210], or other non-volatile storage for later execution.
[0069] Referring to FIG. 3, an exemplary block diagram of a system [300] for query management in network databases is shown, in accordance with the exemplary implementations of the present disclosure. The system [300] comprises at least one transceiver unit [302], at least one collecting unit [304], at least one validating unit [306], at least one identifying unit [308], at least one estimation unit [310], at least one generating unit [312], and at least one display unit [314]. Also, all of the components/units of the system [300] are assumed to be connected to each other unless otherwise indicated below. As shown in the figures, all units shown within the system [300] should also be assumed to be connected to each other. Also, in FIG. 3 only a few units are shown; however, the system [300] may comprise multiple such units or the system [300] may comprise any such numbers of said units, as required to implement the features of the present disclosure. Further, in an implementation, the system [300] may be present in a user device/user equipment [102] to implement the features of the present disclosure. The system [300] may be a part of the user device [102] or may be independent of but in communication with the user device [102] (also referred to herein as a UE). In another implementation, the system [300] may reside in a server or a network entity. In yet another implementation, the system [300] may reside partly in the server/network entity and partly in the user device.

[0070] The system [300] is configured for query management in network databases, with the help of the interconnection between the components/units of the system [300]. Query management in the network databases refers to the process of handling, processing, and executing one or more queries across a plurality of database clusters within the network.
[0071] Further, in accordance with the present disclosure, it is to be acknowledged that the functionality described for the various components/units can be implemented interchangeably. While specific embodiments may disclose a particular functionality of these units for clarity, it is recognized that various configurations and combinations thereof are within the scope of the disclosure. The functionality of specific units as disclosed in the disclosure should not be construed as limiting the scope of the present disclosure. Consequently, alternative arrangements and substitutions of units, provided they achieve the intended functionality described herein, are considered to be encompassed within the scope of the present disclosure.
[0072] The system [300] comprises the transceiver unit [302] configured to receive the one or more queries scheduled for execution in the plurality of database clusters. Herein, the plurality of database clusters refers to a group of databases, where each database is allotted to handle a certain type of data or an operation. Also, in a database cluster, multiple databases are managed by an application.
[0073] In an implementation of the present disclosure, the term ‘database clusters’ as used herein refers to groups of databases that are managed and monitored collectively. Examples of such database clusters may include, but are not limited to, NoSQL databases such as a “MongoDB” cluster, “Redis”, “Kafka”, “Cassandra”, or “Oracle”. It may be noted that such database clusters are only exemplary, and are in no manner to be construed as limiting the scope of the present subject matter in any manner. As would be explained in the foregoing description, the health status and topology of these clusters are tracked to confirm their proper functioning.

[0074] In general, the transceiver unit [302] is configured to handle communication within the system [300]. In one implementation, the transceiver unit [302] may receive the one or more queries that are scheduled for execution from an external system or a third-party application. The one or more queries herein are preferably the database operation queries that are scheduled for execution on the plurality of database clusters. In an implementation, the transceiver unit [302] may receive the one or more queries and may further route the one or more queries to an associated database cluster from the plurality of database clusters.
[0075] In one aspect, the scheduling of the one or more queries implies that the one or more queries are to be executed at a specific time as required by the external system. The external system may allocate a specific time period when the one or more queries may be executed. In another aspect, the scheduling of the one or more queries implies that the one or more queries are to be executed only after meeting certain parameters. The external system may provide certain parameters (such as, but not limited to, completion of a task) which may be required to be met for the execution of the one or more queries. It is to be noted that the scheduling of the one or more queries may further occur in other ways that are known to a person skilled in the art.
[0076] Further, the one or more queries are database operation queries. Furthermore, the one or more queries comprise at least: Data Retrieval Queries, Data Insertion/Update Queries and Batch Processing Queries. Herein, the Data Retrieval Queries may involve extracting specific data from the database. Further, the Data Insertion/Update Queries may involve adding new data or modifying existing data in the database. Moreover, the Batch Processing Queries refer to a type of query that processes large amounts of data in a single execution. It is to be noted that the Batch Processing Queries are typically scheduled to run during off-peak hours to minimize impact on the overall performance of the system [300].
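Purely as an illustration of the query types and scheduling options discussed above, the following Python sketch models a scheduled database operation query. The class names, fields and the off-peak window are hypothetical choices made for the example, not part of the claimed subject matter.

```python
from dataclasses import dataclass
from datetime import datetime, time
from enum import Enum
from typing import Optional, Set

class QueryType(Enum):
    # Query categories named in the disclosure.
    DATA_RETRIEVAL = "data_retrieval"
    DATA_INSERTION_UPDATE = "data_insertion_update"
    BATCH_PROCESSING = "batch_processing"

@dataclass
class ScheduledQuery:
    query_id: str
    query_type: QueryType
    target_cluster: str                   # database cluster the query is routed to
    run_at: Optional[datetime] = None     # time-based scheduling
    run_after_task: Optional[str] = None  # condition-based scheduling (e.g., a task id)

def is_off_peak(now: datetime) -> bool:
    """Assumed off-peak window (22:00 to 06:00) for batch processing queries."""
    return now.time() >= time(22, 0) or now.time() < time(6, 0)

def may_run_now(q: ScheduledQuery, now: datetime, completed_tasks: Set[str]) -> bool:
    """Check the scheduling constraints described in the disclosure."""
    if q.run_at is not None and now < q.run_at:
        return False
    if q.run_after_task is not None and q.run_after_task not in completed_tasks:
        return False
    if q.query_type is QueryType.BATCH_PROCESSING and not is_off_peak(now):
        return False
    return True
```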

[0077] The system [300] further comprises the collecting unit [304] connected at least to the transceiver unit [302]. The collecting unit [304] is further configured to collect at least one of performance metrics data and resource utilization metrics data from the plurality of database clusters based on the received one or more queries. Post receiving the one or more queries from the transceiver unit [302], the system [300] may execute the one or more queries and, simultaneously, the collecting unit [304] may collect at least one of the performance metrics data and the resource utilization metrics data from the plurality of database clusters during the execution of the one or more queries.
[0078] Further, the performance metrics data is related to efficiency, throughput, and effectiveness associated with the one or more queries. The performance metrics data mentioned herein may refer to one or more pieces of information associated with the efficiency, throughput, and effectiveness of the executed one or more queries in the plurality of database clusters. Herein, the efficiency metric refers to executing the one or more queries while minimizing the use of resources like CPU, memory, and disk I/O and maximizing output. Further, the throughput metric may define the number of queries handled in a specific time period. Furthermore, the effectiveness metric may define the accuracy and correctness in the execution of the one or more queries.
[0079] Further, the resource utilization metrics data mentioned herein may refer to the amount of resources, such as CPU, memory, storage, and network bandwidth, consumed by the plurality of database clusters while executing the one or more queries.
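As a concrete, non-limiting illustration of the two kinds of metrics the collecting unit [304] may gather, the following Python sketch defines simple record types; the field names are assumptions made for illustration and are not prescribed by the disclosure.

```python
from dataclasses import dataclass

@dataclass
class PerformanceMetrics:
    """Per-query performance metrics (efficiency, throughput, effectiveness)."""
    query_id: str
    execution_time_ms: float     # time taken to execute the query
    response_time_ms: float      # time until a response is returned
    queries_per_second: float    # throughput observed while the query ran
    result_correct_ratio: float  # effectiveness: accuracy/correctness proxy

@dataclass
class ResourceUtilizationMetrics:
    """Per-cluster resource utilization while the query executed."""
    cluster: str
    cpu_percent: float
    memory_percent: float
    storage_percent: float
    network_mbps: float
    disk_io_mbps: float
```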
[0080] The system [300] further comprises the validating unit [306] configured to validate each of the collected performance metrics data and the collected resource utilization metrics data. The primary role of the validating unit [306] is to ensure that the collected performance metrics data and the resource utilization metrics data are accurate and reliable. The validating unit [306] may perform one or more steps to validate the collected performance metrics data or the resource utilization metrics data, which are described accordingly.
[0081] At first, the validating unit [306] may parse the collected performance metrics data or the resource utilization metrics data to check whether the collected data is in the correct format. Further, post parsing the collected data, the validating unit [306] may determine whether one or more mandatory parameters from the collected data are present and correctly formatted. Next, the validating unit [306] may ensure that an application programming interface (API) key request for the one or more queries is made by an authenticated user by verifying the provided credentials, such as API keys and tokens. Thereafter, the validating unit [306], post authentication of the user, may further verify whether the authenticated user has the appropriate permissions to make the specific API request. Next, the validating unit [306] ensures that the provided parameters are consistent with each other. For instance, if the API request includes a date range, the start date should not be later than the end date.
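A minimal Python sketch of the validation steps above is given below, assuming the collected record is a dictionary; the mandatory field names and the verify_api_key/has_permission helpers are hypothetical placeholders for whatever credential store a deployment actually uses.

```python
from datetime import date
from typing import Mapping

MANDATORY_FIELDS = ("query_id", "cluster", "execution_time_ms", "cpu_percent")

def verify_api_key(api_key: str) -> bool:
    # Placeholder: a real system would check a credential store.
    return api_key.startswith("key-")

def has_permission(api_key: str, action: str) -> bool:
    # Placeholder permission check tied to the authenticated key.
    return action == "read_metrics"

def validate_record(record: Mapping, api_key: str) -> bool:
    """Apply the validation steps: format, mandatory fields, authentication,
    authorization, and cross-parameter consistency."""
    # 1. Format and mandatory parameters.
    if not all(field in record for field in MANDATORY_FIELDS):
        return False
    if not isinstance(record["execution_time_ms"], (int, float)):
        return False
    # 2. Authentication and authorization of the API request.
    if not verify_api_key(api_key) or not has_permission(api_key, "read_metrics"):
        return False
    # 3. Consistency: a start date must not be later than the end date.
    start, end = record.get("start_date"), record.get("end_date")
    if isinstance(start, date) and isinstance(end, date) and start > end:
        return False
    return True
```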
[0082] The system [300] further comprises the identifying unit [308] configured to identify, using a learning model, a set of resource intensive queries from the received one or more queries. Post validating the collected data, the identifying unit [308], using the learning model, may identify a query or a group of queries that may consume excessive resources such as CPU, memory, or disk I/O.
[0083] Further, the learning model is configured to analyse query performance metrics data including execution times, query plans, and response times. Further, the learning model mentioned herein may utilize one or more machine learning techniques and other models to analyse the performance metrics of queries and further detect the query or the group of queries that are consuming excessive resources. The learning model is based on historical data associated with the execution of the one or more queries and real-time data associated with the execution of the one or more queries. The query performance metrics data includes the execution times, query plans, and response times. Herein, the execution time analysis may represent the time period taken to execute the one or more queries. The queries that take a significantly longer time to execute when compared to an average time period are further separated and may be termed potential resource-intensive queries.
[0084] Herein, the query plan analysis may provide a plan stating how a query is to be executed. The learning model may process these plans to identify any inefficiencies in the execution process.

[0085] Further, the response time analysis may refer to the time between the start of execution of the one or more queries and the moment a response is returned to the user.
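The disclosure does not name a specific unsupervised technique for the identifying unit [308]; as one hedged possibility, the sketch below flags queries whose execution times deviate strongly from the observed average using a plain z-score, which needs no labeled training data. The threshold and the sample numbers are assumptions for illustration only.

```python
import statistics
from typing import Dict, List

def flag_resource_intensive(execution_times_ms: Dict[str, float],
                            z_threshold: float = 1.5) -> List[str]:
    """Return query ids whose execution time is an outlier relative to the
    collected metrics (a simple stand-in for the unsupervised learning model)."""
    values = list(execution_times_ms.values())
    if len(values) < 2:
        return []
    mean = statistics.fmean(values)
    stdev = statistics.stdev(values)
    if stdev == 0:
        return []
    return [qid for qid, t in execution_times_ms.items()
            if (t - mean) / stdev > z_threshold]

# Example: the third query is flagged as potentially resource intensive.
times = {"q1": 110.0, "q2": 95.0, "q3": 900.0, "q4": 120.0, "q5": 105.0}
print(flag_resource_intensive(times))  # ['q3']
```

A deployed system could replace this statistical check with any unsupervised model trained on the same metrics without changing the surrounding pipeline.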
[0086] The system [300] further comprises the estimation unit [310] configured to estimate resource usage for each of the set of resource intensive queries based upon a correlation between query performance data and the resource utilization metrics data. Further, the estimation of resource usage comprises at least one of CPU usage, I/O usage, elapsed time, and execution time. Herein, the estimation unit [310] may analyse both the query performance data and the resource utilization metrics data and may further predict the amount of resources (such as CPU, I/O, and execution time) consumed during the execution of each query from the one or more queries. Further, the resource utilization metrics data defines the percentage of resources utilized by a component/unit from the total resources allocated for processing or performing a task. Further, the resource utilization metrics data measures the efficiency and the effectiveness of the components/units of the system. For example, to estimate the resource utilization metrics data of, say, a CPU, the estimation unit [310] may measure the percentage of time taken by the CPU to process, perform or execute a task allocated to the CPU from the total available time. A higher percentage may indicate that the CPU is under heavy load and consuming more resources, whereas a lower percentage may indicate that the CPU is idle and not consuming more resources than required.

[0087] Post recognition of the resource-intensive queries by the identifying unit [308], the estimation unit [310] may further calculate the estimated resource usage for each query, which may further be utilized for efficient allotment of resources to the one or more queries in future. Further, for generating the estimation of resource usage mentioned above, the estimation unit [310] firstly analyses the behaviour of queries during their execution (termed as query performance metrics) and the resources consumed during the execution of said queries (termed as resource utilization metrics).
[0088] In an implementation, the estimation unit [310] correlates the complexity and type of a query (mentioned in the query performance data) with the amount of CPU required to execute the query.
[0089] In another implementation, the I/O usage may refer to the data required by the query to either read from or write to the database. Herein, the estimation unit [310] analyses the performance metrics for said query, such as data retrieval rates and the volume of data processed, and correlates the performance metrics with the I/O usage metrics.
[0090] In yet another implementation, the elapsed time may refer to the total time taken by the query to execute. The estimation unit [310] herein may use the historical data and performance metrics such as query execution plans, latency, and bottlenecks to estimate the time taken by said query to execute. The elapsed time is
25 often influenced by both the computational complexity of the query (affecting CPU
usage) and the amount of data involved (affecting I/O usage). Herein, the estimation unit [310] correlates a longer time period for the execution of query with higher resource consumption.
[0091] In yet another implementation, the execution time may refer to the actual time taken from the scheduling of the query to the execution of the query. The estimation unit [310] correlates execution time with the CPU and I/O usage, as the amount of processing power and data transfer involved affects the time taken by the query to execute.
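As one hedged way of realising the correlation-based estimation performed by the estimation unit [310], the sketch below fits a least-squares line between observed execution times and CPU usage and uses it to estimate the CPU cost of a flagged query. It relies on Python 3.10+ for statistics.correlation and statistics.linear_regression; the sample values are illustrative only.

```python
import statistics

# Historical observations: (execution_time_ms, cpu_percent) pairs per query run.
execution_ms = [100.0, 250.0, 400.0, 800.0, 1200.0]
cpu_percent = [12.0, 22.0, 35.0, 61.0, 88.0]

# Strength of the correlation between query performance and resource usage.
r = statistics.correlation(execution_ms, cpu_percent)

# Least-squares fit: cpu ~ slope * execution_time + intercept (Python 3.10+).
slope, intercept = statistics.linear_regression(execution_ms, cpu_percent)

def estimate_cpu(execution_time_ms: float) -> float:
    """Estimate CPU usage for a resource intensive query from its execution time."""
    return slope * execution_time_ms + intercept

print(f"correlation={r:.2f}, estimated CPU for a 1500 ms query: {estimate_cpu(1500.0):.1f}%")
```

The same pattern could be repeated for I/O usage, elapsed time, or any other resource dimension the estimation unit tracks.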
[0092] The system [300] further comprises the generating unit [312] configured to generate a set of insights based on the correlation. Further, the set of insights comprises at least: anomalies detection, query optimization recommendation and resource management.
[0093] The generating unit [312] herein is connected to the estimation unit [310] and is responsible for creating the set of insights based on the correlation between the query performance data and the resource utilization metrics data. The set of insights generated herein may further assist in identifying any potential bottlenecks and anomalies in resource usage, and in providing optimization recommendations.
[0094] The generating unit [312] herein may firstly analyse the data generated by the estimation unit [310] and transform the generated raw data into high-level recommendations or alerts. The generating unit [312] may create the set of insights by utilizing historical data and predefined patterns to provide insights such as detecting anomalies, offering optimization recommendations, and facilitating resource management. The set of insights may further assist in improving query performance and in efficient resource allocation across the database clusters.
[0095] In an implementation, for anomalies detection, the generating unit [312] identifies unusual patterns or deviations from an expected behaviour (such as taking a longer time than usual) in the query performance data and resource utilization metrics data during the execution of the query.
[0096] In an implementation, for query optimization recommendation, the generating unit [312], within the set of insights, may provide suggestions for optimizing resource-intensive queries, based on the performance and resource utilization metrics. For example, the suggestions may include changes to query structure, addition of indexes, or modifications to query plans to improve efficiency and reduce resource consumption. The generating unit [312] derives these suggestions by analysing patterns in the execution plans, resource usage, and historical performance data. The generating unit [312] might suggest alternative query strategies, such as rewriting queries to reduce complexity or changing database indexing strategies to enhance performance.
[0097] In yet another implementation, for generating resource management insights, the generating unit [312] may provide insights which may enable better management of resources such as CPU, memory, and I/O. Herein, the insights may involve redistributing workloads across database clusters, scheduling resource-intensive queries during off-peak hours, adjusting resource allocation, and the like, to prevent bottlenecks.
[0098] The system [300] further comprises the display unit [314] configured to display, via a user interface, the generated set of insights. The display unit [314] is configured to visually present the generated set of insights through a user interface (UI) that may further allow users to monitor the health of the database clusters, identify performance issues, and make informed decisions based on the data. The UI herein may present the generated set of insights, such as the anomalies detection, query optimization recommendations, and resource management strategies, in a user-friendly manner.
[0099] Referring to FIG. 4, an exemplary method flow diagram [400] for query management in network databases, in accordance with exemplary implementations of the present disclosure, is shown. In an implementation, the method [400] is performed by the system [300]. Further, in an implementation, the system [300] may be present in a server device to implement the features of the present disclosure.

[0100] Also, as shown in Figure 4, the method [400] initially starts at step [402].
[0101] The method [400] is associated with query management in network databases. Herein, query management in the network databases refers to the process of handling, processing, and executing one or more queries across a plurality of database clusters within the network.
[0102] At step [404], the method [400] comprises receiving, by a transceiver unit [302], one or more queries scheduled for execution in a plurality of database clusters. The method [400] explains that the plurality of database clusters refers to a group of databases, where each database is allotted to handle a certain type of data or an operation.
[0103] Further, in one implementation, the transceiver unit [302] may receive the one or more queries that are scheduled for execution from an external system or a third-party application. Furthermore, the one or more queries are database operation queries that are scheduled for execution on the plurality of database clusters. In an implementation, the transceiver unit [302] may receive the one or more queries and may further route the one or more queries to an associated database cluster from the plurality of database clusters.
[0104] In one aspect, the scheduling of the one or more queries implies that the one or more queries are to be executed at a specific time as required by the external system. In another aspect, the scheduling of the one or more queries implies that the one or more queries are to be executed only after meeting certain parameters. It is to be noted that the scheduling of the one or more queries may further occur in other ways that are known to a person skilled in the art.
[0105] Furthermore, the one or more queries comprises at least: Data Retrieval
30 Queries, Data Insertion/Update Queries and Batch Processing Queries. Herein, the
Data Retrieval Queries may involve extracting specific data from the database.
Further, the data insertion/update queries may involve adding new data or
modifying existing data in the database. Moreover, the Batch Processing Queries
refer to a type of query that processes large amounts of data in a single execution. It is to be noted that the Batch Processing Queries are typically scheduled to run during off-peak hours to minimize impact on overall performance of the system
[300].
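Purely by way of illustration and not as part of the claimed subject matter, the following Python sketch shows one possible way such scheduled queries might be represented and routed to an associated database cluster by query type; the names ScheduledQuery, QueryType, CLUSTER_BY_TYPE and route_query, and the cluster mapping itself, are hypothetical assumptions introduced only for clarity.

```python
from dataclasses import dataclass
from datetime import datetime
from enum import Enum


class QueryType(Enum):
    DATA_RETRIEVAL = "data_retrieval"
    DATA_INSERTION_UPDATE = "data_insertion_update"
    BATCH_PROCESSING = "batch_processing"


@dataclass
class ScheduledQuery:
    query_id: str
    sql: str
    query_type: QueryType
    scheduled_at: datetime   # specific execution time requested by the external system
    target_cluster: str      # cluster allotted to this type of data or operation


# Hypothetical mapping of query types to database clusters; the disclosure only
# states that each cluster is allotted to handle a certain type of data or operation.
CLUSTER_BY_TYPE = {
    QueryType.DATA_RETRIEVAL: "cluster-read",
    QueryType.DATA_INSERTION_UPDATE: "cluster-write",
    QueryType.BATCH_PROCESSING: "cluster-batch",
}


def route_query(sql: str, query_type: QueryType, scheduled_at: datetime) -> ScheduledQuery:
    """Assign an incoming scheduled query to its associated database cluster."""
    return ScheduledQuery(
        query_id=f"q-{int(scheduled_at.timestamp())}",
        sql=sql,
        query_type=query_type,
        scheduled_at=scheduled_at,
        target_cluster=CLUSTER_BY_TYPE[query_type],
    )


if __name__ == "__main__":
    q = route_query(
        "SELECT * FROM kpi WHERE day = CURRENT_DATE",
        QueryType.BATCH_PROCESSING,
        datetime(2025, 1, 1, 2, 0),   # off-peak slot for a batch query
    )
    print(q.target_cluster)  # cluster-batch
```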
[0106] At step [406], the method [400] comprises collecting, by a collecting unit [304], at least one of performance metrics data and a resource utilization metrics
data from the plurality of database clusters based on the received one or more queries. The method [400] explains that post receiving the one or more queries from the transceiver unit [302], the system [300] may execute the one or more queries and simultaneously the collecting unit [304] may further collect the at least one of performance metrics data and a resource utilization metrics data from the plurality of database clusters during the execution of the one or more queries.
[0107] Further, the performance metrics data is related to efficiency, throughput, and effectiveness associated with the one or more queries. The performance metrics data mentioned herein may refer to information associated with the efficiency, throughput, and effectiveness of the executed one or more queries in the plurality of database clusters. Herein, the efficiency metric refers to executing the one or more queries while minimizing the use of resources like CPU, memory, and disk I/O and maximizing output. Further, the throughput metric may define a number of queries handled in a specific time period. Furthermore, the effectiveness metric may define the accuracy and correctness in the execution of the one or more queries.
[0108] Further, the resource utilization metric data mentioned herein may refer to the amount of resources, such as CPU, memory, storage, and network bandwidth, consumed by the plurality of database clusters while executing the one or more queries.
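As a purely illustrative, non-limiting sketch of how the collected performance metrics data and resource utilization metrics data might be recorded per executed query, the following Python snippet is provided; the QueryMetrics record, its field names, and the throughput helper are assumptions and do not form part of the disclosure.

```python
from dataclasses import dataclass


@dataclass
class QueryMetrics:
    """Hypothetical record combining the performance metrics data and the resource
    utilization metrics data collected for one executed query on one cluster."""
    query_id: str
    cluster: str
    # performance metrics data (efficiency, throughput, effectiveness)
    execution_time_ms: float   # time taken to execute the query
    response_time_ms: float    # time until a response is returned to the user
    rows_processed: int
    succeeded: bool            # effectiveness: correctness of the execution
    # resource utilization metrics data
    cpu_percent: float
    memory_mb: float
    io_read_mb: float
    io_write_mb: float
    network_mb: float


def throughput(records: list[QueryMetrics], window_seconds: float) -> float:
    """Throughput metric: number of queries handled in a given time window."""
    return len(records) / window_seconds if window_seconds > 0 else 0.0
```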
[0109] At step [408], the method [400] comprises validating, by a validating unit
[306], each of the collected performance metrics data and the collected resource
utilization metrics data. The method [400] further states that the primary role of the
validating unit [306] is to ensure that the collected metrics data or the resource
utilization metrics data are accurate and reliable. The validating unit [306] may perform one or more steps to validate the collected metrics data or the resource utilization metrics data, which are described accordingly.
[0110] At first, the validating unit [306] may parse the collected metrics data or the resource utilization metrics data to determine whether the collected data is in the correct format. Further, post parsing the collected data, the validating unit [306] may determine whether one or more mandatory parameters from the collected data are present and correctly formatted. Next, the validating unit [306] may ensure that an application programming interface (API) key request for the one or more queries is made by an authenticated user by verifying the provided credentials, such as API keys and tokens. Thereafter, the validating unit [306], post authentication of the user, may further verify whether the authenticated user has the appropriate permissions to make the specific API request. Next, the validating unit [306] ensures that the provided parameters are consistent with each other. For instance, if the API request includes a date range, the start date should not be later than the end date.
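A minimal, non-limiting Python sketch of these validation steps is shown below; the payload fields, the placeholder credential store and the function validate_request are assumptions made for illustration and are not the actual implementation of the validating unit [306].

```python
from datetime import date

# Assumed mandatory parameters and a placeholder credential store; the actual
# fields, keys and permissions used by the validating unit [306] are not disclosed.
REQUIRED_FIELDS = {"query_id", "cluster", "execution_time_ms", "cpu_percent"}
VALID_API_KEYS = {"demo-key-123": {"metrics:read"}}


def validate_request(payload: dict, api_key: str,
                     required_permission: str = "metrics:read") -> bool:
    # 1. Format check: all mandatory parameters must be present.
    if not REQUIRED_FIELDS.issubset(payload):
        return False
    # 2. Authentication: verify the provided API key.
    permissions = VALID_API_KEYS.get(api_key)
    if permissions is None:
        return False
    # 3. Authorization: the authenticated user must hold the required permission.
    if required_permission not in permissions:
        return False
    # 4. Consistency: if a date range is supplied, start must not be later than end.
    start, end = payload.get("start_date"), payload.get("end_date")
    if isinstance(start, date) and isinstance(end, date) and start > end:
        return False
    return True


# Example:
# validate_request({"query_id": "q1", "cluster": "c1",
#                   "execution_time_ms": 120.0, "cpu_percent": 12.5},
#                  api_key="demo-key-123")  # -> True
```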
[0111] At step [410], the method [400] comprises identifying, by an identifying
unit [308] using a learning model, a set of resource intensive queries from the received one or more queries. The method [400] further explains that post validating the collected data, the identifying unit [308] using the learning model may identify a query or a group of queries that may consume excessive resources such as CPU, memory, or disk I/O.
[0112] Further, the learning model is configured to analyse query performance
metrics data including execution times, query plans, and response times. The
learning model mentioned herein may utilize one or more machine learning
techniques and other models to analyse the performance metrics of queries and
further detect the query or the group of queries that are consuming excessive resources. The learning model is based on historical data and real-time data associated with the execution of the one or more queries. The query performance metrics data include the execution times, query plans, and response times. Herein, the execution time analysis may represent the time period taken to execute the one or more queries. The queries that take a significantly longer time in execution compared to an average time period are further separated and may be termed as potential resource-intensive queries.
[0113] Herein, the query plan analysis may provide a plan stating how a query is to
be executed. The learning model may process these plans to identify any inefficiencies in the execution process.
[0114] Further, the response time analysis may refer to the time between the execution of the one or more queries and when a response is returned to the user.
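For illustration only, the following Python sketch uses a simple statistical threshold as a stand-in for the learning model described above, flagging queries whose execution time deviates strongly from the average; the function flag_resource_intensive, the factor k and the sample timings are assumptions rather than the disclosed model.

```python
import statistics


def flag_resource_intensive(execution_times_ms: dict[str, float], k: float = 2.0) -> set[str]:
    """Flag queries whose execution time lies more than k standard deviations
    above the average execution time of the observed workload."""
    values = list(execution_times_ms.values())
    if len(values) < 2:
        return set()
    threshold = statistics.fmean(values) + k * statistics.stdev(values)
    return {qid for qid, t in execution_times_ms.items() if t > threshold}


# Example with made-up timings (in milliseconds):
# flag_resource_intensive({"q1": 120.0, "q2": 135.0, "q3": 4200.0}, k=1.0)  # -> {"q3"}
```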
[0115] At step [412], the method [400] comprises estimating, by an estimation unit [310], resource usage for each of the set of resource intensive queries based upon correlation between query performance data and the resource utilization metric data. Further, the estimation of resource usage comprises at least one of CPU usage, I/O usage, elapsed time, and execution time. The method [400] further explains that the estimation unit [310] may analyse both the query performance data and resource utilization metrics data and may further predict the amount of resources (such as CPU, I/O, and execution time) consumed during the execution of each query from the one or more queries.
[0116] Post recognition of the resource-intensive queries by the identifying unit
[308], the estimation unit [310] may further calculate the estimated resource usage for each query, which may further be utilized in efficient allotment of resources to the one or more queries in the future. Further, for generating the estimation of resource usage mentioned above, the estimation unit [310] firstly analyses the behaviour of queries during their execution (termed as query performance metrics) and the resources consumed during the execution of said queries (termed as resource utilization metrics).
[0117] In an implementation, the estimation unit [310] correlates the complexity and the type of a query (mentioned in the query performance data) with the amount of CPU required to execute the query.
[0118] In another implementation, the I/O usage may refer to the data the query is required to either read from or write to the database. Herein, the estimation unit [310] analyses the performance metrics for said query, such as data retrieval rates and the volume of data processed, and correlates the performance metrics with the I/O usage metrics.
[0119] In yet another implementation, the elapsed time may refer to the total time taken by the query to execute. The estimation unit [310] herein may use the historical data and performance metrics such as query execution plans, latency, and bottlenecks to estimate the time taken by said query to execute. The elapsed time is often influenced by both the computational complexity of the query (affecting CPU usage) and the amount of data involved (affecting I/O usage). Herein, the estimation unit [310] correlates a longer query execution time with higher resource consumption.
[0120] In yet another implementation, the execution time may refer to the actual
time taken from the scheduling of the query to the execution of the query. The estimation unit [310] correlates execution time with the CPU and I/O usage, as the
amount of processing power and data transfer involved influences the time taken by the query to execute.
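As a non-limiting sketch of the correlation-based estimation described in paragraphs [0117] to [0120], the following Python snippet fits a simple linear relationship between execution time and observed CPU usage from historical runs; the functions fit_cpu_estimator and estimate_cpu, and the historical values, are illustrative assumptions rather than the estimation unit [310] itself.

```python
import numpy as np


def fit_cpu_estimator(exec_time_ms, cpu_seconds):
    """Fit a simple linear relationship between query execution time (performance
    data) and observed CPU usage (resource utilization data) from historical runs.
    Returns the slope, intercept and correlation coefficient."""
    x, y = np.asarray(exec_time_ms, dtype=float), np.asarray(cpu_seconds, dtype=float)
    slope, intercept = np.polyfit(x, y, 1)   # least-squares line
    r = np.corrcoef(x, y)[0, 1]              # strength of the correlation
    return slope, intercept, r


def estimate_cpu(exec_time_ms: float, slope: float, intercept: float) -> float:
    """Estimate CPU usage for a newly identified resource-intensive query."""
    return slope * exec_time_ms + intercept


# Hypothetical historical data: longer execution generally implies higher CPU usage.
hist_time = [100, 250, 400, 800, 1600]
hist_cpu = [0.2, 0.5, 0.9, 1.7, 3.4]
slope, intercept, r = fit_cpu_estimator(hist_time, hist_cpu)
print(round(estimate_cpu(1200, slope, intercept), 2), round(r, 3))
```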
[0121] At step [414], the method [400] comprises generating, by a generating unit
[312], a set of insights based on the correlation. Further, the set of insights comprises at least: anomalies detection, query optimization recommendation and resource management. The method [400] further explains that the generating unit [312] herein is connected to the estimation unit [310] and is responsible for creating the set of insights based on the correlation between the query performance data and resource utilization metrics data. The set of insights generated herein may further assist in identifying any potential bottlenecks, optimization recommendations, and anomalies in resource usage.
[0122] The generating unit [312] herein may firstly analyse the data generated by the estimation unit [310] and transform the generated raw data into high-level recommendations or alerts. The generating unit [312] may create the set of insights by utilizing historical data and predefined patterns to provide insights such as detecting anomalies, offering optimization recommendations, and facilitating resource management. The set of insights may further assist in improving query performance and enabling efficient resource allocation across the database clusters.
[0123] In an implementation, for anomalies detection, the generating unit [312]
identifies unusual patterns or deviations from an expected behaviour (such as taking
longer time than usual) in the query performance data and resource utilization
metrics data during the execution of the query.
[0124] In an implementation, for query optimization recommendation, the
generating unit [312] within the set of insights may provide suggestions for
optimizing resource-intensive queries, based on the performance and resource
utilization metrics. For example, the suggestions may include changes to query structure, an addition of indexes, or modifications to query plans to improve
efficiency and reduce resource consumption. The generating unit [312] derives these recommendations by analysing patterns in the execution plans, resource usage, and historical performance data. The generating unit [312] might suggest alternative query strategies, such as rewriting queries to reduce complexity or changing database indexing strategies to enhance performance.
[0125] In yet another implementation, for generating resource management
insights, the generating unit [312] may provide insights which may include a better
management of resources such as CPU, memory, and I/O. Herein, the insights may
involve redistributing workloads across database clusters, scheduling resource-
intensive queries during off-peak hours, adjusting resource allocation and others to prevent bottlenecks.
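A minimal, illustrative Python sketch of turning per-query resource estimates into the three kinds of insights mentioned above (anomaly detection, query optimization recommendation and resource management) is given below; the thresholds cpu_limit and io_limit_mb and the function generate_insights are assumptions and not values taken from the disclosure.

```python
def generate_insights(estimates: dict[str, dict[str, float]],
                      cpu_limit: float = 2.0,
                      io_limit_mb: float = 500.0) -> list[str]:
    """Turn per-query resource estimates into anomaly, optimization and
    resource-management insights using illustrative thresholds."""
    insights = []
    for qid, est in estimates.items():
        if est.get("cpu_seconds", 0.0) > cpu_limit:
            insights.append(f"Anomaly: query {qid} is estimated to exceed the CPU budget.")
            insights.append(f"Optimization: consider rewriting {qid} or adding an index.")
        if est.get("io_mb", 0.0) > io_limit_mb:
            insights.append(f"Resource management: schedule {qid} during off-peak hours.")
    if not insights:
        insights.append("No anomalies detected in the current window.")
    return insights


# Example:
# generate_insights({"q3": {"cpu_seconds": 3.1, "io_mb": 820.0}})
```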
[0126] At step [416], the method [400] comprises displaying, by a display unit
[314] via a user interface, the generated set of insights. The method [400] further explains that the display unit [314] is configured to visually present the generated set of insights through a user interface (UI), which may further allow users to monitor the health of the database clusters, identify performance issues, and make informed decisions based on the data. The UI herein may present the generated set of insights such as the anomalies detection, query optimization recommendations, and resource
management strategies, in a user-friendly manner.
[0127] The method [400] herein terminates at step [418].
[0128] Referring to FIG. 5, an exemplary system architecture diagram [500] for query management in network databases, in accordance with exemplary implementations of the present disclosure is shown. The system architecture [500] comprises a user interface [502], a notification unit [504], a management unit [506], a centralized data repository [508], and one or more database clusters [510].
[0129] Further, the user interface [502] mentioned herein may serve as a point of
interaction for the users (User A, User B, and User C) with the system [500].
Further, the user interface [502] may display a set of insights which may include
query optimization recommendations, anomaly detection, and resource
management information. Further, the user interface [502] may further allow users (User A, User B, and User C) to view details on eliminated resource-intensive queries or anomalies, allowing the users (User A, User B, and User C) to make further adjustments in their future queries.
[0130] The notification unit [504] is connected to the user interface [502] and is responsible for sending notifications to the users (User A, User B, and User C). The notification unit [504] is connected to the management unit [506] and may provide timely notifications to the users (User A, User B, and User C) which may be associated with any resource-intensive query or anomaly.
[0131] Further, the management unit [506] is responsible for managing the interactions between the user interface [502], the one or more Database Services, and the centralized data repository [508]. The role of the management unit [506] is to receive data from the one or more database services and to store the data in the centralized data repository [508]. The management unit [506] may further handle one or more user requests, such as a real-time query analysis or retrieving the generated set of insights, and may further send the generated set of insights to the user interface [502].
[0132] Further, the centralized data repository [508] is utilized to store collected data including the real-time query performance data and the resource utilization metrics data. The centralized data repository [508] is utilized for storing raw data and processed data associated with the collected data, produced by the one or more database services through operations such as validation of the collected data, storing of the collected data, aggregation of the stored data (i.e., the collected data), and correlation of the stored data.
[0133] Further, the one or more database clusters [510] refer to a plurality of databases that are being monitored. The plurality of databases is utilized for executing the one or more queries. Further, information associated with the real-time query performance data and the resource utilization metrics data is forwarded to the one or more database services that are associated with a specific database. The one or more database services as mentioned may process the information and generate the set of insights, and post analysing any anomalies, slow-running queries, and resource-intensive operations across the one or more database clusters [510], the management unit [506] may further facilitate the set of insights to the users (User A, User B, and User C) over the user interface [502].
[0134] Referring to FIG. 6, an exemplary method flow [600] for providing access to a user, in accordance with the exemplary embodiments of the present invention is shown. The method [600] is performed by the system [500]. The method [600] initiates at step [602].
[0135] At step [602], the method [600] explains that the system [500] may receive a request for collecting the real-time query performance data and the resource utilization metrics data from an external system, such as a worker service. Further, the system [500] may then collect the data related to the query performance (such as execution time, response time, and query plans) as well as metrics for resource usage (including CPU, I/O, and memory consumption). The collected data is streamed for real-time processing, for identifying any performance issues.
[0136] At step [604], the method [600] explains that the system [500] may validate the collected data, to ensure that the collected data is in the expected format and syntax. During the validation of the collected data, the system [500] may also ensure that all required fields in the collected data are present and the collected data is in the correct format.
[0137] At step [606], in an event the data is invalid, the system [500] may then mark the operation as a failure, and the method [600] terminates herein.
[0138] At step [608], in an event the data is valid, the query performance data and resource utilization metrics data are stored in a centralized data repository [508].
[0139] At step [610], the system [500] may perform an aggregation on the stored data (i.e., the query performance data and resource utilization metrics data) to
organize the stored data for analysis. Herein, the data aggregation may refer to combining and summarizing the incoming data from the plurality of database clusters. Herein, the system [500] ensures that the query performance data and resource utilization metrics data are in a format that may be utilized for detailed analysis, such as creating averages, identifying patterns, and grouping data for efficient processing.
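For illustration, a minimal Python sketch of this aggregation step, grouping metric records by database cluster and computing per-cluster averages, is shown below; the record layout and the function aggregate_by_cluster are hypothetical and not part of the disclosure.

```python
from collections import defaultdict


def aggregate_by_cluster(records: list[dict]) -> dict[str, dict[str, float]]:
    """Group incoming metric records by database cluster and compute per-cluster
    averages so that patterns can be identified in a later analysis step."""
    sums = defaultdict(lambda: defaultdict(float))
    counts = defaultdict(int)
    for rec in records:
        cluster = rec["cluster"]
        counts[cluster] += 1
        for key in ("execution_time_ms", "cpu_percent", "io_mb"):
            sums[cluster][key] += rec.get(key, 0.0)
    return {c: {k: v / counts[c] for k, v in m.items()} for c, m in sums.items()}


# Example:
# aggregate_by_cluster([
#     {"cluster": "cluster-read", "execution_time_ms": 120.0, "cpu_percent": 10.0, "io_mb": 5.0},
#     {"cluster": "cluster-read", "execution_time_ms": 200.0, "cpu_percent": 30.0, "io_mb": 15.0},
# ])  # -> {"cluster-read": {"execution_time_ms": 160.0, "cpu_percent": 20.0, "io_mb": 10.0}}
```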
[0140] At step [612], the system [500] thereafter may initiate the analytical process
by reviewing the query performance data and resource utilization metrics data to
identify trends and patterns. Herein, the resource utilization metrics data may
include execution time, response time, and query complexity. Further, a learning
model in the system [500] is utilized for detecting deviations from normal performance, and further identifying slow or resource-intensive queries, and determining trends in the overall query behaviour across different database clusters.
[0141] At step [614], the system [500] evaluates whether the analysed queries are
slow or consuming excessive resources.
[0142] At step [616], the system [500] may further separate the slow or resource-
intensive queries for advanced analysis, as they may indicate inefficiencies such as
poorly optimized queries or bottlenecks in resource allocation. Herein, separating
the slow or resource-intensive queries may further assist the system [500] in efficiently determining potential optimization recommendations.
[0143] At step [618], the system [500] may further proceed to analyse the resource utilization metrics data such as CPU usage, memory consumption, and I/O usage for the queries that are not separated, in order to understand patterns in resource consumption and to determine whether resource allocation is optimal across the database clusters.
[0144] At step [620], the system [500] may check for anomalies in the resource usage based on the collected resource utilization metrics data. Herein, anomalies may include unusual spikes in CPU or memory usage, unusually high I/O rates, or excessive time spent on query execution.
[0145] At step [622], in an event any anomalies are detected, the system [500] may further separate the query or resource for advanced analysis.
[0146] At step [624], in an event no anomalies are detected, the system [500] may correlate the query performance metrics data with the resource utilization metrics data, to identify relationships between query execution patterns and resource consumption. For instance, a slow query might correspond to high CPU usage, or inefficient I/O operations might lead to delayed query response times.
[0147] At step [626], the system [500], based on the correlation between the query performance metrics data and the resource utilization metrics data, generates a set of insights. Herein, the set of insights may include anomaly detection reports (such as identifying queries causing resource overuse), recommendations for query optimization (like suggesting indexes or rewriting queries), and strategies for resource management (optimizing resource allocation across database clusters). The system [500] may further provide one or more actionable insights that may improve an overall performance and efficiency.

[0148] At step [628], post generation of the insights, the system [500] may further compile the data into a structured report or response. The response herein may contain detailed information about the identified performance issues, optimization recommendations, and any resource management suggestions. The compiled data is further utilized for future analysis.
[0149] At step [630], the system [500] may display the generated insights on a user interface [502], which is accessible via a dashboard. Herein, the dashboard may provide real-time data on query performance, resource utilization, anomalies, and optimization suggestions. The user interface [502] may utilize one or more visual tools like charts, graphs, and alerts to present the compiled data in an easily understandable way for the users.
[0150] The present disclosure further discloses a non-transitory computer readable storage medium, storing instructions for query management in network databases, the instructions include executable code which, when executed by one or more units of a system [300], causes: a transceiver unit [302] to receive one or more queries scheduled for execution in a plurality of database clusters. Further, executable code which, when executed by one or more units of a system [300] causes a collecting unit [304] to collect at least one of performance metrics data and a resource utilization metrics data from the plurality of database clusters based on the received one or more queries. Further, executable code which, when executed by one or more units of a system [300] causes a validating unit [306] to validate each of the collected performance metrics data and the collected resource utilization metrics data. Further, executable code which, when executed by one or more units of a system [300] causes an identifying unit [308] to identify using a learning model, a set of resource intensive queries from the received one or more queries. Further, executable code which, when executed by one or more units of a system [300] causes an estimation unit [310] to estimate resource usage for each of the set of resource intensive queries based upon correlation between query performance data

and the resource utilization metric data. Further, executable code which, when executed by one or more units of a system [300] causes a generating unit [312] to generate a set of insights based on the correlation. Further, executable code which, when executed by one or more units of a system [300] causes a display unit [314] to display via a user interface, the generated set of insights.
[0151] As is evident from the above, the present disclosure provides a technically advanced solution for query management in network databases. The present solution may provide actionable insights into query performance issues and resource bottlenecks. Further, the present solution may generate data-driven insights from query and resource analysis to inform decision-making for database optimization, scaling, and resource allocation. The present disclosure is implemented in the 5G network but may further be implemented in a 6th generation network or any other future generations of network. Further, the present solution facilitates optimized query execution that may improve application responsiveness and user experience. The present solution further ensures balanced resource allocation across various database clusters, preventing resource contention and maintaining smooth operations.
[0152] While considerable emphasis has been placed herein on the disclosed implementations, it will be appreciated that many implementations can be made and that many changes can be made to the implementations without departing from the principles of the present disclosure. These and other changes in the implementations of the present disclosure will be apparent to those skilled in the art, whereby it is to be understood that the foregoing descriptive matter to be implemented is illustrative and non-limiting.

We Claim:
1. A method [400] for query management in network databases, the method
[400] comprising:
receiving, by a transceiver unit [302], one or more queries scheduled for execution in a plurality of database clusters;
collecting, by a collecting unit [304], at least one of performance metrics data and a resource utilization metrics data from the plurality of database clusters based on the received one or more queries;
validating, by a validating unit [306], each of the collected performance metrics data and the collected resource utilization metrics data;
identifying, by an identifying unit [308] using a learning model, a set of resource intensive queries from the received one or more queries;
estimating, by an estimation unit [310], resource usage for each of the set of resource intensive queries based upon correlation between query performance data and the resource utilization metric data;
generating, by a generating unit [312], a set of insights based on the correlation; and
displaying, by a display unit [314] via a user interface, the generated set of insights.
2. The method [400] as claimed in claim 1, wherein the performance metrics data is related to efficiency, throughput, and effectiveness associated with the one or more queries.
3. The method [400] as claimed in claim 1, wherein the set of insights comprises at least: anomalies detection, query optimization recommendation and resource management.
4. The method [400] as claimed in claim 1, wherein the one or more queries are scheduled for execution on the plurality of database clusters.

5. The method [400] as claimed in claim 1, wherein the estimation of resource usage comprises at least one of CPU usage, I/O usage, elapsed time, and execution time.
6. The method [400] as claimed in claim 1, wherein the learning model is configured to analyse query performance metrics data including execution times, query plans, and response times.
7. A system [300] for query management in network databases, the system [300] comprising:
a transceiver unit [302] configured to receive one or more queries scheduled for execution in a plurality of database clusters;
a collecting unit [304] configured to collect at least one of performance metrics data and a resource utilization metrics data from the plurality of database clusters based on the received one or more queries;
a validating unit [306] configured to validate each of the collected performance metrics data and the collected resource utilization metrics data;
an identifying unit [308] configured to identify using a learning model, a set of resource intensive queries from the received one or more queries;
an estimation unit [310] configured to estimate resource usage for each of the set of resource intensive queries based upon correlation between query performance data and the resource utilization metric data;
a generating unit [312] configured to generate a set of insights based on the correlation; and
a display unit [314] configured to display via a user interface, the generated set of insights.
8. The system [300] as claimed in claim 7, wherein the performance metrics data
is related to efficiency, throughput, and effectiveness associated with the one
or more queries.

9. The system [300] as claimed in claim 7, wherein the set of insights comprises at least: anomalies detection, query optimization recommendation and resource management.
10. The system [300] as claimed in claim 7, wherein the one or more queries are scheduled for execution on the plurality of database clusters.
11. The system [300] as claimed in claim 7, wherein the estimation of resource usage comprises at least one of CPU usage, I/O usage, elapsed time, and execution time.
12. The system [300] as claimed in claim 7, wherein the learning model is configured to analyse query performance metrics data including execution times, query plans, and response times.

Documents

Application Documents

# Name Date
1 202321061430-STATEMENT OF UNDERTAKING (FORM 3) [12-09-2023(online)].pdf 2023-09-12
2 202321061430-PROVISIONAL SPECIFICATION [12-09-2023(online)].pdf 2023-09-12
3 202321061430-POWER OF AUTHORITY [12-09-2023(online)].pdf 2023-09-12
4 202321061430-FORM 1 [12-09-2023(online)].pdf 2023-09-12
5 202321061430-FIGURE OF ABSTRACT [12-09-2023(online)].pdf 2023-09-12
6 202321061430-DRAWINGS [12-09-2023(online)].pdf 2023-09-12
7 202321061430-Proof of Right [03-01-2024(online)].pdf 2024-01-03
8 202321061430-FORM-5 [11-09-2024(online)].pdf 2024-09-11
9 202321061430-ENDORSEMENT BY INVENTORS [11-09-2024(online)].pdf 2024-09-11
10 202321061430-DRAWING [11-09-2024(online)].pdf 2024-09-11
11 202321061430-CORRESPONDENCE-OTHERS [11-09-2024(online)].pdf 2024-09-11
12 202321061430-COMPLETE SPECIFICATION [11-09-2024(online)].pdf 2024-09-11
13 202321061430-Request Letter-Correspondence [18-09-2024(online)].pdf 2024-09-18
14 202321061430-Power of Attorney [18-09-2024(online)].pdf 2024-09-18
15 202321061430-Form 1 (Submitted on date of filing) [18-09-2024(online)].pdf 2024-09-18
16 202321061430-Covering Letter [18-09-2024(online)].pdf 2024-09-18
17 202321061430-CERTIFIED COPIES TRANSMISSION TO IB [18-09-2024(online)].pdf 2024-09-18
18 Abstract 1.jpg 2024-10-07
19 202321061430-FORM 3 [07-10-2024(online)].pdf 2024-10-07
20 202321061430-ORIGINAL UR 6(1A) FORM 1 & 26-090125.pdf 2025-01-14