
Method And System For Performing Predictive Analysis Of Database Clusters

Abstract: The present disclosure relates to a method and a system for performing predictive analysis of a database cluster within a plurality of database clusters. The method comprises collecting, by a collecting unit [302], data associated with a set of historical performance metrics from a plurality of database clusters. The method comprises detecting, by a detecting unit [304] using a trained model, one or more anomalies in the set of historical performance metrics data. Furthermore, the method comprises performing, by a processing unit [306], a root cause analysis for the detected one or more anomalies. The method comprises generating, by a generating unit [308], a report based on the root cause analysis. The method comprises rendering, by a display unit [310], the generated report. [FIG. 4]


Patent Information

Application #:
Filing Date: 12 September 2023
Publication Number: 14/2025
Publication Type: INA
Invention Field: PHYSICS
Status:
Parent Application:

Applicants

Jio Platforms Limited
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India.

Inventors

1. Aayush Bhatnagar
Reliance Corporate Park, Thane-Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
2. Sumit Thakur
Reliance Corporate Park, Thane-Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
3. Tejesh Dakhinkar
Reliance Corporate Park, Thane-Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
4. Pritam Nath
Reliance Corporate Park, Thane-Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
5. Puspesh Prakash
Reliance Corporate Park, Thane-Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
6. Mohit Chaudhary
Reliance Corporate Park, Thane-Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
7. Kartik Nahak
Reliance Corporate Park, Thane-Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
8. Abhishek Sahu
Reliance Corporate Park, Thane-Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.

Specification

FORM 2
THE PATENTS ACT, 1970
(39 OF 1970)
&
THE PATENT RULES, 2003
COMPLETE SPECIFICATION
(See section 10 and rule 13)
“METHOD AND SYSTEM FOR PERFORMING PREDICTIVE
ANALYSIS OF DATABASE CLUSTERS”
We, Jio Platforms Limited, an Indian National, of Office - 101, Saffron, Nr.
Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India.
The following specification particularly describes the invention and the manner in
which it is to be performed.
METHOD AND SYSTEM FOR PERFORMING PREDICTIVE ANALYSIS
OF DATABASE CLUSTERS
TECHNICAL FIELD
[0001] Embodiments of the present disclosure generally relate to network
performance management systems. More particularly, embodiments of the present
disclosure relate to performing predictive analysis of a database cluster within a
plurality of database clusters.
BACKGROUND
[0002] The following description of the related art is intended to provide
background information pertaining to the field of the disclosure. This section may
include certain aspects of the art that may be related to various features of the
present disclosure. However, it should be appreciated that this section is used only
to enhance the understanding of the reader with respect to the present disclosure,
and not as admissions of the prior art.
[0003] Wireless communication technology has rapidly evolved over the past few
decades, with each generation bringing significant improvements and
advancements. The first generation of wireless communication technology was
based on analog technology and offered only voice services. However, with the
advent of the second generation (2G) technology, digital communication and data
services became possible, and text messaging was introduced. The third generation
(3G) technology marked the introduction of high-speed internet access, mobile
video calling, and location-based services. The fourth generation (4G) technology
revolutionized wireless communication with faster data speeds, better network
coverage, and improved security. Currently, the fifth generation (5G) technology is
being deployed, promising even faster data speeds, low latency, and the ability to
connect multiple devices simultaneously. With each generation, wireless
communication technology has become more advanced, sophisticated, and capable
of delivering more services to its users.
[0004] Tracing cluster health changes in a 5G network can be challenging
primarily due to the high complexity of the multitude of interconnected network elements.
Real-time monitoring of cluster health may lead to delays in data collection and
analysis. Also, 5G networks generate vast amounts of data relating to network
performance and health, thereby imposing significant overhead even on
sophisticated analytics tools.
[0005] Thus, there exists an imperative need in the art to develop methods and
systems for tracing cluster health changes and analyzing historical data.
SUMMARY
[0006] This section is provided to introduce certain aspects of the present disclosure
in a simplified form that are further described below in the detailed description.
This summary is not intended to identify the key features or the scope of the claimed
subject matter.
[0007] An aspect of the present disclosure may relate to a method for performing
predictive analysis of a database cluster within a plurality of database clusters. The
method comprises collecting, by a collecting unit, data associated with a set of
historical performance metrics from the plurality of database clusters. The method
further comprises detecting, by a detecting unit using a trained model, one or more
anomalies in the set of historical performance metrics data. The method further
comprises performing, by a processing unit, a root cause analysis for the detected
one or more anomalies. Furthermore, the method comprises generating, by a
generating unit, a report based on the root cause analysis. The method further
comprises rendering, by a display unit, the generated report.
[0008] In an exemplary aspect of the present disclosure, the set of historical
performance metrics data comprises at least one of central processing unit (CPU)
utilisation, memory utilisation, network traffic, storage usage, and application log.
[0009] In an exemplary aspect of the present disclosure, the method further
comprises preprocessing, by the processing unit, the collected set of historical
performance metrics data, wherein the preprocessing comprises performing at least
one of data cleaning, transformation, and filtering.
[0010] In an exemplary aspect of the present disclosure, the root cause analysis
comprises at least one of examining logs, reviewing configuration changes, and
investigating impact of external factors.
[0011] In an exemplary aspect of the present disclosure, the one or more anomalies
correspond to unusual patterns that indicate at least one of a potential issue or an
irregular behaviour within the plurality of database clusters, wherein the irregular
behaviour comprises at least one of: increased query latency, unbalanced load
distribution, and high resource utilization.
[0012] In an exemplary aspect of the present disclosure, the trained model is trained
based on the set of historical performance metrics data.
[0013] In an exemplary aspect of the present disclosure, the set of historical
performance metrics data is collected periodically at a preconfigured time period.
[0014] In an exemplary aspect of the present disclosure, the method further
comprises selecting, by a selecting unit, a set of key performance metrics from the
set of historical performance metrics, wherein the selection of the set of key
performance metrics is based on a database of the plurality of database clusters
being analysed for detecting the one or more anomalies.
[0015] In an exemplary aspect of the present disclosure, the set of key performance
metrics comprises at least one of response time, throughput, error rate, and resource
utilization.
[0016] In an exemplary aspect of the present disclosure, the generated report
comprises a set of actionable recommendations for optimizing performance of the
plurality of database clusters. The set of actionable recommendations comprises at
least a recommendation for scaling the size of the plurality of database clusters
based on the detected one or more anomalies.
[0017] In an exemplary aspect of the present disclosure, the scaling comprises at
least one of scale up and scale down.
[0018] Another aspect of the present disclosure may relate to a system for
performing predictive analysis of a database cluster within a plurality of database
clusters. The system comprises a collecting unit. The collecting unit is configured
to collect data associated with a set of historical performance metrics from the
plurality of database clusters. The system further comprises a detecting unit. The
detecting unit may be using a trained model to detect one or more anomalies in the
set of historical performance metrics data. The system further comprises a
processing unit. The processing unit is configured to perform a root cause analysis
for the detected one or more anomalies. The root cause analysis comprises at least
one of examining logs, reviewing configuration changes, and investigating impact
of external factors. The system further comprises a generating unit. The generating
unit is configured to generate a report based on the root cause analysis. Furthermore,
the system comprises a display unit. The display unit is configured to render the
generated report.
[0019] Yet another aspect of the present disclosure may relate to a non-transitory
computer readable storage medium, storing one or more instructions for performing
predictive analysis of a database cluster within a plurality of database clusters, the
instructions include executable code which, when executed by one or more units of
a system cause a collecting unit to collect data associated with a set of historical
performance metrics from the plurality of database clusters. The instructions when
executed by the system further cause a detecting unit, using a trained model, to
detect one or more anomalies in the set of historical performance metrics data. The
instructions when executed by the system further cause a processing unit to perform
a root cause analysis for the detected one or more anomalies. The root cause
analysis comprises at least one of examining logs, reviewing configuration changes,
and investigating impact of external factors. The instructions when executed by the
system further cause a generating unit to generate a report based on the root cause
analysis. The instructions when executed by the system further cause a display unit
to render the generated report.
OBJECTS OF THE INVENTION
[0020] Some of the objects of the present disclosure, which at least one
embodiment disclosed herein satisfies, are listed herein below.
[0021] It is an object of the present disclosure to provide a system and a method for
gathering historical data from different regions of a 5G network within the cluster,
such as monitoring tools, logs, performance metrics, and resource usage statistics.
[0022] It is another object of the present disclosure to provide numerous
functionalities such as metric selection, reporting and recommendations, continuous
learning, and health and performance monitoring of the database, to ensure the overall
stability, availability, and performance of the database.
[0023] It is yet another object of the present disclosure to perform anomaly
detection, root cause analysis and predictive analysis to remove the anomalies.
DESCRIPTION OF THE DRAWINGS
[0024] The accompanying drawings, which are incorporated herein, and constitute
a part of this disclosure, illustrate exemplary embodiments of the disclosed methods
and systems in which like reference numerals refer to the same parts throughout the
different drawings. Components in the drawings are not necessarily to scale,
emphasis instead being placed upon clearly illustrating the principles of the present
disclosure. Also, the embodiments shown in the figures are not to be construed as
limiting the disclosure, but the possible variants of the method and system
according to the disclosure are illustrated herein to highlight the advantages of the
disclosure. It will be appreciated by those skilled in the art that disclosure of such
drawings includes disclosure of electrical components or circuitry commonly used
to implement such components.
[0025] FIG. 1 illustrates an exemplary block diagram representation of 5th
generation core (5GC) network architecture, in accordance with exemplary
implementation of the present disclosure.
[0026] FIG. 2 illustrates an exemplary block diagram of a computing device upon
which the features of the present disclosure may be implemented in accordance with
exemplary implementation of the present disclosure.
[0027] FIG. 3 illustrates an exemplary block diagram of a system for performing
predictive analysis of a database cluster within a plurality of database clusters, in accordance with
exemplary implementations of the present disclosure.
[0028] FIG. 4 illustrates a method flow diagram for performing predictive analysis
of a database cluster within a plurality of database clusters, in accordance with exemplary
implementations of the present disclosure.
[0029] FIG. 5 illustrates an implementation of the system for performing predictive
analysis of a database cluster within a plurality of database clusters, in accordance with
exemplary implementations of the present disclosure.
[0030] FIG. 6 illustrates an implementation of the method of collecting historic
performance data for performing predictive analysis of a database cluster within a
plurality of database clusters, in accordance with exemplary implementations of the present
disclosure.
[0031] FIG. 7 illustrates a second implementation of a method for generation of a
report for predictive analysis of a database cluster within a plurality of database clusters, in
accordance with exemplary implementations of the present disclosure.
[0032] The foregoing shall be more apparent from the following more detailed
description of the disclosure.
DETAILED DESCRIPTION
[0033] In the following description, for the purposes of explanation, various
specific details are set forth in order to provide a thorough understanding of
embodiments of the present disclosure. It will be apparent, however, that
embodiments of the present disclosure may be practiced without these specific
details. Several features described hereafter may each be used independently of one
another or with any combination of other features. An individual feature may not
address any of the problems discussed above or might address only some of the
problems discussed above.
[0034] The ensuing description provides exemplary embodiments only, and is not
intended to limit the scope, applicability, or configuration of the disclosure. Rather,
the ensuing description of the exemplary embodiments will provide those skilled in
the art with an enabling description for implementing an exemplary embodiment.
It should be understood that various changes may be made in the function and
arrangement of elements without departing from the spirit and scope of the
disclosure as set forth.
[0035] Specific details are given in the following description to provide a thorough
understanding of the embodiments. However, it will be understood by one of
ordinary skill in the art that the embodiments may be practiced without these
specific details. For example, circuits, systems, processes, and other components
may be shown as components in block diagram form in order not to obscure the
embodiments in unnecessary detail.
[0036] Also, it is noted that individual embodiments may be described as a process
which is depicted as a flowchart, a flow diagram, a data flow diagram, a structure
diagram, or a block diagram. Although a flowchart may describe the operations as
a sequential process, many of the operations may be performed in parallel or
concurrently. In addition, the order of the operations may be re-arranged. A process
is terminated when its operations are completed but could have additional steps not
included in a figure.
[0037] The word “exemplary” and/or “demonstrative” is used herein to mean
serving as an example, instance, or illustration. For the avoidance of doubt, the
subject matter disclosed herein is not limited by such examples. In addition, any
aspect or design described herein as “exemplary” and/or “demonstrative” is not
necessarily to be construed as preferred or advantageous over other aspects or
designs, nor is it meant to preclude equivalent exemplary structures and techniques
known to those of ordinary skill in the art. Furthermore, to the extent that the terms
“includes,” “has,” “contains,” and other similar words are used in either the detailed
description or the claims, such terms are intended to be inclusive—in a manner
similar to the term “comprising” as an open transition word—without precluding
any additional or other elements.
[0038] As used herein, a “processing unit” or “processor” or “operating processor”
includes one or more processors, wherein processor refers to any logic circuitry for
processing instructions. A processor may be a general-purpose processor, a special
purpose processor, a conventional processor, a digital signal processor, a plurality
of microprocessors, one or more microprocessors in association with a Digital
Signal Processing (DSP) core, a controller, a microcontroller, Application Specific
Integrated Circuits, Field Programmable Gate Array circuits, any other type of
integrated circuits, etc. The processor may perform signal coding, data processing,
input/output processing, and/or any other functionality that enables the working of
the system according to the present disclosure. More specifically, the processor or
processing unit is a hardware processor.
[0039] As used herein, “a user equipment”, “a user device”, “a smart-user-device”,
“a smart-device”, “an electronic device”, “a mobile device”, “a handheld device”,
“a wireless communication device”, “a mobile communication device”, “a
communication device” may be any electrical, electronic and/or computing device
or equipment, capable of implementing the features of the present disclosure. The
user equipment/device may include, but is not limited to, a mobile phone, smart
phone, laptop, a general-purpose computer, desktop, personal digital assistant,
tablet computer, wearable device or any other computing device which is capable
of implementing the features of the present disclosure. Also, the user device may
contain at least one input means configured to receive an input from at least one of
a transceiver unit, a processing unit, a storage unit, a detection unit and any other
such unit(s) which are required to implement the features of the present disclosure.
[0040] As used herein, “storage unit” or “memory unit” refers to a machine or
computer-readable medium including any mechanism for storing information in a
form readable by a computer or similar machine. For example, a computer-readable
medium includes read-only memory (“ROM”), random access memory (“RAM”),
magnetic disk storage media, optical storage media, flash memory devices or other
types of machine-accessible storage media. The storage unit stores at least the data
that may be required by one or more units of the system to perform their respective
functions.
[0041] As used herein “interface” or “user interface” refers to a shared boundary
across which two or more separate components of a system exchange information
or data. The interface may also refer to a set of rules or protocols that define
communication or interaction of one or more modules or one or more units with
each other, which also includes the methods, functions, or procedures that may be
called.
[0042] All modules, units, components used herein, unless explicitly excluded
herein, may be software modules or hardware processors, the processors being a
general-purpose processor, a special purpose processor, a conventional processor,
a digital signal processor (DSP), a plurality of microprocessors, one or more
microprocessors in association with a DSP core, a controller, a microcontroller,
Application Specific Integrated Circuits (ASIC), Field Programmable Gate Array
circuits (FPGA), any other type of integrated circuits, etc.
[0043] As used herein, the transceiver unit includes at least one receiver and at least
one transmitter configured respectively for receiving and transmitting data, signals,
information or a combination thereof between units/components within the system
and/or connected with the system.
[0044] As discussed in the background section, the current known solutions have
several shortcomings. The present disclosure aims to overcome the problems
mentioned in the background and other existing problems in this field of technology
by providing a method and a system for performing predictive analysis of a database
cluster within a plurality of database clusters. The present disclosure provides
functionality to address Anomaly Detection, Root Cause Analysis, Performance
Measurement, Predictive Analysis and Root Cause Identification.
[0045] FIG. 1 illustrates an exemplary block diagram representation of 5th
generation core (5GC) network architecture [100], in accordance with exemplary
implementation of the present disclosure. As shown in FIG. 1, the 5GC network
architecture [100] includes a user equipment (UE) [102], a radio access network
(RAN) [104], an access and mobility management function (AMF) [106], a Session
Management Function (SMF) [108], a Service Communication Proxy (SCP) [110],
an Authentication Server Function (AUSF) [112], a Network Slice Specific
Authentication and Authorization Function (NSSAAF) [114], a Network Slice
Selection Function (NSSF) [116], a Network Exposure Function (NEF) [118], a
Network Repository Function (NRF) [120], a Policy Control Function (PCF) [122],
a Unified Data Management (UDM) [124], an application function (AF) [126], a
User Plane Function (UPF) [128], a data network (DN) [130], wherein all the
components are assumed to be connected to each other in a manner as obvious to
the person skilled in the art for implementing features of the present disclosure.
[0046] The Radio Access Network (RAN) [104] is the part of a mobile
telecommunications system that connects user equipment (UE) [102] to the core
network (CN) and provides access to different types of networks (e.g., 5G network).
It consists of radio base stations and the radio access technologies that enable
wireless communication.
[0047] The Access and Mobility Management Function (AMF) [106] is a 5G core
network function responsible for managing access and mobility aspects, such as UE
registration, connection, and reachability. It also handles mobility management
procedures like handovers and paging.
[0048] The Session Management Function (SMF) [108] is a 5G core network
function responsible for managing session-related aspects, such as establishing,
modifying, and releasing sessions. It coordinates with the User Plane Function
(UPF) for data forwarding and handles IP address allocation and QoS enforcement.
[0049] The Service Communication Proxy (SCP) [110] is a network function in the
5G core network that facilitates communication between other network functions
by providing a secure and efficient messaging service. It acts as a mediator for
service-based interfaces.
[0050] The Authentication Server Function (AUSF) [112] is a network function in
the 5G core responsible for authenticating UEs during registration and providing
security services. It generates and verifies authentication vectors and tokens.
[0051] The Network Slice Specific Authentication and Authorization Function
(NSSAAF) [114] is a network function that provides authentication and
authorization services specific to network slices. It ensures that UEs can access only
the slices for which they are authorized.
[0052] The Network Slice Selection Function (NSSF) [116] is a network function
responsible for selecting the appropriate network slice for a UE based on factors
such as subscription, requested services, and network policies.
[0053] The Network Exposure Function (NEF) [118] is a network function that
exposes capabilities and services of the 5G network to external applications,
enabling integration with third-party services and applications.
[0054] The Network Repository Function (NRF) [120] is a network function that
acts as a central repository for information about available network functions and
services. It facilitates the discovery and dynamic registration of network functions.
[0055] The Policy Control Function (PCF) [122] is a network function responsible
for policy control decisions, such as QoS, charging, and access control, based on
subscriber information and network policies.
[0056] The Unified Data Management (UDM) [124] is a network function that
centralizes the management of subscriber data, including authentication,
authorization, and subscription information.
[0057] The Application Function (AF) [126] is a network function that represents
external applications interfacing with the 5G core network to access network
capabilities and services.
[0058] The User Plane Function (UPF) [128] is a network function responsible for
handling user data traffic, including packet routing, forwarding, and QoS
enforcement.
[0059] The Data Network (DN) [130] refers to a network that provides data
services to user equipment (UE) in a telecommunications system. The data services
may include but are not limited to Internet services, private data network related
services.
[0060] The 5GC network architecture [100] also comprises a plurality of interfaces
for connecting the network functions with a network entity for performing the
network functions. The NSSF [116] is connected with the network entity via the
interface denoted as (Nnssf) interface in the figure. The NEF [118] is connected
with the network entity via the interface denoted as (Nnef) interface in the figure.
The NRF [120] is connected with the network entity via the interface denoted as
(Nnrf) interface in the figure. The PCF [122] is connected with the network entity
via the interface denoted as (Npcf) interface in the figure. The UDM [124] is
connected with the network entity via the interface denoted as (Nudm) interface in
the figure. The AF [126] is connected with the network entity via the interface
denoted as (Naf) interface in the figure. The NSSAAF [114] is connected with the
network entity via the interface denoted as (Nnssaaf) interface in the figure. The
AUSF [112] is connected with the network entity via the interface denoted as
(Nausf) interface in the figure. The AMF [106] is connected with the network entity
via the interface denoted as (Namf) interface in the figure. The SMF [108] is
connected with the network entity via the interface denoted as (Nsmf) interface in
the figure. The SMF [108] is connected with the UPF [128] via the interface denoted
as (N4) interface in the figure. The UPF [128] is connected with the RAN [104] via
the interface denoted as (N3) interface in the figure. The UPF [128] is connected
with the DN [130] via the interface denoted as (N6) interface in the figure. The
RAN [104] is connected with the AMF [106] via the interface denoted as (N2). The
AMF [106] is connected with the RAN [104] via the interface denoted as (N1). The
UPF [128] is connected with other UPF [128] via the interface denoted as (N9). The
interfaces such as Nnssf, Nnef, Nnrf, Npcf, Nudm, Naf, Nnssaaf, Nausf, Namf,
Nsmf, N9, N6, N4, N3, N2, and N1 can be referred to as a communication channel
between one or more functions or modules for enabling exchange of data or
information between such functions or modules, and network entities.
[0061] FIG. 2 illustrates an exemplary block diagram of a computing device [200]
upon which the features of the present disclosure may be implemented in
accordance with exemplary implementation of the present disclosure. In an
implementation, the computing device [200] may also implement a method for
performing predictive analysis of a database cluster within a plurality of database clusters,
utilising the system. In another implementation, the computing device [200] itself
implements the method of performing predictive analysis of a database cluster within a
plurality of database clusters, using one or more units configured within the computing device
[200], wherein said one or more units are capable of implementing the features as
disclosed in the present disclosure.
[0062] The computing device [200] may include a bus [202] or other
communication mechanism for communicating information, and a hardware
processor [204] coupled with bus [202] for processing information. The hardware
processor [204] may be, for example, a general-purpose microprocessor. The
computing device [200] may also include a main memory [206], such as a random
access memory (RAM), or other dynamic storage device, coupled to the bus [202]
for storing information and instructions to be executed by the processor [204]. The
main memory [206] also may be used for storing temporary variables or other
intermediate information during execution of the instructions to be executed by the
processor [204]. Such instructions, when stored in non-transitory storage media
accessible to the processor [204], render the computing device [200] into a special-purpose
machine that is customized to perform the operations specified in the
instructions. The computing device [200] further includes a read only memory
(ROM) [208] or other static storage device coupled to the bus [202] for storing static
information and instructions for the processor [204].
[0063] A storage device [210], such as a magnetic disk, optical disk, or solid-state
drive is provided and coupled to the bus [202] for storing information and
instructions. The computing device [200] may be coupled via the bus [202] to a
display [212], such as a cathode ray tube (CRT), Liquid crystal Display (LCD),
Light Emitting Diode (LED) display, Organic LED (OLED) display, etc. for
displaying information to a computer user. An input device [214], including
alphanumeric and other keys, touch screen input means, etc. may be coupled to the
bus [202] for communicating information and command selections to the processor
[204]. Another type of user input device may be a cursor controller [216], such as
a mouse, a trackball, or cursor direction keys, for communicating direction
information and command selections to the processor [204], and for controlling
cursor movement on the display [212]. This input device typically has two degrees
of freedom in two axes, a first axis (e.g., x) and a second axis (e.g., y), that allow
the device to specify positions in a plane.
[0064] The computing device [200] may implement the techniques described
herein using customized hard-wired logic, one or more ASICs or FPGAs, firmware
and/or program logic which in combination with the computing device [200] causes
or programs the computing device [200] to be a special-purpose machine.
According to one implementation, the techniques herein are performed by the
computing device [200] in response to the processor [204] executing one or more
sequences of one or more instructions contained in the main memory [206]. Such
instructions may be read into the main memory [206] from another storage medium,
such as the storage device [210]. Execution of the sequences of instructions
contained in the main memory [206] causes the processor [204] to perform the
process steps described herein. In alternative implementations of the present
disclosure, hard-wired circuitry may be used in place of or in combination with
software instructions.
[0065] The computing device [200] also may include a communication interface
[218] coupled to the bus [202]. The communication interface [218] provides a two-way
data communication coupling to a network link [220] that is connected to a
local network [222]. For example, the communication interface [218] may be an
integrated services digital network (ISDN) card, cable modem, satellite modem, or
a modem to provide a data communication connection to a corresponding type of
telephone line. As another example, the communication interface [218] may be a
local area network (LAN) card to provide a data communication connection to a
compatible LAN. Wireless links may also be implemented. In any such
implementation, the communication interface [218] sends and receives electrical,
electromagnetic or optical signals that carry digital data streams representing
various types of information.
[0066] The computing device [200] can send messages and receive data, including
program code, through the network(s), the network link [220] and the
communication interface [218]. In the Internet example, a server [230] might
transmit a requested code for an application program through the Internet [228], the
ISP [226], the local network [222], a host [224] and the communication interface
[218]. The received code may be executed by the processor [204] as it is received,
and/or stored in the storage device [210], or other non-volatile storage for later
execution.
[0067] The present disclosure is implemented by a system [300] (as shown in FIG.
3). In an implementation, the system [300] may include the computing device [200]
(as shown in FIG. 2). It is further noted that the computing device [200] is able to
perform the steps of a method [400] (as shown in FIG. 4).
[0068] Referring to FIG. 3, an exemplary block diagram of a system [300] for
performing predictive analysis of a database cluster within a plurality of database clusters is
shown, in accordance with the exemplary implementations of the present
disclosure. The system [300] comprises at least one collecting unit [302], at least
one detecting unit [304], at least one processing unit [306], at least one generating
unit [308], at least one display unit [310] and at least one selecting unit [312].
Also, all of the components/ units of the system [300] are assumed to be connected
to each other unless otherwise indicated below. As shown in the figures all units
shown within the system should also be assumed to be connected to each other.
Also, in FIG. 3 only a few units are shown; however, the system [300] may
comprise multiple such units or the system [300] may comprise any such numbers
of said units, as required to implement the features of the present disclosure.
Further, in an implementation, the system [300] may be present in a user device to
implement the features of the present disclosure. The system [300] may be a part of
the user device or may be independent of, but in communication with, the user
device (which may also be referred to herein as a UE). In another implementation, the system
[300] may reside in a server or a network entity. In yet another implementation, the
system [300] may reside partly in the server/ network entity and partly in the user
device.
[0069] The system [300] is configured for performing predictive analysis of
database cluster within a plurality of database clusters, with the help of the
interconnection between the components/units of the system [300]. In an
implementation of the present disclosure, the term ‘database clusters’ as used herein
refers to groups of databases that are managed and monitored collectively.
[0070] Examples of such database clusters include but are not limited to NoSQL
databases such as a “MongoDB” cluster, “Redis”, “Kafka”, “Cassandra”, or “Oracle”. It
may be noted that such database clusters are only exemplary, and are in no manner
construed to limit the scope of the present subject matter. As would be explained in the
following description, the health status and topology of these
clusters are tracked to confirm their proper functioning.
[0071] The predictive analysis of a database cluster within the plurality of database clusters refers to
a process of using historical performance metrics data to predict one or more
anomalies of the database cluster. For instance, the historical performance metrics
data shows that during peak hours, a database of the system starts to slow down and
sends corrupt data to a user. Based on the historical performance metrics data, the
system [300] during peak hours, may activate a second database to avoid the
anomaly. The one or more anomalies may include an unexpected surge in a query
or abnormal patterns in data. In an implementation, the system [300] may perform
the functionalities in the 5th generation core network. In another implementation,
the system [300] may perform the functionalities in a 4th generation network, 6th
generation network, or any other future generations of network.
[0072] The collecting unit [302] is configured to collect data associated with a set
of historical performance metrics. The data may be collected from the plurality of
database clusters. In one example, the set of historical performance metrics data
comprises at least one of central processing unit (CPU) utilisation, memory
utilisation, network traffic, storage usage, and application log. The CPU
utilization refers to the percentage of the CPU’s capacity being used by the database.
A high CPU utilization may indicate that the database is under heavy load, while a
low CPU utilization indicates that the CPU is idle or underutilized. The memory
utilization refers to amount of RAM (Random Access Memory) being used by the
database. The network traffic refers to amount of data being transmitted and
received over a network at the database. The storage usage refers to an actual
amount of storage of the database utilized. The application log refers to a record of
events including information about errors, warnings, and the like.
[0073] In one example, the set of historical performance metrics data is collected
periodically at a preconfigured time period. The preconfigured time period may be
defined by a user. The user may be one of a system operator, a network operator,
and the like. To collect the set of historical performance metrics data, the collecting
unit [302] may query the plurality of database clusters.
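
By way of a non-limiting illustration, the periodic collection performed by the collecting unit [302] may be sketched in Python as follows. The MetricSample fields mirror the metrics listed above, while the per-cluster query functions, the store callback and the default period are hypothetical assumptions rather than part of the present disclosure.

    import time
    from dataclasses import dataclass, field
    from typing import Callable, Dict, List

    @dataclass
    class MetricSample:
        """One historical performance metrics record for a database cluster (illustrative)."""
        cluster: str
        timestamp: float
        cpu_utilisation: float      # percentage of the CPU's capacity in use
        memory_utilisation: float   # amount of RAM in use (percentage)
        network_traffic: float      # bytes transmitted and received per second
        storage_usage: float        # storage actually utilised (percentage)
        application_log: List[str] = field(default_factory=list)

    def collect_once(clusters: Dict[str, Callable[[], dict]]) -> List[MetricSample]:
        """Query every cluster in the plurality once and build metric samples."""
        samples = []
        for name, query in clusters.items():
            raw = query()  # hypothetical query returning the metric fields as a dict
            samples.append(MetricSample(cluster=name, timestamp=time.time(), **raw))
        return samples

    def collect_periodically(clusters, store, period_seconds=300, iterations=3):
        """Collect at the preconfigured time period and hand each batch to a store callback."""
        for _ in range(iterations):
            store(collect_once(clusters))
            time.sleep(period_seconds)
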
[0074] The processing unit [306] is further configured to preprocess the collected
set of historical performance metrics data. In one example, the preprocessing of the
collected set of historical performance metrics data includes but may not be limited
to performing at least one of data cleaning, transformation, and filtering.
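
A minimal sketch of such preprocessing, operating on the MetricSample records from the previous sketch, is given below; the validity bounds, the unit conversion and the 30-day retention window are illustrative assumptions only.

    import time

    def preprocess(samples):
        """Clean, transform and filter raw metric samples (illustrative rules only)."""
        cleaned = []
        for s in samples:
            # Data cleaning: drop records with missing or impossible utilisation values.
            if s.cpu_utilisation is None or not 0.0 <= s.cpu_utilisation <= 100.0:
                continue
            # Transformation: express network traffic in megabytes per second.
            s.network_traffic = s.network_traffic / 1_000_000
            cleaned.append(s)
        # Filtering: keep only samples from an assumed 30-day analysis window.
        cutoff = time.time() - 30 * 24 * 3600
        return [s for s in cleaned if s.timestamp >= cutoff]
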
[0075] The selecting unit [312] is configured to select a set of key performance
metrics from the set of historical performance metrics. In one example, the key
performance metrics may be selected based on selection of a database from the
plurality of database clusters that may be analysed. The key performance metrics
refer to measurements used to evaluate the success of the plurality of database clusters. The set
of key performance metrics comprises at least one of response time, throughput,
error rate, and resource utilization.
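
One possible way for the selecting unit [312] to map the database being analysed to its set of key performance metrics is sketched below; the mapping itself is a hypothetical example and not prescribed by the present disclosure.

    # Hypothetical mapping from database type to the key performance metrics analysed for it.
    KEY_METRICS_BY_DATABASE = {
        "mongodb":   ["response_time", "throughput", "error_rate", "resource_utilization"],
        "redis":     ["response_time", "throughput"],
        "kafka":     ["throughput", "error_rate"],
        "cassandra": ["response_time", "error_rate", "resource_utilization"],
    }

    def select_key_metrics(database: str):
        """Return the set of key performance metrics for the database being analysed."""
        default = ["response_time", "throughput", "error_rate", "resource_utilization"]
        return KEY_METRICS_BY_DATABASE.get(database.lower(), default)
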
[0076] The detecting unit [304] is configured to detect one or more anomalies in
the set of historical performance metrics data. In one example, the one or more
anomalies may correspond to an unusual pattern. The unusual pattern refers to a
deviation from normal behaviour within the plurality of database clusters. The
unusual pattern may indicate at least one of a potential issue or an irregular
behaviour in the plurality of database clusters. An example of the unusual pattern
may be an unexpected increase or decrease in the CPU utilization or the network
traffic. The potential issue refers to a problem that may affect the performance of
the database.
[0077] The irregular behaviour includes but is not limited to at least one of increased
query latency, unbalanced load distribution, and high resource utilization. The increased
query latency refers to an increase in delay in the time taken by the database to
process a received query and the delay in time to return results for the query. The
unbalanced load distribution refers to an uneven load distribution of workload
across nodes in the database cluster. The unbalanced load distribution may affect
performance of the database. The high resource utilization refers to uneven resource
utilization across the system where some resources are being highly consumed and
other resources are not utilized.
[0078] In one example, the detecting unit [304] may use a trained model. The
trained model refers to a model that is exposed to the historical performance metrics
data. Based on the historical performance metrics data, the trained model stores
patterns, relationships, and other insights. The trained model is trained based on the
set of historical performance metrics data.
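
As one possible realisation of the detecting unit [304], the sketch below trains an Isolation Forest from scikit-learn on historical metric vectors and flags unusual patterns; the choice of model, the contamination rate and the synthetic data are illustrative assumptions and not the specific trained model of the present disclosure.

    import numpy as np
    from sklearn.ensemble import IsolationForest  # assumes scikit-learn is installed

    def train_model(history: np.ndarray) -> IsolationForest:
        """Train on historical performance metrics data (rows = samples, columns = metrics)."""
        model = IsolationForest(contamination=0.01, random_state=0)
        model.fit(history)
        return model

    def detect_anomalies(model: IsolationForest, recent: np.ndarray) -> np.ndarray:
        """Return the indices of samples the model considers anomalous."""
        labels = model.predict(recent)  # -1 marks an anomaly, 1 marks normal behaviour
        return np.where(labels == -1)[0]

    # Example with synthetic data: columns are CPU %, memory %, network MB/s, storage %.
    history = np.random.default_rng(0).uniform(10, 60, size=(500, 4))
    recent = np.vstack([history[:20], [[99.0, 97.0, 400.0, 95.0]]])  # one injected spike
    print(detect_anomalies(train_model(history), recent))
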
[0079] The processing unit [306] is configured to perform a root cause analysis.
The root cause analysis refers to a process to detect underlying issues in the one or
more anomalies. In one example, the root cause analysis includes but may not be
limited to at least one of examining logs, reviewing configuration changes, and
investigating impact of external factors. The examining of logs refers to reviewing
log files and other recorded data to identify patterns and errors that indicate the root
cause of the one or more anomalies. The logs refer to a sequence of events in a process to
identify the sequence where the problem may be occurring. The reviewing of the
configuration changes refers to checking any recent modifications made to the
configuration of the database as the changes in configuration may sometimes lead
to the one or more anomalies.
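
A simplified sketch of how the processing unit [306] might combine log examination, configuration-change review and external-factor checks is given below; the time windows, keywords and input formats are assumptions made only for illustration.

    def root_cause_analysis(logs, config_changes, external_events, anomaly_time):
        """Collect plausible root causes for an anomaly detected at anomaly_time (illustrative)."""
        causes = []
        # Examining logs: error or warning entries recorded close to the anomaly.
        for ts, line in logs:
            if abs(ts - anomaly_time) < 600 and ("ERROR" in line or "WARN" in line):
                causes.append(f"log entry near anomaly: {line}")
        # Reviewing configuration changes: any recent modification is a candidate cause.
        for ts, change in config_changes:
            if 0 <= anomaly_time - ts < 3600:
                causes.append(f"recent configuration change: {change}")
        # Investigating external factors, e.g. maintenance windows or traffic events.
        for ts, event in external_events:
            if abs(ts - anomaly_time) < 1800:
                causes.append(f"external factor: {event}")
        return causes or ["no obvious root cause found; manual review suggested"]
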
[0080] The generating unit [308] is configured to generate a report based on the
root cause analysis. In one example, the generated report includes but may not be
limited to a set of actionable recommendations for optimizing performance of the
plurality of database clusters. The processing unit [306] may send the root cause
analysis to the generating unit [308] to add the set of actionable recommendations.
The set of actionable recommendations may include sending alerts via the generated
report. The alerts may be sent via a configured notification channel based on
irregular behaviours in the plurality of database metrics. In an example, the
generating unit [308] may compile results of the root cause analysis such as
compiling a summary of the detected anomalies, identified root causes, and any
other relevant data to generate the report.
[0081] The set of actionable recommendations includes but may not be limited to
a recommendation for scaling a size of the plurality of database clusters based on
the detected one or more anomalies. The scaling comprises at least one of scale up
and scale down. The scale up refers to adding more resources to existing
resources. For instance, increasing the amount of memory for more memory usage,
if the root cause analysis shows the causes of the one or more anomalies to be due
to high memory utilization. The scale down refers to removing resources from
existing resources.
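
The generating unit [308] could, for example, turn the detected anomalies and root-cause findings into a report carrying a scale-up or scale-down recommendation as sketched below; the utilisation thresholds are illustrative assumptions.

    def generate_report(cluster, anomalies, causes, avg_cpu, avg_memory):
        """Compile detected anomalies, identified root causes and a scaling recommendation."""
        if avg_cpu > 80.0 or avg_memory > 80.0:
            recommendation = "scale up: add nodes or memory to the cluster"
        elif avg_cpu < 20.0 and avg_memory < 20.0 and len(anomalies) == 0:
            recommendation = "scale down: release under-utilised resources"
        else:
            recommendation = "no scaling action required"
        return {
            "cluster": cluster,
            "detected_anomalies": list(anomalies),
            "root_causes": causes,
            "recommendation": recommendation,  # actionable recommendation rendered by the display unit
        }
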
[0082] The display unit [310] is configured to render the generated report. The
display unit [310] may be a user interface (UI). The UI may be a graphical user
interface (GUI). The GUI refers to an interface to interact with the system [300] by
visual or graphical representation of icons, menus, etc. The GUI may be rendered on a
smartphone, laptop, computer, etc.
[0083] Referring to FIG. 4, an exemplary method flow diagram [400] for
performing predictive analysis of a database cluster within a plurality of database clusters, in
accordance with exemplary implementations of the present disclosure is shown. In
an implementation the method [400] is performed by the system [300]. Further, in
an implementation, the system [300] may be present in a server device to implement
the features of the present disclosure. Also, as shown in FIG. 4, the method [400]
starts at step [402].
[0084] At step [404], the method [400] comprises collecting, by a collecting unit
[302], data associated with a set of historical performance metrics. In one example,
the data may be collected from a plurality of database clusters. The set of historical
performance metrics data comprises at least one of central processing unit (CPU)
utilisation, memory utilisation, network traffic, storage usage, and application log.
[0085] The set of historical performance metrics data is collected periodically at a
preconfigured time period. The preconfigured time period may be defined by a user.
The user may be one of a system operator, a network operator, and the like. To
collect the set of historical performance metrics data, the collecting unit [302] may
query the plurality of database clusters.
[0086] The method [400] further comprises preprocessing, by the
processing unit [306], the collected set of historical performance metrics data. In
one example, the preprocessing includes but may not be limited to performing at
least one of data cleaning, transformation, and filtering.
[0087] The method [400] further comprises selecting, by a selecting unit [312], a
set of key performance metrics from the set of historical performance metrics. In
one example, the key performance metrics may be selected based on selection of a
database from the plurality of database clusters that may be analysed. The set of
key performance metrics comprises at least one of response time, throughput, error
rate, and resource utilization.
[0088] Next at step [406], the method comprises detecting, by a detecting unit
[304], one or more anomalies in the set of historical performance metrics data. In
one example, the one or more anomalies may correspond to an unusual pattern. The
unusual pattern refers to a deviation from normal behaviour within the plurality of
database clusters. The unusual pattern may indicate at least one of a potential issue or an
irregular behaviour in the plurality of database clusters. An example of the unusual
pattern may be an unexpected increase or decrease in the CPU utilization or the
network traffic.
[0089] The irregular behaviour includes but is not limited to at least one of increased
query latency, unbalanced load distribution, and high resource utilization. The increased
query latency refers to an increase in delay in the time taken by the database to
process a received query and the delay in time to return results for the query. The
unbalanced load distribution refers to an uneven load distribution of workload
across nodes in the database cluster. The unbalanced load distribution may affect
performance of the database. The high resource utilization refers to uneven resource
utilization across the system where some resources are being highly consumed and
other resources are not utilized.
[0090] In one example, the trained model may be used to detect the one or more
anomalies. Based on the historical performance metrics data, the trained model
stores patterns, relationships, and other insights. The trained model is trained based
on the set of historical performance metrics data.
[0091] Next at step [408], the method comprises performing, by a processing unit
[306], a root cause analysis. The root cause analysis refers to a process to detect
underlying issues in the one or more anomalies. In one example, the root cause
analysis includes but may not be limited to at least one of examining logs, reviewing
configuration changes, and investigating impact of external factors.
[0092] Next at step [410], the method comprises generating, by a generating unit
[308], a report based on the root cause analysis. The generated report includes but
may not be limited to a set of actionable recommendations for optimizing
performance of the plurality of database clusters. The set of actionable
recommendations comprises at least a recommendation for scaling the size of the
plurality of database clusters based on the detected one or more anomalies. The
scaling comprises at least one of scale up and scale down. The scale up refers to adding
more resources to existing resources. For instance, increasing the amount of memory
for more memory usage, if the root cause analysis shows the causes of the one or more
anomalies to be due to high memory utilization. The scale down refers to removing
resources from existing resources.
[0093] Next at step [412], the method includes rendering, by a display unit [310],
the generated report. The display unit [310] may be a user interface (UI). The UI
may be a graphical user interface (GUI). The GUI refers to an interface to interact
with the system [300] by visual or graphical representation of icons, menus, etc. The
GUI may be rendered on a smartphone, laptop, computer, etc.
[0094] The method terminates at step [414].
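
Putting steps [404] to [412] together, and reusing the hypothetical helpers introduced in the earlier sketches, a minimal end-to-end pipeline could look as follows.

    def run_predictive_analysis(clusters, logs, config_changes, external_events):
        """Illustrative orchestration of steps [404]-[412] of the method [400]."""
        samples = preprocess(collect_once(clusters))             # step [404] plus preprocessing
        matrix = np.array([[s.cpu_utilisation, s.memory_utilisation,
                            s.network_traffic, s.storage_usage] for s in samples])
        model = train_model(matrix)                              # trained on historical data
        anomalies = detect_anomalies(model, matrix)              # step [406]
        causes = []
        for idx in anomalies:                                    # step [408]
            causes += root_cause_analysis(logs, config_changes,
                                          external_events, samples[idx].timestamp)
        report = generate_report("all-clusters", anomalies, causes,
                                 matrix[:, 0].mean(), matrix[:, 1].mean())   # step [410]
        print(report)                                            # step [412]: render the report
        return report
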
[0095] Referring to FIG. 5, an implementation of the system [500] for performing
predictive analysis of a database cluster within a plurality of database clusters, in
accordance with exemplary implementations of the present disclosure is shown.
The implementation system [500] comprises a user interface (UI) [502], a manager
service [504], a centralized data repository [506], a database A service [508], a
database B service [510], a database C service [512], a database A cluster [514], a
database B cluster [516] and a database C cluster [518].
[0096] One or more users (user A, user B, user C) may use the user interface (UI)
[502] to send the request for generating the report to obtain the set of actionable
recommendations for optimizing performance of a database cluster within the
one or more database clusters. In an example, the one or more database clusters
refer to the database A cluster [514], the database B cluster [516] and the database
C cluster [518].
[0097] The one or more users may configure the system to collect the set of
historical performance metrics in the centralized data repository [506] at the
preconfigured time period. The one or more users may send the set of historical
performance metrics through the UI [502]. The one or more database clusters may
be monitored by the one or more database services to collect the historical
performance metrics.
[0098] The set of historical performance metrics may be normalized into a unified
format by the system [300]. Further, the normalized data may be stored at the
centralized data repository [506] by the system [300].
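
A minimal sketch of normalising heterogeneous per-service metric records into a unified format before they are stored in the centralized data repository [506] is shown below; the source field names are hypothetical stand-ins for whatever the individual database services actually report.

    # Hypothetical aliases used by different database services for the same unified fields.
    FIELD_ALIASES = {
        "cpu_utilisation": ["cpu", "cpu_percent", "cpuUsage"],
        "memory_utilisation": ["mem", "memory_percent", "memUsage"],
        "network_traffic": ["net_bytes", "network_io"],
        "storage_usage": ["disk", "storage_percent"],
    }

    def normalise(raw_record: dict, cluster: str, timestamp: float) -> dict:
        """Map a service-specific metrics record onto the unified repository schema."""
        unified = {"cluster": cluster, "timestamp": timestamp}
        for target, aliases in FIELD_ALIASES.items():
            for name in [target] + aliases:
                if name in raw_record:
                    unified[target] = float(raw_record[name])
                    break
            else:
                unified[target] = None  # metric missing from this service; left for later cleaning
        return unified
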
[0099] The one or more users may send the request for generating the report to
obtain the set of actionable recommendations for optimizing performance of the
plurality of database clusters via the UI [502].
[0100] Based on the request, the system [300] may retrieve the normalized
historical performance metrics. The retrieved data may be analysed to detect one or
more anomalies. The one or more anomalies may be detected based on the root
cause analysis. The one or more anomalies correspond to an unusual pattern that
indicates at least one of a potential issue or an irregular behaviour within the plurality
of database clusters. The irregular behaviour comprises at least one of increased query
latency, unbalanced load distribution, and high resource utilization.
[0101] The root cause analysis comprises at least one of examining logs, reviewing
configuration changes, and investigating impact of external factors. Based on the
root cause analysis, the report may be generated by the system [300]. The report
may include the recommendation for action to be taken on the one or more anomalies.
The recommendation may be one of a scale up or scale down of the plurality of
database clusters.
[0102] Referring to FIG. 6, an implementation of the method [600] of collecting
historic performance data for performing predictive analysis of a database cluster
within a plurality of database clusters, in accordance with exemplary implementations of the
present disclosure is shown. In an implementation of the present disclosure, the
implementation method [600] may be performed by the system [300] as shown in
FIG. 3.
[0103] The implementation method [600] starts at step [602]. The user may
configure the system [300] as shown in FIG. 3 to monitor the database.
[0104] At step [604], the system [300] may try to create a connection with the
database cluster.
[0105] In one example, if the connection is not established, the implementation
method [600] may proceed to step [608], where the system [300] may check
configuration and retry to create the connection again.
[0106] At step [610], the system [300] may further check if the retried connection
is a success or a failure. If the connection is not established after retrying, the
implementation method [600] may further proceed to step [612], where the system
[300] may send an error to the system operator or the network operator.
[0107] If the connection is established at step [604], the implementation method
[600] may proceed to step [606]. At step [606], the implementation method [600]
includes collecting data associated with a set of historical performance metrics from
a plurality of database clusters. The set of historical performance metrics data
comprises at least one of the CPU utilisation, memory utilisation, network traffic,
storage usage, and application log. The set of historical performance metrics data is
collected periodically at a preconfigured time period.
[0108] Further, at step [614], the implementation method [600] includes checking
if the collection of data is a success or a failure. In an event the collection of data is
a failure, the system [300] may display an error to the system operator or the
network operator at step [612].
[0109] In an event the collection of data is a success, the system [300] may
normalize the collected data into a unified format at step [616]. The normalization
may be performed using an algorithm.
[0110] Further at step [618], the normalized data may be stored in the centralized
data repository [506].
[0111] Further at step [620], based on the request from the network operator or the
system operator to perform predictive analysis of the data in the database, the
normalized data may be sent from the centralized data repository [506].
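
The connect, retry, collect, normalise and store flow of steps [602] to [618] could be realised roughly as sketched below; connect_to_cluster, collect_metrics, normalise_fn and report_error stand in for whatever client library and alerting channel the operator actually uses and are assumptions of this sketch.

    def collect_with_retry(cluster, connect_to_cluster, collect_metrics,
                           normalise_fn, repository, report_error, retries=1):
        """Illustrative control flow for steps [604]-[618] of the implementation method [600]."""
        connection = None
        for attempt in range(retries + 1):
            try:
                connection = connect_to_cluster(cluster)   # step [604], retried at step [608]
                break
            except ConnectionError as exc:
                if attempt == retries:
                    report_error(f"cannot connect to {cluster}: {exc}")   # step [612]
                    return None
        try:
            raw = collect_metrics(connection)               # step [606]
        except Exception as exc:
            report_error(f"collection failed for {cluster}: {exc}")       # failure path of step [614]
            return None
        record = normalise_fn(raw)                          # step [616]: unified format
        repository.append(record)                           # step [618]: centralized data repository
        return record
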
[0112] Referring to FIG. 7, a second implementation of a method [700] for
generation of a report for predictive analysis of a database cluster within a plurality of
database clusters, in accordance with exemplary implementations of the present disclosure
is shown.
[0113] At step [702], the user may send the request to the system [300] for
generating the report to obtain the set of actionable recommendations for optimizing
performance of the plurality of database clusters. The user may send the request from
the UI [502].
[0114] Further at step [704], based on the request, the system [300] may query the
centralized data repository [506] to check if the centralized data repository [506] is
available.
[0115] If the centralized data repository [506] is not available, a service unavailable
response may be sent by the system [300] at step [708]. Further, the system [300]
may log an error for the centralized data repository [506] at step [710].
[0116] If the centralized data repository [506] is available, the implementation
method [700] may proceed to step [706]. The data may be retrieved from the
database service based on the request.
[0117] Further at step [712], the system [300] may check if the data is retrieved
successfully. In an event the data is not retrieved successfully, the implementation
method [700] may proceed to step [710] where the system [300] may log an error.
[0118] In an event the data is retrieved successfully, the retrieved data may be
analysed to detect one or more anomalies. The one or more anomalies may be
detected based on the root cause analysis. The one or more anomalies correspond
to an unusual pattern that indicates at least one of a potential issue or an irregular
behaviour within the plurality of database clusters. The irregular behaviour
comprises at least one of increased query latency, unbalanced load distribution, and high
resource utilization.
[0119] The root cause analysis comprises at least one of examining logs, reviewing
configuration changes, and investigating impact of external factors.
[0120] Based on the root cause analysis, the report may be generated by the system
[300] and sent to the user via the UI [502] at step [714]. The report may include a
recommendation for action to be taken on the one or more anomalies. The
recommendation may be one of a scale up or a scale down of the plurality of database
clusters.
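One possible, non-limiting way of turning the analysis into the report and its scale-up or scale-down recommendation (step [714]) is sketched below; the utilisation thresholds are assumed values chosen purely for illustration.

```python
# Illustrative sketch only: report generation with a scaling recommendation
# (step [714]). The thresholds mapping anomalies to actions are assumptions.
def generate_report(cluster_id: str, anomalies: list, causes: list,
                    avg_cpu_utilisation: float) -> dict:
    """Build the report sent to the user via the UI, including a recommendation."""
    if not anomalies:
        recommendation = "no action required"
    elif avg_cpu_utilisation > 0.80:      # sustained high resource utilisation
        recommendation = "scale up the database cluster"
    elif avg_cpu_utilisation < 0.20:      # resources largely idle
        recommendation = "scale down the database cluster"
    else:
        recommendation = "rebalance load across the cluster"
    return {
        "cluster_id": cluster_id,
        "anomalies": anomalies,
        "root_causes": causes,
        "recommendation": recommendation,
    }
```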
[0121] The present disclosure further discloses a non-transitory computer readable
storage medium storing one or more instructions for performing predictive analysis
of a database cluster within a plurality of database clusters, the instructions
comprising executable code which, when executed by one or more units of a system
[300], causes a collecting unit [302] to collect data associated with a set of historical
performance metrics from a plurality of database clusters. The instructions when
executed by the system [300] further cause a detecting unit [304], using a trained
model, to detect one or more anomalies in the set of historical performance metrics
data. The instructions when executed by the system [300] further cause a processing
unit [306] to perform a root cause analysis for the detected one or more anomalies.
The root cause analysis comprises at least one of examining logs, reviewing
configuration changes, and investigating impact of external factors. The instructions
when executed by the system [300] further cause a generating unit [308] to generate
a report based on the root cause analysis. The instructions when executed by the
system [300] further cause a display unit [310] to render the generated report.
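To summarise how the recited units could cooperate end to end, a minimal, non-limiting sketch is given below; the class and method names are assumptions that simply mirror the collecting unit [302], detecting unit [304], processing unit [306], generating unit [308], and display unit [310].

```python
# Illustrative sketch only: wiring the units of the system [300] together.
# The component interfaces (collect/detect/analyse/generate/render) are assumed.
class PredictiveAnalysisSystem:
    def __init__(self, collector, detector, analyser, reporter, display):
        self.collector = collector   # [302] collects historical performance metrics
        self.detector = detector     # [304] trained model for anomaly detection
        self.analyser = analyser     # [306] performs the root cause analysis
        self.reporter = reporter     # [308] generates the report
        self.display = display       # [310] renders the report

    def run(self, clusters: list) -> None:
        data = self.collector.collect(clusters)
        anomalies = self.detector.detect(data)
        causes = self.analyser.analyse(anomalies, data)
        report = self.reporter.generate(anomalies, causes)
        self.display.render(report)
```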
[0122] As is evident from the above, the present disclosure provides a technically
advanced solution for performing predictive analysis of a database cluster within a
plurality of database clusters. The present solution provides a system and a method
for gathering historical data from different regions of a network within the cluster,
such as from monitoring tools, logs, performance metrics, and resource usage
statistics. The present disclosure is implemented in the 5G network, but may further
be implemented in a 6th generation network or any other future generation of
network. The present disclosure provides numerous functionalities, such as metric
selection, reporting and recommendations, continuous learning, and monitoring of
the health and performance of the database, to ensure the overall stability,
availability, and performance of the database. Further, the present disclosure
performs anomaly detection, root cause analysis, and predictive analysis to remove
the anomalies.
[0123] While considerable emphasis has been placed herein on the disclosed
implementations, it will be appreciated that many implementations can be made and
that many changes can be made to the implementations without departing from the
principles of the present disclosure. These and other changes in the implementations
of the present disclosure will be apparent to those skilled in the art, whereby it is to
be understood that the foregoing descriptive matter is to be construed as illustrative
and non-limiting.
[0124] Further, in accordance with the present disclosure, it is to be acknowledged
that the functionality described for the various components/units can be
implemented interchangeably. While specific embodiments may disclose a
particular functionality of these units for clarity, it is recognized that various
configurations and combinations thereof are within the scope of the disclosure. The
functionality of specific units as disclosed in the disclosure should not be construed
as limiting the scope of the present disclosure. Consequently, alternative
arrangements and substitutions of units, provided they achieve the intended
functionality described herein, are encompassed within the scope of the present
disclosure.

We Claim:

1. A method [400] for performing predictive analysis of database cluster
within a plurality of database clusters, the method [400] comprising:
collecting, by a collecting unit [302], data associated with a set of
historical performance metrics from the plurality of database clusters;
detecting, by a detecting unit [304] using a trained model, one or more
anomalies in the set of historical performance metrics data;
performing, by a processing unit [306], a root cause analysis for the
detected one or more anomalies;
generating, by a generating unit [308], a report based on the root cause
analysis; and
rendering, by a display unit [310], the generated report.

2. The method [400] as claimed in claim 1, wherein the set of historical
performance metrics data comprises at least one of central processing unit
(CPU) utilisation, memory utilisation, network traffic, storage usage, and
application log.

3. The method [400] as claimed in claim 1, wherein the method [400] comprises
preprocessing, by the processing unit [306], the collected set of historical
performance metrics data, wherein the preprocessing comprises performing
at least one of data cleaning, transformation, and filtering.

4. The method [400] as claimed in claim 1, wherein the root cause analysis
comprises at least one of examining logs, reviewing configuration changes,
and investigating impact of external factors.

5. The method [400] as claimed in claim 1, wherein the one or more anomalies
correspond to unusual patterns that indicate at least one of a potential issue or
an irregular behaviour within the plurality of database clusters, wherein the
irregular behaviour comprises at least one of: increased query latency,
unbalanced load distribution, and high resource utilization.

6. The method [400] as claimed in claim 1, wherein the trained model is trained
based on the set of historical performance metrics data.

7. The method [400] as claimed in claim 1, wherein the set of historical
performance metrics data is collected periodically at a preconfigured time
period.

8. The method [400] as claimed in claim 1, wherein the method [400] further
comprises selecting, by a selecting unit [312], a set of key performance
metrics from the set of historical performance metrics, wherein the selection
of set of key performance metrics is based on a database of the plurality of
database clusters being analysed for detecting the one or more anomalies.

9. The method [400] as claimed in claim 8, wherein the set of key performance
metrics comprises at least one of response time, throughput, error rate, and
resource utilization.

10. The method [400] as claimed in claim 1, wherein the generated report
comprises a set of actionable recommendations for optimizing performance
of the plurality of database clusters, wherein the set of actionable
recommendations comprises at least a recommendation for scaling a size of
the plurality of database clusters based on the detected one or more anomalies.

11. The method [400] as claimed in claim 10, wherein the scaling comprises at
least one of scale up and scale down.

12. A system [300] for performing predictive analysis of database cluster within
a plurality of database clusters, the system [300] comprising:

a collecting unit [302] configured to collect data associated with a set
of historical performance metrics from the plurality of database clusters;
a detecting unit [304] using a trained model configured to detect one or
more anomalies in the set of historical performance metrics data;
a processing unit [306] configured to perform a root cause analysis for
the detected one or more anomalies;
a generating unit [308] configured to generate a report based on the root
cause analysis; and
a display unit [310] configured to render the generated report.

13. The system [300] as claimed in claim 12, wherein the set of historical
performance metrics data comprises at least one of central processing unit
(CPU) utilisation, memory utilisation, network traffic, storage usage, and
application log.

14. The system [300] as claimed in claim 12, wherein the processing unit [306]
is further configured to preprocess the collected set of historical performance
metrics data, wherein the preprocessing comprises performing at least one of
data cleaning, transformation, and filtering.

15. The system [300] as claimed in claim 12, wherein the root cause analysis
comprises at least one of examining logs, reviewing configuration changes,
and investigating impact of external factors.

16. The system [300] as claimed in claim 12, wherein the one or more anomalies
correspond to unusual patterns that indicate at least one of a potential issue or
an irregular behaviour within the plurality of database clusters, wherein the
irregular behaviour comprises at least one of: increased query latency,
unbalanced load distribution, and high resource utilization.

17. The system [300] as claimed in claim 12, wherein the trained model is trained
based on the set of historical performance metrics data.

18. The system [300] as claimed in claim 13, wherein the set of historical
performance metrics data is collected periodically at a preconfigured time
period.

19. The system [300] as claimed in claim 12, wherein the system [300] further
comprises a selecting unit [312] configured to select a set of key performance
metrics from the set of historical performance metrics, wherein the selection
of set of key performance metrics is based on a database of the plurality of
database clusters being analysed for detecting the one or more anomalies.

20. The system [300] as claimed in claim 19, wherein the set of key performance
metrics comprises at least one of response time, throughput, error rate, and
resource utilization.

21. The system [300] as claimed in claim 12, wherein the generated report
comprises a set of actionable recommendations for optimizing performance
of the plurality of database clusters, wherein the set of actionable
recommendations comprises at least a recommendation for scaling a size of
the plurality of database clusters based on the detected one or more anomalies.

22. The system [300] as claimed in claim 21, wherein the scaling comprises at
least one of scale up and scale down.

Dated this the 12th Day of September, 2023

Documents

Application Documents

# Name Date
1 202321061433-STATEMENT OF UNDERTAKING (FORM 3) [12-09-2023(online)].pdf 2023-09-12
2 202321061433-PROVISIONAL SPECIFICATION [12-09-2023(online)].pdf 2023-09-12
3 202321061433-POWER OF AUTHORITY [12-09-2023(online)].pdf 2023-09-12
4 202321061433-FORM 1 [12-09-2023(online)].pdf 2023-09-12
5 202321061433-FIGURE OF ABSTRACT [12-09-2023(online)].pdf 2023-09-12
6 202321061433-DRAWINGS [12-09-2023(online)].pdf 2023-09-12
7 202321061433-Proof of Right [06-02-2024(online)].pdf 2024-02-06
8 202321061433-FORM-5 [11-09-2024(online)].pdf 2024-09-11
9 202321061433-ENDORSEMENT BY INVENTORS [11-09-2024(online)].pdf 2024-09-11
10 202321061433-DRAWING [11-09-2024(online)].pdf 2024-09-11
11 202321061433-CORRESPONDENCE-OTHERS [11-09-2024(online)].pdf 2024-09-11
12 202321061433-COMPLETE SPECIFICATION [11-09-2024(online)].pdf 2024-09-11
13 202321061433-Request Letter-Correspondence [18-09-2024(online)].pdf 2024-09-18
14 202321061433-Power of Attorney [18-09-2024(online)].pdf 2024-09-18
15 202321061433-Form 1 (Submitted on date of filing) [18-09-2024(online)].pdf 2024-09-18
16 202321061433-Covering Letter [18-09-2024(online)].pdf 2024-09-18
17 202321061433-CERTIFIED COPIES TRANSMISSION TO IB [18-09-2024(online)].pdf 2024-09-18
18 Abstract 1.jpg 2024-10-07
19 202321061433-FORM 3 [07-10-2024(online)].pdf 2024-10-07
20 202321061433-ORIGINAL UR 6(1A) FORM 1 & 26-200125.pdf 2025-01-24