Abstract: The disclosed invention relates to a system (102) and method for automatically managing high load on a database (210) using a normalizer (214). The system and method include monitoring the load and health of the database (210) using an AI model (212), determining failed and successful operations to ascertain database health, and automatically pausing the normalizer (214) during high load or connectivity issues. The AI model (212) continuously monitors the database (210) and resumes the normalizer (214) once conditions are normal, increasing the processing rate to compensate for unprocessed data. This system (102) and method prevent data loss, improve efficiency, and require minimal manual intervention. FIG. 3
FORM 2
THE PATENTS ACT, 1970 (39 of 1970) THE PATENTS RULES, 2003
COMPLETE SPECIFICATION
(See section 10; rule 13)
TITLE OF THE INVENTION
SYSTEM AND METHOD FOR LOAD BALANCING OF DATABASE IN A NETWORK
APPLICANT
JIO PLATFORMS LIMITED
of Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad -
380006, Gujarat, India; Nationality: India
The following specification particularly describes
the invention and the manner in which
it is to be performed
RESERVATION OF RIGHTS
[0001] A portion of the disclosure of this patent document contains
material, which is subject to intellectual property rights such as, but not limited to, copyright, design, trademark, Integrated Circuit (IC) layout design, and/or trade dress protection, belonging to Jio Platforms Limited (JPL) or its affiliates (hereinafter referred to as the owner). The owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure, as it appears in the Patent and Trademark Office patent files or records, but otherwise reserves all rights whatsoever. All rights to such intellectual property are fully reserved by the owner.
FIELD OF INVENTION
[0002] The embodiments of the present disclosure generally relate to load
balancing. More particularly, the present disclosure relates to a system and a method for load balancing of the database or application in a network for adapted troubleshooting operation management.
BACKGROUND OF THE INVENTION
[0003] The following description of related art is intended to provide
background information pertaining to the field of the disclosure. This section may include certain aspects of the art that may be related to various features of the present disclosure. However, it should be appreciated that this section should be used only to enhance the understanding of the reader with respect to the present disclosure, and not as admissions of prior art.
[0004] In the field of telecommunications, load balancing is essential for
providing efficient operation of the network. Efficient load balancing not only ensures optimal resource utilization but also significantly reduces latency, thereby maintaining the integrity of data within the network. Load balancing involves distributing incoming network traffic across multiple servers or resources to ensure no single server bears too much demand. This process enhances the reliability and scalability of applications and services.
[0005] Load balancing can be accomplished through various methods,
including hardware-based solutions, software-based solutions, and a combination of both. Hardware load balancers are physical devices that distribute traffic based on pre-defined algorithms, such as round-robin, least connections, or IP hash. Software load balancers, on the other hand, are applications that perform similar functions but run on standard hardware. These can be more flexible and scalable, often integrated into cloud environments.
[0006] Conventionally, load balancing is performed using algorithms that
distribute the workload evenly or based on specific criteria such as server capacity or current load. For instance, the round-robin method distributes requests sequentially among servers, while the least connections method directs traffic to the server with the fewest active connections. Advanced methods use health checks to ensure traffic is only directed to servers that are operational and performing optimally.
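For illustration of the related art only (and not as part of the claimed subject matter), the round-robin and least-connections policies described above can be sketched as follows; the server names and bookkeeping are hypothetical:

```python
# Illustrative sketch of two classic load-balancing policies
# (round-robin and least-connections); all names are hypothetical.

from itertools import count

class RoundRobin:
    """Distributes requests sequentially among servers."""
    def __init__(self, servers):
        self.servers = servers
        self._counter = count()

    def pick(self):
        # Cycle through the server list in order.
        return self.servers[next(self._counter) % len(self.servers)]

class LeastConnections:
    """Directs traffic to the server with the fewest active connections."""
    def __init__(self, servers):
        self.active = {s: 0 for s in servers}

    def pick(self):
        server = min(self.active, key=self.active.get)
        self.active[server] += 1   # a request is assigned to this server
        return server

    def release(self, server):
        self.active[server] -= 1   # the request on this server completed
```

A health-checking balancer, as mentioned above, would additionally remove servers from these pools when their checks fail.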
[0007] When the load on a database or application exceeds a certain
threshold, the application may start to behave erratically. This typically
necessitates manual intervention by the operations team to pause the application
or halt incoming data. Manual intervention involves monitoring system
performance and taking corrective actions such as redistributing the load, adding
more resources, or temporarily suspending the application to prevent failures. If
such intervention is not performed in a timely manner, incoming requests to the
application may fail, leading to operational inefficiencies and potential data loss.
[0008] However, none of the existing techniques addresses the challenges
of performing load balancing without manual intervention, where a normalizer is enabled to pause and resume operation automatically. There is, therefore, a need in the art for a system and a method that can mitigate the problems associated with the prior art.
OBJECTS OF THE INVENTION
[0009] Some of the objects of the present disclosure, which at least one
embodiment herein satisfies, are listed herein below.
[0010] An object of the present disclosure is to provide a system and a
method for load balancing of the database in a network.
[0011] Another object of the present disclosure is to reduce time and
efforts in application management/operations.
[0012] An object of the present disclosure is to minimize the failures at the
network.
[0013] An object of the present disclosure is to provide a system and a
method that is economical and easy to implement.
[0014] An object of the present disclosure is to provide a system and a
method that pauses and resumes the operation without manual intervention
whenever there is a connectivity issue or high load on database or application.
SUMMARY
[0015] In an exemplary embodiment, a method for automatically
managing a database (210) by utilizing a normalizer (214) is described. The
method includes monitoring, by an Artificial Intelligence (AI) model (212), a load
and health of the database (210), and determining, by the AI model (212), failed
and successful operations at the database (210) to ascertain the health of the
database (210). The method includes automatically pausing the normalizer (214),
by a processing engine (208), when the load on the database (210) is above a
threshold level or when the health of the database (210) is suboptimal and
monitoring, by the AI model (212), the database (210) to check if an operating
condition of the database (210) returns to normal. The normal operating condition
includes a state in which the load on the database (210) is below the threshold level or the health
of the database (210) is in a desired operating state. The method further includes
resuming, by the processing engine (208), the normalizer (214) automatically
once the operating condition is normal, and increasing, by the processing engine
(208), a processing rate to compensate for a time period during which the normalizer
(214) was paused and unprocessed data accumulated.
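As a purely illustrative sketch (not part of the claimed subject matter), the monitoring, pausing, and resuming steps of this method can be expressed as a control loop; the threshold values, the success-rate health rule, and all names below are assumptions rather than disclosed details:

```python
# Hedged sketch of the pause/resume control loop described above.
# Threshold values, metric names, and the simple health rule are
# illustrative assumptions, not part of the disclosure.

LOAD_THRESHOLD = 0.8          # hypothetical normalized load limit
MIN_SUCCESS_RATE = 0.95       # hypothetical health criterion

class Normalizer:
    def __init__(self):
        self.paused = False
        self.rate = 1.0       # relative processing rate

    def pause(self):
        self.paused = True

    def resume(self, rate=1.0):
        self.paused = False
        self.rate = rate

def manage(normalizer, load, success_ops, failed_ops):
    """One monitoring cycle: pause on high load or poor health,
    resume at a higher rate once conditions return to normal."""
    total = success_ops + failed_ops
    healthy = total == 0 or success_ops / total >= MIN_SUCCESS_RATE
    if load > LOAD_THRESHOLD or not healthy:
        normalizer.pause()
    elif normalizer.paused:
        # Resume at an increased rate to drain the backlog that
        # accumulated while the normalizer was paused.
        normalizer.resume(rate=2.0)
    return normalizer.paused
```

In the claimed method the monitoring and health determination are performed by the AI model (212) and the pause/resume by the processing engine (208); this sketch collapses both into one function for brevity.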
[0016] In some embodiments, the monitoring includes analyzing
throughput and other factors affecting the database (210).
[0017] In some embodiments, the automatic pausing and the resuming
operations prevent data loss during connectivity issues.
[0018] In some embodiments, the AI model (212) is continuously trained
on statistics associated with operations of the database (210).
[0019] In some embodiments, the method includes storing data in the
database (210) or providing the data to an external system when the load on the
database (210) is below a threshold level, or when the health of the database (210)
is optimal.
[0021] In another exemplary embodiment, a system (102) for
automatically managing a database (210) by utilizing a normalizer (214) is
described. The system (102) includes an Artificial Intelligence (AI) model (212)
configured to monitor a load and health of the database (210); and determine the
failed and successful operations at the database (210) to ascertain the health of the
database (210). The system includes a processing engine (208) configured to
automatically pause the normalizer (214) when a load on the database (210) is
above a threshold level or when the health of the database (210) is suboptimal. The AI
model (212) of the system is further configured to monitor the database (210) to
check if an operating condition of the database (210) returns to normal, wherein the
normal operating condition comprises a state in which the load on the database (210) is below the
threshold level or the health of the database (210) is in a desired operating state.
The processing engine (208) is further configured to automatically resume the
normalizer (214) once the operating condition is normal; and increase the processing
rate of the database (210), using the AI model (212), to compensate for a time
period during which the normalizer (214) was paused and unprocessed data had
accumulated.
[0021] In some embodiments, monitoring includes analyzing throughput
and other factors affecting the database (210).
[0022] In some embodiments, the automatic pausing and the resuming
operations prevent data loss during connectivity issues.
[0023] In some embodiments, the AI model (212) is continuously trained
on statistics associated with operations of the database (210).
[0024] In some embodiments, the system further includes components
configured to store data in the database (210) or provide it to an external system when the load on the database (210) is below a threshold level, or when the health of the database (210) is optimal.
[0025] The foregoing general description of the illustrative embodiments
and the following detailed description thereof are merely exemplary aspects of the teachings of this disclosure and are not restrictive.
BRIEF DESCRIPTION OF DRAWINGS
[0026] The accompanying drawings, which are incorporated herein, and
constitute a part of this disclosure, illustrate exemplary embodiments of the
disclosed methods and systems, in which like reference numerals refer to the same
parts throughout the different drawings. Components in the drawings are not
necessarily to scale, emphasis instead being placed upon clearly illustrating the
principles of the present disclosure. Some drawings may indicate the components
using block diagrams and may not represent the internal circuitry of each
component. It will be appreciated by those skilled in the art that disclosure of such
drawings includes the disclosure of electrical components, electronic components,
or circuitry commonly used to implement such components.
[0027] FIG. 1 illustrates an example network architecture for
implementing a proposed system, in accordance with an embodiment of the
present disclosure.
[0028] FIG. 2 illustrates an example block diagram of a system, in
accordance with an embodiment of the present disclosure.
[0029] FIG. 3 illustrates an example flow diagram representing a method
for load balancing of a database in a network, in accordance with an embodiment
of the present disclosure.
[0030] FIG. 4A illustrates an exemplary representation of a network
configuration with a single normalizer instance, in accordance with
some embodiments of the present disclosure.
[0031] FIG. 4B illustrates an exemplary representation of a network
configuration with multiple normalizer instances, in accordance
with some embodiments of the present disclosure.
[0032] FIG. 5 illustrates an example computer system in which or with
which the embodiments of the present disclosure may be implemented.
[0033] FIG. 6 illustrates a flow chart of a method for automatically
managing a database by utilizing a normalizer, in accordance with some
embodiments of the present disclosure.
[0034] The foregoing shall be more apparent from the following more
detailed description of the disclosure.
LIST OF REFERENCE NUMERALS
100 – Network Architecture
102 – System
104 – Network
106 – Centralized Server
108-1, 108-2…108-N – User Equipments
110-1, 110-2…110-N – Users
202 – One or more processor(s)
204 – Memory
206 – Interface(s)
208 – Processing engine(s)
210 – Database
212 – AI/ML model
214 – Normalizer
500 – Computer System
510 – External Storage Device
520 – Bus
530 – Main Memory
540 – Read-Only Memory
550 – Mass Storage Device
560 – Communication Port(s)
570 – Processor
DETAILED DESCRIPTION
[0035] In the following description, for explanation, various specific
details are outlined in order to provide a thorough understanding of embodiments of the present disclosure. It will be apparent, however, that embodiments of the present disclosure may be practiced without these specific details. Several features described hereafter can each be used independently of one another or with any
combination of other features. An individual feature may not address all of the
problems discussed above or might address only some of the problems discussed
above. Some of the problems discussed above might not be fully addressed by any
of the features described herein.
[0036] The ensuing description provides exemplary embodiments only and
is not intended to limit the scope, applicability, or configuration of the disclosure. Rather, the ensuing description of the exemplary embodiments will provide those skilled in the art with an enabling description for implementing an exemplary embodiment. It should be understood that various changes may be made in the function and arrangement of elements without departing from the spirit and scope
of the disclosure as set forth.
[0037] Specific details are given in the following description to provide a
thorough understanding of the embodiments. However, it will be understood by one of ordinary skill in the art that the embodiments may be practiced without these specific details. For example, circuits, systems, networks, processes, and
other components may be shown as components in block diagram form in order
not to obscure the embodiments in unnecessary detail. In other instances, well-known circuits, processes, algorithms, structures, and techniques may be shown
without unnecessary detail to avoid obscuring the embodiments.
[0039] Also, it is noted that individual embodiments may be described as a
process that is depicted as a flowchart, a flow diagram, a data flow diagram, a structure diagram, or a block diagram. Although a flowchart may describe the
operations as a sequential process, many of the operations can be performed in parallel or concurrently. In addition, the order of the operations may be re-arranged. A process is terminated when its operations are completed but could have additional steps not included in a figure. A process may correspond to a method, a function, a procedure, a subroutine, a subprogram, etc. When a process
corresponds to a function, its termination can correspond to a return of the function to the calling function or the main function.
[0040] The word “exemplary” and/or “demonstrative” is used herein to
mean serving as an example, instance, or illustration. For the avoidance of doubt, the subject matter disclosed herein is not limited by such examples. In addition,
any aspect or design described herein as “exemplary” and/or “demonstrative” is not necessarily to be construed as preferred or advantageous over other aspects or designs, nor is it meant to preclude equivalent exemplary structures and techniques known to those of ordinary skill in the art. Furthermore, to the extent that the terms “includes,” “has,” “contains,” and other similar words are used in
either the detailed description or the claims, such terms are intended to be inclusive like the term “comprising” as an open transition word without precluding any additional or other elements.
[0041] Reference throughout this specification to “one embodiment” or
“an embodiment” or “an instance” or “one instance” means that a particular
feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present disclosure. Thus, the
appearances of the phrases “in one embodiment” or “in an embodiment” in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
[0042] The terminology used herein is to describe particular embodiments
only and is not intended to limit the disclosure. As used herein, the singular forms “a”, “an”, and “the” are intended to include the plural forms as well, unless the context indicates otherwise. It will be further understood that the terms
“comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. As used herein, the term “and/or” includes any combinations of one or more of the
associated listed items.
[0043] Embodiments herein relate to a method for load balancing of a
database in a network. The system uses one or more artificial intelligence and machine learning algorithms to determine the failed and successful operations at the database. Based on this determination, the
system determines the health of the application or the database. In case the health level of the application is suboptimal, or the load on the database exceeds a threshold, the system may stop loading the database. Further, when the load on the database reduces to a particular level, the system may continue loading the database to avoid any loss of data.
[0044] The various embodiments of the disclosure will be explained
in more detail with reference to FIGs. 1-6.
[0045] FIG. 1 illustrates an example network architecture (100) for
implementing a system (102), in accordance with an embodiment of the present disclosure.
[0046] Referring to FIG. 1, the system (102) is connected to a
network (104), which is further connected to at least one computing device 108-1,
108-2, … 108-N (collectively referred to as computing devices 108, herein) associated with one or more users 110-1, 110-2, … 110-N (collectively referred to as users 110, herein). The computing devices (108) may be personal computers, laptops, tablets, wristwatches, or any custom-built computing device integrated within a modern diagnostic machine that can connect to a network as an IoT (Internet of Things) device. Further, the system (102) may configure a centralized server (106) to store compiled data.
[0047] In an embodiment, the system (102) may receive at least one input
data from the at least one computing device (108). A person of ordinary skill in
the art will understand that the at least one computing device (108) may be
individually referred to as a computing device (108) and collectively referred to as
computing devices (108). In an embodiment, the users (110) may also be referred
to as User Equipment (UE). Accordingly, the terms “computing device” and
“User Equipment” may be used interchangeably throughout the disclosure.
[0048] In an embodiment, the computing device (108) may transmit the at
least one captured data packet over a point-to-point or point-to-multipoint
communication channel or network (104) to the system (102).
[0049] In an embodiment, the computing device (108) may involve
collection, analysis, and sharing of data received from the system (102) via the communication network (104).
[0050] In an exemplary embodiment, the communication network (104)
may include, but not be limited to, at least a portion of one or more networks having one or more nodes that transmit, receive, forward, generate, buffer, store, route, switch, process, or a combination thereof, etc. one or more messages, packets, signals, waves, voltage or current levels, some combination thereof, or so forth. In an exemplary embodiment, the communication network (104) may include, but not be limited to, a wireless network, a wired network, an internet, an intranet, a public network, a private network, a packet-switched network, a circuit-switched network, an ad hoc network, an infrastructure network, a Public-Switched Telephone Network (PSTN), a cable network, a cellular network, a satellite network, a fiber optic network, or some combination thereof.
[0051] In an embodiment, the one or more computing devices (108) may
communicate with the system (102) via a set of executable instructions residing on any operating system. In an embodiment, the one or more computing devices (108) may include, but not be limited to, any electrical, electronic, or electro-mechanical equipment, or a combination of one or more of the above devices such as a mobile phone, smartphone, Virtual Reality (VR) devices, Augmented Reality (AR) devices, laptop, a general-purpose computer, desktop, personal digital assistant, tablet computer, mainframe computer, or any other computing device, wherein the one or more computing devices (108) may include
one or more in-built or externally coupled accessories including, but not limited to, a visual aid device such as a camera, audio aid, a microphone, a keyboard, input devices such as a touch pad, touch-enabled screen, electronic pen, receiving devices for receiving any audio or visual signal in any range of frequencies, and transmitting devices that can transmit any audio or visual signal in any range of
frequencies. It may be appreciated that the one or more computing devices (108) may not be restricted to the mentioned devices and various other devices may be used.
[0052] In an embodiment, the system is connected to a network (104),
which is connected to the at least one user equipment (108), which may include, but is not
limited to, personal computers, smartphones, laptops, tablets, and smart watches, as well as other IoT devices that support a display.
[0053] In another exemplary embodiment, a centralized server (106)
includes or comprises, by way of example but not limitation, one or more of: a
stand-alone server, a server blade, a server rack, a bank of servers, a server farm,
hardware supporting a part of a cloud service or system, a home server, hardware
running a virtualized server, one or more processors executing code to function as
a server, one or more machines performing server-side functionality as described
herein, at least a portion of any of the above, or some combination thereof.
[0054] In an embodiment, the network (104) is further configured with the
centralized server (106) including a database. The stored output can be retrieved whenever there is a need to reference it in the future.
[0055] In an embodiment, the computing device (108) associated with the
one or more users (110) may transmit the at least one captured data packet over a
point-to-point or point-to-multipoint communication channel or network (104) to
the system (102).
[0056] In an embodiment, the computing device (108) may involve
collection, analysis, and sharing of data received from the system (102) via the communication network (104).
[0057] In an embodiment, the system (102) may determine that the
database or the application is not able to process the data successfully, and the
system may pause the loading of the database or the application. In an example,
the system may pause sending the data to the database or the application. Once the
system determines that the load on the application or the database is reduced,
the system may start sending the data to the database or to the application.
[0058] Although FIG. 1 shows exemplary components of the network
architecture (100), in other embodiments, the network architecture (100) may include fewer components, different components, differently arranged components, or additional functional components than depicted in FIG. 1. Additionally, or alternatively, one or more components of the network architecture (100) may perform functions described as being performed by one or more other
components of the network architecture (100).
[0059] FIG. 2 illustrates an example block diagram (200) of a system
(102), in accordance with an embodiment of the present disclosure. The system (102) is designed to manage the load on a database and application using artificial intelligence techniques, ensuring optimal performance and reliability.
[0060] Referring to FIG. 2, the system (102) may include one or more
processor(s) (202). The one or more processor(s) (202) may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, logic circuitries, and/or any devices that process data based on operational instructions. Among other capabilities, the one
or more processor(s) (202) may be configured to fetch and execute computer-readable instructions stored in a memory (204) of the system (102). The memory
(204) may be configured to store one or more computer-readable instructions or routines in a non-transitory computer-readable storage medium, which may be fetched and executed to create or share data packets over a network service. The memory (204) may comprise any non-transitory storage device including, for example, volatile memory such as random-access memory (RAM), or non-volatile memory such as erasable programmable read-only memory (EPROM), flash memory, and the like.
[0061] In an embodiment, the system (102) may include an interface(s)
(206). The interface(s) (206) may comprise a variety of interfaces, for example,
interfaces for data input and output devices (I/O), storage devices, and the like. The interface(s) (206) may facilitate communication through the system (102). The interface(s) (206) may also provide a communication pathway for one or more components of the system (102). Examples of such components include, but are not limited to, a processing engine(s) (208) and a database (210). Further, the
processing engine(s) (208) may include one or more engine(s), such as, but not limited to, an input/output engine, an identification engine, and an optimization engine.
[0062] In an embodiment, the processing engine(s) (208) may be
implemented as a combination of hardware and programming (for example,
programmable instructions) to implement one or more functionalities of the processing engine(s) (208). In examples described herein, such combinations of hardware and programming may be implemented in several different ways. For example, the programming for the processing engine(s) (208) may be processor-executable instructions stored on a non-transitory machine-readable storage
medium and the hardware for the processing engine(s) (208) may comprise a processing resource (for example, one or more processors), to execute such instructions. In the present examples, the machine-readable storage medium may store instructions that, when executed by the processing resource, implement the processing engine(s) (208). In such examples, the system may comprise the
machine-readable storage medium storing the instructions and the processing resource to execute the instructions, or the machine-readable storage medium may
be separate but accessible to the system and the processing resource. In other examples, the processing engine(s) (208) may be implemented by electronic circuitry.
[0063] In an embodiment, the processing engine (208) may include an
Artificial Intelligence model (212), alternatively referred to as an artificial intelligence/Machine Learning (AI/ML) model (212) (referred to as the AI model (212)), and a normalizer (214). The AI model (212) continuously monitors the load and health of the database (210) and the normalizer application (214). The step of monitoring includes analyzing metrics, such as success/failure rates and
throughput, to determine the operational state of the database and application.
[0064] The normalizer (214) is a specialized application implemented to
manage and normalize data flow to the database (210) or application, particularly under varying load conditions. When the load on the database/application exceeds a predefined threshold or when the health of the database (210) is suboptimal, the
processing engine (208) automatically pauses the normalizer (214) to prevent overload and potential failures. The predefined threshold may refer to a limit or capacity which, when reached, causes abnormal operation of the database (210). The abnormal operations include, but are not limited to, missing queries, inaccurate database operations, inaccurate outputs, etc. In some examples, the
threshold may include a performance threshold, a resource utilization threshold, a storage capacity threshold, an error and stability threshold, a security threshold, a backup and recovery threshold, etc. This pausing mechanism helps prevent data loss and maintains the integrity of the database/application during high load or connectivity issues. The health of the database (210) may refer to its overall operational
status, encompassing performance, reliability, integrity, and security. Some
metrics that may be used to assess the health include, but are not limited to,
performance metrics, resource utilization, storage and capacity, error and stability,
data integrity, and security.
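A minimal sketch of how several of the threshold categories above might be combined into a single pause decision follows; the metric keys and limit values are hypothetical and not taken from the disclosure:

```python
# Hypothetical threshold table combining several of the categories
# named above; the limits and metric keys are illustrative only.

THRESHOLDS = {
    "cpu_utilization": 0.85,     # resource utilization threshold
    "storage_used":    0.90,     # storage capacity threshold
    "error_rate":      0.05,     # error and stability threshold
}

def breaches(metrics):
    """Return the names of all thresholds the current metrics exceed."""
    return [name for name, limit in THRESHOLDS.items()
            if metrics.get(name, 0.0) > limit]

def should_pause(metrics):
    # Any single breach is treated as abnormal operation and
    # triggers pausing of the normalizer.
    return bool(breaches(metrics))
```

In practice the limits would be tuned per deployment, and in the embodiments described here the evaluation is performed by the AI model (212) rather than a static table.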
[0065] According to one aspect, the AI model (212) is further configured
to monitor the database (210) to check if an operating condition of the database (210) returns to normal. When the load on the database is below the threshold
level, the database (210) is said to be running at the normal operating condition. The normal operating condition also includes a state when the health of the database (210) is in a desired operating state. The desired operating state indicates that the metrics that are used to assess the health are in the optimal value range. Once the operating conditions of the database (210) are identified as normal, or the health of the database (210) has returned to the desired operating state, or the connectivity issues are resolved, the processing engine (208) resumes the normalizer (214) automatically. Such automatic pausing and resuming operations prevent data loss during connectivity issues.
[0066] Upon resumption, the processing rate of the database (210) is
increased, for example, exponentially or linearly, using the AI model (212), to compensate for the time period during which the normalizer (214) was paused and unprocessed data had accumulated, ensuring that the system catches up efficiently. This adaptive load management minimizes the need for manual intervention and
enhances overall system efficiency. For example, the processing rate is maintained
such that the database (210) does not experience the load above the threshold, or
the metrics used for assessing the health do not deviate from desired or optimal
values.
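The exponential or linear catch-up described above may be sketched as follows; the growth factor, step size, and cap are illustrative assumptions rather than values taken from the disclosure:

```python
# Illustrative catch-up schedules for the processing rate after a
# pause; the growth factor, step, and cap are assumed values.

def linear_rate(base, step, cycles, cap):
    """Increase the rate by a fixed step each cycle, up to a cap."""
    return min(base + step * cycles, cap)

def exponential_rate(base, factor, cycles, cap):
    """Multiply the rate each cycle, capped so the database is not
    driven back above its load threshold while catching up."""
    return min(base * factor ** cycles, cap)
```

The cap plays the role described above: the rate is raised only as far as the database (210) can sustain without breaching the threshold or degrading its health metrics.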
[0067] Although FIG. 2 shows exemplary components of the system
(102), in other embodiments, the system (102) may include fewer components, different components, differently arranged components, or additional functional components than depicted in FIG. 2. Additionally, or alternatively, one or more components of the system (102) may perform functions described as being performed by one or more other components of the system (102).
[0068] Although the disclosure describes managing the load and health of
the database (210), one can appreciate that the normalizer (214) can be used for managing the load and health of any application. In some examples, the normalizer (214) can be integrated as part of the database (210) or the applications.
[0069] FIG. 3 illustrates an example flow diagram (300) representing a
method for load balancing of the database in a network, in accordance with an
embodiment of the present disclosure.
[0070] At step (302), the system is configured for data ingestion to the
normalizer via files or streams. The data ingestion process ensures that data is
continuously fed into the normalizer for processing.
[0071] At step (304), the AI/ML model checks if there is any threshold
breach on the application/database or any health issues. This step involves continuous monitoring of various metrics, such as success and failure rates, throughput, concurrent requests (or workload), resource adequacy, deadlocks, etc., affecting the database/application performance.
[0072] At step (306), if the AI/ML model determines that a threshold
breach or health issue has occurred, the system, by using the processing engine
(208), pauses the processing of the data. This automatic pausing helps prevent
overload and potential data loss.
[0073] At step (308), the system continuously checks if the conditions
have returned to normal. This involves continuous monitoring to determine if the database/application load and health metrics are back within acceptable thresholds.
[0074] At step (310), once the conditions are back to normal, the system
resumes the application with an increased processing rate. This step ensures that
any accumulated unprocessed data during the pause period is quickly processed, allowing the system to catch up efficiently.
[0075] At step (312), if there are no threshold breaches or health issues,
the system processes the data and stores it in the database or provides it to an external system. This ensures seamless data flow and processing under normal
conditions.
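The control flow of steps (302) through (312) can be sketched as a simple loop. This is a minimal illustration, not the claimed implementation: the `ingest`, `model`, `normalizer`, and `store` objects are hypothetical stand-ins, and the fixed `rate_factor` of 2.0 is an assumed example of the increased processing rate of step (310).

```python
import time

def run_normalizer_loop(ingest, model, normalizer, store, poll_secs=1.0):
    """Sketch of the FIG. 3 flow: ingest data (302), check thresholds
    (304), pause on a breach or health issue (306), wait for recovery
    (308), resume at an increased rate (310), otherwise process and
    store the data (312). All interfaces here are illustrative.
    """
    for batch in ingest:
        if model.breach():                      # steps 304/306
            normalizer.pause()
            while model.breach():               # step 308: wait for recovery
                time.sleep(poll_secs)
            normalizer.resume(rate_factor=2.0)  # step 310: catch up faster
        store(normalizer.process(batch))        # step 312
```

In practice the breach check and the recovery check would query the AI/ML model's monitored metrics (success/failure rates, throughput, concurrent requests, deadlocks, and so on) rather than a single boolean.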
[0076] FIG. 4A illustrates an exemplary representation (400A) of a system
in a network with a single normalizer instance, in accordance with an embodiment
of the present disclosure.
[0077] At step (412), data ingestion via files or streams is performed. This
is the initial step where data is fed into the normalizer for processing.
[0078] At step (414), the normalizer processes the ingested data. The
normalizer ensures that data is properly managed and normalized before being processed further.
[0079] At step (416), the processed data is stored in a data lake. The data
lake acts as a central repository for all processed data, ensuring easy access and management.
[0080] At step (418), the AI/ML model (212), trained on application and
database operational statistics associated with operations of the database (210), continuously monitors and manages the normalizer (214). This step ensures that
the normalizer operates efficiently and adapts to changing load conditions. In examples, the AI/ML model (212) may define the threshold based on the database operational statistics. Examples of database operational statistics include performance metrics, resource utilization, latency, storage metrics, concurrency and locking, error rates, cache performance, replication and availability, backup and recovery, index performance, configuration statistics, workload patterns, etc. Key performance metrics include query execution time, which measures the duration taken to execute queries, and transaction throughput, indicating the number of transactions processed per second (TPS). Similarly, query throughput (QPS) reflects the number of queries processed per second, while the cache hit ratio shows the percentage of read requests served from the cache rather than the disk. Resource utilization statistics include CPU usage, memory usage, disk I/O, and network I/O. High CPU or memory usage can signal performance issues, whereas disk and network I/O metrics provide insights into data transfer efficiency and potential bottlenecks. Latency metrics, such as read and write latency, measure the time taken for data operations, helping to identify slowdowns in data access. Storage metrics provide a detailed view of how space is used within the database, including the total database size and the sizes of individual tables and indexes. Storage utilization indicates the percentage of allocated storage in use, highlighting potential needs for expansion. Concurrency and locking statistics, such as active sessions, lock wait time, and the number of deadlocks, reveal how well the database handles multiple simultaneous operations and potential contention issues. Error rate statistics indicate database health, tracking the number of query errors and transaction failures; high error rates can indicate underlying problems that need to be addressed. Cache performance metrics, including the buffer pool hit ratio and page read/write counts, assess the efficiency of memory usage in serving data requests. Replication and availability metrics indicate data consistency and uptime: replication lag measures the delay between primary and replica databases, while node availability tracks the uptime and health of nodes in a clustered environment. Backup and recovery statistics, such as backup duration and recovery time, are important for disaster recovery planning and for ensuring data protection. User activity metrics, which monitor successful and failed login attempts and the number of active users over time, help to detect unusual patterns or potential security threats. Index performance metrics, including index usage frequency and fragmentation levels, indicate how effectively indexes are being utilized and maintained. Maintenance tasks, such as vacuum operations in PostgreSQL or index rebuilds, are necessary for keeping the database running smoothly and preventing performance degradation. Configuration statistics, including parameter settings and change history, provide insights into the database's setup and any alterations that might impact performance. Log analysis involves monitoring log volume and identifying errors and warnings recorded in logs, which can offer early warnings of issues. The workload patterns, including peak usage times and query distribution, help in understanding usage trends and planning for capacity needs.
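A threshold defined over such operational statistics can be sketched as follows. This is a minimal illustration under stated assumptions: the metric names and limit values below are invented examples, not values from the specification, and in practice the AI/ML model would derive the limits from historical statistics rather than hard-code them.

```python
# Illustrative thresholds an AI/ML model might derive from historical
# operational statistics; names and values are examples only.
THRESHOLDS = {
    "cpu_usage_pct": 85.0,        # resource utilization
    "query_latency_ms": 250.0,    # latency metrics
    "error_rate_pct": 5.0,        # error rate statistics
    "replication_lag_s": 30.0,    # replication and availability
    "deadlocks_per_min": 3.0,     # concurrency and locking
}

def breached_metrics(snapshot: dict) -> list:
    """Return the names of metrics whose current value exceeds its
    threshold; a non-empty result corresponds to a threshold breach
    at step (304)/(418). Missing metrics are treated as zero."""
    return [name for name, limit in THRESHOLDS.items()
            if snapshot.get(name, 0.0) > limit]
```

A non-empty return value would trigger the automatic pause of the normalizer (214), while an empty one indicates the desired operating state.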
[0081] FIG. 4B illustrates an exemplary representation (400B) of a system
in a network with multiple normalizer instances, in accordance with an
embodiment of the present disclosure.
[0082] At step (420), multiple normalizer instances (Normalizer1,
Normalizer2, Normalizer3) process the ingested data. Each normalizer instance
operates independently to manage and normalize data flow.
[0083] At step (422), the processed data from multiple normalizer
instances is stored in a data lake. This centralized storage ensures that all processed data is easily accessible and managed.
[0084] At step (424), an AI/ML model, trained on live operational
statistics of operations of the application and database, continuously monitors and
manages the multiple normalizer instances. The AI/ML model ensures efficient
load balancing and management across all instances.
[0085] At step (426), an external system consumes the enriched data from
the normalizers when the load on the database (210) is below a threshold level, or
when the health of the database (210) is optimal. This step ensures that processed
and normalized data is utilized effectively by external applications or systems.
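The multi-instance arrangement of FIG. 4B can be sketched as one monitor acting on several independent normalizer instances. The class and function below are hypothetical stand-ins, not the claimed implementation; in particular, pausing all instances together on overload is one simple policy among many the AI/ML model could apply.

```python
class NormalizerInstance:
    """Minimal stand-in for one normalizer instance in FIG. 4B."""
    def __init__(self, name: str):
        self.name, self.paused = name, False
    def pause(self):  self.paused = True
    def resume(self): self.paused = False

def manage_instances(instances, overloaded: bool):
    """Pause every instance while the database is overloaded or
    unhealthy; resume all of them once conditions return to normal.
    Returns the names of the instances currently paused."""
    for inst in instances:
        if overloaded and not inst.paused:
            inst.pause()
        elif not overloaded and inst.paused:
            inst.resume()
    return [inst.name for inst in instances if inst.paused]
```

A finer-grained policy could instead pause only a subset of instances proportional to the observed overload, which FIG. 4B's per-instance management would equally permit.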
[0086] FIG. 5 illustrates an example computer system (500) in which or
with which the embodiments of the present disclosure may be implemented.
[0087] As shown in FIG. 5, the computer system (500) may include an
external storage device (510), a bus (520), a main memory (530), a read-only memory (540), a mass storage device (550), a communication port(s) (560), and a processor (570). A person skilled in the art will appreciate that the computer
system (500) may include more than one processor and communication ports. The processor (570) may include various modules associated with embodiments of the present disclosure. The communication port(s) (560) may be any of an RS-232 port for use with a modem-based dialup connection, a 10/100 Ethernet port, a Gigabit or 10 Gigabit port using copper or fiber, a serial port, a parallel port, or other existing or future ports. The communication port(s) (560) may be chosen
depending on a network, such as a Local Area Network (LAN), Wide Area
Network (WAN), or any network to which the computer system (500) connects.
[0088] In an embodiment, the main memory (530) may be Random Access
Memory (RAM), or any other dynamic storage device commonly known in the
art. The read-only memory (540) may be any static storage device(s), e.g., but not limited to, a Programmable Read Only Memory (PROM) chip for storing static information, e.g., start-up or basic input/output system (BIOS) instructions for the processor (570). The mass storage device (550) may be any current or future mass storage solution, which can be used to store information and/or instructions. Exemplary mass storage solutions include, but are not limited to, Parallel Advanced Technology Attachment (PATA) or Serial Advanced Technology Attachment (SATA) hard disk drives or solid-state drives (internal or external, e.g., having Universal Serial Bus (USB) and/or Firewire interfaces).
[0089] In an embodiment, the bus (520) may communicatively couple the
processor(s) (570) with the other memory, storage, and communication blocks.
The bus (520) may be, e.g., a Peripheral Component Interconnect (PCI) / PCI
Extended (PCI-X) bus, Small Computer System Interface (SCSI), Universal Serial
Bus (USB), or the like, for connecting expansion cards, drives, and other
subsystems, as well as other buses, such as a front side bus (FSB), which connects the
processor (570) to the computer system (500).
[0090] In another embodiment, operator and administrative interfaces, e.g.,
a display, keyboard, and cursor control device may also be coupled to the bus
(520) to support direct operator interaction with the computer system (500). Other
operator and administrative interfaces can be provided through network
connections connected through the communication port(s) (560). Components
described above are meant only to exemplify various possibilities. In no way
should the aforementioned exemplary computer system (500) limit the scope of
the present disclosure.
[0091] FIG. 6 illustrates a flow diagram (600) representing a method for
automatically managing a database (210) using a normalizer (214), in accordance
with an embodiment of the present disclosure. The method ensures efficient load
management and operational continuity with minimal manual intervention by
leveraging an AI model (212).
[0092] Step (602) includes monitoring, by an Artificial Intelligence (AI)
model (212), the load and health of the database (210). The monitoring involves
continuously gathering and analyzing various metrics, such as the number of
requests, throughput, success and failure rates, and other operational parameters to
assess the current state of the database (210).
[0093] At step (604), the AI model (212) determines the failed and
successful operations at the database (210) to ascertain the health of the database
(210).
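The health determination of step (604) can be illustrated with a small sketch. The function name and the 5% failure-ratio cutoff are illustrative assumptions; the specification does not fix a particular ratio, and a trained AI model (212) would learn such limits from operational statistics.

```python
def database_health(successes: int, failures: int,
                    max_failure_ratio: float = 0.05) -> str:
    """Classify database health from counts of successful and failed
    operations, as in step (604). The 5% default cutoff is an
    illustrative assumption, not a value from the specification."""
    total = successes + failures
    if total == 0:
        return "unknown"          # no operations observed yet
    ratio = failures / total
    return "healthy" if ratio <= max_failure_ratio else "suboptimal"
```

A "suboptimal" result here is what triggers the automatic pause of the normalizer (214) at step (606).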
[0094] At step (606), the processing engine (208) automatically pauses the
normalizer (214) when the load on the database (210) is above a threshold level or when the health of the database (210) is suboptimal. This automatic pausing mechanism helps prevent overload conditions and potential data loss by temporarily halting data processing activities.
[0095] At step (608), the AI model (212) continuously monitors the
database (210) to check if the operating condition of the database (210) returns to normal. The normal operating condition is defined as the load on the database (210) being below the threshold level or the health of the database (210) being in a desired operating state.
[0096] At step (610), the processing engine (208) resumes the normalizer
(214) automatically once the operating condition is normal. By resuming the normalizer (214), the data processing activities are restarted as soon as the database (210) is capable of handling the load without issues, thereby minimizing downtime.
[0097] At step (612), the processing engine (208) increases the processing
rate to compensate for the time period during which the normalizer (214) was
paused and unprocessed data accumulated. By accelerating the processing rate,
the system (102) can quickly catch up on the backlog of data, ensuring that all
data is processed in a timely manner and maintaining overall system efficiency.
[0098] Although the disclosure describes automatically managing a
database (210) or an application by utilizing a normalizer (214) in a server, the
disclosure is equally applicable to databases and applications of the UE (108).
[0099] The present disclosure provides technical advancement related to
load balancing for databases or applications. This advancement addresses the
limitations of existing solutions that include improper load balancing. The
disclosure involves the inventive aspects of monitoring loads on the
databases/applications, pausing when the load exceeds the threshold and resuming
when the load falls below the threshold. The disclosure offers significant
improvements through automatic management of workload, which prevents loss of data
and enables efficient management of data.
[0100] While considerable emphasis has been placed herein on the
preferred embodiments, it will be appreciated that many embodiments can be made and that many changes can be made in the preferred embodiments without departing from the principles of the disclosure. These and other changes in the preferred embodiments of the disclosure will be apparent to those skilled in the art from the disclosure herein, whereby it is to be distinctly understood that the foregoing descriptive matter is to be implemented merely as illustrative of the disclosure and not as a limitation.
ADVANTAGES OF THE INVENTION
[0101] The present disclosure provides a system and a method for load
balancing of the database in a network.
[0102] The present disclosure provides a system and a method that reduce
the time and effort involved in application management and operations.
[0103] The present disclosure provides a system and a method that
minimize failures in the network.
[0104] The present disclosure provides a system and a method that are
economical and easy to implement.
We claim:
1. A method for automatically managing a database (210) by utilizing a
normalizer (214), the method comprising:
monitoring, by an Artificial Intelligence (AI) model (212), a load on the database (210) and a health of the database (210);
determining, by the AI model (212), failed and successful operations at the database (210) to ascertain the health of the database (210);
automatically pausing the normalizer (214), by a processing engine (208), when the load on the database (210) is above a threshold level or when the health of the database (210) is suboptimal;
monitoring, by the AI model (212), the database (210) to check if an operating condition of the database (210) returns to normal, wherein the normal operating condition comprises the load on the database (210) being below the threshold level or the health of the database (210) being in a desired operating state;
resuming, by the processing engine (208), the normalizer (214) automatically once the operating condition returns to the normal; and
increasing, by the processing engine (208), a processing rate to compensate for a time period during which the normalizer (214) was paused and unprocessed data accumulated.
2. The method claimed in claim 1, wherein the monitoring includes analyzing throughput affecting the database (210).
3. The method claimed in claim 1, wherein the automatic pausing and the resuming operations prevent data loss during connectivity issues.
4. The method claimed in claim 1, wherein the AI model (212) is continuously trained on statistics associated with operations of the database (210).
5. The method claimed in claim 1, comprising storing data in the database (210) or providing the data to an external system when the load on the database (210) is below the threshold level, or when the health of the database (210) is optimal.
6. A system (102) for automatically managing a database (210), by utilizing a normalizer (214), the system (102) comprising:
an Artificial Intelligence (AI) model (212) configured to:
monitor a load on the database (210) and a health of the
database (210); and
determine the failed and successful operations at the
database (210) to ascertain the health of the database (210);
a processing engine (208) configured to automatically pause the normalizer (214) when the load on the database (210) is above a threshold level or when the health of the database (210) is suboptimal; and
the AI model (212) is further configured to monitor the database (210) to check if an operating condition of the database (210) returns to normal, wherein the normal operating condition comprises the load on the database (210) being below the threshold level or the health of the database (210) being in a desired operating state;
the processing engine (208) configured to:
automatically resume the normalizer (214) once the
operating condition returns to the normal; and
increase processing rate of the database (210), using the AI
model (212), to compensate for time period during which the
normalizer (214) was paused and unprocessed data had
accumulated.
7. The system (102) claimed in claim 6, wherein the step of monitoring includes
analyzing throughput affecting the database (210).
8. The system (102) claimed in claim 6, wherein the automatic pausing and the resuming operations prevent data loss during connectivity issues.
9. The system (102) claimed in claim 6, wherein the AI model (212) is continuously trained on statistics associated with operations of the database (210).
10. The system (102) claimed in claim 6, further comprising components configured to store data in the database (210) or provide it to an external system when the load on the database (210) is below the threshold level, or when the health of the database (210) is optimal.
11. A user equipment (UE) (108) communicatively coupled with a network (104), wherein the coupling comprises the steps of:
receiving, by the network (104), a connection request from the UE (108);
sending, by the network (104), an acknowledgment of the connection request to the UE (108); and
transmitting a plurality of signals in response to the connection request, wherein the UE (108) comprises a database, and wherein the UE (108) is configured to manage a database (210) of the UE (108) in accordance with the method as claimed in claim 1.