
System And Processor Implemented Method For Automatic Simulation Of Database Configuration And Storage

Abstract: A system (108) and a method (500) for automatic simulation of database configuration and storage are provided. The method (500) includes simulating (502), by one or more database simulators (308), data, and storing the simulated data in a distributed data lake (312) via a data ingestion layer (310). The method further includes continuously monitoring (504), by an AI model (306), the distributed data lake (312) and a data centre (318) to determine a change in performance and, upon determining a drop in performance from a predetermined level of performance, providing (506), by the AI model (306), a tuned configuration and an optimal value. The method further includes tuning (508), by a database manager (304), the configuration of the database simulators (308) and the distributed data lake (312), and displaying a current configuration on the UI (302) when a need for manual changes arises. Fig. 3


Patent Information

Application #
Filing Date
12 July 2023
Publication Number
50/2024
Publication Type
INA
Invention Field
COMMUNICATION
Status
Email
Parent Application

Applicants

JIO PLATFORMS LIMITED
Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India.

Inventors

1. BHATNAGAR, Aayush
Tower-7, 15B, Beverly Park, Sector-14 Koper Khairane, Navi Mumbai - 400701, Maharashtra, India.
2. MURARKA, Ankit
W-16, F-1603, Lodha Amara, Kolshet Road, Thane West - 400607, Maharashtra, India.
3. KOLARIYA, Jugal Kishore
C 302, Mediterranea CHS Ltd, Casa Rio, Palava, Dombivli - 421204, Maharashtra, India.
4. KUMAR, Gaurav
1617, Gali No. 1A, Lajjapuri, Ramleela Ground, Hapur - 245101, Uttar Pradesh, India.
5. SAHU, Kishan
Ajay Villa, Gali No. 2, Ambedkar Colony, Bikaner - 334003, Rajasthan, India.
6. VERMA, Rahul
A-154, Shradha Puri Phase-2, Kanker Khera, Meerut - 250001, Uttar Pradesh, India.
7. MEENA, Sunil
D-29/1, Chitresh Nagar, Borkhera, District - Kota - 324001, Rajasthan, India.
8. GURBANI, Gourav
I-1601, Casa Adriana, Downtown, Palava Phase 2, Dombivli - 421204, Maharashtra, India.
9. CHAUDHARY, Sanjana
Jawaharlal Road, Muzaffarpur - 842001, Bihar, India.
10. GANVEER, Chandra Kumar
Village - Gotulmunda, Post - Narratola, Dist. - Balod - 491228, Chhattisgarh, India.
11. DE, Supriya
G2202, Sheth Avalon, Near Jupiter Hospital Majiwada, Thane West - 400601, Maharashtra, India.
12. KUMAR, Debashish
Bhairaav Goldcrest Residency, E-1304, Sector 11, Ghansoli, Navi Mumbai - 400701, Maharashtra, India.
13. TILALA, Mehul
64/11, Manekshaw Marg, Manekshaw Enclave, Delhi Cantonment, New Delhi - 110010, India.
14. KALIKIVAYI, Srinath
3-61, Kummari Bazar, Madduluru Village, S N Padu Mandal, Prakasam District, Andhra Pradesh - 523225, India.
15. PANDEY, Vitap
D 886, World Bank Barra, Kanpur - 208027, Uttar Pradesh, India.

Specification

FORM 2
PATENTS ACT, 1970 (39 of 1970)
PATENTS RULES, 2003

COMPLETE SPECIFICATION
TITLE OF THE INVENTION
SYSTEM AND PROCESSOR-IMPLEMENTED METHOD FOR AUTOMATIC SIMULATION OF DATABASE CONFIGURATION AND STORAGE
APPLICANT
JIO PLATFORMS LIMITED, Office-101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India; Nationality: India
The following specification particularly describes the invention and the manner in which it is to be performed.

RESERVATION OF RIGHTS
[0001] A portion of the disclosure of this patent document contains material which is subject to intellectual property rights such as, but not limited to, copyright, design, trademark, Integrated Circuit (IC) layout design, and/or trade dress protection, belonging to Jio Platforms Limited (JPL) or its affiliates (hereinafter referred to as the owner). The owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure, as it appears in the Patent and Trademark Office patent files or records, but otherwise reserves all rights whatsoever. All rights to such intellectual property are fully reserved by the owner.
TECHNICAL FIELD
[0002] The present disclosure relates to the field of communication networks. More particularly, the present disclosure relates to a system and a processor-implemented method for automatic simulation of database configuration and storage.
BACKGROUND
[0003] The following description of related art is intended to provide background information pertaining to the field of the disclosure. This section may include certain aspects of the art that may be related to various features of the present disclosure. However, it should be appreciated that this section should be used only to enhance the understanding of the reader with respect to the present disclosure, and not as an admission of prior art.
[0004] Typically, database configuration may be performed to create or change objects and attributes, and to customize a database. In conventional systems, database configuration may be performed manually and may therefore lack automatic simulation of database configuration, resulting in suboptimal performance, inefficient resource allocation, scalability challenges, and potential compliance and security risks. Administrators may also resort to a trial-and-error approach, which involves manually adjusting database configuration parameters without a clear understanding of their impact. Moreover, manual adjustments may be time-consuming, error-prone, and may not yield optimal results.
[0005] To address these challenges, there is a need in the art for an efficient technique for performing automatic simulation of database configuration that overcomes the deficiencies of the prior art.
OBJECT OF THE INVENTION
[0006] Some of the objects of the present disclosure, which at least one embodiment herein satisfies, are listed herein below.
[0007] A primary object of the embodiments of the present invention is to provide a system to automatically simulate configuration and storage of a database.
[0008] Yet another object of the embodiments of the present invention is to provide a system to streamline the process of optimizing the database.
[0009] Yet another object of the embodiments of the present invention is to provide improved performance, efficient resource utilization, cost savings, and optimal simulation results.
[0010] These and other objectives and advantages of the embodiments of the present invention will become readily apparent from the following detailed description taken in conjunction with the accompanying drawings.
SUMMARY
[0011] The following details present a simplified summary of the embodiments of the present invention to provide a basic understanding of the several aspects of the embodiments of the present invention. This summary is not an extensive overview of the embodiments of the present invention. It is not intended to identify key/critical elements of the embodiments of the present invention or to delineate the scope of the embodiments of the present invention. Its sole purpose is to present the concepts of the embodiments of the present invention in a simplified form as a prelude to the more detailed description that is presented later.
[0012] The other objects and advantages of the embodiments of the present invention will become readily apparent from the following description taken in conjunction with the accompanying drawings. It should be understood, however, that the following descriptions, while indicating preferred embodiments and numerous specific details thereof, are given by way of illustration and not of limitation. Many changes and modifications may be made within the scope of the embodiments of the present invention without departing from the spirit thereof, and the embodiments of the present invention include all such modifications.
[0013] Embodiments herein relate to a system and a processor-implemented method to automatically simulate database configuration and database storage. The present disclosure provides the system to streamline a process of optimizing the database. The present disclosure provides the system to automatically determine optimal configurations and storage options for the database. The present disclosure provides an artificial intelligence/machine learning (AI/ML) model inside the system, which feeds the determined results back into the system for closed-loop actions and facilitates continuous optimization and configuration of one or more parameters. The one or more parameters include, but are not limited to, database latency, network latency, jitter, database physical resource consumption, virtual resource consumption, and the like. This results in improved performance, or an improved threshold benchmark, based on the available hardware. The present disclosure achieves improved performance, efficient resource utilization, and cost savings, and yields optimal simulation results.
[0014] According to an aspect of the present technology, a processor-implemented method for automatic simulation of database configuration and storage is provided. The method includes simulating, by one or more database simulators associated with an artificial intelligence (AI) model of a user interface (UI), data, and storing the simulated data in a distributed data lake via a data ingestion layer. The method further includes monitoring continuously, by the AI model, the distributed data lake and a data centre for at least one of: a configuration, a storage, and a load, for determining a change in performance, and upon determining a drop in performance from a predetermined level of performance, providing, by the AI model, a tuned configuration and an optimal value to a database manager. The method further includes tuning, by the database manager, the configuration of the database simulators and the distributed data lake based on the tuned configurations, and displaying a current configuration on the UI, which enables a user to manually update the configuration of the database.
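The claimed flow (simulate, ingest into the data lake, monitor, tune on a performance drop) can be sketched as a short control loop. This is an illustrative sketch only: the class names, the latency metric, and the baseline threshold are hypothetical stand-ins chosen for the example, not details taken from the specification.

```python
import random

# Illustrative stand-ins for the claimed components; every name and
# threshold below is an assumption made for this sketch.

class DataLake:
    """Stands in for the distributed data lake (312)."""
    def __init__(self):
        self.records = []

class IngestionLayer:
    """Stands in for the data ingestion layer (310)."""
    def store(self, lake, record):
        lake.records.append(record)

class DatabaseSimulator:
    """Stands in for a database simulator (308): emits synthetic rows."""
    def simulate(self):
        return {"latency_ms": random.uniform(1, 20)}

def monitor(lake, baseline_ms=10.0):
    """AI-model stand-in (306): flag a performance drop when the average
    simulated latency exceeds a predetermined baseline."""
    if not lake.records:
        return None
    avg = sum(r["latency_ms"] for r in lake.records) / len(lake.records)
    if avg > baseline_ms:
        # The "tuned configuration and optimal value" of the claim.
        return {"cache_mb": 512, "optimal_latency_ms": baseline_ms}
    return None

# Step 1: simulate data and store it via the ingestion layer.
lake, ingest, sim = DataLake(), IngestionLayer(), DatabaseSimulator()
for _ in range(100):
    ingest.store(lake, sim.simulate())

# Steps 2-3: monitor, and tune only when performance has dropped.
tuned = monitor(lake)
if tuned is not None:
    print("database manager applies:", tuned)  # tuning step
```

A real implementation would replace the threshold test with a learned model; the structure of the loop is what the sketch is meant to convey.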
[0015] According to one embodiment of the present technology, monitoring continuously includes analysing, by the AI model, a workload and a configuration of the distributed data lake and a storage of the data in the distributed data lake. The method further includes providing feedback to the system based on the analysis performed by the AI model. The system performs closed-loop actions for continuous optimization and configuration of one or more parameters, which results in improvement in performance and improvement in the threshold benchmark for a hardware infrastructure. The hardware infrastructure includes at least one of a cloud and the data centre associated with the distributed data lake. The method further includes updating, by the database manager, the configuration of the distributed data lake based on the configuration of the distributed data lake as analysed by the AI model. In the present disclosure, the terms AI model and AI engine are used interchangeably.
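As a deliberately simplified illustration of this closed loop, the sketch below analyses a workload trace for jitter and feeds the result back into a configuration update. The jitter budget and the `buffer_mb` knob are invented for the example and do not appear in the specification.

```python
# Hypothetical sketch of the closed-loop feedback of paragraph [0015]:
# an AI-model stand-in analyses workload statistics, and a database-manager
# stand-in applies the resulting configuration change.

def analyse_workload(latencies_ms, jitter_budget_ms=2.0):
    """Return feedback comparing observed jitter to an assumed budget."""
    if len(latencies_ms) < 2:
        return {"within_budget": True, "jitter_ms": 0.0}
    diffs = [abs(a - b) for a, b in zip(latencies_ms, latencies_ms[1:])]
    jitter = sum(diffs) / len(diffs)
    return {"within_budget": jitter <= jitter_budget_ms, "jitter_ms": jitter}

def apply_feedback(config, feedback):
    """Database-manager stand-in: update the configuration from feedback."""
    if not feedback["within_budget"]:
        # Example closed-loop action: grow a (hypothetical) buffer.
        config = {**config, "buffer_mb": config["buffer_mb"] * 2}
    return config

config = {"buffer_mb": 128}
feedback = analyse_workload([5.0, 9.0, 4.0, 10.0])
config = apply_feedback(config, feedback)
print(config)
```

Running the loop repeatedly over fresh traces is what makes it "continuous optimization" in the sense used above.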
[0016] According to one embodiment of the present technology, the distributed data lake is in connection with at least one of the AI model, the database manager, the data ingestion layer, and the data centre.
[0017] According to one embodiment of the present technology, the one or more parameters include at least one of a database latency, a network latency, a jitter, a database physical resource consumption, and a virtual resource consumption.
[0018] According to an aspect of the present technology, a system for automatic simulation of database configuration and storage is provided. The system includes a processor to fetch and execute computer-readable instructions stored in a memory of the system. The system further includes a memory to store one or more computer-readable instructions or routines in a non-transitory computer-readable storage medium, fetched and executed to create or share data packets over a network service. The system further includes an interface to provide a communication pathway for one or more components of the system. The system further includes a database which comprises data either stored or generated as a result of functionalities implemented by any of the components of the processor or the system. The system further includes an artificial intelligence (AI) engine configured for simulating data and storing the simulated data in a distributed data lake via a data ingestion layer, and monitoring continuously the distributed data lake and a data centre for at least one of a configuration, a storage, and a load, for determining a change in performance, and upon determining a drop in performance from a predetermined level of performance, providing a tuned configuration and an optimal value associated with the configuration to a database manager. The system further includes the database manager configured for tuning the configuration of the database simulators and the distributed data lake based on the tuned configurations, and displaying a current configuration on the UI, which enables a user to update the configuration manually.
[0019] According to one embodiment of the present technology, the AI engine is further configured for analysing a workload and a configuration of the distributed data lake and a storage of the data in the distributed data lake, and providing, as feedback, one or more results of the analysis into the system for closed-loop actions, and for continuous optimization and configuration of one or more parameters, resulting in one of: improved performance or an improved threshold benchmark based on a hardware infrastructure for configuration, wherein the hardware infrastructure includes at least one of a cloud and the data centre associated with the distributed data lake. The database manager updates the configuration of the distributed data lake based on the analysis/feedback provided by the AI model.
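The division of labour between the AI engine (detect a drop, propose a tuned configuration) and the database manager (apply it and expose the current configuration for manual edits) might be wired together as below. The `pool_size` knob, the baseline, and the method names are assumptions for illustration; a real AI engine would be a learned model, not a threshold test.

```python
# Minimal sketch of the component wiring in paragraph [0018];
# all names and numbers here are hypothetical.

class AIEngine:
    def __init__(self, baseline_ms):
        self.baseline_ms = baseline_ms

    def check(self, observed_ms):
        """Return a tuned configuration on a performance drop, else None."""
        if observed_ms > self.baseline_ms:
            return {"pool_size": 64, "optimal_latency_ms": self.baseline_ms}
        return None

class DatabaseManager:
    def __init__(self):
        self.current = {"pool_size": 16}

    def tune(self, tuned):
        self.current.update(tuned)

    def display(self):
        # Shown on the UI so a user can update the configuration manually.
        return dict(self.current)

engine, manager = AIEngine(baseline_ms=8.0), DatabaseManager()
tuned = engine.check(observed_ms=12.5)   # drop detected
if tuned:
    manager.tune(tuned)
print(manager.display())
```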
[0020] According to one embodiment of the present technology, the distributed data lake is in communication with at least one of the AI model, the database manager, the data ingestion layer, and the data centre.
[0021] According to one embodiment of the present technology, the one or more parameters include at least one of a database latency, a network latency, a jitter, a database physical resource consumption, and a virtual resource consumption.
[0022] According to yet another aspect of the present technology, a computer program product comprising a non-transitory computer-readable medium is provided. The non-transitory computer-readable medium comprises instructions that, when executed by one or more processors, cause the one or more processors to perform a method. The method includes simulating, by one or more database simulators associated with an artificial intelligence (AI) model of a user interface (UI), data, and storing the simulated data in a distributed data lake via a data ingestion layer. The method further includes monitoring continuously, by the AI model, the distributed data lake and a data centre for at least one of: a configuration, a storage, and a load, for determining a change in performance, and upon determining a drop in performance from a predetermined level of performance, providing, by the AI model, a tuned configuration and an optimal value to the database manager. The method further includes tuning, by the database manager, the configuration of the database simulators and the distributed data lake based on the tuned configurations, and displaying a current configuration on the UI at least whenever a need for manual changes arises.
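The monitored parameters named above can be collected into a simple record with a threshold check; the limits below are invented for the example and are not taken from the specification.

```python
from dataclasses import dataclass

# The parameters enumerated in the specification, as a dataclass;
# the limit values are hypothetical.

@dataclass
class MonitoredParameters:
    database_latency_ms: float
    network_latency_ms: float
    jitter_ms: float
    physical_resources_pct: float   # database physical resource consumption
    virtual_resources_pct: float    # virtual resource consumption

    def breaches(self, limits):
        """Names of parameters exceeding their (hypothetical) limits."""
        return [name for name, limit in limits.items()
                if getattr(self, name) > limit]

limits = {"database_latency_ms": 10.0, "jitter_ms": 2.0,
          "physical_resources_pct": 80.0}
sample = MonitoredParameters(12.0, 4.0, 1.5, 85.0, 40.0)
print(sample.breaches(limits))
```

A breach list like this is one plausible trigger for the "drop in performance from a predetermined level" condition in the claims.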
[0023] The various embodiments of the present technology offer a system to automatically simulate configuration and storage of a database. The present disclosure provides the system to streamline a process of optimizing the database. The present disclosure provides the system to automatically determine optimal configurations and storage options for the database. The present disclosure provides the system to continuously monitor the configuration, the storage, and the load of the database, and to feed the monitored results back into the system for closed-loop actions, and for continuous optimization and configuration of one or more parameters. The present disclosure achieves improved performance, efficient resource utilization, and cost savings, and yields optimal simulation results.
[0024] A user equipment is communicatively coupled to a system, the coupling comprising the steps of simulating data by one or more database simulators associated with an artificial intelligence (AI) model and storing the simulated data in a distributed data lake via a data ingestion layer. The AI model monitors the distributed data lake and a data centre for at least one of: a configuration, a storage, and a load, for determining a change in performance, and upon determining a drop in performance from a predetermined level of performance, the AI model provides a tuned configuration and an optimal value associated with the configuration to the database manager. The database manager tunes the configuration of the database simulators and the distributed data lake based on the tuned configurations and displays a current configuration on a UI which enables a user to manually update the configuration of the database.
[0025] The foregoing description of the specific embodiments will so fully reveal the general nature of the embodiments of the present invention that others can, by applying current knowledge, readily modify and/or adapt such specific embodiments for various applications without departing from the generic concept, and, therefore, such adaptations and modifications should be, and are intended to be, comprehended within the meaning and range of equivalents of the disclosed embodiments.
BRIEF DESCRIPTION OF THE DRAWINGS
[0026] The accompanying drawings, which are incorporated herein and constitute a part of this disclosure, illustrate exemplary embodiments of the disclosed methods and systems in which like reference numerals refer to the same parts throughout the different drawings. Components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present disclosure. Some drawings may indicate the components using block diagrams and may not represent the internal circuitry of each component. It will be appreciated by those skilled in the art that disclosure of such drawings includes the disclosure of electrical components, electronic components or circuitry commonly used to implement such components.
[0027] FIG. 1 illustrates an exemplary network architecture, in which or with which embodiments of the present disclosure may be implemented;
[0028] FIG. 2 illustrates an exemplary block diagram of a system for automatic simulation of database configuration and storage, in accordance with an embodiment of the present disclosure;
[0029] FIG. 3 illustrates an exemplary architecture of the system for automatic simulation of database configuration and storage, in accordance with an embodiment of the present disclosure;
[0030] FIG. 4 illustrates an exemplary sequential flow diagram depicting a process of automatic simulation of database configuration and storage, in accordance with an embodiment of the present disclosure;
[0031] FIG. 5 illustrates a flowchart of a processor-implemented method for automatic simulation of database configuration and storage, in accordance with an embodiment of the present disclosure; and
[0032] FIG. 6 illustrates an exemplary computer system in which or with which embodiments of the present disclosure may be implemented.
LIST OF REFERENCE NUMERALS
100 – Network Architecture
102 – User
104 – User Equipment
106 – Network
108 – System
110 – Database Nodes
202 – Processor
204 – Memory
206 – Interface
208 – Processing Engines
210 – Database
212 – Artificial Intelligence (AI) Engine
214 – Other Engines
302 – User Interface
304 – Database Manager
306 – AI Model
308 – Data/Database Simulators
310 – Data Ingestion Layer
312 – Distributed Data Lake
314 – Hardware Infrastructure
316 – Cloud
318 – Data Centre
600 – Computer System
610 – External Storage Device
620 – Bus
630 – Main Memory
640 – Read-Only Memory
650 – Mass Storage Device
660 – Communication Port
670 – Computer System Processor
DETAILED DESCRIPTION OF THE EMBODIMENTS
[0033] In the following description, for the purposes of explanation, various specific details are set forth in order to provide a thorough understanding of embodiments of the present disclosure. It will be apparent, however, that embodiments of the present disclosure may be practiced without these specific details. Several features described hereafter can each be used independently of one another or with any combination of other features. An individual feature may not address all of the problems discussed above or might address only some of the problems discussed above. Some of the problems discussed above might not be fully addressed by any of the features described herein.
[0034] The ensuing description provides exemplary embodiments only, and is not intended to limit the scope, applicability, or configuration of the disclosure. Rather, the ensuing description of the exemplary embodiments will provide those skilled in the art with an enabling description for implementing an exemplary embodiment. It should be understood that various changes may be made in the function and arrangement of elements without departing from the spirit and scope of the disclosure as set forth.
[0035] Specific details are given in the following description to provide a thorough understanding of the embodiments. However, it will be understood by one of ordinary skill in the art that the embodiments may be practiced without these specific details. For example, circuits, systems, networks, processes, and other components may be shown as components in block diagram form in order not to obscure the embodiments in unnecessary detail. In other instances, well-known circuits, processes, algorithms, structures, and techniques may be shown without unnecessary detail in order to avoid obscuring the embodiments.
[0036] Also, it is noted that individual embodiments may be described as a process which is depicted as a flowchart, a flow diagram, a data flow diagram, a structure diagram, or a block diagram. Although a flowchart may describe the operations as a sequential process, many of the operations can be performed in parallel or concurrently. In addition, the order of the operations may be re-arranged. A process is terminated when its operations are completed but could have additional steps not included in a figure. A process may correspond to a method, a function, a procedure, a subroutine, a subprogram, etc. When a process corresponds to a function, its termination can correspond to a return of the function to the calling function or the main function.
[0037] The word “exemplary” and/or “demonstrative” is used herein to mean serving as an example, instance, or illustration. For the avoidance of doubt, the subject matter disclosed herein is not limited by such examples. In addition, any aspect or design described herein as “exemplary” and/or “demonstrative” is not necessarily to be construed as preferred or advantageous over other aspects or designs, nor is it meant to preclude equivalent exemplary structures and techniques known to those of ordinary skill in the art. Furthermore, to the extent that the terms “includes,” “has,” “contains,” and other similar words are used in either the detailed description or the claims, such terms are intended to be inclusive in a manner similar to the term “comprising” as an open transition word without precluding any additional or other elements.
[0038] Reference throughout this specification to “one embodiment” or “an embodiment” or “an instance” or “one instance” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present disclosure. Thus, the appearances of the phrases “in one embodiment” or “in an embodiment” in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
[0039] The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. As used herein, the singular forms “a”, “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. As used herein, the term “and/or” includes any and all combinations of one or more of the associated listed items.
[0040] The present disclosure provides a system to automatically simulate configuration and storage of a database. The present disclosure leverages artificial intelligence/machine learning (AI/ML) techniques to automate the process of determining optimal configurations and storage options for the database. The present disclosure achieves improved performance, efficient resource utilization, and cost savings, and yields optimal simulation results. The database simulators simulate the data and store the data in the data lake via the data ingestion layer. The AI/ML layer continuously monitors the data lake and the hardware infrastructure for configuration, storage, and load. When the AI/ML module identifies a dip in performance, it provides the tuned configurations according to the performance and provides the optimal value to the database manager. The database manager tunes the configuration of the simulators and the data lake based on the tuned configurations and also displays the current configuration on the user interface so that, if any manual changes are required, they can be made. There is a lack of automatic simulation of database (DB) configuration in the telecom domain, which results in suboptimal performance, inefficient resource allocation, scalability challenges, and potential compliance and security risks. Administrators may also resort to a trial-and-error approach, manually adjusting database configuration parameters without a clear understanding of their impact. This can be time-consuming, error-prone, and may not yield optimal results. The present technology adopts the simulation of DB configuration and DB storage to streamline the process of optimizing database systems, leading to improved performance, efficient resource utilization, and cost savings, and may yield optimal results. In some embodiments, the data definition language (DDL) auto simulates DB configuration by leveraging AI/ML techniques to automate the process of determining optimal configurations and storage options for a database system. The expression “Data Definition Language (DDL)” used hereinafter in the specification refers to combining the data dictionary updates, storage engine operations, and binary log writes associated with a DDL operation into a single, atomic operation.
[0041] The various embodiments of the present disclosure will be explained in detail with reference to FIGs. 1 to 6.
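The atomic, all-or-nothing character attributed to DDL above (dictionary update, storage-engine operation, and binary-log write committing as one unit) can be illustrated with a toy apply-or-rollback routine. This only mirrors the general idea; it is not how MySQL or any real storage engine implements atomic DDL, and all names here are invented for the example.

```python
# Toy illustration of an all-or-nothing apply: either every sub-operation
# of a "DDL" lands, or the original state is kept untouched.

def atomic_apply(state, operations):
    """Apply every operation or none: work on a copy, commit on success."""
    trial = {k: list(v) for k, v in state.items()}  # working copy
    try:
        for op in operations:
            op(trial)
    except Exception:
        return state, False          # any failure: original state kept
    state.clear()
    state.update(trial)              # commit all sub-operations at once
    return state, True

state = {"dictionary": [], "storage": [], "binlog": []}
ops = [
    lambda s: s["dictionary"].append("ADD COLUMN note"),
    lambda s: s["storage"].append("rebuild table"),
    lambda s: s["binlog"].append("log DDL"),
]
state, ok = atomic_apply(state, ops)
print(ok, state["binlog"])
```

If any of the three sub-operations raises, the working copy is discarded and the caller's state is unchanged, which is the property the specification's definition is pointing at.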
[0042] FIG. 1 illustrates an exemplary network architecture (100) in which or with which embodiments of the present disclosure may be implemented.
[0043] Referring to FIG. 1, the network architecture (100) may include one or more computing devices or user equipment (104-1, 104-2…104-N) associated with one or more users (102-1, 102-2…102-N) in an environment. A person of ordinary skill in the art will understand that the one or more users (102-1, 102-2…102-N) may be individually referred to as the user (102) and collectively referred to as the users (102). Similarly, a person of ordinary skill in the art will understand that the one or more user equipment (104-1, 104-2…104-N) may be individually referred to as the user equipment (104) and collectively referred to as the user equipment (104). A person of ordinary skill in the art will appreciate that the terms “computing device(s)” and “user equipment” may be used interchangeably throughout the disclosure. Although three user equipment (104) are depicted in FIG. 1, any number of user equipment (104) may be included without departing from the scope of the ongoing description.
[0044] In an embodiment, the user equipment (104) may include smart devices operating in a smart environment, for example, an Internet of Things (IoT) system. In such an embodiment, the user equipment (104) may include, but is not limited to, smart phones, smart watches, smart sensors (e.g., mechanical, thermal, electrical, magnetic, etc.), networked appliances, networked peripheral devices, networked lighting systems, communication devices, networked vehicle accessories, networked vehicular devices, smart accessories, tablets, smart televisions (TVs), computers, smart security systems, smart home systems, other devices for monitoring or interacting with or for the users (102) and/or entities, or any combination thereof. A person of ordinary skill in the art will appreciate that the user equipment (104) may include, but is not limited to, intelligent, multi-sensing, network-connected devices that can integrate seamlessly with each other and/or with a central server or a cloud-computing system or any other device that is network-connected.
[0045] In an embodiment, the user equipment (104) may include, but is not limited to, a handheld wireless communication device (e.g., a mobile phone, a smart phone, a phablet device, and so on), a wearable computer device (e.g., a head-mounted display computer device, a head-mounted camera device, a wristwatch computer device, and so on), a Global Positioning System (GPS) device, a laptop computer, a tablet computer, or another type of portable computer, a media playing device, a portable gaming system, and/or any other type of computer device with wireless communication capabilities, and the like. In an embodiment, the user equipment (104) may include, but is not limited to, any electrical, electronic, or electro-mechanical equipment, or a combination of one or more of the above devices, such as virtual reality (VR) devices, augmented reality (AR) devices, a laptop, a general-purpose computer, a desktop, a personal digital assistant, a tablet computer, a mainframe computer, or any other computing device, wherein the user equipment (104) may include one or more in-built or externally coupled accessories including, but not limited to, a visual aid device such as a camera, an audio aid, a microphone, a keyboard, and input devices for receiving input from the user (102) or the entity, such as a touch pad, a touch-enabled screen, an electronic pen, and the like. A person of ordinary skill in the art will appreciate that the user equipment (104) may not be restricted to the mentioned devices, and various other devices may be used.
[0046] Referring to FIG. 1, the user equipment (104) may communicate with a system
15 (108), for example, a system, through a network (106). The system (108) may auto-matically simulate database configuration by leveraging AI/ML techniques to automate a process of determining optimal configurations and storage options for a database. [0047] In an embodiment, the network (106) may include at least one of a Fifth Gen¬eration (5G) network, 6G network, or the like. The network (106) may enable the user
equipment (104) to communicate with other devices in the network architecture (100) and/or with the system (108). The network (106) may include a wireless card or some other transceiver connection to facilitate this communication. In another embodiment, the network (106) may be implemented as, or include, any of a variety of different communication technologies such as a wide area network (WAN), a local area network
(LAN), a wireless network, a mobile network, a Virtual Private Network (VPN), the Internet, the Public Switched Telephone Network (PSTN), or the like. [0048] Although FIG. 1 shows exemplary components of the network architecture (100), in other embodiments, the network architecture (100) may include fewer components, different components, differently arranged components, or additional functional components than depicted in FIG. 1. Additionally, or alternatively, one or more components of the network architecture (100) may perform functions described as being performed by one or more other components of the network architecture (100). [0049] FIG. 2 illustrates an exemplary block diagram (200) of a system (108) for automatic simulation of database configuration and storage, in accordance with an embodiment of the present disclosure. The system (108) includes a processor (202), a memory (204), an interface (206), a database (210), an artificial intelligence engine (212), and a database manager (216). [0050] In an aspect, the system (108) may include one or more processor(s) (202). The one or more processor(s) (202) may be implemented as one or more microprocessors, microcomputers, microcontrollers, edge or fog microcontrollers, digital signal processors, central processing units, logic circuitries, and/or any devices that process data based on operational instructions. Among other capabilities, the one or more processor(s) (202) may be configured to fetch and execute computer-readable instructions stored in the memory (204) of the system (108). The memory (204) may be configured to store one or more computer-readable instructions or routines in a non-transitory computer-readable storage medium, which may be fetched and executed to create or share data packets over a network service. The memory (204) may comprise any non-transitory storage device including, for example, volatile memory such as Random-Access Memory (RAM), or non-volatile memory such as Erasable Programmable Read-Only Memory (EPROM), flash memory, and the like.
[0051] In an embodiment, the system (108) may include an interface(s) (206). The interface(s) (206) may include a variety of interfaces, for example, interfaces for data input and output devices, referred to as I/O devices, storage devices, and the like. The
interface(s) (206) may facilitate communication of the system (108). The interface(s) (206) may also provide a communication pathway for one or more components of the system (108). Examples of such components include, but are not limited to, the processing unit/engine(s) (208) and a database (210).
[0052] The processing unit/engine(s) (208) may be implemented as a combination of hardware and programming (for example, programmable instructions) to implement one or more functionalities of the processing engine(s) (208). In examples described herein, such combinations of hardware and programming may be implemented in several different ways. For example, the programming for the processing engine(s) (208) may be processor-executable instructions stored on a non-transitory machine-readable storage medium, and the hardware for the processing engine(s) (208) may comprise a processing resource (for example, one or more processors) to execute such instructions. In the present examples, the machine-readable storage medium may store instructions that, when executed by the processing resource, implement the processing engine(s) (208). In such examples, the system (108) may comprise the machine-readable storage medium storing the instructions and the processing resource to execute the instructions, or the machine-readable storage medium may be separate but accessible to the system (108) and the processing resource. In other examples, the processing engine(s) (208) may be implemented by electronic circuitry.
[0053] The processing engine (208) may include one or more engines such as an artificial intelligence (AI) engine (212) and other engine(s) (214). [0054] The AI engine (212) may automate a process of determining optimal database configurations and storage options for the database (210) by employing one or more AI models (used interchangeably with the term AI/ML models). In some embodiments, the AI engine (212) may include one or more pre-trained AI models that may be configured to continuously monitor the configuration, the storage, and/or the load of the database (210). The AI model may feed back the results of monitoring into the system (108) for closed-loop actions, and for continuous optimization and configuration of one
or more parameters. In some embodiments, the one or more parameters may include, but are not limited to, database latency, network latency, jitter, database physical resource consumption, and virtual resource consumption. The automatic determination of optimal database configurations results in improved performance or a threshold benchmark based on the available hardware. The AI engine (212) is configured for simulating data and storing the simulated data in a distributed data lake via a data ingestion layer, and monitoring continuously the distributed data lake and a data centre for at least one of a configuration, a storage, and a load for determining a change in performance, and upon determining a drop in performance from a predetermined level of
performance, providing a tuned configuration and an optimal value to a database manager (216). The AI engine (212) is further configured for analysing a workload and a configuration of the distributed data lake and a storage of the data in the distributed data lake, and providing as feedback one or more results of the analysis into the system for closed-loop actions, and for continuous optimization and configuration of one or more
parameters, resulting in one of: improved performance or a threshold benchmark based on a hardware infrastructure for configuration, wherein the hardware infrastructure includes at least one of a cloud and the data centre associated with the distributed data lake. In an embodiment, the database manager is configured for tuning the configuration of the database simulators and the distributed data lake
based on the tuned configurations, and displaying a current configuration on the UI for manual changes in the configuration based upon requirements. The database manager updates the configuration of the distributed data lake based on the analysis by the AI model. [0055] In an embodiment, the database (210) may comprise data that may be either
stored or generated as a result of functionalities implemented by any of the components of the processor(s) (202), the processing engine(s) (208), or the system (108). [0056] Although FIG. 2 shows an exemplary block diagram (200) of the system (108), in other embodiments, the system (108) may include fewer components, different components, differently arranged components, or additional functional components than
depicted in FIG. 2. Additionally, or alternatively, one or more components of the system (108) may perform functions described as being performed by one or more other components of the system (108).
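By way of illustration only, the closed-loop behaviour described above for the AI engine (212), monitoring the one or more parameters and detecting a drop from a predetermined level of performance, may be sketched as follows. This is a minimal sketch under simplifying assumptions; the names `Metrics` and `performance_dropped` and the `tolerance` parameter are hypothetical and do not appear in the disclosure.

```python
# Illustrative sketch only; the disclosure does not specify an implementation.
from dataclasses import dataclass

@dataclass
class Metrics:
    """Monitored parameters named in the disclosure."""
    db_latency_ms: float    # database latency
    net_latency_ms: float   # network latency
    jitter_ms: float        # jitter
    cpu_util: float         # physical resource consumption
    vcpu_util: float        # virtual resource consumption

def performance_dropped(current: Metrics, baseline: Metrics,
                        tolerance: float = 0.2) -> bool:
    """Return True if any latency-type parameter degrades by more than
    `tolerance` relative to the predetermined baseline performance level."""
    checks = [
        (current.db_latency_ms, baseline.db_latency_ms),
        (current.net_latency_ms, baseline.net_latency_ms),
        (current.jitter_ms, baseline.jitter_ms),
    ]
    return any(value > limit * (1 + tolerance) for value, limit in checks)

baseline = Metrics(100.0, 10.0, 2.0, 0.5, 0.5)   # predetermined level
degraded = Metrics(130.0, 10.0, 2.0, 0.9, 0.9)   # database latency has risen
```

In this sketch a drop in performance triggers the feedback path: the detection result would be handed to the database manager along with a tuned configuration.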
[0057] FIG. 3 illustrates an exemplary architecture (300) of the system (108), in accordance with an embodiment of the present disclosure.
[0058] Referring to FIG. 3, the system (108) may include a user interface (UI) (302), at least one AI model (306), a database manager (304), one or more database simulators (308), a distributed data lake (312), a data ingestion layer (310), and a hardware (314) comprising a cloud (316) and a data centre (318).
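By way of illustration only, the data path among these components, from the database simulators (308) through the data ingestion layer (310) into the distributed data lake (312), might be sketched as below. The class `DataIngestionLayer` and its methods are hypothetical names, not drawn from the disclosure, and a plain list stands in for the data lake.

```python
# Hypothetical sketch of the FIG. 3 data path; names are illustrative only.
class DataIngestionLayer:
    """Collates records received from the database simulators (308)
    and stores the collated batch in the distributed data lake (312)."""
    def __init__(self, data_lake: list):
        self.data_lake = data_lake
        self._buffer: list = []

    def receive(self, record: dict) -> None:
        # Simulated data arrives one record at a time.
        self._buffer.append(record)

    def flush(self) -> None:
        # Collation step: the buffered batch is written to the lake.
        self.data_lake.extend(self._buffer)
        self._buffer.clear()

lake: list = []                         # stands in for the distributed data lake
ingestion = DataIngestionLayer(lake)
for record in ({"latency_ms": 12}, {"latency_ms": 9}):
    ingestion.receive(record)           # output of a database simulator
ingestion.flush()
```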
[0059] The database simulators (308) may be configured to simulate data, and the simulated data is provided to the data ingestion layer (310). The data ingestion layer (310) collates the received data and stores the collated data in the distributed data lake (312). [0060] The AI model (306) may be configured to analyse the workload and configuration of the distributed data lake. The configuration of the distributed data lake may include
configuring a list of sources or data simulators capable of storing the data in the distributed data lake. The distributed data lake (312) may be interchangeably referred to as a database. The distributed data lake (312) may be in communication with the AI model (306), the database manager (304), the data ingestion layer (310), and the data centre (318). The AI model (306) may also analyse the storage of the distributed data
lake (312). The AI model (306) may continuously monitor the distributed data lake (312) and the data centre (318) for configuration, storage, and load. Upon the AI model (306) identifying any dip in the performance during monitoring, the AI model (306) provides the tuned configurations as per the performance and provides an optimal value associated with the configuration to the database manager (304).
[0061] The AI model (306) feeds back the determined results into the system for closed-loop actions, and for continuous optimization and configuration of one or more parameters, resulting in improved performance or a threshold benchmark based on the available hardware (314). [0062] The database manager (304) may be configured to update the configuration of
the distributed data lake (312) based on the configuration of the distributed data lake (312) analysed by the AI model (306). The database manager (304) may also tune the configuration of the database simulators (308) and the distributed data lake (312) as per the configuration, and displays the current configuration on the UI (302) so that any required manual changes may be made.
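The tuning and display behaviour of the database manager (304) can be sketched as follows. This is a minimal illustration assuming a dictionary-valued configuration; `DatabaseManager`, `apply`, and `manual_override`, along with the particular configuration keys, are hypothetical names introduced for the sketch.

```python
# Hypothetical sketch of the database manager (304); names are illustrative.
class DatabaseManager:
    def __init__(self):
        # Assumed example configuration keys for the simulators and data lake.
        self.current_config = {"cache_mb": 256, "replicas": 1}

    def apply(self, tuned_config: dict) -> dict:
        """Tune the simulator/data-lake configuration per the AI model's
        tuned configuration, and return what the UI (302) would display."""
        self.current_config.update(tuned_config)
        return self.current_config

    def manual_override(self, key: str, value) -> None:
        """A manual change entered through the UI, if one is required."""
        self.current_config[key] = value

manager = DatabaseManager()
shown = manager.apply({"cache_mb": 512})   # tuned value from the AI model
manager.manual_override("replicas", 3)     # operator adjustment via the UI
```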
[0063] FIG. 4 illustrates an exemplary sequential flow diagram (400) depicting a process of automatic simulation of database configuration and storage, in accordance with an embodiment of the present disclosure. [0064] At step 402, the data simulators (308) simulate the data, and the simulated data
is provided to the data ingestion layer (310). The data ingestion layer (310) collates the received data and stores the collated data in the distributed data lake (312). The data simulators use predefined rules to generate data; the generation rules are typically written with a priori knowledge of the particular situation, thereby generating data such as random numbers, names of persons, and the like. The simulated data may include
network traffic data, network latency, error rate, and the like. At step 404, the data is inserted from the data ingestion layer (310) into the distributed data lake (312). At step 406, the database manager (304) requests auto simulation of database configuration and storage from the AI engine (306). At step 408, the AI engine (306) performs database workload analysis and stores the database workload analysis in the distributed data lake (312). At
step 410, the AI engine (306) performs configuration exploration and stores it in the distributed data lake (312). At step 412, the storage analysis of the hardware (314) is performed. At step 414, the AI engine (306) performs modelling, and at step 416, the AI engine (306) performs simulation and evaluation. At step 418, the AI engine (306) provides automated recommendations for the optimal configuration settings and storage options
to the database manager (304). At steps 420 and 422, the database manager (304) provides the updated configuration, as per the recommendations from the AI engine (306), to the DB simulators and the distributed data lake (312). At step 424, the database manager (304) updates the automated recommendations to the user interface (302).
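A minimal sketch of such a rule-based data simulator, assuming the generation rules draw person names from an a priori list and sample latency and error-rate figures at random, is given below. All names (`NAMES`, `simulate_traffic_record`) and the value ranges are hypothetical.

```python
# Illustrative rule-based simulator per paragraph [0064]; the rules encode
# a priori knowledge and emit records such as random numbers, person names,
# network latency, and error rate. Names and ranges are hypothetical.
import random

NAMES = ["Asha", "Ravi", "Meera", "Kiran"]   # a priori list of person names

def simulate_traffic_record(rng: random.Random) -> dict:
    """One simulated network-traffic record, as fed to the ingestion layer."""
    return {
        "user": rng.choice(NAMES),
        "latency_ms": round(rng.uniform(5.0, 50.0), 2),
        "error_rate": round(rng.uniform(0.0, 0.05), 4),
    }

rng = random.Random(42)                      # fixed seed for reproducibility
records = [simulate_traffic_record(rng) for _ in range(3)]
```

Records of this shape would then be collated by the data ingestion layer (310) and stored in the distributed data lake (312), as in steps 402 and 404 above.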
[0065] FIG. 5 illustrates a flowchart of a processor-implemented method (500) for automatic simulation of database configuration and storage, in accordance with an embodiment of the present disclosure. At step 502, data is simulated by one or more database simulators associated with an artificial intelligence (AI) model of a user interface (UI), and the simulated data is stored in a distributed data lake via a data ingestion layer. At step 504, the distributed data lake and a data centre are monitored continuously, by the AI model, for at least one of a configuration, a storage, and a load for determining a change in performance, and upon determining a drop in performance from a predetermined level of performance, at step 506, a tuned configuration and an optimal value are provided, by the AI model, to the database manager. At step 508, the configuration of the database simulators and the distributed data lake is tuned, by the database manager, based on the tuned configurations, and a current configuration is displayed on the UI when a need for manual changes arises.
[0066] According to one embodiment of the present technology, monitoring continuously (504) includes analysing, by the AI model (306), a workload and a configuration of the distributed data lake (312) and a storage of the data in the distributed data lake (312). The method further includes providing as feedback, by the AI model (306), one or more results of the analysis into a system for closed-loop actions, and for continuous optimization and configuration of one or more parameters, resulting in one of: improved performance or a threshold benchmark based on a hardware infrastructure (314) for configuration, wherein the hardware infrastructure (314) comprises at least one of: a cloud (316) and the data centre (318) associated with the distributed data lake (312). The method further includes updating, by the database manager (304), the configuration of the distributed data lake (312) based on the configuration of the distributed data lake (312) as analysed by the AI model (306).
[0067] According to one embodiment of the present technology, the distributed data lake (312) is in connection with at least one of: the AI model (306), the database manager (304), the data ingestion layer (310), and the data centre (318). [0068] According to one embodiment of the present technology, the one or more
parameters include at least one of a database latency, a network latency, a jitter, a database physical resources consumption, and a virtual resource consumption. [0069] FIG. 6 illustrates an exemplary computer system (600) in which or with which embodiments of the present disclosure may be implemented. As shown in FIG. 6, the computer system (600) may include an external storage device (610), a bus (620), a main memory (630), a read-only memory (640), a mass storage device (650), a communication port (660), and a processor (670). A person skilled in the art will appreciate that the computer system (600) may include more than one processor (670) and communication ports (660). The processor (670) may include various modules associated with
embodiments of the present disclosure.
[0070] A user equipment (102) is communicatively coupled to a system (108); the coupling comprises the steps of simulating (502) data by one or more database simulators (308) associated with an artificial intelligence (AI) model (306) and storing the simulated data in a distributed data lake (312) via a data ingestion layer (310). The AI
model (306) monitors the distributed data lake (312) and a data centre (318) for at least one of: a configuration, a storage, and a load for determining a change in performance, and upon determining a drop in performance from a predetermined level of performance, the AI model (306) provides a tuned configuration and an optimal value associated with the configuration to the database manager (304). The database manager
(304) tunes the configuration of the database simulators (308) and the distributed data lake (312) based on the tuned configurations and displays a current configuration on a UI (302), which enables a user to manually update the configuration of the database. [0071] In an embodiment, the communication port (660) may be any of an RS-232 port for use with a modem-based dialup connection, a 10/100 Ethernet port, a Gigabit or 10
Gigabit port using copper or fiber, a serial port, a parallel port, or other existing or future ports. The communication port (660) may be chosen depending on a network, such as a Local Area Network (LAN), a Wide Area Network (WAN), or any network to which the computer system (600) connects. [0072] In an embodiment, the memory (630) may be Random Access Memory (RAM),
or any other dynamic storage device commonly known in the art. The read-only memory (640) may be any static storage device(s), e.g., but not limited to, Programmable Read-Only Memory (PROM) chips for storing static information, e.g., start-up or Basic Input/Output System (BIOS) instructions for the processor (670). [0073] In an embodiment, the mass storage (650) may be any current or future mass storage solution, which may be used to store information and/or instructions. Exemplary mass storage solutions include, but are not limited to, Parallel Advanced Technology Attachment (PATA) or Serial Advanced Technology Attachment (SATA) hard disk drives or solid-state drives (internal or external, e.g., having Universal Serial Bus
(USB) and/or FireWire interfaces), one or more optical discs, and Redundant Array of Independent Disks (RAID) storage, e.g., an array of disks (e.g., SATA arrays). [0074] In an embodiment, the bus (620) communicatively couples the processor(s) (670) with the other memory, storage, and communication blocks. The bus (620) may be, e.g., a Peripheral Component Interconnect (PCI)/PCI Extended (PCI-X) bus, Small
Computer System Interface (SCSI), Universal Serial Bus (USB), or the like, for connecting expansion cards, drives, and other subsystems, as well as other buses, such as a front-side bus (FSB), which connects the processor (670) to the computer system (600). [0075] Optionally, operator and administrative interfaces, e.g., a display, keyboard, joystick, and a cursor control device, may also be coupled to the bus (620) to support
direct operator interaction with the computer system (600). Other operator and administrative interfaces may be provided through network connections connected through the communication port (660). The components described above are meant only to exemplify various possibilities. In no way should the aforementioned exemplary computer system (600) limit the scope of the present disclosure.
[0076] While the foregoing describes various embodiments of the present disclosure, other and further embodiments of the present disclosure may be devised without departing from the basic scope thereof. The scope of the present disclosure is determined by the claims that follow. The present disclosure is not limited to the described embodiments, versions, or examples, which are included to enable a person having ordinary
skill in the art to make and use the present disclosure when combined with information and knowledge available to the person having ordinary skill in the art.
ADVANTAGES OF THE PRESENT DISCLOSURE
[0077] The present disclosure described herein above has several technical advantages including, but not limited to, the realization of the system and the method that:
1. to provide a system to automatically simulate configuration and storage of a database.
2. to provide the system to streamline a process of optimizing the database.
3. to provide the system to automatically determine optimal configurations and storage options for the database.
4. to provide the system to continuously monitor the configuration, storage, and load of the database, and to feed back the monitored results into the system for closed-loop actions, and for continuous optimization and configuration of one or more parameters.
5. to achieve improved performance, efficient resource utilization, and cost savings, and yields optimal simulation results.
[0078] While considerable emphasis has been placed herein on the preferred embodiments, it will be appreciated that many embodiments can be made and that many changes can be made in the preferred embodiments without departing from the principles of the disclosure. These and other changes in the preferred embodiments of the disclosure will be apparent to those skilled in the art from the disclosure herein, whereby it is to be distinctly understood that the foregoing descriptive matter is to be interpreted merely as illustrative of the disclosure and not as a limitation.

We Claim:
1. A method (500) for automatic simulation of database configuration and storage, the
method (500) comprising:
simulating (502) data by one or more database simulators (308) associated with an artificial intelligence (AI) model (306) and storing the simulated data in a distributed data lake (312) via a data ingestion layer (310);
monitoring continuously (504), by the AI model (306), the distributed data lake (312) and a data centre (318) for at least one of: a configuration, a storage, and a load for determining a change in performance;
upon determining a drop in performance from a predetermined level of performance, providing (506) by the AI model (306), a tuned configuration and an optimal value associated with the configuration, to a database manager (304); and
tuning (508), by the database manager (304), the configuration of the one or more database simulators (308) and the distributed data lake (312) based on the tuned configurations.
2. The method (500) as claimed in claim 1, further comprising:
analysing, by the AI model (306), a workload and a configuration of the distributed data lake (312) and a storage of the data in the distributed data lake (312);
providing, by the AI model (306), a feedback including one or more results of analysis to a system for continuous optimization and configuration of one or more parameters, resulting in one of: improved performance or threshold benchmark for a hardware infrastructure (314), wherein the hardware infrastructure (314) comprises at least one of: a cloud (316) and the data centre (318) associated with the distributed data lake (312); and
updating, by the database manager (304), the configuration of the distributed data lake (312) based on the feedback provided by the AI model (306).

3. The method (500) as claimed in claim 1, wherein the distributed data lake (312) is in connection with at least one of: the AI model (306), the database manager (304), the data ingestion layer (310), and the data centre (318).
4. The method (500) as claimed in claim 2, wherein the one or more parameters comprises at least one of: a database latency, a network latency, a jitter, a database physical resources consumption, and a virtual resource consumption.
5. The method (500) as claimed in claim 1, further comprising displaying the tuned configuration on a UI (302) which enables a user to manually update the configuration of the database.
6. A system (108) for automatic simulation of database configuration and storage, the system comprising:
a processor (202) to fetch and execute computer-readable instructions stored in a memory (204) of the system (108), wherein the memory (204) stores one or more computer-readable instructions or routines in a non-transitory computer-readable storage medium which are fetched and executed to create or share data packets over a network service;
an interface (206) to provide a communication pathway for one or more components of the system (108);
a database (210) comprising data either stored or generated as a result of functionalities implemented by any of the components of the processor (202) or the system (108);
a database simulator (308) configured for:
simulating data and storing the simulated data in a distributed data lake (312) via a data ingestion layer (310); and
an artificial intelligence (AI) engine (212, 306) configured for:

monitoring continuously the distributed data lake (312) and a data centre (318) for at least one of: a configuration, a storage, and a load for determining a change in performance;
upon determining a drop in performance from a predetermined level of performance, providing a tuned configuration and an optimal value associated with the configuration, to a database manager (304); and
the database manager (304) configured for tuning the configuration of the one or more database simulators (308) and the distributed data lake (312) based on the tuned configurations.
7. The system (108) as claimed in claim 6, wherein the AI engine (212) is further configured for:
analysing a workload and a configuration of the distributed data lake (312) and a storage of the data in the distributed data lake (312); and
providing, as feedback, one or more results of the analysis into a system for closed-loop actions, and for continuous configuration of one or more parameters, resulting in one of: improved performance or threshold benchmark for a hardware infrastructure (314), wherein the hardware infrastructure (314) comprises at least one of: a cloud (316) and the data centre (318) associated with the distributed data lake (312),
wherein the database manager (304) updates the configuration of the distributed data lake (312) based on the feedback provided by the AI model (306).
8. The system (108) as claimed in claim 6, wherein the distributed data lake (312) is in
communication with at least one of: the AI model (306), the database manager (304),
the data ingestion layer (310), and the data centre (318).

9. The system (108) as claimed in claim 7, wherein the one or more parameters comprises at least one of: a database latency, a network latency, a jitter, a database physical resources consumption, and a virtual resource consumption.
10. A user equipment (102) communicatively coupled to a system (108), wherein the coupling comprises steps of:
simulating (502) data by one or more database simulators (308) associated with an artificial intelligence (AI) model (306) and storing the simulated data in a distributed data lake (312) via a data ingestion layer (310);
monitoring continuously (504), by the AI model (306), the distributed data lake (312) and a data centre (318) for at least one of: a configuration, a storage, and a load for determining a change in performance;
upon determining a drop in performance from a predetermined level of performance, providing (506) by the AI model (306), a tuned configuration and an optimal value associated with the configuration, to the database manager (304); and
tuning (508), by the database manager (304), the configuration of the database simulators (308) and the distributed data lake (312) based on the tuned configurations.

Documents

Application Documents

# Name Date
1 202321047044-STATEMENT OF UNDERTAKING (FORM 3) [12-07-2023(online)].pdf 2023-07-12
2 202321047044-PROVISIONAL SPECIFICATION [12-07-2023(online)].pdf 2023-07-12
3 202321047044-FORM 1 [12-07-2023(online)].pdf 2023-07-12
4 202321047044-DRAWINGS [12-07-2023(online)].pdf 2023-07-12
5 202321047044-DECLARATION OF INVENTORSHIP (FORM 5) [12-07-2023(online)].pdf 2023-07-12
6 202321047044-FORM-26 [13-09-2023(online)].pdf 2023-09-13
7 202321047044-FORM-26 [05-03-2024(online)].pdf 2024-03-05
8 202321047044-FORM 13 [08-03-2024(online)].pdf 2024-03-08
9 202321047044-AMENDED DOCUMENTS [08-03-2024(online)].pdf 2024-03-08
10 202321047044-Request Letter-Correspondence [03-06-2024(online)].pdf 2024-06-03
11 202321047044-Power of Attorney [03-06-2024(online)].pdf 2024-06-03
12 202321047044-Covering Letter [03-06-2024(online)].pdf 2024-06-03
13 202321047044-CORRESPONDANCE-WIPO CERTIFICATE-14-06-2024.pdf 2024-06-14
14 202321047044-ENDORSEMENT BY INVENTORS [26-06-2024(online)].pdf 2024-06-26
15 202321047044-DRAWING [26-06-2024(online)].pdf 2024-06-26
16 202321047044-CORRESPONDENCE-OTHERS [26-06-2024(online)].pdf 2024-06-26
17 202321047044-COMPLETE SPECIFICATION [26-06-2024(online)].pdf 2024-06-26
18 202321047044-ORIGINAL UR 6(1A) FORM 26-020924.pdf 2024-09-09
19 Abstract.jpg 2024-10-09
20 202321047044-FORM-9 [16-10-2024(online)].pdf 2024-10-16
21 202321047044-FORM 18A [18-10-2024(online)].pdf 2024-10-18
22 202321047044-FORM 3 [04-11-2024(online)].pdf 2024-11-04
23 202321047044-FER.pdf 2025-01-24
24 202321047044-Proof of Right [10-02-2025(online)].pdf 2025-02-10
25 202321047044-ORIGINAL UR 6(1A) FORM 1-130225.pdf 2025-02-14
26 202321047044-FORM 3 [27-03-2025(online)].pdf 2025-03-27
27 202321047044-FER_SER_REPLY [08-04-2025(online)].pdf 2025-04-08
28 202321047044-US(14)-HearingNotice-(HearingDate-01-12-2025).pdf 2025-11-04

Search Strategy

1 SearchHistory-202321047044E_21-01-2025.pdf