
System And Method For Synchronizing Data Across Multiple Databases

Abstract: Disclosed herein is a method (400) for synchronizing data across a plurality of databases. The method includes establishing a connection between the plurality of databases which includes at least one source database and at least one target database by configuring a set of communication protocols. Upon establishing the connection between the plurality of databases, the method includes performing one or more data transformation operations on data in the at least one source database to align structure of the data with the at least one target database. Further, after performing the one or more data transformation operations, the method includes synchronizing the data between the at least one source database and at least one target database. FIG. 4


Patent Information

Application #
Filing Date
01 May 2024
Publication Number
45/2025
Publication Type
INA
Invention Field
COMPUTER SCIENCE
Status
Parent Application

Applicants

Jio Platforms Limited
Office - 101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad 380006, Gujarat India

Inventors

1. Bhatnagar, Pradeep Kumar
Reliance Corporate Park, Thane-Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
2. Bhatnagar, Aayush
Reliance Corporate Park, Thane-Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
3. Reddy Polsoni, Chaitanya
Reliance Corporate Park, Thane-Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
4. Bhate, Hardika
Reliance Corporate Park, Thane-Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.
5. Jadon, Harendra
Reliance Corporate Park, Thane-Belapur Road, Ghansoli, Navi Mumbai, Maharashtra 400701, India.

Specification

FORM 2
THE PATENTS ACT, 1970 (39 OF 1970)
&
THE PATENT RULES, 2003
COMPLETE SPECIFICATION
(See section 10 and rule 13)

SYSTEM AND METHOD FOR SYNCHRONIZING DATA ACROSS MULTIPLE DATABASES

Jio Platforms Limited, an Indian company, having registered address at Office -101, Saffron, Nr. Centre Point, Panchwati 5 Rasta, Ambawadi, Ahmedabad - 380006, Gujarat, India

The following specification particularly describes the invention and the manner in which it is to be performed.
TECHNICAL FIELD
[0001] The embodiments of the present disclosure generally relate to the field of data management. More particularly, the present disclosure relates to a system and a method for synchronizing data across multiple databases.
BACKGROUND OF THE INVENTION
[0002] The subject matter disclosed in the background section should not be assumed or construed to be prior art merely because of its mention in the background section. Similarly, any problem statement mentioned in the background section or its association with the subject matter of the background section should not be assumed or construed to have been previously recognized in the prior art.
[0003] In the era of rapidly advancing technologies, an integration of Fifth Generation (5G) wireless technology and an emergence of Sixth Generation (6G) wireless technology have brought in a new wave of data-driven applications and use cases. Further, the advancements in the technologies have led to an exponential growth in data volume, diversity, and complexity. To harness the potential of the 5G and 6G wireless technologies, there are challenges in efficiently managing, storing, and accessing vast and varied datasets.
[0004] Traditionally, databases have been designed to cater to specific data types, such as tabular, logs, or geospatial data. While the database specialization in handling these data types has its advantages, it often results in fragmented data storage solutions, leading to inefficiencies in data retrieval and management. Furthermore, as applications and use cases evolve, a demand for real-time and near-time data processing has grown, placing additional strain on existing database systems.
[0005] Further, single database solutions, although convenient, often struggle to meet diverse and dynamic requirements posed by modern applications and face challenges in optimizing queries for specific database types, leading to suboptimal performance and increased processing times. Additionally, a lack of efficient data distribution across various databases further worsens these issues, impacting overall system performance and responsiveness.
[0006] In conventional methods, multiple databases are integrated to handle the different data types and to enhance capabilities of individual database. However, the conventional methods fail to synchronize the data across multiple databases efficiently. Therefore, mere integration of multiple databases does not provide a comprehensive and efficient approach to manage the growing complexity of modern data ecosystems.
[0007] Therefore, there lies a need for a more efficient system and a method for synchronizing data between a plurality of databases to enhance data retrieval efficiency for real-time and near-time applications while addressing the aforementioned shortcomings of the conventional methods.
SUMMARY
[0008] The following embodiments present a simplified summary to provide a basic understanding of some aspects of the disclosed invention. This summary is not an extensive overview, and it is not intended to identify key/critical elements or to delineate the scope thereof. Its sole purpose is to present some concepts in a simplified form as a prelude to the more detailed description that is presented later.
[0009] In an embodiment, disclosed herein is a method for synchronizing data across a plurality of databases. The method includes establishing, by a pipeline creation engine, a connection between the plurality of databases including at least one source database and at least one target database by configuring a set of communication protocols. The method further includes performing, by an execution engine upon establishing the connection between the plurality of databases, one or more data transformation operations on data in the at least one source database to align structure of the data with the at least one target database. Further, the method includes synchronizing, by the execution engine, the data between the at least one source database and at least one target database after performing the one or more data transformation operations.
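The three recited steps, establishing a connection, transforming the data, and synchronizing it, can be sketched in Python as follows. All class names, method names, and data shapes here are illustrative assumptions for exposition only, not part of the disclosure.

```python
# Minimal sketch of the claimed method in [0009]: connect, transform, synchronize.
# PipelineCreationEngine and ExecutionEngine are hypothetical names.

class PipelineCreationEngine:
    def establish_connection(self, source, target, protocols):
        """Configure a set of communication protocols between source and target."""
        return {"source": source, "target": target, "protocols": protocols}

class ExecutionEngine:
    def transform(self, rows, target_schema):
        """Align the structure of source rows with the target schema."""
        return [{k: row.get(k) for k in target_schema} for row in rows]

    def synchronize(self, connection, rows):
        """Write the transformed rows to the target database."""
        connection["target"].extend(rows)
        return len(rows)

# Toy stand-ins for a source database and a target database.
source_rows = [{"id": 1, "name": "a", "extra": "x"}, {"id": 2, "name": "b"}]
target_store = []

conn = PipelineCreationEngine().establish_connection(source_rows, target_store, ["tcp"])
engine = ExecutionEngine()
aligned = engine.transform(source_rows, target_schema=("id", "name"))
engine.synchronize(conn, aligned)
```

The "extra" field is dropped during transformation because it is absent from the target schema, mirroring the structure-alignment step before synchronization.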
[0010] In some aspects of the present disclosure, the method further includes receiving, by a receiving module, one or more user queries associated with an end user use case. Further, the method includes implementing, by a query optimization engine, one or more query optimization techniques to optimize the one or more user queries. The one or more query optimization techniques are selected based on the at least one target database and the end user use case. Furthermore, the method includes extracting, by a data retrieval engine, optimized data from the at least one target database using the one or more optimized user queries.
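Selecting an optimization technique based on the target database and the end user use case, as described above, could be realized as a simple dispatch table. The technique names and the mapping below are illustrative assumptions only.

```python
# Hypothetical mapping from (target database type, use case) to an
# optimization technique, sketching the selection step in [0010].
OPTIMIZATIONS = {
    ("sql", "analytics"): "predicate_pushdown",
    ("sql", "realtime"): "index_hint",
    ("nosql", "realtime"): "covered_query",
}

def optimize_query(query, target_db, use_case):
    """Attach the optimization technique chosen for this target and use case."""
    technique = OPTIMIZATIONS.get((target_db, use_case), "none")
    return {"query": query, "technique": technique}

optimized = optimize_query("SELECT * FROM metrics", "nosql", "realtime")
```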
[0011] In some aspects of the present disclosure, the method further includes creating, by the pipeline creation engine upon the establishment of the connection between the plurality of databases, a pipeline using a source connector and a sink connector to synchronize the data between the at least one source database and the at least one target database.
[0012] In some aspects of the present disclosure, the method further includes monitoring, by a data monitoring engine, changes in the data in the at least one source database. Further, the method includes updating, by the execution engine using the pipeline, the data into the at least one target database via the sink connector based on the monitored changes in the data to synchronize the data between the plurality of databases in real time or near real time.
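The monitoring-and-update loop described above can be illustrated with a small sketch: a monitoring engine records changes in the source, and a sink-connector stand-in applies them to the target. All names and the change format are assumptions; a real deployment would use a change-data-capture feed.

```python
# Sketch of change monitoring and near-real-time propagation ([0012]).

class DataMonitoringEngine:
    def __init__(self):
        self.changes = []

    def record_change(self, op, row):
        """Capture one change ("insert" or "delete") in the source database."""
        self.changes.append((op, row))

    def drain(self):
        """Hand the accumulated changes to the pipeline and reset."""
        changed, self.changes = self.changes, []
        return changed

def apply_changes(target, changes):
    """Sink-connector stand-in: replay monitored changes onto the target."""
    for op, row in changes:
        if op == "insert":
            target[row["id"]] = row
        elif op == "delete":
            target.pop(row["id"], None)
    return target

monitor = DataMonitoringEngine()
target = {}
monitor.record_change("insert", {"id": 1, "name": "a"})
monitor.record_change("insert", {"id": 2, "name": "b"})
monitor.record_change("delete", {"id": 1})
apply_changes(target, monitor.drain())
```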
[0013] In some aspects of the present disclosure, the set of communication protocols are configured using one or more open-source libraries. Further, the set of communication protocols include one or more predefined procedures for data exchange and the synchronization of the transformed data.
[0014] In some aspects of the present disclosure, the plurality of the databases includes one or more of Structured Query Language (SQL) databases or Not-only SQL (NoSQL) databases.
[0015] In some aspects of the present disclosure, the one or more data transformation operations are based on structure of data supported by the at least one source database and the at least one target database. Further, the data transformation includes at least one of data manipulation, data normalization, restructuring the data, adding additional metadata, attribute construction, data generalization, data discretization, data aggregation, or smoothing the data.
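A few of the transformation operations listed above (data normalization, attribute construction, and adding metadata) can be sketched as small functions. The function names and field names are assumptions for illustration.

```python
# Illustrative versions of three transformations named in [0015].

def normalize(value, lo, hi):
    """Data normalization: min-max scale a value into [0, 1]."""
    return (value - lo) / (hi - lo)

def construct_attribute(row):
    """Attribute construction: derive a new field from existing ones."""
    out = dict(row)
    out["full_name"] = f"{row['first']} {row['last']}"
    return out

def add_metadata(row, source_name):
    """Add additional metadata: annotate a row with its provenance."""
    out = dict(row)
    out["_source"] = source_name
    return out

row = {"first": "Ada", "last": "Lovelace", "score": 75}
row = construct_attribute(row)
row = add_metadata(row, "sql_source")
row["score_norm"] = normalize(row["score"], 0, 100)
```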
[0016] According to another aspect of the present disclosure, a system for synchronizing data between a plurality of databases is disclosed. The system includes a pipeline creation engine configured to establish a connection between the plurality of databases including at least one source database and at least one target database by configuring a set of communication protocols. The system further includes an execution engine configured to perform, upon establishing the connection between the plurality of databases, one or more data transformation operations on data in the at least one source database to align structure of the data with the at least one target database. The execution engine is configured to synchronize the data between the at least one source database and the at least one target database after performing the one or more data transformation operations.
BRIEF DESCRIPTION OF DRAWINGS
[0017] Various embodiments disclosed herein will become better understood from the following detailed description when read with the accompanying drawings. The accompanying drawings constitute a part of the present disclosure and illustrate certain non-limiting embodiments of inventive concepts. Further, components and elements shown in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present disclosure. For consistency and ease of understanding, similar components and elements are annotated by reference numerals in the exemplary drawings.
[0018] FIG. 1 illustrates a diagram depicting an environment of a wireless communication network, in accordance with an embodiment of the present invention.
[0019] FIG. 2 illustrates a diagram depicting a system for synchronizing data across a plurality of databases, in accordance with an embodiment of the present disclosure.
[0020] FIG. 3 illustrates a block diagram of the system for synchronizing the data across the plurality of databases, in accordance with an embodiment of the present disclosure.
[0021] FIG. 4 illustrates a flow chart of a method for synchronizing the data across the plurality of databases, in accordance with an embodiment of the present disclosure.
[0022] FIG. 5 illustrates a schematic block diagram of a computing system for synchronizing the data across the plurality of databases, in accordance with an embodiment of the present disclosure.
DETAILED DESCRIPTION OF THE INVENTION
[0023] Inventive concepts of the present disclosure will now be described more fully hereinafter with reference to the accompanying drawings, in which examples of one or more embodiments of inventive concepts are shown. Inventive concepts may, however, be embodied in different forms and should not be construed as limited to the embodiments set forth herein. Further, the one or more embodiments disclosed herein are provided to describe the inventive concept thoroughly and completely, and to fully convey the scope of each of the present inventive concepts to those skilled in the art. Furthermore, it should be noted that the embodiments disclosed herein are not mutually exclusive concepts. Accordingly, one or more components from one embodiment may be tacitly assumed to be present or used in any other embodiment.
[0024] The following description presents various embodiments of the present disclosure. The embodiments disclosed herein are presented as teaching examples and are not to be construed as limiting the scope of the present disclosure. The present disclosure should in no way be limited to the illustrative implementations, drawings, and techniques illustrated below, including the exemplary design and implementation illustrated and described herein, but may be modified, omitted, or expanded upon without departing from the scope of the present disclosure.
[0025] The following description contains specific information pertaining to embodiments in the present disclosure. The detailed description uses the phrases “in some embodiments” which may each refer to one or more or all of the same or different embodiments. The term “some” as used herein is defined as “one, or more than one, or all”. Accordingly, the terms “one”, “more than one”, “more than one, but not all” or “all” would all fall under the definition of “some.” In view of the same, the terms, for example, “in an embodiment” refers to one embodiment and the term, for example, “in one or more embodiments” refers to “at least one embodiment, or more than one embodiment, or all embodiments.”
[0026] The term “comprising,” when utilized, means “including, but not necessarily limited to;” it specifically indicates open-ended inclusion of the one or more listed features or elements in a combination, unless otherwise stated with limiting language. Furthermore, to the extent that the terms “includes,” “has,” “have,” “contains,” and other similar words are used in the detailed description, such terms are intended to be inclusive in a manner similar to the term “comprising.”
[0027] In the following description, for the purposes of explanation, various specific details are set forth in order to provide a thorough understanding of embodiments of the present disclosure. It will be apparent, however, that embodiments of the present disclosure may be practiced without these specific details. Several features described hereafter can each be used independently of one another or with any combination of other features.
[0028] The description provided herein discloses exemplary embodiments only and is not intended to limit the scope, applicability, or configuration of the present disclosure. Rather, the foregoing description of the exemplary embodiments will provide those skilled in the art with an enabling description for implementing any of the exemplary embodiments. Specific details are given in the following description to provide a thorough understanding of the embodiments. However, it may be understood by one of ordinary skill in the art that the embodiments disclosed herein may be practiced without these specific details.
[0029] The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. As used herein in the description, the singular forms "a", "an", and "the" include plural forms unless the context of the invention indicates otherwise.
[0030] The terminology and structure employed herein are for describing, teaching, and illuminating some embodiments and their specific features and elements and do not limit, restrict, or reduce the scope of the present disclosure. Accordingly, unless otherwise defined, all terms, and especially any technical and/or scientific terms, used herein may be taken to have the same meaning as commonly understood by one having ordinary skill in the art.
[0031] An aspect of the present disclosure is to provide a system and a method for enabling real-time or near-real-time data synchronization between relational and non-relational databases without impacting existing database architectures and applications.
[0032] Another aspect of the present disclosure is to provide a method for adaptive advanced search to extract 5G or 6G data across multiple databases that are synchronized in the real-time or the near-real-time.
[0033] Embodiments of the present disclosure will be described below in detail with reference to the accompanying drawings. FIG. 1 and FIG. 5, discussed below, and the one or more embodiments used to describe the principles of the present disclosure are by way of illustration only and should not be construed in any way to limit the scope of the present disclosure. Those skilled in the art will understand that the principles of the present disclosure may be implemented in any suitably arranged system or device.
[0034] FIG. 1 illustrates a diagram depicting an environment of a wireless communication network 100, in accordance with an embodiment of the present invention. The embodiment of the wireless communication network 100 shown in FIG. 1 is for illustration only. Other embodiments of the wireless communication network 100 may be used without departing from the scope of this disclosure.
[0035] The wireless communication network 100 may include various components such as a network 102, a user device 104, an application server 106, a database 108, a web server 110, processing modules 112, and other devices 114.
[0036] The user device 104 may communicate with the application servers 106 using one or more applications, for example a client application for accessing the database 108, installed in the user device 104. The user device 104 may communicate with the application servers 106 and with various other entities of the wireless communication network 100 (such as a base station, a core network, and an external user device) via the network 102 using a communication technique, such as 2nd Generation (2G) communication technology, 3rd Generation (3G) communication technology, Long Term Evolution (LTE), 4th Generation (4G) LTE, 5th Generation (5G)/New Radio (NR), Long Term Evolution Advanced (LTE-A), Worldwide Interoperability for Microwave Access (WiMAX), Wireless Fidelity (Wi-Fi), or other wireless communication techniques with multiple bands and carriers of telecom operators. The user device 104 may include smartphones, tablets, laptops, or desktop computers.
[0037] The application server 106 may be a physical machine, a virtual machine in a cloud environment, a network of computers, a software framework, or a combination thereof, that may provide a generalized approach to create a server implementation. Examples of the application server 106 may include, but are not limited to, personal computers, laptops, mini-computers, mainframe computers, any non-transient and tangible machine that can execute a machine-readable code, cloud-based servers, distributed server networks, or a network of computer systems. The application server 106 may be realized through various web-based technologies such as, but not limited to, a Java web-framework, a .NET framework, a personal home page (PHP) framework, or any other web-application framework.
[0038] The network 102 may include suitable logic, circuitry, and interfaces that may be configured to provide several network ports and several communication channels for transmission and reception of data related to operations of various entities of the wireless communication network 100. Each network port may correspond to a virtual address (or a physical machine address) for transmission and reception of the communication data. For example, the virtual address may be an Internet Protocol version 4 (IPv4) address (or an IPv6 address), and the physical address may be a Media Access Control (MAC) address. The network 102 may be associated with an application layer for implementation of communication protocols based on one or more communication requests from the various entities of the wireless communication network 100. The communication data may be transmitted or received via the communication protocols. Examples of the communication protocols may include, but are not limited to, Hypertext Transfer Protocol (HTTP), File Transfer Protocol (FTP), Simple Mail Transfer Protocol (SMTP), Domain Name System (DNS) protocol, Common Management Interface Protocol (CMIP), Transmission Control Protocol and Internet Protocol (TCP/IP), User Datagram Protocol (UDP), Long Term Evolution (LTE) communication protocols, or any combination thereof. In some aspects of the present disclosure, the communication data may be transmitted or received via at least one communication channel of several communication channels in the network 102. The communication channels may include, but are not limited to, a wireless channel, a wired channel, or a combination thereof.
The wireless or wired channel may be associated with a data standard which may be defined by one of a Local Area Network (LAN), a Personal Area Network (PAN), a Wireless Local Area Network (WLAN), a Wireless Sensor Network (WSN), a Wide Area Network (WAN), a Wireless Wide Area Network (WWAN), a Metropolitan Area Network (MAN), a satellite network, the Internet, an optical fiber network, a coaxial cable network, an infrared (IR) network, a radio frequency (RF) network, and a combination thereof. Aspects of the present disclosure are intended to include or otherwise cover any type of communication channel, including known, related art, and/or later developed technologies.
[0039] The database 108 may store data received from network components of the wireless communication network 100. For example, the database 108 may store data collected from the user device 104 via the web server 110.
[0040] The web server 110 includes software applications or interfaces that run on the user device 104 to communicate with the application server 106. The web server 110 may interact with a web browser or a web UI requesting data from the client application running on the user device 104.
[0041] The processing modules 112 may comprise a central processing unit (CPU) and a graphics processing unit (GPU) for performing computations and handling data processing tasks. The CPU may also be referred to as a processor. The processor may include one or more general purpose processors and/or one or more special purpose processors, a microprocessor, a digital signal processor, an application specific integrated circuit, a microcontroller, a state machine, or any type of programmable logic array.
[0042] The other devices 114 may include a plurality of databases, an event streaming platform, framework servers, or any other connected devices to handle or store data required by a user associated with the user device 104. The plurality of databases includes one or more relational databases or one or more non-relational databases.
[0043] FIG. 2 illustrates a diagram depicting a system 200 for synchronizing data across the plurality of databases, in accordance with an embodiment of the present disclosure.
[0044] The system 200 includes various components such as the user device 104, the application server 106, the web server 110, the database 108, one or more relational databases 206, one or more non-relational databases 208, and an event streaming platform 210.
[0045] The user device 104 may communicate with the application servers 106 using one or more applications for accessing the database 108. Typically, the term “user device” can refer to any component such as “mobile station”, “subscriber station”, “remote terminal”, “wireless terminal”, “receive point”, “user equipment”, or the like. In an embodiment, the client application may be installed on the user device 104 to communicate with the application server 106. The client application may enable users to request the data. The users may initiate the requests through the client application, which are then forwarded to the web server 110 for further processing.
[0046] Upon receiving the request from the client application, the web server 110 may process a Hypertext Transfer Protocol (HTTP) request, identifying the requested data or action. The web server 110 may generate an appropriate HTTP response, which contains the requested data or acknowledgment of the action performed. The web server 110 may act as an intermediary between the client application and the application server 106, handling incoming HTTP requests and forwarding them to an appropriate destination. The web server 110 may facilitate communication between the client application and the application server 106, ensuring seamless data exchange and interaction.
[0047] The application servers 106 may handle queries or requests from the user devices 104 and may perform a plurality of tasks such as processing, storage, data retrieval, and data synchronization between the other devices 114. The application servers 106 may be controlled by a processor to perform the plurality of tasks. The application server 106 may be a physical machine or a virtual machine in a cloud environment. The user device 104 may communicate with the application server 106 via the network 102. The application servers 106 may also be referred to as “server 106”.
[0048] The application server 106 may contain a business logic layer to handle requests from the user device 104 and perform the plurality of tasks, such as processing the user requests, performing the data transformations, and executing the queries. The application server 106 may connect to the database 108 or the one or more relational databases 206, the one or more non-relational databases 208, and the event streaming platform 210 through the database 108.
[0049] The database 108 may serve as a core component responsible for storing, managing, and synchronizing the data across the one or more relational databases 206 and the one or more non-relational databases 208. The database 108 may connect to the one or more relational databases 206 and the one or more non-relational databases 208 via a source connector 202, a sink connector 204, and the event streaming platform 210 to facilitate the data extraction and the synchronization. In an embodiment, the database 108 may be configured to push the data from the one or more relational databases 206 (Structured Query Language (SQL) collections or indexes) to the one or more non-relational databases 208 (Not only SQL (NoSQL) collections or indexes) and vice versa, thereby ensuring the near real-time data synchronization without impacting the existing database architecture and applications.
[0050] The source connector 202 and the sink connector 204 may establish connections between the database 108, the one or more relational databases 206, and the one or more non-relational databases 208. The source connector 202 pulls the data from an external system into the database 108 and feeds the data into the event streaming platform 210. The source connector 202 connects the database 108 with the one or more relational databases 206. The one or more relational databases 206 push the data to the event streaming platform 210, which pushes the data to the one or more non-relational databases 208 via the sink connector 204. The sink connector 204 may take the data from the event streaming platform 210 and write the data to the one or more non-relational databases 208. In a non-limiting example, the source connector 202 may be a MySQL source connector or a file source connector. In another non-limiting example, the sink connector 204 may be a PostgreSQL sink connector.
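The source/sink connector arrangement described above resembles the configuration-driven connectors of platforms such as Kafka Connect. The sketch below shows what such configurations might look like; the connector class names and property keys follow common Kafka-Connect-style conventions but are assumptions, not details taken from the disclosure.

```python
# Hypothetical source and sink connector configurations illustrating [0050].
import json

source_config = {
    "name": "mysql-source",
    "config": {
        "connector.class": "MySqlSourceConnector",   # pulls rows from the relational DB 206
        "database.hostname": "relational-db.example",
        "topic.prefix": "sync",                      # topic on the event streaming platform 210
    },
}

sink_config = {
    "name": "postgres-sink",
    "config": {
        "connector.class": "PostgreSqlSinkConnector",  # writes rows to the target store
        "topics": "sync.users",
    },
}

# Connector configs are typically registered with the platform as JSON payloads.
payload = json.dumps(source_config)
```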
[0051] The event streaming platform 210 sets up a Message Queuing System (MQS) to process the data in real time or in a distributed fashion, facilitating seamless data integration and synchronization. The MQS may manage the flow of data between the application server 106 and the database 108, ensuring reliable and efficient data transmission, and may play a crucial role in capturing the data changes in other databases (the one or more relational databases 206 and the one or more non-relational databases 208) and triggering data synchronization processes, thereby enhancing the overall performance and responsiveness of the system. For instance, the source connector 202 and the sink connector 204 push the data among different databases using the MQS.
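The MQS flow above can be illustrated with an in-process queue standing in for the event streaming platform 210: the source side publishes captured changes, and the sink side drains them into the target store. This is a teaching sketch only; names and data shapes are assumed.

```python
# Toy message-queuing flow for [0051]: publish on the source side,
# consume on the sink side. queue.Queue stands in for the MQS.
from queue import Queue

mqs = Queue()
non_relational_store = []

def source_connector(changed_rows):
    """Push captured changes from the relational side onto the MQS."""
    for row in changed_rows:
        mqs.put(row)

def sink_connector():
    """Drain the MQS and write each message to the non-relational store."""
    while not mqs.empty():
        non_relational_store.append(mqs.get())

source_connector([{"id": 1}, {"id": 2}])
sink_connector()
```

Decoupling the two sides through the queue is what lets the synchronization run without impacting the existing database architectures, since neither database talks to the other directly.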
[0052] The system 200 illustrates a comprehensive data synchronization and management solution, leveraging various components to facilitate the seamless interaction, the data transformation, and the optimization across multiple databases. The interconnected nature of these components ensures efficient data management and the synchronization, catering to the diverse needs of modern data-driven applications while maintaining the data integrity and the performance.
[0053] FIG. 3 illustrates a block diagram 300 of the system 200 for synchronizing the data across the plurality of databases, in accordance with an embodiment of the present disclosure.
[0054] The system 200 includes the application server 106, which further includes a processor 302, a memory 304, a communication unit 306, and an Input-Output (I/O) interface 308, along with the database 108, the one or more relational databases 206, the one or more non-relational databases 208, the event streaming platform 210, and one or more processing modules 112 (hereinafter may also be referred to as “processing modules 112”). Components of the application server 106 are coupled to each other via a communication bus 322.
[0055] The processor 302 may perform computations and data processing tasks of the application server 106. The processor 302 is configured to execute computer-readable instructions 304A (hereinafter also referred to as “a set of instructions 304A”) stored in the memory 304 and to cause the application server 106 to perform various processes. The processor 302 may include one or a plurality of processors, including a general-purpose processor, such as, for example, and without limitation, a central processing unit (CPU), an application processor (AP), a dedicated processor, a graphics-only processing unit such as a graphics processing unit (GPU) or the like, a programmable logic device, or any combination thereof.
[0056] The memory 304 stores the set of instructions 304A required by the processor 302 of the application server 106 for controlling its overall operations. The memory 304 may include non-volatile storage elements. Examples of such non-volatile storage elements may include magnetic hard discs, optical discs, floppy discs, flash memories, or forms of electrically programmable memories (EPROM) or electrically erasable and programmable (EEPROM) memories. In addition, the memory 304 may, in some examples, be considered a non-transitory storage medium. The "non-transitory" storage medium is not embodied in a carrier wave or a propagated signal. However, the term "non-transitory" should not be interpreted to mean that the memory 304 is non-movable. In some examples, the memory 304 may be configured to store larger amounts of information. In certain examples, a non-transitory storage medium may store data that can, over time, change (e.g., in Random Access Memory (RAM) or cache). The memory 304 may be an internal storage unit or an external storage unit of the application server 106, cloud storage, or any other type of external storage. In certain examples, the memory 304 configured as the non-transitory storage medium may include hard drives, solid-state drives, flash drives, Compact Disk (CD), Digital Video Disk (DVD), and the like. Further, the memory 304 may include any type of non-transitory storage medium, without deviating from the scope of the present disclosure.
[0057] More specifically, the memory 304 may store computer-readable instructions 304A including instructions that, when executed by a processor (e.g., the processor 302), cause the application server 106 to perform various functions described herein. In some cases, the memory 304 may contain, among other things, a BIOS which may control basic hardware or software operation such as the interaction with peripheral components or devices.
[0058] The communication unit 306 may be configured to enable the application server 106 to communicate with various entities of the wireless communication network 100 via the network 102. Examples of the communication unit 306 may include, but are not limited to, a network interface such as an Ethernet card, a communication port, and/or a Personal Computer Memory Card International Association (PCMCIA) slot and card, an antenna, a radio frequency (RF) transceiver, one or more amplifiers, a tuner, one or more oscillators, a digital signal processor, a coder-decoder (CODEC) chipset, a subscriber identity module (SIM) card, and a local buffer circuit.
[0059] The I/O interface 308 may include suitable logic, circuitry, interfaces, and/or codes that may be configured to receive input(s) and present (or display) output(s) on the application server 106. For example, the I/O interface 308 may have an input interface and an output interface. The input interface may be configured to enable a user to provide input(s) to trigger (or configure) the application server 106 to perform various operations for extracting optimized 5G or 6G data from across the plurality of databases which are synchronized in real time or near real time, such as, but not limited to, configuring the application server 106 to receive one or more user queries to extract the 5G or 6G data. Examples of the input interface may include, but are not limited to, a touch interface, a mouse, a keyboard, a motion recognition unit, a gesture recognition unit, a voice recognition unit, or the like. Aspects of the present disclosure are intended to include or otherwise cover any type of input interface including known, related art, and/or later developed technologies without deviating from the scope of the present disclosure. The output interface is configured to control the user device 104 to display a notification to an end user. Examples of the output interface of the I/O interface 308 may include, but are not limited to, a digital display, an analog display, a touch screen display, an appearance of a desktop, and/or illuminated characters.
[0060] The one or more relational databases 206 (also referred to as Relational Database Management Systems (RDBMS)) are database systems that store and retrieve data in the form of tables organized in rows and columns. The one or more relational databases 206 establish relationships between various data points of the tables based on data common to each table. The one or more relational databases 206 use SQL queries to access the data. In a non-limiting example, the one or more relational databases 206 may include any SQL database. The one or more relational databases 206 may also be referred to as "one or more source databases 206" or "at least one source database 206" throughout the disclosure.
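By way of a minimal, non-limiting illustration of the row-and-column storage and SQL-based retrieval described above, the following Python sketch uses the standard-library sqlite3 module as a stand-in relational database; the table name, columns, and sample rows are hypothetical:

```python
import sqlite3

# In-memory SQLite database standing in for a relational (SQL) source database.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE equipment (id INTEGER PRIMARY KEY, name TEXT, status_code TEXT)"
)
conn.executemany(
    "INSERT INTO equipment (name, status_code) VALUES (?, ?)",
    [("fiber-splice-01", "INSTALLED"), ("fiber-splice-02", "IN_STOCK")],
)
conn.commit()

# Data is accessed with an SQL query over rows and columns.
rows = conn.execute(
    "SELECT id, name FROM equipment WHERE status_code = ?", ("INSTALLED",)
).fetchall()
print(rows)  # [(1, 'fiber-splice-01')]
```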
[0061] The one or more non-relational databases 208 (also referred to as Not-only SQL (NoSQL) databases) are database systems that store data as individual, unconnected files and can be used for complex, unstructured data types, such as documents or rich media files. The one or more non-relational databases 208 have a peer-to-peer architecture, so multiple machines can be added to the architecture. The one or more non-relational databases 208 use a dynamic schema to query data. Further, the NoSQL database facilitates accessing the same data simultaneously through different machines from different geographical zones because the one or more non-relational databases 208 are shared globally. The one or more non-relational databases 208 may also be referred to as "one or more target databases 208" or "at least one target database 208" throughout the disclosure.
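The dynamic-schema property mentioned above can be illustrated with a hypothetical sketch in which a list of Python dicts stands in for a NoSQL document collection; the field names and the `find` helper are illustrative assumptions, not an actual NoSQL client API:

```python
# A list of dicts stands in for a NoSQL document collection: unlike a
# relational table, documents need not share a fixed schema.
collection = [
    {"_id": 1, "name": "fiber-splice-01", "status_code": "INSTALLED"},
    {"_id": 2, "name": "site-photo-07", "media": {"type": "jpeg", "kb": 412}},
]

def find(collection, **criteria):
    """Minimal query helper: return documents matching all given fields."""
    return [
        doc for doc in collection
        if all(doc.get(k) == v for k, v in criteria.items())
    ]

print(find(collection, status_code="INSTALLED"))  # one matching document
```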
[0062] The event streaming platform 210 may collect, process, store, and integrate data at scale. The event streaming platform 210 is used for distributed streaming, stream processing, data integration, and pub/sub messaging. The event streaming platform 210 stores data as a series of events, which may be any type of action, incident, or change that is identified or recorded by software or applications.
[0063] The processing module(s) 112 may be implemented as a combination of hardware and programming (for example, programmable instructions) to implement one or more functionalities of the application server 106. In the non-limiting examples described herein, such combinations of hardware and programming may be implemented in several different ways. For example, the programming for the processing module(s) 112 may be processor-executable instructions stored on a non-transitory machine-readable storage medium and the hardware for the processor 302 may comprise a processing resource (for example, one or more processors) to execute such instructions. In the present examples, the machine-readable storage medium may store instructions that, when executed by the processing resource, implement the processing module(s) 112. In such examples, the application server 106 may also comprise the machine-readable storage medium storing the instructions and the processing resource to execute the instructions, or the machine-readable storage medium may be separate but accessible to the application server 106 and the processing resource. In other examples, the processing module(s) 112 may be implemented using electronic circuitry.
[0064] In one or more embodiments, the processing module(s) 112 may include a pipeline creation engine 310, an execution engine 312, a receiving module 314, a query optimization engine 316, a data retrieval engine 318, and a data monitoring engine 320.
[0065] In one or more aspects, the processor 302, using the pipeline creation engine 310, may establish a connection between the plurality of databases including the one or more source databases 206 and the one or more target databases 208 by configuring a set of communication protocols. For instance, the set of communication protocols include one or more predefined procedures for data exchange and synchronization of data. In a non-limiting example, the set of communication protocols may be configured using open-source libraries or native database protocols to connect the plurality of databases and fetch the required data. The data may correspond to 5G and 6G data that are stored across the plurality of databases in real time. In a non-limiting example, the data may be Fiber structure and equipment data or application data from various applications. The connection may be a direct connection using native connectors or database connectors.
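The connection establishment described above can be sketched, in a non-limiting and hypothetical form, as a pair of configuration-driven connect functions. Here the "docstore" protocol, the configuration keys, and the dict-based target store are illustrative assumptions; a real deployment would use native database connectors or open-source client libraries:

```python
import sqlite3

# Hypothetical communication-protocol configurations for a source and a
# target database; keys and values are illustrative only.
SOURCE_CONFIG = {"protocol": "sqlite", "dsn": ":memory:"}
TARGET_CONFIG = {"protocol": "docstore", "dsn": "memory://target"}

def connect_source(config):
    """Open a connection to the source (relational) database."""
    if config["protocol"] != "sqlite":
        raise ValueError("unsupported source protocol")
    return sqlite3.connect(config["dsn"])

def connect_target(config):
    """Open a 'connection' to the target; a plain dict stands in for a
    document store reachable over a native protocol."""
    return {"documents": [], "dsn": config["dsn"]}

source = connect_source(SOURCE_CONFIG)
target = connect_target(TARGET_CONFIG)
print(type(source).__name__, target["dsn"])
```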
[0066] Further, the processor 302, using the pipeline creation engine 310, may create a pipeline using the source connector 202 and the sink connector 204 upon the establishment of the connection between the plurality of databases. The pipeline is created to synchronize or manage the data between the one or more source databases 206 and the one or more target databases 208 in real time or in a distributed manner. For instance, the pipeline may facilitate to-and-fro transfer of the data across the plurality of databases to leverage the strengths of each data source.
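A minimal, non-limiting sketch of such a pipeline follows; the `SourceConnector`, `SinkConnector`, and `Pipeline` classes are hypothetical stand-ins for the source connector 202, the sink connector 204, and the pipeline, with in-memory lists in place of actual databases:

```python
class SourceConnector:
    """Reads records from the source database (a list stands in for a table)."""
    def __init__(self, rows):
        self.rows = rows
    def read(self):
        yield from self.rows

class SinkConnector:
    """Writes records into the target database (a plain list here)."""
    def __init__(self, store):
        self.store = store
    def write(self, record):
        self.store.append(record)

class Pipeline:
    """Moves every record from source to sink, optionally transforming it."""
    def __init__(self, source, sink, transform=lambda r: r):
        self.source, self.sink, self.transform = source, sink, transform
    def run(self):
        for record in self.source.read():
            self.sink.write(self.transform(record))

target_store = []
Pipeline(SourceConnector([(1, "a"), (2, "b")]), SinkConnector(target_store)).run()
print(target_store)  # [(1, 'a'), (2, 'b')]
```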
[0067] In one or more aspects, upon establishing the connection between the plurality of databases or creating the pipeline between the plurality of databases, the processor 302, using the execution engine 312, may perform one or more data transformation operations on data in the one or more source databases 206. The data transformation operations are performed to align the structure of the data with the one or more target databases 208. In a non-limiting example, the data in the one or more source databases 206 may be in a raw format, or not in the format required for an end-user use case. In another non-limiting example, when the data is moved from one database to another database, the structure of the data may not be aligned. For example, in SQL databases the data may be structured as tables (e.g., spreadsheet-like), whereas in NoSQL databases the data may be in a JSON format, a CSV format, or any other format, so some data transformation is needed. Therefore, the one or more data transformation operations are based on the structure of data supported by the one or more source databases 206 and the one or more target databases 208.
[0068] In a non-limiting example, the data transformation operations may include at least one of data manipulation, data normalization, restructuring the data, adding additional metadata, attribute construction, data generalization, data discretization, data aggregation, or smoothing the data.
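As a non-limiting illustration of two of the operations listed above (restructuring the data and adding additional metadata), the following hypothetical sketch converts a relational row into a JSON document suitable for a document-oriented target; the column names and metadata fields are assumptions:

```python
import json

# Illustrative source-table schema (column order assumed).
COLUMNS = ("id", "name", "status_code")

def transform_row(row):
    """Restructure a relational row into a JSON document and attach
    additional metadata, aligning it with a document-store target."""
    doc = dict(zip(COLUMNS, row))
    doc["_meta"] = {"source_table": "equipment", "schema_version": 1}
    return json.dumps(doc)

doc = transform_row((1, "fiber-splice-01", "INSTALLED"))
print(doc)
```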
[0069] In one or more aspects, the processor 302, using the execution engine 312, may further synchronize the data between the one or more source databases 206 and the one or more target databases 208 after performing the one or more data transformation operations. The processor 302 may synchronize the data in real time or near real time. Here, the synchronization of the data may involve writing data to a destination database among the one or more target databases 208 without any loss from a source database among the one or more source databases 206. Depending on the end user use case, the synchronization may be implemented as either a full data transfer or an incremental push, limiting the transfer to a specific set of tables or collections.
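The distinction between a full data transfer and an incremental push can be sketched as follows; this is a hypothetical, in-memory illustration in which rows are keyed by their first element (a primary key), which is an assumption rather than part of the disclosed method:

```python
def full_sync(source_rows, target):
    """Full transfer: replace the target contents with every source row
    (scoped, e.g., to a specific table or collection)."""
    target.clear()
    target.extend(source_rows)

def incremental_sync(source_rows, target):
    """Incremental push: copy only rows not yet present in the target,
    keyed here by the first field (assumed primary key)."""
    existing = {row[0] for row in target}
    for row in source_rows:
        if row[0] not in existing:
            target.append(row)

target = []
full_sync([(1, "a"), (2, "b")], target)
incremental_sync([(1, "a"), (2, "b"), (3, "c")], target)
print(target)  # [(1, 'a'), (2, 'b'), (3, 'c')]
```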
[0070] In one or more aspects, when the data has been transferred from the one or more source databases 206 to the one or more target databases 208, there may later be a requirement to perform an incremental push for real-time or near-real-time synchronization. For instance, the processor 302, using the data monitoring engine 320, may monitor changes in the data in the one or more source databases 206 by identifying any change event in the one or more source databases 206. Thereafter, the processor 302, using the execution engine 312 and the pipeline, may update the data into the one or more target databases 208 via the sink connector 204 based on the monitored changes in the data to synchronize the data between the plurality of databases in real time or near real time. For instance, updating the data may correspond to reflecting the changes in the data into the one or more target databases 208 by copying the data into the one or more target databases 208. This may ensure that the data is kept up to date across both database systems, facilitating real-time and near-real-time data availability for applications.
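A non-limiting, hypothetical sketch of this change-monitoring and updating step follows. For simplicity it derives change events by comparing two snapshots keyed by primary key; production systems would more typically read the source database's change log (change data capture) instead:

```python
def detect_changes(previous, current):
    """Derive insert/update/delete events by comparing two snapshots of
    the source data, each a dict keyed by primary key."""
    events = []
    for key, row in current.items():
        if key not in previous:
            events.append(("insert", key, row))
        elif previous[key] != row:
            events.append(("update", key, row))
    for key in previous.keys() - current.keys():
        events.append(("delete", key, None))
    return events

def apply_events(events, target):
    """Sink-connector side: replay the change events into the target store
    so the target mirrors the source."""
    for op, key, row in events:
        if op == "delete":
            target.pop(key, None)
        else:
            target[key] = row

target = {1: "old"}
events = detect_changes({1: "old", 2: "gone"}, {1: "new", 3: "added"})
apply_events(events, target)
print(target)
```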
[0071] Once the data is synchronized, the best use case of a particular database can be implemented, and for that purpose the 5G and 6G queries for the use case may be optimized using one or more query optimization techniques. Here, a top data layer may be implemented to allow the client application to connect to the databases using standardized APIs. Through REST APIs, applications may access the data stored in a destination database for performing analytics on the data. In an embodiment, the top data layer may be used for the end-user use case, for example but not limited to, advanced search and data visualization.
[0072] In one or more aspects, the processor 302, using the receiving module 314, may receive one or more user queries (5G or 6G queries) associated with the end-user use case. The one or more user queries may be received in response to a user operation for the advanced search and data visualization. In one or more embodiments, the end-user use case may be searching equipment based on an inventory status code in Fiber structure and equipment data stored in a relational database. To achieve the best possible or quickest search, the data is migrated into a non-relational database, and the search is then performed there. The query here may be an SQL-based query to search the equipment.
[0073] Further, the processor 302, using the query optimization engine 316, may implement the one or more query optimization techniques to optimize the one or more user queries. For instance, selection of the one or more query optimization techniques is based on the one or more target databases 208 and the end user use case.
[0074] The one or more query optimization techniques include at least one of heuristic algorithms, cost-based algorithms, or rule-based algorithms. The heuristic algorithms may apply predefined rules and guidelines, push down predicates, apply algebraic transformations, and eliminate redundant operations. Further, the cost-based algorithms may use mathematical and statistical models to estimate the costs and benefits of query execution plans, such as CPU cycles, disk I/Os, and network transfers. Further, the rule-based algorithms may select operations such as index scans or hash joins. The one or more query optimization techniques are applied to enhance the performance of both new and existing queries.
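One of the heuristic techniques named above, predicate pushdown, can be illustrated with a non-limiting Python sketch using the standard-library sqlite3 module; the table, data, and function names are hypothetical. Both plans return the same result, but the pushed-down form lets the database engine (and any index on the filtered column) do the filtering instead of materializing every row first:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE equipment (id INTEGER, status_code TEXT)")
conn.executemany(
    "INSERT INTO equipment VALUES (?, ?)",
    [(1, "INSTALLED"), (2, "IN_STOCK"), (3, "INSTALLED")],
)

def without_pushdown(status):
    # Scan every row, then filter in the application layer.
    rows = conn.execute("SELECT id, status_code FROM equipment").fetchall()
    return [r for r in rows if r[1] == status]

def with_pushdown(status):
    # Heuristic rewrite: push the predicate into the WHERE clause so the
    # database engine performs the filtering during the scan.
    return conn.execute(
        "SELECT id, status_code FROM equipment WHERE status_code = ?",
        (status,),
    ).fetchall()

assert without_pushdown("INSTALLED") == with_pushdown("INSTALLED")
print(with_pushdown("INSTALLED"))  # [(1, 'INSTALLED'), (3, 'INSTALLED')]
```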
[0075] Further, the processor 302, using the data retrieval engine 318, may extract optimized data from the one or more target databases 208 using the one or more optimized user queries. For instance, the optimized data may be the data best suited to the user queries.
[0076] FIG. 4 illustrates a flow chart of a method 400 for synchronizing the data across the plurality of databases, in accordance with an embodiment of the present disclosure. The method 400 comprises a series of operation steps indicated by blocks 402 through 410 performed by the system 200. The method 400 starts at block 402.
[0077] At block 402, the pipeline creation engine 310 may establish the connection between the plurality of databases including at least one source database 206 and at least one target database 208 by configuring the set of communication protocols. For instance, the connection is established to create the pipeline using the source connector 202 and the sink connector 204 to synchronize data between the at least one source database 206 and the at least one target database 208.
[0078] At block 404, the execution engine 312 may perform, upon creating the pipeline between the plurality of databases, the one or more data transformation operations on the data in the at least one source database 206 to align structure of the data with the at least one target database 208.
[0079] At block 406, the execution engine 312 may synchronize the data between the at least one source database 206 and the at least one target database 208 after performing the one or more data transformation operations.
[0080] At block 408, the query optimization engine 316 may implement the one or more query optimization techniques to optimize the one or more user queries for extracting the optimized data. The one or more query optimization techniques are selected based on the at least one target database 208 and the end user use case.
[0081] At block 410, the data retrieval engine 318 may extract the optimized data from the at least one target database 208 using the one or more optimized user queries.
[0082] FIG. 5 illustrates a schematic block diagram of a computing system 500 for synchronizing the data across the plurality of databases, in accordance with an embodiment of the present disclosure.
[0083] The computing system 500 includes a network 502, a network interface 504, a processor 506 (similar in functionality to the processor 302 of FIG. 3), an Input/Output (I/O) interface 508 (similar in functionality to the I/O interface 308 of FIG. 3), and a non-transitory computer readable storage medium 510 (hereinafter may also be referred to as the “storage medium 510” or the “storage media 510”). The network interface 504 includes wireless network interfaces such as Bluetooth, Wi-Fi, Worldwide Interoperability for Microwave Access (WiMAX), General Packet Radio Service (GPRS), or Wideband Code Division Multiple Access (WCDMA) or wired network interfaces such as Ethernet, Universal Serial Bus (USB), or Institute of Electrical and Electronics Engineers-864 (IEEE-864).
[0084] The processor 506 may include various processing circuitry/modules and communicate with the storage medium 510 and the I/O interface 508. The processor 506 is configured to execute instructions stored in the storage medium 510 and to perform various processes. The processor 506 may include an intelligent hardware device including a general-purpose processor, such as, for example, and without limitation, the CPU, the AP, the dedicated processor, or the like, the graphics-only processing unit such as the GPU, the microcontroller, the FPGA, the programmable logic device, the discrete hardware component, or any combination thereof. The processor 506 may be configured to execute computer-readable instructions 510-1 stored in the storage medium 510 to cause the computing system 500 to perform various functions disclosed throughout the disclosure.
[0085] The storage medium 510 stores a set of instructions, i.e., computer program instructions 510-1 (hereinafter may also be referred to as instructions 510-1), required by the processor 506 for controlling its overall operations. The storage media 510 may include an electronic storage medium, a magnetic storage medium, an optical storage medium, a quantum storage medium, or the like. For example, the storage media 510 may include, but are not limited to, hard drives, floppy diskettes, optical disks, ROMs, RAMs, EPROMs, EEPROMs, flash memory, magnetic or optical cards, solid-state memory devices, or other types of physical media suitable for storing electronic instructions. In one or more embodiments, the storage media 510 includes a Compact Disk-Read Only Memory (CD-ROM), a Compact Disk-Read/Write (CD-R/W), and/or a Digital Video Disc (DVD). In one or more implementations, the storage medium 510 stores computer program code configured to cause the computing system 500 to perform at least a portion of the processes and/or methods disclosed throughout the disclosure.
[0086] Embodiments of the present disclosure have been described above with reference to flowchart illustrations of methods and systems according to embodiments of the disclosure, and/or procedures, algorithms, steps, operations, formulae, or other computational depictions, which may also be implemented as computer program products. In this regard, each block or step of the flowchart, and combinations of blocks (and/or steps) in the flowchart, as well as any procedure, algorithm, step, operation, formula, or computational depiction can be implemented by various means, such as hardware, firmware, and/or software including one or more computer program instructions embodied in computer-readable program code. As will be appreciated, any such computer program instructions may be executed by one or more computer processors, including without limitation a general-purpose computer or special purpose computer, or other programmable processing apparatus to perform a group of operations comprising the operations or blocks described in connection with the disclosed method.
[0087] Further, these computer program instructions, such as embodied in computer-readable program code, may also be stored in one or more computer-readable memory or memory devices (for example, the memory 304 or the storage medium 510) that can direct a computer processor or other programmable processing apparatus to function in a particular manner, such that the instructions 510-1 stored in the computer-readable memory or memory devices produce an article of manufacture including instruction means which implement the function specified in the block(s) of the flowchart(s).
[0088] It will further be appreciated that the term “computer program instructions” as used herein refer to one or more instructions that can be executed by the one or more processors (for example, the processor 302 or the processor 506) to perform one or more functions as described herein. The instructions 510-1 may also be stored remotely such as on a server, or all or a portion of the instructions can be stored locally and remotely.
[0089] Now, referring to the technical abilities and advantageous effects of the present disclosure, operational advantages that may be provided by one or more embodiments may include providing the system and the method for creating the pipeline to synchronize the data between the plurality of databases, thereby enhancing scalability and flexibility, and allowing applications to leverage the strengths of different database technologies based on the specific use cases and requirements. Another noteworthy advantage provided by the one or more embodiments may include, but is not limited to, providing the system and the method that provide a unified, efficient, and scalable approach to work with diverse databases, thereby benefiting developers, application users, and business end-users alike. Another advantage provided by the one or more embodiments may include facilitating optimizing queries for specific database types, taking advantage of database-specific query optimizations and execution plans, thereby leading to improved data extraction efficiency in real-time or near-real-time applications by way of increased query performance and efficiency when accessing data from different types of databases.
[0090] Those skilled in the art will appreciate that the methodology described herein in the present disclosure may be carried out in other specific ways than those set forth herein in the above disclosed embodiments without departing from essential characteristics and features of the present invention. The above-described embodiments are therefore to be construed in all aspects as illustrative and not restrictive.
[0091] The drawings and the foregoing description give examples of embodiments. Those skilled in the art will appreciate that one or more of the described elements may well be combined into a single functional element. Alternatively, certain elements may be split into multiple functional elements. Elements from one embodiment may be added to another embodiment. For example, orders of processes described herein may be changed and are not limited to the manner described herein. Any combination of the above features and functionalities may be used in accordance with one or more embodiments.
[0092] In the present disclosure, each of the embodiments has been described with reference to numerous specific details which may vary from embodiment to embodiment. The foregoing description of the specific embodiments disclosed herein may reveal the general nature of the embodiments herein such that others may, by applying current knowledge, readily modify and/or adapt such specific embodiments for various applications without departing from the generic concept, and, therefore, such adaptations and modifications are intended to be comprehended within the meaning of the disclosed embodiments. It is to be understood that the phraseology or terminology employed herein is for the purpose of description and not of limitation.
LIST OF REFERENCE NUMERALS
[0093] The following list is provided for convenience and in support of the drawing figures and as part of the text of the specification, which describe innovations by reference to multiple items. Items not listed here may nonetheless be part of a given embodiment. For better legibility of the text, a given reference number is recited near some, but not all, recitations of the referenced item in the text. The same reference number may be used with reference to different examples or different instances of a given item. The list of reference numerals is:
100 - Wireless communication network
102 - Network
104 - User device
106 – Application Server
108 - Database
110 – Web server
112 – Processing modules
114 – Other devices
200 - system for synchronizing data across a plurality of databases
202 – Source connector
204 – Sink connector
206 – Relational databases
208 – Non-relational databases
210 – Event streaming platform
300 - Block diagram of the system 200
302 - Processor
304 - Memory
304A - Set of instructions
306 – Communication Unit
308 - Input-Output (I/O) interface
310 - Pipeline creation engine
312 - Execution engine
314 - Receiving module
316 - Query optimization engine
318 - Data retrieval engine
320 - Data monitoring engine
322 - Communication bus
400 - Method for synchronizing the data across the plurality of databases
402-410 - Operation steps of the method 400
500 – Block diagram of a computing system
502 – Network
504 – Network interface
506 – Processor
508 – Input/Output (I/O) interface
510 – Non-transitory computer readable storage medium
510-1 - Set of instructions
CLAIMS:
I/We Claim:

1. A method (400) for synchronizing data across a plurality of databases, the method (400) comprising:
establishing, by a pipeline creation engine (310), a connection between the plurality of databases including at least one source database and at least one target database by configuring a set of communication protocols;
performing, by an execution engine (312) upon establishing the connection between the plurality of databases, one or more data transformation operations on data in the at least one source database to align structure of the data with the at least one target database; and
synchronizing, by the execution engine (312), the data between the at least one source database and the at least one target database after performing the one or more data transformation operations.

2. The method (400) as claimed in claim 1, further comprising:
receiving, by a receiving module (314), one or more user queries associated with an end user use case;
implementing, by a query optimization engine (316), one or more query optimization techniques to optimize the one or more user queries, wherein the one or more query optimization techniques are selected based on the at least one target database and the end user use case; and
extracting, by a data retrieval engine (318), optimized data from the at least one target database using the one or more optimized user queries.

3. The method (400) as claimed in claim 1, further comprising creating, by the pipeline creation engine (310) upon the establishment of the connection between the plurality of databases, a pipeline using a source connector and a sink connector to synchronize the data between the at least one source database and the at least one target database.

4. The method (400) as claimed in claim 3, further comprising:
monitoring, by a data monitoring engine (320), changes in the data in the at least one source database; and
updating, by the execution engine (312) using the pipeline, the data into the at least one target database via the sink connector based on the monitored changes in the data to synchronize the data between the plurality of databases in real time or near real time.

5. The method (400) as claimed in claim 1, wherein
the set of communication protocols are configured using one or more open-source libraries, and
the set of communication protocols include one or more predefined procedures for data exchange and the synchronization of the transformed data.

6. The method (400) as claimed in claim 1, wherein the plurality of databases includes one or more of Structured Query Language (SQL) databases or Not-only SQL (NoSQL) databases.

7. The method (400) as claimed in claim 1, wherein
the one or more data transformation operations are based on structure of data supported by the at least one source database and the at least one target database, and
the data transformation operations include at least one of data manipulation, data normalization, restructuring the data, adding additional metadata, attribute construction, data generalization, data discretization, data aggregation, or smoothing the data.

8. A system (200) for synchronizing data between a plurality of databases, the system (200) comprising:
a pipeline creation engine (310) configured to establish a connection between the plurality of databases including at least one source database and at least one target database by configuring a set of communication protocols; and
an execution engine (312) configured to:
perform, upon establishing the connection between the plurality of databases, one or more data transformation operations on data in the at least one source database to align structure of the data with the at least one target database; and
synchronize the data between the at least one source database and the at least one target database after performing the one or more data transformation operations.

9. The system (200) as claimed in claim 8, further comprising:
a receiving module (314) configured to receive one or more user queries associated with an end user use case;
a query optimization engine (316) configured to implement one or more query optimization techniques to optimize the one or more user queries, wherein the one or more query optimization techniques are selected based on the at least one target database and the end user use case; and
a data retrieval engine (318) configured to extract optimized data from the at least one target database using the one or more optimized user queries.

10. The system (200) as claimed in claim 8, wherein the pipeline creation engine (310) is further configured to create, upon the establishment of the connection between the plurality of databases, a pipeline using a source connector and a sink connector to synchronize the data between the at least one source database and the at least one target database.

11. The system (200) as claimed in claim 10, further comprising:
a data monitoring engine (320) configured to monitor changes in the data in the at least one source database, wherein
the execution engine (312) is further configured to update, using the pipeline, the data into the at least one target database via the sink connector based on the monitored changes in the data to synchronize the data between the plurality of databases in real time or near real time.

12. The system (200) as claimed in claim 8, wherein
the set of communication protocols are configured using one or more open-source libraries, and
the set of communication protocols include one or more predefined procedures for data exchange and the synchronization of the transformed data.

13. The system (200) as claimed in claim 8, wherein the plurality of databases includes one or more of Structured Query Language (SQL) databases or Not-only SQL (NoSQL) databases.

14. The system (200) as claimed in claim 8, wherein
the one or more data transformation operations are based on structure of data supported by the at least one source database and the at least one target database, and
the data transformation operations include at least one of data manipulation, data normalization, restructuring the data, adding additional metadata, attribute construction, data generalization, data discretization, data aggregation, or smoothing the data.

Documents

Application Documents

# Name Date
1 202421034732-STATEMENT OF UNDERTAKING (FORM 3) [01-05-2024(online)].pdf 2024-05-01
2 202421034732-PROVISIONAL SPECIFICATION [01-05-2024(online)].pdf 2024-05-01
3 202421034732-POWER OF AUTHORITY [01-05-2024(online)].pdf 2024-05-01
4 202421034732-FORM 1 [01-05-2024(online)].pdf 2024-05-01
5 202421034732-DRAWINGS [01-05-2024(online)].pdf 2024-05-01
6 202421034732-DECLARATION OF INVENTORSHIP (FORM 5) [01-05-2024(online)].pdf 2024-05-01
7 202421034732-Proof of Right [07-08-2024(online)].pdf 2024-08-07
8 202421034732-ORIGINAL UR 6(1A) FORM 1-060325.pdf 2025-03-10
9 202421034732-Request Letter-Correspondence [08-04-2025(online)].pdf 2025-04-08
10 202421034732-Power of Attorney [08-04-2025(online)].pdf 2025-04-08
11 202421034732-Form 1 (Submitted on date of filing) [08-04-2025(online)].pdf 2025-04-08
12 202421034732-Covering Letter [08-04-2025(online)].pdf 2025-04-08
13 202421034732-FORM 18 [30-04-2025(online)].pdf 2025-04-30
14 202421034732-DRAWING [30-04-2025(online)].pdf 2025-04-30
15 202421034732-CORRESPONDENCE-OTHERS [30-04-2025(online)].pdf 2025-04-30
16 202421034732-COMPLETE SPECIFICATION [30-04-2025(online)].pdf 2025-04-30
17 Abstract.jpg 2025-05-28