Abstract: The embodiments of the present invention provide a method and apparatus for optimizing redo processing, where the method comprises: extracting (S301) redo data from a redo buffer on a subscriber device; and performing (S302) redo processing on the redo data in parallel by invoking a plurality of threads. According to the method and apparatus, the efficiency of redo processing can be enhanced, and/or the time taken to ensure availability and persistency of redo data on the subscriber device can be reduced. FIGURE 3
FIELD OF THE INVENTION
The present invention relates to databases, and in particular to a method and apparatus for optimizing redo processing.
BACKGROUND OF THE INVENTION
Availability is one of the main concerns when a database is deployed over a network. One of the methods to achieve availability is to use a hot-standby mechanism which supports failover from a main database server deployed in a site to a secondary database server which may or may not reside in the same site. The main database server is often termed the master device, and the secondary database server is termed the subscriber device. As shown in Figure 1 and Figure 2, the master device 101 and the subscriber device 102 may be directly connected to one another (as shown in Figure 1), or there might be intermediate servers 103 between them (as shown in Figure 2). These intermediate servers are termed propagators, and they form a part of the availability architecture to reduce the load on the master device. Irrespective of the configuration, the subscriber device maintains a mirror image of the data. In most cases, applications perform write operations on the master device and read operations on the subscriber device, so as to distribute the application load between the master device and the subscriber device; in those cases the subscriber device is generally a read-only server.
Although the subscriber device is logically similar to the master device, because the data is mirrored over the network, the subscriber device might be a few transactions behind the master device. This is termed lag. Whenever the master device fails, the next candidate for master device is the server with the least lag relative to the failed master device.
Mirroring of data is often termed replication. One way to do this is to create a redo-log of transaction operations for each transaction run on the master device and transfer the redo-log to the propagator/subscriber device over the network. A redo-log is a record of all data items written to the server, such as inserts, deletes and updates; it is a sequentially ordered log of the operations performed on the database. The redo-log is replayed on the subscriber device, in a fashion similar to database recovery, in order to make the subscriber device's data match the master device's data.
Whenever the master device fails and the subscriber device is to be promoted to be the new master device, all redo-logs which have not yet been replayed on the subscriber device are replayed to bring its state as close as possible to the state of the master device before the failure, and then the subscriber device is promoted to master device. Speeding up this process increases the availability of the database over the network and reduces application downtime: without the master database the application cannot perform write operations, so that part of the application's service is unavailable to the user.
The existing prior art falls mainly into two categories. On the master device side, there are solutions which aim mainly at speeding up replication by parallelizing the redo-log writing and shipping it over the network. On the subscriber device side, there are solutions that work out which transactions can be run in parallel without risking the consistency of the data; this is done by introducing a specific set of rules that evaluate which of multiple transactions can be run in parallel.
It should be noted that the above introduction to the background art is given for a clear and complete description of the technical solution of the present invention and to aid the understanding of those skilled in the art. The above technical solutions should not be deemed to be known to those skilled in the art merely because they have been described in the background art of the present invention.
SUMMARY OF THE INVENTION
Embodiments of the present invention provide a method and apparatus for optimizing redo processing, so as to enhance the efficiency of redo processing and reduce the time taken to ensure availability and persistency of redo data on a subscriber device.
According to an aspect of the present invention, there is provided a method for optimizing redo processing, the method including: extracting redo data from a redo buffer on a subscriber device; and performing redo processing on the redo data in parallel by invoking a plurality of threads.
According to another aspect of the present invention, there is provided an apparatus for optimizing redo processing, the apparatus including: an extracting unit configured to extract redo data from a redo buffer on a subscriber device; and a performing unit configured to perform redo processing on the redo data in parallel by invoking a plurality of threads.
According to still another aspect of the present invention, there is provided a subscriber device, wherein the subscriber device includes a redo buffer and the apparatus as described above.
According to still another aspect of the present invention, there is provided a database server, the database server includes: a processor and a memory coupled to the processor; wherein the processor is configured to: extract redo data from a redo buffer in the memory on the database server; and perform redo processing on the redo data in parallel by invoking a plurality of threads.
The advantages of the embodiments of the present invention lie in that, according to the method and apparatus, redo processing on redo data is performed in parallel, such that the efficiency of the redo processing can be enhanced, and/or the time taken to ensure availability and persistency of the redo data on the subscriber device can be reduced.
Particular embodiments of the present invention will be described in detail below with reference to the following description and the attached drawings, which point out the ways in which the principle of the present invention may be employed. It should be understood that the implementation of the present invention is not limited thereto in scope. Rather, the invention includes all changes, modifications and equivalents coming within the spirit and terms of the appended claims.
Features that are described and/or illustrated with respect to one embodiment may be used in the same way or in a similar way in one or more other embodiments and/or in combination with or instead of the features of the other embodiments.
It should be emphasized that the term "comprises/comprising" when used in this specification is taken to specify the presence of stated features, integers, steps or components but does not preclude the presence or addition of one or more other features, integers, steps, components or groups thereof.
BRIEF DESCRIPTION OF THE ACCOMPANYING DRAWINGS
The drawings are included to provide further understanding of the present invention; they constitute a part of the specification, illustrate the preferred embodiments of the present invention, and together with the description serve to set forth the principles of the present invention. The same element is represented with the same reference number throughout the drawings.
In the drawings:
Figure 1 is a schematic diagram of master-subscriber topology;
Figure 2 is a schematic diagram of master-propagator-subscriber topology;
Figure 3 is a flowchart of a method for optimizing redo processing according to one embodiment of the present invention;
Figure 4 is a flowchart of a method for redo buffer initialization;
Figure 5 is a flowchart of a method for replication;
Figure 6 is a flowchart of a method for redo processing;
Figure 7 is a flowchart of a method for redo flushing;
Figure 8 is a flowchart of a method for redo replaying;
Figure 9 is a flowchart of a method for optimizing redo processing according to another embodiment of the present invention;
Figure 10a is a schematic structure diagram of an apparatus for optimizing redo processing according to one embodiment of the present invention;
Figure 10b is a schematic structure diagram of an apparatus for optimizing redo processing according to another embodiment of the present invention;
Figure 10c is a schematic structure diagram of an apparatus for optimizing redo processing according to still another embodiment of the present invention; and
Figure 11 is a schematic structure diagram of a database server according to an embodiment of the present invention.
DETAILED DESCRIPTION OF THE INVENTION
The many features and advantages of the embodiments are apparent from the detailed specification and, thus, it is intended by the appended claims to cover all such features and advantages of the embodiments that fall within the true spirit and scope thereof. Further, since numerous modifications and changes will readily occur to those skilled in the art, it is not desired to limit the inventive embodiments to the exact construction and operation illustrated and described, and accordingly all suitable modifications and equivalents may be resorted to, falling within the scope thereof.
In the present application, embodiments of the invention are described primarily in the context of a subscriber device and a replication agent. However, it shall be appreciated that the invention is not limited to the context of the subscriber device and the replication agent, and may relate to any type of database instance having the function of a read-only secondary site.
In order to replicate data using physical replication, redo data has to be sent from the master device to the subscriber device. On the subscriber device, the redo data should be applied to the database to make it available to the user. It should also be stored in a persistent format to enable recovery of the subscriber device in case of any failure. Until both of these operations are done, recovery cannot be guaranteed on the subscriber device.
This embodiment of the present invention aims to reduce the time taken to ensure availability and persistency of redo data on the subscriber device.
The method and apparatus according to the embodiments of the present invention will be described in detail in the following in connection with the figures.
Embodiment 1
The embodiment of the present invention provides a method for optimizing redo processing. Figure 3 is a flowchart of the method according to an embodiment of the present invention. As shown in figure 3, the method includes:
step 301: extracting redo data from a redo buffer on a subscriber device; and
step 302: performing redo processing on the redo data in parallel by invoking a plurality of threads.
In one implementation of the embodiment, the method can be achieved by a replication agent, and the redo processing includes replaying the redo data from the redo buffer to a database and flushing the redo data in the redo buffer to a redo file. The step 302 includes:
step 3021: replaying the redo data from the redo buffer to a database by invoking one of the plurality of threads; and
step 3022: simultaneously flushing the redo data in the redo buffer to a redo file by invoking another of the plurality of threads.
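The parallel invocation of steps 3021 and 3022 can be pictured with the following minimal sketch in Python. It is purely illustrative and not part of the embodiment itself; the helper names replay_to_database and flush_to_redo_file are hypothetical placeholders for the replay and flush operations described below, and the embodiment itself uses long-lived threads coordinated by events as shown in figures 6 to 9.

    import threading

    def replay_to_database(page):
        # Hypothetical placeholder: apply each redo record on the page to the database.
        pass

    def flush_to_redo_file(page):
        # Hypothetical placeholder: append the page to the persistent redo file.
        pass

    def process_redo_page(page):
        replay = threading.Thread(target=replay_to_database, args=(page,))  # step 3021
        flush = threading.Thread(target=flush_to_redo_file, args=(page,))   # step 3022
        replay.start()
        flush.start()
        # Whichever thread finishes first waits here for the other, so the
        # elapsed time is max(replay time, flush time) rather than their sum.
        replay.join()
        flush.join()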
Figure 4 is a flowchart of a method for initialization of the redo buffer. As shown in figure 4, first the subscriber device gets the configured page size (S401) and redo-buffer size (S402), then adjusts the size of the redo buffer to a multiple of the page size (S403), and organizes the redo buffer as a set of pages (S404), thereby completing the initialization.
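As an illustrative sketch only, the initialization of figure 4 could look roughly as follows; the configuration values page_size and redo_buffer_size (in bytes) and the rounding-down convention are assumptions for the example, not details taken from the embodiment.

    def init_redo_buffer(page_size, redo_buffer_size):
        # S401/S402: obtain the configured page size and redo-buffer size.
        # S403: adjust the buffer size to a whole number of pages
        # (rounding down here; rounding up would be an equally valid choice).
        num_pages = max(1, redo_buffer_size // page_size)
        # S404: organize the redo buffer as a set of fixed-size pages.
        return [bytearray(page_size) for _ in range(num_pages)]

    redo_buffer_pages = init_redo_buffer(page_size=4096, redo_buffer_size=1 << 20)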
In this implementation, redo data is received from the master device directly into the redo buffer on the subscriber device, and in step 301 the replication agent picks up the redo data from the redo buffer for replaying and for ensuring persistence.
Figure 5 is a flowchart of a method for receiving the redo packets from the master device for replication. As shown in figure 5, if the connection to the master device is valid (Yes for S501) and a redo packet is received from the master device (Yes for S502), the subscriber device judges whether space is available in the redo buffer (S503). If space is available, the subscriber device copies redo data from the redo packet to the redo buffer (S504) until all the redo data in the redo packet has been copied. If space is unavailable, the subscriber device waits for the redo-buffer-available event (S505); if the event is received (Yes for S506), the subscriber device copies the redo data from the redo packet to the redo buffer until all the redo data in the redo packet has been copied, and if the event is not received (No for S506), the subscriber device continues to wait for the event.
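A rough sketch of this receive path follows; the RedoBuffer class, the connection object and its methods are hypothetical stand-ins introduced only to make the flow of S501 to S506 concrete.

    import threading

    class RedoBuffer:
        # Hypothetical stand-in for the page-organized buffer of figure 4.
        def __init__(self, num_pages, page_size):
            self.free = [bytearray(page_size) for _ in range(num_pages)]
            self.pending = []                      # pages holding copied, not yet processed redo
            self.available = threading.Event()     # redo-buffer-available event (S505/S608)
            self.lock = threading.Lock()

        def has_space(self):
            return bool(self.free)

        def copy_in(self, data):
            # S504: copy up to one page of redo data; return whatever did not fit.
            with self.lock:
                page = self.free.pop()
                n = min(len(page), len(data))
                page[:n] = data[:n]
                self.pending.append(page)
            return data[n:]

    def receive_loop(connection, buf):
        while connection.is_valid():                   # S501: connection to the master still valid?
            packet = connection.receive_redo_packet()  # S502: redo packet received?
            data = packet.redo_data if packet else b""
            while data:
                if buf.has_space():                    # S503: space available in the redo buffer?
                    data = buf.copy_in(data)           # S504
                else:
                    buf.available.wait()               # S505/S506: wait for the buffer-available event
                    buf.available.clear()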
In this implementation, the replication agent is made up of at least two threads which work in parallel: one thread is used to replay the redo data from the redo buffer to a database, and another thread is used to flush the redo data in the redo buffer to a redo file to achieve persistence. In step 302, the replication agent replays the redo data from the redo buffer to the database by invoking the one thread and, meanwhile, flushes the redo data in the redo buffer to the redo file to achieve persistence by invoking the other thread.
Figure 6 is a flowchart of a method for the redo processing of step 302. As shown in figure 6, if redo data in the redo buffer is available (Yes for S601), the replication agent gets the first non-processed page of the redo buffer (S602) and the redo sequence number of the last redo log in that redo buffer page (S603), then generates a redo-process event carrying the redo sequence number (S604) and waits for the redo-flush event and the redo-replay event (S605). If the events are received (Yes for S606), the replication agent clears the processed redo from the redo buffer (S607) and generates the redo-buffer-available event (S608).
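Continuing the illustrative sketch, the redo-processing loop of figure 6 might be modelled as below. To keep the example simple, the redo-process event of S604 is represented by two queues that hand the page to the flush and replay workers; this is a simplification of the event-plus-sequence-number mechanism actually described, and the short sleep exists only to avoid a busy loop.

    import queue
    import threading
    import time

    flush_queue = queue.Queue()            # stands in for the redo-process event towards the flush thread
    replay_queue = queue.Queue()           # stands in for the redo-process event towards the replay thread
    redo_flush_done = threading.Event()    # redo-flush event (figure 7)
    redo_replay_done = threading.Event()   # redo-replay event (figure 8)

    def redo_processing_loop(buf):
        while True:
            if not buf.pending:                        # S601: is redo data available in the buffer?
                time.sleep(0.01)
                continue
            page = buf.pending[0]                      # S602: first non-processed page
            # S603: the redo sequence number of the last redo log on the page would be
            # read here and attached to the event; omitted in this sketch.
            flush_queue.put(page)                      # S604: generate the redo-process event
            replay_queue.put(page)
            redo_flush_done.wait()                     # S605/S606: wait for both completion events
            redo_replay_done.wait()
            redo_flush_done.clear()
            redo_replay_done.clear()
            with buf.lock:                             # S607: clear the processed redo from the buffer
                buf.pending.pop(0)
                buf.free.append(page)
            buf.available.set()                        # S608: generate the redo-buffer-available event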
Figure 7 is a flowchart of a method for redo flushing. As shown in figure 7, the redo-flushing thread waits for the redo-processing event carrying the redo sequence number (S701). If the event is received (Yes for S702), the redo-flushing thread reads the redo data (the first non-processed page) from the redo buffer (S703), flushes the redo to the redo file, and then generates the redo-flushing event.
Figure 8 is a flowchart of a method for redo replaying. As shown in figure 8, the redo-replaying thread waits for the redo-processing event carrying the redo sequence number (S801). If the event is received (Yes for S802), the redo-replaying thread replays the next redo record from the redo buffer (S803); if the replayed redo sequence number is the same as the redo sequence number of the redo-processing event (Yes for S804), the redo-replaying thread generates the redo-replaying event (S805).
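In the same illustrative sketch, the two workers of figures 7 and 8 become the following loops; parse_redo_records and database.apply are hypothetical placeholders, and the sequence-number check of S804 is reduced to "all records on the page have been applied".

    def redo_flush_thread(redo_file):
        while True:
            page = flush_queue.get()            # S701/S702: wait for the redo-processing event
            redo_file.write(bytes(page))        # S703: read the page and flush it to the redo file
            redo_file.flush()
            redo_flush_done.set()               # generate the redo-flushing event

    def parse_redo_records(page):
        # Hypothetical placeholder: iterate over the redo records stored on the page.
        return []

    def redo_replay_thread(database):
        while True:
            page = replay_queue.get()           # S801/S802: wait for the redo-processing event
            for record in parse_redo_records(page):
                database.apply(record)          # S803: replay the next redo record on the database
            # S804: the last redo sequence number on the page has now been replayed,
            # so generate the redo-replaying event (S805).
            redo_replay_done.set()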
In this implementation, for each redo-log received on the subscriber device, the replication agent is responsible for replaying the redo data on a database as well as storing it in a persistent medium for use in recovery. According to the embodiment, the replication agent invokes the two threads to perform these two tasks in parallel, and whichever of the two threads finishes first waits for the other thread to complete its execution. After both threads are done with their execution, control comes back to the subscriber device for further processing.
In this implementation, the total time for performing the redo processing is the processing time of the thread which takes the most time to finish its execution. For example, if one thread takes a time X to finish its work and the other thread takes a time Y to finish its work, the effective time taken by the replication agent to finish the two tasks is max(X, Y), that is, X if X > Y or Y if Y > X. Compared with the current method, which takes a time of X+Y to finish the two tasks, the total time is reduced and the performance of the replication agent can be improved greatly.
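Expressed as a formula, with an illustrative pair of numbers that are not taken from any measurement:

    T_serial = X + Y,    T_parallel = max(X, Y)

    e.g. X = 40 ms and Y = 30 ms give 70 ms serially but only 40 ms in parallel.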
For a better understanding of the implementation of the embodiment, the method of the embodiment of the present invention shall be described in detail with reference to figure 9.
Figure 9 is a flowchart of a method for performing the method of the embodiment of the present invention. Referring to figure 9, after receiving the redo data from the redo buffer, the replication agent generates a redo-receive event (S901) for invoking a redo-flush thread to flush the redo data to persistent storage and a redo-replay thread to replay the redo data in memory, and waits for the redo-flush event and the redo-replay event (S902). If the redo-flush event and the redo-replay event are received by the replication agent, the replication agent receives the next set of redo logs from the master device into the redo buffer (S903); otherwise, the replication agent continues to wait for the redo-flush event and the redo-replay event.
Referring again to figure 9, after receiving the redo-receive event generated by the replication agent (S904), the redo-flush thread writes the redo log to the file (S905), generates a redo-flush event (S906) to provide to the replication agent, and then waits for the next redo-receive event (S907). Similarly, after receiving the redo-receive event generated by the replication agent (S904'), the redo-replay thread replays the redo data on the specific memory page (S905'), generates a redo-replay event (S906') to provide to the replication agent, and then waits for the next redo-receive event (S907').
In another implementation of the embodiment, if the subscriber device is behaving as a propagator/cascading node (a source for other nodes) for replication, a third thread can be present which works to transfer the redo data from the propagator to another subscriber device, in parallel with the other two threads.
In this implementation, the redo processing includes replaying the redo data from the redo buffer to a database, flushing the redo data in the redo buffer to a redo file, and transferring the redo data to another subscriber device. Different from the above implementation, the replication agent is made up of three threads which work in parallel: one thread is used to replay the redo data from the redo buffer to a database, another thread is used to flush the redo data in the redo buffer to a redo file to achieve persistence, and still another thread is used to transfer the redo data to another subscriber device.
In this implementation, the step 302 includes:
step 3021': replaying the redo data from the redo buffer to a database by invoking one of the plurality of threads;
step 3022': simultaneously flushing the redo data in the redo buffer to a redo file by invoking another of the plurality of the threads; and
step 3023': simultaneously transferring the redo data to another subscriber device by invoking still another of the plurality of the threads.
In this implementation, the replication agent invokes one thread to replay the redo data from the redo buffer to a database, meanwhile invokes another thread to flush the redo data in the redo buffer to a redo file to achieve persistence, and meanwhile invokes still another thread to transfer the redo data to another subscriber device.
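Reusing the hypothetical helpers from the two-thread sketch after step 302, the three-thread variant of this implementation could be pictured as follows; transfer_to_subscriber and the downstream object are again illustrative placeholders rather than part of the embodiment.

    import threading

    def transfer_to_subscriber(page, downstream):
        # Hypothetical placeholder: forward the redo page to the next subscriber device.
        downstream.send(page)

    def process_redo_page_cascading(page, downstream):
        workers = [
            threading.Thread(target=replay_to_database, args=(page,)),                # step 3021'
            threading.Thread(target=flush_to_redo_file, args=(page,)),                # step 3022'
            threading.Thread(target=transfer_to_subscriber, args=(page, downstream)), # step 3023'
        ]
        for t in workers:
            t.start()
        for t in workers:
            # Control returns to the subscriber device only after all three threads finish.
            t.join()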
In the embodiment of the present invention, the method can be applied even to transactions that can run in parallel on the subscriber device.
In the embodiment of the present invention, the method can be used in cascading replication to let the subscriber device catch up faster with the propagator, based on the parallelization of redo application.
In the embodiment of the present invention, the method can be used in any form of homogeneous replication, such as asynchronous, semi-synchronous, and synchronous replication.
According to the method of the embodiment, redo processing on redo data is performed in parallel, such that the efficiency of redo processing can be enhanced, and/or the time taken to ensure availability and persistency of redo data on the subscriber device can be reduced.
Embodiment 2
This embodiment of the present invention further provides an apparatus for optimizing redo processing. This embodiment corresponds to the method of the above Embodiment 1, and the same content will not be described any further.
Figure 10a is a schematic diagram of the apparatus according to Embodiment 2 of the present invention. As shown in figure 10a, the apparatus 100 includes an extracting unit 1001 and a performing unit 1002, where the extracting unit 1001 is configured to extract redo data from a redo buffer on a subscriber device, and the performing unit 1002 is configured to perform redo processing on the redo data in parallel by invoking a plurality of threads.
In one implementation of the embodiment, as shown in figure 10b, the redo processing includes replaying the redo data from the redo buffer to a database and flushing the redo data in the redo buffer to a redo file, and the performing unit 1002 includes a first replaying module 10021 and a first flushing module 10022. The first replaying module 10021 is configured to replay the redo data from the redo buffer to a database by invoking one of the plurality of threads. The first flushing module 10022 is configured to flush the redo data in the redo buffer to a redo file simultaneously by invoking another of the plurality of threads.
In another implementation of the embodiment, as shown in figure 10c, the redo processing includes replaying the redo data from the redo buffer to a database, flushing the redo data in the redo buffer to a redo file, and transferring the redo data to another subscriber device, and the performing unit 1002 includes a second replaying module 10023, a second flushing module 10024, and a transferring module 10025. The second replaying module 10023 is configured to replay the redo data from the redo buffer to a database by invoking one of the plurality of threads, the second flushing module 10024 is configured to flush the redo data in the redo buffer to a redo file simultaneously by invoking another of the plurality of threads to achieve persistence, and the transferring module 10025 is configured to transfer the redo data to another subscriber device simultaneously by invoking still another of the plurality of threads.
In the embodiment, the total time for performing the redo processing is the processing time of the thread which takes the most time to finish its execution. The details have been described in Embodiment 1; that content is incorporated here and will not be described any further.
According to the apparatus of the embodiment, redo processing on redo data is performed in parallel, such that the efficiency of redo processing can be enhanced, and/or the time taken to ensure availability and persistency of redo data on the subscriber device can be reduced.
Embodiment 3
This embodiment of the present invention further provides a subscriber device. In this embodiment, the subscriber device includes a redo buffer and the apparatus as described in Embodiment 2; the content of Embodiment 2 is incorporated here and will not be described any further.
According to the subscriber device of the embodiment, redo processing on redo data is performed in parallel, such that the efficiency of redo processing can be enhanced, and/or the time taken to ensure availability and persistency of redo data on the subscriber device can be reduced.
Embodiment 4
This embodiment of the present invention further provides a database server. Figure 11 is a schematic structure diagram of the database server according to an embodiment of the present invention. As shown in Figure 11, the database server includes a processor 111 and a memory 112 coupled to the processor 111.
The memory 112 is configured to store a program. Specifically, the program includes program code, and the program code includes computer operating instructions.
The processor 111 is configured to extract redo data from a redo buffer in the memory 112 on the database server; and perform redo processing on the redo data in parallel by invoking a plurality of threads.
The memory 112 may include a high speed RAM and a non-volatile memory.
The processor 111 may be a Central Processing Unit (CPU), or an Application Specific Integrated Circuit (ASIC), or may be configured as one or more ASICs.
According to the above database server, redo processing on redo data is performed in parallel, such that the efficiency of redo processing can be enhanced, and/or the time taken to ensure availability and persistency of redo data on the subscriber device can be reduced.
Further, in the step of performing redo processing on the redo data in parallel by invoking a plurality of threads, the processor is specifically configured to replay the redo data from the redo buffer to a database of the database server by invoking one of the plurality of threads, and flush the redo data in the redo buffer to a redo file simultaneously by invoking another of the plurality of threads.
According to the above database server, redo processing on redo data is performed in parallel, such that the efficiency of redo processing can be enhanced, and/or the time taken to ensure availability and persistency of redo data on the subscriber device can be reduced.
Further, in the step of performing redo processing on the redo data in parallel by invoking a plurality of threads, the processor is specifically configured to replay the redo data from the redo buffer to a database of the database server by invoking one of the plurality of threads, flush the redo data in the redo buffer to a redo file simultaneously by invoking another of the plurality of threads, and transfer the redo data to another database server simultaneously by invoking still another of the plurality of threads.
According to the above database server, redo processing on redo data is performed in parallel, such that the efficiency of redo processing can be enhanced, and/or the time taken to ensure availability and persistency of redo data on the subscriber device can be reduced.
In the embodiment of the present invention, the total time for performing the redo processing is the processing time of the thread which takes the most time to finish its execution, such that the efficiency of redo processing can be enhanced, and/or the time taken to ensure availability and persistency of redo data on the subscriber device can be reduced.
In the embodiment of the present invention, the processor can be used in any form of homogeneous replication, whether asynchronous, semi-synchronous or synchronous, and the embodiment is not limited thereto.
According to the above database server, redo processing on redo data is performed in parallel, such that the efficiency of redo processing can be enhanced, and/or the time taken to ensure availability and persistency of redo data on the subscriber device can be reduced.
It should be understood that each of the parts of the present invention may be implemented by hardware, software, firmware, or a combination thereof. In the above embodiments, multiple steps or methods may be realized by software or firmware that is stored in the memory and executed by an appropriate instruction executing system. For example, if it is realized by hardware, it may be realized by any one of the following technologies known in the art or a combination thereof as in another embodiment: a discrete logic circuit having a logic gate circuit for realizing logic functions of data signals, application-specific integrated circuit having an appropriate combined logic gate circuit, a programmable gate array (PGA), and a field programmable gate array (FPGA), etc.
The description or blocks in the flowcharts or of any process or method in other manners may be understood as being indicative of including one or more modules, segments or parts for realizing the codes of executable instructions of the steps in specific logic functions or processes, and that the scope of the preferred embodiments of the present invention includes other implementations, wherein the functions may be executed in manners different from those shown or discussed, including executing the functions according to the related functions in a substantially simultaneous manner or in a reverse order, which should be understood by those skilled in the art to which the present invention pertains.
The logic and/or steps shown in the flowcharts or described in other manners here may be, for example, understood as a sequencing list of executable instructions for realizing logic functions, which may be implemented in any computer readable medium, for use by an instruction executing system, device or apparatus (such as a system including a computer, a system including a processor, or other systems capable of extracting instructions from an instruction executing system, device or apparatus and executing the instructions), or for use in combination with the instruction executing system, device or apparatus.
The above literal description and drawings show various features of the present invention. It should be understood that those skilled in the art may prepare appropriate computer codes to carry out each of the steps and processes as described above and shown in the drawings. It should be also understood that all the terminals, computers, servers, and networks may be any type, and the computer codes may be prepared according to the disclosure to carry out the present invention by using the apparatus.
Particular embodiments of the present invention have been disclosed herein. Those skilled in the art will readily recognize that the present invention is applicable in other environments. In practice, there exist many embodiments and implementations. The appended claims are by no means intended to limit the scope of the present invention to the above particular embodiments. Furthermore, any reference to "a device to ..." is a means-plus-function description of the elements and claims, and it is not intended that any element not using the phrase "a device to ..." be understood as a means-plus-function element, even though the word "device" is included in that claim.
Although a particular preferred embodiment or embodiments have been shown and the present invention has been described, it is obvious that equivalent modifications and variants will occur to those skilled in the art upon reading and understanding the description and drawings. Especially for the various functions executed by the above elements (portions, assemblies, apparatus, compositions, etc.), except where otherwise specified, it is intended that the terms (including the reference to "device") describing these elements correspond to any element executing the particular functions of these elements (i.e. functional equivalents), even though the element is different in structure from that executing the function in an exemplary embodiment or embodiments illustrated in the present invention. Furthermore, although a particular feature of the present invention is described with respect to only one or more of the illustrated embodiments, such a feature may be combined with one or more other features of other embodiments as desired and in consideration of advantageous aspects of any given or particular application.
WE CLAIM
1. A method for optimizing redo processing, comprising: extracting redo data from a redo buffer on a subscriber device; and performing redo processing on the redo data in parallel by invoking a plurality of threads.
2. The method as claimed in claim 1, wherein, the step of performing redo processing on the redo data in parallel by invoking a plurality of threads comprises:
replaying the redo data from the redo buffer to a database by invoking one of the plurality of threads; and simultaneously flushing the redo data in the redo buffer to a redo file by invoking another of the plurality of threads.
3. The method as claimed in claim 1, wherein, the step of performing redo processing on the redo data in parallel by invoking a plurality of threads comprises: replaying the redo data from the redo buffer to a database by invoking one of the plurality of threads; simultaneously flushing the redo data in the redo buffer to a redo file by invoking another of the plurality of threads; and simultaneously transferring the redo data to another subscriber device by invoking still another of the plurality of threads.
4. The method as claimed in any one of claims 1-3, wherein a total time for performing the redo processing is a processing time of the one of the threads which takes the most time to finish its execution.
5. The method as claimed in claim 1, wherein the method can be used in any form of homogeneous replication, whether asynchronous, semi-synchronous or synchronous.
6. An apparatus for optimizing redo processing, comprising: an extracting unit configured to extract redo data from a redo buffer on a subscriber device; and a performing unit configured to perform redo processing on the redo data in parallel by invoking a plurality of threads.
7. The apparatus as claimed in claim 6, wherein, the performing unit comprises, a first replaying module configured to replay the redo data from the redo buffer to a database by invoking one of the plurality of threads, and a first flushing module configured to flush the redo data in the redo buffer to a redo file simultaneously by invoking another of the plurality of threads.
8. The apparatus as claimed in claim 6, wherein, the performing unit comprises, a second replaying module configured to replay the redo data from the redo buffer to a database by invoking one of the plurality of threads, a second flushing module configured to flush the redo data in the redo buffer to a redo file simultaneously by invoking another of the plurality of threads, and a transferring module configured to transfer the redo data to another subscriber device simultaneously by invoking still another of the plurality of threads.
9. The apparatus as claimed in claim 6 or 7 or 8, wherein a total time for performing the redo processing is a processing time of the one of the threads which takes the most time to finish its execution.
10. A subscriber device, wherein, the subscriber device comprising a redo buffer and the apparatus as claimed in any one of claims 6-9.
11. A database server, comprising: a processor and a memory coupled to the processor; wherein the processor is configured to: extract redo data from a redo buffer in the memory on the database server; and perform redo processing on the redo data in parallel by invoking a plurality of threads.
12. The database server as claimed in claim 11, wherein in the step of performing redo processing on the redo data in parallel by invoking a plurality of threads, the processor is specifically configured to: replay the redo data from the redo buffer to a database of the database server by invoking one of the plurality of threads, and flush the redo data in the redo buffer to a redo file simultaneously by invoking another of the plurality of threads.
13. The database server as claimed in claim 11, wherein in the step of performing redo processing on the redo data in parallel by invoking a plurality of threads, the processor is specifically configured to: replay the redo data from the redo buffer to a database of the database server by invoking one of the plurality of threads, flush the redo data in the redo buffer to a redo file simultaneously by invoking another of the plurality of threads, and transfer the redo data to another database server simultaneously by invoking still another of the plurality of threads.
14. The database server as claimed in claim 11 or 12 or 13, wherein a total time for performing the redo processing is a processing time of the one of the threads which takes the most time to finish its execution.
15. The database server as claimed in claim 11, wherein the processor can be used in any form of homogeneous replication, whether asynchronous, semi-synchronous or synchronous.