System And Method For Agentless Data Migration

Abstract: A method for performing an agentless data migration is provided. A source node and a destination node are identified on a network. The source node has a source node Virtual Machine (VM) with data, which is replicated to a destination node VM on the destination node. A full replication of the data is performed by copying the data, which is present on a disk used by the source node VM, to the destination node VM. An incremental replication of the data from the source node VM to the destination node VM is initiated by copying incremental data changes to the destination node VM. A dynamic incremental update synchronization operation is performed during each instance of copying the incremental data changes to the destination node VM. Reference figure: FIG. 2 and FIG. 3

Patent Information

Application #: 202541048663
Filing Date: 20 May 2025
Publication Number: 23/2025
Publication Type: INA
Invention Field: COMPUTER SCIENCE
Status:
Parent Application:

Applicants

TRIANZ DIGITAL CONSULTING PRIVATE LIMITED
165/2, 1st Floor, Wing B, Kalyani Magnum, Doraisanipalya, Bannerghatta Road, Bangalore South, Karnataka, India – 560076

Inventors

1. Anil Kumar Gupta
2nd Floor, Building No.14, K Raheja Mindspace, Hitech City, Hyderabad, Telangana - 500081, India
2. Kalapana Mandloi
165/2, 1st Floor, Wing B, Kalyani Magnum, Doraisanipalya, Bannerghatta Road, Bangalore South, Karnataka, India – 560076
3. K Niyaz Ahmed
165/2, 1st Floor, Wing B, Kalyani Magnum, Doraisanipalya, Bannerghatta Road, Bangalore South, Karnataka, India – 560076
4. Sambasivarao Adapala
2nd Floor, Building No.14, K Raheja Mindspace, Hitech City, Hyderabad - 500081, Telangana, India
5. Saikrishna Merugumala
2nd Floor, Building No.14, K Raheja Mindspace, Hitech City, Hyderabad - 500081, Telangana, India

Specification

Description:
FIELD
[0001] Various embodiments of the disclosure relate generally to maintaining synchronization among Virtual Machines (VMs). More specifically, various embodiments of the disclosure relate to methods and systems for performing dynamic synchronization during an agentless data migration from a source node VM to a destination node VM.
BACKGROUND
[0002] An accelerated adoption of cloud-based platforms has become a strategic imperative for organizations that seek to achieve digital transformation and enhance operational agility. However, the transition of an on-premises Virtual Machine (VM) to a cloud-based VM presents substantial challenges for organizations, particularly issues arising during data migration. During a data migration process, organizations typically encounter prolonged system downtime and operational inefficiencies, which impede seamless data migration.
[0003] In addition, when the data migration process is carried out while applications present on the on-premises VM remain operational, incremental data changes accumulate at the on-premises VM and need to be synchronized with the cloud-based VM. Dynamic synchronization of these incremental data changes with the cloud-based VM is critical, as delays or failures during synchronization can lead to operational malfunctions and data integrity compromises, including potential data loss at the cloud-based VM.
[0004] Further, initiating the data migration process requires installing an agent, such as a software program, at the on-premises VM to execute, manage, and migrate the data from the on-premises VM to the cloud-based VM. Agent-based data migration, while offering control and flexibility, presents challenges such as performance impact and setup complexity, especially for large environments.
[0005] In light of the foregoing, there exists a need for a technical and reliable solution that overcomes the above-mentioned problems and enables dynamic synchronization of the data changes using an agentless data migration.
SUMMARY
[0006] Methods and systems for dynamic synchronization during an agentless data migration are provided substantially as shown in, and described in connection with, at least one of the figures, as set forth more completely in the claims.
[0007] In an embodiment of the present disclosure, a method for an agentless data migration is provided. The method includes identifying on a network a source node and a destination node. The source node has a source node Virtual Machine (VM) with data to be replicated to a destination node VM on the destination node. The method further includes performing a full replication of the data from the source node VM to the destination node VM by copying the data present on a disk used by the source node VM in an operational state to the destination node VM. The method further includes initiating an incremental replication of the data from the source node VM to the destination node VM by copying incremental data changes present on the disk to the destination node VM. A dynamic incremental update synchronization operation is performed during each instance of copying the incremental data changes to the destination node VM.
[0008] In another embodiment of the present disclosure, a system for an agentless data migration is provided. The system includes one or more processors, and a memory operatively coupled to the one or more processors, wherein the memory comprises processor-executable instructions, which, on execution, cause the one or more processors to identify, on the network, the source node and the destination node, the source node having a source node VM with data to be replicated to the destination node VM on the destination node. The one or more processors perform a full replication of the data from the source node VM to the destination node VM by copying the data present on the disk used by the source node VM in the operational state to the destination node VM. The one or more processors initiate the incremental replication of the data from the source node VM to the destination node VM by copying the incremental data changes present on the disk to the destination node VM. The dynamic incremental update synchronization operation is performed during each instance of copying the incremental data changes to the destination node VM.
[0009] In another embodiment of the present disclosure, a source node VM executed on a cloud lifecycle platform for performing agentless data migration is provided. The source node VM includes one or more processors, and a memory operatively coupled to the one or more processors. The memory comprises processor-executable instructions, which, on execution, cause the one or more processors to identify, on a network, a source node and a destination node, the source node having the source node VM with data to be replicated to a destination node VM on the destination node. The one or more processors perform a full replication of the data from the source node VM to the destination node VM by copying the data present on a disk used by the source node VM in an operational state to the destination node VM. The one or more processors initiate an incremental replication of the data from the source node VM to the destination node VM by copying incremental data changes present on the disk to the destination node VM. A dynamic incremental update synchronization operation is performed during each instance of copying the incremental data changes to the destination node VM.
[00010] In some embodiments, at a cut-over, a shutdown of the source node VM is executed along with transmission of pending incremental data changes on the disk to the destination node VM. The execution of the cut-over includes performing a full synchronization operation of copying the pending incremental data changes on the disk to the destination node VM.
[00011] In some embodiments, each unique instance of an incremental data change of the incremental data changes present on the disk is copied and replicated to the destination node VM until the execution of the cut-over.
[00012] In some embodiments, the incremental data changes present on the disk are stored in a staging area of a storage before initiating the copying of the incremental data changes to the destination node VM.
[00013] In some embodiments, the incremental replication of the data present on the disk of the source node VM is initiated after completion of the full replication of the data to the destination node VM.
[00014] In some embodiments, during start of a replication cycle, the full replication of the data from the source node VM to the destination node VM is performed by capturing a snapshot of the disk.
[00015] In some embodiments, the source node is located in an on-premises platform and the destination node is located in a cloud-based platform that is geographically separated from the on-premises platform.

BRIEF DESCRIPTION OF THE DRAWINGS
[00016] FIG. 1 is a block diagram that illustrates a system environment for facilitating the dynamic synchronization during an agentless data migration, in accordance with an exemplary embodiment of the present disclosure;
[00017] FIG. 2 is a sequence diagram that illustrates steps executed for performing the dynamic synchronization during the agentless data migration, in accordance with an exemplary embodiment of the present disclosure;
[00018] FIG. 3 is a flow diagram that illustrates a series of steps from initiation to termination of the incremental data migration, in accordance with an exemplary embodiment of the present disclosure;
[00019] FIG. 4 illustrates an example screenshot showing the incremental data migration being initiated at the source node VM, in accordance with an embodiment of the present disclosure;
[00020] FIG. 5 is a block diagram that illustrates a system architecture of a computer system, in accordance with an embodiment of the present disclosure; and
[00021] FIG. 6 represents a high-level flowchart that illustrates a method to execute the dynamic synchronization during the agentless data migration, in accordance with an embodiment of the present disclosure.

DETAILED DESCRIPTION
[00022] The present disclosure is best understood with reference to the detailed figures and description set forth herein. Various embodiments are discussed below with reference to the figures. However, those skilled in the art will readily appreciate that the detailed descriptions given herein with respect to the figures are simply for explanatory purposes as the methods and systems may extend beyond the described embodiments. In one example, the teachings presented and the needs of a particular application may yield multiple alternate and suitable approaches to implement the functionality of any detail described herein. Therefore, any approach may extend beyond the particular implementation choices in the following embodiments that are described and shown.
[00023] References to “an embodiment”, “another embodiment”, “yet another embodiment”, “one example”, “another example”, “yet another example”, “for example”, and so on, indicate that the embodiment(s) or example(s) so described may include a particular feature, structure, characteristic, property, element, or limitation, but that not every embodiment or example necessarily includes that particular feature, structure, characteristic, property, element or limitation. Furthermore, repeated use of the phrase “in an embodiment” does not necessarily refer to the same embodiment.
OVERVIEW
[00024] Organizations seeking digital transformation need to adapt to cloud-based platforms; however, migration of Virtual Machines (VMs) from an on-premises platform to a cloud-based platform presents challenges, particularly during data migration. During data migration, the VMs at the on-premises platform suffer prolonged system downtime, leading to operational inefficiencies.
[00025] Further, when applications present on an on-premises VM remain operational during the data migration, incremental data changes accumulate at the on-premises VM, which must be synchronized with cloud-based VMs. Any mismatches, delays, or failures during synchronization can result in malfunction and data loss at the cloud-based VMs.
[00026] In addition, executing the data migration requires an agent to be installed on the on-premises VM, which, while offering control, can also impact performance of the on-premises VM.
[00027] Various embodiments of the present disclosure provide a method and a system to resolve the aforementioned problems by enabling dynamic synchronization of the data changes between the on-premises VM and the cloud-based VM, present on a network, during an agentless data migration. The system includes a processor that executes instructions to identify a source node and a destination node on a network. The source node has a source node VM with data to be replicated to a destination node VM on the destination node.
[00028] A full replication of the data is performed to the destination node VM by copying the data present on a disk used by the source node VM in an operational state to the destination node VM. Further, an incremental data replication is performed by copying the incremental data changes present on the disk to the destination node VM. When copying the incremental data changes, an incremental update synchronization operation is performed. The incremental update synchronization operation is performed during each instance of the copying of the incremental data changes to the destination node VM.
[00029] On execution of a cut-over, the source node VM is shut down and pending incremental data changes on the disk are transmitted to the destination node VM. In addition, on execution of the cut-over, a full synchronization operation is performed to copy the pending incremental data changes on the disk to the destination node VM.

TERMS DESCRIPTION (in addition to plain and dictionary meaning)
[00030] Virtualization refers to a process of creating a virtual version (non-physical) of computer hardware platforms, storage devices, and computer network resources. The virtualization process streamlines management, resource deployment and resource retrieval in a datacenter. The virtualization process enables creation of a dynamic datacenter and increases efficiency through automation while mitigating periods of planned and unplanned downtime.
[00031] A VM refers to a computing environment that functions as an isolated software computer with its own CPU, memory, network interface, and storage, created from a pool of hardware resources. The VM includes a set of specifications and configuration data files and is supported by physical resources of a host. The VM has virtual devices that emulate functionality similar to that of physical machines. The VM has additional benefits over physical machines with respect to portability, manageability, and security.
[00032] A host refers to a virtual representation of a physical server or a physical computing system having a memory and a processor. The host represents computing and memory resources of the physical server.
[00033] Migration refers to a process of transferring data, the applications, and other business elements from an on-premises host or a VM running on a host computer under a hypervisor to a cloud computing host, with limited downtime or service interruption at the host computer. The migration process may involve moving data from servers located in physical locations, and databases to cloud-based servers. In addition, tracking performance of migrated applications and data for completeness to ensure that the data has been migrated correctly is considered a part of the migration process.
[00034] Agentless data migration is a migration option (another one being agent-based migration) that does not require deploying any software (agents) on the VM running on the host computer, from where the data is to be migrated. The agentless data migration option orchestrates the data replication by integrating with functionality provided by a virtualization provider.
[00035] A cut-over refers to a process of migrating any last-minute changes executed and determined on the source node VM to the destination node VM so that the source node VM and the destination node VM are in synchronization. The cut-over further includes executing resolution steps to resolve issues that occur at the last minute during the migration process. To ensure a smooth transition of the last-minute changes, the cut-over includes detailed contingency protocols for failure scenarios.
[00036] A disk is a virtual or physical device and operates using any suitable storage mechanism, including but not limited to, solid-state drives, optical drives, and the like.
[00037] An incremental replication (also known as delta replication) refers to replicating the data changes that have occurred since the beginning of the last completed replication cycle and writing them to a staging area. The incremental replication keeps the data in sync by replicating the data changes happening on the source node VM to the destination node VM.
[00038] An operational state refers to a VM status being in a running state. This may include the VM executing multiple operations.
[00039] FIG. 1 is a block diagram that illustrates a system environment 100 for facilitating the dynamic synchronization during the agentless data migration, in accordance with an exemplary embodiment of the present disclosure. The system environment 100 includes components such as a source node 102 and a destination node 104. The source node 102 may refer to a computing device, or a computer program (i.e., executing on the computing device), and the destination node 104 may represent a cloud-based system, for example, a public-cloud system, a private cloud system, or a hybrid-cloud system.
[00040] The source node 102 may include a source node VM 110 and the destination node 104 may include a destination node VM 112.
[00041] A cloud lifecycle platform 106 may be configured to the source node 102 to provide an agentless migration environment. The cloud lifecycle platform 106 may be a suite of tools and processes with migration, management, modernization, and maximizing capabilities. The cloud lifecycle platform 106 may be used to manage and monitor the data migration, from the source node 102 to the destination node 104. The cloud lifecycle platform 106 may simplify and expedite the data migration from the source node 102 to the destination node 104, and may act as an interface for administrators to provide them with visibility into a running status of the source node VM 110 along with set up and management of a hybrid cloud environment using the destination node VM 112. In addition, the cloud lifecycle platform 106 may work towards replicating and synchronizing the data changes that occur at the source node VM 110 to the destination node VM 112. In an example, the cloud lifecycle platform 106 may be implemented as engines or modules including any combination of hardware and programming. In addition, functions of the cloud lifecycle platform 106 may also be implemented by a respective processor of the source node 102.
[00042] The source node 102 and the destination node 104 may communicate over a network 108, or over a direct connection 138, or over both the network 108 and the direct connection 138. The network 108 may be a managed Internet protocol (IP) network administered by a service provider. For example, the network 108 may be implemented using wireless protocols and technologies, such as WiFi, WiMax, and the like. In other examples, the network 108 may also be a packet-switched network such as a Local Area Network (LAN), Wide Area Network (WAN), Metropolitan Area Network (MAN), Internet network, or other similar type of network environment. In yet other examples, the network 108 may be a fixed wireless network, a wireless LAN, a wireless WAN, a personal area network (PAN), a virtual private network (VPN), intranet or other suitable network system and includes equipment for receiving and transmitting signals. The direct connection 138 may be, for example, a private point-to-point link network.
[00043] The source node 102 may include one or more host computer systems 114. The host computer systems 114 may be constructed on a server grade hardware platform 116, such as an x86 architecture platform, a desktop, a laptop, and the like. As shown, the server grade hardware platform 116 of each of the host computer systems 114 may include conventional components of the computing device, such as a central processing unit (CPU) 118, a system memory 120, a network interface 122 (also referred to as a network interface controller (NIC) 122), a storage 124 (also referred to as a local storage 124), and other I/O devices such as, for example, a mouse and keyboard (not shown). The CPU 118 may be configured to execute instructions, for example, executable instructions that perform one or more operations and may be stored in the system memory 120 and in the local storage 124.
[00044] The system memory 120 may be a device that allows information, such as executable instructions, cryptographic keys, virtual disks, configurations, and other data, to be stored and retrieved. The system memory 120 may include, for example, one or more random access memory (RAM) modules, read only memory (ROM), or a combination thereof.
[00045] The network interface controller 122 may enable each of the host computer systems 114 to communicate with another device by a communication medium, such as a local network 126 (for example, a LAN) within the source node 102. The network interface controller 122 may be, for example, one or more network adapters. The local storage 124 may represent local storage devices (for example, one or more hard disks, flash memory modules, solid state disks, and optical disks) or a storage interface that enables the host computer systems 114 to communicate with one or more network data storage systems. Examples of a storage interface are a host bus adapter (HBA) that couples each of the host computer systems 114 to one or more storage arrays, such as a Storage Area Network (SAN) or a Network-Attached Storage (NAS), as well as other network data storage systems. In the example, the host computer systems 114 may be configured for communication with a SAN 128 over the local network 126.
[00046] Each of the host computer systems 114 may be configured to provide a virtualization layer that abstracts processor, memory, storage, and networking resources of the server grade hardware platform 116 into multiple VMs (not shown) that run concurrently on the same hosts. For ease of description, only the source node VM 110 is shown to be running on the host computer systems 114.
[00047] The multiple VMs, in some examples, may operate with their own guest operating systems on the computing device using resources of the computing device virtualized by a virtualization software (e.g., a hypervisor, virtual machine monitor, and the like) which enables sharing of hardware resources of the host computer systems 114 by the VMs. In an example, the virtualization software may include a hypervisor 130. The hypervisor 130 may run on top of an operating system of the host computer systems 114 or directly on hardware components of the host computer systems 114.
[00048] Each of the VMs may be implemented as a set of files, such as a VM configuration data file, virtual disk file(s), log file(s), snapshot file(s), and the like. The VM configuration data files may be stored in one or more datastores accessible by the hypervisor 130. The datastore may serve as a storage repository (for example, a physical disk) for VM files and provide a uniform storage model for storage resources required by the VMs. The datastores may be stored on the local storage 124 accessible by a single host (local datastores), in the SAN 128 accessible by multiple hosts (shared datastores), or both. In addition, management of the storage model and access to the VM configuration data file may be performed by one or more Storage Virtual Machines (SVMs) or other storage applications providing Software as a Service (SaaS), such as a storage software service.
[00049] The destination node 104 may include an infrastructure platform 132 upon which cloud computing environment(s) 134 may be executed. Each cloud computing environment 134 may include a plurality of VMs (not shown). For ease of explanation, only the destination node VM 112 is shown to be included in the cloud computing environment 134.
[00050] The cloud computing environment(s) 134 may also include other virtual resources, such as one or more virtual networks (not shown) used to communicate between the VMs. The VMs may provide abstractions of processor, memory, storage, and networking resources of hardware resources 136. The virtual networks may provide abstractions of networks, such as Local Area Networks (LANs), Wide Area Networks (WANs), and the like. At a given time, some of the VMs may be active (i.e., executing) while other VMs may be inactive (i.e., not executing).
[00051] In an example, the infrastructure platform 132 includes the hardware resources 136 and the virtualization software (for example, hypervisors 140). The hardware resources 136 include computing resources, storage resources, network resources, and the like. In the embodiment shown, the hardware resources 136 include a plurality of host computers (not shown) and a SAN 142. The hardware resources 136 may be configured to provide the hypervisors 140 which support the execution of the VMs across the host computers. The hypervisor 140 may have a similar implementation to the hypervisor 130 in the source node 102.
[00052] In an embodiment, the full replication of the data is performed to the destination node VM 112. This is done by copying the data present on the disk used by the source node VM 110 in an operational state to the destination node VM 112. The operational state is the running state of the source node VM. In the operational state, for example, the source node VM may perform operations such as a live migration which involves moving a running VM to another physical server without any downtime. In the operational state, for example, certain check-points may be created in time for restoring to or creating backups of a present state of the source node VM. In the operational state, for example, a duplicate copy of the source node VM may be created which may be used for testing or other purposes.
[00053] The data may be copied by capturing a snapshot of the disk. During a start of a replication cycle, the full replication may encompass copying all the data present on the disk of the source node VM 110 to the destination node VM 112.
[00054] On completion of the full replication of the data, the incremental replication of the data may be initiated. This is done by copying the incremental data changes present on the disk to the destination node VM 112. The incremental replication of the data may encompass moving/replicating the data changes from the source node VM 110 to the destination node VM 112 continuously or periodically so that the data available in the destination node VM 112 is a replica or substantial replica of the data present on the disk used by the source node VM 110. After execution of each instance of copying of the incremental data changes to the destination node VM 112, a dynamic incremental update synchronization operation is performed. In addition, the incremental replication may include replication of the VM files and of the VM configuration data for the hypervisor 130.
[00055] As may be appreciated, there is no restriction on a total number of cycles that may be executed with respect to the incremental replication of the data. These cycles may conclude when a user initiates the cut-over. Further, the copying of the incremental data changes may conclude with dynamic initiation of a restoration phase which includes performing the dynamic incremental update synchronization operation. As part of the data migration process, the restoration phase may ensure that all the data is fully and accurately copied to the destination node VM 112.
[00056] On execution of the cut-over, the source node VM 110 is shut down. In addition, pending incremental data changes on the disk may be transmitted to the destination node VM 112. The execution of the cut-over includes performing the full synchronization operation of copying any of the pending incremental data changes on the disk to the destination node VM 112. This is done by the dynamic execution of the restoration phase.
[00057] Finalization of the cut-over is a decision that may be taken by a cross-functional team, including stakeholders from the business and information technology teams of the organization, such as a project manager or a senior leader. The finalization of the cut-over may require the source node VM 110 to stop execution of the applications, resulting in no further creation of additional incremental data. In addition, during the finalization of the cut-over, a migrator associated with the source node VM 110 may further be instructed to cease extraction of the additional incremental data. The cut-over finalization may be carried out either manually or automatically through a predefined program.
[00058] In an embodiment, the incremental data changes present on the disk are stored in the staging area of the local storage 124 before initiating the copying of the incremental data changes to the destination node VM 112. The staging area is an intermediate storage area where the incremental data changes are temporarily held and processed before being moved to the destination node VM 112.
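By way of a non-limiting illustration, the replication flow described above may be sketched as follows. The helper names used in this sketch (create_snapshot, changed_blocks, read_block, write_block, synchronize, and the cut-over check) are assumptions made solely for the example and are not prescribed by this disclosure.

```python
# Illustrative sketch only: the snapshot, transfer, and synchronization helpers
# are hypothetical placeholders, not the interfaces of any particular provider.
import time


def full_replication(snapshots, destination):
    """Copy a point-in-time snapshot of the source disk to the destination node VM."""
    baseline = snapshots.create_snapshot()                 # hypothetical provider call
    for block_id, data in baseline.read_all_blocks():      # hypothetical provider call
        destination.write_block(block_id, data)
    return baseline


def incremental_cycle(snapshots, previous, destination, staging):
    """Stage and copy only the blocks changed since the previous snapshot."""
    current = snapshots.create_snapshot()
    for block_id in snapshots.changed_blocks(previous, current):   # hypothetical
        staging[block_id] = current.read_block(block_id)           # staging area
    for block_id, data in staging.items():
        destination.write_block(block_id, data)
    staging.clear()
    destination.synchronize()          # dynamic incremental update synchronization
    return current


def migrate(snapshots, destination, cutover_requested, interval_seconds=60):
    """Full replication, then incremental cycles until cut-over, then a final sync."""
    staging = {}
    baseline = full_replication(snapshots, destination)
    while not cutover_requested():
        baseline = incremental_cycle(snapshots, baseline, destination, staging)
        time.sleep(interval_seconds)
    incremental_cycle(snapshots, baseline, destination, staging)   # pending changes at cut-over
```

In this sketch, the staging dictionary plays the role of the staging area of the local storage 124, and the final call after the loop corresponds to copying the pending incremental data changes at the cut-over.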
[00059] In an embodiment, the incremental replication of the data may be implemented as the SaaS service which may enable performing and optimizing the agentless data replication in any of a wide variety of remote environments (e.g., public clouds or container services, private clouds or container clusters).
[00060] FIG. 2 is an example sequence diagram 200 that illustrates steps executed for performing the dynamic synchronization during the agentless data migration, in accordance with an exemplary embodiment of the present disclosure.
[00061] As illustrated, the sequence diagram 200 includes components such as the source node 102 and the destination node 104. The source node 102 may represent the computing device, and the destination node 104 may represent the cloud-based system. The source node 102 may include the source node VM 110, and the destination node 104 may include the destination node VM 112.
[00062] The source node 102 may include the cloud lifecycle platform 106 (not shown) that may track and send data from the source node VM 110 to the destination node VM 112. The cloud lifecycle platform 106 may be used to manage and monitor data migration process. The cloud lifecycle platform 106 may act as an interface with the administrators to provide them with visibility into status of the source node VM 110 and the destination node VM 112 during the data migration process.
[00063] At step 202, the source node VM 110 may initiate a full replication of the data present on the disk. The replication may be initiated by copying the data present on the disk used by the source node VM 110 in an operational state to the destination node VM 112. By way of an example, the time it may take to perform the full replication may be a function of a size of the data and available bandwidth of the network 108, among other factors.
[00064] In some examples, during the full replication of the data, a replica copy of the data may be created at the destination node VM 112. If desired, a seed copy of the data may be placed at the destination node 104 to minimize time and bandwidth required for the full replication. The seed copy of the source node VM 110 may include a disk file that may be positioned at the destination node 104. The seed copy may be positioned using methods such as offline copying, a file transfer protocol (FTP), a disk image (e.g., International Organization for Standardization (ISO) file) or a VM clone. The full replication of the data may occur while the source node VM 110 is powered on.
[00065] At steps 204, 206, and 208, the source node VM 110 may initiate an incremental replication (also known as the delta replication) of the data. When the full replication is finished, the incremental replication may be performed by copying only the incremental (i.e., changed) data from the source node VM 110 to the destination node VM 112. To identify the incremental data, any writes or updates to the source node VM 110 may be tracked and data blocks that have changed are identified and replicated, between different incremental replication cycles, to the destination node 104. The incremental replication may be performed between the source node VM 110 and the destination node VM 112 multiple times over multiple replication intervals. Respective durations of predetermined time intervals may be determined for the multiple replication intervals. The predetermined time intervals may be determined to effectively minimize network traffic. As may be appreciated, the steps executed for performing the incremental replication may be executed in a loop until the cut-over is executed.
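As one possible illustration of tracking writes between incremental replication cycles, a simple change tracker is sketched below. The names record_write and drain are assumptions for this example; in practice, the change-tracking facility of the virtualization layer would supply the identifiers of written blocks.

```python
# Minimal sketch of write tracking between incremental replication cycles;
# the interface shown here is illustrative, not the platform's actual API.
class ChangeTracker:
    """Collects identifiers of disk blocks written since the last replication cycle."""

    def __init__(self) -> None:
        self._dirty_blocks: set[int] = set()

    def record_write(self, block_id: int) -> None:
        # Called whenever a write to the source disk is observed.
        self._dirty_blocks.add(block_id)

    def drain(self) -> set[int]:
        # Return the blocks changed in this cycle and reset for the next cycle.
        changed, self._dirty_blocks = self._dirty_blocks, set()
        return changed


# Usage: only the drained block identifiers are read and replicated in a cycle.
tracker = ChangeTracker()
tracker.record_write(42)
tracker.record_write(1024)
print(tracker.drain())   # {42, 1024} (set order is unspecified)
```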
[00066] The copying of the incremental data changes may end with dynamic initiation of the restoration phase. Initiation of the restoration phase may ensure that all the data is fully and accurately copied and synchronized to the destination node VM 112. In an embodiment, for each instance of copying the incremental data changes to the destination node VM 112, the restoration phase ensures execution of a dynamic synchronization operation to synchronize the data changes between the source node VM 110 and the destination node VM 112.
[00067] At step 210, the source node VM 110 may perform a cut-over (also known as switchover) leading to a shutdown of the source node VM 110 and transmission of the pending incremental data changes on the disk to the destination node VM 112. On execution of the cut-over, the full synchronization operation may be performed. During the full synchronization operation, the pending incremental data changes present on the disk are copied to the destination node VM 112. In some instances, the cut-over may be performed when the incremental replication is finished. In some examples, the cut-over may be executed immediately or may be delayed until a predetermined time.
[00068] At step 210, the incremental data may be tracked to determine the incremental data changes in the data blocks. The incremental data changes may be tracked on the disk belonging to the source node VM 110. For example, data contents of the source node VM’s disk and a corresponding destination node VM’s disk are compared using checksums or another suitable error-checking mechanism. A checksum may refer to a value computed from a piece of stored or transmitted digital data, against which later comparisons may be made to detect errors in the data. The comparison process may identify differences between the data present at the disk of the source node VM 110 and the disk of the destination node VM 112.
[00069] Comparing the disks associated with the source node VM 110 and the destination node VM 112 may involve reading the entire contents of each of the disks and generating the checksums. Creation and comparison of the checksums of the source node VM 110 and the destination node VM 112 may be done at the CPU 118. As the checksums are calculated and compared, any incremental data (i.e., blocks whose checksums differ) that is discovered may be replicated to the destination node VM 112.
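A minimal sketch of such a checksum comparison is shown below. It assumes, for illustration only, that both disks can be read as sequences of equal-sized byte blocks and uses SHA-256 as the checksum; neither assumption is mandated by the disclosure.

```python
# Illustrative checksum-based comparison of corresponding source and
# destination disk blocks; the block interface is assumed for the example.
import hashlib
from typing import Iterable, List


def block_checksum(block: bytes) -> str:
    """Checksum (here SHA-256) used to compare corresponding blocks."""
    return hashlib.sha256(block).hexdigest()


def differing_blocks(source_blocks: Iterable[bytes],
                     destination_blocks: Iterable[bytes]) -> List[int]:
    """Return indices of blocks whose checksums differ and therefore need replication."""
    diffs = []
    for index, (src, dst) in enumerate(zip(source_blocks, destination_blocks)):
        if block_checksum(src) != block_checksum(dst):
            diffs.append(index)
    return diffs


# Example: only block 1 differs, so only block 1 would be re-replicated.
print(differing_blocks([b"aaaa", b"bbbb"], [b"aaaa", b"Xbbb"]))   # [1]
```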
[00070] As further depicted in the sequence diagram 200, during the cut-over, at step 210, the data migration process may be stalled and the incremental data in the data blocks may be migrated from the source node VM 110 to the destination node VM 112. Subsequently, the full synchronization operation is performed, such that the data at the destination node VM 112 is an exact replica of the data present at source node VM 110. Thereafter, the destination node VM 112 may be powered on.
[00071] FIG. 3 is a flow diagram 300 that illustrates a series of steps from initiation to termination of the incremental data migration, in accordance with an exemplary embodiment of the present disclosure.
[00072] With reference to FIG. 3, the incremental data migration comprises a structured sequence of operations that facilitate the incremental transfer and restoration of data, while ensuring data integrity between the source node VM 110 and the destination node VM 112. The incremental data migration begins, at step 302, with initiation of the incremental data migration activity. At step 304, a full backup of the data present on the source node VM 110 is performed to establish a baseline dataset for subsequent incremental synchronization. At step 306, a success check for the full backup is conducted. If the check is unsuccessful, a debug and error rectification phase is initiated, at step 308, to identify and resolve issues prior to retrying the data migration.
[00073] If the full backup is successfully verified, the process proceeds to step 310, where the incremental synchronization is carried out. Here, the incremental data changes generated since the full backup are captured and prepared for migration. If the synchronization is unsuccessful, at step 312, the process again enters a debug and error rectification phase to address and correct the issues before retrying.
[00074] At step 314, an incremental synchronization success check is performed to evaluate success of the incremental synchronization. If successful, the sequence advances to a restore phase at step 316, during which the accumulated data, comprising both full and incremental backups, is restored to a destination environment.
[00075] At step 318, a validation of restore success check is conducted to verify completion and integrity of the restoration phase. If this validation is successful, the sequence moves to completion at step 320. Upon successful restoration, at step 320, the incremental migration process is formally concluded. If unsuccessful, debugging and corrective actions are carried out at step 322 before reattempting the restoration.
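The sequence of FIG. 3 may be summarized, for illustration, as a retry loop over three verified stages. The step callables and the retry limit below are hypothetical; the figure itself does not fix a maximum number of debug-and-rectify attempts.

```python
# Compact sketch of the FIG. 3 sequence as a retry loop. The callables
# (full_backup, incremental_sync, restore, debug_and_rectify) are
# placeholders that return True on success.
from typing import Callable


def run_step(step: Callable[[], bool],
             debug_and_rectify: Callable[[], None],
             max_attempts: int = 3) -> bool:
    """Run a migration step; on failure, debug/rectify and retry (steps 308, 312, 322)."""
    for _ in range(max_attempts):
        if step():
            return True
        debug_and_rectify()
    return False


def incremental_data_migration(full_backup: Callable[[], bool],
                               incremental_sync: Callable[[], bool],
                               restore: Callable[[], bool],
                               debug_and_rectify: Callable[[], None]) -> bool:
    if not run_step(full_backup, debug_and_rectify):        # steps 304/306
        return False
    if not run_step(incremental_sync, debug_and_rectify):   # steps 310/314
        return False
    return run_step(restore, debug_and_rectify)             # steps 316/318, then 320
```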
[00076] By way of an example, a process of a dynamic time-dependent incremental synchronization is explained. The user may specify a time interval, denoted as δt, during which a delta recognizer component may identify and comprehend data variance that has occurred during the incremental synchronization of the agentless data migration, represented as δx.
[00077] The delta recognizer component may have access to all the data that is available at initiation of the agentless data migration along with the data variance that has occurred during the incremental synchronization. This is termed as aggregate data. Within the delta recognizer component, logical operators may be employed, which may function in accordance with temporal relationships.
[00078] The aggregate data present at the initiation of an incremental data migration activity, at the source node VM 110, may be represented as (x + δx), where the data transmitted at a specific time (t) is (x).
[00079] The logical operators may perform a subtraction operation, and calculate (x + δx) - (x), thereby identifying the incremental data as δx. This identified δx may then be directed to and stored at the destination node VM 112. The incremental data migration activity may be initiated by a trigger provided by the user.
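The subtraction performed by the delta recognizer component may be sketched as follows, representing the data as a mapping from block identifier to contents; this representation is an assumption made only for illustration.

```python
# Minimal sketch of the delta recognizer's subtraction (x + δx) - (x) = δx.
from typing import Dict


def recognize_delta(aggregate: Dict[int, bytes],
                    transmitted: Dict[int, bytes]) -> Dict[int, bytes]:
    """Return δx: blocks present or changed in the aggregate data but not yet transmitted."""
    return {
        block_id: data
        for block_id, data in aggregate.items()
        if transmitted.get(block_id) != data
    }


x = {0: b"base"}                            # data transmitted at time t
aggregate = {0: b"base", 1: b"new write"}   # (x + δx) observed over the interval δt
delta_x = recognize_delta(aggregate, x)     # {1: b"new write"}, directed to the destination VM
```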
[00080] Further, a synchronization operation of synchronizing the identified δx with the destination node VM 112 may be performed dynamically during the restoration phase. During the restoration phase, a surveillance may be performed to identify completion of transfer of the δx. During the surveillance, temporal measurement techniques may be employed to ascertain whether complete transfer of the incremental data has concluded. Upon completion of the δx transfer, a command to initiate the restoration phase may be issued. The restoration phase may be executed by an automated instruction set which includes instructions to manage each stage of the restoration phase. The instructions may be, for example, a reconfiguration instruction which involves updating network parameters and ensuring compatibility with the destination node VM 112 hardware, based on the δx and associated VMs.
[00081] In addition, operational context of the source node VM 110 including its memory contents and hardware configurations, may be reinstated on the destination node VM 112 to ensure that the destination node VM 112 resumes its operations seamlessly. Similar operations may be performed on δx to enable its integration with the destination node VM 112. Operations of the destination node VM 112 may then be checked for any errors or issues arising during the incremental data migration activity. All the aforementioned steps are executed automatically in a sequential manner. This automatic restoration ensures data integrity as the entire process operates in near real-time and continuously.
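A hedged sketch of this restoration-phase surveillance is given below: the transfer status is polled until the δx transfer completes, after which an ordered, automated set of restoration instructions is executed. The transfer_complete check and the individual restoration steps are hypothetical callables introduced only for the example.

```python
# Illustrative surveillance-and-restore loop; names are assumptions.
import time
from typing import Callable, Sequence


def supervise_restoration(transfer_complete: Callable[[], bool],
                          restore_steps: Sequence[Callable[[], None]],
                          poll_seconds: float = 1.0) -> None:
    """Wait for the incremental transfer to finish, then run each restoration step in order."""
    while not transfer_complete():       # temporal surveillance of the δx transfer
        time.sleep(poll_seconds)
    for step in restore_steps:           # e.g. reconfigure network, reinstate operational context
        step()
```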
[00082] The duration of the time interval δt may be adjusted by the user; it may start at, for example, one second and may, over time, decrease to microseconds or even nanoseconds.
[00083] Each completed migration cycle (beginning with the initial one) becomes the previous migration cycle for the subsequent cycle. The subsequent migration cycle may generate a new δx, and the pattern of incremental data migration activity continues. This may be represented as:

An initial migration cycle transforms X to X + δx (denoted as Y).
Subsequently, the next cycle transforms Y to Y + δy (denoted as Z), and so forth.

[00084] To prevent an unmanaged execution of the incremental data migration activity, data generation from the source node VM 110 may cease at a specific point in time, referred to as the cut-over. The cut-over may be achieved by terminating the applications running at the source node VM 110 or by severing or neutralizing the network 108 between the source node VM 110 and the destination node VM 112.
[00085] The above-discussed steps of the delta migration process may be continuously repeated until the cut-over to ensure complete transfer of the data from the source node VM 110 to the destination node VM 112, along with subsequent restoration at the destination node VM 112.
[00086] FIG. 4 illustrates an example screenshot 400 showing the incremental migration process being initiated at the source node VM 110, in accordance with an embodiment of the present disclosure. FIG. 4 shows an interface which is a part of an agentless migration platform that supports near-zero downtime migration for the VMs. It is designed to help the users schedule and manage the migration process efficiently by grouping the VMs into a ‘move groups’ tab and providing control over the incremental data synchronization. To navigate through various stages of the migration process, a ‘schedule migration’ tab is used to set up and plan migration tasks. A ‘review migration status’ tab is used to check progress and current state of ongoing migrations. A ‘validation check’ tab is used to run a pre-migration validation check to ensure readiness of the agentless migration platform to execute the data migration. A ‘near-zero downtime’ tab acts as an interface and is focused on minimizing downtime during the data migration. A ‘replication status’ tab displays the status of the data replication between the source node VM 110 and the destination node VM 112. A ‘final validation check’ tab helps evaluate a final readiness validation of the agentless migration platform before the cut-over. A ‘final cut-over’ tab executes a final transition of the data from the source node VM 110 to the destination node VM 112. A ‘migration reports’ tab helps to access detailed logs and reports related to various executed migration tasks.
[00087] A migration control mode helps the users to choose between a manual mode or an automated mode for scheduling the incremental synchronization. A migration table displays a mapping between source hostnames (IP addresses) and their respective target IPs. The table also includes options to schedule incremental (delta) synchronization manually or automatically for each of the VMs.
[00088] FIG. 5 is a block diagram that illustrates a system architecture of a computer system 500, in accordance with an embodiment of the present disclosure. An embodiment of present disclosure, or portions thereof, may be implemented as computer readable code on the computer system 500. Hardware, software, or any combination thereof may embody modules and components used to implement the methods of FIG. 6.
[00089] The computer system 500 includes a CPU 502 that may be a special-purpose or a general-purpose processing device. The CPU 502 may be a single processor, multiple processors, or combinations thereof. The CPU 502 may have one or more processor cores. In one example, the CPU 502 is an octa-core processor. Further, the CPU 502 may be connected to a communication infrastructure 504, such as a bus, message queue, multi-core message-passing scheme, and the like. The computer system 500 may further include a main memory 506 and a secondary memory 508. Examples of the main memory 506 may include RAM, ROM, and the like. The secondary memory 508 may include a hard disk drive or a removable storage drive, such as a floppy disk drive, a magnetic tape drive, a compact disc, an optical disk drive, a flash memory, and the like.
[00090] The computer system 500 further includes an input/output (I/O) interface 510 and a communication interface 512. The I/O interface 510 includes various input and output devices that are configured to communicate with the CPU 502. Examples of the input devices may include a keyboard, a mouse, a joystick, a touchscreen, a microphone, and the like. Examples of the output devices may include a display screen, a speaker, headphones, and the like. The communication interface 512 may be configured to allow data to be transferred between the computer system 500 and various devices that are communicatively coupled to the computer system 500. Examples of the communication interface 512 may include a modem, a network interface, i.e., an Ethernet card, a communication port, and the like. Data transferred via the communication interface 512 may correspond to signals, such as electronic, electromagnetic, optical, or other signals as will be apparent to a person skilled in the art.
[00091] FIG. 6 represents a flowchart 600 that illustrates a method for the dynamic synchronization during the agentless data migration, in accordance with an exemplary embodiment of the present disclosure.
[00092] With reference to FIG. 6, at step 602, the source node 102 and the destination node 104 are identified on either the network 108 or the direct connection 138. The source node 102 has the source node VM 110 with data to be replicated to the destination node VM 112 on the destination node 104. The source node 102 is located on the on-premises platform and the destination node 104 is located on the cloud-based platform that is geographically separated from the on-premises platform.
[00093] At step 604, a full replication of the data is performed from the source node VM 110 to the destination node VM 112. This is done by copying the data present on the disk used by the source node VM 110 in the operational state. During the start of the replication cycle, the full replication of the data to the destination node VM 112 is performed by capturing the snapshot of the disk.
[00094] At step 606, the incremental replication of the data is initiated by copying the incremental data changes present on the disk to the destination node VM 112. In addition, the dynamic incremental update synchronization operation is performed between the source node VM 110 and the destination node VM 112 during each instance of copying the incremental data changes to the destination node VM 112. The incremental replication of the data present on the disk is initiated after completion of the full replication of the data to the destination node VM.
[00095] The incremental data changes present on the disk are stored in a staging area of the local storage 124 before initiating the copying of the incremental data changes to the destination node VM 112.
[00096] Embodiments in the disclosure enable the dynamic synchronization of the data during the agentless data migration. The disclosed method helps enhance the efficiency of the data migration process while providing data integrity by automatically initiating the restoration phase immediately after the completion of incremental synchronization. The disclosed method ensures that all the data present in the source node VM 110 is transferred and dynamically synchronized, without any omission, to the destination node VM 112. The restoration phase continues seamlessly until the initiation of the cut-over, thereby ensuring synchronization of the incremental data generated at the source node VM 110 to the destination node VM 112. By way of an example, when the user controls the timing of the cut-over, a complete and consistent data transfer from the source node VM 110 to the destination node VM 112 is ensured. Additionally, the disclosed method allows the applications running on the source node VM 110 to remain operational throughout the migration process. The disclosed method provides a user interface to initiate, manage, and access the data migration process. During the data migration process, various backend migration services are triggered via micro-services to handle the data payloads. The generated incremental data is collected and stored in the staging area. During the restoration phase, the incremental data is dynamically copied to the destination node VM 112. If any additional incremental data is generated after completion of the restoration phase, the method continues to dynamically synchronize the incremental data until the cut-over is finalized. This streamlined and automated workflow not only reduces downtime of the source node VM 110 but also enhances reliability and user control throughout the data migration process.
[00097] In the claims, the words ‘comprising’, ‘including’ and ‘having’ do not exclude the presence of other elements or steps than those listed in a claim. The terms “a” or “an,” as used herein, are defined as one or more than one. Unless stated otherwise, terms such as “first” and “second” are used to arbitrarily distinguish between the elements such terms describe. Thus, these terms are not necessarily intended to indicate temporal or other prioritization of such elements. The fact that certain measures are recited in mutually different claims does not indicate that a combination of these measures cannot be used to advantage.
[00098] Techniques consistent with the present disclosure provide, among other features, systems and methods for dynamic synchronization during an agentless data migration. While various embodiments of the present disclosure have been illustrated and described, it will be clear that the present disclosure is not limited to these embodiments only. Numerous modifications, changes, variations, substitutions, and equivalents will be apparent to those skilled in the art, without departing from the spirit and scope of the present disclosure, as described in the claims.
Claims:
We claim:

1. A method for performing an agentless data migration, the method comprising:
identifying, on a network, a source node and a destination node, the source node comprising a source node Virtual Machine (VM) with data to be replicated to a destination node VM on the destination node;
performing, on the network, a full replication of the data from the source node VM to the destination node VM by copying the data which is present on a disk used by the source node VM in an operational state, to the destination node VM; and
initiating, on the network, an incremental replication of the data from the source node VM to the destination node VM by copying incremental data changes present on the disk to the destination node VM, wherein a dynamic incremental update synchronization operation is performed during each instance of copying the incremental data changes to the destination node VM.

2. The method as claimed in claim 1, further comprising:
executing, on the network, at a cut-over, a shutdown of the source node VM and transmission of pending incremental data changes on the disk to the destination node VM, wherein the execution of the cut-over includes performing a full synchronization operation of copying the pending incremental data changes on the disk to the destination node VM.

3. The method as claimed in claim 2, wherein each unique instance of an incremental data change of the incremental data changes present on the disk is copied and replicated to the destination node VM until the execution of the cut-over.

4. The method as claimed in claim 1, wherein the incremental data changes present on the disk are stored in a staging area of a storage before initiating the copying of the incremental data changes to the destination node VM.

5. The method as claimed in claim 1, wherein, the incremental replication of the data which is present on the disk of the source node VM is initiated after completion of the full replication of the data to the destination node VM.

6. The method as claimed in claim 1, wherein during start of a replication cycle, the full replication of the data from the source node VM to the destination node VM is performed by capturing a snapshot of the disk.

7. The method as claimed in claim 1, wherein the source node is located on an on-premises platform and the destination node is located on a cloud-based platform that is geographically separated from the on-premises platform.

8. A system for performing an agentless data migration, the system comprising:
one or more processors; and
a memory operatively coupled to the one or more processors, wherein the memory comprises processor-executable instructions, which on execution, causes the one or more processors to:
identify, on a network, a source node and a destination node, the source node comprising a source node Virtual Machine (VM) with data to be replicated to a destination node VM on the destination node;
perform, on the network, a full replication of the data from the source node VM to the destination node VM by copying the data which is present on a disk used by the source node VM in an operational state, to the destination node VM; and
initiate, on the network, an incremental replication of the data from the source node VM to the destination node VM by copying incremental data changes present on the disk to the destination node VM, wherein a dynamic incremental update synchronization operation is performed during each instance of copying the incremental data changes to the destination node VM.

9. The system as claimed in claim 8, wherein the one or more processors are further configured to:
execute, on the network, at a cut-over, a shutdown of the source node VM and transmission of pending incremental data changes on the disk to the destination node VM, wherein the execution of the cut-over includes performing a full synchronization operation of copying the pending incremental data changes on the disk to the destination node VM.

10. The system as claimed in claim 9, wherein each unique instance of an incremental data change of the incremental data changes present on the disk is copied and replicated to the destination node VM until the execution of the cut-over.

11. The system as claimed in claim 8, wherein the incremental data changes present on the disk are stored in a staging area of a storage before initiating the copying of the incremental data changes to the destination node VM.

12. The system as claimed in claim 8, wherein, the incremental replication of the data present on the disk of the source node VM is initiated after completion of the full replication of the data to the destination node VM.

13. The system as claimed in claim 8, wherein during start of a replication cycle, the full replication of the data from the source node VM to the destination node VM is performed by capturing a snapshot of the disk.

14. The system as claimed in claim 8, wherein the source node is located on an on-premises platform and the destination node is located on a cloud-based platform that is geographically separated from the on-premises platform.

15. The system as claimed in claim 8, wherein the system is implemented as a SaaS service configured to optimize performing the agentless data migration.

16. A source node VM executed on a cloud lifecycle platform for performing agentless data migration, the source node VM comprising:
one or more processors; and
a memory operatively coupled to the one or more processors, wherein the memory comprises processor-executable instructions, which on execution, causes the one or more processors to:
identify, on a network, a source node and a destination node, the source node comprising the source node Virtual Machine (VM) with data to be replicated to a destination node VM on the destination node;
perform, on the network, a full replication of the data from the source node VM to the destination node VM by copying the data which is present on a disk used by the source node VM in an operational state, to the destination node VM; and
initiate, on the network, an incremental replication of the data from the source node VM to the destination node VM by copying incremental data changes present on the disk to the destination node VM, wherein a dynamic incremental update synchronization operation is performed during each instance of copying the incremental data changes to the destination node VM.

Dated this 20th day of May 2025

Ojas Sabnis
Agent for the Applicant
IN/PA- 2644

Documents

Application Documents

# Name Date
1 202541048663-FORM-5 [20-05-2025(online)].pdf 2025-05-20
2 202541048663-FORM 3 [20-05-2025(online)].pdf 2025-05-20
3 202541048663-FORM 1 [20-05-2025(online)].pdf 2025-05-20
4 202541048663-DRAWINGS [20-05-2025(online)].pdf 2025-05-20
5 202541048663-COMPLETE SPECIFICATION [20-05-2025(online)].pdf 2025-05-20
6 202541048663-FORM-9 [21-05-2025(online)].pdf 2025-05-21
7 202541048663-Proof of Right [07-08-2025(online)].pdf 2025-08-07
8 202541048663-FORM-26 [07-08-2025(online)].pdf 2025-08-07