Abstract: Due to the individual nature of applications, different environments (production or non-production), and different migration stages, the migration process requires orchestration for repeatability, reusability, and standardization of the process. Systems and methods of the present disclosure perform migration of information technology (IT) components for entities. A risk maturity score (RMS) for IT component(s) is computed using migration requirements, and an inventory matrix (IM) is generated. Migration complexity analysis is performed on the IM and network connection data to obtain reports, and an application information form (AIF) report is generated for application(s) for generation of target infrastructure designs for the IT component(s) based on design patterns and the AIF report. Terraform code and an automation script are generated using a design blueprint document and landing zone documents for migration of the IT components from one infrastructure to another infrastructure based on the RMS and an execution of a migration planning runbook specific to the levels, a migration schedule, the terraform code and the automation script. [To be published with FIG. 2]
FORM 2
THE PATENTS ACT, 1970
(39 of 1970)
&
THE PATENT RULES, 2003
COMPLETE SPECIFICATION
(See Section 10 and Rule 13)
Title of invention:
SYSTEMS AND METHODS FOR PERFORMING MIGRATION OF INFORMATION TECHNOLOGY (IT) COMPONENTS FOR ENTITIES
Applicant
Tata Consultancy Services Limited
A company Incorporated in India under the Companies Act, 1956
Having address:
Nirmal Building, 9th floor,
Nariman point, Mumbai 400021,
Maharashtra, India
Preamble to the description:
The following specification particularly describes the invention and the manner in which it is to be performed.
TECHNICAL FIELD
[001]
The disclosure herein generally relates to migration techniques, and, more particularly, to systems and methods for performing migration of information technology (IT) components for entities.
BACKGROUND
[002]
Enterprise customers are adopting Hybrid or Multi Cloud technology for different use cases such as on-premises datacenter exit, and modernizing infrastructure(s), application(s) and data estate(s), so that they can achieve business growth through scalability, agility and innovation. Even though there is a cloud strategy in place for the client organizations, migration of existing workloads is delayed due to multiple factors. Such migrations are primarily planned based on manual data collection and analysis of the existing inventory systems that are mostly outdated. Cloud designs are created with limited inputs, which makes the migration unpredictable. Due to the individual nature of the applications and their different environments (production or non-production), and different migration stages (discovery, analysis, design, build, remediation, cutover, testing), the migration process needs to be orchestrated well so that the team responsible for migration can bring more automation, repeatability, reusability, and standardization to the process. Also, there will be blockers in different migration stages, and effective handling of migration risks is needed to make the migration successful.
SUMMARY
[003]
Embodiments of the present disclosure present technological improvements as solutions to one or more of the above-mentioned technical problems recognized by the inventors in conventional systems.
[004]
For example, in one aspect, there is provided a processor implemented method for performing migration of information technology (IT) components for entities. The method comprises receiving, by using a migration orchestration unit, via one or more hardware processors, migration requirements pertaining to migration of a plurality of components from a first infrastructure to a second infrastructure, wherein the plurality of components are specific to an entity; computing, by using the migration orchestration unit via the one or more hardware processors, a risk maturity score for each of the one or more components using the migration requirements; generating, by using an inventory consolidation engine via the one or more hardware processors, an inventory matrix for the plurality of components based on a dynamic column mapping library, an output data obtained from a discovery tool and an inventory data obtained from the entity; performing, by using a migration analysis engine via the one or more hardware processors, a migration complexity analysis on the inventory matrix, and a network connection data to obtain a target sizing report, an optimized firewall rule report, and a migration tool fitment report; generating, by using an application information extraction engine via the one or more hardware processors, an application information form (AIF) report for each application comprised in the plurality of components based on the inventory matrix, and one or more responses to one or more queries; creating, by using a target design recommendation engine via the one or more hardware processors, one or more target infrastructure designs for the plurality of components based on one or more design patterns and the application information form (AIF) report generated for each application; generating, by using a design blueprint generator via the one or more hardware processors, a design blueprint document for each application based on the one or more target infrastructure designs, the inventory matrix, the optimized firewall rule report, the target sizing report, the migration tool fitment report, and one or more associated landing zone documents; generating, by using a code generation and automation engine via the one or more hardware processors, terraform code and automation scripts using the design blueprint document generated for each application, and the one or more associated landing zone documents; generating, by using an automated migration planner via the one or more hardware processors, a migration planning runbook specific to one or more levels and an associated migration schedule based on the design blueprint document and one or more runbook templates; and performing migration of the plurality of components from the first infrastructure to the second infrastructure for the entity based on the risk maturity score and an execution of the migration planning runbook specific to the one or more levels, the migration schedule, the terraform code and the automation scripts.
[005]
In an embodiment, the migration complexity analysis comprises applying one or more migration analysis rules on the inventory matrix, and the network connection data.
[006]
In an embodiment, the one or more target infrastructure designs for the plurality of components are obtained by applying one or more design selection rules on (i) the one or more design patterns and (ii) an associated comprehensive design questionnaire.
[007]
In an embodiment, the AIF report is further based on the one or more associated landing zone documents, and one or more associated application design documents.
[008]
In another aspect, there is provided a processor implemented system for performing migration of information technology (IT) components for entities. The system comprises: a memory storing instructions; one or more communication interfaces; and one or more hardware processors coupled to the memory via the one or more communication interfaces, wherein the one or more hardware processors are configured by the instructions to: receive, by using a migration orchestration unit, migration requirements pertaining to migration of a plurality of components from a first infrastructure to a second infrastructure, wherein the plurality of components are specific to an entity; compute, by using the migration orchestration unit, a risk maturity score for each of the one or more components using the migration requirements; generate, by using an inventory consolidation engine, an inventory matrix for the plurality of components based on a dynamic column mapping library, an output data obtained from a discovery tool and an inventory data obtained from the entity; perform, by using a migration analysis engine, a migration complexity analysis on the inventory matrix, and a network connection data to obtain a target sizing report, an optimized firewall rule report, and a migration tool fitment report; generate, by using an application information extraction engine, an application information form (AIF) report for each application comprised in the plurality of components based on the inventory matrix, and one or more responses to one or more queries; create, by using a target design recommendation engine, one or more target infrastructure designs for the plurality of components based on one or more design patterns and the application information form (AIF) report generated for each application; generate, by using a design blueprint generator, a design blueprint document for each application based on the one or more target infrastructure designs, the inventory matrix, the optimized firewall rule report, the target sizing report, the migration tool fitment report, and one or more associated landing zone documents; generate, by using a code generation and automation engine, terraform code and automation scripts using the design blueprint document generated for each application, and the one or more associated landing zone documents; generate, by using an automated migration planner, a migration planning runbook specific to one or more levels and an associated migration schedule based on the design blueprint document and one or more runbook templates; and perform migration of the plurality of components from the first infrastructure to the second infrastructure for the entity based on the risk maturity score and an execution of the migration planning runbook specific to the one or more levels, the migration schedule, the terraform code and the automation scripts.
[009]
In an embodiment, the migration complexity analysis comprises applying one or more migration analysis rules on the inventory matrix, and the network connection data.
[010]
In an embodiment, the one or more target infrastructure designs for the plurality of components are obtained by applying one or more design selection rules on (i) one or more design patterns and (ii) an associated comprehensive design questionnaire.
[011]
In an embodiment, the AIF report is further based on the one or more associated landing zone documents and one or more associated application design documents.
[012]
In yet another aspect, there are provided one or more non-transitory machine-readable information storage mediums comprising one or more instructions which when executed by one or more hardware processors cause performing migration of information technology (IT) components for entities by receiving, by using a migration orchestration unit, migration requirements pertaining to migration of a plurality of components from a first infrastructure to a second infrastructure, wherein the plurality of components are specific to an entity; computing, by using the migration orchestration unit, a risk maturity score for each of the one or more components using the migration requirements; generating, by using an inventory consolidation engine, an inventory matrix for the plurality of components based on a dynamic column mapping library, an output data obtained from a discovery tool and an inventory data obtained from the entity; performing, by using a migration analysis engine, a migration complexity analysis on the inventory matrix, and a network connection data to obtain a target sizing report, an optimized firewall rule report, and a migration tool fitment report; generating, by using an application information extraction engine, an application information form (AIF) report for each application comprised in the plurality of components based on the inventory matrix, and one or more responses to one or more queries; creating, by using a target design recommendation engine, one or more target infrastructure designs for the plurality of components based on one or more design patterns and the application information form (AIF) report generated for each application; generating, by using a design blueprint generator, a design blueprint document for each application based on the one or more target infrastructure designs, the inventory matrix, the optimized firewall rule report, the target sizing report, the migration tool fitment report, and one or more associated landing zone documents; generating, by using a code generation and automation engine, terraform code and automation scripts using the design blueprint document generated for each application, and the one or more associated landing zone documents; generating, by using an automated migration planner, a migration planning runbook specific to one or more levels and an associated migration schedule based on the design blueprint document and one or more runbook templates; and performing migration of the plurality of components from the first infrastructure to the second infrastructure for the entity based on the risk maturity score and an execution of the migration planning runbook specific to the one or more levels, the migration schedule, the terraform code and the automation scripts.
[013]
In an embodiment, the migration complexity analysis comprises applying one or more migration analysis rules on the inventory matrix, and the network connection data.
[014]
In an embodiment, the one or more target infrastructure designs for the plurality of components are obtained by applying one or more design selection rules on (i) the one or more design patterns and (ii) an associated comprehensive design questionnaire.
[015]
In an embodiment, the AIF report is further based on the one or more associated landing zone documents and one or more associated application design documents.
[016]
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention, as claimed.
BRIEF DESCRIPTION OF THE DRAWINGS
[017]
The accompanying drawings, which are incorporated in and constitute a part of this disclosure, illustrate exemplary embodiments and, together with the description, serve to explain the disclosed principles:
[018]
FIG. 1 depicts an exemplary system for performing migration of information technology (IT) components for entities, in accordance with an embodiment of the present disclosure.
[019]
FIG. 2 depicts an exemplary high level block diagram of the system of FIG. 1 for performing migration of information technology (IT) components for entities, in accordance with an embodiment of the present disclosure.
[020]
FIGS. 3A and 3B depict an exemplary flow chart illustrating a method for performing migration of information technology (IT) components for entities, using the systems of FIGS. 1-2, in accordance with an embodiment of the present disclosure.
DETAILED DESCRIPTION OF EMBODIMENTS
[021]
Exemplary embodiments are described with reference to the accompanying drawings. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. Wherever convenient, the same reference numbers are used throughout the drawings to refer to the same or like parts. While examples and features of disclosed principles are described herein, modifications, adaptations, and other implementations are possible without departing from the scope of the disclosed embodiments.
[022]
Due to the individual nature of the applications and their different environments (production or non-production), and different migration stages (discovery, analysis, design, build, remediation, cutover, testing), the migration process needs to be orchestrated well so that the team responsible for migration can bring more automation, repeatability, reusability, and standardization to the process. Also, there will be blockers in different migration stages, and effective handling of migration risks is needed to make the migration successful.
[023]
Embodiments of the present disclosure provide systems and methods for performing migration of information technology (IT) components for entities. The systems and methods of the present disclosure serve as a delivery model for cloud migration with a strong focus on standardization, repeatability, and automation by leveraging a factory model with Product Oriented Delivery (POD) teams and a machine first approach. To accelerate the speed of cloud migration, the POD teams need to get accurate information about the workloads being migrated so that proper design can be done for the target cloud platform. The augmented intelligence is achieved by providing end to end orchestration of the migration life cycle stages and applying Artificial Intelligence (AI), Machine Learning, Natural Language Processing (NLP) and Generative AI to bring more automation to the entire migration process. The systems and methods of the present disclosure include various stages, illustrated below by way of examples:
1.
Migration Discovery and Analysis (Macro design) Stage.
a.
Automated extraction of as-is information from different data sources such as client IT Service Management (ITSM) Configuration Management Database (CMDB), discovery tools, monitoring and management tools, etc., using a Self-Learning library and mapped to a consolidated inventory dataset with ‘m’ parameters (e.g., m=200) required for the migration analysis, design, planning and execution.
b.
Automated generation of Application Information Form (AIF) which is a summarized document with the technical details and business considerations for migration.
2.
Migration Design (Micro design) stage.
a.
Machine Learning (ML) based cloud design recommendation engine to recommend the best suitable migration design patterns based on best practices from a plurality of pillars of a well-architected framework and cloud services attributes.
b.
ML based infrastructure sizing recommendations to suggest the best suitable target cloud sizing based on current utilization and projected growth.
c.
Dynamic generation of the blueprint design document (also referred to as Migration Design Document (MDD) and may be interchangeably used herein) after consolidating the inputs from consolidated inventory (also referred to as ‘inventory matrix’ and may be interchangeably used herein), Application Information Form (AIF), and client landing zone details.
3.
Migration Build and Remediation stage.
a.
Automatically generate code from migration designs for building the infrastructure in the target cloud platform and remediating the configuration changes such as firewall rules, load balancer, etc.
b.
Assisted Code remediation using Generative AI for identifying the integration points and changes required in different layers such as data access layer when the application is refactored while moving to cloud.
c.
Optimizing the firewall rules by ingesting and analyzing the network connection data from enterprise monitoring systems and deriving an optimized firewall rule through Internet Protocol (IP) summarization using Classless Inter Domain Routing (CIDR) notation.
4.
Migration Cutover
a.
Recommending a migration runbook which is a collection of migration tasks to be executed and providing automated scripts for different stages of migration cutover.
5.
End to End Orchestration
a.
Assessing the migration maturity based on a multi-dimensional framework which has a plurality of critical factors influencing migration speed and ‘n’ parameters for indicating the level of migration risk.
b.
Migration Virtual Assistant (MigVA) chatbot leveraging Generative AI technology for answering queries from Migration Factory team with respect to cloud migration.
c.
Migration Knowledgebase which is a collection of migration related reusable artifacts with a search function leveraging Generative AI method.
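The IP summarization described in stage 3.c above can be sketched as follows. This is an illustrative assumption about one possible implementation (the helper name `summarize_firewall_targets` is hypothetical), using Python's standard `ipaddress` module to collapse per-host firewall destinations into minimal CIDR blocks:

```python
import ipaddress

def summarize_firewall_targets(ip_strings):
    """Collapse individual destination IPs/subnets into the minimal
    set of CIDR blocks, so many per-host firewall rules become a few
    summarized rules. Hypothetical helper for illustration only."""
    networks = [ipaddress.ip_network(ip) for ip in ip_strings]
    # collapse_addresses merges adjacent/overlapping networks of the
    # same IP version into the smallest covering set.
    return [str(n) for n in ipaddress.collapse_addresses(networks)]

# Four contiguous /32 hosts summarize into a single /30 rule.
hosts = ["10.0.0.0", "10.0.0.1", "10.0.0.2", "10.0.0.3"]
print(summarize_firewall_targets(hosts))  # → ['10.0.0.0/30']
```

In practice the input list would be derived from the network connection data ingested from the enterprise monitoring systems, rather than hard-coded as here.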
[024]
Referring now to the drawings, and more particularly to FIGS. 1 through 3B, where similar reference characters denote corresponding features consistently throughout the figures, there are shown preferred embodiments and these embodiments are described in the context of the following exemplary system and/or method.
[025]
FIG. 1 depicts an exemplary system 100 for performing migration of information technology (IT) components for entities, in accordance with an embodiment of the present disclosure. In an embodiment, the system 100 may also be referred to as ‘migration system’ and may be interchangeably used herein. In an embodiment, the system 100 includes one or more hardware processors 104, communication interface device(s) or input/output (I/O) interface(s) 106 (also referred to as interface(s)), and one or more data storage devices or memory 102 operatively coupled to the one or more hardware processors 104. The one or more processors 104 may be one or more software processing components and/or hardware processors. In an embodiment, the hardware processors can be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, and/or any devices that manipulate signals based on operational instructions. Among other capabilities, the processor(s) is/are configured to fetch and execute computer-readable instructions stored in the memory. In an embodiment, the system 100 can be implemented in a variety of computing systems, such as laptop computers, notebooks, hand-held devices (e.g., smartphones, tablet phones, mobile communication devices, and the like), workstations, mainframe computers, servers, a network cloud, and the like.
[026]
The I/O interface device(s) 106 can include a variety of software and hardware interfaces, for example, a web interface, a graphical user interface, and the like and can facilitate multiple communications within a wide variety of networks N/W and protocol types, including wired networks, for example, LAN, cable, etc., and wireless networks, such as WLAN, cellular, or satellite. In an embodiment, the I/O interface device(s) can include one or more ports for connecting a number of devices to one another or to another server.
[027]
The memory 102 may include any computer-readable medium known in the art including, for example, volatile memory, such as static random-access memory (SRAM) and dynamic random-access memory (DRAM), and/or non-volatile memory, such as read only memory (ROM), erasable programmable ROM, flash memories, hard disks, optical disks, and magnetic tapes. In an embodiment, a database 108 is comprised in the memory 102, wherein the database 108 comprises information pertaining to migration requirements associated with entities (e.g., organizations, and the like). The database 108 further comprises the risk maturity score being computed, the inventory matrix for a plurality of IT components, various reports being generated (e.g., a target sizing report, an optimized firewall rule report, a migration tool fitment report, and the like), application information form (AIF) report(s) for each application, network connection data, one or more associated landing zone documents, one or more associated application design documents, a plurality of target infrastructure designs, design patterns, design blueprint document(s), terraform code, automation script(s), level specific migration planning runbook, an associated migration schedule, and the like. The memory 102 further comprises (or may further comprise) information pertaining to input(s)/output(s) of each step performed by the systems and methods of the present disclosure. In other words, input(s) fed at each step and output(s) generated at each step are comprised in the memory 102 and can be utilized in further processing and analysis.
[028]
FIG. 2, with reference to FIG. 1, depicts an exemplary high level block diagram of the system 100 of FIG. 1 for performing migration of information technology (IT) components for entities, in accordance with an embodiment of the present disclosure. The migration system 100 helps various teams to execute the cloud migration in an efficient and structured manner, by integrating the People, Process and Technology pillars of the migration factory model. The objective is to achieve high volume, velocity, and variety for infrastructure, application and data migrations by increasing the re-usability, repeatability and automation. Embodiments of the present disclosure recognize that collecting and consolidating source data from an entity (e.g., a client) is problematic because the entities/clients often do not know accurate details of the source platform and service details required for migration. Conventionally, collection of basic details of both the source and the target platforms is made manually or in a semi-automated manner and as such is a time-consuming task. The system 100 comprises a migration orchestration unit, an inventory consolidation engine, a migration analysis engine, an application information extraction engine, a target design recommendation engine, a design blueprint generator, a code generation and automation engine, an automated migration planner, and the like. The system 100 may further comprise a source data center/cloud platform, and target cloud platforms such as AWS, Azure, and Google® cloud platform (GCP). The components such as the migration orchestration unit, the inventory consolidation engine, the migration analysis engine, the application information extraction engine, the target design recommendation engine, the design blueprint generator, the code generation and automation engine, and the automated migration planner are implemented as at least one of a logically self-contained part of a software program, a self-contained hardware component, and/or a self-contained hardware component with a logically self-contained part of a software program embedded into each of the hardware components that when executed perform the method described herein.
[029]
FIGS. 3A and 3B, with reference to FIGS. 1-2, depict an exemplary flow chart illustrating a method for performing migration of information technology (IT) components for entities, using the system 100 of FIGS. 1-2, in accordance with an embodiment of the present disclosure. In an embodiment, the system(s) 100 comprises one or more data storage devices or the memory 102 operatively coupled to the one or more hardware processors 104 and is configured to store instructions for execution of steps of the method by the one or more processors 104. The steps of the method of the present disclosure will now be explained with reference to components of the system 100 of FIG. 1, the block diagram of the system 100 depicted in FIG. 2, and the flow diagrams as depicted in FIGS. 3A-3B. Although process steps, method steps, techniques or the like may be described in a sequential order, such processes, methods, and techniques may be configured to work in alternate orders. In other words, any sequence or order of steps that may be described does not necessarily indicate a requirement that the steps be performed in that order. The steps of processes described herein may be performed in any order practical. Further, some steps may be performed simultaneously.
[030]
At step 202 of the method of the present disclosure, the one or more hardware processors 104 receive, by using a migration orchestration unit, migration requirements pertaining to migration of a plurality of components from a first infrastructure to a second infrastructure, wherein the plurality of components are specific to an entity. The plurality of components comprises IT components such as, but not limited to, database(s), datacenters, applications, servers, and the like. Migration requirements are used to define the migration strategy and approach, and highly influence the structure of the migration factory and different assembly lines. For example, if a business objective is to ‘Exit the on-premises datacenter by 2025 due to contract expiry’, then the focus of the migration is more of a Lift and Shift nature with a Rehost approach than modernization with a Replatform or Refactor approach, since the latter needs more duration for migration execution. These requirements are captured as part of the ‘Cloud assessment stage’ and during the initial kick-off of the cloud migration program. These are the guidelines established for the cloud migration program governance. Below Table 1 illustrates examples of migration requirements:
Table 1
Sl. No   Requirement category
1        Data Center Exit with a lift & shift compute migration
2        Legacy application migration & modernization
3        Cloud Native Adoption with microservices architecture
4        Database migration & modernization
5        One-Time massive dataset migration
6        Open-source software adoption with license optimization
7        Cloud based disaster recovery and business continuity
8        Cross cloud migration – GCP
9        Cross cloud migration – AWS to GCP
10       Cross cloud migration – Azure to GCP
[031]
For migration program governance, these requirements are represented in the form of Objectives and Key Results (OKR), where an objective represents what is to be achieved and Key Results are used to benchmark and monitor how to get to the objective. An example is shown in the below Table 2:
Table 2
Sl. No   Objective (O) and Key Results (KR)
O1       Reduce the number of non-critical applications at on-premises datacenter to zero by CY 2025
KR1      Migrate Wave1 which consists of 100 non-critical applications to Google Cloud VMware Engine (GCVE) by Q2-2024
KR2      Migrate Wave2 which consists of 100 non-critical applications to Google Cloud VMware Engine (GCVE) by Q4-2024
KR3      Enhance landing zone for Wave3 applications which will move to Google Kubernetes Engine by Q1 2025
KR4      Define migration wave and strategy for migrating off Oracle databases to a cloud-native service by Q2 2025
KR5      Migrate Wave3 which consists of 70 applications which will move to Google Kubernetes Engine (GKE) and Cloud Native Databases (Cloud SQL) by end of Q3 2025
KR6      Complete migration of all 250 non-critical applications and decommission of old hardware at on-premises datacenter by end of Q4 2025
[032]
The use cases of the migration program may be ‘exiting the datacenter due to nearing contract expiry’ or ‘reducing the technical debt due to aging hardware’ with a time sensitive lift & shift migration (Rehost migration path) approach leveraging Infrastructure as a Service (IaaS) as mentioned above. Also, there may be a need for optimizing the performance, scalability, and resilience of the applications by upgrading the Operating system and Database stack and migrating using Platform as a Service (PaaS) cloud services (Replatform migration path). Another important use case is to remediate or modernize the applications based on cloud native architecture to bring more agility and innovation (Refactor migration path). The target platform for the migration is Public Cloud with leading hyperscalers like Amazon Web Services (AWS), Microsoft® Azure (Azure) or Google® Cloud Platform (GCP), etc.
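The mapping from a migration requirement to a Rehost, Replatform, or Refactor path described above can be sketched as a small rule set. The keyword rules and helper name below are hypothetical assumptions for illustration; the disclosure's actual design selection rules are richer:

```python
def recommend_migration_path(requirement):
    """Illustrative (hypothetical) rule set mapping a requirement
    category to a migration path, following the Rehost / Replatform /
    Refactor distinction in the text."""
    text = requirement.lower()
    # Cloud-native rebuilds imply the Refactor path.
    if "cloud native" in text or "microservices" in text:
        return "Refactor"
    # Modernization without a full rebuild implies Replatform.
    if "moderniz" in text:
        return "Replatform"
    # Time-sensitive datacenter exits default to lift & shift.
    return "Rehost"

print(recommend_migration_path(
    "Data Center Exit with a lift & shift compute migration"))  # Rehost
```

For instance, applying this sketch to the Table 1 categories would send category 3 down the Refactor path and categories 2 and 4 down the Replatform path.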
[033]
Cloud Migration is a complex process since there are major differences in application hosting and the way the applications are accessed from the public cloud. So, understanding the risks and critical success factors and governing them properly is very important for the success of the cloud migration program and realizing benefits like business agility, scalability, and innovation. A three dimensional migration framework is used to govern the overall migration program, consisting of 3 pillars forming the migration factory model (people, process and technology), 8 different stages of the migration lifecycle for each application being migrated (Discovery, Analysis, Design, Plan, Migration, Validation, Hypercare and Transition), and 8 critical factors influencing the migration velocity represented by the acronym ‘RELIABLE’ (Resource scaling, Execution strategy, Landing zone readiness, Inventory maturity, Automation efficiency, Business alignment, Legacy index and End to end orchestration).
[034]
Conventional Cloud migration needs a lot of manual data collection and analysis for understanding the As-Is architecture of the application being migrated. Even though there are many discovery tools in the market, capturing all the granular details of the application is time consuming since the information is scattered in different data repositories and in different formats. Many migrations fail due to lack of understanding of existing configurations, since the applications were implemented long ago and subject matter experts or documentation are not available currently.
[035]
At step 204 of the method of the present disclosure, the one or more hardware processors 104 compute, by using the migration orchestration unit, a risk maturity score for each of the one or more components using the migration requirements. The migration orchestration unit enables the system 100 to execute the cloud migration in an efficient and structured manner, by integrating the various attributes such as People (Migration POD team), Process and Technology. The objective is to achieve high volume, velocity, and variety for infrastructure, application, and data migrations by increasing the re-usability, repeatability, and automation.
[036]
In order to make the migration program successful, there needs to be a strong focus on the program governance. Many times, there will be risks related to target platform readiness, finalization of licensing model, unavailability of release windows or downtime, etc. These risks should be identified and mitigated in a timely manner to prevent them from becoming blockers for migration. Maturity of the cloud migration is checked by assessing and calculating the maturity score.
[037]
The maturity score calculator is based on a multi-dimensional framework (e.g., say a 3-dimensional framework) and parameters derived from ‘m’ critical success factors (e.g., say 8 factors) influencing the migration velocity that are represented by the acronym ‘RELIABLE’ as indicated as exemplary scoring parameters. There is a plurality of parameters derived from the 8 critical factors from the migration maturity framework. Below Table 3 illustrates the risk maturity score being calculated/computed for various factors, by way of examples:
Table 3
No  Migration Maturity Factors  # Areas  Current/Maximum risk maturity score
1   Resource scaling            7        2.6/5
2   Execution strategy          5        3.0/5
3   Landing zone readiness      7        2.7/5
4   Inventory maturity          7        2.6/5
5   Automation efficiency       8        3.1/5
6   Business alignment          8        2.9/5
7   Legacy index                3        3.0/5
8   End to end orchestration    5        2.6/5
    Total Maturity Score        50       2.8/5
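By way of a non-limiting illustration, the overall score computation described above may be sketched in Python; the function name and the one-decimal rounding are assumptions, chosen to reproduce the Table 3 example where the eight factor scores average to 2.8/5.

```python
# Illustrative sketch (assumed names): the overall risk maturity score
# is taken as the mean of the eight RELIABLE factor scores from Table 3.
factor_scores = {
    "Resource scaling": 2.6,
    "Execution strategy": 3.0,
    "Landing zone readiness": 2.7,
    "Inventory maturity": 2.6,
    "Automation efficiency": 3.1,
    "Business alignment": 2.9,
    "Legacy index": 3.0,
    "End to end orchestration": 2.6,
}

def overall_maturity(scores):
    """Average the per-factor scores and round to one decimal place."""
    return round(sum(scores.values()) / len(scores), 1)

print(overall_maturity(factor_scores))  # 2.8, matching Table 3
```

A score computed this way can be recomputed periodically so that the migration team can track progress against the same baseline.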
[038]
All these parameters are measured on a scale of 1 to 5 and a maturity score is calculated for each RELIABLE factor based on questionnaires related to that area. One example of a questionnaire under ‘Execution strategy’ is “What percentage of workload is considered under Lift & Shift migration?” Based on the response, the values are assigned to that area, e.g., 1 (0-19%), 2 (20-39%), 3 (40-59%), 4 (60-79%), and 5 (80-100%). Some are based on quantitative numbers (e.g., percentage completion) and others are based on the qualitative assessments by a migration consultant on a scale of 1 to 5. The overall score is calculated from the individual factor scores, which is a measure of the overall risk maturity. Below Table 4 illustrates how the risk maturity score has been computed for resource scaling as one of the factors:
Table 4
Factors           Area of measurement                        Score (1-5)  Risk maturity score
Resource scaling  Product Oriented Delivery (POD) structure  2            2.6
                  POD Scaling                                3
                  POD capacity management                    3
                  POD Orchestration                          3
                  Global coverage                            2
                  Digitization and re-usability              2
                  Delivery Readiness Index (DRI)             3
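The per-factor scoring described above can be sketched as follows; the banding rule and the mean-then-round factor score are assumptions chosen to reproduce the 0-19%/20-39%/… bands and the Resource scaling example in Table 4 (area scores 2, 3, 3, 3, 2, 2, 3 averaging to 2.6).

```python
# Illustrative sketch of the per-factor scoring (assumed names).

def band_score(pct):
    """Map a quantitative answer (e.g., % of workload under Lift & Shift)
    to a 1-5 band: 0-19% -> 1, 20-39% -> 2, ..., 80-100% -> 5."""
    return min(5, pct // 20 + 1)

def factor_score(area_scores):
    """A factor score assumed here as the mean of its area scores (1-5)."""
    return round(sum(area_scores) / len(area_scores), 1)

# Resource scaling area scores from Table 4:
resource_scaling = [2, 3, 3, 3, 2, 2, 3]
print(factor_score(resource_scaling))  # 2.6
print(band_score(45))                  # 3 (40-59% band)
```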
[039]
Below Table 5 illustrates risk maturity score calculation (scoring parameters), by way of examples:
Table 5
Factors                   Area of measurement
Resource scaling          POD Structure
                          POD Scaling
                          POD Capacity management
                          POD Orchestration
                          Global Coverage
                          Digitization and Re-usability
                          Delivery Readiness Index (DRI)
Execution strategy        Lift & Shift vs modernize
                          App based vs server based
                          Migration Complexity mix
                          T-Minus process adoption
                          Migration R-path
Landing zone readiness    Basic Tenancy Structure
                          Network connectivity
                          Target Platform Readiness
                          Day2Operations tool readiness
                          Security tools readiness
                          Shared Infra readiness
                          Business Continuity Plan (BCP)/Disaster Recovery (DR) Readiness
Inventory maturity        Configuration Management Database (CMDB) data accuracy
                          Apps and infra mapping
                          As-Is Architecture availability
                          As-Is documentation quality
                          Discovery tools data availability
                          Contextual Knowledge
                          Inter App dependency
Automation efficiency     Use of Discovery tools
                          Use of Migration tools
                          Use of DevOps tools
                          Infra as Code (IaC) maturity
                          Continuous Integration and Continuous Delivery (CI-CD) integration
                          Discovery Automation
                          Design Automation
                          Runbook Automation
Business alignment        Executive Sponsorship
                          Application Owner and Team’s Availability
                          Flexible Release windows
                          Application downtime
                          Relaxation on Change Freeze periods
                          Organization Change Management (OCM)
                          Cloud Adoption Maturity
                          Cloud Center of Excellence (CCoE) readiness
Legacy index              Mix of legacy tech stack
                          Out of Support systems
                          Legacy runtime versions
End to end orchestration  Migration lifecycle process
                          Integration of tools
                          Agile maturity
                          Program Governance
                          Eliminate blockers
[040]
The above tables and the step 204 may be better understood by way of the following description:
[041]
In order to make the migration program successful, there needs to be a strong focus on the program governance. Many times, there are risks related to target platform readiness, finalization of licensing model, not having any release windows or downtime, etc. These risks should be identified and mitigated in a timely manner so that those do not become blockers for migration. The migration orchestration unit enables entities/clients to check the maturity of the cloud migration by assessing and calculating the maturity score. The maturity score calculator is based on the 3-dimensional framework and parameters derived from the 8 critical success factors influencing the migration velocity as indicated above.
[042]
The Resource scaling is assessed for 7 parameters such as the structure of the migration factory team (dedicated or shared), the scaling model (scale out with parallel teams or scale up by adding more people into the same POD (Product Oriented Delivery) team), POD capacity management, POD orchestration, global coverage of the team, level of digitization and re-usability, and the delivery readiness index, which is a measure of the team’s overall experience and capability in large migration programs.
[043] The Execution strategy is measured against 5 parameters covering the migration approach (Lift & Shift vs Modernize), migration method (App based vs Server based), application complexity levels, adoption of T-minus process, and distribution of R-path mainly on Rehost category.
[044]
The Landing zone readiness is another critical factor which determines the migration velocity, and it is assessed by 7 parameters covering readiness of the Basic landing zone, Network connectivity, Target platforms like Google Cloud VMware Engine (GCVE) or Google Kubernetes Engine (GKE), IBM Power Systems for Google Cloud (IP4G), Operations and Security tooling for monitoring, management and compliance, Shared infrastructure readiness (those are used by multiple applications), Business Continuity and Disaster Recovery solution, etc. Applications that have higher Business Criticality need readiness in all these areas. A high score indicates better readiness and less risk.
[045]
Inventory maturity is a measure of the quality of the as-is information, which is crucial for the right design in the target cloud. There are 7 parameters including client CMDB data accuracy level, App & Infra mapping correctness, As-Is architecture documentation availability and quality, data availability from existing discovery/management tools, Contextual knowledge of the applications, inter application dependencies and integration data availability, etc. If the score is high for these parameters, the migration factory team will be able to complete the Discovery, Analysis, Design and Plan stages without many challenges, which positively impacts the Design POD velocity (the number of applications whose target cloud designs are approved in a month).
[046]
The Automation efficiency factor is an indication of the level of automation used in the cloud migration factory. There are 8 parameters being assessed to calculate the automation efficiency score, which includes use of Discovery and Migration tools, DevOps tools, infra as code maturity for cloud infra provisioning, use of Continuous Integration and Continuous Delivery pipelines, use of accelerators for Discovery and Design automation, availability of automation scripts for automating repeated migration tasks from Migration Runbook, etc. If the score is higher, the risk is less and the team will be able to achieve more migration velocity (e.g., number of servers or apps migrated successfully in a month).
[047]
The Business alignment factor indicates the level of involvement from the client organization and their readiness to move the application to cloud. The maturity is calculated through 8 parameters which include executive sponsorship from client organization, availability of App owner and team, flexibility of release windows and downtime, change freeze relaxation, organization change management and communication for the cloud migration initiative, cloud adoption readiness and Cloud Center of Excellence (CCoE) readiness, etc. The higher the score, the lower the risk.
[048]
The Legacy index factor indicates how obsolete the technical landscape is. These are mainly measured through 3 parameters covering the distribution of legacy Operating System, database or middleware components that are out of support from vendors, level of code remediation needed for the legacy application, etc. The higher the score, the more up to date the tech stack, and the lower the risk.
[049] The end to end orchestration is a critical factor indicating the integration of the process cycles, program governance and efficiency in handling the blockers. There are 5 parameters considered which include migration lifecycle process, integration of different tools such as, but not limited to, Project Management, Agile execution readiness, program governance structure, Risk, Assumptions, Issue, Dependency (RAID) tracking, closure efficiency, etc. The higher the score, the lower the overall risk of migration.
[050]
All these parameters are measured on a scale of 1 to 5 and a maturity score is calculated for each RELIABLE factor. The overall score is calculated from the individual factor scores, which is a measure of the overall Factory maturity. A score close to 5 means the migration has a high level of maturity and the overall risk is less. It enables the migration program team to check the low score areas and mitigate the risks on a periodic basis. The system 100 allows score calculation multiple times, using which the team (e.g., migration team) can see the progress they are making through the migration program. Also, the system 100 provides migration key performance indicators (KPIs) such as migration velocity as part of the migration factory dashboard, which can be used to compare the migration factory performance periodically against the Objectives and Key Results (OKR) defined.
[051]
At step 206 of the method of the present disclosure, the one or more hardware processors 104 generate, by using an inventory consolidation engine, an inventory matrix for the plurality of components based on a dynamic column mapping library, an output data obtained from a discovery tool and an inventory data obtained from the entity. The inventory matrix comprises multiple parameters associated with various categories. Table 6 depicts the inventory matrix of as-is information, by way of examples:
Table 6
No  Categories               Parameter count  Purpose
1   Application              32               Application specific details such as App name, Application identifier (App ID), etc.
2   Environment              5                Environment details such as Production or Non-Production
3   Infrastructure           39               Infrastructure details like OS type, CPU, RAM, etc.
4   Database                 48               Database details like type, version, etc.
5   Migration Type           7                Details such as migration wave, move group, etc.
6   License                  5                Licensing information of software stack
7   Monitoring               9                Method of monitoring
8   High Availability        2                High Availability setup
9   Disaster Recovery        6                Disaster Recovery parameters like RPO, RTO
10  Business Continuity      6                Backup details like schedule, policy, etc.
11  Security                 9                Security tools and compliance needs
12  Day2 tools & operations  33               Operations tool details like logging
[052] Table 7 illustrates the dynamic column mapping library, the output data obtained from the discovery tool and the inventory data (e.g., customer CMDB). More specifically, Table 7 depicts sample data column format from Discovery Tools (e.g., Stratazone, VMware RV Tools, and the like) and customer CMDB mapped to the inventory matrix.
Table 7
Sl. No  Discovery tool – Stratazone (VMinfo)  Discovery Tool – VMware RV tool (Vinfo)  Customer CMDB (entity CMDB)  Inventory matrix
1       MachineName                           VM                                       Hostname                     HostName
2       PrimaryIPAddress                      Primary IP Address                       IPAddress                    PrivateIPAddress
3       TotalDiskAllocatedGiB                 Disks                                    Disk_space_in_GB             DiskSize
4       MemoryGiB                             Memory                                   RAM_in_GB                    RAM
5       OSType                                OS according to configuration file       OS                           OSType
6       OSVersion                             OS according to VMware tools             OS_Version                   OSVersion
7       AllocatedProcessorCoreCount           CPUs                                     CPU_Count                    CPUCores
[053]
The uploaded data is processed against an inventory matrix template to map source columns to the respective inventory matrix template columns. A self-learning dynamic column mapping library brings intelligence to the migration system 100, by providing automated mapping of columns for the previously mapped datasets. An example of column mapping includes, but is not limited to, the source column being the input data from the Discovery tool/Client Inventory, and ‘Consolidated Inventory’ being the column mapping for the ‘Inventory Matrix’.
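A minimal sketch of such a self-learning mapping library is given below; the class and method names are hypothetical, and the learning step is reduced to remembering mappings the migration team has confirmed once, per source type, so they are reapplied automatically on the next upload (as in Table 7).

```python
# Hypothetical sketch of a self-learning dynamic column mapping library.

class ColumnMappingLibrary:
    def __init__(self):
        # {source_type: {source_column: inventory_matrix_column}}
        self.learned = {}

    def map_columns(self, source_type, columns, manual_overrides=None):
        """Return the known mapping for the given columns; any manual
        overrides supplied by the team are learned for future uploads."""
        known = self.learned.setdefault(source_type, {})
        if manual_overrides:
            known.update(manual_overrides)
        return {c: known[c] for c in columns if c in known}

lib = ColumnMappingLibrary()
# First upload: the team maps CMDB columns manually; the library learns them.
lib.map_columns("customer_cmdb", ["Hostname", "RAM_in_GB"],
                {"Hostname": "HostName", "RAM_in_GB": "RAM"})
# Next incremental upload: the learned mapping is applied automatically.
print(lib.map_columns("customer_cmdb", ["RAM_in_GB", "Disk_space_in_GB"]))
# {'RAM_in_GB': 'RAM'}
```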
[054]
The above Tables 6-7, and the step 206 are better understood by way of the following description:
[055]
One of the critical factors for a successful cloud migration is ‘Inventory Maturity’ of the entity’s/client landscape. Unless the migration team (or the system 100) understands the as-is details accurately, it is very difficult to create a to-be design in the cloud. So, a ‘baseline inventory’ is created which can be referred to by all the stages of the migration lifecycle. The current discovery and assessment tools provide only partial information, and the migration team has to refer to hundreds of data repositories to gather the necessary information.
[056]
To enable proper analysis and design for the cloud migration, nearly 200 parameters were captured and made available as a Consolidated Inventory for Design (CID) dataset. These parameters are from 12 different categories of as-is information which includes Application, Environment, Database, Infrastructure, High Availability, Business Continuity, Disaster Recovery, Monitoring, Security, Licensing, Operations tooling, and Migration planning business details. Such examples shall not be construed as limiting the scope of the present disclosure.
[057]
The objective of the inventory consolidation engine is to extract and consolidate all the infrastructure, application and business parameters from multiple sources such as discovery tool output, client CMDB databases, monitoring and management tools such as VMware RV Tools, assessment reports, etc. and create a ‘Consolidated Inventory for Design (CID)’ for the migration factory team. The current limitation is that the as-is data has multiple column formats, and a lot of manual work is needed to extract the relevant parameters.
[058]
The system 100 enables the migration by enabling uploading of data from different sources in various formats such as Microsoft® Excel or Comma Separated Value (CSV) formats and storing them in a landing table in a data warehouse. The uploaded data is processed against the Consolidated Inventory for Design template to map the source columns to the respective CID template columns. A self-learning dynamic column mapping library as implemented by the system 100 brings intelligence to the migration, by providing automated mapping of columns for the previously mapped datasets. The mapped data is processed in different stages like data ingestion, data consolidation and data de-duplication to create a set of unique entries for each Application Server(s) to create the final consolidated inventory (e.g., the inventory matrix).
[059]
Another challenge for the migration is that there will be data from multiple sources with different accuracy levels. The inventory consolidation engine allows various user/migration team(s) to upload the source data files multiple times with different priority numbers so that they can define the ‘single source of truth’ and update the records in bulk or incremental uploads. The priority number ranges from 1 to 9, where 1 is the highest priority. The consolidated inventory is updated with unique entries after each data upload and processing.
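The priority-based consolidation described above can be sketched as follows; the record keys and the rule of keeping one unique entry per hostname from the highest-priority source (1 = highest) are assumptions for illustration.

```python
# Hedged sketch: consolidate multi-source uploads into unique entries,
# with the lowest priority number (1 = highest priority) winning.

def consolidate(uploads):
    """uploads: list of (priority, records), where each record is a dict
    keyed by 'HostName'. Returns {hostname: winning_record}."""
    best = {}  # hostname -> (priority, record)
    for priority, records in uploads:
        for rec in records:
            host = rec["HostName"]
            if host not in best or priority < best[host][0]:
                best[host] = (priority, rec)
    return {h: rec for h, (_, rec) in best.items()}

inventory = consolidate([
    (3, [{"HostName": "app1srv1", "RAM": 16}]),  # e.g., discovery tool
    (1, [{"HostName": "app1srv1", "RAM": 32}]),  # e.g., client CMDB, the
])                                               # declared source of truth
print(inventory["app1srv1"]["RAM"])  # 32
```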
[060]
The mapping library is automatically updated based on the historical mappings done for each type of data source using a self-learning ability. For example, if the migration team does a column mapping from a client's CMDB data, the mapping table is updated, and intelligence is brought for column mapping when incremental data is uploaded next time. This is also applicable for other types of datasets like discovery tool data, monitoring tool output, etc.
[061]
At step 208 of the method of the present disclosure, the one or more hardware processors 104 perform, by using a migration analysis engine, a migration complexity analysis on the inventory matrix, and a network connection data to obtain a target sizing report, an optimized firewall rule report, and a migration tool fitment report. The migration complexity analysis comprises applying one or more migration analysis rules on the inventory matrix, and the network connection data. Migration of the infrastructure, application and data workloads is carried out using various tools which are part of the cloud platform such as Google Cloud Platform (GCP), Amazon Web Services (AWS), and Azure or procured from one or more third party providers. The real challenge for migration or the team responsible for migration is to identify which tool is suitable for different types of workloads. The objective of migration complexity analysis is to give an automated solution to the team to check the migration tool fitment compatibility for the selected workload. For example, GCP provides a tool named ‘Migrate for Compute Engine (M4CE)’ for migrating Virtual Machines from an On-Premises datacenter to Google® Cloud. Different versions of the tool support different types of workloads. The technical fitment of the tool depends on the source platform where the Virtual Machine is running (e.g., VMware, AWS Cloud, Azure Cloud, and the like), OS type, OS version and the target platform such as Google Compute Engine (GCE), Google Cloud VMware Engine (GCVE), etc. The migration complexity analysis leverages the inventory matrix and a decision tree logic to run the analysis for each workload and recommends the best suitable migration tool. Below Table 8 illustrates an output from the migration complexity analysis by way of examples.
Table 8
Hostname         OS Type  OS Version                           OS Migration path and tool
App 1 Server 2   Linux    OEL 6.5                              GCE-M4CE 4.11 – Online
App 1 Server 1   Linux    OEL 6.5                              GCE-M4CE 4.11 – Online
GLM PROD APP3    Windows  Windows 2010                         GCE – M4CE 5.0
GLM QV APP2      Windows  Windows 2019                         GCE – M4CE 5.0
GLM IT DB1       Windows  Windows 2016                         GCVE – 2 step migration (P2V/V2V)
App 19 Server 5  Linux    Red Hat Enterprise Linux 6 (64-bit)  2 step migration: V2V to VMWare 5.5U and above, GCE-M4CE 4.11 – Online
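The decision tree logic behind such recommendations can be sketched as below. The rules shown are hypothetical simplifications invented to mirror a few Table 8 rows, not the actual rule set or the real M4CE support matrix.

```python
# Hypothetical decision-tree sketch for migration tool fitment.
# The branching rules below are illustrative only.

def tool_fitment(os_type, os_version, target):
    """Recommend a migration path/tool from OS and target platform."""
    if target == "GCVE":
        # VMware-to-VMware targets handled via a 2-step migration here
        return "GCVE - 2 step migration (P2V/V2V)"
    if os_type == "Linux" and "6" in os_version:
        return "GCE-M4CE 4.11 - Online"
    if os_type == "Windows":
        return "GCE - M4CE 5.0"
    return "Manual assessment required"

print(tool_fitment("Linux", "OEL 6.5", "GCE"))       # GCE-M4CE 4.11 - Online
print(tool_fitment("Windows", "Windows 2019", "GCE"))  # GCE - M4CE 5.0
```

In practice such a tree would be driven from the inventory matrix columns (source platform, OS type, OS version, target platform) rather than hard-coded branches.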
[062]
Target sizing: Cloud migration complexity analysis is the process of evaluating the difficulty and effort involved in moving applications and data from on-premises to the cloud. The cloud migration complexity analysis considers factors such as the size, architecture, dependencies, customizations, and security of the workloads to be migrated, as well as the availability, compatibility, and performance of the cloud services to be used.
[063]
This feature works with the help of a machine learning (ML) model trained with the consolidated inventory parameters. This ML model is later used to predict the “Storage”, “Memory”, and “CPU” requirements for the application(s) getting migrated. The ML model is trained using the features listed below in Table 9 by way of examples.
Table 9
Sl. No  Features
1       Server_Name
2       Environment
3       OS_Type
5       Server_Solver_group
6       ServerRole
7       Memory
8       Storage
9       Current utilization (CPU, memory, disk)
10      Projected growth (CPU, memory, disk)
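The disclosure trains an ML model on the Table 9 features; as a deliberately simplified, non-ML stand-in, the sketch below right-sizes from current utilization plus projected growth with a headroom factor. All names and numbers are illustrative assumptions, not the trained model.

```python
# Simplified stand-in for the ML sizing prediction: project target
# CPU/memory/disk from current utilization and projected growth,
# with an assumed 20% headroom.

def predict_target_sizing(current, growth, headroom=0.2):
    """current/growth: dicts keyed by 'cpu', 'memory_gb', 'disk_gb';
    growth values are fractional (0.25 = 25% projected growth)."""
    return {k: round(current[k] * (1 + growth[k]) * (1 + headroom), 1)
            for k in current}

sizing = predict_target_sizing(
    current={"cpu": 4, "memory_gb": 16, "disk_gb": 200},
    growth={"cpu": 0.25, "memory_gb": 0.5, "disk_gb": 0.1},
)
print(sizing)  # {'cpu': 6.0, 'memory_gb': 28.8, 'disk_gb': 264.0}
```

A trained regression model would replace this heuristic, consuming the same inventory features and emitting the CPU core, memory, and disk size reports mentioned in the disclosure.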
[064] Firewall Rule optimization: One of the key challenges for the migration system 100 is to understand the interdependencies and integrations of applications to other internal or external systems. The documentation or inventory matrix such as client/customer CMDB may be outdated, and clients may not have enough details to capture the information. The objective of the firewall rule optimizer (stored in the memory 102 and invoked for execution as applicable) is to understand the network level connections among servers and applications and do some analysis and consolidation based on traffic so that the required firewall rules can be derived for the cloud platform. The input data is collected from systems where all the network connection data is logged (e.g., SIEM - Security Incident and Event Management). Such data runs into millions of records and manual analysis is impractical.
[065] Firewall rules optimization in cloud migration is the process of improving the efficiency and security of the firewall rules that control the network traffic between the on-premises and cloud environments. It involves reviewing, analyzing, and modifying the existing firewall rules to ensure that they are aligned with the cloud migration objectives and best practices. This is the process to consolidate the firewall rules port number based on Classless Inter-Domain Routing (CIDR) range, protocols, and ports. By default, the firewall rules have a lot of distinct entries. This feature consolidates the entries and makes them easily configurable. Below Table 10 illustrates sample input data from a SIEM tool:
Table 10
Source IP   Source port  Network Protocol name  Destination IP  Destination port
10.0.32.28  35438        TCP                    10.0.32.226     9092
10.0.32.28  52388        TCP                    10.0.172.11     1521
10.0.32.28  35406        TCP                    10.0.172.11     1521
10.0.75.5   37130        TCP                    10.0.32.28      7777
10.0.75.6   41335        TCP                    10.0.32.28      7777
10.0.32.28  51094        TCP                    10.0.32.28      8080
10.0.32.28  38182        TCP                    10.0.32.228     9092
10.0.75.8   38760        TCP                    10.0.32.28      7777
10.0.32.28  55010        TCP                    10.0.172.12     1521
10.0.32.28  35436        TCP                    10.0.172.5      1521
10.0.32.28  39180        TCP                    10.230.0.102    3872
[066]
The IP addresses are consolidated based on the CIDR format to represent a contiguous range of IP addresses. Below Table 11 depicts the IP addresses based on the CIDR format:
Table 11
Sl. No  CIDR           Source address  Destination address  Protocol  Port
1       10.0.171.6/31  10.0.171.6      10.0.171.60          TCP       9000
2       10.0.171.6/31  10.0.171.6      10.0.171.60          TCP       8001
3       10.0.171.6/31  10.0.171.6      10.0.171.60          TCP       8003
4       10.0.171.6/31  10.0.171.6      10.0.171.60          TCP       8002
5       10.0.171.6/31  10.0.171.6      10.0.171.60          TCP       8006
6       10.0.171.6/31  10.0.171.6      10.0.171.60          TCP       8004
7       10.0.171.6/31  10.0.171.6      10.0.171.60          TCP       8005
8       10.0.171.6/31  10.0.171.7      10.0.171.60          TCP       9002
9       10.0.171.6/31  10.0.171.7      10.0.171.60          TCP       8000
10      10.0.171.6/31  10.0.171.7      10.0.171.60          TCP       9001
[067]
The optimized output has source IP and Destination IP ranges represented in a CIDR format along with a group of destination ports which is used to derive the rules for the cloud firewalls. Below Table 12 depicts the optimized output:
Table 12
Sl. No  src_addr    dest_addr    dest_ports
1       10.0.171.8  10.0.171.60  9004-9005
2       10.0.171.6  10.0.171.60  8001-8006; 9000
3       10.0.171.7  10.0.171.60  8000; 9001-9003
4       10.0.171.9  10.0.171.60  9006; 9008
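The port consolidation behind such optimized output can be sketched as follows: for each source/destination pair, the distinct destination ports are sorted and consecutive runs are compressed into ranges. The function name is an assumption; the output format is chosen to match Table 12.

```python
# Sketch of destination-port consolidation: compress a set of ports
# into "a-b; c; d-e" style ranges, as in the Table 12 dest_ports column.

def compress_ports(ports):
    ports = sorted(set(ports))
    runs, start, prev = [], ports[0], ports[0]
    for p in ports[1:]:
        if p != prev + 1:          # gap ends the current consecutive run
            runs.append((start, prev))
            start = p
        prev = p
    runs.append((start, prev))
    return "; ".join(f"{a}-{b}" if a != b else f"{a}" for a, b in runs)

# Ports seen from 10.0.171.6 to 10.0.171.60 in Table 11:
print(compress_ports([9000, 8001, 8003, 8002, 8006, 8004, 8005]))
# 8001-8006; 9000
```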
[068]
The above tables and the step 208 are better understood by way of the following description:
[069]
Migration planning is always an iterative process since there will be changes in migration paths (Rehost, Replatform, Refactor, Rebuild, Retain, Retire, etc.) for the applications and there will be changes in migration Move groups as and when more details of the application are gathered in the detailed Discovery stage. So, migration requires performing various analyses on the consolidated inventory data (e.g., also referred to as inventory matrix) to get more insights into migration approaches, grouping and schedule. The migration analysis engine enables intelligence and insights in the following areas:
a) Finalize the Migration path, tool, efforts, duration, etc. for each application and server.
b) Finalize the target sizing for the Cloud Infrastructure (e.g., Cloud Virtual Machines).
c) Identify the firewall changes required at the on-premises datacenter and in Cloud for the application migration.
d) Redefine the Migration Waves and Move Groups based on dependencies.
[070]
For each application and server(s) in the consolidated inventory data, certain analysis is to be performed (e.g., by the migration consultant(s) or the system 100) to finalize the approach and plan. One such analysis is migration tool fitment analysis. There are many cloud native or third-party migration tools, but not all the servers or applications can be migrated using these tools. The fitment is mainly dependent on certain parameters such as Operating system version, type of disks (encrypted or shared), the type of current deployment (standalone or clustered), etc. Another type of analysis is the migration complexity of each application, which is an indication of the overall migration effort and timeline. The migration analysis engine comprised in the system 100 leverages a Machine Learning (ML) algorithm (e.g., an ML model as known in the art) to run the analysis on the consolidated inventory data/inventory matrix and predicts the migration approach, tool fitment, complexity, etc. and further generates one or more report(s) which can be used by migration consultants to take appropriate decisions.
[071]
The machine learning algorithm is further executed to predict the optimized infrastructure sizing for the target cloud services (e.g., Virtual Machine), and generate sizing details such as CPU core, memory, and disk size reports for the generation of cost optimized designs (e.g., via inputs from migration team).
[072]
Another critical challenge for migration is to identify the necessary network firewall changes. Many migrations fail or are rolled back after cutover since the inter app communication or access is blocked due to firewall rules, following which testing also fails. Since an on-premises datacenter is mostly dedicated infrastructure for the client, there will be no restrictions on inter app or inter tier communications. For example, communication from an App server to a Database server may not be blocked on-premises. Similar is the case for shared services such as Directory Services for authentication, shared storage, etc. Since public cloud is a multi-tenant infrastructure shared among different clients, multiple levels of segregation are needed for each application. This information may not be readily available with the entity (or associated team of the entity) and thus missing details lead to failure of migration.
[073]
The firewall rule optimizer as described above enables analysis on the existing network connection data and brings intelligence by suggesting firewall rules. As a first step, the network connection details (inbound and outbound) data is collected from existing monitoring tools such as SIEM (Security Incident and Event Management) and consolidation and deduplication are done for the dataset. Further analysis on the network protocol such as Transmission Control Protocol (TCP) or User Datagram Protocol (UDP) is done on the consolidated data. There will be thousands of records for each server and hence further consolidation is done using Classless Inter Domain Routing (CIDR) notation for the source and destination IP addresses. Finally, a summarized report with ingress (inbound) and egress (outbound) connection details for each server IP address is produced, which can be used for migration (by the migration team) to plan the firewall rule changes.
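The CIDR consolidation step can be sketched with the standard ipaddress module: adjacent host addresses collapse into a single covering network, as in Table 11 where 10.0.171.6 and 10.0.171.7 become 10.0.171.6/31. The function name is an assumption.

```python
# Sketch of CIDR-based source consolidation using the standard library:
# collapse adjacent /32 host addresses into the smallest covering networks.
import ipaddress

def consolidate_sources(ip_strings):
    nets = [ipaddress.ip_network(ip) for ip in ip_strings]  # /32 hosts
    return [str(n) for n in ipaddress.collapse_addresses(nets)]

print(consolidate_sources(["10.0.171.6", "10.0.171.7"]))
# ['10.0.171.6/31']
```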
[074]
One of the critical requirements from the migration factory team is to identify the applications that can be migrated in the upcoming move group. Even though the initial Waves and MoveGroups were defined in the Cloud Assessment stage, there will be a lot of changes due to blockers such as non-readiness of target platform, availability of proper licenses, conflict with business release windows and downtime, etc. A custom/personalized analytics dashboard may be provided by the system 100 for the advanced query and filtering of applications and servers based on required criteria such as Operating system version, business criticality, external or internal access, etc.
[075]
At step 210 of the method of the present disclosure, the one or more hardware processors 104 generate, by using an application information extraction engine, an application information form (AIF) report for each application comprised in the plurality of components based on the inventory matrix, and one or more responses to one or more queries. Below Table 13 illustrates an exemplary questionnaire for which responses are received.
Table 13
Sl. No  Questionnaire section                       Areas covered
1       Application details                         General details such as Application name, application identifier along with business area, description, technology stack details, number of users, etc.
2       Application Lifecycle Management (ALM)      Source code management and build tools
3       Data classification                         Application data retention, audit requirements, data classification, etc.
4       Migration scheduling                        Business blackout dates, inflight change projects, maintenance windows, planned migration quarter, etc.
5       Service Level Agreement (SLA) requirements  SLA, DR classification, RPO, RTO
6       Special requirements                        CPU intensive, memory intensive, I/O intensive, etc.
7       Testing                                     Baseline testing plans, test pack availability, environments, automation, etc.
8       Operational onboarding                      Support team details
9       Operational functionality                   Alerting, monitoring, release cycle, etc.
10      Service introduction                        Support model, Service Desk, Knowledge base articles, etc.
[076]
Below Table 14 illustrates details of AIF for each application being migrated, by way of examples:
Table 14
Sl. No  AIF details
1       Application details
2       Application interface
3       Database details
4       Host details
5       Firewall details
6       License details
7       Business continuity
8       Disaster recovery
[077]
The AIF is further based on the one or more associated landing zone documents and one or more associated application design documents. The landing zone architecture details are obtained by utilizing at least one of Natural Language Processing (NLP) and Deep Learning (DL), and Generative AI based Google cloud services to extract the relevant information from documents and architecture diagrams, which can be used in Migration design documents and also in the AIF. Below Table 15 illustrates details of landing zone documents extracted from design documents, by way of examples:
Table 15
Sl. No  Categories              Examples
1       Resource Hierarchy      Organization and folder structure, project/subscription naming standards
2       Network                 Virtual Private Cloud (VPC), Network and subnetworks, IP schema, etc.
3       Storage                 Types of storage used in the entity
4       Logging and monitoring  Details of logging and monitoring parameters
5       Security                Security zoning standards, firewall standards, etc.
6       Availability            High availability patterns, disaster recovery patterns
7       Machine configuration   Machine type and machine series, naming standards
8       Service account         Generic service accounts used
[078]
The above step 210 and the Tables 13 through 15 are better understood by way of the following description:
[079]
After the initial consolidation of the inventory data (e.g., the inventory matrix) for the entire landscape and doing various analyses, the next stage is to kick start the migration execution of each and every application workload starting with Macro design. The migration involves taking a bunch of applications from the assigned move group wherein a detailed discovery of the application details is carried out mainly focusing on non-functional requirements such as availability, security, compliance, performance, resilience, etc. The objective of this stage is to understand the as-is architecture of the application and consolidate the finding(s) so that it can be used for the target cloud design. The application information extraction engine comprised in the system 100 enables retrieval of information from the existing documentation such as Landing Zone design, Application as-is documentation, etc.
[080]
Every entity (e.g., client) has a landing zone architecture which provides architectural guidelines and standards while hosting the application in the cloud. This includes the cloud region, zones, network selection, compute or disk types to be used, labeling, tagging standards, etc. Such documentation is typically saved as Microsoft® Word (doc) or Portable Document Format (PDF) files, internal knowledge website pages (html), etc. The system 100 employs a landing zone details extractor comprised in the memory 102 which is invoked and when executed uses Natural Language Processing (NLP), Deep Learning (DL), and Generative AI based Google cloud services to extract the relevant information from documents and architecture diagrams which can be used in migration design documents and also for deriving the target VM configurations for the migration execution. Similarly, data from application as-is architecture documents is extracted leveraging Generative AI capabilities of Google Cloud Platform (GCP) by providing the input data in selected formats. The extracted data is processed to extract the relevant information such as application tiers, integrations, user access, etc.
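The extraction step above can be sketched, in highly simplified form, as a key-value scan over design documentation. The field names, patterns, and sample text below are illustrative assumptions; the disclosure's actual extractor uses NLP and Generative AI services rather than regular expressions:

```python
import re

# Hypothetical, simplified stand-in for the landing zone details extractor.
# A plain regex pass illustrates the kind of key-value extraction performed
# on free-form design documents; field names are assumptions.
LANDING_ZONE_FIELDS = {
    "region": r"(?i)\bregion\s*[:=]\s*([\w-]+)",
    "zone": r"(?i)\bzone\s*[:=]\s*([\w-]+)",
    "network": r"(?i)\bnetwork\s*[:=]\s*([\w-]+)",
    "machine_type": r"(?i)\bmachine[ _]type\s*[:=]\s*([\w-]+)",
}

def extract_landing_zone_details(doc_text: str) -> dict:
    """Pull landing-zone parameters out of design documentation text."""
    details = {}
    for field, pattern in LANDING_ZONE_FIELDS.items():
        match = re.search(pattern, doc_text)
        if match:
            details[field] = match.group(1)
    return details

sample = ("Region: europe-west2\nZone: europe-west2-a\n"
          "Network: prod-vpc\nMachine type: n2-standard-4")
print(extract_landing_zone_details(sample))
```

In the full system these extracted fields would then feed the migration design documents and the target VM configuration derivation.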
[081]
All the extracted information is further processed along with consolidated inventory data (e.g., the inventory matrix) using cognitive services to validate and generate the as-is details document which is referred to as the Application Information Form (AIF) (or AIF report). As-is information for each application is grouped under different categories as follows:
a) The application questionnaire section collects details like application details, special requirements, application lifecycle management, testing requirements, data classification, migration scheduling, SLA requirements, application contacts, application documentation, technology stack details, etc.
b) The application interface section details all integration requirements.
c) Licensing will have details of current licensing used.
d) The host questionnaire covers all the infrastructure requirements.
e) App database captures the databases used for the application.
f) Business Continuity and Disaster Recovery.
g) Firewall configuration.
h) Operations onboarding.
i) Current Architecture diagram.
[082]
At step 212 of the method of the present disclosure, the one or more hardware processors 104 create, by using a target design recommendation engine, one or more target infrastructure designs for the plurality of components based on the one or more design patterns and the application information form (AIF) report generated for each application. The one or more target infrastructure designs for the plurality of components are obtained by applying one or more design selection rules on (i) the one or more design patterns and (ii) an associated comprehensive design questionnaire.
[083]
Below Table 16 illustrates design patterns based on reliability by way of examples:
Table 16
No | Design Pattern | Objective
1 | Pattern A | For non-mission-critical production / non-production workloads: App on GCE VM & DB Engine on Cloud SQL
2 | Pattern B | For non-mission-critical production / non-production workloads: App & DB Engine on GCE VM with Regional persistent disk (Multi Tenant)
3 | Pattern C | For non-mission-critical production workloads: App on GCE VM with Regional disk, DB Engine on Cloud SQL, Multi AZ
4 | Pattern D | For mission-critical production workloads: App on GCE VM Clustered/Load Balanced, DB on Cloud SQL with Multi AZ
5 | Pattern E | For mission-critical production workloads: App on GCE VM Clustered, DB on GCE VM SQL Active-Active HA, Multi AZ
6 | Pattern F | For mission-critical production workloads: App on GCE VM, DB on GCE VMs, Multi AZ, replicated to another region
7 | Pattern G | For non-critical non-production workloads: App and DB on GCE VMs shared; rely on Backup and VM Mobility
[084]
Below Table 17 illustrates selected design patterns:
Table 17
AIF Section | AIF Parameter | Example Pattern
SLA Requirements | Permissible Downtime > 4 Hours | Pattern A
SLA Requirements | Permissible Downtime < 4 Hours | Pattern D, E, F
SLA Requirements | RPO > 24 Hours | Pattern A, G
SLA Requirements | RPO < 15 Minutes | Pattern D, E, F
SLA Requirements | DR Classification Active-Active | Pattern E
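The rules of Table 17 can be sketched as a small rule set over AIF parameters. The thresholds follow the table; the function itself, its parameter names, and the example input are illustrative assumptions, not the patented selection engine:

```python
# Illustrative sketch of the Table 17 design selection rules: each rule
# tests an AIF parameter and yields candidate design patterns.
def candidate_patterns(aif: dict) -> set:
    candidates = set()
    if aif.get("permissible_downtime_hours", 0) > 4:
        candidates.add("Pattern A")
    if aif.get("permissible_downtime_hours", 24) < 4:
        candidates.update({"Pattern D", "Pattern E", "Pattern F"})
    if aif.get("rpo_minutes", 0) > 24 * 60:
        candidates.update({"Pattern A", "Pattern G"})
    if aif.get("rpo_minutes", 24 * 60) < 15:
        candidates.update({"Pattern D", "Pattern E", "Pattern F"})
    if aif.get("dr_classification") == "Active-Active":
        candidates.add("Pattern E")
    return candidates

# A mission-critical profile: tight downtime and RPO, active-active DR.
print(candidate_patterns({"permissible_downtime_hours": 1,
                          "rpo_minutes": 5,
                          "dr_classification": "Active-Active"}))
```

A mission-critical profile (downtime under 4 hours, RPO under 15 minutes, active-active DR) narrows the candidates to Patterns D, E, and F, with Pattern E also satisfying the DR classification rule.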
[085]
A machine learning model is used for design pattern recommendation for identifying the most suitable design patterns. A set of attributes and cloud services serve as parameters for predicting a particular design architecture. User(s) is/are required to answer a set of questions pertaining to those cloud service associated parameters based on the best practices from the Well Architected Framework. Then the one or more design selection rules are applied on (i) the one or more design patterns and (ii) an associated comprehensive design questionnaire (e.g., user responses to queries/questions) pertaining to the GCP parameters. Below Table 18 illustrates various design selection rules being applied as appropriate.
Table 18
No | Well Architected Framework pillar | Subarea Examples
1 | System Design | Zones & Regions, Network Infrastructure, Compute, Storage Strategy, Database & Middleware, Sustainability, etc.
2 | Security, Privacy and Compliance | Identity & Access, Compute & Container Security, Network Security, Data Security, Application Security, Data Residency, Privacy requirements
3 | Reliability | Scale & High Availability, Reliability goals, Reliability principles, Alerting
4 | Performance Optimization | Monitor & Analyse performance, Optimize compute, Optimize storage, Optimize networking and API, Optimize Database, etc.
5 | Cost Optimization | Monitor & Control Cost, Implement FinOps, Optimize Compute, Storage, Database, Networking, etc.
6 | Operational Excellence | Capacity & Quota, Automate deployments, Monitoring, alerting & Logging, Support & escalation process, etc.
[086]
Below Table 19 illustrates some of the design patterns based on workload type and target platform:
Table 19
HIGH TOUCH | LOW TOUCH
Standard Multi cluster architecture on GKE based on Prod and Non-prod env | 2 Tier Distributed Architecture (w/o HA)
Microservices Architecture on Google App Engine | GCP Landing zone with Network Foundation Architecture
Microservices with Cloud Functions | Single Tier Architecture - Shared VM for Application and DB Component
Microservices Architecture on Cloud Run with Cloud SQL PostgreSQL | VM Scheduler - VM Start up and Shutdown
Static Website Hosting on App Engine | GCP - Internal http LB - Unmanaged Instance Group (2 Tier distributed Architecture)
Static Website Hosting on Cloud Storage | 3 Tier Highly Available Environment using LB and Managed Instance Group
Three Tier Architecture using Cloud Run and Cloud SQL - MySQL | Hot Disaster recovery on Google Cloud for on-premises applications
GKE with Cloud SQL with DR | Hardware Emulation on GCE platform using Stromasys Charon SSP Emulator
Containerized architecture with Cloud Run | GCVE hybrid cloud architecture including HCX Components
[087]
Below Table 20 illustrates a target infrastructure design based on workload type and target platform:
Table 20
HIGH TOUCH | LOW TOUCH
Standard Multi cluster architecture on GKE based on Prod and Non-prod env | 2 Tier Distributed Architecture (w/o HA)
Microservices Architecture on Google App Engine | GCP Landing zone with Network Foundation Architecture
Microservices with Cloud Functions | Single Tier Architecture - Shared VM for Application and DB Component
Microservices Architecture on Cloud Run with Cloud SQL PostgreSQL | VM Scheduler - VM Start up and Shutdown
Static Website Hosting on App Engine | GCP - Internal http LB - Unmanaged Instance Group (2 Tier distributed Architecture)
Static Website Hosting on Cloud Storage | 3 Tier Highly Available Environment using LB and Managed Instance Group
Three Tier Architecture using Cloud Run and Cloud SQL - MySQL | Hot Disaster recovery on Google Cloud for on-premises applications
GKE with Cloud SQL with DR | Hardware Emulation on GCE platform using Stromasys Charon SSP Emulator
Containerized architecture with Cloud Run | GCVE hybrid cloud architecture including HCX Components
[088]
The above step 212 and the Tables 16 through 20 are better understood by way of the following description:
[089]
The next stage of migration after the Macro design is 'Micro design' where a detailed cloud deployment architecture is created for each application. The target architecture of the application should ensure the to-be design is based on architecture best practices so that the application can provide equal or better performance in the cloud. The objective of the design blueprint generator which is part of the system 100 is to enable migration in the following areas:
a) Maintain a centralized repository of re-usable migration design blueprints for different types of workloads.
b) Enable the migration Architect to identify the best suited design patterns aligned with best practices from the well architected framework and tailor the designs based on client Landing Zone standards to create project specific templates.
c) Assist the migration design team to create to-be deployment architecture for each application from the design templates.
d) Automate the creation of the Migration Design Document (MDD) so that all stakeholders can review and finalize the design.
e) Generate code snippets from the designs for the automated provisioning of the infrastructure in the cloud using infrastructure as code (IaC) standards.
[090]
The design blueprint generator facilitates a centralized repository which is a collection of re-usable design patterns in different migration paths. The Low Touch design patterns are for migrating the applications without making any code changes in the application and using a lift and shift migration approach leveraging cloud Infrastructure as a Service (IaaS), whereas High Touch design patterns are for migrating the applications with code remediation to make it fit for cloud native services mainly leveraging Platform as a Service (PaaS), Container as a Service (CaaS), Function as a Service (FaaS), etc.
[091]
The centralized repository has a number of design patterns for different types of workloads and migration paths. So, the most suitable design pattern is selected amongst the number of design patterns for the client landscape. To make the designs aligned with industry best practices, there are design selection rules defined in the system 100. These rules are derived from the 6 pillars of the well architected framework best practices covering the areas of System Design, Reliability, Security Privacy & Compliance, Cost Optimization, Performance Optimization and Operational Excellence. The target design recommendation engine uses a pre-trained machine learning (ML) algorithm (e.g., a known in the art ML model) to predict the top 'n' best suitable design patterns. The prediction ML model is derived using attributes for cloud services and responses to the design selection questionnaire in combination with the selection rules. The design pattern is selected, and a project specific design template is created after modifying the design pattern based on client landing zone standards and guidelines.
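The top-'n' prediction step can be sketched with a simple match-count ranking standing in for the pre-trained ML model described above. Pattern attributes and response keys below are illustrative assumptions:

```python
# Stand-in sketch for top-n design pattern ranking: score each pattern by
# how many questionnaire responses its attributes satisfy, then keep the
# n highest-scoring patterns. A real system would use a trained ML model.
def top_n_patterns(responses: dict, patterns: dict, n: int = 2) -> list:
    def score(attrs: dict) -> int:
        return sum(1 for key, value in attrs.items()
                   if responses.get(key) == value)
    ranked = sorted(patterns, key=lambda name: score(patterns[name]),
                    reverse=True)
    return ranked[:n]

# Illustrative pattern attributes loosely following Table 16.
patterns = {
    "Pattern A": {"criticality": "non-mission-critical", "db": "Cloud SQL"},
    "Pattern D": {"criticality": "mission-critical", "db": "Cloud SQL"},
    "Pattern E": {"criticality": "mission-critical", "db": "GCE VM SQL"},
}
responses = {"criticality": "mission-critical", "db": "Cloud SQL"}
print(top_n_patterns(responses, patterns, n=2))
```

Here Pattern D matches both responses and ranks first; the architect would then tailor the top-ranked pattern to the client landing zone standards.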
[092]
For each application in the consolidated inventory, the to-be designs are created (e.g., by a migration design POD team). The design blueprint generator provides 3 options to generate designs. The preferred option is to generate the to-be designs leveraging the project specific design templates. If a new design pattern is needed, a search can be performed in the centralized repository (by querying by a team) to identify the most suitable patterns and use them as applicable. For scenarios where a design has to be built from scratch, design blueprints are created by leveraging the standard icons and stencils from the cloud provider. The application specific designs are saved in the application context in the system 100.
[093]
In order to finalize the to-be designs as part of the Micro design stage, a disposition document is generated (e.g., say with the help of a cloud migration design team) which contains the as-is architecture, to-be architecture, and migration approach details so that different stakeholders can review and approve the same before proceeding to actual migration. The design blueprint generator consolidates information from different stages, server details from the consolidated inventory (e.g., the inventory matrix), landing zone details documents from the macro design stage, migration approach details, to-be designs created in the micro design stage and migration runbook details and finally generates a consolidated document called the 'Migration Design Document' (MDD). This may be a Microsoft® Word document with a pre-defined table of contents, in one example embodiment (described below).
[094]
At step 214 of the method of the present disclosure, the one or more hardware processors 104 generate, by using a design blueprint generator, a design blueprint document for each application based on the one or more target infrastructure designs, the inventory matrix, the optimized firewall rule report, the target sizing report, the migration tool fitment report, and one or more associated landing zone documents. Using these inputs, the system 100 generates a design blueprint document comprising various details as shown below. More specifically, below depicted is just a Table of Contents (ToC) of the design blueprint document being generated by the system 100, and such example of the ToC of the design blueprint document shall not be construed as limiting the scope of the present disclosure. For the sake of brevity, only the ToC is shown, and it is to be understood by a person having ordinary skill in the art or person skilled in the art that each header of the ToC provides further details on associated content.
[095]
Examples of Design Blueprint Document (Table of Contents)
Table of Contents
1. Introduction
1.1. Document Summary
1.2. Version Control
2. Application Summary
2.1. Current Server Landscape
2.2. Current Architecture
2.3. Security and Compliance Requirements
3. Target State
3.1. Design Decisions
3.2. Target Design
3.2.1. Disposition Summary
3.2.2. Target Architecture
3.2.3. Network Traffic Flow
3.2.4. Shared Services Connectivity
3.2.5. High Availability (HA) and Disaster Recovery (DR) Solution
3.2.6. Firewall Requirements
3.2.7. Application Specific Configurations
3.3. Migration Approach
3.3.1. Migration Method
3.3.2. Migration Tool / Deployment Process
3.3.3. Logical sequence for servers
3.3.4. VM Migration approach
3.3.5. Data Migration approach
3.3.6. Intermediate stages and final stage
3.4. Compliance Requirements and Solution
3.4.1. Compliance/security requirement
3.4.2. Compliance/security solution
3.5. Operations tools
3.6. High Level Runbook Tasks
3.7. Migration Window
3.7.1. Non-Production environment
3.7.2. Production Environment
4. Open Risks and Issues
4.1. Compliance and security
4.2. Operational
4.3. Performance
5. Stakeholders' Approvals
[096]
At step 216 of the method of the present disclosure, the one or more hardware processors 104 generate, by using a code generation and automation engine, a terraform code and an automation script using the design blueprint document generated for each application, and the one or more associated landing zone documents. Below Table 21 depicts terraform code generated from target infrastructure design(s), by way of examples:
Table 21
Cloud Services module | Terraform code purpose
Cloud_firewall_rules | Automated creation of firewall rules
Compute_engine | For automated provisioning of compute VMs in cloud
Cloud_storage | For automated provisioning of Cloud Storage
Logging | For enabling logging on Compute Engine, storage, and firewalls
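The generation of a Compute_engine module as listed in Table 21 can be sketched as templating an HCL resource block from the design parameters. This is a minimal illustration that omits required provider fields such as the boot disk; the resource type follows the Google provider, while the parameter values are assumptions:

```python
# Minimal sketch of terraform code generation from a target design: emit a
# google_compute_instance block from design parameters. Real generated code
# would also include required fields (e.g., boot_disk) per the provider.
def generate_compute_tf(name: str, machine_type: str,
                        zone: str, network: str) -> str:
    return (
        f'resource "google_compute_instance" "{name}" {{\n'
        f'  name         = "{name}"\n'
        f'  machine_type = "{machine_type}"\n'
        f'  zone         = "{zone}"\n'
        f'  network_interface {{\n'
        f'    network = "{network}"\n'
        f'  }}\n'
        f'}}\n'
    )

print(generate_compute_tf("app-vm-01", "n2-standard-4",
                          "europe-west2-a", "prod-vpc"))
```

The values for machine type, zone, and network would come from the design blueprint document and the landing zone documents, keeping the provisioned VM aligned with client standards.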
[097]
Below Table 22 depicts an automation script from target infrastructure design(s) for post migration steps, by way of examples:
Table 22
Post Migration scenarios | Automation Script
Validation script | For validating the configuration of the target compute machine based on the infrastructure design
Windows Hardening | For security hardening of the Windows operating system provisioned in cloud
Linux Hardening | For security hardening of the Linux operating system provisioned in cloud
Create Filesystem | For automated generation of the file system in VM disk
Agent install | Installation of agent software on cloud compute VM machine (e.g., Backup, Antivirus, etc.)
[098]
The step 216 and Tables 21 through 22 are better understood by way of the following description:
[099]
One of the critical success factors in the migration is the 'automation efficiency' of the migration system. This model has a process (T-minus) to identify different stages of the migration life cycle and all the tasks to be executed. These tasks are grouped and represented in a Migration Runbook.
[100]
The objective of the code generation and automation engine is to assist (migration team) in performing the repeated tasks in an automated manner so that team productivity and efficiency is achieved. One such area is the Build and Remediate stage in which the cloud services may be provisioned and reconfiguration is done in cloud services such as firewalls, load balancers, etc. to reflect the IP address change of the cloud target virtual machine.
[101]
The system 100 enables generation of the terraform code from the designs generated through the design blueprint generator for the automated provisioning of the infrastructure in cloud using the infrastructure as code (IaC) method. Terraform is used since it is a cloud agnostic technology for the automated provisioning of the infrastructure.
[102]
To automate the migration, mainly for the lift & shift migration approach, there are many migration tools available in the market either bundled by the cloud provider (e.g., Google® Migrate to Virtual Machine, Azure Migrate, AWS Cloud migration tool, etc.) or from third-party providers such as Platespin, VMware Hybrid Cloud Exchange (HCX), etc. But these tools handle only a portion of the migration runbook tasks, mainly to clone the source system disk to the target cloud VM and replicate the changes.
[103]
The code generation and automation engine also enables integration of the cloud provider migration tools with the system 100. For example, the Google® Migrate to Virtual Machine (M2VM) tool requires a set of build parameters to configure on the target cloud VM so that the configuration is aligned with the customer landing zone. These parameters are mainly to identify the cloud region, zone, network, compute machine type, labels, etc. for the target cloud virtual machine. Also, the status of different stages of the migration tool such as onboarding, replication, cutover, etc. can be tracked from the single console (not shown in FIGS.) of the system 100.
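The build-parameter alignment described above can be sketched as merging landing zone defaults with per-server values; the keys shown are illustrative assumptions rather than the migration tool's actual parameter names:

```python
# Sketch of assembling build parameters for a target VM: landing zone
# defaults are merged with per-server overrides so the target configuration
# stays aligned with the client landing zone. Keys are illustrative.
LANDING_ZONE_DEFAULTS = {
    "region": "europe-west2",
    "zone": "europe-west2-a",
    "network": "prod-vpc",
    "labels": {"managed-by": "migration-factory"},
}

def build_parameters(server: dict) -> dict:
    """Merge landing zone defaults with server-specific settings."""
    params = dict(LANDING_ZONE_DEFAULTS)
    params.update(server)  # per-server values win over the defaults
    return params

print(build_parameters({"name": "app-vm-01",
                        "machine_type": "n2-standard-8"}))
```

Every server inherits the landing zone region, zone, network, and labels unless a per-server value is supplied, which keeps the fleet configuration consistent.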
[104]
There are many pre-migration and post-migration activities to be carried out for completing the migration even if the migration is done using a migration tool. These tasks include validating and preparing the source systems for the migration, and making necessary configuration changes in the target system to ensure that it is ready for operation. The code generation and automation engine leverages Generative AI to create different types of automation scripts (e.g., PowerShell for Windows® OS, shell scripts for Linux, terraform for generic use cases, etc.) so that the pre-migration and post-migration activities are automated. These automation scripts and code snippets are delivered along with the Migration Runbook in the system 100 so that associated team(s) can complete the pre/post migration activities for each server.
[105]
As mentioned above, the code generation and automation engine also leverages Google® Cloud Generative AI services to provide recommendations for the code remediations needed during the Refactor migration. For example, in a cross-cloud platform migration from AWS to Google cloud, application code written for AWS Lambda services, which is a serverless function service where application code is written in different programming languages such as Python, Java, Node.js, etc., is moved to the equivalent Google Cloud Function. The major changes in this migration are the remediation in the data access layer and other integration points. The Gen AI based code generation system provides comparison and suggestions for the code changes required.
[106]
At step 218 of the method of the present disclosure, the one or more hardware processors 104 generate, by using an automated migration planner, a migration planning runbook specific to one or more levels and an associated migration schedule based on the design blueprint document and one or more runbook templates. Below Table 23 illustrates the migration planning runbook specific to the one or more levels and the associated migration schedule, by way of examples:
Table 23
Runbook Template Name | Migration Type | Migration Path
Rehost_Complex_T-60_onPrem_GCE_M4CE5.0_V1 | Low Touch | Rehost
Rehost_Medium_T-45_onPrem_GCE_SureEdge_V01 | Low Touch | Rehost
Rehost_Medium_T-45_onPrem_GCE_M4CE5.0_V1 | Low Touch | Rehost
Rehost_Simple_T-30_onPrem_GCE_M4CE5.0_V1 | Low Touch | Rehost
Rehost_Simple_T-30_onPrem_GCE_SureEdge_V1 | Low Touch | Rehost
Rehost_Simple_T-30_AWS_GCE_M4CE5.0_V1 | Low Touch | Rehost
Rehost_Complex_T-60_AWS_GCE_M4CE5.0_V1 | Low Touch | Rehost
Rehost_Complex_T-60_onPrem_GCE_SureEdge_V1 | Low Touch | Rehost
Replatform_Medium_T-60_Standalone PostgreSQL_CloudSQL_AlloyDB_V1 | Low Touch | Replatform
Replatform_Medium_T-60_Standalone MySQL_CloudSQL_MySQL_V1 | Low Touch | Replatform
Replatform_Complex_T-75_Standalone PostgreSQL_CloudSQL_AlloyDB_V1 | Low Touch | Replatform
Replatform_Complex_T-75_Standalone MySQL_CloudSQL_MySQL_V1 | Low Touch | Replatform
Replatform_Complex_T-75_MS SQL_CloudSQL_Export_Import_V1 | Low Touch | Replatform
Replatform_Complex_T-75_MS SQL_CloudSQL_Export_Import_V1 | Low Touch | Replatform
Replatform_Medium_T-60_MS SQL_CloudSQL_Export_Import_V1 | Low Touch | Replatform
Replatform_Medium_T-45_onPrem_GCE_V1 | Low Touch | Replatform
Replatform_Simple_T-30_onPrem_GCE_V1 | Low Touch | Replatform
Replatform_Simple_T-45_onPrem_IBM_Power_IP4G_V1 | Low Touch | Replatform
Replatform_Medium_T-60_onPrem_IBM_Power_IP4G_V1 | Low Touch | Replatform
Replatform_Complex_T-75_onPrem_IBM_Power_IP4G_V1 | Low Touch | Replatform
Replatform_Simple_T-45_Standalone PostgreSQL_CloudSQL_AlloyDB_V1 | Low Touch | Replatform
Replatform_Simple_T-45_Standalone Oracle DB_CloudSQL_PostgreSQL_V1 | Low Touch | Replatform
Replatform_Simple_T-45_Standalone MySQL_CloudSQL_MySQL_V1 | Low Touch | Replatform
Replatform_Simple_T-45_MS SQL_CloudSQL_Export_Import_V1 | Low Touch | Replatform
Refactor_Simple_T-60_Java Application Modernization Runbook_V1 | High Touch | Refactor
Refactor_Simple_T-60_.Net Application Modernization_V1 | High Touch | Refactor
Refactor_Medium_T-90_Java Application Modernization Runbook_V1 | High Touch | Refactor
Refactor_Medium_T-90_.Net Application Modernization_V1 | High Touch | Refactor
Refactor_Complex_T-120_Java Application Modernization Runbook_V1 | High Touch | Refactor
Refactor_Complex_T-120_.Net Application Modernization_V1 | High Touch | Refactor
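The template-mapping logic behind the table above can be sketched as a lookup from migration path and complexity to a T-minus cycle, composed into the template naming convention. The cycle values mirror the table rows (where the table lists more than one cycle for a path/complexity pair, one representative value is used); the helper itself is an illustrative assumption:

```python
# Sketch of the automated runbook-template mapping: migration path and
# complexity select a T-minus cycle, which keys into the Table 23 naming
# convention. One representative cycle is used per (path, complexity) pair.
T_MINUS_BY_PATH_COMPLEXITY = {
    ("Rehost", "Simple"): 30, ("Rehost", "Medium"): 45,
    ("Rehost", "Complex"): 60,
    ("Replatform", "Simple"): 45, ("Replatform", "Medium"): 60,
    ("Replatform", "Complex"): 75,
    ("Refactor", "Simple"): 60, ("Refactor", "Medium"): 90,
    ("Refactor", "Complex"): 120,
}

def select_runbook_template(path: str, complexity: str,
                            source: str, target: str) -> str:
    """Compose a runbook template name in the Table 23 convention."""
    t_minus = T_MINUS_BY_PATH_COMPLEXITY[(path, complexity)]
    return f"{path}_{complexity}_T-{t_minus}_{source}_{target}_V1"

print(select_runbook_template("Rehost", "Simple", "onPrem", "GCE_M4CE5.0"))
```

A simple on-premises rehost to GCE via M4CE 5.0 thus maps to the T-30 template listed in the table.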
[107]
At step 220 of the method of the present disclosure, the one or more hardware processors 104 perform migration of the plurality of components from the first infrastructure to the second infrastructure for the entity based on the risk maturity score, the migration blueprint document (also referred to as the design blueprint document and may be interchangeably used herein) and an execution of the migration planning runbook specific to the one or more levels, the migration schedule, the terraform code and the automation scripts. Once the migration is completed or in progress, various statuses can be monitored and viewed. Examples of various statuses from the migration planning runbook are depicted in Table 24 below by way of examples:
Table 24
Category | Task | Status | T-minus | Start | End
Discovery | Application Migration Lifecycle Start | Completed | -30 | 8-Jan | 10-Jan
Discovery | Data collection and Review | Completed | -30 | 8-Jan | 10-Jan
Discovery | Application owner connect and As-Is architecture review | Completed | -25 | 8-Jan | 10-Jan
Discovery | Proposed cut-over date communication | Completed | -23 | 8-Jan | 10-Jan
Analysis | Identify all disks are non-encrypted | Completed | -23 | 11-Jan | 12-Jan
Analysis | Identify amount of data to migrate | Completed | -23 | 11-Jan | 12-Jan
Analysis | Identify target VPC and subnet | Completed | -23 | 11-Jan | 12-Jan
Analysis | Identify tool for DB migration | Completed | -23 | 11-Jan | 12-Jan
Analysis | Identify/Validate targets for the Migration Group | Completed | -23 | 11-Jan | 12-Jan
Design | Disposition review communication | Completed | -22 | 15-Jan | 16-Jan
Design | Target Architecture Review and Sign-Off | Completed | -20 | 15-Jan | 16-Jan
Plan | Raise request to Create Project | In Progress | -15 | 17-Jan | 19-Jan
Plan | Raise request to reserve IP address for targets | In Progress | -15 | 17-Jan | 19-Jan
Plan | Submit request for Firewall Rules Implementation | In Progress | -8 | 17-Jan | 19-Jan
Plan | Change Freeze on Source Workload | In Progress | -7 | 17-Jan | 19-Jan
Plan | Create request for Local Administrator Account | In Progress | -7 | 17-Jan | 19-Jan
Migration | Configure Target VM | Not Started | -5 | 5-Feb | 15-Feb
Migration | Identify the vCenter of the VM | Not Started | -5 | 5-Feb | 15-Feb
Migration | Initiate first replication using M4CE 5.0 | Not Started | -5 | 5-Feb | 15-Feb
Migration | On-board the source VM in M4CE 5.0 console | Not Started | -5 | 5-Feb | 15-Feb
Validation | Infra testing: validate resources, connectivity | Not Started | 0 | 16-Feb | 16-Feb
Hyper care | Hypercare to BAU Handover | Not Started | 10 | 19-Feb | 28-Feb
[108]
The steps 218 and 220 along with the associated tables are better understood by way of the following description:
[109]
Migration planning is an iterative process and continues throughout the migration program. The high-level migration waves, movegroups, and schedule are created as part of the cloud assessment, but the system 100 refines this plan based on the details collected in the Macro design and Micro design stages. The objectives of the automated migration planner of the system 100 are as follows:
1. Validate the overall migration schedule for the entire program and refine it based on the updated migration waves & movegroups and application mapping.
2. Maintain a repository of Migration Runbook templates covering different migration approaches (Rehost, Replatform, Refactor), source and target platforms (on-premises datacenter, AWS, Azure and GCP) and the T-minus process cycle.
3. Automatically map the best suitable Migration Runbook template for each application in scope, based on its migration path, complexity and target platform and generate a detailed migration plan with focus on a 'target cutover date'.
4. Track the progress of the migration lifecycle stages at different levels (server, application and waves & movegroups).
[110]
The Process pillar in the migration uses a T-minus process cycle to plan all the migration activities to achieve a target cutover date which is known as the T0 day. T-minus refers to the lead time in business days, in which the migration activities should start in advance so that the final cutover can be completed on time. This T-minus process cycle is mainly dependent on the migration path and complexity of the applications being migrated and is represented as T-30, T-45, T-60, T-75, T-90, T-120, T-150, etc. For example, a medium complex application in the Rehost migration path may follow the T-45 cycle where the detailed discovery activities (macro design) should start 45 business days before the cutover and continue for 2 weeks ending on T-35. The micro design may start at T-35 and will continue for 3 weeks ending on T-20. This is followed by non-production migration at T-10 and production migration at T0. There is a 2-week hypercare period after cutover represented as T+10. All these migration runbook templates are stored in the database 108 within the system 100.
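The T-minus arithmetic described above can be sketched as a business-day offset from the T0 cutover date. Weekends are skipped; holiday calendars and blackout dates, which a production planner would also honour, are omitted as a simplifying assumption:

```python
from datetime import date, timedelta

# Sketch of turning a T-minus offset into a calendar date: T0 is the target
# cutover day and offsets count business days (weekends skipped).
def t_minus_date(cutover: date, offset: int) -> date:
    """Date `offset` business days before (negative) or after (positive) T0."""
    step = timedelta(days=1 if offset > 0 else -1)
    current, remaining = cutover, abs(offset)
    while remaining > 0:
        current += step
        if current.weekday() < 5:  # Monday..Friday count as business days
            remaining -= 1
    return current

cutover = date(2024, 2, 16)          # a Friday, taken as the T0 day
print(t_minus_date(cutover, -45))    # macro design start for a T-45 cycle
print(t_minus_date(cutover, 10))     # end of the T+10 hypercare period
```

For a Friday cutover, T-45 lands nine calendar weeks earlier (also a Friday), matching the 45-business-day lead time of the T-45 cycle, and T+10 falls two calendar weeks after cutover.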
[111]
The automated migration planner checks if Waves & MoveGroups are available for all the applications in migration scope and allows the update if the data is not present. Normally all applications in a MoveGroup follow the same T-minus process cycle. The applications are then grouped based on Waves and MoveGroups and an overall migration schedule is created using the T-minus mapping of the applications and the planned start date of the applications. The migration schedule report is referred to, and updated periodically (e.g., based on inputs from the migration team) and then the progress is compared after checking the migration status for all the applications.
[112]
For each application, there is a detailed migration planning required so that the Migration POD team can identify and execute the pre-migration, migration and post-migration tasks and track the progress. The automated migration planner enables an automated mapping of the migration runbook template based on the migration path, complexity and target platform and finalizes the migration runbook for the application. Tasks from different migration lifecycle stages like Discovery & Analysis (Macro design), Design & Plan (Micro design), Migration, Validation, Hypercare and Transition are grouped in the migration runbook and updated with the T-minus cycle time to indicate the start and end date for each task to complete the migration cutover on a specific target date. Also, the migration tasks are assigned to respective Migration Factory POD team members to ensure proper tracking and closure.
[113]
The automated migration planner enables tracking of the migration status for the program governance and measuring of the Key Performance Indicators such as design velocity, migration velocity, etc. The completion of the migration runbook tasks is consolidated and grouped in different dashboards to track the overall performance accordingly. The dashboard is created at the Application level, Wave & Move Group level and overall program level to bring intelligence to different levels of stakeholders such as the Migration POD lead, Program Manager and Program Sponsors from the client organization, etc. in alignment with the system 100.
[114]
Embodiments of the present disclosure provide systems and methods for performing migration of information technology (IT) components for entities in an efficient and better way by increasing the repeatability, reusability, and automation. Also, the system 100 enables a Machine First approach for delivery where intelligence and insights are created by the system 100 which augments the humans involved in the cloud migration process. This helps to reduce the dependency on the tacit knowledge of the experienced migration consultants, architects, and subject matter experts (SMEs) and reduce the learning curve of the associated cloud migration team. The rare and costly resource time can be utilized effectively since they are assisted with augmented intelligence from the system 100 and can focus on higher value services like reviews, bringing further innovation, etc. Client organizations (e.g., entities) benefit from an increase in migration velocity up to 10 times higher than normal migration and a reduced Total Cost of Ownership with a reduced parallel run in on-premises source datacenters and cloud, and reduced business risks resulting from increased quality of migration with first time right migration. The system 100 is configured to provide an end-to-end orchestration of different migration lifecycle stages, and flow of the intelligence among different stages thus eliminating the need to use different tools.
[115]
The written description describes the subject matter herein to enable any person skilled in the art to make and use the embodiments. The scope of the subject matter embodiments is defined by the claims and may include other modifications that occur to those skilled in the art. Such other modifications are intended to be within the scope of the claims if they have similar elements that do not differ from the literal language of the claims or if they include equivalent elements with insubstantial differences from the literal language of the claims.
[116]
It is to be understood that the scope of the protection is extended to such a program and in addition to a computer-readable means having a message therein; such computer-readable storage means contain program-code means for implementation of one or more steps of the method, when the program runs on a server or mobile device or any suitable programmable device. The hardware device can be any kind of device which can be programmed including e.g., any kind of computer like a server or a personal computer, or the like, or any combination thereof. The device may also include means which could be e.g., hardware means like e.g., an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), or a combination of hardware and software means, e.g., an ASIC and an FPGA, or at least one microprocessor and at least one memory with software processing components located therein. Thus, the means can include both hardware means and software means. The method embodiments described herein could be implemented in hardware and software. The device may also include software means. Alternatively, the embodiments may be implemented on different hardware devices, e.g., using a plurality of CPUs.
[117]
The embodiments herein can comprise hardware and software elements. The embodiments that are implemented in software include but are not limited to, firmware, resident software, microcode, etc. The functions performed by various components described herein may be implemented in other components or combinations of other components. For the purposes of this description, a computer-usable or computer readable medium can be any apparatus that can comprise, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
[118]
The illustrated steps are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope of the disclosed embodiments. Also, the words “comprising,” “having,” “containing,” and “including,” and other similar forms are intended to be equivalent in meaning and be open ended in that an item or items following any one of these words is not meant to be an exhaustive listing of such item or items, or meant to be limited to only the listed item or items. It must also be noted that as used herein and in the appended claims, the singular forms “a,” “an,” and “the” include plural references unless the context clearly dictates otherwise.
[119]
Furthermore, one or more computer-readable storage media may be utilized in implementing embodiments consistent with the present disclosure. A computer-readable storage medium refers to any type of physical memory on which information or data readable by a processor may be stored. Thus, a computer-readable storage medium may store instructions for execution by one or more processors, including instructions for causing the processor(s) to perform steps or stages consistent with the embodiments described herein. The term “computer-readable medium” should be understood to include tangible items and exclude carrier waves and transient signals, i.e., be non-transitory. Examples include random access memory (RAM), read-only memory (ROM), volatile memory, nonvolatile memory, hard drives, CD ROMs, DVDs, flash drives, disks, and any other known physical storage media.
[120]
It is intended that the disclosure and examples be considered as exemplary only, with a true scope of disclosed embodiments being indicated by the following claims.
We Claim:
1. A processor implemented method, comprising:
receiving, by using a migration orchestration unit via one or more hardware processors, migration requirements pertaining to migration of a plurality of components from a first infrastructure to a second infrastructure (202), wherein the plurality of components are specific to an entity;
computing, by using the migration orchestration unit via the one or more hardware processors, a risk maturity score for each of the plurality of components using the migration requirements (204);
generating, by using an inventory consolidation engine via the one or more hardware processors, an inventory matrix for the plurality of components based on a dynamic column mapping library, an output data obtained from a discovery tool and an inventory data obtained from the entity (206);
performing, by using a migration analysis engine via the one or more hardware processors, a migration complexity analysis on the inventory matrix, and a network connection data to obtain a target sizing report, an optimized firewall rule report, and a migration tool fitment report (208);
generating, by using an application information extraction engine via the one or more hardware processors, an application information form (AIF) report for each application comprised in the plurality of components based on the inventory matrix, and one or more responses to one or more queries (210);
creating, by using a target design recommendation engine via the one or more hardware processors, one or more target infrastructure designs for the plurality of components based on one or more design patterns and the application information form (AIF) report generated for each application (212);
generating, by using a design blueprint generator via the one or more hardware processors, a design blueprint document for each application based on the one or more target infrastructure designs, the inventory matrix, the optimized firewall rule report, the target sizing report, the migration tool fitment report, and one or more associated landing zone documents (214);
generating, by using a code generation and automation engine via the one or more hardware processors, terraform code and automation scripts using the design blueprint document generated for each application, and the one or more associated landing zone documents (216);
generating, by using an automated migration planner via the one or more hardware processors, a migration planning runbook specific to one or more levels and an associated migration schedule based on the design blueprint document and one or more runbook templates (218); and
performing, via the one or more hardware processors, migration of the plurality of components from the first infrastructure to the second infrastructure for the entity based on the risk maturity score and an execution of the migration planning runbook specific to the one or more levels, the migration schedule, the terraform code and the automation scripts (220).
2. The processor implemented method as claimed in claim 1, wherein the migration complexity analysis comprises applying one or more migration analysis rules on the inventory matrix, and the network connection data.
3. The processor implemented method as claimed in claim 1, wherein the one or more target infrastructure designs for the plurality of components are obtained by applying one or more design selection rules on (i) the one or more design patterns and (ii) an associated comprehensive design questionnaire.
4. The processor implemented method as claimed in claim 1, wherein the AIF report is further based on the one or more associated landing zone documents and one or more associated application design documents.
5. A system (100), comprising:
a memory (102) storing instructions;
one or more communication interfaces (106); and
one or more hardware processors (104) coupled to the memory (102) via the one or more communication interfaces (106), wherein the one or more hardware processors (104) are configured by the instructions to:
receive, by using a migration orchestration unit, migration requirements pertaining to migration of a plurality of components from a first infrastructure to a second infrastructure (202), wherein the plurality of components are specific to an entity;
compute, by using the migration orchestration unit, a risk maturity score for each of the plurality of components using the migration requirements;
generate, by using an inventory consolidation engine, an inventory matrix for the plurality of components based on a dynamic column mapping library, an output data obtained from a discovery tool and an inventory data obtained from the entity;
perform, by using a migration analysis engine, a migration complexity analysis on the inventory matrix, and a network connection data to obtain a target sizing report, an optimized firewall rule report, and a migration tool fitment report;
generate, by using an application information extraction engine, an application information form (AIF) report for each application comprised in the plurality of components based on the inventory matrix, and one or more responses to one or more queries;
create, by using a target design recommendation engine, one or more target infrastructure designs for the plurality of components based on the one or more design patterns and the application information form (AIF) report generated for each application;
generate, by using a design blueprint generator, a design blueprint document for each application based on the one or more target infrastructure designs, the inventory matrix, the optimized firewall rule report, the target sizing report, the migration tool fitment report, and one or more associated landing zone documents;
generate, by using a code generation and automation engine, terraform code and automation scripts using the design blueprint document generated for each application, and the one or more associated landing zone documents;
generate, by using an automated migration planner, a migration planning runbook specific to one or more levels and an associated migration schedule based on the design blueprint document and one or more runbook templates; and
perform migration of the plurality of components from the first infrastructure to the second infrastructure for the entity based on the risk maturity score and an execution of the migration planning runbook specific to the one or more levels, the migration schedule, the terraform code and the automation scripts.
6. The system as claimed in claim 5, wherein the migration complexity analysis comprises applying one or more migration analysis rules on the inventory matrix, and the network connection data.
7. The system as claimed in claim 5, wherein the one or more target infrastructure designs for the plurality of components are obtained by applying one or more design selection rules on (i) the one or more design patterns and (ii) an associated comprehensive design questionnaire.
8. The system as claimed in claim 5, wherein the AIF report is further based on the one or more associated landing zone documents and one or more associated application design documents.
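The end-to-end data flow recited in claim 1 (reference numerals 202 through 220) can be summarized as a pipeline in which each engine's output feeds the next stage. The following Python sketch is purely illustrative: every function is a hypothetical stub named after a claimed stage and returns a labelled placeholder, so that only the ordering and wiring of the stages is shown, not the claimed implementation:

```python
# Illustrative data flow of the claimed migration method.
# Each function below is a hypothetical stub standing in for one claimed
# engine; it records which stage ran and how many inputs it received.

def stage(name):
    def run(*inputs):
        return {"stage": name, "inputs": len(inputs)}
    return run

compute_risk_maturity_score = stage("204: risk maturity score")
consolidate_inventory       = stage("206: inventory matrix")
analyze_complexity          = stage("208: complexity analysis")
extract_aif                 = stage("210: AIF report")
recommend_target_design     = stage("212: target design")
generate_blueprint          = stage("214: design blueprint")
generate_code               = stage("216: terraform + scripts")
plan_migration              = stage("218: runbook + schedule")
execute_migration           = stage("220: migration execution")

def migrate(requirements, discovery_output, entity_inventory, network_data,
            design_patterns, query_responses, landing_zones, templates):
    rms       = compute_risk_maturity_score(requirements)          # (204)
    inventory = consolidate_inventory(discovery_output,
                                      entity_inventory)            # (206)
    analysis  = analyze_complexity(inventory, network_data)        # (208)
    aif       = extract_aif(inventory, query_responses)            # (210)
    design    = recommend_target_design(design_patterns, aif)      # (212)
    blueprint = generate_blueprint(design, inventory,
                                   analysis, landing_zones)        # (214)
    code      = generate_code(blueprint, landing_zones)            # (216)
    runbook   = plan_migration(blueprint, templates)               # (218)
    return execute_migration(rms, runbook, code)                   # (220)

result = migrate({}, {}, {}, {}, {}, {}, {}, {})
print(result["stage"])  # the final stage reached by the pipeline
```

The sketch makes explicit that the risk maturity score (204) is computed early but consumed only at execution (220), while the design blueprint (214) fans out into both code generation (216) and migration planning (218), as in the claimed method.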