Abstract: A Cloud Platform and A Method for Evaluating Infrastructure Readiness for Migration. The invention relates to cloud computing and enterprise IT infrastructure, specifically to systems for evaluating and managing cloud migration processes. The invention addresses fragmented metadata acquisition and static migration modeling across heterogeneous environments. The cloud platform (200) and method (1000) are deployed within the network-enabled computing device (205), integrating the interface (240), assessment engine (260), and metric modelling engine (MME) (290) to extract metadata (270), transform it into structured metadata (280), and generate migration readiness metrics (QMRRM) (300) using refined scoring logic (320). Customizable instance type recommendations (330) are produced based on user-defined parameters (360) and resource profiles (350). The invention enables real-time transformation, dynamic modeling, and predictive scaling, improving automation, accuracy, and scalability in migration planning. Benefits include consistent metadata processing, contextual provisioning strategies, and enhanced decision-making across hybrid and multi-cloud environments. Figure 1
Description: FIELD OF THE INVENTION
[0001] The present invention relates to the domain of cloud computing and enterprise IT infrastructure, particularly to platforms and systems designed for evaluating and managing cloud migration processes (migration assessment).
BACKGROUND FOR THE INVENTION:
[0002] A cloud platform is a comprehensive environment that provides hardware resources (like virtual servers, storage, and networking) and software services (such as databases, analytics, and orchestration tools) over the internet. The cloud platform abstracts physical infrastructure using virtualization and containerization technologies, enabling scalable, on-demand access to computing resources.
[0003] In cloud migration assessment operations, the cloud platform is used to evaluate existing workloads for compatibility, performance, and cost-effectiveness in the cloud. This involves analyzing hardware dependencies (e.g., CPU, memory, I/O patterns), software configurations (OS, middleware, applications), and network topologies. Tools on the cloud platform help simulate migration scenarios, detect conflicts, and generate metadata for automated provisioning and governance workflows. Fragmented metadata acquisition and transformation across heterogeneous environments - Cloud migration assessments require consistent extraction of infrastructure metadata from diverse source environments comprising virtual machines, containers, and legacy systems. Hardware-level configurations, operating system dependencies, and network protocols introduce variability in metadata formats and accessibility. Without a unified interface and processing logic, computing platforms face challenges in initiating and managing metadata acquisition workflows.
[0004] Transformation of raw metadata into structured formats demands real-time parsing, semantic enrichment, and normalization across multiple layers of abstraction. Memory-resident engines must coordinate with processors to apply AI-driven logic, ensuring compatibility with predefined templates. Inconsistent transformation pipelines result in unreliable data models, affecting downstream analysis and migration planning accuracy.
[0005] Lack of dynamic modeling for migration readiness and configuration recommendations - Quantifying migration readiness involves analyzing structured metadata against performance, utilization, and configuration parameters. Static scoring models fail to adapt to dynamic infrastructure states, leading to inaccurate readiness metrics. Processor-intensive modeling engines require scalable logic to evaluate multiple dimensions of infrastructure health and compatibility with cloud-native configurations.
[0006] Generating instance type recommendations based on readiness metrics demands integration of user-defined parameters, historical trends, and projected workloads. Absence of customizable modeling logic restricts the ability of computing platforms to simulate future resource needs or suggest optimal scaling configurations. This limitation affects decision-making in migration pipelines and reduces automation efficiency in deployment planning.
[0007] US2015347183A1 discloses a system for identifying candidate workloads for migration by analyzing resource usage metrics and performing cost-based evaluations such as total cost of ownership and return on investment. However, the system lacks customizable instance recommendations, and predictive scaling mechanisms.
[0008] US2025165300A1 provides a multi-cloud platform for service deployment and migration, enabling seamless transfer of operational metadata across cloud providers. The system supports metadata extraction via APIs, transformation into cloud-neutral formats, and ranking of deployment arrangements based on cost and performance. Nonetheless, the system does not incorporate dynamic R-factor modeling or customizable instance type recommendations.
[0009] IN202511028329A describes an AI-powered cloud adoption engine that evaluates IT infrastructure, workloads, and performance needs to generate migration strategies. The system leverages machine learning for intelligent recommendations and structured metadata transformation. However, the system does not formalize migration readiness scoring.
[0010] US2022171699A1 presents a system for transforming application source code into cloud-native formats using pre-defined transformation flows and CI/CD pipeline automation. The system includes a cloud readiness assessment tool and maturity scoring based on weighted metrics. However, the system does not explicitly support customizable instance type recommendations.
[0011] WO2020136427A1 introduces a cloud assessment tool that captures structured application metadata through a wizard-based interface and evaluates cloud suitability using standardized attributes. The system supports batch evaluation, secure data storage, and recommendation of cloud platforms and transformation paths. Nevertheless, the system lacks dynamic R-factor modeling and customizable instance type recommendations.
[0012] Therefore, there is a need for a system (cloud platform) or such provisions that overcome the problems of the prior art.
OBJECTS OF THE INVENTION:
[0013] An object of the present invention is to enable consistent extraction and processing of infrastructure metadata from diverse computing environments.
[0014] One more object of the present invention is to facilitate real-time transformation of raw metadata into structured formats suitable for migration analysis.
[0015] One more object of the present invention is to support dynamic modeling of migration readiness based on infrastructure performance and configuration parameters.
[0016] One more object of the present invention is to improve automation in deployment planning through accurate and scalable configuration recommendations.
SUMMARY OF THE INVENTION:
[0017] The present invention pertains to cloud computing platforms designed to assess and manage enterprise IT infrastructure migration. The platform addresses fragmented metadata acquisition across heterogeneous environments by enabling consistent extraction from virtual machines, containers, and legacy systems. A unified interface coordinates secure data exchange using APIs and collectors, ensuring compatibility across hybrid and multi-cloud deployments. The invention introduces real-time transformation logic that parses, enriches, and normalizes metadata into structured formats suitable for downstream analysis. This transformation is executed using AI-driven semantic enrichment and normalization techniques.
[0018] The platform integrates memory-processor architecture to execute migration-related operations. Memory modules store transformation templates, scoring models, and metadata schemas, while processors perform parallel computations for parsing and modeling. The assessment engine extracts metadata from source environments and transforms the metadata into structured formats using predefined templates. The structured metadata includes configuration parameters, performance statistics, and dependency mappings, which are stored in cloud-native formats. These formats enable continuous evaluation and provisioning logic across diverse infrastructure layers.
[0019] A metric modeling engine analyzes structured metadata to generate quantifiable migration readiness metrics. These metrics are computed using refined scoring logic that incorporates weighted models and normalization routines. Parameters such as compute density, I/O intensity, uptime history, and dependency graphs are evaluated to determine workload suitability. The scoring logic supports dynamic recalculation based on evolving infrastructure states, enabling real-time updates and comparative analysis across workloads. The modeling engine enhances decision-making accuracy and supports scalable migration planning.
[0020] Customizable instance type recommendations are generated based on migration readiness metrics and user-defined parameters. Configuration options include baseline, moderately scaled, and highly scaled profiles tailored to operational characteristics. Each option is defined by virtual CPU count, memory allocation, storage type, and bandwidth requirements. Resource profiles are derived from actual usage patterns and categorized into compute-intensive, memory-intensive, I/O-intensive, or latency-sensitive types. Recommendations are aligned with business constraints, preferred cloud vendors, and performance objectives, ensuring contextual relevance and technical feasibility.
[0021] The interface supports multiple input modes including graphical user interfaces, AI-generated recommendation interfaces, and automated service request interfaces. Inputs include workload identifiers, migration goals, and performance thresholds.
[0022] A deployment control module manages post-assessment provisioning and scaling operations. Predictive scaling logic anticipates future resource demands using historical telemetry and operational trends. Simulations evaluate performance under projected workloads, validating configuration suitability. Scaled configurations include instance count, resource type, geographic distribution, and failover policies. These configurations are stored in deployment templates and executed via orchestration platforms.
[0023] The invention offers technical advantages over prior art by formalizing dynamic migration readiness scoring and enabling customizable instance recommendations. Unlike existing systems, the platform supports real-time transformation, predictive scaling, and comparative workload analysis. The present invention solves the problem of fragmented metadata handling and static modeling by delivering a unified, scalable, and intelligent migration assessment framework.
BRIEF DESCRIPTION OF DRAWINGS:
[0024] Figure 1 shows a schematic block diagram of a cloud platform for migration assessment deployed within a network-enabled computing device in accordance with the present invention;
[0025] Figures 2 to 6 show schematic block diagrams illustrating input interfaces, transformation techniques, instance recommendation configurations, deployment modules, and SaaS architecture of the cloud platform shown in figure 1; and
[0026] Figure 7 shows a flowchart of a method for executing cloud migration assessment using the cloud platform shown in Figure 1.
DETAILED DESCRIPTION:
[0027] The present invention provides a cloud platform (200) (Figure 1) for migration assessment deployed within a network-enabled computing device (205). The network-enabled computing device (205) includes memories (210) and processors (220), which collectively enable the execution of migration-related operations. The cloud platform (200) is designed to interact with source environments (230), which may include virtual machines, containerized applications, legacy infrastructure, or hybrid cloud deployments. The operational connection between the cloud platform (200) and the source environments (230) facilitates secure and continuous data exchange, allowing infrastructure metadata to be acquired for assessment workflows.
[0028] The memories (210) are configured to store system instructions, metadata schemas, transformation templates, and migration scoring models. The memories (210) may include volatile memory such as dynamic random-access memory (DRAM) for temporary data processing and non-volatile memory such as solid-state drives (SSD).
[0029] The processors (220) are configured to execute instructions stored in the memory (210) and perform computational tasks associated with metadata parsing, transformation, and migration readiness evaluation. The processors (220) may include multi-core central processing units (CPUs), graphics processing units (GPUs), or tensor processing units (TPUs) capable of parallel execution.
[0030] The source environments (230) represent the origin systems from which infrastructure metadata is extracted. The source environments (230) may include public cloud platforms. The cloud platform (200) interfaces with the source environments (230) using secure application programming interfaces (APIs), agent-based collectors, or discovery tools to retrieve metadata such as CPU utilization, memory allocation, network topology, and storage configurations. For example, a containerized microservice running in a Kubernetes® cluster may expose its resource profile via Prometheus® (trade names) metrics, which are ingested by the cloud platform (200) for assessment.
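By way of a non-limiting illustration, ingestion of Prometheus®-style metrics as described above may be sketched as follows. The metric names and the simplified text-format parser are illustrative assumptions for this example only, not the platform's actual collectors:

```python
# Illustrative sketch: parsing Prometheus-style text exposition output into
# a resource profile dictionary. Metric names are assumptions for this
# example; a production collector would use an established client library.

def parse_prometheus_metrics(exposition_text: str) -> dict:
    """Extract simple metric values from Prometheus text exposition lines."""
    profile = {}
    for line in exposition_text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):  # skip blanks and HELP/TYPE comments
            continue
        name, _, value = line.rpartition(" ")  # "<metric_name> <value>"
        try:
            profile[name] = float(value)
        except ValueError:
            continue  # ignore malformed lines
    return profile

sample = """\
# HELP container_cpu_usage_seconds_total Cumulative CPU time
container_cpu_usage_seconds_total 1234.5
container_memory_working_set_bytes 536870912
"""
profile = parse_prometheus_metrics(sample)
```

Real exposition lines may also carry label sets in braces; the parser above is deliberately minimal to show only the ingestion concept.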
[0031] The platform (200) includes an interface (240), an assessment engine (260), a metric modelling engine (MME) (290), and a migration readiness metric generator (300) configured to produce customizable instance type recommendations (330).
[0032] The interface (240) is configured within the memory (210) and processed by the processor (220) to facilitate user interaction with the cloud platform (200). The interface (240) is designed to receive input (250) for initiating a cloud migration assessment (A1), which may include parameters such as source environment identifiers, workload categories, performance thresholds, and migration objectives. The interface (240) may be implemented as a graphical user interface (GUI), command-line interface (CLI), or application programming interface (API), depending on the deployment context and user access requirements.
[0033] The input (250) received through the interface (240) may originate from system administrators, migration consultants, or automated orchestration tools. For example, a user may specify a virtual machine cluster hosted in a private data center. The input (250) may also include metadata extraction preferences, such as selecting between agent-based or agentless discovery methods and defining the scope of analysis—ranging from compute resources to network dependencies.
[0034] The interface (240) supports preprocessing of the input (250) to ensure compatibility with the cloud platform’s (200) assessment engine. Preprocessing may also involve mapping user-defined inputs to internal configuration templates stored in the memory (210).
[0035] In certain implementations, the interface (240) may be integrated with enterprise service portals or cloud management dashboards to streamline migration workflows.
[0036] The assessment engine (260) is configured within the memory (210) and processed by the processor (220) to perform real-time evaluation of infrastructure metadata for migration assessment. The assessment engine (260) is operationally connected to the source environments (230), which may include virtual machines, container clusters, legacy systems, or hybrid cloud deployments. The assessment engine (260) initiates metadata extraction workflows using secure protocols, agent-based collectors, or API integrations, depending on the architecture of the source environments (230).
[0037] The metadata (270) extracted by the assessment engine (260) includes configuration parameters, performance statistics, dependency mappings, and operational logs. The metadata elements are collected from various layers of the source environments (230), such as operating systems, middleware, and application runtimes. The assessment engine (260) supports both synchronous and asynchronous extraction modes to accommodate environments with varying latency and access constraints. For instance, the metadata (270) from a containerized application may be retrieved using Prometheus® exporters or Kubernetes® (trade name) endpoints.
[0038] The assessment engine (260) transforms the extracted metadata (270) into a structured metadata (280) using transformation techniques (not shown). The transformation techniques include AI-based semantic enrichment and normalization using predefined metadata templates. AI-based semantic enrichment enhances the metadata (270) by interpreting context, resolving ambiguities, and tagging dependencies. Normalization ensures consistency across heterogeneous data sources by converting values into uniform units and formats.
[0039] The structured metadata (280) generated by the assessment engine (260) is stored in a cloud-native format suitable for downstream analysis and provisioning workflows. The structured metadata (280) serves as the foundation for migration readiness evaluation, instance type recommendation, and deployment planning. For example, the structured metadata (280) representing a three-tier web application may include normalized CPU utilization, memory allocation, storage IOPS, and network latency metrics for each tier. The assessment engine (260) ensures that the transformation process is executed in real time, enabling dynamic updates and continuous assessment during migration planning.
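As a non-limiting sketch of the raw-to-structured transformation described above, the following example normalizes a raw record into consistent units; the field names and unit conventions are assumptions made for this illustration only:

```python
# Illustrative sketch: transforming raw metadata (270) into structured
# metadata (280). Field names and unit conventions are assumptions for
# this example, not the platform's actual schema.

def to_structured(raw: dict) -> dict:
    """Normalize a raw metadata record into consistent units and formats."""
    return {
        "cpu_util_pct": round(raw["cpu_util_fraction"] * 100, 1),  # fraction -> percent
        "memory_gb": raw["memory_mb"] / 1024,                      # MB -> GB
        "storage_gb": raw["storage_bytes"] / (1024 ** 3),          # bytes -> GB
        "dependencies": sorted(set(raw.get("dependencies", []))),  # de-duplicated mapping
    }

structured = to_structured({
    "cpu_util_fraction": 0.42,
    "memory_mb": 16384,
    "storage_bytes": 536870912000,
    "dependencies": ["db-tier", "cache", "db-tier"],
})
```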
[0040] The metric modelling engine (MME) (290) is configured within the memory (210) and processed by the processor (220) to perform analytical computations on the structured metadata (280) derived from the source environments (230). The MME (290) is designed to evaluate infrastructure characteristics, operational parameters, and configuration dependencies to determine migration feasibility. The structured metadata (280) may include normalized performance metrics, resource utilization profiles, and dependency graphs, which are analyzed using advanced scoring algorithms. For example, the MME (290) may assess a multi-tier application’s readiness by comparing CPU load variability, memory saturation levels, and network latency against predefined migration thresholds.
[0041] The metric produced by the MME (290) is generally referred to as the "instance recommendation factor" (dynamic readiness).
[0042] The MME (290) generates a quantifiable migration readiness metric (QMRRM) (300) based on parameters (310) extracted from the source environments (230). The parameters (310) may include compute density, storage throughput, I/O intensity, uptime history, and service interdependence. The QMRRM (300) serves as a numerical indicator of how suitable a workload is for migration to a cloud-native environment. For instance, a workload with low dependency density and consistent performance may receive a high QMRRM (300), indicating minimal risk and high compatibility with cloud provisioning models. The QMRRM (300) may be recalculated dynamically as infrastructure conditions evolve or as input parameters are modified.
[0043] The MME (290) utilizes a refined scoring logic (320) to compute the QMRRM (300). The refined scoring logic (320) incorporates weighted scoring models and normalization techniques. The scoring logic (320) may be configured to prioritize specific migration goals such as cost efficiency, performance optimization, or compliance alignment. For example, in a financial services deployment, the refined scoring logic (320) may assign higher weightage to encryption support and latency thresholds, whereas in a media streaming application, throughput and scalability may be prioritized.
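By way of a non-limiting example, a weighted-scoring computation of the QMRRM (300) may be sketched as follows. The parameter names, weights, and 0-100 scale are illustrative assumptions; the actual refined scoring logic (320) is configurable per migration goal:

```python
# Illustrative weighted-scoring sketch for the QMRRM (300). Weights and the
# 0-100 scale are assumptions for this example only. Each input parameter
# is pre-normalized to [0, 1], where 1 means most cloud-ready (e.g. low
# dependency density maps to a value near 1).

WEIGHTS = {"compute_density": 0.3, "io_intensity": 0.2,
           "uptime_history": 0.3, "dependency_density": 0.2}

def qmrrm(params: dict) -> float:
    """Weighted readiness score on a 0-100 scale."""
    score = sum(WEIGHTS[k] * params[k] for k in WEIGHTS)
    return round(score * 100, 1)

# A workload with consistent uptime and few dependencies scores high.
print(qmrrm({"compute_density": 0.8, "io_intensity": 0.6,
             "uptime_history": 0.9, "dependency_density": 0.9}))  # -> 81.0
```

Reweighting the same model (for example, increasing the weight of latency-related parameters for a financial services deployment) changes the score without changing the scoring mechanism.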
[0044] The refined scoring logic (320) is stored in the memory (210) and may be updated periodically based on evolving migration standards or enterprise policies.
[0045] The MME (290) further supports comparative analysis across multiple workloads or environments. By applying the refined scoring logic (320) uniformly, the MME (290) enables comparison of migration readiness across departments, data centers, or cloud platforms. The QMRRM (300) generated by the MME (290) is used by downstream modules to guide instance type recommendations, provisioning strategies, and deployment sequencing.
[0046] The metric modelling engine (MME) (290) is further configured to generate customizable instance type recommendations (330) based on analytical evaluation of the structured metadata (280) and user-defined parameters (360). These instance type recommendations (330) are tailored to match the operational characteristics and migration goals of the source environments (230). The MME (290) utilizes the refined scoring logic (320) to correlate infrastructure metrics with optimal cloud resource configurations. For example, a workload exhibiting high memory consumption and moderate CPU usage may be matched with a memory-optimized instance type in a public cloud platform.
[0047] The customizable instance type recommendations (330) comprise one or more configuration options (340), each representing a distinct deployment profile. These configuration options (340) may include baseline configurations that mirror the current resource profile, moderately scaled configurations for performance enhancement, and highly scaled configurations for anticipated growth. Each configuration option (340) is defined by parameters such as virtual CPU count, memory allocation, storage type, and network bandwidth. For instance, a baseline configuration may recommend a 4 vCPU, 16 GB RAM instance, while a scaled configuration may suggest an 8 vCPU, 32 GB RAM instance with SSD-backed storage.
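As a non-limiting illustration of the configuration options (340) described above, the following sketch derives baseline, moderately scaled, and highly scaled profiles from a current resource profile. The 2x and 4x scaling factors and the storage tiers are assumptions for this example only:

```python
# Illustrative sketch of the three configuration options (340). The scaling
# factors (2x, 4x) and storage tiers are assumptions, not fixed platform
# behavior; real options are derived from usage patterns and scoring logic.

def configuration_options(vcpus: int, ram_gb: int) -> dict:
    return {
        # Baseline mirrors the current resource profile for continuity.
        "baseline": {"vcpus": vcpus, "ram_gb": ram_gb, "storage": "HDD"},
        # Moderately scaled accommodates performance improvement or growth.
        "moderately_scaled": {"vcpus": vcpus * 2, "ram_gb": ram_gb * 2, "storage": "SSD"},
        # Highly scaled targets performance-intensive or high-growth workloads.
        "highly_scaled": {"vcpus": vcpus * 4, "ram_gb": ram_gb * 4, "storage": "NVMe SSD"},
    }

# Matches the worked example: a 4 vCPU / 16 GB baseline yields an
# 8 vCPU / 32 GB SSD-backed moderately scaled option.
opts = configuration_options(4, 16)
```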
[0048] Each configuration option (340) includes at least one type of a resource profile (350) derived from the source environments (230). The resource profiles (350) reflect actual usage patterns, performance thresholds, and dependency mappings observed during metadata extraction and transformation. The resource profiles (350) may be categorized into compute-intensive, memory-intensive, I/O-intensive, or latency-sensitive types.
[0049] The user-defined parameters (360) received via the interface (240) influence the generation of the customizable instance recommendations (330). The user-defined parameters (360) may include business constraints, budget limits, compliance requirements, preferred cloud vendors, and performance objectives. The interface (240) may support multiple input modes such as graphical user interface (GUI), AI-generated recommendation interface, or automated service request interface originating from orchestration tools.
[0050] The interface (240) is further configured to receive the input (250) (Figure 2) through one or more types of receiving interfaces selected from a defined group. These receiving interfaces are designed to accommodate varied operational contexts and user roles, ensuring flexibility and accessibility across enterprise environments. The input (250) may include migration objectives, workload identifiers, performance thresholds, and user-defined constraints, which are essential for initiating and customizing the cloud migration assessment (A1).
[0051] A user-operated graphical user interface (GUI) (601) enables manual input through interactive visual components such as dropdown menus, sliders, checkboxes, and data entry fields. The GUI (601) may be embedded within enterprise dashboards or cloud management portals, allowing users to configure migration parameters with minimal technical overhead. For example, a system administrator may use the GUI (601) to select a group of virtual machines for assessment and specify preferred cloud platforms.
[0052] An AI-generated recommendation interface (602) provides the input (250) based on predictive analytics and historical migration patterns. The AI-generated recommendation interface (602) may be powered by machine learning models trained on infrastructure telemetry, cost-performance data, and previous migration outcomes. For instance, the AI-generated recommendation interface (602) may suggest optimal migration configurations for a web application based on its observed resource consumption and latency sensitivity.
[0053] An automated service request interface (603) enables the input (250) to be received from external systems or orchestration tools without manual intervention. The automated service request interface (603) may be integrated with infrastructure-as-code platforms, continuous integration/continuous deployment (CI/CD) pipelines, or enterprise service buses.
[0054] The assessment engine (260) (Figure 3) transforms the extracted metadata (270) into the structured metadata (280) using one or more transformation techniques selected from a defined group. These transformation techniques are designed to standardize, enrich, and normalize infrastructure data originating from heterogeneous source environments (230), thereby enabling consistent analysis and provisioning logic across cloud platforms.
[0055] An AI-based semantic enrichment technique (703) enhances the extracted metadata (270) by interpreting contextual relationships, resolving ambiguities, and tagging dependencies. This technique utilizes machine learning models trained on infrastructure telemetry, historical migration data, and operational patterns to infer missing attributes and identify logical groupings.
[0056] A normalization technique (704) converts metadata values into consistent units and formats to enable accurate comparison and scoring. The normalization technique (704) may include converting memory values from megabytes to gigabytes, standardizing timestamp formats, unifying naming conventions, and harmonizing performance metrics across environments.
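A non-limiting sketch of the normalization technique described above follows; the input field names are illustrative assumptions:

```python
# Illustrative sketch of metadata normalization: megabytes to gigabytes,
# naming unification, and epoch timestamps to ISO 8601. Input field names
# are assumptions for this example only.
from datetime import datetime, timezone

def normalize(record: dict) -> dict:
    return {
        "memory_gb": record["memory_mb"] / 1024,              # megabytes -> gigabytes
        "hostname": record["host_name"].strip().lower(),      # unify naming conventions
        "timestamp": datetime.fromtimestamp(
            record["epoch_s"], tz=timezone.utc).isoformat(),  # epoch -> ISO 8601 (UTC)
    }

clean = normalize({"memory_mb": 2048, "host_name": " WebServer01 ", "epoch_s": 0})
```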
[0057] The customizable instance type recommendations (330) (Figure 4) generated by the metric modelling engine (MME) (290) are based on the quantifiable migration readiness metric (QMRRM) (300) and refined scoring logic (320). These recommendations are designed to support flexible provisioning strategies by offering multiple configuration options (340) that align with the operational characteristics and migration goals of the source environments (230). Each configuration option (340) is derived from the structured metadata (280) and the user-defined parameters (360) received via the interface (240), ensuring contextual relevance and technical feasibility.
[0058] A baseline configuration (340a) is equivalent to the current resource profile (350) of the source environment (230). The baseline configuration (340a) replicates the existing compute, memory, storage, and network parameters to maintain operational continuity during migration. For example, if a virtual machine in the source environment (230) is configured with 4 vCPUs, 16 GB RAM, and 500 GB HDD storage, the baseline configuration (340a) will recommend a cloud instance with equivalent specifications.
[0059] A moderately scaled configuration (340b) includes enhanced compute or memory resources to accommodate performance improvements or anticipated workload expansion. The moderately scaled configuration (340b) is generated by analyzing historical usage patterns, peak load intervals, and performance bottlenecks identified in the structured metadata (280). For instance, a workload exhibiting periodic memory saturation may be recommended to have a configuration with 8 vCPUs and 32 GB RAM to ensure smoother operation under variable load conditions. The moderately scaled configuration (340b) is ideal for applications undergoing modernization or experiencing gradual growth.
[0060] A highly scaled configuration (340c) is optimized for performance-intensive workloads or anticipated future growth scenarios. The highly scaled configuration (340c) incorporates advanced resource profiles (350) such as high-throughput storage, accelerated networking, and compute-optimized instance types. The MME (290) evaluates scalability requirements, latency sensitivity, and redundancy needs to generate this configuration. The highly scaled configuration (340c) supports workloads with aggressive performance targets or those transitioning to high-availability architectures.
[0061] A deployment control module (370) (Figure 5) is configured within the memory (210) and processed by the processor (220) to manage post-assessment provisioning and scaling operations within the cloud platform (200). The deployment control module (370) operates in conjunction with the metric modelling engine (MME) (290) and the assessment engine (260) to translate migration readiness insights into actionable deployment strategies.
[0062] An integrated predictive scaling logic (380) is embedded within the deployment control module (370) to anticipate future resource demands based on historical usage patterns and operational trends. The predictive scaling logic (380) analyzes telemetry data such as CPU utilization curves, memory consumption rates, and I/O throughput fluctuations to forecast scaling requirements. For example, a retail application experiencing traffic surges during holiday seasons may be flagged for auto-scaling configurations that activate additional computing instances during peak hours. The predictive scaling logic (380) ensures that resource provisioning remains aligned with performance objectives and cost constraints.
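As a non-limiting illustration of forecasting from historical telemetry, a simple linear trend fit over past CPU utilization samples may be sketched as follows. A production predictive scaling logic (380) would use richer telemetry and models; this example only shows the extrapolation concept:

```python
# Illustrative sketch: least-squares linear extrapolation of a CPU
# utilization series one interval into the future. The linear model is an
# assumption for this example; real forecasting may use seasonal models.

def forecast_next(samples: list[float]) -> float:
    """Fit a line y = a + b*x to the samples and evaluate at x = len(samples)."""
    n = len(samples)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(samples) / n
    slope = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, samples)) / \
            sum((x - x_mean) ** 2 for x in xs)
    return y_mean + slope * (n - x_mean)

# Steadily rising utilization (50%, 55%, 60%, 65%) forecasts 70% next,
# which could trigger a scale-up before the threshold is breached.
print(forecast_next([50, 55, 60, 65]))  # -> 70.0
```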
[0063] The deployment control module (370) simulates future resource utilization scenarios (390) using historical trends and projected workloads derived from the structured metadata (280). The future resource utilization scenarios (390) may include stress testing, load forecasting, and performance modeling across various cloud configurations. The deployment control module (370) generates scaled configurations (395) that anticipate growth, seasonal spikes, or performance bottlenecks. The scaled configurations (395) are derived from customizable instance type recommendations (330) and are adjusted based on predictive insights and simulation results. Each scaled configuration (395) includes parameters such as instance count, resource type, geographic distribution, and failover policies.
[0064] In one embodiment, the cloud platform (200) incorporates the interface (240), the assessment engine (260), and the metric modelling engine (MME) (290), each implemented as Software-as-a-Service (SaaS) applications (800) (Figure 6). The interface (240) comprises a web-accessible user interaction layer that supports multi-tenant access, secure authentication protocols, and dynamic rendering of assessment modules. The interface (240) is designed to operate independently of client-side hardware, enabling seamless access across devices and operating systems. For example, the interface (240) may utilize RESTful APIs to fetch real-time data from the assessment engine (260), allowing users to initiate and monitor evaluation workflows remotely.
[0065] The assessment engine (260) and the metric modelling engine (MME) (290) are configured to function as modular, cloud-native services within the SaaS architecture. The assessment engine (260) includes a scoring algorithm repository, and a feedback loop mechanism that continuously refines assessment parameters based on historical data. The metric modelling engine (MME) (290) includes a statistical modelling suite, a data normalization module, and a visualization layer that generates performance dashboards. These components are deployed using container orchestration platforms such as Kubernetes® (trade names).
[0066] In one embodiment, a method (1000) (Figure 7) for cloud migration assessment is provided in accordance with the present invention. The method (1000) starts at step (1000a). At step (1001), configure the interface (240), the assessment engine (260), and the metric modelling engine (MME) (290) in the memory (210). The interface (240) connects to external systems to receive input data such as infrastructure parameters, migration readiness scores, or operational metrics. The assessment engine (260) processes the input data to evaluate system performance, risk factors, and compliance thresholds.
[0067] At step (1002), connect the cloud platform (200) operationally with one or more source environments (230). The cloud platform (200) establishes secure communication channels with the source environments (230) using APIs, VPNs, or direct network links. At step (1003), receive the input (250) for initiating the cloud migration assessment (A1) through the interface (240). The input (250) includes parameters such as source environment identifiers, workload types, performance baselines, and migration objectives.
[0068] At step (1004), extract the metadata (270) from the source environments (230) and transform the extracted metadata (270) into the structured metadata (280) in real time by the assessment engine (260). The metadata (270) includes system configurations, workload dependencies, network topologies, and performance logs. At step (1005), analyze the structured metadata (280) and generate the quantifiable migration readiness metric (QMRRM) (300) based on the parameters (310) of the source environments (230) using the refined scoring logic (320) by the metric modelling engine (MME) (290).
[0069] At step (1006), generate the customizable instance type recommendations (330) comprising one or more configuration options (340) with at least one type of resource profiles (350) of the source environments (230) based on the user-defined parameters (360) received via the interface (240), and the refined scoring logic (320) by the metric modelling engine (MME) (290). The method (1000) ends at step (1000b).
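By way of a non-limiting illustration, the sequence of steps (1004) to (1006) described above may be sketched as a linear pipeline. All function bodies, scoring weights, and tier thresholds are assumptions introduced for illustration; only the ordering of extraction, transformation, scoring, and recommendation follows the method (1000).

```python
# Hypothetical sketch of the method (1000) as a pipeline over one source
# environment (230).
def extract_metadata(source_env):          # step (1004): raw metadata (270)
    return {"cpu_cores": source_env["cpu"], "mem_gb": source_env["mem"]}

def transform(raw):                        # step (1004): structured metadata (280)
    return {"compute": raw["cpu_cores"], "memory": raw["mem_gb"]}

def score(structured):                     # step (1005): QMRRM (300), illustrative
    return min(100, structured["compute"] * 5 + structured["memory"])

def recommend(structured, qmrrm, budget):  # step (1006): recommendations (330)
    tier = "highly_scaled" if qmrrm > 80 and budget == "high" else "baseline"
    return {"tier": tier, "compute": structured["compute"]}

env = {"cpu": 8, "mem": 32}
structured = transform(extract_metadata(env))
qmrrm = score(structured)
rec = recommend(structured, qmrrm, budget="high")
```

Because each step consumes only the output of the previous step, the pipeline can be re-run unchanged across heterogeneous source environments, which is the repeatability property the method relies on.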
[0070] The cloud platform (200) enables structured migration assessments across hybrid environments—virtual machines, containers, and legacy systems. Using the assessment engine (260), raw infrastructure metadata (270) is transformed into structured metadata (280), supporting consistent workload evaluation and simplifying migration planning across public and private clouds.
[0071] The metric modelling engine (MME) (290) computes quantifiable migration readiness metrics (QMRRM) (300) using parameters (310) like compute density and I/O intensity. The refined scoring logic (320) enables comparative analysis across departments or data centers, helping prioritize workloads and improve migration strategy accuracy.
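By way of a non-limiting illustration, a weighted readiness score over the parameters (310) named above (compute density and I/O intensity) may be sketched as follows. The weights and the normalization are assumptions for illustration and are not the claimed refined scoring logic (320).

```python
# Hypothetical sketch: a weighted QMRRM (300) enabling comparative ranking
# of workloads across departments, as described for the MME (290).
WEIGHTS = {"compute_density": 0.6, "io_intensity": 0.4}

def qmrrm(params):
    """Higher compute density and lower I/O intensity yield higher readiness."""
    return round(100 * (WEIGHTS["compute_density"] * params["compute_density"]
                        + WEIGHTS["io_intensity"] * (1 - params["io_intensity"])), 1)

workloads = {
    "dept_a": {"compute_density": 0.9, "io_intensity": 0.2},
    "dept_b": {"compute_density": 0.5, "io_intensity": 0.7},
}
# Comparative analysis: rank departments by readiness to prioritize migration.
ranking = sorted(workloads, key=lambda w: qmrrm(workloads[w]), reverse=True)
```

Under these assumed weights, the compute-dense, I/O-light workload ranks first, illustrating how a single metric supports prioritization across departments or data centers.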
[0072] Based on the QMRRM and user-defined parameters (360) such as budget and performance goals, the MME generates customizable instance type recommendations (330). Configuration profiles, namely a baseline (340a), a moderately scaled (340b), and a highly scaled (340c) configuration, are derived from resource profiles (350) and structured metadata (280), enabling precise provisioning and cost optimization.
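By way of a non-limiting illustration, the derivation of the three configuration options (340a) to (340c) from a source resource profile (350) may be sketched as follows. The scaling factors of 1.5x and 2x are assumptions introduced for illustration.

```python
# Hypothetical sketch: deriving baseline, moderately scaled, and highly
# scaled configuration options from a resource profile (350).
def configuration_options(profile):
    baseline = dict(profile)                                  # (340a): as-is
    moderate = {k: int(v * 1.5) for k, v in profile.items()}  # (340b): assumed 1.5x
    high = {k: v * 2 for k, v in profile.items()}             # (340c): assumed 2x
    return {"baseline": baseline,
            "moderately_scaled": moderate,
            "highly_scaled": high}

options = configuration_options({"vcpus": 4, "mem_gb": 16})
```

Presenting all three tiers side by side lets a user trade cost against headroom for anticipated growth, which is the provisioning decision the recommendations (330) are intended to support.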
[0073] The interface (240) supports multiple input modes—GUI (601), AI-generated interface (602), and automated service request interface (603)—allowing minimal manual effort and integration with orchestration tools for event-driven workflows. Role-based access control and multilingual support enhance accessibility and operational efficiency.
[0074] The deployment control module (370) simulates future resource utilization (390) using predictive scaling logic (380), which analyzes historical telemetry to forecast demand spikes. This supports proactive provisioning for seasonal or performance-sensitive workloads, aligning with business continuity goals. The platform (200) is deployable on network-enabled computing devices (205), where processors (220) and memories (210) execute parsing, modeling, and scoring tasks in parallel. This supports both edge and centralized deployments, making the solution scalable and adaptable to varied enterprise infrastructures.
[0075] The method (1000) orchestrates end-to-end migration assessments through a repeatable workflow. By configuring the interface (240), assessment engine (260), and MME (290) within the memory (210) of the computing device (205), users automate metadata extraction (1004), transformation, scoring (1005), and recommendation generation (1006) in real time. This ensures consistent evaluation across heterogeneous environments (230) and supports integration with CI/CD pipelines.
[0076] Metadata extraction is initiated via agent-based collectors and APIs, with structured input (250) configuring extraction parameters. Transformation into the structured metadata (280) is achieved through the semantic enrichment (703) and the normalization (794), executed by the processor (220), ensuring compatibility with cloud-native formats and resolving pipeline inconsistencies. The coordinated execution of extraction, transformation, and scoring improves consistency, reduces manual intervention, and enhances compatibility across dynamic infrastructure states. This structured approach directly addresses fragmented metadata acquisition and transformation challenges, supporting continuous infrastructure modernization.
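By way of a non-limiting illustration, the normalization step described above may be sketched as follows. The template fields and alias mappings are assumptions for illustration; the claimed semantic enrichment (703) would employ an AI-based technique rather than a static lookup.

```python
# Hypothetical sketch: normalization of heterogeneous raw metadata (270)
# from different source environments into a common structured schema (280)
# using a predefined metadata template.
TEMPLATE = {"cpu_cores": 0, "mem_gb": 0, "os": "unknown"}
ALIASES = {"vcpu": "cpu_cores", "cores": "cpu_cores",
           "memory_gb": "mem_gb", "ram": "mem_gb", "platform": "os"}

def normalize(raw):
    structured = dict(TEMPLATE)
    for key, value in raw.items():
        canonical = ALIASES.get(key, key)   # map source-specific field names
        if canonical in structured:          # drop fields outside the template
            structured[canonical] = value
    return structured

vm = normalize({"vcpu": 4, "ram": 16, "platform": "linux"})
container = normalize({"cores": 2, "memory_gb": 8})
```

Both a VM-style and a container-style record collapse to the same schema, which is the pipeline-consistency property the transformation is intended to provide.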
[0077] All third-party trademarks, service marks, and trade names referenced in this specification are the property of their respective owners and are used solely for descriptive and identification purposes to identify compatible systems and services. Such use does not imply endorsement, affiliation, or sponsorship by the trademark owners, and all trademark rights are acknowledged.

CLAIMS
We Claim:
1) A cloud platform (200) for migration assessment, the cloud platform (200) being configured in a network-enabled computing device (205) with one or more memories (210) and one or more processors (220), the cloud platform (200) being operationally connected with one or more source environments (230), the cloud platform (200) comprising:
(a) an interface (240) configured in the memory (210) and processed by the processor (220), the interface (240) being configured to receive input (250) for initiating a cloud migration assessment (A1);
(b) an assessment engine (260) configured in the memory (210) and processed by the processor (220), the assessment engine (260) being configured to extract metadata (270) from the source environments (230) and transform the extracted metadata (270) into a structured metadata (280) in real time;
(c) a metric modelling engine (MME) (290) configured in the memory (210) and processed by the processor (220), the MME (290) being configured to analyse the structured metadata (280), and generate a quantifiable migration readiness metric (QMRRM) (300) based on parameters (310) of the source environments (230) using a refined scoring logic (320); and
wherein the MME (290) generates customizable instance type recommendations (330) comprising one or more configuration options (340) with at least one type of resource profiles (350) of the source environments (230) based on user-defined parameters (360) received via the interface (240), and the refined scoring logic (320).
2) The cloud platform as claimed in claim 1, wherein the interface (240) is configured to receive the input (250) via one or more types of receiving interfaces selected from the group consisting of: a user-operated graphical user interface (GUI) (601), an AI-generated recommendation interface (602), and an automated service request interface (603) originating from external systems or orchestration tools.
3) The cloud platform as claimed in claim 1, wherein the assessment engine (260) transforms the extracted metadata (270) into the structured metadata (280) using one or more transformation techniques selected from the group consisting of: an AI-based semantic enrichment technique (703) and a normalization (794) using predefined metadata templates.
4) The cloud platform (200) as claimed in claim 1, wherein the customizable instance type recommendations (330) are generated by the metric modelling engine (MME) (290) based on the quantifiable migration readiness metric (QMRRM) (300) and the refined scoring logic (320), the customizable instance type recommendations (330) comprising:
(a) a baseline configuration (340a) equivalent to the current resource profile (350) of the source environment (230);
(b) a moderately scaled configuration (340b) with enhanced compute or memory resources; and
(c) a highly scaled configuration (340c) optimized for performance-intensive workloads or anticipated future growth.
5) The cloud platform (200) as claimed in claim 1, wherein the cloud platform (200) comprises a deployment control module (370) configured in the memory (210) and processed by the processor (220), the deployment control module (370) being configured to:
(a) integrate a predictive scaling logic (380) into a migration pipeline;
(b) simulate future resource utilization scenarios (390) based on historical trends and projected workloads; and
(c) generate scaled configurations (395) that anticipate growth, seasonal spikes, or performance bottlenecks.
6) The cloud platform (200) as claimed in claim 1, wherein the interface (240), the assessment engine (260), and the metric modelling engine (MME) (290) are SaaS applications (800).
7) A method (1000) for migration assessment using the cloud platform (200), the cloud platform (200) being configured in a network-enabled computing device (205) with one or more memories (210) and one or more processors (220), the method comprising:
configuring an interface (240), an assessment engine (260), and a metric modelling engine (MME) (290) in the memory (210);
connecting the cloud platform (200) operationally with one or more source environments (230);
receiving input (250) for initiating a cloud migration assessment (A1) through the interface (240);
extracting metadata (270) from the source environments (230) and transforming the extracted metadata (270) into a structured metadata (280) in real time by the assessment engine (260);
analysing the structured metadata (280), and generating a quantifiable migration readiness metric (QMRRM) (300) based on parameters (310) of the source environments (230) using a refined scoring logic (320) by the MME (290); and
generating customizable instance type recommendations (330) comprising one or more configuration options (340) with at least one type of resource profiles (350) of the source environments (230) based on user-defined parameters (360) received via the interface (240), and the refined scoring logic (320) by the MME (290).
Dated this 28th day of September, 2025
BALIP AMIT ABASAHEB [IN/PA-5184]
| # | Name | Date |
|---|---|---|
| 1 | 202541093101-STATEMENT OF UNDERTAKING (FORM 3) [28-09-2025(online)].pdf | 2025-09-28 |
| 2 | 202541093101-REQUEST FOR EXAMINATION (FORM-18) [28-09-2025(online)].pdf | 2025-09-28 |
| 3 | 202541093101-REQUEST FOR EARLY PUBLICATION(FORM-9) [28-09-2025(online)].pdf | 2025-09-28 |
| 4 | 202541093101-POWER OF AUTHORITY [28-09-2025(online)].pdf | 2025-09-28 |
| 5 | 202541093101-FORM-9 [28-09-2025(online)].pdf | 2025-09-28 |
| 6 | 202541093101-FORM 18 [28-09-2025(online)].pdf | 2025-09-28 |
| 7 | 202541093101-FORM 1 [28-09-2025(online)].pdf | 2025-09-28 |
| 8 | 202541093101-DRAWINGS [28-09-2025(online)].pdf | 2025-09-28 |
| 9 | 202541093101-DECLARATION OF INVENTORSHIP (FORM 5) [28-09-2025(online)].pdf | 2025-09-28 |
| 10 | 202541093101-COMPLETE SPECIFICATION [28-09-2025(online)].pdf | 2025-09-28 |