Method And System For Automated Provisioning Of Resources In A Cloud Computing Environment

Abstract: A method for automated provisioning of resources in a cloud computing environment dynamically during run time, comprising the steps of receiving a trigger to alter the allocated resources, determining a scalability plan and/or bursting plan, scaling resources up or down based on one or more scaling parameters, providing access to one or more target clouds based on one or more cloud parameters, determining the values of one or more transfer variables required, and transferring at least a part of the service onto the target clouds using the transfer variables.


Patent Information

Application #
Filing Date
11 August 2011
Publication Number
36/2011
Publication Type
INA
Invention Field
COMPUTER SCIENCE
Status
Email
Parent Application

Applicants

HCL Technologies Ltd.
50-53 Greams Road  Chennai - 600006  Tamil Nadu  India

Inventors

1. Subha S.
HCL Technologies Ltd  4 Canal Rd  Tidel Park  Taramani  Chennai-600113 India
2. Ashok Kumar R.
HCL Technologies Ltd  4 Canal Rd  Tidel Park  Taramani  Chennai-600113 India

Specification

METHOD AND SYSTEM FOR AUTOMATED PROVISIONING OF RESOURCES IN
A CLOUD COMPUTING ENVIRONMENT
FIELD
The present disclosure generally relates to the field of cloud computing. More particularly, it relates to a method and system for automatically provisioning resources in a cloud computing environment at run time. It is applicable to various scenarios and industries such as e-commerce, internet banking and other online services.
BACKGROUND
When cloud computing is adopted for e-commerce and retail types of workload, the unpredictability of the incoming user requests for the service makes the infrastructure provisioning process in the cloud more complex than it should be. Since cloud computing is an elastic framework where resources are allocated on demand and charged per usage, the challenge is to ensure that a service scales on infrastructure automatically, without manual intervention.
Popular cloud providers such as Google and Microsoft mandate that any allocation of infrastructure be done at the time of configuring or planning for a service, with adequate backup allocated for scaling through capacity management. While this is a safe approach, it requires data in hand at the time of planning to have a fool-proof cloud setup. Proper capacity planning and management is needed, and the intervention of manual administration is also not ruled out.

This is a major hurdle for hosting consumer services on the cloud, where unexpected spikes in workload and several days of inactivity are not uncommon. To keep costs optimal and still ensure that service SLAs are met, automated cloud bursting to handle spikes is required. This essentially requires that incoming traffic is closely monitored and that the backend cloud resources are automatically scaled up beyond a threshold, either from the same cloud pool or from a partner cloud.
There are various approaches that exist to solve the aforementioned issue:
- Hybrid Cloud Enablement: This approach requires upfront planning and deployment. Enterprises make use of the infinite capacity of public clouds in handling unexpected loads. Businesses either reserve infrastructure upfront or keep a subscription and consume on demand.
- Migration of Workload to Public Clouds: This approach can be implemented at the time of demand or even before. However, this is very time consuming and inefficient, because migrating a workload from one cloud to another requires other back-end processing such as image conversion, snapshots and migration of the complete workload to a target cloud.
In both the above-described conventional techniques, performance-based cloud bursting offering instant provisioning to a partner cloud in a secure, cost-effective manner is not available.
EP2228721 titled 'System and Method for Unified Cloud Management' describes a method and system for managing workloads in a cloud computing environment comprising cloud services providers (202). In one embodiment, the method comprises, for each of the cloud services providers, monitoring (200) a situation of the cloud services provider to obtain situation information for the cloud services provider, evaluating (204) the obtained situation information and then deploying (240) a workload to a selected one of the cloud services providers based at least in part on results of the evaluating. It provides a management framework to monitor a host of cloud service providers and deploy workloads based on the suitability of the workload. However, this is a workload orchestration framework with no live migrations or conversions. It also does not migrate workloads based on fluctuating loads.
US20100235355 titled 'System and Method for Unified Cloud Management' describes a method and system for managing workloads in a cloud computing environment comprising cloud service providers. In one embodiment, the method comprises, for each of the cloud services providers, monitoring a situation of the cloud services provider to obtain situation information for the cloud services provider, evaluating the obtained situation information and then deploying a workload to a selected cloud services provider based at least in part on the results of the evaluation. However, it decides the best possible service provider for a workload based on historic data and cost, and does not provide for dynamic allocation as is desired.
Thus, there is a requirement for an alternate approach, as the present technologies have various disadvantages and do not provide what is desired in the present scenario. There is a need for a technique that can handle sudden bouts of spikes and make live migrations at run time. It is also necessary to internally embed cloud format conversions and to migrate from one cloud to another. It is desired that migrations are comprehensive and include templates as well as software and hardware environments, not just applications.

SUMMARY
In order to obviate the above drawbacks, the instant invention provides a method and system for automated provisioning of resources in a cloud computing environment dynamically during run time.
It is an aim of the present disclosure to sense or receive and understand a trigger indicating a requirement or request to alter the allocated resources.
It is also an aim of the present disclosure to sense or monitor the conditions for a trigger and accordingly activate a trigger.
It is another aim of the present disclosure to monitor one or more scaling parameters.
It is yet another aim of the present disclosure to analyze one or more cloud parameters.
It is further an aim of the present disclosure to make live migrations at run time.
It is also an aim of the present disclosure to transfer at least a part of the service back to the source cloud if space is available.
To achieve the aforesaid and other objectives related to efficient provisioning of resources, the instant invention provides a method for automated provisioning of resources in a cloud computing environment dynamically during run time, comprising the steps of receiving a trigger to alter the allocated resources, determining a scalability plan and/or bursting plan, scaling resources up or down based on one or more scaling parameters, providing access to one or more target clouds based on one or more cloud parameters, determining the values of one or more transfer variables required, and transferring at least a part of the service onto the target clouds using the transfer variables.
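As an illustrative sketch only, the claimed steps could be strung together as follows; every function name, data structure and value here is hypothetical and not part of the specification:

```python
# Hypothetical end-to-end sketch of the claimed method; every name and
# data structure below is illustrative, not taken from the specification.

def provision(trigger, clouds):
    # Step (b): determine a scalability and/or bursting plan from the trigger.
    plan = "burst" if trigger["load"] > trigger["threshold"] else "scale"
    # Step (c): scale up or down based on the scaling parameters.
    direction = "up" if trigger["load"] > trigger["threshold"] else "down"
    # Step (d): grant access to the target cloud with the best spot price.
    target = min(clouds, key=lambda c: c["spot_price"])
    # Step (e): determine the transfer variables the migration needs.
    variables = {"components": ["app"], "image_format": target["image_format"]}
    # Step (f): transfer at least part of the service to the target cloud.
    return {"plan": plan, "direction": direction,
            "target": target["name"], "variables": variables}

# Step (a): a trigger to alter the allocated resources arrives.
result = provision(
    {"load": 95, "threshold": 80},
    [{"name": "cloudA", "spot_price": 0.12, "image_format": "qcow2"},
     {"name": "cloudB", "spot_price": 0.09, "image_format": "vmdk"}])
```

With the load above its threshold, the sketch chooses a burst plan, scales up, and targets the cheaper cloud.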
It also provides a system for automated provisioning of resources in a cloud computing environment dynamically during run time, comprising a trigger manager configured to receive a trigger to alter the allocated resources, a decision maker coupled to the trigger manager and configured to determine a scalability plan and/or bursting plan and to scale resources up or down based on one or more scaling parameters, a burster coupled to the decision maker and configured to provide access to one or more target clouds based on one or more cloud parameters, and a migration manager coupled to the burster and configured to determine the values of one or more transfer variables required and to transfer at least a part of the service onto the target clouds using the transfer variables.
The present disclosure is for automated cloud bursting of designated workload to a pre-defined partner cloud, or from a select list of clouds offering the best spot price, thus eliminating service downtime due to paucity of resources. It offers automated provisioning in delivering Infrastructure-as-a-Service (IaaS), thereby eliminating manual monitoring and provisioning.
The advantage offered is that all bottlenecks, such as service downtime in overall cloud adoption, are removed and real-time response in systems allocation is provided. Other advantages include reduction of provisioning costs, since no upfront allocation of hardware resources is required. Further, there is reduced hardware spending due to leverage of third-party low-cost alternatives.

It also achieves certain business objectives such as reduction of service downtime due to infrastructure shortage, elimination of manual labour in server provisioning and administration, and quick provisioning across a range of cloud providers without any manual labour.
BRIEF DESCRIPTION OF THE DRAWINGS
The following is a brief description of the preferred embodiments with reference to the accompanying drawings. It is to be understood that the features illustrated in and described with reference to the drawings are not to be construed as limiting of the scope of the invention. In the accompanying drawings:
- Figure 1 depicts the primary embodiment of the method followed in this disclosure
- Figure 2 depicts the primary embodiment of a trigger activation procedure
- Figure 3 depicts the primary embodiment of the system implemented as per the technology in this disclosure
DETAILED DESCRIPTION OF THE DRAWINGS
A method and system for automated provisioning of resources in a cloud computing environment dynamically during run time are described. The method and system are not intended to be restricted to any particular form or arrangement, or any specific embodiment, or any specific use, disclosed herein, since the same may be modified in various particulars or relations without departing from the spirit or scope of the claimed invention hereinabove shown and described. The device or method shown is intended only for illustration and disclosure of an operative embodiment, and not to show all of the various forms or modifications in which this invention might be embodied or operated.
The present disclosure introduces an automated secure cloud bursting of a complete workload, or of components as per the user's choice, onto a partner cloud. This method also includes understanding the image format of the target clouds and automatically converting images to the format of the target cloud.
The entire bursting can be performed in two ways:
- Instant bursting to a cloud partner who provides the best spot costing at that time for the required infrastructure
- Instant bursting to a fallback infrastructure within the firewall reserved for the purpose of unexpected spikes from any service
In a preferred embodiment, a service comprises one or more running applications.
Figure 1 depicts the primary embodiment of the method followed in this disclosure. The procedure begins at step 101 in which an indication is received to alter the allocated resources. This indication is received in the form of a trigger. In a preferred embodiment, this trigger is activated based on at least one prescheduled activity or scheduled based on known future events. Alternatively this trigger may be activated by at least one user action.
In yet another embodiment, the trigger is activated in real-time dynamically. In such a case, as is depicted in Figure 2, one or more parameters are selected in step 201 which shall be the deterministic parameters for calculating the load of a service running on a source cloud. These parameters are then monitored in step 202 and, if at least one parameter crosses a certain threshold, a trigger is activated in step 203. The parameters are selected based on the preconfigured scalability plan. Such triggers are referred to as auto triggers. As an example, if the parameter selected is 'user traffic', then at run time, on reaching the configured maximum threshold, the scale-up trigger is generated.
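The auto-trigger procedure of Figure 2 amounts to a threshold check over the monitored parameters. A minimal sketch, with all parameter names and limits hypothetical:

```python
# Steps 201-203 as a threshold check; parameter names and limits are
# hypothetical examples, not values from the specification.

def check_triggers(samples, thresholds):
    """Emit a scale-up trigger for every monitored parameter whose
    current sample exceeds its configured maximum threshold."""
    triggers = []
    for name, value in samples.items():
        limit = thresholds.get(name)
        if limit is not None and value > limit:
            triggers.append({"parameter": name, "value": value,
                             "action": "scale_up"})
    return triggers

# 'user traffic' crosses its configured maximum, so a scale-up trigger fires;
# CPU utilization stays under its limit and fires nothing.
fired = check_triggers({"user_traffic": 1200, "cpu_utilization": 0.55},
                       {"user_traffic": 1000, "cpu_utilization": 0.90})
```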
Next, in step 102, a scalability and/or bursting plan is determined. Based on these plans and also on at least one scaling parameter, in step 103 the active resources are scaled up or down. In a preferred embodiment, these scaling parameters are one or more of current resource allocation, utilization, cloud capacity, number of users logging in or downloading, the network traffic or any other custom parameter.
In step 104, based on at least one cloud parameter the appropriate target cloud or clouds are given access. The target cloud may be identified based on several parameters. In a preferred embodiment, the cloud which provides the best spot costing at that time for the required infrastructure is identified as the target cloud. In another embodiment, the fallback infrastructure within the firewall reserved for the purpose of unexpected spikes from any service is identified as the target cloud.
In a preferred embodiment, these cloud parameters are one or more of cloud provider, region, security configurations, network configurations and storage requirements.
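Target-cloud selection on such parameters can be pictured as a filter followed by a spot-price comparison. This is a hypothetical sketch; the field names and catalog are assumptions, not data from the specification:

```python
# Hypothetical target-cloud selection: filter candidates on cloud
# parameters such as region and storage, then prefer the best spot price.

def select_target(clouds, required_region, min_storage_gb):
    eligible = [c for c in clouds
                if c["region"] == required_region
                and c["storage_gb"] >= min_storage_gb]
    if not eligible:
        # No external candidate: fall back to the reserved in-firewall pool.
        return None
    return min(eligible, key=lambda c: c["spot_price"])

catalog = [
    {"name": "cloudA", "region": "ap-south", "storage_gb": 500, "spot_price": 0.12},
    {"name": "cloudB", "region": "ap-south", "storage_gb": 200, "spot_price": 0.08},
    {"name": "cloudC", "region": "eu-west", "storage_gb": 800, "spot_price": 0.05},
]
```

Returning `None` when no candidate qualifies mirrors the fallback to reserved infrastructure described above.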
Next, in step 105, one or more variables required for the transfer are determined and lastly, in step 106, at least part of the service is transferred onto the target clouds using the transfer variables. However, before the migration is performed, in a preferred embodiment it is checked whether the image formats of both clouds match. For this, the image format of the target cloud is retrieved and, if it does not match that of the source cloud, the format of the present state of the service is converted to the image format of the target cloud. Preferably, the runtime context of the workload of the cloud is also identified.
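The pre-migration image-format check reduces to: retrieve the target format, compare it with the source format, and convert only on mismatch. A sketch under the assumption that a converter exists per format pair (the converter table and format names are hypothetical):

```python
# Hypothetical image-format check before migration: convert the service
# image only when the source and target cloud formats differ.

CONVERTERS = {
    ("vmdk", "qcow2"): "vmdk-to-qcow2",
    ("qcow2", "vmdk"): "qcow2-to-vmdk",
}

def prepare_image(source_format, target_format):
    if source_format == target_format:
        return "no conversion needed"
    converter = CONVERTERS.get((source_format, target_format))
    if converter is None:
        raise ValueError(f"no converter for {source_format} -> {target_format}")
    return f"converted via {converter}"
```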

In a preferred embodiment, at least one application or at least a partial component of the application is transferred. In another embodiment, the method also comprises transferring at least a part of the service back to the source cloud if space is available. In yet another embodiment, only the computation is transferred to a selected target cloud and the data is retained at the source cloud.
Similarly, Figure 3 depicts the primary embodiment of the system implemented as per the technology in this disclosure. The input is received at the trigger manager 201, which is configured to raise or receive a trigger to alter the allocated resources. The trigger may have been produced via any of the techniques described above. It is configured to manage the triggers for the scaling up and down events. It monitors different parameters of the application workload, such as quality of service (QoS) requirements including requests per second, transactions per second, etc. In a preferred embodiment, once there is a deviation in the monitored parameters, the trigger manager 201 is configured to raise a trigger.
The trigger manager 201 provides the trigger to the decision maker 202, which is configured to determine a scalability plan and/or bursting plan and to scale resources up or down based on one or more scaling parameters. In a preferred embodiment, these scaling parameters are one or more of current resource allocation, utilization, cloud capacity, number of users logging in or downloading, the network traffic or any other custom parameter. In another preferred embodiment, it is configured to constantly receive complex trigger events and process them to conclude on an action. It is also configured to decide the action for the trigger event. It computes the action to be taken for the workload or components, which can be scaling up or down the workload infrastructure, network, policies, environment and/or the context data of the workload to meet the quality of service defined in the policies. In another preferred embodiment, it is configured to compute the action for the trigger event by performing analysis over historic data of any parameter, such as QoS parameters, as per the scalability and/or bursting plans.
The decision maker 202 is further connected to the burster 203, which is configured to provide access to at least one target cloud based on at least one cloud parameter. It is the main component which dynamically migrates workload based on the fluctuating load indicated by the trigger. It is configured to decide bursting based on cloud capacity and the pre-configured bursting plan. It is also configured to identify the image in the target cloud and to identify the runtime context of the workload, which includes components with server templates, packages, scripts and other database parameters of the workload.
In a preferred embodiment, the burster 203 is configured to constantly analyze and match the workload context details. The context details may include a variety of information such as the cloud on which the workload or the components of the workload are deployed, the workload deployment strategy, the nature of the workload, associated virtual images, storage requirements, etc.
Further, based on the context details, if the target cloud does not have an appropriate format, the burster 203 is configured to perform inherent format conversions for virtual images. In a preferred embodiment, it is also configured to migrate the entire snapshots/storage, network, firewall permissions for the workload, etc. Preferably, it is configured to migrate the workload based on the components of the workload identified from the workload context. It may preferably be configured to handle the application and database separately. The application burster is configured to identify and migrate the application state, policies, its run-time platform, libraries, configuration details, etc. to the target cloud. The database burster is configured to identify the configuration details with respect to the persistent storage, snapshots and platform of the storage, and migrate them to the target cloud.

The migration manager 204 is coupled to the burster 203 and is configured to determine the values of at least one transfer variable required and to transfer at least a part of the service onto the target clouds using the transfer variables. In a preferred embodiment, the migration manager 204 is the abstracted layer over cloud parameters and is for provisioning or de-provisioning compute, storage, network, etc. Preferably, it is configured to switch between a variety of clouds as defined by the burster 203 during workload migration. The burster 203 is preferably configured to use the migration manager 204, based on the context information and the decision maker's action, to migrate the workload.
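How a trigger might flow through the four components just described can be sketched with thin stand-in classes; the wiring is illustrative only, every name is hypothetical, and this is not the patented implementation:

```python
# Illustrative wiring of trigger manager -> decision maker -> burster ->
# migration manager; thin stand-ins, not the patented implementation.

class TriggerManager:
    def raise_trigger(self, load, threshold):
        # Raise a trigger only when the monitored load exceeds its threshold.
        return {"action": "scale_up"} if load > threshold else None

class DecisionMaker:
    def decide(self, trigger):
        # Turn a trigger into a bursting decision (or no action).
        return {"plan": "burst", "action": trigger["action"]} if trigger else None

class Burster:
    def burst(self, decision, clouds):
        # Pick the cloud with the best spot price when bursting is decided.
        return min(clouds, key=lambda c: c["spot_price"]) if decision else None

class MigrationManager:
    def migrate(self, target, workload):
        # Move the workload to the selected target cloud.
        return {"target": target["name"], "moved": workload}

def handle(load, threshold, clouds, workload):
    trigger = TriggerManager().raise_trigger(load, threshold)
    decision = DecisionMaker().decide(trigger)
    target = Burster().burst(decision, clouds)
    return MigrationManager().migrate(target, workload) if target else None
```

When the load stays under its threshold, no trigger is raised and nothing migrates; otherwise the workload lands on the cheapest eligible cloud.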
The embodiments described above and illustrated in the figures are presented by way of example only and are not intended as a limitation upon the concepts and principles of the present invention. As such, it will be appreciated by one having ordinary skill in the art that various changes in the elements and their configuration and arrangement are possible without departing from the spirit and scope of the present invention as set forth in the appended claims.
It will readily be appreciated by those skilled in the art that the present invention is not limited to the specific embodiments shown herein. Thus variations may be made within the scope and spirit of the accompanying claims without sacrificing the principal advantages of the invention.

We claim:
1. A method for automated provisioning of resources in a cloud computing
environment dynamically during run time, comprising the steps of:
a. receiving a trigger to alter the allocated resources;
b. determining a scalability plan and/or bursting plan;
c. scaling resources up or down based on one or more scaling parameters;
d. providing access to one or more target clouds based on one or more cloud
parameters;
e. determining the values of one or more transfer variables required; and
f. transferring at least a part of the service onto the target clouds using the
transfer variables.
2. A method as claimed in claim 1, wherein the trigger is activated by the following
steps:
a. determining one or more parameters based on which the load of a service
running on a source cloud is calculated;
b. monitoring the one or more parameters; and
c. activating a trigger if at least one parameter crosses a certain threshold.
3. A method as claimed in claim 1, wherein the trigger is activated based on one or
more prescheduled activities

4. A method as claimed in claim 1, wherein the trigger is activated by one or more user actions
5. A method as claimed in claim 1, wherein the trigger is activated in real-time dynamically
6. A method as claimed in claim 1, wherein the scaling parameters comprise one or more of current resource allocation, utilization, cloud capacity, number of users logging in or downloading, the network traffic or any other custom parameter
7. A method as claimed in claim 1, wherein the cloud parameters comprise one or more of cloud provider, region, security configurations, network configurations and storage requirements
8. A method as claimed in claim 1, wherein the transfer variables comprise one or more of workload components, workload deployment strategy, nature of the workload, associated virtual images and storage requirements
9. A method as claimed in claim 1, comprising the steps of:
a. retrieving the image format of the target clouds; and
b. converting the format of the present state of the service to the image
format of the target cloud.
10. A method as claimed in claim 9, comprising the step of:
a. identifying the runtime context of the workload of the cloud;
11. A method as claimed in claim 1, wherein at least one target cloud is a cloud which provides the best spot costing at that time for the required infrastructure

12. A method as claimed in claim 1, wherein at least one target cloud is a fallback infrastructure within the firewall reserved for the purpose of unexpected spikes from any service
13. A method as claimed in claim 1, comprising the step of transferring at least a part of the service back to the source cloud if space is available
14. A method as claimed in claim 1, wherein a service comprises one or more running applications
15. A method as claimed in claim 14, wherein at least one application is transferred to a target cloud
16. A method as claimed in claim 14, wherein at least a partial component of an application is transferred to a target cloud
17. A method as claimed in claim 1, wherein only the computation is transferred to a selected target cloud and the data is retained at the source cloud
18. A system for automated provisioning of resources in a cloud computing environment dynamically during run time, comprising:
a. trigger manager configured to receive a trigger to alter the allocated
resources;
b. decision maker coupled to the trigger manager and configured to
determine a scalability plan and/or bursting plan and to scale resources up
or down based on one or more scaling parameters;
c. burster coupled to the decision maker and configured to provide access to
one or more target clouds based on one or more cloud parameters; and

d. migration manager coupled to the burster and configured to determine the values of one or more transfer variables required and to transfer at least a part of the service onto the target clouds using the transfer variables.
19. A system as claimed in claim 18, wherein the trigger is an automatic trigger and is activated when one or more predetermined parameters cross a configured threshold value
20. A system as claimed in claim 18, wherein the trigger is activated based on one or more prescheduled activities
21. A system as claimed in claim 18, wherein the trigger is activated by one or more user actions
22. A system as claimed in claim 18, wherein the trigger is activated in real-time dynamically
23. A system as claimed in claim 18, wherein the scaling parameters comprise one or more of current resource allocation, utilization, cloud capacity, number of users logging in or downloading, the network traffic or any other custom parameter
24.A system as claimed in claim 18, wherein the cloud parameters comprise one or more of cloud provider, region, security configurations, network configurations and storage requirements
25. A system as claimed in claim 18, wherein the transfer variables comprise one or more of workload components, workload deployment strategy, nature of the workload, associated virtual images and storage requirements

26. A system as claimed in claim 18, wherein the burster is configured to retrieve the
image format of the target clouds and convert the format of the present state of
the service to the image format of the target cloud
27. A system as claimed in claim 18, wherein the decision maker is configured to
identify the runtime context of the workload of the cloud
28. A system as claimed in claim 18, wherein at least one target cloud is a cloud which provides the best spot costing at that time for the required infrastructure
29. A system as claimed in claim 18, wherein at least one target cloud is a fallback infrastructure within the firewall reserved for the purpose of unexpected spikes from any service
30. A system as claimed in claim 18, wherein the migration manager is configured to transfer at least a part of the service back to the source cloud if space is available
31. A system as claimed in claim 18, wherein a service comprises one or more running applications
32. A system as claimed in claim 31, wherein at least one application is transferred to a target cloud
33. A system as claimed in claim 31, wherein at least a partial component of an application is transferred to a target cloud
34. A system as claimed in claim 18, wherein only the computation is transferred to a selected target cloud and the data is retained at the source cloud

Documents

Application Documents

# Name Date
1 2752-CHE-2011 FORM-9 23-08-2011.pdf 2011-08-23
2 2752-CHE-2011 FORM-18 23-08-2011.pdf 2011-08-23
3 Form-1.pdf 2011-09-04
4 Form-3.pdf 2011-09-04
5 Form-5.pdf 2011-09-04
6 2752-CHE-2011 POWER OF ATTORNEY 15-12-2011.pdf 2011-12-15
7 2752-CHE-2011 CORRESPONDENCE OTHERS 15-12-2011.pdf 2011-12-15
8 2752-CHE-2011 FORM-1 31-07-2012.pdf 2012-07-31
9 2752-CHE-2011 CORRESPONDENCE OTHERS 31-07-2012.pdf 2012-07-31
10 2752-CHE-2011-FER.pdf 2018-01-10
11 2752-CHE-2011-ABSTRACT [30-05-2018(online)].pdf 2018-05-30
12 2752-CHE-2011-Amendment Of Application Before Grant - Form 13 [30-05-2018(online)].pdf 2018-05-30
13 2752-CHE-2011-AMMENDED DOCUMENTS [30-05-2018(online)].pdf 2018-05-30
14 2752-CHE-2011-CLAIMS [30-05-2018(online)].pdf 2018-05-30
15 2752-CHE-2011-COMPLETE SPECIFICATION [30-05-2018(online)].pdf 2018-05-30
16 2752-CHE-2011-DRAWING [30-05-2018(online)].pdf 2018-05-30
17 2752-CHE-2011-FER_SER_REPLY [30-05-2018(online)].pdf 2018-05-30
18 2752-CHE-2011-MARKED COPIES OF AMENDEMENTS [30-05-2018(online)].pdf 2018-05-30
19 2752-CHE-2011-OTHERS [30-05-2018(online)].pdf 2018-05-30
20 2752-CHE-2011-PETITION UNDER RULE 137 [30-05-2018(online)].pdf 2018-05-30
21 2752-CHE-2011-US(14)-HearingNotice-(HearingDate-13-08-2020).pdf 2020-07-15
22 2752-CHE-2011-Correspondence to notify the Controller [13-08-2020(online)].pdf 2020-08-13

Search Strategy

1 patseer_14-09-2017.pdf