Method And Apparatus For Transparent Cloud Computing With A Virtualized Network Infrastructure

Abstract: A capability is provided for providing transparent cloud computing with a virtualized network infrastructure. A method for enabling use of a resource of a data center as an extension of a customer network includes receiving, at a forwarding element (FE), a packet intended for a virtual machine hosted at an edge domain of the data center, determining a VLAN ID of the VLAN for the customer network in the edge domain, updating the packet to include the VLAN ID of the VLAN for the customer network in the edge domain, and propagating the updated packet from the FE toward the virtual machine. The edge domain supports a plurality of VLANs for a respective plurality of customer networks. The packet includes an identifier of the customer network and a MAC address of the virtual machine. The VLAN ID of the VLAN for the customer network in the edge domain is determined using the identifier of the customer network and the MAC address of the virtual machine. The FE may be associated with the edge domain at which the virtual machine is hosted, an edge domain of the data center that is different than the edge domain at which the virtual machine is hosted, or the customer network. Depending on the location of the FE at which the packet is received, additional processing may be provided as needed.

Patent Information

Application #
3491/CHENP/2012
Filing Date
18 April 2012
Publication Number
31/2013
Publication Type
INA
Invention Field
COMMUNICATION
Status
Parent Application

Applicants

ALCATEL LUCENT
3 Avenue Octave Gréard, F-75007 Paris, France

Inventors

1. HAO, Fang
217 Hidden Lake Drive, Morganville, New Jersey 07751, USA
2. LAKSHMAN, T. V.
115 Laredo Drive, Morganville, New Jersey 07751, USA
3. MUKHERJEE, Sarit
One Knob Hill Road, Marlboro, New Jersey 07746, USA
4. SONG, Haoyu
137 Woodbury Road, Edison, NJ 08820, USA

Specification

METHOD AND APPARATUS FOR TRANSPARENT CLOUD COMPUTING WITH A VIRTUALIZED NETWORK INFRASTRUCTURE

FIELD OF THE INVENTION

The invention relates generally to the field of cloud computing and, more specifically but not exclusively, to providing transparent cloud computing for customer networks.

BACKGROUND

Cloud computing is a paradigm of computing in which cloud resources of a cloud service provider may be utilized by cloud clients, which may include individual users and enterprises. The cloud resources of a cloud service provider may include cloud services, cloud applications, cloud platforms, cloud infrastructure, and the like, as well as various combinations thereof. For example, existing cloud computing solutions include Amazon EC2, Microsoft Azure, and Google AppEngine, among others.

The cloud computing model is reshaping the landscape of Internet-provided services, especially given its beneficial nature for individual users and large enterprises alike. For example, for a home user whose home network requires an additional server to run a particular application, cloud computing is an attractive option, since the home user does not have to commit to the cost of purchasing an additional server; rather, the home user merely rents a virtual server from the cloud service provider. Similarly, for an enterprise whose existing network infrastructure periodically requires additional resources to accommodate variations in resource demand, cloud computing is an attractive option, since the enterprise does not have to commit to hardware purchase costs; rather, the enterprise need only pay the cloud service provider for actual usage of cloud resources.

Disadvantageously, however, the existing cloud computing model lacks a mechanism to effectively integrate resources of the cloud service provider with existing resources of the customer networks. Rather, in the existing cloud computing model, there is a clear boundary demarcating a customer network from the cloud-based resources used by that customer network. This boundary is maintained primarily due to the fact that the devices within the cloud and the devices in the customer networks are in different IP address domains and, thus, there may be conflicts where the IP address domain employed by the cloud service provider overlaps with IP address domains of customer networks (especially where customer applications utilize a combination of resources of the customer network and resources of the cloud service provider). As such, while providing physically separate networks for different customers can ensure isolation of IP address domains, such a solution is highly inefficient and inflexible and, therefore, there is a need for a mechanism to effectively integrate the resources of a cloud service provider with existing resources of customer networks.

SUMMARY

Various deficiencies in the prior art are addressed by embodiments that support transparent cloud computing with a virtualized network infrastructure, which enables various resources of a data center to be used as extensions of customer networks. The data center includes a core domain and a plurality of edge domains. The edge domains host resources of the data center. The core domain facilitates communications within the data center. The edge domains interface with the core domain using forwarding elements and, similarly, the customer networks interface with the core domain using forwarding elements.
The forwarding elements are controlled using a central controller of the data center. The forwarding elements and the central controller support various capabilities for forwarding packets associated with the customer networks in a manner that enables customer networks to use resources of the data center in an efficient, secure, and transparent manner.

BRIEF DESCRIPTION OF THE DRAWINGS

The teachings herein can be readily understood by considering the following detailed description in conjunction with the accompanying drawings, in which:

FIG. 1 depicts a high-level block diagram of a communication system architecture;

FIG. 2 depicts one embodiment of a method for processing a packet at a source forwarding element, where the packet is intended for a virtual machine hosted in a destination edge domain having a destination forwarding element associated therewith;

FIG. 3 depicts one embodiment of a method for processing a packet at a forwarding element associated with an edge domain hosting a virtual machine for which the packet is intended, where an assumption is made that the forwarding element always determines the VLAN ID of the VLAN of the CN within the edge domain;

FIG. 4 depicts one embodiment of a method for processing a packet at a forwarding element associated with an edge domain hosting a virtual machine for which the packet is intended, where an assumption is not made that the forwarding element always determines the VLAN ID of the VLAN of the CN within the edge domain; and

FIG. 5 depicts a high-level block diagram of a general-purpose computer suitable for use in performing the functions described herein.

To facilitate understanding, identical reference numerals have been used, where possible, to designate identical elements that are common to the figures.

DETAILED DESCRIPTION OF THE INVENTION

An integrated Elastic Cloud Computing (iEC2) architecture is depicted and described herein. The iEC2 architecture enables cloud computing resources to be a transparent extension to existing customer infrastructure, thereby enabling customer applications to utilize both customer computing resources and cloud computing resources in a seamless and flexible manner without any modification to existing customer infrastructure. The iEC2 architecture enables cloud computing resources to be a transparent extension to existing customer infrastructure by using network virtualization to instantiate virtual extensions of customer networks within the cloud. The iEC2 architecture enables seamless integration of cloud computing resources with existing customer infrastructure without requiring any modification of the existing customer infrastructure. The iEC2 architecture enables such extended customer networks to grow and shrink with demand, thereby obviating the need for customers to deploy network infrastructure which may only be needed temporarily. The iEC2 architecture enables cloud computing resources to be a transparent extension to existing customer infrastructure in a highly scalable manner, such that cloud computing resources may be utilized by a very large number of customers (e.g., any type of customer, ranging from individual customers having home networks to large enterprise customers having enterprise networks, in contrast with existing cloud computing architectures, which can only accommodate a small number of customers). The iEC2 architecture also enables dynamic customization of the data center for different customers (e.g., dynamic customization of data forwarding, policy control, and like features).
The iEC2 architecture provides network virtualization using two types of entities: Forwarding Elements (FEs) and a Central Controller (CC). In general, the FEs perform packet handling functions (e.g., address mapping, policy checking and enforcement, packet forwarding, and the like), while the CC maintains and provides control information adapted for use by the FEs in performing the packet handling functions (e.g., configuration information, address mappings, policies, and the like). The FEs may be implemented as Ethernet switches having enhanced APIs that enable them to be controlled remotely by the CC. Unlike typical network virtualization solutions, the iEC2 architecture does not require deployment of specialized routers or switches across the entire data center network; rather, the FEs are deployed only at certain chosen points of the data center network in order to provide the virtualization functions, while conventional, off-the-shelf Ethernet switches can be used in most parts of the data center network.

The iEC2 architecture utilizes a hierarchical network structure within the data center network, in which multiple edge domains communicate via a central core domain. The edge domains each include physical hosts (for providing cloud computing resources within the edge domain) connected via one or more switches. The edge domains each provide a number of functions, such as resolving packet addresses (e.g., resolving the edge domain / layer two address to which a packet should be forwarded), isolating packets of different customers within the edge domain, determining forwarding of packets based on intended destination (e.g., forwarding locally within the edge domain, or toward the core domain for forwarding toward another edge domain), policy checking and enforcement, and the like. The edge domains each are connected to the core domain via one or more FEs (i.e., FEs are deployed as gateways between the edge domains and the core domain, while the CC is associated with the core domain for purposes of communicating with the FEs for providing configuration and control functions for the FEs). The customer networks may be treated as special instances of edge domains, in which case each customer network may access the data center network via one or more FEs (where such FEs are referred to as customer-network-facing FEs (CN-facing FEs) and see the customer network(s) as local LANs). The FEs may have processing devices associated therewith, such as firewalls, load balancers, and the like, for providing policy treatment of packets traversing the FEs. The core domain may be a flat layer-two network adapted for forwarding packets between edge domains.

In the iEC2 architecture, a customer is provided one or more virtual networks isolated from the virtual networks of other customers. The virtual network(s) of a customer may be provided in one edge domain or spread across multiple edge domains, with the underlying domain structure being hidden from the customer. In each edge domain, VLANs are utilized locally to isolate the different customers utilizing resources within that edge domain and, further, VLAN IDs are reused across edge domains, thereby providing a significant increase in the number of customers which may be supported by the data center network.
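Because VLAN-ID reuse across edge domains is central to this scalability argument, a short illustration may help. The following Python sketch is illustrative only; the class and identifiers are assumptions, not details from the specification:

```python
# A minimal sketch of per-edge-domain VLAN allocation: each edge domain
# hands out VLAN IDs from its own pool, so the same VLAN ID may denote
# different customers in different edge domains.

class EdgeDomainVlanAllocator:
    """Allocates VLAN IDs independently within a single edge domain."""

    def __init__(self, domain_id, max_vlans=4000):
        self.domain_id = domain_id
        self.free = list(range(2, 2 + max_vlans))  # locally scoped VLAN pool
        self.by_cnet = {}                          # cnet identifier -> VLAN ID

    def vlan_for(self, cnet_id):
        # Reuse an existing allocation; otherwise take the next free ID.
        if cnet_id not in self.by_cnet:
            self.by_cnet[cnet_id] = self.free.pop(0)
        return self.by_cnet[cnet_id]

# Two edge domains can hand the *same* VLAN ID to different customers,
# because VLAN tagging scope is limited to each edge domain:
ed1 = EdgeDomainVlanAllocator("ED-1")
ed2 = EdgeDomainVlanAllocator("ED-2")
assert ed1.vlan_for("cnet-A") == ed2.vlan_for("cnet-B") == 2
```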
When a customer network spans multiple edge domains, the customer network may be assigned different VLAN IDs in each of the multiple edge domains and, for packets forwarded across edge domains, such VLAN IDs are remapped by the gateway FEs between the edge domains and the core domain.

As described herein, the iEC2 architecture enables seamless integration of cloud computing resources with customer networks, thereby enabling customers to utilize cloud computing resources without modification of the customer networks. The seamless integration of cloud computing resources with customer networks may be achieved using an addressing scheme for logically isolating customer networks within the data center network. A description of one exemplary embodiment of such an addressing scheme follows.

The addressing scheme includes a mechanism for differentiating between different customers, by which each customer network is assigned a unique customer network identifier (referred to herein as a cnet identifier). In the case of a customer having a single VLAN, the cnet identifier also identifies the customer. In the case of a customer having multiple VLANs, use of a different cnet identifier for each VLAN ensures that the VLAN structure of the customer may be preserved (which may be desirable or even necessary, such as where different policy controls are applicable for different VLANs). In this manner, logically, a virtual machine can be identified using a combination of (cnet identifier, IP address). The combination of (cnet identifier, IP address) is mapped to a unique layer two MAC address for the virtual machine. In host virtualization platforms that support assignment of virtual MAC addresses to virtual machines, the unique layer two MAC address for the virtual machine directly identifies the virtual machine, even where other virtual machines may be hosted on the same physical server. In host virtualization platforms that do not support assignment of virtual MAC addresses to virtual machines, the layer two MAC address assigned to the virtual machine is the same as the MAC address assigned to the physical server and, thus, an additional identifier is required for purposes of identifying the virtual machine on the physical server. In this case, the additional identifier may be a pseudo MAC address generated for the virtual machine, which is used for virtual machine identification purposes but not for packet forwarding purposes.

The addressing scheme includes a mechanism for separating different customers within each edge domain. In addition to using layer two and layer three addresses, VLANs are used to separate different customers within each edge domain. In each edge domain, each customer is mapped to a different VLAN. If a customer has multiple VLANs that use the cloud computing service, each of the internal VLANs of the customer is mapped to a different VLAN in the edge domain. For each edge domain, VLAN configuration is performed at the FE(s) 120 associated with the edge domain, at the Ethernet switch(es) used for forwarding packets within the edge domain, and at the host hypervisors of the physical servers within the edge domain. The VLAN configurations in the edge domains are transparent to the virtual machines, such that applications that run on the virtual machines are not aware of the VLAN configurations.

The addressing scheme includes a mechanism for enabling edge domains to reuse VLAN IDs: VLAN tagging scope is limited to within the edge domains.
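To make the preceding addressing scheme concrete, here is a minimal sketch in which all names and values are illustrative assumptions rather than details from the specification:

```python
# A virtual machine is logically identified by (cnet identifier, IP
# address), which maps to a unique MAC address, while each (cnet
# identifier, edge domain) pair maps to a locally scoped VLAN ID that
# gateway FEs remap at edge-domain boundaries.

mac_by_cnet_ip = {
    ("cnet-A", "10.0.0.5"): "02:00:00:00:01:05",
    ("cnet-B", "10.0.0.5"): "02:00:00:00:02:05",  # same IP, other customer
}

vlan_by_cnet_domain = {
    ("cnet-A", "ED-1"): 100,
    ("cnet-A", "ED-2"): 300,  # same customer, per-domain VLAN ID
}

def remap_vlan(cnet_id, from_domain, to_domain):
    """The VLAN remapping a gateway FE applies to a cross-domain packet."""
    return (vlan_by_cnet_domain[(cnet_id, from_domain)],
            vlan_by_cnet_domain[(cnet_id, to_domain)])

assert remap_vlan("cnet-A", "ED-1", "ED-2") == (100, 300)
```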
By enabling edge domains to reuse VLAN IDs, the limit on the number of customers that may be accommodated by the data center network is eliminated. While this design implicitly imposes a limit on the number of virtual machines that can be supported in each edge domain (e.g., in the worst case, when each virtual machine in the edge domain belongs to a different customer, there can be a maximum of 4000 virtual machines in each edge domain; although, in general, edge domains are likely to be able to support many more virtual machines, since many customers are likely to use multiple virtual machines), there is no limit on the number of edge domains that may be supported within the data center and, thus, there is no limit on the number of customers that may be supported.

The above-described addressing scheme enables seamless integration of cloud computing resources with customer networks while at the same time isolating customer networks from one another, and it preserves the respective IP address spaces of the customers. This ensures that servers of different customers can be differentiated, even when they utilize the same IP address, without requiring use of physically separate networks for each customer within the data center. The above-described addressing scheme may be modified while still providing an iEC2 architecture that enables seamless integration of cloud computing resources with customer networks without requiring any modification of the customer networks. The above-described addressing scheme may be better understood by way of reference to the exemplary iEC2 architecture depicted and described with respect to FIG. 1. The foregoing description of the iEC2 architecture is merely a general description provided for purposes of introducing the iEC2 architecture and, thus, the iEC2 architecture is not limited by this description. A more detailed description of the iEC2 architecture, and its various associated embodiments, follows.

FIG. 1 depicts a high-level block diagram of a communication system architecture. As depicted in FIG. 1, communication system architecture 100 is an exemplary iEC2 architecture including a data center (DC) 101 providing cloud computing resources for a plurality of customer networks (CNs) 102A-102C (collectively, CNs 102).

The data center 101 includes cloud computing resources which may be made available for use by customers (e.g., where customers may lease or purchase cloud computing resources). The cloud computing resources are provided to customers such that they are considered to be extensions of the customer networks of those customers. The cloud computing resources may include any types of computing resources which may be made available to customers, such as processing capabilities, memory, and the like, as well as various combinations thereof.

The data center 101 includes network infrastructure adapted for use in facilitating use of cloud computing resources by customers. The DC 101 includes a plurality of edge domains (EDs) 110-1 through 110-3 (collectively, EDs 110), a plurality of forwarding elements (FEs) 120-1 through 120-4 (collectively, FEs 120), and a core domain (CD) 130. The DC 101 also includes a central controller (CC) 140 and, optionally, a management system (MS) 150. The various elements of DC 101 cooperate to perform various functions for enabling customers to utilize cloud computing resources available from DC 101. The CNs 102 may utilize cloud computing resources of DC 101 without modification of the CNs 102.
The CNs 102 may include any types of customer networks, e.g., from home networks of individual customers to enterprise networks of large enterprise customers. In other words, CNs 102 may be any customer networks which may make use of cloud computing resources of DC 101. The CNs 102 access DC 101 using one or more of the FEs 120 of DC 101. A description of the various components of DC 101 which enable CNs 102 to access cloud computing resources follows.

The EDs 110 enable a significant increase in the number of CNs 102 which may be supported by DC 101. As described herein, each ED 110 is capable of supporting a set of VLANs for customers independent of each of the other EDs 110. In this manner, rather than the DC 101 as a whole being constrained by the number of VLANs that may be supported (and, thus, the number of CNs 102 which may be supported), only each of the individual EDs 110 is so constrained; from the perspective of DC 101, this per-ED limitation on the number of CNs 102 which may be supported is artificial, in that any number of EDs 110 may be supported by DC 101.

The EDs 110 each include physical servers 112 for use by customers associated with CNs 102 (illustratively, ED 110-1 includes two physical servers 112-1-1 and 112-1-2, ED 110-2 includes two physical servers 112-2-1 and 112-2-2, and ED 110-3 includes two physical servers 112-3-1 and 112-3-2, where the physical servers are referred to collectively as physical servers 112). The physical servers 112 each host cloud computing resources which may be utilized by CNs 102. The physical servers 112 may be any servers suitable for supporting such cloud computing resources, which may depend on the type(s) of cloud computing resources being made available for use by CNs 102.

The PSs 112 each include one or more virtual machines (VMs) 113 which may be utilized by CNs 102. For each PS 112 supporting multiple VMs 113, the VMs 113 of the PS 112 may be implemented using any virtualization technique(s) suitable for logically separating different customer networks on the physical server 112 (and, thus, for making different portions of the same physical hardware available for use by different customers in a transparent, secure, and cost-effective manner). The VMs 113 are virtual machines configured on the PSs 112. The VMs 113 provide cloud computing resources which may be configured on PSs 112 for use by CNs 102. The cloud computing resources include any resources which may be utilized by CNs 102, such as processor resources, memory resources, and the like, as well as various combinations thereof.

In the exemplary network of FIG. 1, three customers have VMs 113 provisioned within DC 101 for use by their CNs 102. The first CN 102A has four VMs 113 provisioned within DC 101 (namely, two VMs 113 on PS 112-1-1 of ED 110-1, one VM 113 on PS 112-1-2 of ED 110-1, and one VM 113 on PS 112-2-1 of ED 110-2). The second CN 102B also has four VMs 113 provisioned within DC 101 (namely, one VM 113 on a PS 112 of ED 110-1, one VM 113 on a PS 112 of ED 110-2, one VM 113 on PS 112-2-2 of ED 110-2, and one VM 113 on PS 112-3-2 of ED 110-3). The third CN 102C has two VMs 113 provisioned within DC 101 (namely, two VMs 113 on PS 112-3-1 of ED 110-3).
From these exemplary customers, it will be appreciated that VMs 113 may be provisioned within DC 101 in many different configurations (e.g., a customer may utilize one physical server, multiple physical servers within the same edge domain, or multiple physical servers across different edge domains, and, further, a customer may utilize one or more VMs on any given physical server).

The EDs 110 each include communication infrastructure for routing packets within the EDs 110. In one embodiment, EDs 110 may utilize switches to perform packet forwarding within EDs 110. For example, EDs 110 each may utilize one or more Ethernet switches. In one embodiment, since the switches in each ED 110 need to handle different VLANs for different customers, the switches in each ED 110 may be configured in a conventional tree-based topology (e.g., rooted at the FEs 120 associated with the EDs 110, respectively). It will be appreciated that EDs 110 may be implemented using other types of network elements and other associated communication capabilities.

The EDs 110 each have one or more FEs 120 associated therewith. The FEs 120 operate as gateways between EDs 110 and CD 130. It will be appreciated that, while one FE 120 per ED 110 is sufficient for the network virtualization functions to be supported within DC 101, multiple FEs 120 may be used to interface an ED 110 with CD 130 for purposes of load balancing, improved reliability, and the like.

The FEs 120 perform packet handling functions (e.g., address lookup and mapping, policy checking and enforcement, packet forwarding and tunneling, and the like) for packets associated with DC 101. The FEs 120 function as gateways between CNs 102 and CD 130 and between EDs 110 and CD 130. The FEs 120 include CN-facing FEs (illustratively, FE 120-4, which operates as a gateway in DC 101 for each of the CNs 102) and ED-facing FEs (illustratively, FEs 120-1, 120-2, and 120-3, which operate as gateways between EDs 110-1, 110-2, and 110-3, respectively, and CD 130). The operation of the FEs 120 in facilitating communications for CNs 102 and EDs 110 is described in additional detail hereinbelow.

The FEs 120 perform address lookup and mapping functions. The address lookup and mapping may be performed by an FE 120 using mapping information stored in the local storage of the FE 120 and/or by requesting the required mapping information from CC 140 (e.g., where the required mapping information is not available from the local storage of the FE 120). The mapping information which may be stored locally and/or requested from CC 140 includes any mapping information required for performing the packet handling functions (e.g., the virtual machine MAC <-> (cnet identifier, IP address) mapping, the virtual machine MAC <-> edge domain identifier mapping, the edge domain identifier <-> FE MAC list mapping, and the (cnet identifier, edge domain identifier) <-> VLAN identifier mapping described herein as being maintained by CC 140).

The FEs 120 may perform policy checking and enforcement functions. The policies may be general policies enforced by the data center operator and/or customer policies enforced for customers. The policy checking and enforcement functions may be provided using policy information, which may be stored locally on the FEs 120 and/or requested from CC 140 by the FEs 120 (e.g., where the required policy information is not available locally on the FEs 120).
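The cache-then-query behavior described above might look like the following minimal sketch; the controller handle and its resolve(vm_mac) call are assumptions for illustration (a concrete controller sketch appears further below):

```python
# A minimal sketch of FE address lookup: answer from the local mapping
# cache when possible, fall back to querying the CC on a miss.

class ForwardingElement:
    def __init__(self, controller):
        self.controller = controller  # assumed to expose resolve(vm_mac)
        self.cache = {}               # VM MAC -> resolved mapping information

    def lookup(self, vm_mac):
        if vm_mac not in self.cache:  # local miss: ask the central controller
            self.cache[vm_mac] = self.controller.resolve(vm_mac)
        return self.cache[vm_mac]
```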
For example, the FEs 120 may process packets to ensure that MAC addresses, IP addresses, VLAN IDs, and the like are consistent for the source and destination of the packet. For example, the FEs 120 may apply customer policies (e.g., forwarding packets to firewalls before delivery, utilizing load balancers, and the like). It will be appreciated that FEs 120 may check and enforce any other suitable data center operator and/or customer policies.

The FEs 120 perform packet forwarding functions. The address lookup and mapping functions may be utilized in conjunction with the packet forwarding functions. For example, FEs 120 may forward packets originating from CNs 102 that are intended for processing by virtual machines within DC 101, forward packets originating from virtual machines within DC 101 that are intended for CNs 102, forward packets originating from virtual machines within DC 101 that are intended for other virtual machines within DC 101, forward control packets exchanged within DC 101 (e.g., packets conveying provisioning information, mapping information, and the like), and the like. As may be seen from these exemplary packet flows, FEs 120 may receive packets from EDs 110 and forward the packets via CD 130, receive packets from CD 130 and forward the packets to EDs 110, receive packets from EDs 110 and forward the packets back within the same EDs 110, and the like. The FEs 120 may perform such packet forwarding using any suitable packet forwarding capabilities.

In one embodiment, a source FE 120 receiving a packet from a CN 102 or an ED 110 for forwarding to a destination ED 110 via CD 130 tunnels the packet across CD 130. In one such embodiment, the source FE tunnels the packet using MAC-in-MAC tunneling. In MAC-in-MAC tunneling, the source FE 120 adds an outer MAC header to the Ethernet frame (the header of the Ethernet frame being the internal header), the modified Ethernet frame is routed by the Ethernet switches of CD 130 using the outer MAC header, and the destination FE 120 removes the outer MAC header. It will be appreciated that most Ethernet switches allow larger frame sizes to be used, such that the additional bytes of the outer MAC header will not cause any issues within CD 130 (especially in CD 130, where higher capacity Ethernet switches may be deployed to meet the traffic demands of all of the EDs 110). It will be appreciated that any other suitable type of tunneling may be used.

In one embodiment, an FE 120 receiving a packet for delivery within the ED 110 with which the FE 120 is associated (e.g., to a VM 113) forwards the packet within the ED 110 using the MAC address of the VM 113, the IP address of the VM 113, and the VLAN ID of the CN 102 with which the packet is associated. The packet may be routed through the ED 110 via one or more Ethernet switches deployed within the ED 110 for facilitating communications between the FE 120 and the VMs 113 hosted within the ED 110. It will be appreciated that this type of packet forwarding may be performed for packets received at the FE 120 via CD 130 and for packets originating from within the ED 110 with which the FE 120 is associated (i.e., intra-ED communications).

The FEs 120 perform such functions using any information suitable for performing such functions. As indicated hereinabove, the FEs 120 may access such information in any suitable manner.
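As an illustration of the MAC-in-MAC encapsulation described above, the following minimal sketch operates on raw frame bytes; the EtherType value is an assumption for illustration, as the specification does not name one:

```python
# The source FE prepends an outer Ethernet header addressed to the
# destination FE, core switches forward on that outer header, and the
# destination FE strips it. EtherType 0x88E7 (802.1ah provider backbone
# bridging) is used here purely for illustration.
import struct

def mac_bytes(mac):
    """Convert 'aa:bb:cc:dd:ee:ff' into 6 raw bytes."""
    return bytes(int(part, 16) for part in mac.split(":"))

def encapsulate(inner_frame, src_fe_mac, dst_fe_mac, ethertype=0x88E7):
    outer = (mac_bytes(dst_fe_mac) + mac_bytes(src_fe_mac)
             + struct.pack("!H", ethertype))
    return outer + inner_frame    # outer header + unmodified inner frame

def decapsulate(tunneled_frame):
    return tunneled_frame[14:]    # drop the 14-byte outer Ethernet header

inner = bytes(64)                 # stand-in for a customer Ethernet frame
tunneled = encapsulate(inner, "02:fe:00:00:00:01", "02:fe:00:00:00:02")
assert decapsulate(tunneled) == inner
```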
For example, the FEs 120 may store such information locally (e.g., packet forwarding entries, customer policy rules, and the like) and/or receive such information from one or more remote sources (e.g., via signaling with other FEs 120, via requests to CC 140, and the like). The FEs 120 may support various other capabilities in addition to the address lookup and mapping functions, the policy checking and enforcement functions, and the packet forwarding functions.

The FEs 120 may be implemented in any suitable manner. In one embodiment, FEs 120 may be implemented as switches. In one such embodiment, for example, FEs 120 may be implemented as high capacity Ethernet switches supporting such capabilities. The FEs 120 also may support enhanced APIs adapted for enabling the FEs 120 to be controlled remotely by CC 140. In one embodiment, one or more of the FEs 120 may have one or more additional network elements associated therewith for use in performing various packet processing functions (e.g., firewalls, load balancers, and the like). For example, the CN-facing FE 120-4 may have a firewall and load balancer associated therewith for processing packets received from CNs 102 before routing of the packets within DC 101, and FE 120-1 may have a firewall and load balancer associated therewith for processing packets received at FE 120-1 from CD 130 before routing of the packets within ED 110-1 and/or processing packets received at FE 120-1 from ED 110-1 before routing of the packets toward CD 130.

The CD 130 facilitates communications within DC 101, including communications associated with FEs 120. For example, CD 130 facilitates communications between CNs 102 and EDs 110, between EDs 110, between FEs 120, between FEs 120 and data center controllers (e.g., between FEs 120 and CC 140 and between FEs 120 and MS 150), and the like, as well as various combinations thereof. The CD 130 also facilitates communications between CC 140 and other components of DC 101 (e.g., FEs 120, VMs 113, and the like) and, optionally, between MS 150 and other components of DC 101 (e.g., FEs 120, VMs 113, and the like).

The CD 130 includes communication infrastructure for routing packets within CD 130 and may be implemented in any suitable manner. In one embodiment, CD 130 may utilize switches to perform packet forwarding within CD 130. In one such embodiment, for example, CD 130 may utilize one or more Ethernet switches. It will be appreciated that, since the switches in CD 130 primarily are used for packet forwarding, the design of the switching fabric of CD 130 is not limited by any policy. In one embodiment, in order to ensure better resource usage, shortest-path frame routing, or other suitable schemes that allow use of multiple paths, may be implemented within CD 130. It will be appreciated that CD 130 may be implemented using any other suitable type of network elements and associated communication capabilities.

The CC 140 cooperates with FEs 120 to enable FEs 120 to provide the packet forwarding functions for DC 101. The CC 140 may determine provisioning information for use in provisioning the FEs 120 to support customer requests for computing resources of DC 101, and may provide such provisioning information to the FEs 120. The CC 140 may maintain mapping information adapted for use by FEs 120 in forwarding packets for DC 101, and may provide such mapping information to the FEs 120.
The CC 140 may maintain policy information adapted for use by FEs 120 in enforcing customer policies while forwarding packets for DC 101, and may provide such policy information to the FEs 120.

In one embodiment, CC 140 maintains the following mapping information adapted for use by FEs 120 in forwarding packets for DC 101:

(1) virtual machine MAC <-> (cnet identifier, IP address) mapping: This mapping maps the MAC address of a VM 113 to the combination of the customer network identifier (cnet identifier) of the customer for which the VM 113 is provisioned and the IP address of the VM 113. Where each customer has its own independent IP space, the (cnet identifier, IP address) combination also uniquely identifies the VM 113;

(2) virtual machine MAC <-> edge domain identifier mapping: This mapping maps the MAC address of a VM 113 to the identifier of the ED 110 within which that VM 113 is hosted. As noted hereinabove, CNs 102 are considered special edge domains and, thus, each CN 102 also is assigned an edge domain identifier;

(3) edge domain identifier <-> FE MAC list mapping: This mapping maintains an association between an ED 110 and the MAC address(es) of the FE(s) 120 to which the ED 110 connects. As noted hereinabove, it is possible for an ED 110 to have multiple FEs 120, e.g., for load balancing reasons, reliability reasons, and the like;

(4) (cnet identifier, edge domain identifier) <-> VLAN identifier mapping: This mapping maps a combination of a customer network identifier and an edge domain identifier to a VLAN identifier for the customer (since a customer may access VMs 113 in multiple EDs 110). In other words, each CN 102 is allocated a VLAN identifier in each ED 110 having one or more VMs 113 of the customer, where the VLAN identifiers for a given CN 102 may be different in different EDs 110. The VLAN identifiers in DC 101 may be allocated by CC 140 (or any other element(s) suitable for performing such allocation). The VLAN identifiers used in CNs 102 are allocated by the respective customers.

In one embodiment, at least a portion of this mapping information that is maintained by CC 140 also may be stored within each of the FEs 120, such that the FEs 120 may utilize such mapping information locally for performing packet forwarding, rather than having to continually query CC 140 for it.

In one embodiment, CC 140 maintains policy information adapted for use by FEs 120 in enforcing customer policies while forwarding packets for DC 101. The policies may include any suitable policy rules. For example, a policy rule for a customer may be that all packets must first be forwarded from the associated CN-facing FE 120 to a firewall before being forwarded to the destination. In this case, the source FE 120 that enforces this policy will tunnel the packets to the firewall before the packets are ultimately forwarded toward the destination FE 120.

The CC 140 may maintain the information in any suitable manner (e.g., maintaining the information using one or more databases, storing the information in any suitable format, and the like, as well as various combinations thereof). The CC 140 may provide the information to FEs 120 in any suitable manner, e.g., periodically, in response to queries from FEs 120, and the like, as well as various combinations thereof. Although primarily depicted and described herein with respect to use of a single CC 140 within DC 101, it will be appreciated that standard reliability and scalability techniques may be used for providing the functions of CC 140 within DC 101.
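The four mappings enumerated above might be organized as in the following minimal sketch, with hypothetical class and field names and a resolve() query of the kind the FE sketch above assumed:

```python
# A minimal sketch of the CC's four mapping tables and a lookup that
# returns everything an FE needs to forward a packet toward a VM.

class CentralController:
    def __init__(self):
        self.cnet_ip_by_vm_mac = {}    # VM MAC -> (cnet identifier, IP)
        self.domain_by_vm_mac = {}     # VM MAC -> edge domain identifier
        self.fe_macs_by_domain = {}    # edge domain identifier -> [FE MACs]
        self.vlan_by_cnet_domain = {}  # (cnet id, edge domain id) -> VLAN ID

    def resolve(self, vm_mac):
        """Bundle the mapping entries relevant to forwarding to vm_mac."""
        cnet_id, ip = self.cnet_ip_by_vm_mac[vm_mac]
        domain = self.domain_by_vm_mac[vm_mac]
        return {
            "cnet": cnet_id,
            "ip": ip,
            "domain": domain,
            "fe_macs": self.fe_macs_by_domain[domain],
            "vlan": self.vlan_by_cnet_domain[(cnet_id, domain)],
        }
```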
The functionality of CC 140 may be distributed in any suitable manner. In one embodiment, for example, different EDs 110 may be managed by different CCs. In one embodiment, for example, different customers may be assigned to different CCs (e.g., using a Distributed Hash Table (DHT)). It will be appreciated that, since management and policy control are relatively independent for different customer networks, such partitioning of the functionality of CC 140 does not affect the functionality of CC 140.

In one embodiment, FEs 120 and CC 140 cooperate as a distributed router, in which the FEs 120 are simplified packet forwarding elements and the CC 140 provides centralized control of the FEs 120. The use of such a distributed router architecture provides various advantages for DC 101, simplifying management of resources and policies within DC 101. In one embodiment, FEs 120 and CC 140 together form a VICTOR, as depicted and described in U.S. Patent Application Serial No. 12/489,187, entitled "PROVIDING CLOUD-BASED SERVICES USING DYNAMIC NETWORK VIRTUALIZATION," filed June 22, 2009, which is incorporated by reference herein in its entirety.

The MS 150 provides a customer front-end by which customers may request computing resources of DC 101. For example, a customer may access MS 150 remotely in order to request that computing resources of DC 101 be made available for use by the CN 102 of that customer. The customer may specify various parameters associated with the request for computing resources, such as type of resources, amount of resources, duration of resource usage/availability, and the like, as well as various combinations thereof. The MS 150, based upon a request for computing resources, signals one or more of the FEs 120 to provision the FE(s) 120 to support the request for computing resources, or signals CC 140 to signal one or more of the FEs 120 to provision the FE(s) 120 to support the request. The provisioning may be performed immediately (e.g., where the request indicates that the computing resources are to be used immediately), may be scheduled to be performed at a later time (e.g., where the request indicates that the computing resources are to be used at a later time), and the like. The MS 150 also may provide various network management functions within DC 101.

Although primarily depicted and described with respect to a data center having specific types, numbers, and arrangements of network elements, it will be appreciated that the data center may be implemented using different types, numbers, and/or arrangements of network elements. For example, although primarily depicted and described with respect to use of a single FE 120 to connect an ED 110 to CD 130, one or more EDs 110 may be connected to CD 130 via multiple FEs 120 (e.g., for load-balancing, reliability, and the like). For example, although primarily depicted and described with respect to use of a single CC 140, the functionality of CC 140 may be spread across multiple CCs 140. It will be appreciated that various other modifications may be made.

As described herein, DC 101 enables the CNs 102 to utilize cloud computing resources of DC 101 without modification of the CNs 102. The CNs 102 may include any types of customer networks, e.g., from home networks of individual customers to enterprise networks of large enterprise customers. In other words, CNs 102 may be any customer networks which may make use of cloud computing resources of DC 101.
The CNs 102 may communicate with DC 101 using any suitable means of communication (e.g., Virtual Private LAN Service (VPLS), Multi-Protocol Label Switching (MPLS), and the like).

In one embodiment, DC 101 and a CN 102 communicate using VPLS. In one such embodiment, in which a CN 102 communicates with DC 101 using VPLS via provider edge (PE) routers of an Internet Service Provider (ISP), a customer edge (CE) device within the CN 102 connects to a first VPLS PE router of the ISP, and the CE-acting FE 120 of DC 101 that is associated with the CN 102 connects to a second VPLS PE router of the ISP. To the CE devices at the ends, the associated PE router appears as a local Ethernet switch. In one further embodiment, in order to avoid scalability issues that may arise (e.g., since different ports need to be allocated at both the CE device and PE router interfaces for different customers, the total number of customers that can be supported by PEs and CE-acting FEs is limited by the number of ports), QinQ encapsulation is used between the CE-acting FE and the PE routers, thereby enabling each port to support up to 4000 customers and, thus, significantly increasing the number of customers that can be supported. It will be appreciated that VPLS may be utilized to support communications between DC 101 and CNs 102 in any other suitable manner.

In one embodiment, DC 101 and a CN 102 communicate using MPLS. In one such embodiment, in which both DC 101 and the CN 102 get network connections from the same Internet Service Provider (ISP), MPLS may be used as the underlying transport, and bandwidth may be provisioned for better quality of service. In another such embodiment, in which DC 101 and the CN 102 get network connections from different ISPs, one or more other encapsulation protocols may be utilized for traversing layer three networks (e.g., Generic Routing Encapsulation (GRE) or other encapsulation protocols suitable for traversing layer three networks). It will be appreciated that MPLS may be utilized to support communications between DC 101 and the CNs 102 in any other suitable manner. In one embodiment, for individual users or small businesses, Layer 2 Tunneling Protocol (L2TP) / Internet Protocol Security (IPsec) may be used between DC 101 and the CN 102 in order to provide basic connectivity and security without assistance from network service providers. It will be appreciated that communications between DC 101 and CNs 102 may be implemented using any other suitable types of communications capabilities.

As described herein, the iEC2 architecture enables integration of cloud computing resources with customer networks, thereby enabling customers to utilize cloud computing resources without modification of the customer networks. The utilization of cloud computing resources of the data center by customer networks requires exchanging of packets between the data center and the customer networks, as well as exchanging of packets within the data center. A description of different data forwarding paths utilized for providing cloud computing services in the iEC2 architecture follows.

In utilizing the cloud computing service, a device in a CN 102 may send a packet to a VM 113 in DC 101. For purposes of clarity in describing the data flow, assume for this example that a packet is sent from a device in CN 102A to the VM 113 that is hosted on PS 112-1-2 in ED 110-1 for CN 102A.
The packet is forwarded from CN 102A to the CN-facing FE 120-4 associated with CN 102A. The packet received at CN-facing FE 120-4 includes the MAC address and IP address of the intended VM 113.

The CN-facing FE 120-4 ensures that the MAC address of the intended VM 113 that is included in the packet is consistent with the combination of (cnet identifier, IP address), using the virtual machine MAC <-> (cnet identifier, IP address) mapping.

The CN-facing FE 120-4 determines the ED 110 hosting the VM 113 for which the packet is intended (which, in this example, is ED 110-1, hosting the intended VM 113 for CN 102A) by using the MAC address from the packet to access the virtual machine MAC <-> edge domain identifier mapping.

The CN-facing FE 120-4 determines the MAC address of one of the FEs 120 of the identified ED 110 hosting the VM 113 for which the packet is intended (which, in this example, is FE 120-1 serving ED 110-1) by using the determined edge domain identifier to access the edge domain identifier <-> FE MAC list mapping.

The CN-facing FE 120-4 modifies the received packet to include the determined MAC address of the one of the FEs 120 of the ED 110 hosting the VM 113 for which the packet is intended (which, in this example, is the MAC address of FE 120-1 serving ED 110-1). For example, CN-facing FE 120-4 may append the determined MAC address of FE 120-1 as an outer header of the received packet to form a modified packet. The CN-facing FE 120-4, optionally, also may access policy information for the customer.
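The walkthrough above can be condensed into a final minimal sketch that reuses the hypothetical CentralController from the earlier sketch; all values are illustrative, not taken from the specification:

```python
# Steps at the CN-facing FE: check that the packet's claimed address
# bindings are consistent, resolve the hosting edge domain and one of
# its FEs, and prepend that FE's MAC as the outer header.

def forward_from_customer(cc, packet):
    # Consistency check: the destination MAC must agree with the
    # (cnet identifier, IP address) carried by the packet.
    info = cc.resolve(packet["dst_mac"])
    if (info["cnet"], info["ip"]) != (packet["cnet"], packet["dst_ip"]):
        raise ValueError("inconsistent address bindings; dropping packet")

    # Edge-domain and FE lookup; the first listed FE is chosen here,
    # though an implementation might balance load across the list.
    dst_fe_mac = info["fe_macs"][0]

    # The chosen FE's MAC becomes the outer header (as with the
    # MAC-in-MAC encapsulation sketched earlier).
    return {"outer_dst_mac": dst_fe_mac, "inner": packet}

# Example setup mirroring the walkthrough (all values illustrative):
cc = CentralController()
cc.cnet_ip_by_vm_mac["02:00:00:00:01:05"] = ("cnet-A", "10.0.0.5")
cc.domain_by_vm_mac["02:00:00:00:01:05"] = "ED-1"
cc.fe_macs_by_domain["ED-1"] = ["02:fe:00:00:00:01"]
cc.vlan_by_cnet_domain[("cnet-A", "ED-1")] = 100
out = forward_from_customer(cc, {"dst_mac": "02:00:00:00:01:05",
                                 "cnet": "cnet-A", "dst_ip": "10.0.0.5"})
assert out["outer_dst_mac"] == "02:fe:00:00:00:01"
```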

Documents

Application Documents

# Name Date
1 3491-CHENP-2012 FORM-18 19-04-2012.pdf 2012-04-19
2 3491-CHENP-2012 CORRESPONDENCE OTHERS 19-04-2012.pdf 2012-04-19
3 Power of Authority.pdf 2012-04-24
4 Form-5.pdf 2012-04-24
5 Form-3.pdf 2012-04-24
6 Form-1.pdf 2012-04-24
7 Drawings.JPG 2012-04-24
8 3491-CHENP-2012 FORM-3 18-06-2013.pdf 2013-06-18
9 3491-CHENP-2012 CORRESPONDENCE OTHERS 18-06-2013.pdf 2013-06-18
10 3491-CHENP-2012 CORRESPONDENCE OTHERS 19-08-2013.pdf 2013-08-19
11 3491-CHENP-2012 FORM-3 07-10-2013.pdf 2013-10-07
12 3491-CHENP-2012 CORRESPONDENCE OTHERS 07-10-2013.pdf 2013-10-07
13 3491-CHENP-2012 FORM-3 11-08-2014.pdf 2014-08-11
14 3491-CHENP-2012 CORRESPONDENCE OTHERS 11-08-2014.pdf 2014-08-11
15 3491-CHENP-2012 FORM-3 02-03-2015.pdf 2015-03-02
16 3491-CHENP-2012 CORRESPONDENCE OTHERS 02-03-2015.pdf 2015-03-02
17 3491-CHENP-2012 FORM-3 09-06-2015.pdf 2015-06-09
18 3491-CHENP-2012 CORRESPONDENCE OTHERS 09-06-2015.pdf 2015-06-09
19 3491-CHENP-2012-FORM-3-15-10-15.pdf 2016-03-19
20 3491-CHENP-2012-CORRESPONDENCE-15-10-15.pdf 2016-03-19
21 3491-CHENP-2012-Form 3-290216.pdf 2016-07-05
22 3491-CHENP-2012-Correspondence-F3-290216.pdf 2016-07-05
23 Form 3 [23-11-2016(online)].pdf 2016-11-23
24 3491-CHENP-2012-FER.pdf 2018-04-17
25 3491-CHENP-2012-FORM 3 [13-06-2018(online)].pdf 2018-06-13
26 3491-CHENP-2012-ABSTRACT [15-10-2018(online)].pdf 2018-10-15
27 3491-CHENP-2012-CLAIMS [15-10-2018(online)].pdf 2018-10-15
28 3491-CHENP-2012-COMPLETE SPECIFICATION [15-10-2018(online)].pdf 2018-10-15
29 3491-CHENP-2012-DRAWING [15-10-2018(online)].pdf 2018-10-15
30 3491-CHENP-2012-FER_SER_REPLY [15-10-2018(online)].pdf 2018-10-15
31 3491-CHENP-2012-FORM 3 [15-10-2018(online)].pdf 2018-10-15
32 3491-CHENP-2012-FORM-26 [15-10-2018(online)].pdf 2018-10-15
33 3491-CHENP-2012-Information under section 8(2) (MANDATORY) [15-10-2018(online)].pdf 2018-10-15
34 3491-CHENP-2012-OTHERS [15-10-2018(online)].pdf 2018-10-15
35 3491-CHENP-2012-Proof of Right (MANDATORY) [15-10-2018(online)].pdf 2018-10-15
36 Correspondence by Agent_Power of Attorney_16-10-2018.pdf 2018-10-16
37 3491-CHENP-2012-FORM-26 [01-02-2021(online)].pdf 2021-02-01
38 3491-CHENP-2012-Correspondence to notify the Controller [01-02-2021(online)].pdf 2021-02-01
39 3491-CHENP-2012-Response to office action [22-03-2021(online)].pdf 2021-03-22
40 3491-CHENP-2012-US(14)-HearingNotice-(HearingDate-02-02-2021).pdf 2021-10-17

Search Strategy

1 3491search_10-04-2018.pdf