
Guest-Based Quality of Service for iSCSI

Abstract: A system and method for providing access to a Logical Unit mapped to an iSCSI target are described herein. In accordance with this disclosure, an initiator IQN name may be split into a physical IQN name (PIN) and a virtual IQN name (VIN). The VIN may be assigned to a virtual adapter that is created in a guest partition. The PIN may be assigned to a physical adapter (e.g., an iSCSI initiator in a hypervisor). The physical adapter may log into the iSCSI target on behalf of the virtual adapter using the VIN. The physical adapter may receive a list of available logical units associated with the iSCSI target and map the list of available logical units to the virtual adapter. Thereafter, a quality of service between the virtual adapter and the iSCSI target may be monitored. Fig. 1


Patent Information

Application #:
Filing Date: 26 September 2013
Publication Number: 14/2015
Publication Type: INA
Invention Field: COMPUTER SCIENCE
Status:
Email: patents@ssrana.com
Parent Application:

Applicants

EMULEX
3333 Susan Street, Costa Mesa, California 92626, U.S.A.

Inventors

1. KRISHNA, Prabal
403, 11th Cross, 18th Main, 2nd Phase, J.P. Nagar, Bangalore-560078, Karnataka, India.

Specification

[0001] The disclosure relates to the field of iSCSI (Internet Small Computer
System Interface). In particular, but not exclusively, it relates to the application of a
quality of service (QoS) in the iSCSI configuration.
BACKGROUND
[0002] Internet Small Computer System Interface (iSCSI) is an Internet
Protocol (IP)-based storage networking standard for linking data storage facilities. By
carrying SCSI commands over IP networks, iSCSI is used to facilitate data transfers
over intranets and to manage storage over long distances. The iSCSI protocol can be
used to transmit data over local area networks (LANs), wide area networks (WANs), or
the Internet and can enable location-independent data storage and retrieval. The
protocol allows clients (called initiators) to send SCSI commands via Command
Descriptor Blocks (CDBs) to SCSI storage devices (targets) on remote servers. The
iSCSI protocol is a storage area network (SAN) protocol that is a block storage
protocol, allowing organizations to consolidate storage into data center storage arrays
while providing hosts (such as database and web servers) with the illusion of locally
attached disks.
[0003] The Data Center Bridging Exchange (DCBX) standard defined by IEEE
describes a method for bandwidth allocation that can be implemented on an iSCSI
network link. However, even with the DCBX protocol implemented, the availability of
resources on a server may not be visible to a Guest Partition from a client point-of-view.
[0004] Furthermore, a unified quality of service (QoS) between the Guest
Partition and the server resource is not guaranteed in a configuration implementing the
DCBX protocol alone. For example, the client side of a configuration can apply a QoS
on its side of the network, but if the server does not guarantee the same QoS
parameters, the configuration on the client side may have no effect from an end user
perspective.
[0005] A logical unit may be a device addressed by a SCSI protocol or
protocols which encapsulate SCSI. The SCSI protocol may be, for example, iSCSI.
There are different methods of providing direct access to a Logical Unit from a Guest
Partition. A software only implementation of an iSCSI Initiator can be run directly on a
Guest Partition. However, this option is available only for a software implementation
of an iSCSI Initiator.
[0006] An iSCSI Initiator running on a Hypervisor can log into the iSCSI target,
and the Hypervisor can map the Logical Units to a Guest Partition in pass-through
mode. However, using the pass-through mode may prevent the use of additional
capabilities that exist in the hardware, therefore limiting the competitive benefit of this
option.
[0007] A hardware implementation of an iSCSI Initiator can be virtualized by
way of storage single root I/O virtualization (SR-IOV), and an iSCSI Login can be
directly issued to a Guest Partition completely bypassing the Hypervisor. However,
this option is dependent on Hypervisor vendors implementing Storage SR-IOV, which
is not widely accepted.
[0008] Further limitations and disadvantages of conventional and traditional
approaches will become apparent to one of skill in the art, through comparison of such
systems with the present disclosure as set forth in the remainder of the present
application with reference to the drawings.
BRIEF SUMMARY
[0009] Aspects of the present disclosure are aimed at a system and method for
providing access to an iSCSI target. In accordance with this disclosure, an initiator
IQN name may be split into a physical IQN name (PIN) and a virtual IQN name (VIN).
The VIN may be assigned to a virtual adapter that is created in a guest partition. The
PIN may be assigned to a physical adapter (e.g., an iSCSI initiator in a hypervisor).
The physical adapter may log into the iSCSI target on behalf of the virtual adapter
using the VIN. The physical adapter may receive a list of available logical units
associated with the iSCSI target and map the list of available logical units to the virtual
adapter. Thereafter, a quality of service between the virtual adapter and the iSCSI
target may be monitored. The method of partitioning the Initiator IQN name into its
physical and virtual parts may be referred to as Initiator IQN Name virtualization or
IINV.
[00010] One example embodiment of this disclosure comprises a guest partition
and a hypervisor. The hypervisor may instantiate a virtual adapter in the guest partition.
Upon logging into the iSCSI target, the hypervisor may query the iSCSI target for a list
of logical units and attach one or more logical units to the virtual adapter.
[00011] In another example embodiment of this disclosure, the guest partition
and the hypervisor are elements of a multi-tenant environment comprising a plurality of
guest partitions. The multi-tenant environment may achieve traffic isolation at a logical
unit level according to an Access Control List (ACL) on the iSCSI target. In addition, a
layer-2 packet that originates on behalf of an iSCSI virtual adapter may be tagged with
a distinct VLAN ID as per the provisions available in the IEEE 802.1Q protocol, thus
providing additional traffic isolation in a multi-tenant environment.
[00012] In another example embodiment of this disclosure, the hypervisor may
comprise a storage stack operable to query the iSCSI target for a list of available logical
units.
[00013] In another example embodiment of this disclosure, a method for
providing access to an iSCSI target, may comprise: assigning a virtual IQN name (VIN)
to a virtual adapter; logging into the iSCSI target on behalf of the virtual adapter using
the VIN; receiving an acceptance response from the iSCSI target; querying the iSCSI
target for a list of available logical units; receiving a report from the iSCSI target with
the list of available logical units; and mapping the list of available logical units to the
iSCSI virtual adapter. An iSCSI initiator on a hypervisor may assign the VIN to the
virtual adapter. The iSCSI initiator may also log in to the iSCSI Target using the VIN
as an initiator name. The storage stack running on the hypervisor may query the iSCSI
target for the list of available logical units associated with an access control list.
Thereafter, the hypervisor may map the list of available logical units to one or more
iSCSI virtual adapters.
[00014] In another example embodiment of this disclosure, a system and method
operable to provide access to an iSCSI target uses a guest partition and a hypervisor.
The guest partition may comprise a virtual adapter. The hypervisor may manage a
quality of service between the virtual adapter and the iSCSI target by monitoring the
resource usage of the virtual adapter and limiting the flow of inbound and outbound
traffic on behalf of the virtual adapter. The quality of service may be initialized and/or
updated by one or more quality of service related parameters. Examples of quality of
service related parameters may include: a minimum number of input/output operations
per second (IOPs); a maximum number of input/output operations per second (IOPs); a
minimum throughput; and a maximum throughput.
[00015] In another example embodiment of this disclosure, a system and method
operable to provide access to an iSCSI target uses a guest partition, a hypervisor and a
quality of service manager. The quality of service manager may monitor a quality of
service negotiation between a physical adapter on the hypervisor and the iSCSI target.
The quality of service negotiation may utilize one or more extension keys as per
provisions of RFC 3720 in addition to one or more operational text keys in a login
protocol data unit.
BRIEF DESCRIPTION OF SEVERAL VIEWS OF THE DRAWINGS
[00016] The disclosure will now be described in greater detail with reference to
accompanying figures, in which:
[00017] Figure 1 is a block diagram of an Initiator IQN name virtualization
(IINV) platform according to one or more example embodiments of the present
disclosure.
[00018] Figure 2 is a flow diagram of an Initiator IQN name virtualization
(IINV) method according to one or more example embodiments of the present
disclosure.
[00019] Figure 3 is a block diagram illustrating where the elements of the IINV
method occur according to one or more example embodiments of the present
disclosure.
[00020] Figure 4 is a block diagram of an iSCSI SAN configuration according to
one or more example embodiments of the present disclosure.
[00021] Figure 5 illustrates a SAN Configuration with an Initiator Layer
/Physical Adapter running on a Hypervisor according to one or more example
embodiments of the present disclosure.
[00022] Figure 6 illustrates communication between the iSCSI Target and the
iSCSI Initiator according to one or more example embodiments of the present
disclosure.
DETAILED DESCRIPTION
[00023] This disclosure provides a system and method for end-to-end QoS
provisioning in a client-server configuration. The provisioning of IOPs (Input/Output
Operations per Second) and Bandwidth parameters may be directly applied to a
Virtual Adapter, thus making them visible to a Guest Partition.
[00024] With iSCSI Initiators and Targets being developed and manufactured by
various vendors, an end-to-end QoS provisioning scheme may be implemented by
utilizing the extensions per RFC 3720 to define new QoS specific parameters. This
may eliminate the need for a separate QoS management application that interacts with
both the iSCSI Initiator and iSCSI Target.
[00025] This disclosure also provides systems and methods for virtualizing the
iSCSI Initiator in order to provide direct access to the Logical Unit (LU) from a Guest
Partition. These systems and methods may not require any changes to the iSCSI
protocol as defined in RFC 3720. By using the systems and methods defined in this
disclosure, it may be possible to provide complete traffic isolation for each Guest
Partition, which may be especially useful in a multi-tenant environment. In addition,
for an example embodiment that implements iSCSI Initiator capability in hardware, this
disclosure may provide additional application deployment options, since the Logical
Unit can be directly mapped to a guest.
[00026] This disclosure proposes a mechanism for exporting Logical Units from
an iSCSI Target directly into Guest Partitions. Unlike storage SR-IOV, which bypasses
the Hypervisor completely, the following system may be under the direct control of a
Hypervisor. Therefore, this system may seamlessly integrate with storage migration
solutions implemented in Hypervisors. This may provide the benefits of traffic
isolation both at the Logical Unit and Network Layer level for hardware
implementations of the iSCSI protocol.
[00027] Figure 1 is a block diagram of an Initiator IQN name virtualization
(IINV) platform according to one or more example embodiments of the present
disclosure. The exemplary IINV platform comprises a Hypervisor Host 107 and three
Guest Partitions - Guest Partition 0 (101), Guest Partition 1 (103) and Guest Partition 2
(105). Guest Partition 0 (101) comprises Virtual iSCSI Adapter 109. Guest Partition 1
(103) comprises Virtual iSCSI Adapter 111. Guest Partition 2 (105) comprises Virtual
iSCSI Adapter 113. Other example embodiments of the IINV platform may comprise
more or fewer Guest Partitions and/or Virtual iSCSI Adapters. The Hypervisor Host
107 comprises an iSCSI Initiator/Physical Adapter 115 and a Host iSCSI Driver 117.
[00028] The iSCSI Initiator running on the Hypervisor may be capable of
reporting support for an IINV feature. The iSCSI Initiator may allow assignment of
the VIN by the Hypervisor; store and retrieve the VIN from a persistent store; and delete
it from the persistent store on demand. The persistent store that is used to store the VIN
may be available across power cycles.
[00029] The iSCSI Qualified Name (IQN), as documented in RFC 3720, is a
unique name of an iSCSI initiator. The IQN name is split into a physical and a virtual
part by the Hypervisor. The Physical IQN Name (PIN) resides with the iSCSI Initiator
115 running on the Hypervisor Host 107. Each Virtual iSCSI Adapter 109, 111 and
113 is assigned a Virtual IQN Name (VIN).
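As an illustration of this naming split, the following sketch assumes a hypothetical derivation scheme in which each VIN is formed from the PIN; the IQN strings shown are example values, and the actual format of a VIN is not prescribed by this description.
```python
# Illustrative only: a hypothetical scheme for deriving per-adapter VINs from a
# base PIN. Any IQN-formatted, globally unique name could serve as a VIN.

PIN = "iqn.2013-09.com.example.host:initiator0"   # retained by the physical adapter

def make_vin(pin: str, guest_index: int) -> str:
    """Derive a Virtual IQN Name (VIN) for one guest partition's virtual adapter."""
    return f"{pin}.vport{guest_index}"

vins = [make_vin(PIN, i) for i in range(3)]       # one VIN per Virtual iSCSI Adapter
```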
[00030] Figure 2 is a flow diagram of an Initiator IQN name virtualization
(IINV) method 200 according to one or more example embodiments of the present
disclosure. Figure 3 is a block diagram illustrating where the elements of the IINV
method 200 of Figure 2 occur according to one or more example embodiments of the
present disclosure. As shown in Figure 3, the iSCSI Initiator 115 creates the Virtual
iSCSI Adapter 109 within the Guest Partition 101 of Figure 1. The iSCSI Initiator 115
assigns a Virtual IQN Name (VIN) to the instantiated Virtual iSCSI Adapter 109 at
block 201 of Figures 2 and 3. There are several methods that may be invoked in
accordance with block 201 of Figures 2 and 3. These methods comprise:
QueryIINVCapability, QueryInstances, CreateInstance and DeleteInstance.
[00031] The QueryIINVCapability method may be invoked by the Hypervisor
at startup or whenever a new iSCSI Initiator arrives on the system. The following are
the operations that occur in the QueryIINVCapability method. First, the Hypervisor
queries the iSCSI initiator to determine whether it supports Initiator IQN Name
Virtualization (IINV) capability. If the IINV capability is supported by the iSCSI
initiator running on the Hypervisor, the iSCSI Initiator may return TRUE and also may
return a count of the maximum number of VINs that can be supported by the iSCSI
initiator.
[00032] The QueryInstances method may be invoked by the Hypervisor after it
determines that the iSCSI initiator has the IINV capability. The following are example
operations that occur in the QueryInstances method. First, the Hypervisor queries the
iSCSI initiator for a list of VINs already created on the iSCSI Initiator. In response to
the query, the iSCSI Initiator consults its persistent store and returns all the VINs
created on the iSCSI Initiator.
[00033] The CreateInstance method may be invoked by the Hypervisor
whenever there is a request to create a new Virtual iSCSI Adapter on behalf of a Guest
Partition. The following are example operations that occur in the CreateInstance
method. The Hypervisor checks whether the number of VINs plus one would exceed the
maximum number of VINs that can be created on the iSCSI Initiator. If the count would
not exceed the maximum allowed, the Hypervisor may generate a new VIN that
is unique across all instances of VINs on the Hypervisor. The Hypervisor may assign
the VIN via the iSCSI Initiator. The iSCSI Initiator may store the VIN in its persistent
store and may increment the count of the number of VINs it has been assigned.
[00034] The DeleteInstance method may be invoked by the Hypervisor
whenever there is a request to delete an existing Virtual iSCSI Adapter from a Guest
Partition. The following are example operations that occur in the DeleteInstance
method. The Hypervisor de-assigns the VIN via the iSCSI Initiator. The iSCSI
Initiator deletes the VIN from its persistent store and decrements the count of the
number of VINs currently assigned to it.
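The four methods above can be condensed into the following sketch, written in Python for brevity; the class layout, method signatures and the in-memory stand-in for the persistent store are assumptions made for illustration rather than a defined interface.
```python
class ISCSIInitiator:
    """Stand-in for an IINV-capable iSCSI Initiator running on the Hypervisor."""

    def __init__(self, max_vins: int, persistent_store: set):
        self.max_vins = max_vins
        self.store = persistent_store            # real stores survive power cycles

    def query_iinv_capability(self):
        """Report IINV support and the maximum number of VINs supported."""
        return True, self.max_vins

    def query_instances(self):
        """Return all VINs already created on this initiator."""
        return sorted(self.store)

    def create_instance(self, vin: str) -> bool:
        """Store a new VIN unless the maximum count would be exceeded."""
        if len(self.store) + 1 > self.max_vins:
            return False
        self.store.add(vin)
        return True

    def delete_instance(self, vin: str) -> None:
        """De-assign a VIN and remove it from the persistent store."""
        self.store.discard(vin)


class Hypervisor:
    def __init__(self, initiator: ISCSIInitiator):
        self.initiator = initiator

    def add_virtual_adapter(self, vin: str) -> bool:
        """Create a Virtual iSCSI Adapter instance on behalf of a Guest Partition."""
        supported, _max_count = self.initiator.query_iinv_capability()
        if not supported or vin in self.initiator.query_instances():
            return False                         # VIN must be unique across the Hypervisor
        return self.initiator.create_instance(vin)
```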
[00035] Following the instantiation of the Virtual iSCSI Adapter 109 at block
201 of Figures 2 and 3, the iSCSI initiator 115 may operate on behalf of the Virtual
iSCSI Adapter 109. When the iSCSI initiator 115 running on the Hypervisor 107 does
an iSCSI login at block 203 of Figures 2 and 3 to an iSCSI Target 305 on behalf of a
Virtual iSCSI Adapter 109, the iSCSI initiator 115 may use the VIN as the Initiator
Name in the iSCSI Login Protocol Data Unit (PDU). After the iSCSI target 305
accepts the login request and the logical relationship between an Initiator and a Target
(the I_T nexus) gets established, the iSCSI initiator 115 running in the Hypervisor 107
will receive an acceptance response from the iSCSI Target at block 205. A storage
stack running on the Hypervisor 107 may then query the iSCSI target 305 for a list of
available Logical Units at block 207. At block 209, the iSCSI Initiator may receive a report
back from the iSCSI Target 305 with the list of Logical Units per the Access Control
List. The Hypervisor may map the list of available Logical Units 307 to
one or more iSCSI Virtual Adapters at block 211. A Logical Unit 307 may therefore be
directly available to the Guest Partition 101.
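The flow of blocks 201 through 211 can be summarized in the following sketch; the initiator, guest and session objects are hypothetical stand-ins for the entities of Figures 1 and 3, and the helper names (assign_vin, login, report_luns, map_logical_units) are illustrative rather than defined APIs.
```python
def iinv_attach(initiator, target, guest, vin):
    """Walk the IINV method of Figure 2 for one Virtual iSCSI Adapter."""
    initiator.assign_vin(vin)                              # block 201: VIN assigned to the adapter
    session = initiator.login(target, initiator_name=vin)  # block 203: Login PDU carries the VIN
    if not session.accepted:                               # block 205: acceptance response
        raise RuntimeError("iSCSI login rejected by the target")
    luns = initiator.report_luns(session)                  # blocks 207/209: LUNs per the target ACL
    guest.virtual_adapter.map_logical_units(luns)          # block 211: LUs mapped to the guest
    return luns
```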
[00036] Figure 4 is a block diagram of an iSCSI storage area network (SAN)
configuration according to one or more example embodiments of the present disclosure.
The example iSCSI SAN configuration comprises different Logical Units mapped to
their respective Guest Partitions depending on the Access Control List (ACL) set up on
the iSCSI target 305.
[00037] For the example in Figure 4, Target Name_0 (403) is coupled by the
ACL to one or more Logical Units comprising physical disks 421 and 423. The Guest
Partition_0 (101) is able to access the physical disks 421 and 423 via virtual disks 409
and 411. Likewise, Guest Partition_1 (103) is able to access the physical disks 425,
427 and 429 via virtual disks 413, 415 and 417, and Guest Partition_2 (105) is able to
access the physical disk 431 via virtual disk 419.
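The ACL-driven mapping of Figure 4 can be pictured as a simple lookup; the target names, disk identifiers and data structure below are illustrative values chosen to echo the reference numerals in the figure.
```python
# Illustrative target-side ACL: each target name, logged into with a distinct VIN,
# exposes only its own Logical Units, which the Hypervisor then surfaces to the
# corresponding Guest Partition as virtual disks.

TARGET_ACL = {
    "TargetName_0": ["disk_421", "disk_423"],                # seen by Guest Partition_0
    "TargetName_1": ["disk_425", "disk_427", "disk_429"],    # seen by Guest Partition_1
    "TargetName_2": ["disk_431"],                            # seen by Guest Partition_2
}

def luns_for(target_name: str) -> list:
    """A guest only sees the Logical Units its target-side ACL allows."""
    return TARGET_ACL.get(target_name, [])
```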
[00038] Since the iSCSI initiator running on the Hypervisor logs into the target
using different VINs, each Guest Partition can only access the Logical Units mapped to
it via the target side ACL. This achieves traffic isolation at the Logical Unit level.
[00039] Figure 4 also illustrates how traffic isolation may be achieved at a
network layer by means of VLAN IDs as defined in the IEEE 802.1Q standard
according to one or more example embodiments of the present disclosure. The iSCSI
Initiator running on the Hypervisor ensures that iSCSI storage traffic originating from a
different Virtual IQN Name (VIN) is tagged with a different VLAN ID. Therefore,
the traffic from different Guest Partitions is completely isolated from one another, as
illustrated, for example, by the grey paths.
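A minimal sketch of this per-VIN VLAN tagging follows; the VIN strings and VLAN IDs are arbitrary example values, and the frame manipulation shown is the standard IEEE 802.1Q tag insertion after the destination and source MAC addresses.
```python
import struct

# Example assignment of one distinct VLAN ID per VIN (values are illustrative).
VIN_TO_VLAN = {
    "iqn.2013-09.com.example.host:initiator0.vport0": 101,
    "iqn.2013-09.com.example.host:initiator0.vport1": 102,
    "iqn.2013-09.com.example.host:initiator0.vport2": 103,
}

def tag_frame(frame: bytes, vlan_id: int, priority: int = 0) -> bytes:
    """Insert an 802.1Q tag (TPID 0x8100) after the 12-byte destination/source MACs."""
    tci = (priority << 13) | (vlan_id & 0x0FFF)   # PCP (3 bits) + DEI (0) + VID (12 bits)
    tag = struct.pack("!HH", 0x8100, tci)
    return frame[:12] + tag + frame[12:]
```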
[00040] Typically in a Converged Network Adapter (iSCSI/FCoE) based SAN,
bandwidth provisioning is implemented at a network link level. However, in a
virtualized environment, there is no direct visibility into how much bandwidth or how many IOPs
are available to an individual Guest Partition. This disclosure proposes a method for
applying QoS provisioning directly to a Guest Partition by assigning bandwidth and
IOPs parameters directly to Guest Partitions via a Virtual Adapter.
[00041] Figure 5 shows a SAN Configuration with an Initiator Layer/Physical
Adapter 115 running on a Hypervisor 107. The Hypervisor exports a Virtual Adapter
109, 111 and 113 for every Guest Partition 101, 103 and 105 respectively. The Virtual
Adapters 109, 111 and 113 are a logical abstraction of an Initiator and can be
implemented by one or more of the following ways: 1) SR-IOV/MR-IOV PCI Express
endpoint that can act as an Initiator; 2) a virtualized iSCSI Adapter exported via the
Initiator IQN Name Virtualization (IINV) method as described with respect to Figure
1; and 3) a virtualized FCoE Adapter exported via the NPIV method as defined in the
T11 Fibre Channel – Link Specification (FC-LS) standard.
[00042] Referring to Figure 5 and Figure 1, the Hypervisor 107 is an entity that
may implement a Virtual Machine Manager. A Hypervisor 107 may also host the
Initiator Layer and/or Physical Adapter 115. The Hypervisor 107 may be operable to
create Virtual Adapters 109, 111 and 113 on behalf of a Guest Partition and attach the
Virtual Adapter to the Guest Partition 101, 103 and 105 respectively. If the Guest
Partition is not capable of running the Initiator service directly, the Hypervisor may
determine the Logical Units attached behind a Target device and assign them to the Guest
Partition. The Hypervisor 107 may also establish a communication path between the
Virtual Adapter 109, 111 and 113 running on the Guest Partition 101, 103 and 105
respectively and the Initiator Layer/Physical Adapter 115 running on the Hypervisor
107 for flow of storage related SCSI commands and management IOCTLs.
[00043] The Initiator Layer/Physical Adapter 115 running on the Hypervisor 107
contains a QoS Manager 503 that implements support for the management of Virtual
15 Adapters 109, 111 and 113. The Virtual Adapters 109, 111 and 113 support setting the
following QoS related parameters per Guest Partition: 1) Minimum IOPs; 2) Maximum
IOPs; 3) Minimum Throughput; and 4) Maximum Throughput. These QoS parameters
can be set up at the time of creating the Virtual Adapter and can be modified at a later
point on demand.
[00044] The Initiator Layer/Physical Adapter 115 supports the following
operations on each Virtual Adapter QoS parameter on demand: 1) store parameters in a
persistent Data Store 505; 2) retrieve parameters; 3) modify parameters; and 4) delete
parameters from the Data Store 505.
[00045] The Data Store 505 that may be used to store this configuration must be
available across power cycles. In addition to enforcing the QoS parameters, the
Initiator Layer/Physical Adapter 115 can also monitor the resource usage on a per-
Virtual Adapter basis and can report the following statistics via a QueryStatistics
operation: 1) Max IOPs; 2) Max Throughput; 3) Average IOPs; 4) Average
Throughput; 5) Bytes Transmitted; and 6) Bytes Received.
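The QoS parameters, Data Store operations and QueryStatistics report described in the last three paragraphs might be represented as follows; the field and function names are illustrative, and the dictionary used here merely stands in for a store that persists across power cycles.
```python
from dataclasses import dataclass, asdict

@dataclass
class AdapterQoS:
    min_iops: int
    max_iops: int
    min_throughput_mbps: int
    max_throughput_mbps: int

@dataclass
class AdapterStatistics:
    max_iops: int
    max_throughput_mbps: int
    average_iops: int
    average_throughput_mbps: int
    bytes_transmitted: int
    bytes_received: int

class DataStore:
    """Stand-in for the persistent Data Store 505 (store/retrieve/modify/delete)."""

    def __init__(self):
        self._qos = {}                                    # keyed by VIN

    def store(self, vin: str, qos: AdapterQoS) -> None:
        self._qos[vin] = qos

    def retrieve(self, vin: str) -> AdapterQoS:
        return self._qos[vin]

    def modify(self, vin: str, **changes) -> None:
        self._qos[vin] = AdapterQoS(**{**asdict(self._qos[vin]), **changes})

    def delete(self, vin: str) -> None:
        self._qos.pop(vin, None)

def query_statistics(raw_counters: dict) -> AdapterStatistics:
    """Package per-Virtual-Adapter counters into the reported statistics."""
    return AdapterStatistics(**raw_counters)
```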
[00046] Hypervisors or QoS Management module 501 can query the Initiator
Layer/Physical Adapter 115 and determine the resource usage on a per-Virtual Adapter
basis.
[00047] QoS Management module 501 may interact with the Hypervisor 107 and
assign IOPs and Bandwidth parameters to a Virtual Adapter. The Hypervisor may
pass these parameters to the Physical Adapter/Initiator Layer 115. The Physical
Adapter/Initiator Layer 115 may accept these QoS parameters and save them in the data
store 505.
[00048] The Physical Adapter/Initiator Layer 115 limits the flow of inbound and
outbound traffic on behalf of the Virtual Adapter as per its QoS settings. Since there
could be multiple traffic flows within a single Virtual Adapter (e.g. multiple TCP/IP
connections), the Physical Adapter/iSCSI Initiator Layer 115 accounts for traffic across
all these connections when applying the QoS parameters, since the Physical
Adapter/Initiator Layer can monitor the resource usage on a per-Virtual Adapter
instance basis.
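One way a physical adapter could enforce a per-Virtual-Adapter maximum throughput across all of that adapter's connections is a single shared token bucket, sketched below; the technique and parameter values are assumptions for illustration, not the specific mechanism claimed.
```python
import time

class SharedRateLimiter:
    """One token bucket per Virtual Adapter, charged by every connection it owns."""

    def __init__(self, max_bytes_per_sec: float):
        self.rate = max_bytes_per_sec
        self.tokens = max_bytes_per_sec
        self.last = time.monotonic()

    def allow(self, nbytes: int) -> bool:
        """Return True if an I/O of nbytes may be sent now under the QoS limit."""
        now = time.monotonic()
        self.tokens = min(self.rate, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if nbytes <= self.tokens:
            self.tokens -= nbytes       # all connections of the adapter draw from one bucket
            return True
        return False                    # caller defers the I/O until tokens accumulate
```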
[00049] By providing visibility into the QoS provisioning for Guest Partitions, it
may be possible to determine the storage resource allocated to a Guest Partition.
Therefore, data center administrators may provision their storage resource directly to
Guest Partitions. In addition, the resource monitoring and chargeback schemes
implemented by various vendors can also be simplified with this visibility into the QoS
provisioning. Data center administrators may design and implement Service-Level
Agreements (SLAs) where QoS is determined based on the data usage (in Bytes as well
as Byte rate) by Guest Partitions.
[00050] The aforementioned traffic isolation provides an end-to-end unified
scheme of applying QoS parameters in an iSCSI SAN. The unified scheme of applying
QoS parameters is achieved by allowing both the iSCSI Initiator (e.g. physical adapter)
and the iSCSI Target to participate in the QoS negotiation. Instead of having separate
QoS applications for an Initiator and Target, this disclosure utilizes the extensions
defined in iSCSI RFC 3720 so that there is a unified method of resource provisioning
30 across both entities. QoS management software can be developed that has the whole
SAN view of QoS settings instead of just the initiator or target side view. Since the
Initiators and Targets participate in negotiating for the QoS parameters, the QoS
Management module 501 and/or the QoS manager 503 can get immediate feedback on
the QoS being provisioned.
[00051] The QoS negotiation may utilize new public extension keys in addition
to the Login/Text Operational Text Keys in an iSCSI Login PDU. These additional
keys are sent during operational parameter negotiation in the iSCSI Login phase. The
iSCSI Initiator communicates its QoS parameters to an iSCSI Target via these
extension keys. A participating iSCSI target looks at these extension keys and applies
the QoS parameters on its side of the network.
[00052] Figure 6 illustrates communication between the iSCSI Target and the
iSCSI Initiator according to one or more example embodiments of the present
disclosure. Changes, if any, are communicated from the iSCSI Target back to the iSCSI
Initiator during the Login exchange. The iSCSI Initiator 115 responds to changes, if any,
by configuring the QoS parameters on its side of the network. QoS Management
module 501 and/or 601 may query for the updated QoS parameters from the iSCSI
Initiator 115 at the end of the Login sequence.
[00053] The iSCSI protocol may be extended to implement end-to-end iSCSI
provisioning by defining extensions to Text and Operational Keys for the iSCSI Login
PDU for QoS. The key format follows the Public Extension Key format as defined in
RFC 3720, X<#>.
[00054] For example, a first extension may be a Boolean key with the registered
string “iSCSIQoS.” When the iSCSI Initiator and iSCSI Target negotiate support for
end-to-end QoS provisioning, the iSCSI Initiator may set the iSCSIQoS key value to
Yes to indicate support for QoS on its side. If the iSCSI Target supports QoS and is
willing to participate in QoS, it must also set iSCSIQoS to Yes in the iSCSI Login
Response. If it is incapable or unwilling to participate in the QoS scheme, it must set
iSCSIQoS to No. QoS is enabled if both iSCSI Initiator and iSCSI Target have set
X#iSCSIQoS=Yes. If the key X#iSCSIQoS=Yes has been set, then the rest of the QoS
related parameters must be present and valid.
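This exchange can be sketched as the construction of ordinary iSCSI login text keys plus the X# public extension key named above; the helper names below are illustrative, and whether any given target recognizes the key is, as described, subject to negotiation.
```python
def build_login_qos_keys(initiator_name: str, enable_qos: bool) -> bytes:
    """Encode login-phase key=value pairs, including the X#iSCSIQoS extension key."""
    pairs = [
        f"InitiatorName={initiator_name}",
        f"X#iSCSIQoS={'Yes' if enable_qos else 'No'}",
    ]
    # iSCSI text keys are NUL-terminated key=value strings in the Login PDU data segment.
    return b"".join(p.encode("ascii") + b"\x00" for p in pairs)

def qos_enabled(initiator_value: str, target_value: str) -> bool:
    """QoS is in effect only if both sides have set X#iSCSIQoS=Yes."""
    return initiator_value == "Yes" and target_value == "Yes"
```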
[00055] If iSCSI Initiator and iSCSI Target negotiate support for specifying the
limits of Input/Output Operations per Second (IOPs), the additional extensions may
comprise two 32-bit values to specify a minimum and maximum IOPs. For example,
the registered string “QoSMinIOPs” may be used to represent the minimum IOPs
available for bi-directional traffic between the initiator and target. Likewise, the
registered string “QoSMaxIOPs” may be used to represent the maximum IOPs
available for bi-directional traffic between the initiator and target. The iSCSI Initiator
and iSCSI Target may negotiate for the number of IOPs supported. This number must
be greater than or equal to the X#QoSMinIOPs parameter and less than or equal to the
X#QoSMaxIOPs parameter.
[00056] If iSCSI Initiator and iSCSI Target negotiate support for throughput in
units of MBps (MegaBytesPerSecond), the additional extensions may comprise two 32-
bit values to specify a minimum and maximum MBps. For example, the registered
string “QoSMinMBps” may be used to represent the minimum MBps available for bi-directional
traffic between the initiator and target. Likewise, the registered string
“QoSMaxMBps” may be used to represent the maximum MBps available for bi-directional
traffic between the initiator and target. The iSCSI Initiator and iSCSI
Target may negotiate for the throughput supported. This number must be greater than
or equal to the X#QoSMinMBps parameter and less than or equal to the
X#QoSMaxMBps parameter.
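The min/max negotiation for both IOPs and MBps reduces to the same range check, sketched below with illustrative values; how an implementation chooses the offered value within the range is not specified here.
```python
def negotiate_within_range(requested_min: int, requested_max: int, offered: int) -> int:
    """Accept an offered IOPs or MBps value only if it lies within [min, max]."""
    if requested_min <= offered <= requested_max:
        return offered
    raise ValueError("offered value falls outside the advertised QoS range")

# Example: the initiator advertises X#QoSMinIOPs=1000 and X#QoSMaxIOPs=5000;
# a target offering 4000 IOPs is accepted, while 6000 would be rejected.
agreed_iops = negotiate_within_range(1000, 5000, 4000)   # -> 4000
```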
[00057] One example embodiment of this disclosure comprises a first system
operable to provide access to an iSCSI target, comprising: a guest partition comprising
a virtual adapter; and a hypervisor operable to manage a quality of service between the
virtual adapter and the iSCSI target. The hypervisor, in the first system, may be
operable to instantiate the virtual adapter in the guest partition. The hypervisor, in the
first system, may be operable to control one or more quality of service related
parameters. The one or more quality of service related parameters may control a
number of operations per second. The one or more quality of service related
parameters may control a throughput, for example. The hypervisor, in the first system,
may be operable to set a minimum quality of service related parameter and a maximum
quality of service related parameter. The hypervisor, in the first system, may be
operable to manage a quality of service for a plurality of virtual adapters. The
30 hypervisor, in the first system, may be operable to update one or more quality of service
parameters after creating the virtual adapter. The hypervisor, in the first system, may
comprise a physical adapter that is operable to monitor the resource usage of the virtual
adapter. The hypervisor, in the first system, may comprise an adapter that is operable
to limit the flow of inbound and outbound traffic on behalf of the virtual adapter.
[00058] Another example embodiment of this disclosure comprises a first
method operable to provide access to an iSCSI target, comprising: creating a virtual
adapter in a guest partition; and managing a quality of service between the virtual
adapter and the iSCSI target. In the first method, a hypervisor may be operable to
create the virtual adapter in the guest partition. The first method may comprise setting
one or more quality of service related parameters. The one or more quality of service
related parameters may control a number of operations per second. The one or more
quality of service related parameters may control a throughput, for example. In the first
method, a hypervisor may be operable to set a minimum quality of service related
parameter and a maximum quality of service related parameter. In the first method, a
hypervisor may be operable to manage a quality of service for a plurality of virtual
adapters. The first method may comprise updating one or more quality of service
parameters after creating the virtual adapter. The first method may comprise
monitoring the resource usage of the virtual adapter. The first method may comprise
limiting the flow of inbound and outbound traffic on behalf of the virtual adapter.
[00059] Another example embodiment of this disclosure comprises a second
system operable to provide access to an iSCSI target, comprising: a guest partition
comprising a virtual adapter; a hypervisor comprising a physical adapter operable to
communicate with the iSCSI target on behalf of the virtual adapter; and a quality of
service manager operable to monitor a quality of service negotiation between the
physical adapter and the iSCSI target. The negotiation, in the second system, may
utilize one or more extension keys in addition to one or more operational text keys in a
login protocol data unit. The hypervisor, in the second system, may be operable to
control one or more quality of service related parameters. The one or more quality of
service related parameters may control a number of operations per second. The one or
more quality of service related parameters may control a throughput, for example. The
hypervisor, in the second system, may be operable to set a minimum quality of service
related parameter and a maximum quality of service related parameter. The hypervisor,
in the second system, may be operable to manage a quality of service for a plurality of
virtual adapters. The hypervisor, in the second system, may be operable to update one
or more quality of service parameters after creating the virtual adapter. The hypervisor,
in the second system, may comprise a physical adapter that is operable to monitor the
resource usage of the virtual adapter. The hypervisor, in the second system, may
comprise an adapter that is operable to limit the flow of inbound and outbound traffic
on behalf of the virtual adapter.
[00060] Another example embodiment of this disclosure comprises a second
method operable to provide access to an iSCSI target, comprising: creating a virtual
adapter in a guest partition, wherein the virtual adapter is operable to communicate with
the iSCSI target via a hypervisor comprising a physical adapter; and monitoring a
quality of service negotiation between the physical adapter and the iSCSI target. In the
second method, the negotiation may utilize one or more extension keys in addition to
one or more operational text keys in a login protocol data unit. The second method
may comprise setting one or more quality of service related parameters. The one or
more quality of service related parameters may control a number of operations per
second. The one or more quality of service related parameters may control a
throughput, for example. In the second method, the hypervisor may be operable to set a
minimum quality of service related parameter and a maximum quality of service related
parameter. In the second method, the hypervisor may be operable to manage a quality
of service for a plurality of virtual adapters. The second method may comprise
updating one or more quality of service parameters after creating the virtual adapter.
The second method comprises monitoring the resource usage of the virtual adapter.
The second method comprises limiting the flow of inbound and outbound traffic on
behalf of the virtual adapter.
[00061] The present disclosure may be embedded in a computer program
product, which comprises all the features enabling the implementation of the example
embodiments described herein, and which when loaded in a computer system is able to
carry out these example embodiments. Computer program in the present context means
any expression, in any language, code or notation, of a set of instructions intended to
cause a system having an information processing capability to perform a particular
function either directly or after either or both of the following: a) conversion to another
language, code or notation; b) reproduction in a different material form.
[00062] While the present disclosure has been described with reference to certain
example embodiments, it will be understood by those skilled in the art that various
changes may be made and equivalents may be substituted without departing from the
scope of the present disclosure. In addition, many modifications may be made to adapt
a particular situation or material to the teachings of the present disclosure without
departing from its scope. Therefore, it is intended that the present disclosure not be
limited to the particular example embodiment disclosed, but that the present disclosure
will include all example embodiments falling within the scope of the appended claims.

WE CLAIM:
1. A system operable to provide access to an iSCSI target, comprising:
a guest partition; and
a hypervisor operable to instantiate a virtual adapter in the guest partition,
wherein the instantiation comprises splitting an iSCSI Qualified Name (IQN) into a
physical IQN and a virtual IQN, the virtual IQN being assigned to the guest partition
via the virtual adapter and the virtual IQN being used to login to the iSCSI target.
2. The system of claim 1, wherein the hypervisor is operable to query the iSCSI target
for a list of logical units.
3. The system of claim 1, wherein the hypervisor is operable to attach one or more
logical units to the virtual adapter.
4. The system of claim 1, wherein a logical unit is mapped to the guest partition
according to an Access Control List (ACL) on the iSCSI target.
5. The system of claim 1, wherein the hypervisor comprises an iSCSI initiator.
6. The system of claim 5, wherein the physical IQN is assigned to the iSCSI initiator.
7. The system of claim 1, wherein one or more logical units are exported to the guest
partition.
8. The system of claim 1, wherein the guest partition and the hypervisor are elements
of a multi-tenant environment comprising a plurality of guest partitions.
9. The system of claim 8, wherein the multi-tenant environment is operable to achieve
traffic isolation at a logical unit level according to an Access Control List (ACL) on the
iSCSI target and achieve traffic isolation at a network layer level by way of using a
plurality of distinct VLAN ID tags for traffic from a plurality of different virtual
adapters.
10. The system of claim 1, wherein the hypervisor comprises a storage stack operable
to query the iSCSI target for a list of available logical units.
11. A method for providing access to an iSCSI target, comprising:
assigning a virtual IQN name (VIN) to a virtual adapter;
logging into the iSCSI target on behalf of the virtual adapter using the VIN;
receiving an acceptance response from the iSCSI target;
querying the iSCSI target for a list of available logical units;
receiving a report from the iSCSI target with the list of available logical units;
and
mapping the list of available logical units to the iSCSI virtual adapter.
12. The method of claim 11, wherein an iSCSI initiator on a hypervisor is operable to
assign the VIN to the virtual adapter.
13. The method of claim 11, wherein an iSCSI initiator on a hypervisor is operable to
login to the iSCSI Target using the VIN as an initiator name.
14. The method of claim 11, wherein an iSCSI initiator on a hypervisor is operable to
receive the acceptance response from the iSCSI target.
15. The method of claim 11, wherein a storage stack running on a hypervisor is
operable to query the iSCSI target for the list of available logical units.
16. The method of claim 11, wherein the list of available logical units is associated
with an access control list.
17. The method of claim 11, wherein a hypervisor is operable to map the list of
available logical units to one or more iSCSI virtual adapters.
18. The method of claim 11, wherein the method comprises instantiating the virtual
adapter in a guest partition.
19. The method of claim 18, wherein the instantiating comprises splitting an iSCSI
Qualified Name (IQN) into a physical IQN and a virtual IQN.
20. The method of claim 19, wherein the method comprises assigning the physical IQN
to an iSCSI initiator on a hypervisor.
Dated this 26th day of September, 2013.

Documents

Application Documents

# Name Date
1 2013-09-26 Form 5.pdf 2013-09-26
2 2013-09-26 Form 3.pdf 2013-09-26
3 2013-09-26 Drawings.pdf 2013-09-26
4 2013-09-26 Complete Specification.pdf 2013-09-26
5 2846-del-2013-Assignment-(15-10-2013).pdf 2013-10-15
6 2846-del-2013-GPA-(15-10-2013).pdf 2013-10-15
7 2846-del-2013-Correspondence Others-(15-10-2013).pdf 2013-10-15
8 2846-DEL-2013-Request For Certified Copy-Online(17-07-2014).pdf 2014-07-17
9 2014-07-17-Request for certified copy.pdf 2014-07-17
10 2846-DEL-2013-Request For Certified Copy-Online(10-11-2014).pdf 2014-11-10
11 2014-11-10-Request for certified copy.pdf 2014-11-10