HDS iSCSI volume driver This Cinder volume driver provides iSCSI support for HUS (Hitachi Unified Storage) arrays such as HUS-110, HUS-130, and HUS-150.
System requirements Use the HDS hus-cmd command to communicate with an HUS array. You can download this utility package from the HDS support site (https://HDSSupport.hds.com). Platform: Ubuntu 12.04 LTS or newer.
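As a quick sanity check after installing the utility package, you can confirm that the hus-cmd binary is reachable from the host that runs cinder-volume (this assumes the package places hus-cmd on the standard path; adjust for your actual install location):
which hus-cmd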
Supported Cinder operations These operations are supported:
- Create volume
- Delete volume
- Attach volume
- Detach volume
- Clone volume
- Extend volume
- Create snapshot
- Delete snapshot
- Copy image to volume
- Copy volume to image
- Create volume from snapshot
- get_volume_stats
Thin provisioning, also known as Hitachi Dynamic Pool (HDP), is supported for volume and snapshot creation. Cinder volumes and snapshots do not have to reside in the same pool.
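As an illustration of a few of these operations, the standard cinder CLI can be used once the driver is configured; the volume names, sizes, and the snapshot UUID placeholder below are only examples:
# Create a 10 GB volume, snapshot it, and build a new volume from the snapshot
cinder create --display-name vol01 10
cinder snapshot-create --display-name snap01 vol01
cinder create --snapshot-id <snapshot-uuid> --display-name vol02 10
# Grow vol01 to 20 GB (extend volume)
cinder extend vol01 20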
Configuration The HDS driver supports the concept of differentiated services, where a volume type can be associated with the fine-tuned performance characteristics of an HDP, the dynamic pool where volumes are created. (Do not confuse differentiated services with the Cinder volume service.) For instance, one HDP can consist of fast SSDs to provide speed, while another HDP can provide a certain level of reliability based on characteristics such as its RAID level. The HDS driver maps the volume type to the volume_type tag in its configuration file.
Configuration is read from an XML-format file. Examples are shown below for the single back-end and multi back-end cases.
It is not recommended to manage an HUS array simultaneously from multiple Cinder instances or servers. It is okay to manage multiple HUS arrays by using multiple Cinder instances (or servers).
Single back-end In a single back-end deployment, only one Cinder instance runs on the Cinder server and controls one HUS array. This setup requires these configuration files:
Set the volume_driver and hds_cinder_config_file options in the /etc/cinder/cinder.conf file to use the HDS volume driver. hds_cinder_config_file points to a configuration file; the configuration file location is not fixed.
volume_driver = cinder.volume.drivers.hds.hds.HUSDriver
hds_cinder_config_file = /opt/hds/hus/cinder_hds_conf.xml
Configure the HDS volume driver at the location specified previously. For example, /opt/hds/hus/cinder_hds_conf.xml:
<?xml version="1.0" encoding="UTF-8" ?>
<config>
  <mgmt_ip0>172.17.44.16</mgmt_ip0>
  <mgmt_ip1>172.17.44.17</mgmt_ip1>
  <username>system</username>
  <password>manager</password>
  <svc_0>
    <volume_type>default</volume_type>
    <iscsi_ip>172.17.39.132</iscsi_ip>
    <hdp>9</hdp>
  </svc_0>
  <snapshot>
    <hdp>13</hdp>
  </snapshot>
  <lun_start>3000</lun_start>
  <lun_end>4000</lun_end>
</config>
Multi back-end In a multi back-end deployment, more than one Cinder instance runs on the same server. In this example, two HUS arrays are used, possibly providing different storage performance:
Configure /etc/cinder/cinder.conf: the hus1 and hus2 configuration blocks are created. Set the hds_cinder_config_file option to point to a unique configuration file for each block.
Set the volume_driver option for each back-end to cinder.volume.drivers.hds.hds.HUSDriver:
enabled_backends=hus1,hus2
[hus1]
volume_driver = cinder.volume.drivers.hds.hds.HUSDriver
hds_cinder_config_file = /opt/hds/hus/cinder_hus1_conf.xml
volume_backend_name=hus-1
[hus2]
volume_driver = cinder.volume.drivers.hds.hds.HUSDriver
hds_cinder_config_file = /opt/hds/hus/cinder_hus2_conf.xml
volume_backend_name=hus-2
Configure /opt/hds/hus/cinder_hus1_conf.xml:
<?xml version="1.0" encoding="UTF-8" ?>
<config>
  <mgmt_ip0>172.17.44.16</mgmt_ip0>
  <mgmt_ip1>172.17.44.17</mgmt_ip1>
  <username>system</username>
  <password>manager</password>
  <svc_0>
    <volume_type>regular</volume_type>
    <iscsi_ip>172.17.39.132</iscsi_ip>
    <hdp>9</hdp>
  </svc_0>
  <snapshot>
    <hdp>13</hdp>
  </snapshot>
  <lun_start>3000</lun_start>
  <lun_end>4000</lun_end>
</config>
Configure the /opt/hds/hus/cinder_hus2_conf.xml file:
<?xml version="1.0" encoding="UTF-8" ?>
<config>
  <mgmt_ip0>172.17.44.20</mgmt_ip0>
  <mgmt_ip1>172.17.44.21</mgmt_ip1>
  <username>system</username>
  <password>manager</password>
  <svc_0>
    <volume_type>platinum</volume_type>
    <iscsi_ip>172.17.30.130</iscsi_ip>
    <hdp>2</hdp>
  </svc_0>
  <snapshot>
    <hdp>3</hdp>
  </snapshot>
  <lun_start>2000</lun_start>
  <lun_end>3000</lun_end>
</config>
Type extra specs: volume_backend and volume type If you use volume types, you must configure them in the configuration file and set the volume_backend_name option to the appropriate back-end. In the previous multi back-end example, the platinum volume type is served by hus-2, and the regular volume type is served by hus-1:
cinder type-key regular set volume_backend_name=hus-1
cinder type-key platinum set volume_backend_name=hus-2
Non-differentiated deployment of HUS arrays You can deploy multiple Cinder instances that each control a separate HUS array, with no volume type associated with any of them. The Cinder filtering algorithm selects the HUS array with the largest available free space. In each configuration file, you must define the default volume_type in the service labels.
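For completeness, the regular and platinum volume types must exist before the type-key commands above can be run. A minimal sequence, assuming the standard cinder CLI and illustrative volume names, might look like this:
cinder type-create regular
cinder type-create platinum
cinder type-key regular set volume_backend_name=hus-1
cinder type-key platinum set volume_backend_name=hus-2
# Create a 10 GB volume of each type; the scheduler routes each request to the matching back-end
cinder create --volume-type regular --display-name vol-regular 10
cinder create --volume-type platinum --display-name vol-platinum 10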
HDS iSCSI volume driver configuration options These details apply to the XML-format configuration file that is read by the HDS volume driver. These differentiated service labels are predefined: svc_0, svc_1, svc_2, and svc_3. (There is no relative precedence or weight among these four labels.) Each service label is associated with these parameters and tags:
volume_type: a create_volume call with a certain volume type is matched up with this tag. default is special in that any service associated with this type is used to create a volume when no other labels match. Other labels are case sensitive and must match exactly. If no configured volume_type matches the incoming requested type, an error occurs in volume creation.
hdp: the pool ID associated with the service.
iscsi_ip: an iSCSI port dedicated to the service.
Typically a Cinder volume instance has only one such service label (for example, any of svc_0, svc_1, svc_2, or svc_3 can be associated with it), but any mix of these service labels can be used in the same instance. (get_volume_stats() always provides the available capacity based on the combined sum of all the HDPs that are used in these service labels.)
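As an illustration of how several service labels can coexist in one configuration file (the gold volume type, pool IDs, and IP addresses below are only examples, not values from this guide), a file with two labels might look like this:
<?xml version="1.0" encoding="UTF-8" ?>
<config>
  <mgmt_ip0>172.17.44.16</mgmt_ip0>
  <mgmt_ip1>172.17.44.17</mgmt_ip1>
  <username>system</username>
  <password>manager</password>
  <!-- svc_0 catches requests whose volume type matches no other label -->
  <svc_0>
    <volume_type>default</volume_type>
    <iscsi_ip>172.17.39.132</iscsi_ip>
    <hdp>9</hdp>
  </svc_0>
  <!-- svc_1 serves only volumes created with the gold volume type -->
  <svc_1>
    <volume_type>gold</volume_type>
    <iscsi_ip>172.17.39.133</iscsi_ip>
    <hdp>10</hdp>
  </svc_1>
  <snapshot>
    <hdp>13</hdp>
  </snapshot>
</config>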
Configuration options
Option | Type | Default | Description
mgmt_ip0 | Required | | Management Port 0 IP address.
mgmt_ip1 | Required | | Management Port 1 IP address.
username | Optional | | Username is required only if secure mode is used.
password | Optional | | Password is required only if secure mode is used.
svc_0, svc_1, svc_2, svc_3 | Optional (at least one label has to be defined) | | Service labels: these four predefined names define four different sets of configuration options; each can specify an iSCSI port address, an HDP, and a unique volume type.
snapshot | Required | | A service label which specifies the configuration for snapshots, such as the HDP.
volume_type | Required | | volume_type tag is used to match the volume type. default matches any volume type, or a request in which no volume type is specified. Any other volume_type is selected only if it matches exactly during create_volume.
iscsi_ip | Required | | iSCSI port IP address where the volume attaches for this volume type.
hdp | Required | | HDP, the pool number where the volume or snapshot should be created.
lun_start | Optional | 0 | LUN allocation starts at this number.
lun_end | Optional | 4096 | LUN allocation is up to, but not including, this number.
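Putting the table together, a minimal configuration file could rely on the lun_start and lun_end defaults (0 and 4096) and, if secure mode is not used, omit username and password as well. The addresses and pool numbers below are illustrative only:
<?xml version="1.0" encoding="UTF-8" ?>
<config>
  <mgmt_ip0>172.17.44.16</mgmt_ip0>
  <mgmt_ip1>172.17.44.17</mgmt_ip1>
  <svc_0>
    <volume_type>default</volume_type>
    <iscsi_ip>172.17.39.132</iscsi_ip>
    <hdp>9</hdp>
  </svc_0>
  <snapshot>
    <hdp>13</hdp>
  </snapshot>
</config>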