HDS iSCSI Volume Driver

This Cinder volume driver provides iSCSI support for HUS (Hitachi Unified Storage) arrays such as the HUS-110, HUS-130, and HUS-150.
System Requirements

The HDS utility hus-cmd is required to communicate with a HUS array. This utility package can be downloaded from the HDS support website.

Platform: Ubuntu 12.04 LTS or higher.
Supported Cinder Operations

The following operations are supported:

- Create volume
- Delete volume
- Attach volume
- Detach volume
- Clone volume
- Extend volume
- Create snapshot
- Delete snapshot
- Copy image to volume
- Copy volume to image
- Create volume from snapshot
- get_volume_stats

Thin provisioning, also known as HDP (Hitachi Dynamic Pool), is supported for volume and snapshot creation. Cinder volumes and snapshots do not have to reside in the same pool.
Configuration

The HDS driver supports the concept of differentiated services (not to be confused with the Cinder volume service), where a volume type can be associated with the fine-tuned performance characteristics of an HDP, the dynamic pool where volumes are created. For instance, one HDP can consist of fast SSDs to provide speed, while another HDP can provide a certain reliability based on, for example, its RAID level characteristics. The HDS driver maps each volume type to the volume_type tag in its configuration file, as shown below.

Configuration is read from an XML-format file. Samples are shown below for both the single-backend and multi-backend cases.

The HUS configuration file is read at the start of the cinder-volume service. Any configuration changes made after that point require a service restart.

It is not recommended to manage a single HUS array simultaneously from multiple Cinder instances or servers. It is fine, however, to manage multiple HUS arrays using multiple Cinder instances (or servers).

Single Backend

In a single-backend deployment, only one Cinder instance runs on the Cinder server and controls just one HUS array. This setup involves two configuration files:

Set /etc/cinder/cinder.conf to use the HDS volume driver. The hds_cinder_config_file option points to a configuration file; its location is not fixed.

volume_driver = cinder.volume.drivers.hds.hds.HUSDriver
hds_cinder_config_file = /opt/hds/hus/cinder_hds_conf.xml

Create the file specified by hds_cinder_config_file (in this example, /opt/hds/hus/cinder_hds_conf.xml):
<?xml version="1.0" encoding="UTF-8" ?>
<config>
  <mgmt_ip0>172.17.44.16</mgmt_ip0>
  <mgmt_ip1>172.17.44.17</mgmt_ip1>
  <username>system</username>
  <password>manager</password>
  <svc_0>
    <volume_type>default</volume_type>
    <iscsi_ip>172.17.39.132</iscsi_ip>
    <hdp>9</hdp>
  </svc_0>
  <snapshot>
    <hdp>13</hdp>
  </snapshot>
  <lun_start>3000</lun_start>
  <lun_end>4000</lun_end>
</config>

Multi Backend

In a multi-backend deployment, more than one Cinder instance runs on the same server. In the example below, two HUS arrays are used, possibly providing different storage performance.

Configure /etc/cinder/cinder.conf: two config blocks, hus1 and hus2, are created. The hds_cinder_config_file option points to a unique configuration file for each block. Set volume_driver for each backend to cinder.volume.drivers.hds.hds.HUSDriver.

enabled_backends=hus1,hus2

[hus1]
volume_driver = cinder.volume.drivers.hds.hds.HUSDriver
hds_cinder_config_file = /opt/hds/hus/cinder_hus1_conf.xml
volume_backend_name=hus-1

[hus2]
volume_driver = cinder.volume.drivers.hds.hds.HUSDriver
hds_cinder_config_file = /opt/hds/hus/cinder_hus2_conf.xml
volume_backend_name=hus-2

Configure /opt/hds/hus/cinder_hus1_conf.xml:

<?xml version="1.0" encoding="UTF-8" ?>
<config>
  <mgmt_ip0>172.17.44.16</mgmt_ip0>
  <mgmt_ip1>172.17.44.17</mgmt_ip1>
  <username>system</username>
  <password>manager</password>
  <svc_0>
    <volume_type>regular</volume_type>
    <iscsi_ip>172.17.39.132</iscsi_ip>
    <hdp>9</hdp>
  </svc_0>
  <snapshot>
    <hdp>13</hdp>
  </snapshot>
  <lun_start>3000</lun_start>
  <lun_end>4000</lun_end>
</config>

Configure /opt/hds/hus/cinder_hus2_conf.xml:

<?xml version="1.0" encoding="UTF-8" ?>
<config>
  <mgmt_ip0>172.17.44.20</mgmt_ip0>
  <mgmt_ip1>172.17.44.21</mgmt_ip1>
  <username>system</username>
  <password>manager</password>
  <svc_0>
    <volume_type>platinum</volume_type>
    <iscsi_ip>172.17.30.130</iscsi_ip>
    <hdp>2</hdp>
  </svc_0>
  <snapshot>
    <hdp>3</hdp>
  </snapshot>
  <lun_start>2000</lun_start>
  <lun_end>3000</lun_end>
</config>

Type extra specs: volume_backend and volume type

If volume types are used, they should be configured in the configuration file as well. Also set the volume_backend_name attribute to select the appropriate backend. Following the multi-backend example above, the volume type platinum is served by hus-2, and regular is served by hus-1:

cinder type-key regular set volume_backend_name=hus-1
cinder type-key platinum set volume_backend_name=hus-2

Non-differentiated deployment of HUS arrays

Multiple Cinder instances, each controlling a separate HUS array and with no volume type associated with any of them, can also be deployed. In this case, the Cinder filtering algorithm selects the HUS array with the largest available free space. For such a deployment, it is necessary and sufficient to use the default volume_type in the service labels of each configuration file.
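The type-key commands above assume the volume types already exist in Cinder. A minimal provisioning sketch, using the type names from the multi-backend example:

```shell
# Create the volume types referenced in the HUS configuration files
# (assumes admin credentials are already sourced in the environment).
cinder type-create regular
cinder type-create platinum

# Bind each type to its backend via the backend name set in cinder.conf.
cinder type-key regular set volume_backend_name=hus-1
cinder type-key platinum set volume_backend_name=hus-2
```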
HDS iSCSI volume driver configuration options

These details apply to the XML-format configuration file read by the HDS volume driver. Four differentiated service labels are predefined: svc_0, svc_1, svc_2, and svc_3. There is no relative precedence or weight among these four labels. Each service label is associated with the following parameters/tags:

- volume_type: a create_volume call with a given volume type is matched against this tag. The type default is special: the service associated with it is used to create a volume when no other label matches. Other labels are case sensitive and must match exactly. If no configured volume_type matches the requested type, volume creation fails with an error.
- hdp: the pool ID associated with the service.
- iscsi_ip: an iSCSI port dedicated to the service.

Typically a cinder-volume instance has only one such service label (any of svc_0, svc_1, svc_2, svc_3) associated with it, but any mix of these four service labels can be used in the same instance. get_volume_stats() always reports the available capacity as the combined sum of all the HDPs used by these service labels.
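The matching rule described above can be sketched as follows. This is a hypothetical illustration, not the actual driver code; the service labels are simplified to a dict mapping label name to its configured volume_type.

```python
# Sketch of the volume_type matching rule: an exact, case-sensitive match
# wins; a service whose volume_type is "default" is the fallback; if
# nothing matches at all, creation fails. Not the actual driver code.

def select_service(services, requested_type):
    """Pick the service label whose volume_type matches requested_type.

    services: dict mapping a service label (e.g. "svc_0") to the
    volume_type string configured for it.
    """
    default_label = None
    for label, volume_type in services.items():
        if volume_type == requested_type:      # case-sensitive exact match
            return label
        if volume_type == "default":           # remember the fallback service
            default_label = label
    if default_label is not None:              # "default" matches anything else
        return default_label
    raise ValueError("no configured volume_type matches %r" % requested_type)

services = {"svc_0": "default", "svc_1": "platinum"}
print(select_service(services, "platinum"))  # svc_1: exact match
print(select_service(services, "regular"))   # svc_0: falls back to default
```

Note that a request for "Platinum" would fall through to the default service rather than match "platinum", since matching is case sensitive.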
List of configuration options

- mgmt_ip0 (Required): Management port 0 IP address.
- mgmt_ip1 (Required): Management port 1 IP address.
- username (Optional): Username; required only if secure mode is used.
- password (Optional): Password; required only if secure mode is used.
- svc_0, svc_1, svc_2, svc_3 (Optional; at least one label must be defined): Service labels. These four predefined names select four different sets of configuration options; each can specify an iSCSI port address, an HDP, and a unique volume type.
- snapshot (Required): A service label that specifies the configuration used for snapshots, such as the HDP.
- volume_type (Required): Tag used to match the requested volume type. The value default matches any volume type, including requests where no volume type is specified. Any other value is selected only on an exact match during create_volume.
- iscsi_ip (Required): IP address of the iSCSI port where volumes of this volume type attach.
- hdp (Required): HDP, the pool number where the volume or snapshot is created.
- lun_start (Optional; default 0): LUN allocation starts at this number.
- lun_end (Optional; default 4096): LUN allocation goes up to, but does not include, this number.
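The lun_start/lun_end semantics (start inclusive, end exclusive) can be illustrated with a small sketch. The helper below is hypothetical and is not the driver's actual allocator:

```python
# Illustration of the lun_start/lun_end range: candidate LUNs run from
# lun_start up to, but not including, lun_end. Hypothetical helper only.

def next_free_lun(used, lun_start=0, lun_end=4096):
    """Return the lowest unused LUN in the half-open range [lun_start, lun_end)."""
    for lun in range(lun_start, lun_end):
        if lun not in used:
            return lun
    raise RuntimeError("no free LUN in [%d, %d)" % (lun_start, lun_end))

print(next_free_lun(set(), 3000, 4000))         # 3000: range starts at lun_start
print(next_free_lun({3000, 3001}, 3000, 4000))  # 3002: first gap in the range
```

With the single-backend sample above (lun_start 3000, lun_end 4000), LUN 4000 itself would never be allocated, since the upper bound is exclusive.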