EMC SMI-S iSCSI driver

The EMC volume driver, EMCSMISISCSIDriver, is based on the existing ISCSIDriver and supports creating and deleting volumes, attaching and detaching volumes, creating and deleting snapshots, and so on. The driver runs volume operations by communicating with the back-end EMC storage. It uses a CIM client written in Python, PyWBEM, to perform CIM operations over HTTP.

The EMC CIM Object Manager (ECOM) is packaged with the EMC SMI-S provider. It is a CIM server that enables CIM clients to perform CIM operations over HTTP, using SMI-S in the back end for EMC storage operations.

The EMC SMI-S Provider supports the SNIA Storage Management Initiative (SMI), an ANSI standard for storage management. The driver supports VMAX and VNX storage systems.
System requirements

EMC SMI-S Provider V4.5.1 or later is required. You can download SMI-S from the EMC Powerlink web site (login required). See the EMC SMI-S Provider release notes for installation instructions.

Supported EMC storage: the VMAX Family and the VNX Series.
Supported operations

VMAX and VNX arrays support these operations:
- Create volume
- Delete volume
- Attach volume
- Detach volume
- Create snapshot
- Delete snapshot
- Create cloned volume
- Copy image to volume
- Copy volume to image

Only VNX supports this operation:
- Create volume from snapshot

Only thin provisioning is supported.
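As a point of reference, once the driver is configured (see the sections below), exercising a few of these operations with the python-cinderclient CLI looks roughly like this; the names, sizes, and IDs are placeholders:

$ cinder create --display-name vol1 1                       # create a 1 GB volume
$ cinder snapshot-create --display-name snap1 <volume-id>   # snapshot the volume
$ cinder create --snapshot-id <snapshot-id> 1               # VNX only: create a volume from a snapshot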
Task flow

To set up the EMC SMI-S iSCSI driver:
1. Install the python-pywbem package for your distribution. See "Install the python-pywbem package".
2. Download SMI-S from PowerLink and install it. Add your VNX/VMAX arrays to SMI-S. For information, see "Set up SMI-S" and the SMI-S release notes.
3. Register with VNX. See "Register with VNX".
4. Create a masking view on VMAX. See "Create a masking view on VMAX".
Install the python-pywbem package

Install the python-pywbem package for your distribution, as follows:

On Ubuntu:
$ sudo apt-get install python-pywbem

On openSUSE:
$ sudo zypper install python-pywbem

On Fedora:
$ sudo yum install pywbem
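As an optional sanity check (not part of the official procedure), you can confirm that the module is importable:

$ python -c "import pywbem"    # no output means the module imported cleanly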
Set up SMI-S

You can install SMI-S on a non-OpenStack host. Supported platforms include several flavors of Windows, Red Hat, and SUSE Linux. The host can be either a physical server or a VM hosted by an ESX server. See the EMC SMI-S Provider release notes for supported platforms and installation instructions.

You must discover storage arrays on the SMI-S server before you can use the Cinder driver. Follow the instructions in the SMI-S release notes.

SMI-S is usually installed at /opt/emc/ECIM/ECOM/bin on Linux and C:\Program Files\EMC\ECIM\ECOM\bin on Windows. After you install and configure SMI-S, go to that directory and run the TestSmiProvider program (TestSmiProvider.exe on Windows). Use the addsys command in TestSmiProvider to add an array, then use dv and examine the output to confirm the array was added. Make sure that the arrays are recognized by the SMI-S server before using the EMC Cinder driver.
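On a Linux SMI-S host, the check is along these lines; the interactive prompts vary by provider version, so treat this as a sketch:

$ cd /opt/emc/ECIM/ECOM/bin
$ ./TestSmiProvider
# at the interactive prompt, run:
#   addsys  - registers a VNX or VMAX array (prompts for the array address and credentials)
#   dv      - displays provider and array information; confirm your array is listed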
Register with VNX

To export a VNX volume to a compute node, you must register the node with VNX.

Register the node:

1. On the compute node 1.1.1.1, run the following commands (assume 10.10.61.35 is the iSCSI target):
$ sudo /etc/init.d/open-iscsi start
$ sudo iscsiadm -m discovery -t st -p 10.10.61.35
$ cd /etc/iscsi
$ sudo more initiatorname.iscsi
$ iscsiadm -m node

2. Log in to VNX from the compute node using the target corresponding to the SPA port:
$ sudo iscsiadm -m node -T iqn.1992-04.com.emc:cx.apm01234567890.a0 -p 10.10.61.35 -l
Where iqn.1992-04.com.emc:cx.apm01234567890.a0 is the target name for the SPA port. The initiator name of the compute node is the one shown in /etc/iscsi/initiatorname.iscsi.

3. Log in to Unisphere, go to VNX00000 > Hosts > Initiators, click Refresh, and wait until the initiator of the compute node appears with SP Port A-8v0.

4. Click Register, select CLARiiON/VNX, and enter the host name myhost1 and the IP address of the compute node (1.1.1.1). Click Register. Host 1.1.1.1 now also appears under Hosts > Host List.

5. Log out of VNX on the compute node:
$ sudo iscsiadm -m node -u

6. Log in to VNX from the compute node using the target corresponding to the SPB port:
$ sudo iscsiadm -m node -T iqn.1992-04.com.emc:cx.apm01234567890.b8 -p 10.10.10.11 -l

7. In Unisphere, register the initiator with the SPB port.

8. Log out:
$ sudo iscsiadm -m node -u
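The name you register in Unisphere is the InitiatorName value from the compute node; the IQN below is purely illustrative:

$ sudo more /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.1993-08.org.debian:01:abcdef123456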
Create a masking view on VMAX

For VMAX, you must set up the Unisphere for VMAX server. On the Unisphere for VMAX server, create an initiator group, a storage group, and a port group, and put them in a masking view. The initiator group contains the initiator names of the OpenStack hosts. The storage group must have at least six gatekeepers.
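If you script this with Solutions Enabler instead of the Unisphere GUI, the setup corresponds roughly to the symaccess sketch below. The SID, group names, device range, and director port are placeholders, and the exact syntax varies by Solutions Enabler release, so verify against its documentation before running anything:

$ symaccess -sid 1234 create -name os_ig -type initiator                 # initiator group; add the OpenStack hosts' IQNs to it
$ symaccess -sid 1234 create -name os_sg -type storage devs 0123:0128    # storage group including the gatekeeper devices
$ symaccess -sid 1234 create -name os_pg -type port -dirport 1E:0        # port group
$ symaccess -sid 1234 create view -name os_mv -sg os_sg -pg os_pg -ig os_ig   # masking view tying them together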
cinder.conf configuration file

Make the following changes in /etc/cinder/cinder.conf.

For VMAX, add the following entries, where 10.10.61.45 is the IP address of the VMAX iSCSI target:

iscsi_target_prefix = iqn.1992-04.com.emc
iscsi_ip_address = 10.10.61.45
volume_driver = cinder.volume.drivers.emc.emc_smis_iscsi.EMCSMISISCSIDriver
cinder_emc_config_file = /etc/cinder/cinder_emc_config.xml

For VNX, add the following entries, where 10.10.61.35 is the IP address of the VNX iSCSI target:

iscsi_target_prefix = iqn.2001-07.com.vnx
iscsi_ip_address = 10.10.61.35
volume_driver = cinder.volume.drivers.emc.emc_smis_iscsi.EMCSMISISCSIDriver
cinder_emc_config_file = /etc/cinder/cinder_emc_config.xml

Restart the cinder-volume service.
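The restart command depends on your init system; on an Ubuntu installation of this era, for example, it would typically be:

$ sudo service cinder-volume restart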
cinder_emc_config.xml configuration file

Create the /etc/cinder/cinder_emc_config.xml file. You do not need to restart the service for this change. For both VMAX and VNX, add the elements described below to the XML file; a sketch of the file follows this section.

Where:
- StorageType is the thin pool from which the user wants to create the volume. Only thin LUNs are supported by the plug-in. Thin pools can be created using Unisphere for VMAX and VNX.
- EcomServerIp and EcomServerPort are the IP address and port number of the ECOM server, which is packaged with SMI-S.
- EcomUserName and EcomPassword are credentials for the ECOM server.

To attach VMAX volumes to an OpenStack VM, you must create a masking view by using Unisphere for VMAX. The masking view must have an initiator group that contains the initiator of the OpenStack compute node that hosts the VM.
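A minimal sketch of the file, reconstructed from the element descriptions above; the single <EMC> root element is assumed from the driver's sample file, and every value is a placeholder to replace with your pool name and ECOM details:

<?xml version="1.0" encoding="UTF-8"?>
<EMC>
  <!-- thin pool to allocate volumes from -->
  <StorageType>ThinPoolName</StorageType>
  <!-- ECOM server address, port, and credentials -->
  <EcomServerIp>x.x.x.x</EcomServerIp>
  <EcomServerPort>xxxx</EcomServerPort>
  <EcomUserName>admin</EcomUserName>
  <EcomPassword>password</EcomPassword>
</EMC>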