EMC SMI-S iSCSI driver

The EMC volume driver, EMCSMISISCSIDriver, is based on the existing ISCSIDriver, with the ability to create and delete volumes, attach and detach volumes, create and delete snapshots, and so on.

The driver runs volume operations by communicating with the back-end EMC storage. It uses a CIM client in Python called PyWBEM to perform CIM operations over HTTP.
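The following minimal sketch shows how a PyWBEM client performs CIM operations over HTTP against an ECOM/SMI-S server. It is illustrative only, not code from the driver; the URL, credentials, namespace (root/emc), and CIM class name are placeholder assumptions:

import pywbem

# Connect to the ECOM server (EcomServerIp:EcomServerPort) with its credentials.
conn = pywbem.WBEMConnection('http://x.x.x.x:5988',
                             ('admin', 'password'),
                             default_namespace='root/emc')

# List the storage systems that the SMI-S provider knows about.
for path in conn.EnumerateInstanceNames('EMC_StorageSystem'):
    print(path)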
The EMC CIM Object Manager (ECOM) is packaged with the EMC SMI-S provider. It is a CIM server that enables CIM clients to perform CIM operations over HTTP by using SMI-S in the back-end for EMC storage operations.

The EMC SMI-S Provider supports the SNIA Storage Management Initiative (SMI), an ANSI standard for storage management. It supports VMAX and VNX storage systems.

System requirements

EMC SMI-S Provider V4.5.1 and higher is required. You can download SMI-S from the EMC Powerlink web site (login is required). See the EMC SMI-S Provider release notes for installation instructions.

EMC storage VMAX Family and VNX Series are
supported.

Supported operations

VMAX and VNX arrays support these operations:

- Create volume
- Delete volume
- Attach volume
- Detach volume
- Create snapshot
- Delete snapshot
- Create cloned volume
- Copy image to volume
- Copy volume to image

Only VNX supports these operations:

- Create volume from snapshot

Only thin provisioning is supported.

Task flow

To set up the EMC SMI-S iSCSI driver:

1. Install the python-pywbem package for your distribution. See the Install the python-pywbem package section below.
2. Download SMI-S from PowerLink and install it. Add your VNX/VMAX arrays to SMI-S. For more information, see the Set up SMI-S section below and the SMI-S release notes.
3. Register with VNX. See the Register with VNX section below.
4. Create a masking view on VMAX. See the Create a masking view on VMAX section below.

Install the python-pywbem package

Install the python-pywbem package for your distribution, as follows:

On Ubuntu:

$ sudo apt-get install python-pywbem

On openSUSE:

$ zypper install python-pywbem

On Fedora:

$ yum install pywbem

Set up SMI-S

You can install SMI-S on a non-OpenStack host.
Supported platforms include different flavors of
Windows, Red Hat, and SUSE Linux. The host can be
either a physical server or a VM hosted by an ESX
server. See the EMC SMI-S Provider release notes for
supported platforms and installation
instructions.

You must discover storage arrays on the SMI-S
server before you can use the Cinder driver.
Follow the instructions in the SMI-S release notes.

SMI-S is usually installed at
/opt/emc/ECIM/ECOM/bin on
Linux and C:\Program
Files\EMC\ECIM\ECOM\bin on Windows.
After you install and configure SMI-S, go to that
directory and type
TestSmiProvider.exe.

Use the addsys command in TestSmiProvider.exe to add an array. Use the dv command and examine the
output after the array is added. Make sure that the
arrays are recognized by the SMI-S server before using
the EMC Cinder driver.

Register with VNX

To export a VNX volume to a compute node, you must register the node with VNX.

Register the node

1. On the compute node 1.1.1.1, do the following (assume 10.10.61.35 is the iSCSI target):

$ sudo /etc/init.d/open-iscsi start
$ sudo iscsiadm -m discovery -t st -p 10.10.61.35
$ cd /etc/iscsi
$ sudo more initiatorname.iscsi
$ iscsiadm -m node

2. Log in to VNX from the compute node using the target
corresponding to the SPA port:

$ sudo iscsiadm -m node -T iqn.1992-04.com.emc:cx.apm01234567890.a0 -p 10.10.61.35 -l

Here, iqn.1992-04.com.emc:cx.apm01234567890.a0 is the target for the SPA port.

3. Log in to Unisphere, go to VNX00000->Hosts->Initiators, click Refresh, and wait until the initiator name of the compute node (shown in /etc/iscsi/initiatorname.iscsi) appears with SP Port A-8v0.

4. Click the Register button,
select CLARiiON/VNX, and enter the host name myhost1 and the IP address of the compute node (1.1.1.1 in this example). Click Register. Now host 1.1.1.1 also appears under Hosts->Host List.

5. Log out of VNX on the compute node:

$ sudo iscsiadm -m node -u

6. Log in to VNX from the compute node using the target corresponding to the SPB port:

$ sudo iscsiadm -m node -T iqn.1992-04.com.emc:cx.apm01234567890.b8 -p 10.10.10.11 -l

7. In Unisphere, register the initiator with the SPB port.

8. Log out:

$ sudo iscsiadm -m node -u

Create a masking view on VMAX

For VMAX, you must set up the Unisphere for VMAX
server. On the Unisphere for VMAX server, create an initiator group, a storage group, and a port group, and put them in a masking view. The initiator group contains the initiator names of the OpenStack hosts. The storage group must have at least six gatekeepers.

cinder.conf configuration file

Make the following changes in /etc/cinder/cinder.conf.

For VMAX, add the following entries, where
10.10.61.45 is the IP address
of the VMAX iSCSI target:

iscsi_target_prefix = iqn.1992-04.com.emc
iscsi_ip_address = 10.10.61.45
volume_driver = cinder.volume.drivers.emc.emc_smis_iscsi.EMCSMISISCSIDriver
cinder_emc_config_file = /etc/cinder/cinder_emc_config.xml

For VNX, add the following entries, where
10.10.61.35 is the IP address
of the VNX iSCSI target:

iscsi_target_prefix = iqn.2001-07.com.vnx
iscsi_ip_address = 10.10.61.35
volume_driver = cinder.volume.drivers.emc.emc_smis_iscsi.EMCSMISISCSIDriver
cinder_emc_config_file = /etc/cinder/cinder_emc_config.xml

Restart the cinder-volume service.

cinder_emc_config.xml configuration file

Create the /etc/cinder/cinder_emc_config.xml file. You do not need to restart the service for this change.

For both VMAX and VNX, the file contains the StorageType, EcomServerIp, EcomServerPort, EcomUserName, and EcomPassword elements, where:

- StorageType is the thin pool from which the user
wants to create the volume. Only thin LUNs are supported by the plug-in.
Thin pools can be created using Unisphere for VMAX and VNX.

- EcomServerIp and
EcomServerPort are the IP address and port
number of the ECOM server which is packaged with SMI-S.

- EcomUserName and
EcomPassword are credentials for the ECOM
server.
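A rough sketch of what such a file can look like follows. The <EMC> root element and all values here are placeholders and assumptions rather than values taken from this guide (5988 is the conventional CIM-XML HTTP port):

<?xml version="1.0" encoding="UTF-8"?>
<EMC>
  <StorageType>thin_pool_name</StorageType>
  <EcomServerIp>x.x.x.x</EcomServerIp>
  <EcomServerPort>5988</EcomServerPort>
  <EcomUserName>admin</EcomUserName>
  <EcomPassword>password</EcomPassword>
</EMC>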
To attach VMAX volumes to an OpenStack VM, you must
create a Masking View by using Unisphere for
VMAX. The Masking View must have an Initiator Group
that contains the initiator of the OpenStack compute
node that hosts the VM.
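To find the initiator name to put in the Initiator Group, you can read it on the compute node from the standard open-iscsi location (the IQN shown here is only a placeholder):

$ sudo cat /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.1993-08.org.debian:01:abcdef123456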