EMC SMI-S iSCSI driver

The EMC SMI-S iSCSI driver, which is based on the iSCSI driver, can create, delete, attach, and detach volumes. It can also create and delete snapshots, and so on.

The EMC SMI-S iSCSI driver runs volume operations by communicating with the back-end EMC storage. It uses a CIM client in Python called PyWBEM to perform CIM operations over HTTP.

The EMC CIM Object Manager (ECOM) is packaged with the EMC SMI-S provider. It is a CIM server that enables CIM clients to perform CIM operations over HTTP by using SMI-S in the back end for EMC storage operations.

The EMC SMI-S Provider supports the SNIA Storage Management Initiative (SMI), an ANSI standard for storage management. It supports VMAX and VNX storage systems.
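The following short PyWBEM sketch only illustrates this mechanism; it is not part of the driver. The host name, port, credentials, and the root/emc namespace are placeholder assumptions, and CIM_ComputerSystem is used simply because it is a standard CIM class:

# Illustrative sketch of a CIM operation over HTTP with PyWBEM.
# The URL, credentials, and namespace are placeholders, not values
# from this guide; adjust them to your ECOM server.
import pywbem

conn = pywbem.WBEMConnection(
    'http://ecom-host.example.com:5988',   # ECOM server and CIM-XML port
    ('username', 'password'),              # ECOM credentials
    default_namespace='root/emc')          # namespace assumed for the EMC provider

# A simple intrinsic CIM operation: enumerate instance names of a standard class.
for path in conn.EnumerateInstanceNames('CIM_ComputerSystem'):
    print(path)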
System requirements

EMC SMI-S Provider V4.5.1 or higher is required. You can download SMI-S from the EMC Powerlink web site. See the EMC SMI-S Provider release notes for installation instructions.

EMC storage VMAX Family and VNX Series are supported.

Supported operations

VMAX and VNX arrays support these operations:

- Create volume
- Delete volume
- Attach volume
- Detach volume
- Create snapshot
- Delete snapshot
- Create cloned volume
- Copy image to volume
- Copy volume to image

Only VNX supports this operation:

- Create volume from snapshot

Only thin provisioning is supported.
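Once the driver is configured, these operations map onto the usual cinder CLI calls, for example (names are illustrative; replace <volume-id> and <snapshot-id> with real IDs; the last command, create volume from snapshot, applies to VNX only):

$ cinder create --display-name vol1 1
$ cinder snapshot-create --display-name snap1 <volume-id>
$ cinder create --snapshot-id <snapshot-id> --display-name vol2 1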
Task flow

To set up the EMC SMI-S iSCSI driver:

1. Install the python-pywbem package for your distribution. See "Install the python-pywbem package" below.
2. Download SMI-S from PowerLink and install it. Add your VNX/VMAX arrays to SMI-S. For information, see "Set up SMI-S" below and the SMI-S release notes.
3. Register with VNX. See "Register with VNX" below.
4. Create a masking view on VMAX. See "Create a masking view on VMAX" below.

Install the python-pywbem package

Install the python-pywbem package for your distribution:

On Ubuntu:

$ sudo apt-get install python-pywbem

On openSUSE:

$ zypper install python-pywbem

On Fedora:

$ yum install pywbem
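Optionally, confirm that the package is importable before moving on:

$ python -c "import pywbem"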
Set up SMI-S

You can install SMI-S on a non-OpenStack host. Supported platforms include different flavors of Windows, Red Hat, and SUSE Linux. The host can be either a physical server or a VM hosted by an ESX server. See the EMC SMI-S Provider release notes for supported platforms and installation instructions.

You must discover storage arrays on the SMI-S server before you can use the Cinder driver. Follow the instructions in the SMI-S release notes.

SMI-S is usually installed at /opt/emc/ECIM/ECOM/bin on Linux and C:\Program Files\EMC\ECIM\ECOM\bin on Windows. After you install and configure SMI-S, go to that directory and type TestSmiProvider.exe.

Use addsys in TestSmiProvider.exe to add an array. Use dv and examine the output after the array is added. Make sure that the arrays are recognized by the SMI-S server before using the EMC Cinder driver.
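After the arrays are added, an optional PyWBEM query from the Cinder host can confirm that ECOM reports them. The connection values are placeholders, and the EMC_StorageSystem class name is an assumption based on typical EMC SMI-S providers:

# Optional check that the SMI-S/ECOM server reports the added arrays.
# URL, credentials, namespace, and the EMC_StorageSystem class name are
# assumptions; substitute the values for your environment.
import pywbem

conn = pywbem.WBEMConnection('http://ecom-host.example.com:5988',
                             ('username', 'password'),
                             default_namespace='root/emc')

for path in conn.EnumerateInstanceNames('EMC_StorageSystem'):
    print(path['Name'])   # identifier of each array visible to SMI-S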
Register with VNX

To export a VNX volume to a Compute node, you must register the node with VNX.

On the Compute node 1.1.1.1, run these commands (assume 10.10.61.35 is the iSCSI target):

$ sudo /etc/init.d/open-iscsi start
$ sudo iscsiadm -m discovery -t st -p 10.10.61.35
$ cd /etc/iscsi
$ sudo more initiatorname.iscsi
$ iscsiadm -m node

Log in to VNX from the Compute node by using the target corresponding to the SPA port:

$ sudo iscsiadm -m node -T iqn.1992-04.com.emc:cx.apm01234567890.a0 -p 10.10.61.35 -l

Assume that iqn.1993-08.org.debian:01:1a2b3c4d5f6g is the initiator name of the Compute node. Log in to Unisphere, go to VNX00000 > Hosts > Initiators, refresh, and wait until initiator iqn.1993-08.org.debian:01:1a2b3c4d5f6g with SP Port A-8v0 appears.

Click Register, select CLARiiON/VNX, and enter the myhost1 host name and myhost1 IP address. Click Register. Now the 1.1.1.1 host appears under Hosts > Host List as well.

Log out of VNX on the Compute node:

$ sudo iscsiadm -m node -u

Log in to VNX from the Compute node by using the target corresponding to the SPB port:

$ sudo iscsiadm -m node -T iqn.1992-04.com.emc:cx.apm01234567890.b8 -p 10.10.10.11 -l

In Unisphere, register the initiator with the SPB port.

Log out:

$ sudo iscsiadm -m node -u
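At any point during these steps, before logging out, you can list the node's active iSCSI sessions to confirm which targets it is logged in to:

$ sudo iscsiadm -m session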
Create a masking view on VMAX

For VMAX, you must set up the Unisphere for VMAX server. On the Unisphere for VMAX server, create an initiator group, a storage group, and a port group, and put them in a masking view. The initiator group contains the initiator names of the OpenStack hosts. The storage group must have at least six gatekeepers.
cinder.conf configuration file

Make the following changes in /etc/cinder/cinder.conf.

For VMAX, add the following entries, where 10.10.61.45 is the IP address of the VMAX iSCSI target:

iscsi_target_prefix = iqn.1992-04.com.emc
iscsi_ip_address = 10.10.61.45
volume_driver = cinder.volume.drivers.emc.emc_smis_iscsi.EMCSMISISCSIDriver
cinder_emc_config_file = /etc/cinder/cinder_emc_config.xml

For VNX, add the following entries, where 10.10.61.35 is the IP address of the VNX iSCSI target:

iscsi_target_prefix = iqn.2001-07.com.vnx
iscsi_ip_address = 10.10.61.35
volume_driver = cinder.volume.drivers.emc.emc_smis_iscsi.EMCSMISISCSIDriver
cinder_emc_config_file = /etc/cinder/cinder_emc_config.xml

Restart the cinder-volume service.
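The restart command depends on your distribution; on Ubuntu, for example:

$ sudo service cinder-volume restart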
cinder_emc_config.xml configuration file

Create the file /etc/cinder/cinder_emc_config.xml. You do not need to restart the service for this change.

For both VMAX and VNX, add the StorageType, EcomServerIp, EcomServerPort, EcomUserName, and EcomPassword elements, described below, to the XML file. A sketch of the file appears at the end of this section.

To attach VMAX volumes to an OpenStack VM, you must create a masking view by using Unisphere for VMAX. The masking view must have an initiator group that contains the initiator of the OpenStack compute node that hosts the VM.

StorageType is the thin pool from which the user wants to create the volume. Only thin LUNs are supported by the plug-in. Thin pools can be created by using Unisphere for VMAX and VNX.

EcomServerIp and EcomServerPort are the IP address and port number of the ECOM server, which is packaged with SMI-S. EcomUserName and EcomPassword are credentials for the ECOM server.
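A minimal sketch of the file follows, assuming a flat layout with one element per setting. The root element name and exact layout are assumptions, so confirm them against the EMC driver documentation, and replace the pool name, address, port, and credentials with your own values:

<?xml version="1.0" encoding="UTF-8"?>
<EMC>
    <StorageType>TFPool01</StorageType>        <!-- thin pool to create volumes from -->
    <EcomServerIp>192.168.10.20</EcomServerIp> <!-- ECOM server IP address -->
    <EcomServerPort>5988</EcomServerPort>      <!-- ECOM server port -->
    <EcomUserName>username</EcomUserName>      <!-- ECOM credentials -->
    <EcomPassword>password</EcomPassword>
</EMC>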