openstack-manuals/doc/config-reference/block-storage/drivers/emc-volume-driver.xml
<section xml:id="emc-smis-iscsi-driver"
xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0">
<?dbhtml stop-chunking?>
<title>EMC SMI-S iSCSI driver</title>
<para>The EMC volume driver, <literal>EMCSMISISCSIDriver</literal>,
is based on the existing <literal>ISCSIDriver</literal> and adds
the ability to create and delete volumes, attach and detach
volumes, create and delete snapshots, and so on.</para>
<para>The driver runs volume operations by communicating with the
back-end EMC storage. It uses PyWBEM, a CIM client written in
Python, to perform CIM operations over HTTP.
</para>
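<para>As an illustration of what the driver does internally, the
following minimal PyWBEM sketch connects to an ECOM server and
enumerates the storage systems that the SMI-S provider manages.
The address, port, credentials, namespace, and class name are
example values and assumptions, not taken from this guide:</para>
<programlisting language="python">import pywbem

# Example values only: replace with your ECOM server address,
# port, and credentials.
conn = pywbem.WBEMConnection('http://10.10.61.45:5988',
                             ('admin', 'password'),
                             default_namespace='root/emc')

# List the storage systems registered with the SMI-S provider.
for name in conn.EnumerateInstanceNames('EMC_StorageSystem'):
    print(name)</programlisting>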
<para>The EMC CIM Object Manager (ECOM) is packaged with the EMC
SMI-S provider. It is a CIM server that enables CIM clients to
perform CIM operations over HTTP by using SMI-S in the
back-end for EMC storage operations.</para>
<para>The EMC SMI-S Provider supports the SNIA Storage Management
Initiative (SMI), an ANSI standard for storage management. It
supports VMAX and VNX storage systems.</para>
<section xml:id="emc-reqs">
<title>System requirements</title>
<para>EMC SMI-S Provider V4.5.1 or higher is required. You
can download SMI-S from the
<link xlink:href="http://powerlink.emc.com">EMC
Powerlink</link> web site (login is required).
See the EMC SMI-S Provider
release notes for installation instructions.</para>
<para>The EMC VMAX Family and VNX Series storage
systems are supported.</para>
</section>
<section xml:id="emc-supported-ops">
<title>Supported operations</title>
<para>VMAX and VNX arrays support these operations:</para>
<itemizedlist>
<listitem>
<para>Create volume</para>
</listitem>
<listitem>
<para>Delete volume</para>
</listitem>
<listitem>
<para>Attach volume</para>
</listitem>
<listitem>
<para>Detach volume</para>
</listitem>
<listitem>
<para>Create snapshot</para>
</listitem>
<listitem>
<para>Delete snapshot</para>
</listitem>
<listitem>
<para>Create cloned volume</para>
</listitem>
<listitem>
<para>Copy image to volume</para>
</listitem>
<listitem>
<para>Copy volume to image</para>
</listitem>
</itemizedlist>
<para>Only VNX supports these operations:</para>
<itemizedlist>
<listitem>
<para>Create volume from snapshot</para>
</listitem>
</itemizedlist>
<para>Only thin provisioning is supported.</para>
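<para>After the driver is configured, you can exercise these
operations with the standard OpenStack clients. The following
commands are a generic example, not specific to this driver;
the names and size are illustrative:</para>
<screen><prompt>$</prompt> <userinput>cinder create --display-name vol1 1</userinput>
<prompt>$</prompt> <userinput>cinder snapshot-create --display-name snap1 vol1</userinput>
<prompt>$</prompt> <userinput>nova volume-attach myinstance1 &lt;volume-id&gt; /dev/vdb</userinput></screen>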
</section>
<section xml:id="emc-prep">
<title>Task flow</title>
<procedure>
<title>To set up the EMC SMI-S iSCSI driver</title>
<step>
<para>Install the <package>python-pywbem</package>
package for your distribution. See <xref
linkend="install-pywbem"/>.</para>
</step>
<step>
<para>Download SMI-S from EMC Powerlink and install it.
Add your VNX/VMAX arrays to SMI-S.</para>
<para>For information, see <xref linkend="setup-smi-s"
/> and the SMI-S release notes.</para>
</step>
<step>
<para>Register with VNX. See <xref
linkend="register-emc"/>.</para>
</step>
<step>
<para>Create a masking view on VMAX. See <xref
linkend="create-masking"/>.</para>
</step>
</procedure>
<section xml:id="install-pywbem">
<title>Install the <package>python-pywbem</package> package</title>
<para>Install the <package>python-pywbem</package> package for your
distribution, as follows:</para>
<itemizedlist>
<listitem>
<para>On Ubuntu:</para>
<screen><prompt>$</prompt> <userinput>sudo apt-get install python-pywbem</userinput></screen>
</listitem>
<listitem>
<para>On openSUSE:</para>
<screen><prompt>#</prompt> <userinput>zypper install python-pywbem</userinput></screen>
</listitem>
<listitem>
<para>On Fedora:</para>
<screen><prompt>#</prompt> <userinput>yum install pywbem</userinput></screen>
</listitem>
</itemizedlist>
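<para>You can verify that the module is importable by the Python
interpreter that runs the Block Storage services; no output
means that the import succeeded:</para>
<screen><prompt>$</prompt> <userinput>python -c "import pywbem"</userinput></screen>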
</section>
<section xml:id="setup-smi-s">
<title>Set up SMI-S</title>
<para>You can install SMI-S on a non-OpenStack host.
Supported platforms include different flavors of
Windows, Red Hat, and SUSE Linux. The host can be
either a physical server or a VM hosted by an ESX
server. See the EMC SMI-S Provider release notes for
supported platforms and installation
instructions.</para>
<note>
<para>You must discover storage arrays on the SMI-S
server before you can use the Cinder driver.
Follow instructions in the SMI-S release
notes.</para>
</note>
<para>SMI-S is usually installed at
<filename>/opt/emc/ECIM/ECOM/bin</filename> on
Linux and <filename>C:\Program
Files\EMC\ECIM\ECOM\bin</filename> on Windows.
After you install and configure SMI-S, go to that
directory and run <command>TestSmiProvider.exe</command>
(on Linux, the binary is typically named
<command>TestSmiProvider</command>).</para>
<para>Use <command>addsys</command> in
<command>TestSmiProvider.exe</command> to add an
array. Use <command>dv</command> and examine the
output after the array is added. Make sure that the
arrays are recognized by the SMI-S server before using
the EMC Cinder driver.</para>
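<para>For example, on a Linux SMI-S server, start the tool and
then issue <command>addsys</command> and <command>dv</command>
at its interactive prompt:</para>
<screen><prompt>$</prompt> <userinput>cd /opt/emc/ECIM/ECOM/bin</userinput>
<prompt>$</prompt> <userinput>sudo ./TestSmiProvider</userinput></screen>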
</section>
<section xml:id="register-emc">
<title>Register with VNX</title>
<para>To export a VNX volume to a compute node, you must
register the node with VNX.</para>
<procedure>
<title>Register the node</title>
<step><para>On the compute node <literal>1.1.1.1</literal>, run
the following commands (this example assumes that
<literal>10.10.61.35</literal> is the iSCSI target):</para>
<screen><prompt>$</prompt> <userinput>sudo /etc/init.d/open-iscsi start</userinput>
<prompt>$</prompt> <userinput>sudo iscsiadm -m discovery -t st -p 10.10.61.35</userinput>
<prompt>$</prompt> <userinput>cd /etc/iscsi</userinput>
<prompt>$</prompt> <userinput>sudo more initiatorname.iscsi</userinput>
<prompt>$</prompt> <userinput>sudo iscsiadm -m node</userinput></screen></step>
<step><para>Log in to VNX from the compute node using the target
corresponding to the SPA port:</para>
<screen><prompt>$</prompt> <userinput>sudo iscsiadm -m node -T iqn.1992-04.com.emc:cx.apm01234567890.a0 -p 10.10.61.35 -l</userinput></screen>
<para>In this command,
<literal>iqn.1992-04.com.emc:cx.apm01234567890.a0</literal>
is the iSCSI target that corresponds to the SPA port.
Log in to Unisphere, go to
<literal>VNX00000</literal>-&gt;Hosts-&gt;Initiators,
click <guibutton>Refresh</guibutton>, and wait until the
initiator name of the compute node (shown in
<filename>/etc/iscsi/initiatorname.iscsi</filename>)
appears with SP Port <literal>A-8v0</literal>.</para></step>
<step><para>Click the <guibutton>Register</guibutton> button,
select <guilabel>CLARiiON/VNX</guilabel>,
enter the host name <literal>myhost1</literal> and
the IP address <literal>1.1.1.1</literal>, and click
<guibutton>Register</guibutton>.
The host <literal>1.1.1.1</literal> now also appears under
Hosts-&gt;Host List.</para></step>
<step><para>Log out of VNX on the compute node:</para>
<screen><prompt>$</prompt> <userinput>sudo iscsiadm -m node -u</userinput></screen></step>
<step>
<para>Log in to VNX from the compute node using the target
corresponding to the SPB port:</para>
<screen><prompt>$</prompt> <userinput>sudo iscsiadm -m node -T iqn.1992-04.com.emc:cx.apm01234567890.b8 -p 10.10.10.11 -l</userinput></screen>
</step>
<step><para>In Unisphere, register the initiator with the SPB
port.</para></step>
<step><para>Log out:</para>
<screen><prompt>$</prompt> <userinput>sudo iscsiadm -m node -u</userinput></screen></step>
</procedure>
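<para>After the last logout, you can confirm that no iSCSI
sessions remain open on the compute node:</para>
<screen><prompt>$</prompt> <userinput>sudo iscsiadm -m session</userinput></screen>
<para>If the logout steps succeeded, this command reports that
there are no active sessions.</para>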
</section>
<section xml:id="create-masking">
<title>Create a masking view on VMAX</title>
<para>For VMAX, you must set up the Unisphere for VMAX
server. On the Unisphere for VMAX server, create an
initiator group, a storage group, and a port group, and put
them in a masking view. The initiator group contains the
initiator names of the OpenStack hosts. The storage group
must contain at least six gatekeepers.</para>
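<para>If you manage the VMAX with EMC Solutions Enabler rather
than the Unisphere for VMAX GUI, you can also combine existing
groups into a masking view with the
<command>symaccess</command> command. This is only a sketch:
the serial number and group names are placeholders, and the
initiator, storage, and port groups must already exist:</para>
<screen><prompt>$</prompt> <userinput>symaccess -sid 000198700123 create view -name openstack_mv \
  -ig openstack_ig -sg openstack_sg -pg openstack_pg</userinput></screen>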
</section>
<section xml:id="emc-config-file">
<title><filename>cinder.conf</filename> configuration
file</title>
<para>Make the following changes in
<filename>/etc/cinder/cinder.conf</filename>.</para>
<para>For VMAX, add the following entries, where
<literal>10.10.61.45</literal> is the IP address
of the VMAX iSCSI target:</para>
<programlisting language="ini">iscsi_target_prefix = iqn.1992-04.com.emc
iscsi_ip_address = 10.10.61.45
volume_driver = cinder.volume.drivers.emc.emc_smis_iscsi.EMCSMISISCSIDriver
cinder_emc_config_file = /etc/cinder/cinder_emc_config.xml</programlisting>
<para>For VNX, add the following entries, where
<literal>10.10.61.35</literal> is the IP address
of the VNX iSCSI target:</para>
<programlisting language="ini">iscsi_target_prefix = iqn.2001-07.com.vnx
iscsi_ip_address = 10.10.61.35
volume_driver = cinder.volume.drivers.emc.emc_smis_iscsi.EMCSMISISCSIDriver
cinder_emc_config_file = /etc/cinder/cinder_emc_config.xml</programlisting>
<para>Restart the <systemitem class="service"
>cinder-volume</systemitem> service.</para>
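<para>For example, on Ubuntu:</para>
<screen><prompt>$</prompt> <userinput>sudo service cinder-volume restart</userinput></screen>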
</section>
<section xml:id="emc-config-file-2">
<title><filename>cinder_emc_config.xml</filename>
configuration file</title>
<para>Create the <filename>/etc/cinder/cinder_emc_config.xml</filename> file. You do not
need to restart the service for this change.</para>
<para>For VMAX, add the following lines to the XML
file:</para>
<programlisting language="xml"><xi:include href="samples/emc-vmax.xml" parse="text"/></programlisting>
<para>For VNX, add the following lines to the XML
file:</para>
<programlisting language="xml"><xi:include href="samples/emc-vnx.xml" parse="text"/></programlisting>
<para>Where:</para>
<itemizedlist>
<listitem>
<para><systemitem>StorageType</systemitem> is the thin pool from which the user
wants to create the volume. Only thin LUNs are supported by the driver.
Thin pools can be created using Unisphere for VMAX and VNX.</para>
</listitem>
<listitem>
<para><systemitem>EcomServerIp</systemitem> and
<systemitem>EcomServerPort</systemitem> are the IP address and port
number of the ECOM server, which is packaged with SMI-S.</para>
</listitem>
<listitem>
<para><systemitem>EcomUserName</systemitem> and
<systemitem>EcomPassword</systemitem> are credentials for the ECOM
server.</para>
</listitem>
</itemizedlist>
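<para>For reference, a configuration file that follows the
structure of the included samples might look like the following
example. All values are placeholders:</para>
<programlisting language="xml"><![CDATA[<?xml version="1.0" encoding="UTF-8"?>
<EMC>
  <StorageType>ThinPool0</StorageType>
  <EcomServerIp>10.10.61.45</EcomServerIp>
  <EcomServerPort>5988</EcomServerPort>
  <EcomUserName>admin</EcomUserName>
  <EcomPassword>password</EcomPassword>
</EMC>]]></programlisting>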
<note>
<para>
To attach VMAX volumes to an OpenStack VM, you must
create a masking view by using Unisphere for
VMAX. The masking view must have an initiator group
that contains the initiator of the OpenStack compute
node that hosts the VM.
</para>
</note>
</section>
</section>
</section>