<section xml:id="emc-vnx-direct-driver"
xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0">
<?dbhtml stop-chunking?>
<title>EMC VNX direct driver</title>
<para>Use the EMC VNX direct driver to create, attach, detach, and
delete volumes, create and delete snapshots, and so on. The driver
is based on the <literal>ISCSIDriver</literal> class that the Block
Storage service (cinder) defines.
</para>
<para>To complete volume operations, the driver uses the Navisphere
Secure CLI (NaviSecCLI) to communicate with the back-end EMC VNX
storage.
</para>
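<para>For example, you can confirm that NaviSecCLI can reach the
array with a basic <command>getagent</command> query. This is a
minimal sketch; the IP address and credentials are placeholders that
match the sample <filename>cinder.conf</filename> values shown later
in this section:</para>
<screen><prompt>#</prompt> <userinput>naviseccli -h 10.10.72.41 -user global_username -password password -scope 0 getagent</userinput></screen>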
<section xml:id="emc-vnx-direct-reqs">
<title>System requirements</title>
<itemizedlist>
<listitem>
<para>FLARE version 5.32 or later.</para>
</listitem>
<listitem>
<para>You must activate the VNX Snapshot and Thin Provisioning
licenses for the array. Ensure that all iSCSI ports on the
VNX are accessible from the OpenStack hosts; see the
verification example at the end of this section.</para>
</listitem>
<listitem>
<para>Navisphere CLI v7.32 or later.</para>
</listitem>
</itemizedlist>
<para>The EMC VNX series of storage systems is supported.</para>
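<para>As a quick connectivity check, you can list the iSCSI ports
that the array reports and confirm that an OpenStack host can
discover one of them. This is a sketch with placeholder addresses:
<literal>10.10.72.41</literal> stands for an SP IP address and
<literal>10.10.61.35</literal> for an iSCSI target port:</para>
<screen><prompt>#</prompt> <userinput>naviseccli -h 10.10.72.41 connection -getport</userinput>
<prompt>#</prompt> <userinput>iscsiadm -m discovery -t st -p 10.10.61.35</userinput></screen>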
</section>
<section xml:id="emc-vnx-direct-supported-ops">
<title>Supported operations</title>
<itemizedlist>
<listitem>
<para>Create, delete, attach, and detach volumes.</para>
</listitem>
<listitem>
<para>Create, list, and delete volume snapshots.</para>
</listitem>
<listitem>
<para>Create a volume from a snapshot.</para>
</listitem>
<listitem>
<para>Copy an image to a volume.</para>
</listitem>
<listitem>
<para>Copy a volume to an image.</para>
</listitem>
<listitem>
<para>Clone a volume.</para>
</listitem>
<listitem>
<para>Extend a volume.</para>
</listitem>
</itemizedlist>
</section>
<section xml:id="emc-vnx-direct-prep">
<title>Set up the VNX direct driver</title>
<para>Complete these high-level tasks to set up the VNX direct driver:
</para>
<orderedlist>
<listitem>
<para>Install NaviSecCLI on the controller node and on all
Block Storage (cinder) nodes in the OpenStack
deployment. See <xref
linkend="install-naviseccli"/>.
</para>
</listitem>
<listitem>
<para>Register with VNX. See <xref
linkend="register-vnx-direct-iscsi"/>
</para>
</listitem>
</orderedlist>
<section xml:id="install-naviseccli">
<title>Install NaviSecCLI</title>
<para>On Ubuntu x64, download the NaviSecCLI Debian package from the <link xlink:href="https://github.com/emc-openstack/naviseccli">EMC OpenStack GitHub</link> web site.
</para>
<para>For all other Linux variants, download the NaviSecCLI RPM package from the EMC support web site for the <link xlink:href="https://support.emc.com/downloads/36656_VNX2-Series">VNX2 series</link> or the <link xlink:href="https://support.emc.com/downloads/12781_VNX1-Series">VNX1 series</link>. A login is required.
</para>
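<para>For example, to install the downloaded package, run one of
the following commands. The package file names here are placeholders
that vary by release; substitute the names of the files that you
actually downloaded:</para>
<screen><prompt>#</prompt> <userinput>dpkg -i <replaceable>navicli-linux-64-x86-en.deb</replaceable></userinput>
<prompt>#</prompt> <userinput>rpm -ivh <replaceable>NaviCLI-Linux-64-x86-en.rpm</replaceable></userinput></screen>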
</section>
<section xml:id="register-vnx-direct-iscsi">
<title>Register with VNX</title>
<para>To export a VNX volume to a compute node or a volume node,
you must register the node with VNX.</para>
<procedure>
<title>To register the node</title>
<step><para>On the compute node or volume node
<literal>1.1.1.1</literal>, run the following commands, where
<literal>10.10.61.35</literal> is the iSCSI target:</para>
<screen><prompt>#</prompt> <userinput>/etc/init.d/open-iscsi start</userinput>
<prompt>#</prompt> <userinput>iscsiadm -m discovery -t st -p 10.10.61.35</userinput>
<prompt>#</prompt> <userinput>cd /etc/iscsi</userinput>
<prompt>#</prompt> <userinput>more initiatorname.iscsi</userinput>
<prompt>#</prompt> <userinput>iscsiadm -m node</userinput></screen></step>
<step><para>Log in to VNX from the node using the target
corresponding to the SPA port:</para>
<screen><prompt>#</prompt> <userinput>iscsiadm -m node -T iqn.1992-04.com.emc:cx.apm01234567890.a0 -p 10.10.61.35 -l</userinput></screen>
<para>Where
<literal>iqn.1992-04.com.emc:cx.apm01234567890.a0</literal>
is the target name that corresponds to the SPA port. Log in to
Unisphere, go to
<literal>VNX00000</literal>-&gt;Hosts-&gt;Initiators,
click Refresh, and wait until the initiator name of the node
(the value in <filename>/etc/iscsi/initiatorname.iscsi</filename>)
appears with SP Port <literal>A-8v0</literal>.</para></step>
<step><para>Click <guibutton>Register</guibutton>,
select <guilabel>CLARiiON/VNX</guilabel>,
and enter the host name <literal>myhost1</literal> and the
IP address <literal>1.1.1.1</literal> of the node. Click
<guibutton>Register</guibutton>.
Host <literal>1.1.1.1</literal> now also appears under
Hosts-&gt;Host List.</para></step>
<step><para>Log out of VNX on the node:</para>
<screen><prompt>#</prompt> <userinput>iscsiadm -m node -u</userinput></screen></step>
<step>
<para>Log in to VNX from the node using the target
corresponding to the SPB port, where
<literal>10.10.10.11</literal> is the IP address of the
SPB iSCSI port:</para>
<screen><prompt>#</prompt> <userinput>iscsiadm -m node -T iqn.1992-04.com.emc:cx.apm01234567890.b8 -p 10.10.10.11 -l</userinput></screen>
</step>
<step><para>In Unisphere, register the initiator with the SPB
port.</para></step>
<step><para>Log out:</para>
<screen><prompt>#</prompt> <userinput>iscsiadm -m node -u</userinput></screen></step>
</procedure>
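<para>Optionally, confirm that the node no longer holds an active
iSCSI session to the array after the final logout:</para>
<screen><prompt>#</prompt> <userinput>iscsiadm -m session</userinput></screen>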
</section>
<section xml:id="emc-vnx-direct-config-file">
<title><filename>cinder.conf</filename> configuration
file</title>
<para>Make the following changes in
<filename>/etc/cinder/cinder.conf</filename>.</para>
<para>For the VNX iSCSI driver, add the following entries, where
<literal>10.10.61.35</literal> is the IP address
of the VNX iSCSI target, <literal>10.10.72.41</literal>
is the IP address of the VNX array (SPA or SPB),
<systemitem>default_timeout</systemitem> is the default
timeout for CLI operations in minutes, and
<systemitem>max_luns_per_storage_group</systemitem> is
the maximum number of LUNs in a storage group:</para>
<programlisting language="ini">iscsi_ip_address = 10.10.61.35
san_ip = 10.10.72.41
san_login = global_username
san_password = password
naviseccli_path = /opt/Navisphere/bin/naviseccli
storage_vnx_pool_name = poolname
default_timeout = 10
max_luns_per_storage_group = 256
volume_driver = cinder.volume.drivers.emc.emc_cli_iscsi.EMCCLIISCSIDriver</programlisting>
<note>
<para>To find the <literal>max_luns_per_storage_group</literal>
value for each VNX model, see the
<link xlink:href="https://support.emc.com/search/?text=White%20Paper%20Introduction%20to%20the%20VNX%20Series">EMC support</link>
web site. A login is required.
</para>
</note>
<para>Restart the <systemitem class="service"
>cinder-volume</systemitem> service.</para>
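<para>For example, on distributions that use the
<command>service</command> utility (the exact command depends on
your init system):</para>
<screen><prompt>#</prompt> <userinput>service cinder-volume restart</userinput></screen>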
</section>
<section xml:id="emc-vnx-direct-volume-type">
<title>Volume type support</title>
<para>Volume type support enables the user to choose between
thick and thin provisioning.</para>
<para>The following example shows how to set up volume types.
First, create the volume types; then, define extra specs for
each volume type.</para>
<procedure>
<title>To set up volume types</title>
<step>
<para>Create the volume types:</para>
<screen><prompt>$</prompt> <userinput>cinder type-create "TypeA"</userinput>
<prompt>$</prompt> <userinput>cinder type-create "TypeB"</userinput></screen>
</step>
<step>
<para>Set the extra specs for each volume type:</para>
<screen><prompt>$</prompt> <userinput>cinder type-key "TypeA" set storagetype:provisioning=thick</userinput>
<prompt>$</prompt> <userinput>cinder type-key "TypeB" set storagetype:provisioning=thin</userinput></screen>
</step>
</procedure>
<para>The previous example creates two volume types:
<literal>TypeA</literal> and <literal>TypeB</literal>.
For <literal>TypeA</literal>,
<literal>storagetype:provisioning</literal> is set to
<literal>thick</literal>; similarly, for
<literal>TypeB</literal> it is set to <literal>thin</literal>.
If <literal>storagetype:provisioning</literal> is not specified,
it defaults to <literal>thick</literal>.</para>
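<para>To use one of these volume types when you create a volume,
pass it with the <parameter>--volume-type</parameter> option. This
is a minimal usage sketch; the volume name and size (in GB) are
placeholders:</para>
<screen><prompt>$</prompt> <userinput>cinder create --volume-type "TypeA" --display-name "vol_thick" 1</userinput></screen>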
</section>
</section>
</section>