<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE section[
<!ENTITY % openstack SYSTEM "../../common/entities/openstack.ent">
%openstack;
]>
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="vmware">
<title>VMware vSphere</title>
<?dbhtml stop-chunking?>
<section xml:id="vmware-intro">
<title>Introduction</title>
<para>OpenStack Compute supports the VMware vSphere product family
and enables access to advanced features such as vMotion, High
Availability, and Distributed Resource Scheduler (DRS).</para>
<para>This section describes how to configure VMware-based virtual
machine images for launch. vSphere versions 4.1 and later are
supported.</para>
<para>The VMware vCenter driver enables the
<systemitem class="service">nova-compute</systemitem> service to communicate
with a VMware vCenter server that manages one or more ESX host
clusters. The driver aggregates the ESX hosts in each cluster to
present one large hypervisor entity for each cluster to the
Compute scheduler. Because individual ESX hosts are not exposed
to the scheduler, Compute schedules to the granularity of
clusters and vCenter uses DRS to select the actual ESX host
within the cluster. Because a deployed virtual machine runs in
a vCenter cluster, it can use all vSphere features.</para>
<para>The following sections describe how to configure the VMware
vCenter driver.</para>
</section>
<section xml:id="vmware_architecture">
<title>High-level architecture</title>
<para>The following diagram shows a high-level view of the VMware
driver architecture:</para>
<figure>
<title>VMware driver architecture</title>
<mediaobject>
<imageobject>
<imagedata fileref="../../common/figures/vmware-nova-driver-architecture.jpg" format="JPG" contentwidth="6in"/>
</imageobject>
</mediaobject>
</figure>
<para>As the figure shows, the OpenStack Compute Scheduler sees
three hypervisors that each correspond to a cluster in vCenter.
<systemitem class="service">Nova-compute</systemitem> contains
the VMware driver. You can run with multiple <systemitem class="service">nova-compute</systemitem> services. While
Compute schedules at the granularity of a cluster, the VMware
driver inside <systemitem class="service">nova-compute</systemitem>
interacts with the vCenter APIs to select an appropriate ESX host within
the cluster. Internally, vCenter uses DRS for placement.</para>
<para>The VMware vCenter driver also interacts with the OpenStack
Image Service to copy VMDK images from the Image Service back
end store. The dotted line in the figure represents VMDK images
being copied from the OpenStack Image Service to the vSphere
data store. VMDK images are cached in the data store so the copy
operation is only required the first time that the VMDK image is
used.</para>
<para>After OpenStack boots a VM into a vSphere cluster, the VM
becomes visible in vCenter and can access vSphere advanced
features. At the same time, the VM is visible in the OpenStack
dashboard and you can manage it as you would any other OpenStack
VM. You can perform advanced vSphere operations in vCenter while
you configure OpenStack resources such as VMs through the
OpenStack dashboard.</para>
<para>The figure does not show how networking fits into the
architecture. Both <systemitem class="service">nova-network</systemitem>
and the OpenStack Networking Service are supported. For details, see
<xref linkend="VMware_networking"/>.</para>
</section>
<section xml:id="vmware_configuration_overview">
<title>Configuration overview</title>
<para>To get started with the VMware vCenter driver, complete the
following high-level steps:</para>
<orderedlist>
<listitem>
<para>Configure vCenter. See <xref linkend="vmware-prereqs"/>.</para>
</listitem>
<listitem>
<para>Configure the VMware
vCenter driver in the <filename>nova.conf</filename> file. See <xref linkend="VMwareVCDriver_details"/>.</para>
</listitem>
<listitem>
<para>Load desired VMDK images into the OpenStack Image
Service. See <xref linkend="VMware_images"/>.</para>
</listitem>
<listitem>
<para>Configure networking with either <systemitem class="service">nova-network</systemitem>
or the OpenStack Networking Service. See <xref linkend="VMware_networking"/>.</para>
</listitem>
</orderedlist>
</section>
<section xml:id="vmware-prereqs">
<title>Prerequisites and limitations</title>
<para>Use the following list to prepare a vSphere environment that
runs with the VMware vCenter driver:</para>
<orderedlist>
<listitem>
<para><emphasis role="bold">Copying VMDK files (vSphere 5.1
only).</emphasis> In vSphere 5.1, copying large image files
(for example, 12&nbsp;GB and greater) from Glance can take a long
time. To improve performance, VMware recommends that you
upgrade to VMware vCenter Server 5.1 Update 1 or later. For
more information, see the <link xlink:href="https://www.vmware.com/support/vsphere5/doc/vsphere-vcenter-server-51u1-release-notes.html#resolvedissuescimapi">Release Notes</link>.</para>
</listitem>
<listitem>
<para><emphasis role="bold">DRS</emphasis>. For any cluster
that contains multiple ESX hosts, enable DRS and enable
fully automated placement.</para>
</listitem>
<listitem>
<para><emphasis role="bold">Shared storage</emphasis>. Only
shared storage is supported and data stores must be shared
among all hosts in a cluster. It is recommended to remove
data stores not intended for OpenStack from clusters being
configured for OpenStack.</para>
</listitem>
<listitem>
<para><emphasis role="bold">Clusters and data
stores</emphasis>. Do not use OpenStack clusters and data
stores for other purposes. If you do, OpenStack displays
incorrect usage information.</para>
</listitem>
<listitem>
<para><emphasis role="bold">Networking</emphasis>. The
networking configuration depends on the desired networking
model. See <xref linkend="VMware_networking"/>.</para>
</listitem>
<listitem>
<para><emphasis role="bold">Security groups</emphasis>. If you
use the VMware driver with OpenStack Networking and the NSX
plug-in, security groups are supported. If you use
<systemitem class="service">nova-network</systemitem>,
security groups are not supported.</para>
<note><para>The NSX plug-in is the only plug-in that is
validated for vSphere.</para></note>
</listitem>
<listitem>
<para><emphasis role="bold">VNC</emphasis>. The port range
5900 - 6105 (inclusive) is automatically enabled for VNC
connections on every ESX host in all clusters under
OpenStack control. For more information about using a VNC
client to connect to a virtual machine, see <link xlink:href="http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&amp;cmd=displayKC&amp;externalId=1246">http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&amp;cmd=displayKC&amp;externalId=1246</link>.</para>
<note><para>In addition to the default VNC port
numbers (5900 to 6000) specified in the above document, the
following ports are also used: 6101, 6102, and 6105.</para></note>
<para>You must modify the ESXi firewall configuration to allow
the VNC ports. Additionally, for the firewall modifications
to persist after a reboot, you must create a custom vSphere
Installation Bundle (VIB) which is then installed onto the
running ESXi host or added to a custom image profile used to
install ESXi hosts. For details about how to create a VIB
for persisting the firewall configuration modifications, see
<link xlink:href="http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&amp;cmd=displayKC&amp;externalId=2007381">
http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&amp;cmd=displayKC&amp;externalId=2007381</link>.</para>
<note><para>The VIB can be downloaded from
<link xlink:href="https://github.com/openstack-vmwareapi-team/Tools">
https://github.com/openstack-vmwareapi-team/Tools</link>.</para></note>
</listitem>
<listitem>
<para><emphasis role="bold">Ephemeral Disks</emphasis>.
Ephemeral disks are not supported. A future major release
will address this limitation.</para>
</listitem>
<listitem>
<para>
Injection of SSH keys into compute instances hosted by vCenter is
not currently supported.</para>
</listitem>
<listitem>
<para>
To use multiple vCenter installations with OpenStack, each vCenter
must be assigned to a separate availability zone. This is required
as the OpenStack Block Storage VMDK driver does not currently work
across multiple vCenter installations.</para>
</listitem>
</orderedlist>
</section>
<section xml:id="vCenter_service_perms">
<title>VMware vCenter service account</title>
<para>OpenStack integration requires a vCenter service account with the following
minimum permissions. Apply the permissions to the
<systemitem>Datacenter</systemitem> root object, and select the <guibutton>Propagate to Child Objects</guibutton> option.</para>
<table rules="all" width="1047">
<caption>vCenter permissions tree</caption>
<col width="108pt" align="left"/>
<col width="120pt" align="left"/>
<col width="260pt" align="left"/>
<col width="210pt" align="left"/>
<tbody>
<tr>
<td>All Privileges</td>
<td/>
<td/>
<td/>
</tr>
<tr>
<td/>
<td>Datastore</td>
<td/>
<td/>
</tr>
<tr>
<td/>
<td/>
<td>Allocate space</td>
<td/>
</tr>
<tr>
<td/>
<td/>
<td>Browse datastore</td>
<td/>
</tr>
<tr>
<td/>
<td/>
<td>Low level file operation</td>
<td/>
</tr>
<tr>
<td/>
<td/>
<td>Remove file</td>
<td/>
</tr>
<tr>
<td/>
<td>Folder</td>
<td/>
<td/>
</tr>
<tr>
<td/>
<td/>
<td>Create folder</td>
<td/>
</tr>
<tr>
<td/>
<td>Host</td>
<td/>
<td/>
</tr>
<tr>
<td/>
<td/>
<td>Configuration</td>
<td/>
</tr>
<tr>
<td/>
<td/>
<td/>
<td>Maintenance</td>
</tr>
<tr>
<td/>
<td/>
<td/>
<td>Network configuration</td>
</tr>
<tr>
<td/>
<td/>
<td/>
<td>Storage partition configuration</td>
</tr>
<tr>
<td/>
<td>Network</td>
<td/>
<td/>
</tr>
<tr>
<td/>
<td/>
<td>Assign network</td>
<td/>
</tr>
<tr>
<td/>
<td>Resource</td>
<td/>
<td/>
</tr>
<tr>
<td/>
<td/>
<td>Assign virtual machine to resource pool</td>
<td/>
</tr>
<tr>
<td/>
<td/>
<td>Migrate powered off virtual machine</td>
<td/>
</tr>
<tr>
<td/>
<td/>
<td>Migrate powered on virtual machine</td>
<td/>
</tr>
<tr>
<td/>
<td>Virtual Machine</td>
<td/>
<td/>
</tr>
<tr>
<td/>
<td/>
<td>Configuration</td>
<td/>
</tr>
<tr>
<td/>
<td/>
<td/>
<td>Add existing disk</td>
</tr>
<tr>
<td/>
<td/>
<td/>
<td>Add new disk</td>
</tr>
<tr>
<td/>
<td/>
<td/>
<td>Add or remove device</td>
</tr>
<tr>
<td/>
<td/>
<td/>
<td>Advanced</td>
</tr>
<tr>
<td/>
<td/>
<td/>
<td>CPU count</td>
</tr>
<tr>
<td/>
<td/>
<td/>
<td>Disk change tracking</td>
</tr>
<tr>
<td/>
<td/>
<td/>
<td>Host USB device</td>
</tr>
<tr>
<td/>
<td/>
<td/>
<td>Memory</td>
</tr>
<tr>
<td/>
<td/>
<td/>
<td>Raw device</td>
</tr>
<tr>
<td/>
<td/>
<td/>
<td>Remove disk</td>
</tr>
<tr>
<td/>
<td/>
<td/>
<td>Rename</td>
</tr>
<tr>
<td/>
<td/>
<td/>
<td>Swapfile placement</td>
</tr>
<tr>
<td/>
<td/>
<td>Interaction</td>
<td/>
</tr>
<tr>
<td/>
<td/>
<td/>
<td>Configure CD media</td>
</tr>
<tr>
<td/>
<td/>
<td/>
<td>Power Off</td>
</tr>
<tr>
<td/>
<td/>
<td/>
<td>Power On</td>
</tr>
<tr>
<td/>
<td/>
<td/>
<td>Reset</td>
</tr>
<tr>
<td/>
<td/>
<td/>
<td>Suspend</td>
</tr>
<tr>
<td/>
<td/>
<td>Inventory</td>
<td/>
</tr>
<tr>
<td/>
<td/>
<td/>
<td>Create from existing</td>
</tr>
<tr>
<td/>
<td/>
<td/>
<td>Create new</td>
</tr>
<tr>
<td/>
<td/>
<td/>
<td>Move</td>
</tr>
<tr>
<td/>
<td/>
<td/>
<td>Remove</td>
</tr>
<tr>
<td/>
<td/>
<td/>
<td>Unregister</td>
</tr>
<tr>
<td/>
<td/>
<td>Provisioning</td>
<td/>
</tr>
<tr>
<td/>
<td/>
<td/>
<td>Clone virtual machine</td>
</tr>
<tr>
<td/>
<td/>
<td/>
<td>Customize</td>
</tr>
<tr>
<td/>
<td/>
<td>Sessions</td>
<td/>
</tr>
<tr>
<td/>
<td/>
<td/>
<td>Validate session</td>
</tr>
<tr>
<td/>
<td/>
<td/>
<td>View and stop sessions</td>
</tr>
<tr>
<td/>
<td/>
<td>Snapshot management</td>
<td/>
</tr>
<tr>
<td/>
<td/>
<td/>
<td>Create snapshot</td>
</tr>
<tr>
<td/>
<td/>
<td/>
<td>Remove snapshot</td>
</tr>
<tr>
<td/>
<td>vApp</td>
<td/>
<td/>
</tr>
<tr>
<td/>
<td/>
<td>Export</td>
<td/>
</tr>
<tr>
<td/>
<td/>
<td>Import</td>
<td/>
</tr>
</tbody>
</table>
</section>
<section xml:id="VMwareVCDriver_details">
<title>VMware vCenter driver</title>
<para>Use the VMware vCenter driver (VMwareVCDriver) to connect
OpenStack Compute with vCenter. This recommended configuration
enables access through vCenter to advanced vSphere features like
vMotion, High Availability, and Distributed Resource Scheduler
(DRS).</para>
<section xml:id="VMwareVCDriver_configuration_options">
<title>VMwareVCDriver configuration options</title>
<para>When you use the VMwareVCDriver (vCenter versions 5.1 and
later) with OpenStack Compute, add the following
VMware-specific configuration options to the
<filename>nova.conf</filename> file:</para>
<programlisting language="ini">[DEFAULT]
compute_driver=vmwareapi.VMwareVCDriver
[vmware]
host_ip=&lt;vCenter host IP&gt;
host_username=&lt;vCenter username&gt;
host_password=&lt;vCenter password&gt;
cluster_name=&lt;vCenter cluster name&gt;
datastore_regex=&lt;optional datastore regex&gt;</programlisting>
<note>
<itemizedlist>
<listitem>
<para>vSphere vCenter versions 5.0 and earlier: You must
specify the location of the WSDL files by adding the
<code>wsdl_location=http://127.0.0.1:8080/vmware/SDK/wsdl/vim25/vimService.wsdl</code>
setting to the above configuration. For more
information, see
<link linkend="VMware_additional_config">vSphere 5.0 and
earlier additional set up</link>.</para>
</listitem>
<listitem>
<para>Clusters: The vCenter driver can support multiple
clusters. To use more than one cluster, simply add
multiple <option>cluster_name</option> lines in
<filename>nova.conf</filename> with the appropriate
cluster name. Clusters and data stores used by the
vCenter driver should not contain any VMs other than
those created by the driver.</para>
</listitem>
<listitem>
<para>Data stores: The <option>datastore_regex</option>
setting specifies the data stores to use with Compute.
For example, <option>datastore_regex="nas.*"</option>
selects all the data stores that have a name starting
with "nas". If this line is omitted, Compute uses the
first data store returned by the vSphere API. It is
recommended not to use this field and instead remove
data stores that are not intended for OpenStack.</para>
</listitem>
<listitem>
<para>Reserved host memory: The
<option>reserved_host_memory_mb</option> option value is
512&nbsp;MB by default. However, VMware recommends that
you set this option to 0&nbsp;MB because the vCenter
driver reports the effective memory available to the
virtual machines.</para>
</listitem>
</itemizedlist>
</note>
<para>A <systemitem class="service">nova-compute</systemitem>
service can control one or more clusters containing multiple
ESX hosts, making <systemitem class="service">nova-compute</systemitem>
a critical service from a high availability perspective. Because the
host that runs <systemitem class="service">nova-compute</systemitem> can
fail while vCenter and the ESX hosts continue to run, you must protect the
<systemitem class="service">nova-compute</systemitem>
service against host failures.</para>
<note>
<para>Many <filename>nova.conf</filename> options are relevant
to libvirt but do not apply to this driver.</para>
</note>
<para>You must complete additional configuration for
environments that use vSphere 5.0 and earlier. See <xref linkend="VMware_additional_config"/>.</para>
</section>
</section>
<section xml:id="VMware_images">
<title>Images with VMware vSphere</title>
<para>The vCenter driver supports images in the VMDK format. Disks
in this format can be obtained from VMware Fusion or from an ESX
environment. It is also possible to convert other formats, such
as qcow2, to the VMDK format using the <option>qemu-img</option>
utility. After a VMDK disk is available, load it into the
OpenStack Image Service. Then, you can use it with the VMware
vCenter driver. The following sections provide additional
details on the supported disks and the commands used for
conversion and upload.</para>
<section xml:id="VMware_supported_images">
<title>Supported image types</title>
<para>Upload images to the OpenStack Image Service in VMDK
format. The following VMDK disk types are supported:</para>
<itemizedlist>
<listitem>
<para><emphasis role="italic">VMFS Flat Disks</emphasis>
(includes thin, thick, zeroedthick, and eagerzeroedthick).
Note that once a VMFS thin disk is exported from VMFS to a
non-VMFS location, like the OpenStack Image Service, it
becomes a preallocated flat disk. This increases the
transfer time from the OpenStack Image Service to the data
store because the full preallocated flat disk, rather than
the thin disk, must be transferred.</para>
</listitem>
<listitem>
<para><emphasis role="italic">Monolithic Sparse
disks</emphasis>. Sparse disks get imported from the
OpenStack Image Service into ESX as thin provisioned
disks. Monolithic Sparse disks can be obtained from VMware
Fusion or can be created by converting from other virtual
disk formats using the <code>qemu-img</code>
utility.</para>
</listitem>
</itemizedlist>
<para>The following table shows the <option>vmware_disktype</option>
property that applies to each of the supported VMDK disk
types:</para>
<table rules="all">
<caption>OpenStack Image Service disk type settings</caption>
<thead>
<tr>
<th>vmware_disktype property</th>
<th>VMDK disk type</th>
</tr>
</thead>
<tbody>
<tr>
<td>sparse</td>
<td>
<para>Monolithic Sparse</para>
</td>
</tr>
<tr>
<td>thin</td>
<td>
<para>VMFS flat, thin provisioned</para>
</td>
</tr>
<tr>
<td>preallocated (default)</td>
<td>
<para>VMFS flat,
thick/zeroedthick/eagerzeroedthick</para>
</td>
</tr>
</tbody>
</table>
<para>The <option>vmware_disktype</option> property is set when an
image is loaded into the OpenStack Image Service. For example,
the following command creates a Monolithic Sparse image by
setting <option>vmware_disktype</option> to
<literal>sparse</literal>:</para>
<screen><prompt>$</prompt> <userinput>glance image-create --name "ubuntu-sparse" --disk-format vmdk \
--container-format bare \
--property vmware_disktype="sparse" \
--property vmware_ostype="ubuntu64Guest" &lt; ubuntuLTS-sparse.vmdk</userinput></screen>
<note><para>Specifying <literal>thin</literal> does not
provide any advantage over <literal>preallocated</literal>
with the current version of the driver. Future versions might
restore the thin properties of the disk after it is downloaded
to a vSphere data store.</para></note>
</section>
<section xml:id="VMware_converting_images">
<title>Convert and load images</title>
<para>Using the <code>qemu-img</code> utility, disk images in
several formats (such as, qcow2) can be converted to the VMDK
format.</para>
<para>For example, the following command can be used to convert
a <link
xlink:href="http://cloud-images.ubuntu.com/trusty/current/trusty-server-cloudimg-amd64-disk1.img">qcow2 Ubuntu Trusty cloud image</link>:</para>
<screen><prompt>$</prompt> <userinput>qemu-img convert -f qcow2 ~/Downloads/trusty-server-cloudimg-amd64-disk1.img \
-O vmdk trusty-server-cloudimg-amd64-disk1.vmdk</userinput></screen>
<para>VMDK disks converted through <code>qemu-img</code> are
<emphasis role="italic">always</emphasis> monolithic sparse
VMDK disks with an IDE adapter type. Using the previous
example of the Ubuntu Trusty image after the
<code>qemu-img</code> conversion, the command to upload the
VMDK disk should be something like:</para>
<screen><prompt>$</prompt> <userinput>glance image-create --name trusty-cloud --is-public False \
--container-format bare --disk-format vmdk \
--property vmware_disktype="sparse" \
--property vmware_adaptertype="ide" &lt; \
trusty-server-cloudimg-amd64-disk1.vmdk</userinput></screen>
<para>Note that the <option>vmware_disktype</option> is set to
<emphasis role="italic">sparse</emphasis> and the
<code>vmware_adaptertype</code> is set to <emphasis role="italic">ide</emphasis> in the previous command.</para>
<para>If the image did not come from the <code>qemu-img</code>
utility, the <code>vmware_disktype</code> and
<code>vmware_adaptertype</code> might be different. To
determine the image adapter type from an image file, use the
following command and look for the
<option>ddb.adapterType=</option> line:</para>
<screen><prompt>$</prompt> <userinput>head -20 &lt;vmdk file name></userinput></screen>
<para>Assuming a preallocated disk type and an iSCSI lsiLogic
adapter type, the following command uploads the VMDK
disk:</para>
<screen><prompt>$</prompt> <userinput>glance image-create --name "ubuntu-thick-scsi" --disk-format vmdk \
--container-format bare \
--property vmware_adaptertype="lsiLogic" \
--property vmware_disktype="preallocated" \
--property vmware_ostype="ubuntu64Guest" &lt; ubuntuLTS-flat.vmdk</userinput></screen>
<para>Currently, OS boot VMDK disks with an IDE adapter type
cannot be attached to a virtual SCSI controller and likewise
disks with one of the SCSI adapter types (such as, busLogic,
lsiLogic) cannot be attached to the IDE controller. Therefore,
as the previous examples show, it is important to set the
<option>vmware_adaptertype</option> property correctly. The
default adapter type is lsiLogic, which is SCSI, so you can
omit the <parameter>vmware_adaptertype</parameter> property if
you are certain that the image adapter type is
lsiLogic.</para>
</section>
<section xml:id="VMware_tagging_images">
<title>Tag VMware images</title>
<para>In a mixed hypervisor environment, OpenStack Compute uses
the <option>hypervisor_type</option> tag to match images to
the correct hypervisor type. For VMware images, set the
hypervisor type to <literal>vmware</literal>. Other valid
hypervisor types include: <literal>xen</literal>,
<literal>qemu</literal>, <literal>lxc</literal>,
<literal>uml</literal>, and <literal>hyperv</literal>.
Note that <literal>qemu</literal> is used for both QEMU and
KVM hypervisor types.</para>
<screen><prompt>$</prompt> <userinput>glance image-create --name "ubuntu-thick-scsi" --disk-format vmdk \
--container-format bare \
--property vmware_adaptertype="lsiLogic" \
--property vmware_disktype="preallocated" \
--property hypervisor_type="vmware" \
--property vmware_ostype="ubuntu64Guest" &lt; ubuntuLTS-flat.vmdk</userinput></screen>
</section>
<section xml:id="VMware_optimizing_images">
<title>Optimize images</title>
<para>Monolithic Sparse disks are considerably faster to
download but have the overhead of an additional conversion
step. When imported into ESX, sparse disks get converted to
VMFS flat thin provisioned disks. The download and conversion
steps only affect the first launched instance that uses the
sparse disk image. The converted disk image is cached, so
subsequent instances that use this disk image can simply use
the cached version.</para>
<para>To avoid the conversion step (at the cost of longer
download times), consider converting sparse disks to thin
provisioned or preallocated disks before loading them into the
OpenStack Image Service.</para>
<para>Use one of the following tools to pre-convert sparse
disks.</para>
<variablelist>
<varlistentry>
<term><emphasis role="bold">vSphere CLI
tools</emphasis></term>
<listitem>
<para>Sometimes called the remote CLI or rCLI.</para>
<para>Assuming that the sparse disk is made available on a
data store accessible by an ESX host, the following
command converts it to preallocated format:</para>
<programlisting>vmkfstools --server=ip_of_some_ESX_host -i /vmfs/volumes/datastore1/sparse.vmdk /vmfs/volumes/datastore1/converted.vmdk</programlisting>
<para>Note that the vifs tool from the same CLI package can
be used to upload the disk to be converted. The vifs
tool can also be used to download the converted disk if
necessary.</para>
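<para>For example, assuming illustrative host, datastore, and
file names, the source disk can be uploaded with:</para>
<programlisting>vifs --server=ip_of_some_ESX_host --put sparse.vmdk '[datastore1] sparse.vmdk'</programlisting>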
</listitem>
</varlistentry>
<varlistentry><term><emphasis role="bold">vmkfstools directly on the ESX
host</emphasis></term> <listitem>
<para>If the SSH service is enabled on an ESX host, the
sparse disk can be uploaded to the ESX data store
through scp and the vmkfstools local to the ESX host can
use used to perform the conversion. After you log in to
the host through ssh, run this command:</para>
<programlisting>vmkfstools -i /vmfs/volumes/datastore1/sparse.vmdk /vmfs/volumes/datastore1/converted.vmdk</programlisting>
</listitem></varlistentry>
<varlistentry><term><emphasis role="bold">vmware-vdiskmanager</emphasis></term><listitem>
<para><code>vmware-vdiskmanager</code> is a utility that
comes bundled with VMware Fusion and VMware Workstation.
The following example converts a sparse disk to
preallocated format:</para>
<programlisting>'/Applications/VMware Fusion.app/Contents/Library/vmware-vdiskmanager' -r sparse.vmdk -t 4 converted.vmdk</programlisting>
</listitem></varlistentry>
</variablelist>
<para>In the previous cases, the converted vmdk is
actually a pair of files:</para> <itemizedlist><listitem>
<para>The descriptor file <emphasis role="italic"
>converted.vmdk</emphasis>.</para>
</listitem>
<listitem>
<para>The actual virtual disk data file <emphasis
role="italic">converted-flat.vmdk</emphasis>.</para>
</listitem></itemizedlist>
<para>The file to be uploaded to the OpenStack Image
Service is <emphasis role="italic"
>converted-flat.vmdk</emphasis>.</para>
</section>
<section xml:id="VMware_copying_images">
<title>Image handling</title>
<para>The ESX hypervisor requires a copy of the VMDK file in
order to boot up a virtual machine. As a result, the vCenter
OpenStack Compute driver must download the VMDK via HTTP from
the OpenStack Image Service to a data store that is visible to
the hypervisor. To optimize this process, the first time a
VMDK file is used, it gets cached in the data store.
Subsequent virtual machines that need the VMDK use the cached
version and do not have to copy the file again from the
OpenStack Image Service.</para>
<para>Even with a cached VMDK, there is still a copy operation
from the cache location to the hypervisor file directory in
the shared data store. To avoid this copy, boot the image in
linked_clone mode. To learn how to enable this mode, see <xref linkend="VMware_config"/>.</para>
<note><para>You can also use
the <code>vmware_linked_clone</code> property in the OpenStack
Image Service to
override the linked_clone mode on a per-image basis.</para></note>
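<para>For example, the following command, with an illustrative
image name, disables linked_clone mode for a single
image:</para>
<screen><prompt>$</prompt> <userinput>glance image-update ubuntu-sparse --property vmware_linked_clone="false"</userinput></screen>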
<para>You can automatically purge unused images after a
specified period of time. To configure this action, set these
options in the <literal>DEFAULT</literal> section in the
<filename>nova.conf</filename> file:</para>
<variablelist><varlistentry>
<term><parameter>remove_unused_base_images</parameter></term>
<listitem>
<para>Set this parameter to <userinput>True</userinput> to
specify that unused images should be removed after the
duration specified in the <parameter>remove_unused_original_minimum_age_seconds</parameter> parameter.
The default is <userinput>True</userinput>.</para></listitem>
</varlistentry>
<varlistentry>
<term><parameter>remove_unused_original_minimum_age_seconds</parameter></term>
<listitem><para>Specifies the duration in seconds after which an unused
image is purged from the cache. The default is
<userinput>86400</userinput> (24 hours).</para></listitem>
</varlistentry>
</variablelist>
</section>
</section>
<section xml:id="VMware_networking">
<title>Networking with VMware vSphere</title>
<para>The VMware driver supports networking with the <systemitem class="service">nova-network</systemitem> service or the
OpenStack Networking Service. Depending on your installation,
complete these configuration steps before you provision
VMs:</para>
<itemizedlist>
<listitem>
<para><emphasis role="bold">
The <systemitem class="service">nova-network</systemitem>
service with the FlatManager or FlatDHCPManager</emphasis>.
Create a port group with the same name as the
<literal>flat_network_bridge</literal> value in the
<filename>nova.conf</filename> file. The default value is
<literal>br100</literal>. If you specify another value,
the new value must be a valid Linux bridge identifier that
adheres to Linux bridge naming conventions.</para>
<para>All VM NICs are attached to this port group.</para>
<para>Ensure that the flat interface of the node that runs
the <systemitem class="service">nova-network</systemitem>
service has a path to this network.</para>
<note>
<para>When configuring the port binding for this port group
in vCenter, specify <literal>ephemeral</literal> for the
port binding type. For more information, see
<link xlink:href="http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&amp;cmd=displayKC&amp;externalId=1022312">Choosing a port binding
type in ESX/ESXi</link> in the VMware Knowledge Base.</para>
</note>
</listitem>
<listitem>
<para><emphasis role="bold">The <systemitem class="service">nova-network</systemitem> service with the
VlanManager</emphasis>. Set the
<literal>vlan_interface</literal> configuration option to
match the ESX host interface that handles VLAN-tagged VM
traffic.</para>
<para>OpenStack Compute automatically creates the
corresponding port groups.</para>
</listitem>
<listitem>
<para>If you are using the OpenStack Networking Service:
Before provisioning VMs, create a port group with the same
name as the <literal>vmware.integration_bridge</literal>
value in <filename>nova.conf</filename> (default is
<literal>br-int</literal>). All VM NICs are attached to
this port group for management by the OpenStack Networking
plug-in.</para>
</listitem>
</itemizedlist>
</section>
<section xml:id="VMware_volumes">
<title>Volumes with VMware vSphere</title>
<para>The VMware driver supports attaching volumes from the
OpenStack Block Storage service. The VMware VMDK driver for
OpenStack Block Storage is recommended and should be used for
managing volumes based on vSphere data stores. For more information
about the VMware VMDK driver, see <link xlink:href="http://docs.openstack.org/trunk/config-reference/content/vmware-vmdk-driver.html">VMware VMDK Driver</link>. An iSCSI volume
driver is also available, but it provides limited support and
can be used only for attachments.</para>
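<para>As a sketch, enabling the VMDK driver in the Block Storage
service's <filename>cinder.conf</filename> file looks like the
following; the placeholders stand for your vCenter
details:</para>
<programlisting language="ini">[DEFAULT]
volume_driver=cinder.volume.drivers.vmware.vmdk.VMwareVcVmdkDriver
vmware_host_ip=&lt;vCenter host IP&gt;
vmware_host_username=&lt;vCenter username&gt;
vmware_host_password=&lt;vCenter password&gt;</programlisting>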
</section>
<section xml:id="VMware_additional_config">
<title>vSphere 5.0 and earlier additional set up</title>
<para>Users of vSphere 5.0 or earlier must host their WSDL files
locally. These steps are applicable for vCenter 5.0 or ESXi 5.0
and you can either mirror the WSDL from the vCenter or ESXi
server that you intend to use or you can download the SDK
directly from VMware. These workaround steps fix a <link xlink:href="http://kb.vmware.com/selfservice/microsites/search.do?cmd=displayKC&amp;externalId=2010507">known issue</link> with the WSDL that was resolved in later
versions.</para>
<para>When setting the VMwareVCDriver configuration options, you
must include the <code>wsdl_location</code> option. For more
information, see
<link linkend="VMwareVCDriver_configuration_options">VMwareVCDriver
configuration options</link> above.</para>
<procedure>
<title>To mirror WSDL from vCenter (or ESXi)</title>
<step>
<para>Set the <code>VMWAREAPI_IP</code> shell variable to the
IP address for your vCenter or ESXi host from where you plan
to mirror files. For example:</para>
<screen><prompt>$</prompt> <userinput>export VMWAREAPI_IP=&lt;your_vsphere_host_ip&gt;</userinput></screen>
</step>
<step>
<para>Create a local file system directory to hold the WSDL
files:</para>
<screen><prompt>$</prompt> <userinput>mkdir -p /opt/stack/vmware/wsdl/5.0</userinput></screen>
</step>
<step>
<para>Change into the new directory.
<screen><prompt>$</prompt> <userinput>cd /opt/stack/vmware/wsdl/5.0</userinput> </screen></para>
</step>
<step>
<para>Use your OS-specific tools to install a command-line
tool that can download files like
<command>wget</command>.</para>
</step>
<step>
<para>Download the files to the local file cache:</para>
<programlisting language="bash">wget --no-check-certificate https://$VMWAREAPI_IP/sdk/vimService.wsdl
wget --no-check-certificate https://$VMWAREAPI_IP/sdk/vim.wsdl
wget --no-check-certificate https://$VMWAREAPI_IP/sdk/core-types.xsd
wget --no-check-certificate https://$VMWAREAPI_IP/sdk/query-messagetypes.xsd
wget --no-check-certificate https://$VMWAREAPI_IP/sdk/query-types.xsd
wget --no-check-certificate https://$VMWAREAPI_IP/sdk/vim-messagetypes.xsd
wget --no-check-certificate https://$VMWAREAPI_IP/sdk/vim-types.xsd
wget --no-check-certificate https://$VMWAREAPI_IP/sdk/reflect-messagetypes.xsd
wget --no-check-certificate https://$VMWAREAPI_IP/sdk/reflect-types.xsd</programlisting>
<para>Because the <filename>reflect-types.xsd</filename> and
<filename>reflect-messagetypes.xsd</filename> files do not
fetch properly, you must stub out these files. Use the
following XML listing to replace the missing file content.
The XML parser underneath Python can be very particular and
if you put a space in the wrong place, it can break the
parser. Copy the following contents and formatting
carefully.</para>
<programlisting language="xml">&lt;?xml version="1.0" encoding="UTF-8"?&gt;
&lt;schema
targetNamespace="urn:reflect"
xmlns="http://www.w3.org/2001/XMLSchema"
xmlns:xsd="http://www.w3.org/2001/XMLSchema"
elementFormDefault="qualified"&gt;
&lt;/schema&gt; </programlisting>
</step>
<step>
<para>Now that the files are locally present, tell the driver
to look for the SOAP service WSDLs in the local file system
and not on the remote vSphere server. Add the following
setting to the <filename>nova.conf</filename> file for your
<systemitem class="service">nova-compute</systemitem>
node:</para>
<programlisting language="ini">[vmware]
wsdl_location=file:///opt/stack/vmware/wsdl/5.0/vimService.wsdl</programlisting>
</step>
</procedure>
<para>Alternatively, download the version appropriate SDK from
<link xlink:href="http://www.vmware.com/support/developer/vc-sdk/">http://www.vmware.com/support/developer/vc-sdk/</link> and
copy it to the <filename>/opt/stack/vmware</filename> file. Make
sure that the WSDL is available, in for example
<filename>/opt/stack/vmware/SDK/wsdl/vim25/vimService.wsdl</filename>.
You must point <filename>nova.conf</filename> to fetch this WSDL
file from the local file system by using a URL.</para>
<para>When using the VMwareVCDriver (vCenter) with OpenStack
Compute with vSphere version 5.0 or earlier,
<filename>nova.conf</filename> must include the following
extra config option:</para>
<programlisting language="ini">[vmware]
wsdl_location=file:///opt/stack/vmware/SDK/wsdl/vim25/vimService.wsdl</programlisting>
</section>
<section xml:id="VMware_config">
<title>Configuration reference</title>
<para>To customize the VMware driver, use the configuration option settings
documented in <xref linkend="config_table_nova_vmware"/>.</para>
</section>
</section>