VMware vSphere
Introduction

OpenStack Compute supports the VMware vSphere product family. This section describes the additional configuration required to launch VMware-based virtual machine images. vSphere versions 4.1 and later are supported. There are two OpenStack Compute drivers that can be used with vSphere:

vmwareapi.VMwareVCDriver: a driver that lets nova-compute communicate with a VMware vCenter server managing a cluster of ESX hosts. With this driver and access to shared storage, advanced vSphere features such as vMotion, High Availability, and Distributed Resource Scheduler (DRS) are available. With this driver, one nova-compute service is run per vCenter cluster.

vmwareapi.VMwareESXDriver: a driver that lets nova-compute communicate directly with an ESX host, but does not support advanced VMware features. With this driver, one nova-compute service is run per ESX host.
Prerequisites

You will need the following software installed on each nova-compute node:

python-suds: needed by the nova-compute service to communicate with the vSphere APIs. If it is not installed, the nova-compute service shuts down with the message "Unable to import suds".

On Ubuntu, this package can be installed by running:

    $ sudo apt-get install python-suds
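After installing the package, a quick sanity check (a sketch, not part of the official setup) is to confirm that the suds module is importable by the Python interpreter that will run nova-compute:

```shell
# Prints "suds OK" if the module imports, "suds missing" otherwise.
python -c "import suds" 2>/dev/null && echo "suds OK" || echo "suds missing"
```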
Using the VMwareVCDriver This section covers details of using the VMwareVCDriver.
VMwareVCDriver configuration options

When using the VMwareVCDriver (that is, vCenter) with OpenStack Compute, nova.conf must include the following VMware-specific config options:

    [DEFAULT]
    compute_driver=vmwareapi.VMwareVCDriver

    [vmware]
    host_ip=<vCenter host IP>
    host_username=<vCenter username>
    host_password=<vCenter password>
    cluster_name=<vCenter cluster name>
    wsdl_location=http://127.0.0.1:8080/vmware/SDK/wsdl/vim25/vimService.wsdl

Remember that you will have only one nova-compute service per cluster. It is recommended that this host run as a VM with high availability enabled, as part of that same cluster. Also note that many of the nova.conf options mentioned elsewhere in this document that are relevant to libvirt do not apply when using this driver.
vSphere 5.0 (and below) additional setup

Users of vSphere 5.0 or earlier will need to host their WSDL files locally. These steps apply to vCenter 5.0 or ESXi 5.0; you can accomplish this either by mirroring the WSDL from the vCenter or ESXi server you intend to use, or by downloading the SDK directly from VMware. Both are workarounds for a known issue with the WSDL that was resolved in later versions.

To mirror the WSDL from vCenter (or ESXi), create a local file system directory to hold the WSDL files. You will need the IP address of the vCenter or ESXi host from which you will mirror the files. Setting the following shell variable lets you cut and paste commands from these instructions:

    $ export VMWAREAPI_IP=<your_vsphere_host_ip>
    $ mkdir -p /opt/stack/vmware/wsdl/5.0

Change into the new directory:

    $ cd /opt/stack/vmware/wsdl/5.0

Next, mirror all the WSDL files. You will need a tool that can download them:

    $ sudo apt-get install wget

Now that you have a tool to fetch the files, download them to the local file cache:

    wget --no-check-certificate https://$VMWAREAPI_IP/sdk/vimService.wsdl
    wget --no-check-certificate https://$VMWAREAPI_IP/sdk/vim.wsdl
    wget --no-check-certificate https://$VMWAREAPI_IP/sdk/core-types.xsd
    wget --no-check-certificate https://$VMWAREAPI_IP/sdk/query-messagetypes.xsd
    wget --no-check-certificate https://$VMWAREAPI_IP/sdk/query-types.xsd
    wget --no-check-certificate https://$VMWAREAPI_IP/sdk/vim-messagetypes.xsd
    wget --no-check-certificate https://$VMWAREAPI_IP/sdk/reflect-messagetypes.xsd
    wget --no-check-certificate https://$VMWAREAPI_IP/sdk/reflect-types.xsd

Two files, reflect-types.xsd and reflect-messagetypes.xsd, will not fetch properly and will need to be stubbed out. The following XML listing can be used to replace the missing file content.
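The repeated wget calls above can be condensed into a small helper function, sketched below. The function name and the second parameter (which lets you substitute the download command) are illustrative, not part of the official procedure:

```shell
# Hypothetical helper: mirror the eight SDK files from a vSphere host.
# $1 = host IP/name; $2 (optional) = download command, defaulting to wget.
mirror_wsdl() {
    host="$1"
    fetch="${2:-wget --no-check-certificate}"
    for f in vimService.wsdl vim.wsdl core-types.xsd \
             query-messagetypes.xsd query-types.xsd \
             vim-messagetypes.xsd reflect-messagetypes.xsd reflect-types.xsd; do
        $fetch "https://$host/sdk/$f"
    done
}
# With the variable set as above, run: mirror_wsdl "$VMWAREAPI_IP"
```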
The XML parser underneath Python can be very particular, and a space in the wrong place can break it. Copy the contents below carefully, preserving the formatting exactly:

    <?xml version="1.0" encoding="UTF-8"?>
    <schema
       targetNamespace="urn:reflect"
       xmlns="http://www.w3.org/2001/XMLSchema"
       xmlns:xsd="http://www.w3.org/2001/XMLSchema"
       elementFormDefault="qualified">
    </schema>

Now that the files are present locally, you need to tell the driver to look for the SOAP service WSDLs in the local file system rather than on the remote vSphere server. Add the following setting to nova.conf on your nova-compute node:

    [vmware]
    wsdl_location=file:///opt/stack/vmware/wsdl/5.0/vimService.wsdl

Alternatively, download the version-appropriate SDK from http://www.vmware.com/support/developer/vc-sdk/ and copy it into /opt/stack/vmware. Ensure that the WSDL is available, for example at /opt/stack/vmware/SDK/wsdl/vim25/vimService.wsdl. nova.conf will then point at this WSDL file on the local file system using a URL. When using the VMwareVCDriver (that is, vCenter) with OpenStack Compute and vSphere version 5.0 or below, nova.conf must include the following extra config option:

    [vmware]
    wsdl_location=file:///opt/stack/vmware/SDK/wsdl/vim25/vimService.wsdl
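Since the parser is so strict about the stub's formatting, it can help to verify the stub before pointing the driver at it. The sketch below writes the stub content and parses it with Python's standard library (the /tmp path is for illustration; in practice write to reflect-types.xsd and reflect-messagetypes.xsd in /opt/stack/vmware/wsdl/5.0):

```shell
# Write the stub content to a file.
cat > /tmp/reflect-types.xsd <<'EOF'
<?xml version="1.0" encoding="UTF-8"?>
<schema targetNamespace="urn:reflect"
   xmlns="http://www.w3.org/2001/XMLSchema"
   xmlns:xsd="http://www.w3.org/2001/XMLSchema"
   elementFormDefault="qualified">
</schema>
EOF
# Parse it; any formatting mistake raises an error instead of printing "OK".
PY=$(command -v python3 || command -v python)
"$PY" -c "import xml.dom.minidom; xml.dom.minidom.parse('/tmp/reflect-types.xsd')" \
  && echo "stub parses OK"
```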
Requirements + Limitations

The VMwareVCDriver is new in Grizzly, and as a result there are some important deployment requirements and limitations to be aware of. In many cases, these items will be addressed in future releases.

- Each cluster can only be configured with a single Datastore. If multiple Datastores are configured, the first one returned via the vSphere API will be used.
- Because a single nova-compute service is used per cluster, the nova-scheduler views it as a single host with resources amounting to the aggregate resources of all ESX hosts managed by the cluster. This may result in unexpected behavior, depending on your choice of scheduler.
- Security Groups are not supported if nova-network is used. Security Groups are only supported if the VMware driver is used in conjunction with the OpenStack Networking Service running the Nicira NVP plugin.
Using the VMwareESXDriver This section covers details of using the VMwareESXDriver.
VMwareESXDriver configuration options

When using the VMwareESXDriver (that is, no vCenter) with OpenStack Compute, configure nova.conf with the following VMware-specific config options:

    [DEFAULT]
    compute_driver=vmwareapi.VMwareESXDriver

    [vmware]
    host_ip=<ESXi host IP>
    host_username=<ESXi host username>
    host_password=<ESXi host password>
    wsdl_location=http://127.0.0.1:8080/vmware/SDK/wsdl/vim25/vimService.wsdl

Remember that you will have one nova-compute service per ESXi host. It is recommended that this host run as a VM on the same ESXi host it is managing. Also note that many of the nova.conf options mentioned elsewhere in this document that are relevant to libvirt do not apply when using this driver.
Requirements + Limitations

The ESXDriver is unable to take advantage of many of the advanced capabilities of the vSphere platform, namely vMotion, High Availability, and Distributed Resource Scheduler (DRS).
Images with VMware vSphere

When using either VMware driver, images should be uploaded to the OpenStack Image Service in the VMDK format. Both thick and thin images are currently supported, and all images must be flat (that is, contained within a single file). For example, to load a thick image with a SCSI adapter:

    $ glance image-create name="ubuntu-thick-scsi" disk_format=vmdk container_format=bare \
      is_public=true --property vmware_adaptertype="lsiLogic" \
      --property vmware_disktype="preallocated" \
      --property vmware_ostype="ubuntu64Guest" < ubuntuLTS-flat.vmdk

To load a thin image with an IDE adapter:

    $ glance image-create name="ubuntu-thin-ide" disk_format=vmdk container_format=bare \
      is_public=true --property vmware_adaptertype="ide" \
      --property vmware_disktype="thin" \
      --property vmware_ostype="ubuntu64Guest" < ubuntuLTS-thin-flat.vmdk

The complete list of supported VMware disk properties is documented in the Image Management section. It is critical that the adaptertype is correct; the image will not boot with an incorrect adaptertype. If you have the metadata VMDK file, its ddb.adapterType property specifies the adaptertype. The default adaptertype is "lsiLogic", which is SCSI.
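Since the adaptertype is so critical, it can be worth reading it out of the descriptor before uploading. The helper below is a sketch (the function name and demo file are hypothetical); it extracts the ddb.adapterType value from a VMDK descriptor file:

```shell
# Hypothetical helper: print the adapter type recorded in a VMDK descriptor.
vmdk_adapter_type() {
    sed -n 's/^ddb\.adapterType *= *"\(.*\)"/\1/p' "$1"
}

# Demo against a minimal fake descriptor (real descriptors carry many more keys):
printf 'ddb.adapterType = "lsilogic"\n' > /tmp/demo-descriptor.vmdk
vmdk_adapter_type /tmp/demo-descriptor.vmdk
```

The printed value tells you which vmware_adaptertype to pass to glance image-create.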
Networking with VMware vSphere

The VMware driver supports networking with both nova-network and the OpenStack Networking Service.

- If using nova-network with the FlatManager or FlatDHCPManager, before provisioning VMs, create a port group with the same name as the 'flat_network_bridge' value in nova.conf (default is 'br100'). All VM NICs will be attached to this port group.
- If using nova-network with the VlanManager, before provisioning VMs, make sure the 'vlan_interface' configuration option is set to match the ESX host interface that will handle VLAN-tagged VM traffic. OpenStack Compute will automatically create the corresponding port groups.
- If using the OpenStack Networking Service, before provisioning VMs, create a port group with the same name as the 'vmware.integration_bridge' value in nova.conf (default is 'br-int'). All VM NICs will be attached to this port group for management by the OpenStack Networking plugin.
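As a concrete illustration of the flat-networking case, a minimal nova.conf fragment might look like the following. This is a sketch: the br100 value is simply the default mentioned above, and the matching port group must already exist on the ESX host's vSwitch under that same name.

```
[DEFAULT]
network_manager=nova.network.manager.FlatDHCPManager
flat_network_bridge=br100
```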
Volumes with VMware vSphere

The VMware driver has limited support for attaching volumes from the OpenStack Block Storage service; attachments are supported only if the volume driver type is 'iscsi'. There is no support for volumes based on vCenter Datastores in this release.
Configuration Reference