Configuring Migrations

Note: This feature is for cloud administrators only.
Migration allows an administrator to move a virtual machine instance from one compute host
to another. This feature is useful when a compute host requires maintenance. Migration can also
be useful to redistribute the load when many VM instances are running on a specific physical machine.

There are two types of migration:
- Migration (or non-live migration): the instance is shut down (and the instance is aware that it was rebooted) for a period of time while it is moved to another hypervisor.

- Live migration (or true live migration): almost no instance downtime. Useful when the instances must be kept running during the migration.

There are three types of live migration:
- Shared storage based live migration: both hypervisors have access to shared storage.

- Block live migration: no shared storage is required.

- Volume-backed live migration: when instances are backed by volumes rather than ephemeral disk, no shared storage is required, and migration is supported (currently only in libvirt-based hypervisors).

The following sections describe how to configure your hosts and compute nodes
for migrations using the KVM and XenServer hypervisors.
KVM-Libvirt

Prerequisites:

- Hypervisor: KVM with libvirt.

- Shared storage: NOVA-INST-DIR/instances/ (for example,
/var/lib/nova/instances) must be mounted on shared storage.
This guide uses NFS, but other options, including the OpenStack Gluster Connector, are available.

- Instances: instances can be migrated with iSCSI-based volumes.

Migrations done by the Compute service do not use libvirt's live migration
functionality by default. Because of this, guests are suspended before migration and may
therefore experience several minutes of downtime. See the section on True Migration for
KVM and Libvirt in the OpenStack Compute Administration Guide for more details.

This guide assumes the default value for instances_path in your
nova.conf (NOVA-INST-DIR/instances). If
you have changed the state_path or instances_path
variables, modify the commands accordingly.

Note: You must specify vncserver_listen=0.0.0.0 or live migration does not work correctly.

Example Nova Installation Environment

Prepare at least three servers; for example, HostA, HostB
and HostC.

- HostA is the "Cloud Controller", and should be running: nova-api,
nova-scheduler, nova-network, cinder-volume, and
nova-objectstore.

- HostB and HostC are the "compute nodes", running nova-compute.

Ensure that NOVA-INST-DIR (set with state_path in nova.conf) is the same on
all hosts.

In this example, HostA is the NFSv4 server that exports NOVA-INST-DIR/instances,
and HostB and HostC mount it.

System configuration

Configure your DNS or /etc/hosts and
ensure it is consistent across all hosts. Make sure that the three hosts
can perform name resolution with each other. As a test,
use the ping command to ping each host from one
another:

$ ping HostA
$ ping HostB
$ ping HostC

Ensure that the UID and GID of your nova and libvirt users
are identical on each of your servers. This ensures that the permissions
on the NFS mount work correctly.

Follow the instructions at
the Ubuntu NFS HowTo to
set up an NFS server on HostA, and NFS clients on HostB and HostC. The aim is to export NOVA-INST-DIR/instances from HostA,
and have it readable and writable by the nova user on HostB and HostC.
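Because this setup uses NFSv4, the idmapd domain must also match between the server and the clients; a mismatch is a common cause of files appearing owned by nobody:nogroup even when the UIDs agree. A minimal illustrative fragment (the Domain value shown is a placeholder; use your own):

```
# /etc/idmapd.conf (on HostA, HostB, and HostC)
[General]
# Must be identical on the NFS server and all clients; example value only
Domain = example.com
```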
Using your knowledge from the Ubuntu documentation, configure the
NFS server at HostA by adding the following line to /etc/exports:

NOVA-INST-DIR/instances HostA/255.255.0.0(rw,sync,fsid=0,no_root_squash)

Change the subnet mask (255.255.0.0) to the appropriate
value to include the IP addresses of HostB and HostC. Then
restart the NFS server:

$ /etc/init.d/nfs-kernel-server restart
$ /etc/init.d/idmapd restart

Set the 'execute/search' bit on your shared directory. On both compute nodes, make sure to enable the
'execute/search' bit to allow qemu to use the images
within the directories. On all hosts, execute the
following command:

$ chmod o+x NOVA-INST-DIR/instances

Configure NFS at HostB and HostC by adding the following line to
/etc/fstab:

HostA:/ /NOVA-INST-DIR/instances nfs4 defaults 0 0

Then ensure that the exported
directory can be mounted:

$ mount -a -v

Check that the "NOVA-INST-DIR/instances/"
directory can be seen at HostA:

$ ls -ld NOVA-INST-DIR/instances/
drwxr-xr-x 2 nova nova 4096 2012-05-19 14:34 nova-install-dir/instances/

Perform the same check at HostB and HostC, paying special
attention to the permissions (nova should be able to write):

$ ls -ld NOVA-INST-DIR/instances/
drwxr-xr-x 2 nova nova 4096 2012-05-07 14:34 nova-install-dir/instances/

$ df -k
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/sda1 921514972 4180880 870523828 1% /
none 16498340 1228 16497112 1% /dev
none 16502856 0 16502856 0% /dev/shm
none 16502856 368 16502488 1% /var/run
none 16502856 0 16502856 0% /var/lock
none 16502856 0 16502856 0% /lib/init/rw
HostA: 921515008 101921792 772783104 12% /var/lib/nova/instances (<--- this line is important)

Update the libvirt configurations. Modify
/etc/libvirt/libvirtd.conf:

before: #listen_tls = 0
after:  listen_tls = 0

before: #listen_tcp = 1
after:  listen_tcp = 1

add: auth_tcp = "none"

Modify /etc/libvirt/qemu.conf:

before: #dynamic_ownership = 1
after:  dynamic_ownership = 0

Modify /etc/init/libvirt-bin.conf:

before: exec /usr/sbin/libvirtd -d
after:  exec /usr/sbin/libvirtd -d -l

Modify /etc/default/libvirt-bin:

before: libvirtd_opts=" -d"
after:  libvirtd_opts=" -d -l"

Restart libvirt. After executing the command, ensure
that libvirt is successfully restarted:

$ stop libvirt-bin && start libvirt-bin
$ ps -ef | grep libvirt
root 1145 1 0 Nov27 ? 00:00:03 /usr/sbin/libvirtd -d -l

Configure your firewall to allow libvirt to communicate between nodes. Information about the ports used by libvirt can be found in the libvirt documentation.
By default, libvirt listens on TCP port 16509, and an ephemeral TCP range from 49152 to
49261 is used for the KVM communications. Because this guide has disabled libvirt authentication, you
should take good care that these ports are only open to hosts within your installation.
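As a sketch, assuming an iptables-based firewall and a management network of 192.168.0.0/24 (both are assumptions; adapt them to your environment), the corresponding rules could look like:

```
# iptables rules fragment (iptables-save format); the subnet is an example only
-A INPUT -s 192.168.0.0/24 -p tcp --dport 16509 -j ACCEPT
-A INPUT -s 192.168.0.0/24 -p tcp --dport 49152:49261 -j ACCEPT
```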
You can now configure options for live migration. In
most cases, you do not need to configure any options. The
following chart is for advanced usage only.

Enabling true live migration

By default, the Compute service does not use libvirt's live migration functionality. To
enable this functionality, add the following line to nova.conf:

live_migration_flag=VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE

The Compute service does not use libvirt's live migration by default because there is a risk that
the migration process never ends. This can happen if the guest operating system
dirties blocks on the disk faster than they can be migrated.

XenServer

Shared Storage

Prerequisites:

- Compatible XenServer hypervisors. For more information,
please refer to the Requirements for Creating Resource Pools
section of the XenServer Administrator's Guide.
- Shared storage: an NFS export,
visible to all XenServer hosts.
Check the NFS VHD
section of the XenServer Administrator's Guide for the supported
NFS versions.
To use shared storage live migration with XenServer hypervisors,
the hosts must be joined to a XenServer pool. To create that pool,
a host aggregate must be created with special metadata. This metadata is used by the XAPI plugins to establish the pool.
Add an NFS VHD storage to your master XenServer, and set it as default SR. For more information, please refer to the
NFS VHD section of the XenServer Administrator's Guide.
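As an illustrative sketch (the server address, export path, and UUIDs are placeholders), such an SR can be created on the master with the xe CLI and then made the pool default:

```
# Run on the XenServer master; <nfs-server> and <export-path> are placeholders
xe sr-create content-type=user type=nfs shared=true \
    name-label=nova-nfs \
    device-config:server=<nfs-server> \
    device-config:serverpath=<export-path>
# Set the new SR as the pool default, using the UUID printed by sr-create
xe pool-param-set default-SR=<sr-uuid> uuid=<pool-uuid>
```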
Configure all the compute nodes to use the default SR for pool operations by including:

sr_matching_filter=default-sr:true

in the nova.conf configuration files across your compute nodes.
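For context, a minimal sketch of the relevant nova.conf section on a XenServer compute node might look like the following (the connection values are placeholders, and the xenapi option names shown are assumed to match your release):

```
# nova.conf fragment (compute nodes); placeholder values
compute_driver=xenapi.XenAPIDriver
xenapi_connection_url=https://<xenserver-management-ip>
xenapi_connection_username=root
xenapi_connection_password=<password>
# Use the pool's default SR for disk operations
sr_matching_filter=default-sr:true
```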
Create a host aggregate:

$ nova aggregate-create <name-for-pool> <availability-zone>
The command displays a table which contains the id of the newly created aggregate.
Now add special metadata to the aggregate, to mark it as a hypervisor pool:

$ nova aggregate-set-metadata <aggregate-id> hypervisor_pool=true
$ nova aggregate-set-metadata <aggregate-id> operational_state=created
Make the first compute node part of that aggregate:

$ nova aggregate-add-host <aggregate-id> <name-of-master-compute>
At this point, the host is part of a XenServer pool.
Add additional hosts to the pool:

$ nova aggregate-add-host <aggregate-id> <compute-host-name>

Note: The added compute node and the host are shut down in order to
join the host to the XenServer pool. The operation fails if any server other than the
compute node is running or suspended on your host.

Block migration

Prerequisites:

- Compatible XenServer hypervisors. The hypervisors must support the Storage XenMotion feature. Please refer
to the manual of your XenServer to make sure your edition has this feature.
Note that you need to pass an extra option, --block-migrate, to the live migration
command in order to use block migration.

Block migration works only with EXT local storage SRs,
and the server should not have any volumes attached.
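Once configured, a live migration is triggered with the nova client; for block migration, pass the extra flag described above. For example (the server and host names are placeholders):

```
# Shared-storage or volume-backed live migration
$ nova live-migration <server> <target-host>
# Block migration (no shared storage required)
$ nova live-migration --block-migrate <server> <target-host>
```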