Configure migrations

Note: Only cloud administrators can perform live migrations. If your cloud is configured to use cells, you can perform live migration within but not between cells.

Migration enables an administrator to move a virtual machine instance from one compute host to another. This feature is useful when a compute host requires maintenance. Migration can also be useful to redistribute the load when many VM instances are running on a specific physical machine.

The migration types are:

- Migration (or non-live migration). The instance is shut down (and the instance knows that it was rebooted) for a period of time while it is moved to another hypervisor.

- Live migration (or true live migration). Almost no instance downtime. Useful when the instances must be kept running during the migration.

The types of live migration are:

- Shared storage-based live migration. Both hypervisors have access to shared storage.

- Block live migration. No shared storage is required.

- Volume-backed live migration. When instances are backed by volumes rather than ephemeral disk, no shared storage is required, and migration is supported (currently only in libvirt-based hypervisors).

The following sections describe how to configure your hosts
and compute nodes for migrations by using the KVM and XenServer hypervisors.

KVM-Libvirt

Prerequisites

- Hypervisor: KVM with libvirt.

- Shared storage: NOVA-INST-DIR/instances/ (for example, /var/lib/nova/instances) has to be mounted by shared storage. This guide uses NFS, but other options, including the OpenStack Gluster Connector, are available.

- Instances: Instances can be migrated with iSCSI-based volumes.

Notes

- Because the Compute service does not use the libvirt live migration functionality by default, guests are suspended before migration and might experience several minutes of downtime. For details, see Enable true live migration below.

- This guide assumes the default instance storage path, NOVA-INST-DIR/instances, in your nova.conf file. If you have changed the state_path or instances_path variables, modify the commands accordingly.

- You must specify vncserver_listen=0.0.0.0 or live migration does not work correctly.

Example Compute installation environment

Prepare at least three servers; for example,
HostA, HostB, and HostC.

- HostA is the Cloud Controller, and should run these services: nova-api, nova-scheduler, nova-network, cinder-volume, and nova-objectstore.

- HostB and HostC are the compute nodes that run nova-compute.

Ensure that NOVA-INST-DIR (set with state_path in the nova.conf file) is the same on all hosts.

In this example, HostA is the NFSv4 server that exports NOVA-INST-DIR/instances, and HostB and HostC
mount it.

To configure your system

1. Configure your DNS or /etc/hosts and ensure that it is consistent across all hosts. Make sure that the three hosts can perform name resolution with each other. As a test, use the ping command to ping each host from one another:

   $ ping HostA
   $ ping HostB
   $ ping HostC

2. Ensure that the UID and GID of your nova and libvirt users are identical on each of your servers. This ensures that the permissions on the NFS mount work correctly.

3. Follow the instructions at the Ubuntu NFS HowTo to set up an NFS server on
HostA, and NFS clients on
HostB and
HostC. The aim is to export NOVA-INST-DIR/instances from HostA, and have it readable and writable by the nova user on HostB and HostC.

4. Using your knowledge from the Ubuntu documentation, configure the NFS server at HostA by adding this line to the /etc/exports file:

   NOVA-INST-DIR/instances HostA/255.255.0.0(rw,sync,fsid=0,no_root_squash)

   Change the subnet mask (255.255.0.0) to the appropriate value to include the IP addresses of HostB and HostC.
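As a quick sanity check, you can confirm that the mask you chose actually covers the compute hosts before restarting NFS. A minimal sketch using Python's standard ipaddress module; all addresses below are made-up examples, not values from this guide:

```python
import ipaddress

# Hypothetical addresses for the example hosts; substitute your own.
host_a = "192.168.1.10"                            # NFS server (HostA)
compute_hosts = ["192.168.1.11", "192.168.1.12"]   # HostB, HostC

# The exports entry "HostA/255.255.0.0(...)" grants access to the network
# containing HostA under that mask (a /16 in this example).
network = ipaddress.ip_network(f"{host_a}/255.255.0.0", strict=False)

for host in compute_hosts:
    inside = ipaddress.ip_address(host) in network
    print(f"{host}: {'covered by export' if inside else 'NOT covered'}")
```

If a compute host prints as not covered, widen the mask (or list the host explicitly in /etc/exports) before continuing.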
   Then restart the NFS server:

   $ /etc/init.d/nfs-kernel-server restart
   $ /etc/init.d/idmapd restart

5. Set the 'execute/search' bit on your shared
directory. On both compute nodes, make sure to enable the 'execute/search' bit to allow qemu to use the images within the directories. On all hosts, run the following command:

   $ chmod o+x NOVA-INST-DIR/instances

6. Configure NFS at HostB and HostC by adding this line to
the /etc/fstab file:

   HostA:/ /NOVA-INST-DIR/instances nfs4 defaults 0 0

   Make sure that the exported directory can be mounted:

   $ mount -a -v

   Check that HostA can see the NOVA-INST-DIR/instances/ directory:

   $ ls -ld NOVA-INST-DIR/instances/
   drwxr-xr-x 2 nova nova 4096 2012-05-19 14:34 nova-install-dir/instances/

   Perform the same check at HostB and HostC, paying special attention to the permissions (nova should be able to write):

   $ ls -ld NOVA-INST-DIR/instances/
   drwxr-xr-x 2 nova nova 4096 2012-05-07 14:34 nova-install-dir/instances/

   $ df -k
   Filesystem 1K-blocks Used Available Use% Mounted on
   /dev/sda1 921514972 4180880 870523828 1% /
   none 16498340 1228 16497112 1% /dev
   none 16502856 0 16502856 0% /dev/shm
   none 16502856 368 16502488 1% /var/run
   none 16502856 0 16502856 0% /var/lock
   none 16502856 0 16502856 0% /lib/init/rw
   HostA: 921515008 101921792 772783104 12% /var/lib/nova/instances (<--- this line is important)

7. Update the libvirt configuration so that the calls can be made securely. The following methods enable remote access over TCP; they are not documented here, so consult your network administrator for assistance in deciding how to configure access:

   - SSH tunnel to libvirtd's UNIX socket
   - libvirtd TCP socket, with GSSAPI/Kerberos for auth+data encryption
   - libvirtd TCP socket, with TLS for encryption and x509 client certs for authentication
   - libvirtd TCP socket, with TLS for encryption and Kerberos for authentication

8. Restart libvirt. After you run the command, ensure that
libvirt is successfully restarted:

   $ stop libvirt-bin && start libvirt-bin
   $ ps -ef | grep libvirt
   root 1145 1 0 Nov27 ? 00:00:03 /usr/sbin/libvirtd -d -l

9. Configure your firewall to allow libvirt to communicate between nodes. For information about the ports that are used with libvirt, see the libvirt documentation. By default, libvirt listens on TCP port 16509, and an ephemeral TCP range from 49152 to 49261 is used for the KVM communications. Based on the secure remote access TCP configuration you chose, be careful which ports you open and understand who has access.

10. You can now configure options for live migration. In most cases, you do not need to configure any options; the settings that follow are for advanced usage only.

Enable true live migration

By default, the Compute service does not use the libvirt
live migration functionality. To enable this functionality, add the following line to the nova.conf file:

   live_migration_flag=VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE

The Compute service does not use libvirt's live migration by default because there is a risk that the migration process never ends. This can happen if the guest operating system dirties blocks on the disk faster than they can be migrated.

XenServer

Shared storage

Prerequisites

- Compatible XenServer
hypervisors. For more information, see the
Requirements for Creating Resource Pools section
of the XenServer Administrator's
Guide.

- Shared storage. An NFS export, visible to all XenServer hosts. For the supported NFS versions, see the NFS VHD section of the XenServer Administrator's Guide.

To use shared storage live migration with XenServer hypervisors, the hosts must be joined to a XenServer pool. To create that pool, a host aggregate must be created with special metadata. This metadata is used by the XAPI plug-ins to establish the pool.

To use shared storage live migration with XenServer
hypervisors:

1. Add an NFS VHD storage to your master XenServer, and set it as the default SR. For more information, refer to the NFS VHD section in the XenServer Administrator's Guide.

2. Configure all the compute nodes to use the default SR for pool operations. Add this line to the nova.conf configuration files across your compute nodes:

   sr_matching_filter=default-sr:true

3. Create a host aggregate:

   $ nova aggregate-create <name-for-pool> <availability-zone>

   The command displays a table that contains the ID of
the newly created aggregate.

4. Add special metadata to the aggregate, to mark it as a hypervisor pool:

   $ nova aggregate-set-metadata <aggregate-id> hypervisor_pool=true
   $ nova aggregate-set-metadata <aggregate-id> operational_state=created

5. Make the first compute node part of that aggregate:

   $ nova aggregate-add-host <aggregate-id> <name-of-master-compute>

   At this point, the host is part of a XenServer pool.

6. Add additional hosts to the pool:

   $ nova aggregate-add-host <aggregate-id> <compute-host-name>

   At this point, the added compute node and the host are shut down to join the host to the XenServer pool. The operation fails if any server other than the compute node is running or suspended on your host.

Block migration

Prerequisites

- Compatible XenServer
hypervisors. The hypervisors must support the
Storage XenMotion feature. See your XenServer manual to
make sure your edition has this feature.

Notes

- To use block migration, you must use the --block-migrate parameter with the live migration command.

- Block migration works only with EXT local storage SRs, and the server must not have any volumes attached.
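As an illustration of the note above, with a hypothetical server named vm1 and target host HostB, a block live migration is requested by passing --block-migrate to the live migration command:

   $ nova live-migration --block-migrate vm1 HostB

The target host argument is optional; if you omit it, the scheduler chooses a suitable host. Afterwards, you can run nova show vm1 to confirm that the instance returned to ACTIVE status on the new host.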