<section xml:id="section_configuring-compute-migrations"
  xmlns="http://docbook.org/ns/docbook"
  xmlns:xlink="http://www.w3.org/1999/xlink"
  xmlns:xi="http://www.w3.org/2001/XInclude"
  xmlns:ns5="http://www.w3.org/1999/xhtml"
  xmlns:ns4="http://www.w3.org/2000/svg"
  xmlns:ns3="http://www.w3.org/1998/Math/MathML"
  xmlns:ns="http://docbook.org/ns/docbook"
  version="5.0">
  <?dbhtml stop-chunking?>
  <title>Configure migrations</title>
  <note>
    <para>This feature is for cloud administrators only. If your cloud is
      configured to use cells, you can perform live migration within a cell,
      but not between cells.</para>
  </note>
  <para>Migration allows an administrator to move a virtual machine instance
    from one compute host to another. This feature is useful when a compute
    host requires maintenance. Migration can also be useful to redistribute
    the load when many VM instances are running on a specific physical
    machine.</para>
  <para>There are two types of migration:
    <itemizedlist>
      <listitem>
        <para><emphasis role="bold">Migration</emphasis> (or non-live
          migration): The instance is shut down for a period of time to be
          moved to another hypervisor. In this case, the instance recognizes
          that it was rebooted.</para>
      </listitem>
      <listitem>
        <para><emphasis role="bold">Live migration</emphasis> (or true live
          migration): The instance experiences almost no downtime. Use this
          type when the instances must be kept running during the
          migration.</para>
      </listitem>
    </itemizedlist>
  </para>
  <para>There are three types of <emphasis role="bold">live
    migration</emphasis>:
    <itemizedlist>
      <listitem>
        <para><emphasis role="bold">Shared storage based live
          migration</emphasis>: Both hypervisors have access to shared
          storage.</para>
      </listitem>
      <listitem>
        <para><emphasis role="bold">Block live migration</emphasis>: No shared
          storage is required.</para>
      </listitem>
      <listitem>
        <para><emphasis role="bold">Volume-backed live migration</emphasis>:
          When instances are backed by volumes, rather than ephemeral disk, no
          shared storage is required, and migration is supported (currently
          only in libvirt-based hypervisors).</para>
      </listitem>
    </itemizedlist>
  </para>
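  <para>Once migration is configured, either type can be triggered with the
    <command>nova</command> client. A minimal sketch, with placeholder server
    and host names:</para>
  <screen><prompt>$</prompt> <userinput>nova migrate <replaceable>SERVER</replaceable></userinput>
<prompt>$</prompt> <userinput>nova live-migration <replaceable>SERVER</replaceable> <replaceable>HOST</replaceable></userinput></screen>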
  <para>The following sections describe how to configure your hosts and
    compute nodes for migrations using the KVM and XenServer
    hypervisors.</para>
<section xml:id="configuring-migrations-kvm-libvirt">
|
|
<title>KVM-Libvirt</title>
|
|
<para><emphasis role="bold">Prerequisites</emphasis>
|
|
<itemizedlist>
|
|
<listitem>
|
|
<para><emphasis role="bold">Hypervisor:</emphasis> KVM with libvirt</para>
|
|
</listitem>
|
|
<listitem>
|
|
<para><emphasis role="bold">Shared storage:</emphasis>
|
|
<filename><replaceable>NOVA-INST-DIR</replaceable>/instances/</filename> (for example,
|
|
<filename>/var/lib/nova/instances</filename>) has to be mounted by shared storage.
|
|
This guide uses NFS but other options, including the <link
|
|
xlink:href="http://gluster.org/community/documentation//index.php/OSConnect"
|
|
>OpenStack Gluster Connector</link> are available.</para>
|
|
</listitem>
|
|
<listitem>
|
|
<para><emphasis role="bold">Instances:</emphasis> Instance can be migrated with iSCSI
|
|
based volumes</para>
|
|
</listitem>
|
|
</itemizedlist>
      <note>
        <para>Because the Compute service does not use libvirt's live
          migration functionality by default, guests are suspended before
          migration and may therefore experience several minutes of downtime.
          See the <xref linkend="true-live-migration-kvm-libvirt"/> section
          for more details.</para>
      </note>
      <note>
        <para>This guide assumes the default value for
          <literal>instances_path</literal> in your
          <filename>nova.conf</filename>
          (<filename><replaceable>NOVA-INST-DIR</replaceable>/instances</filename>).
          If you have changed the <literal>state_path</literal> or
          <literal>instances_path</literal> variables, modify the commands
          accordingly.</para>
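        <para>For reference, the defaults are typically equivalent to the
          following (verify against your distribution's packaged
          <filename>nova.conf</filename>):</para>
        <programlisting>state_path=/var/lib/nova
instances_path=$state_path/instances</programlisting>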
</note>
      <note>
        <para>You must specify <literal>vncserver_listen=0.0.0.0</literal> or
          live migration does not work correctly.</para>
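        <para>For example, the relevant line in
          <filename>nova.conf</filename>:</para>
        <programlisting>vncserver_listen=0.0.0.0</programlisting>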
</note>
    </para>
<para><emphasis role="bold">Example Compute Installation Environment</emphasis> <itemizedlist>
|
|
<listitem>
|
|
<para>Prepare 3 servers at least; for example, <literal>HostA</literal>, <literal>HostB</literal>
|
|
and <literal>HostC</literal></para>
|
|
</listitem>
|
|
|
|
<listitem>
|
|
<para><literal>HostA</literal> is the "Cloud Controller", and should be running: <systemitem class="service">nova-api</systemitem>,
|
|
<systemitem class="service">nova-scheduler</systemitem>, <literal>nova-network</literal>, <systemitem class="service">cinder-volume</systemitem>,
|
|
<literal>nova-objectstore</literal>.</para>
|
|
</listitem>
|
|
|
|
<listitem>
|
|
<para><literal>HostB</literal> and <literal>HostC</literal> are the "compute nodes", running <systemitem class="service">nova-compute</systemitem>.</para>
|
|
</listitem>
|
|
|
|
<listitem>
|
|
<para>Ensure that, <literal><replaceable>NOVA-INST-DIR</replaceable></literal> (set with <literal>state_path</literal> in <filename>nova.conf</filename>) is same on
|
|
all hosts.</para>
|
|
</listitem>
|
|
|
|
<listitem>
|
|
<para>In this example, <literal>HostA</literal> is the NFSv4 server that exports <filename><replaceable>NOVA-INST-DIR</replaceable>/instances</filename>,
|
|
and <literal>HostB</literal> and <literal>HostC</literal> mount it.</para>
|
|
</listitem>
|
|
</itemizedlist></para>
    <para><emphasis role="bold">System configuration</emphasis></para>
    <para><orderedlist>
      <listitem>
        <para>Configure your DNS or <filename>/etc/hosts</filename> and ensure
          it is consistent across all hosts. Make sure that the three hosts
          can perform name resolution with each other. As a test, use the
          <command>ping</command> command to ping each host from the
          others:</para>
        <screen><prompt>$</prompt> <userinput>ping HostA</userinput>
<prompt>$</prompt> <userinput>ping HostB</userinput>
<prompt>$</prompt> <userinput>ping HostC</userinput></screen>
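        <para>For example, a minimal <filename>/etc/hosts</filename> might
          contain entries such as the following (the IP addresses are
          placeholders for your own network):</para>
        <programlisting>192.168.0.10 HostA
192.168.0.11 HostB
192.168.0.12 HostC</programlisting>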
</listitem>
      <listitem>
        <para>Ensure that the UID and GID of your nova and libvirt users are
          identical between each of your servers. This ensures that the
          permissions on the NFS mount work correctly.</para>
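        <para>As a quick check, compare the output of the
          <command>id</command> command on each host; the user names below
          assume a typical Ubuntu installation:</para>
        <screen><prompt>$</prompt> <userinput>id nova</userinput>
<prompt>$</prompt> <userinput>id libvirt-qemu</userinput></screen>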
</listitem>
      <listitem>
        <para>Follow the instructions at <link
          xlink:href="https://help.ubuntu.com/community/SettingUpNFSHowTo">the
          Ubuntu NFS HowTo</link> to set up an NFS server on
          <literal>HostA</literal>, and NFS clients on
          <literal>HostB</literal> and <literal>HostC</literal>.</para>
        <para>The aim is to export
          <filename><replaceable>NOVA-INST-DIR</replaceable>/instances</filename>
          from <literal>HostA</literal>, and have it readable and writable by
          the nova user on <literal>HostB</literal> and
          <literal>HostC</literal>.</para>
      </listitem>
      <listitem>
        <para>Configure the NFS server at <literal>HostA</literal> by adding
          the following line to <filename>/etc/exports</filename>:</para>
        <programlisting><replaceable>NOVA-INST-DIR</replaceable>/instances HostA/255.255.0.0(rw,sync,fsid=0,no_root_squash)</programlisting>
        <para>Change the subnet mask (<literal>255.255.0.0</literal>) to the
          appropriate value to include the IP addresses of
          <literal>HostB</literal> and <literal>HostC</literal>. Then restart
          the NFS server:</para>
        <screen><prompt>$</prompt> <userinput>/etc/init.d/nfs-kernel-server restart</userinput>
<prompt>$</prompt> <userinput>/etc/init.d/idmapd restart</userinput></screen>
      </listitem>
      <listitem>
        <para>Set the 'execute/search' bit on your shared directory.</para>
        <para>On both compute nodes, make sure to enable the 'execute/search'
          bit to allow qemu to use the images within the directories. On all
          hosts, execute the following command:</para>
        <screen><prompt>$</prompt> <userinput>chmod o+x <replaceable>NOVA-INST-DIR</replaceable>/instances</userinput></screen>
      </listitem>
      <listitem>
        <para>Configure NFS at <literal>HostB</literal> and
          <literal>HostC</literal> by adding the following line to
          <filename>/etc/fstab</filename>:</para>
        <programlisting>HostA:/ /<replaceable>NOVA-INST-DIR</replaceable>/instances nfs4 defaults 0 0</programlisting>
        <para>Then ensure that the exported directory can be mounted:</para>
        <screen><prompt>$</prompt> <userinput>mount -a -v</userinput></screen>
        <para>Check that the
          <filename><replaceable>NOVA-INST-DIR</replaceable>/instances/</filename>
          directory can be seen at <literal>HostA</literal>:</para>
        <screen><prompt>$</prompt> <userinput>ls -ld <filename><replaceable>NOVA-INST-DIR</replaceable>/instances/</filename></userinput>
<computeroutput>drwxr-xr-x 2 nova nova 4096 2012-05-19 14:34 nova-install-dir/instances/</computeroutput></screen>
        <para>Perform the same check at <literal>HostB</literal> and
          <literal>HostC</literal>, paying special attention to the
          permissions (the nova user should be able to write):</para>
        <screen><prompt>$</prompt> <userinput>ls -ld <filename><replaceable>NOVA-INST-DIR</replaceable>/instances/</filename></userinput>
<computeroutput>drwxr-xr-x 2 nova nova 4096 2012-05-07 14:34 nova-install-dir/instances/</computeroutput></screen>
        <screen><prompt>$</prompt> <userinput>df -k</userinput>
<computeroutput>Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/sda1            921514972   4180880 870523828   1% /
none                  16498340      1228  16497112   1% /dev
none                  16502856         0  16502856   0% /dev/shm
none                  16502856       368  16502488   1% /var/run
none                  16502856         0  16502856   0% /var/lock
none                  16502856         0  16502856   0% /lib/init/rw
HostA:               921515008 101921792 772783104  12% /var/lib/nova/instances  ( &lt;--- this line is important.)</computeroutput></screen>
      </listitem>
      <listitem>
        <para>Update the libvirt configurations. Modify
          <filename>/etc/libvirt/libvirtd.conf</filename>:</para>
        <programlisting language="bash">before : #listen_tls = 0
after : listen_tls = 0

before : #listen_tcp = 1
after : listen_tcp = 1

add: auth_tcp = "none"</programlisting>
        <para>Modify <filename>/etc/libvirt/qemu.conf</filename>:</para>
        <programlisting language="bash">before : #dynamic_ownership = 1
after : dynamic_ownership = 0</programlisting>
        <para>Modify <filename>/etc/init/libvirt-bin.conf</filename>:</para>
        <programlisting language="bash">before : exec /usr/sbin/libvirtd -d
after : exec /usr/sbin/libvirtd -d -l</programlisting>
        <para>Modify <filename>/etc/default/libvirt-bin</filename>:</para>
        <programlisting language="bash">before : libvirtd_opts=" -d"
after : libvirtd_opts=" -d -l"</programlisting>
        <para>Restart libvirt. After executing the command, ensure that
          libvirt is successfully restarted:</para>
        <screen><prompt>$</prompt> <userinput>stop libvirt-bin &amp;&amp; start libvirt-bin</userinput>
<prompt>$</prompt> <userinput>ps -ef | grep libvirt</userinput>
<computeroutput>root 1145 1 0 Nov27 ? 00:00:03 /usr/sbin/libvirtd -d -l</computeroutput></screen>
      </listitem>
      <listitem>
        <para>Configure your firewall to allow libvirt to communicate between
          nodes.</para>
        <para>Information about ports used by libvirt can be found in
          <link xlink:href="http://libvirt.org/remote.html#Remote_libvirtd_configuration">the
          libvirt documentation</link>. By default, libvirt listens on TCP
          port 16509, and an ephemeral TCP range from 49152 to 49261 is used
          for the KVM communications. Because this guide has disabled libvirt
          authentication, take good care that these ports are only open to
          hosts within your installation.</para>
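        <para>A minimal sketch with <command>iptables</command>, assuming that
          your compute hosts are on the 192.168.0.0/24 subnet (adjust the
          subnet and tool to match your installation):</para>
        <screen><prompt>#</prompt> <userinput>iptables -A INPUT -p tcp -s 192.168.0.0/24 --dport 16509 -j ACCEPT</userinput>
<prompt>#</prompt> <userinput>iptables -A INPUT -p tcp -s 192.168.0.0/24 --dport 49152:49261 -j ACCEPT</userinput></screen>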
</listitem>
      <listitem>
        <para>You can now configure options for live migration. In most cases,
          you do not need to configure any options. The following table is for
          advanced usage only.</para>
      </listitem>
    </orderedlist></para>
    <xi:include href="../../common/tables/nova-livemigration.xml"/>
<section xml:id="true-live-migration-kvm-libvirt">
|
|
<title>Enabling true live migration</title>
|
|
<para>By default, the Compute service does not use libvirt's live migration functionality. To
|
|
enable this functionality, add the following line to <filename>nova.conf</filename>:
|
|
<programlisting>live_migration_flag=VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE</programlisting>The
|
|
Compute service does not use libvirt's live migration by default because there is a risk that
|
|
the migration process never ends. This can happen if the guest operating system
|
|
dirties blocks on the disk faster than they can migrated.</para>
|
|
</section>
|
|
</section>
  <section xml:id="configuring-migrations-xenserver">
    <title>XenServer</title>
    <section xml:id="configuring-migrations-xenserver-shared-storage">
      <title>Shared storage</title>
      <para><emphasis role="bold">Prerequisites</emphasis>
        <itemizedlist>
          <listitem>
            <para><emphasis role="bold">Compatible XenServer
              hypervisors.</emphasis> For more information, refer to the
              <link xlink:href="http://docs.vmd.citrix.com/XenServer/6.0.0/1.0/en_gb/reference.html#pooling_homogeneity_requirements">Requirements for Creating Resource Pools</link>
              section of the XenServer Administrator's Guide.</para>
          </listitem>
          <listitem>
            <para><emphasis role="bold">Shared storage:</emphasis> an NFS
              export, visible to all XenServer hosts.
              <note>
                <para>For the supported NFS versions, refer to the
                  <link xlink:href="http://docs.vmd.citrix.com/XenServer/6.0.0/1.0/en_gb/reference.html#id1002701">NFS VHD</link>
                  section of the XenServer Administrator's Guide.</para>
              </note>
            </para>
          </listitem>
        </itemizedlist>
      </para>
      <para>To use shared storage live migration with XenServer hypervisors,
        the hosts must be joined to a XenServer pool. To create that pool, a
        host aggregate must be created with special metadata. This metadata is
        used by the XAPI plugins to establish the pool.</para>
      <orderedlist>
        <listitem>
          <para>Add an NFS VHD storage to your master XenServer, and set it as
            the default SR. For more information, refer to the
            <link xlink:href="http://docs.vmd.citrix.com/XenServer/6.0.0/1.0/en_gb/reference.html#id1002701">NFS VHD</link>
            section of the XenServer Administrator's Guide.</para>
        </listitem>
        <listitem>
          <para>Configure all compute nodes to use the default SR for pool
            operations by including the following line in the
            <filename>nova.conf</filename> configuration file on each compute
            node:</para>
          <programlisting>sr_matching_filter=default-sr:true</programlisting>
        </listitem>
        <listitem>
          <para>Create a host aggregate:</para>
          <screen><prompt>$</prompt> <userinput>nova aggregate-create <replaceable>name-for-pool</replaceable> <replaceable>availability-zone</replaceable></userinput></screen>
          <para>The command displays a table that contains the ID of the newly
            created aggregate. Now add special metadata to the aggregate, to
            mark it as a hypervisor pool:</para>
          <screen><prompt>$</prompt> <userinput>nova aggregate-set-metadata <replaceable>aggregate-id</replaceable> hypervisor_pool=true</userinput>
<prompt>$</prompt> <userinput>nova aggregate-set-metadata <replaceable>aggregate-id</replaceable> operational_state=created</userinput></screen>
          <para>Make the first compute node part of that aggregate:</para>
          <screen><prompt>$</prompt> <userinput>nova aggregate-add-host <replaceable>aggregate-id</replaceable> <replaceable>name-of-master-compute</replaceable></userinput></screen>
          <para>At this point, the host is part of a XenServer pool.</para>
        </listitem>
        <listitem>
          <para>Add additional hosts to the pool:</para>
          <screen><prompt>$</prompt> <userinput>nova aggregate-add-host <replaceable>aggregate-id</replaceable> <replaceable>compute-host-name</replaceable></userinput></screen>
          <note>
            <para>The added compute node and the host are shut down to join
              the host to the XenServer pool. The operation fails if any
              server other than the compute node is running or suspended on
              the host.</para>
          </note>
        </listitem>
      </orderedlist>
    </section> <!-- End of Shared storage -->
<section xml:id="configuring-migrations-xenserver-block-migration">
|
|
<title>Block migration</title>
|
|
<para><emphasis role="bold">Prerequisites</emphasis>
|
|
<itemizedlist>
|
|
<listitem>
|
|
<para><emphasis role="bold">Compatible XenServer hypervisors.</emphasis> The hypervisors must support the Storage XenMotion feature. Please refer
|
|
to the manual of your XenServer to make sure your edition has this feature.
|
|
</para>
|
|
</listitem>
|
|
</itemizedlist>
|
|
<note>
|
|
<para>Please note, that you need to use an extra option <literal>--block-migrate</literal> for the live migration
|
|
command, to use block migration.</para>
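          <para>For example, with placeholder server and host names:</para>
          <screen><prompt>$</prompt> <userinput>nova live-migration --block-migrate <replaceable>SERVER</replaceable> <replaceable>HOST</replaceable></userinput></screen>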
</note>
        <note>
          <para>Block migration works only with EXT local storage SRs, and the
            server should not have any volumes attached.</para>
        </note>
      </para>
    </section> <!-- End of Block migration -->
  </section> <!-- End of XenServer/Migration -->
</section> <!-- End of configuring migrations -->