<?xml version="1.0" encoding="UTF-8"?>
<section xml:id="installing-additional-compute-nodes"
    xmlns="http://docbook.org/ns/docbook"
    xmlns:xi="http://www.w3.org/2001/XInclude"
    xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0">
    <title>Installing Additional Compute Nodes</title>
    <para>There are many ways to perform a multi-node installation
        of Compute in order to scale out your deployment and run
        more compute nodes, enabling more virtual machines to run
        simultaneously.</para>
    <para>If your goal is to split your VM load across more than one
        server, you can connect an additional <systemitem
        class="service">nova-compute</systemitem> node to a cloud
        controller node. This configuration can be reproduced on
        multiple compute servers to start building a true multi-node
        OpenStack Compute cluster.</para>
    <para>To build out and scale the Compute platform, you spread
        services across many servers. While there are other ways to
        accomplish the build-out, this section describes how to add
        compute nodes and scale out the <systemitem class="service"
        >nova-compute</systemitem> service.</para>
    <para>Ensure that the networking on each compute node is
        configured as documented in the <link
        linkend="compute-configuring-guest-network">Pre-configuring
        the network</link> section.</para>
    <para>In this case, you can install all the nova- packages and
        dependencies as you did for the cloud controller node, or
        install only <systemitem class="service"
        >nova-compute</systemitem>. Your installation can run any
        nova- services anywhere, as long as the service can access
        <filename>nova.conf</filename> so that it knows where the
        RabbitMQ or Qpid messaging server is installed.</para>
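    <para>For example, if you use RabbitMQ, the messaging settings
        in <filename>nova.conf</filename> might look like the
        following sketch. The host name and credentials shown here
        are placeholders for your own values:</para>
    <programlisting language="ini">rabbit_host=<replaceable>192.168.206.130</replaceable>
rabbit_userid=<replaceable>guest</replaceable>
rabbit_password=<replaceable>RABBIT_PASS</replaceable></programlisting>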
    <para>When running in a high-availability mode for networking,
        the compute node is where you configure the compute network,
        which is the networking between your instances. Learn more
        about high availability for networking in the Compute
        Administration manual.</para>
    <para>Because you might need to query the database from the
        compute node and learn more information about instances,
        install the nova client and the MySQL or PostgreSQL client
        packages on each additional compute node.</para>
    <para>Copy <filename>nova.conf</filename> from your controller
        node to all additional compute nodes. Modify the following
        configuration options so that they match the IP address of
        the compute host:</para>
    <itemizedlist>
        <listitem>
            <para><literal>my_ip</literal></para>
        </listitem>
        <listitem>
            <para><literal>vncserver_listen</literal></para>
        </listitem>
        <listitem>
            <para><literal>vncserver_proxyclient_address</literal></para>
        </listitem>
    </itemizedlist>
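    <para>For example, on a compute host that has the (placeholder)
        IP address <replaceable>192.168.206.131</replaceable>, the
        modified options in <filename>nova.conf</filename> might
        look like this sketch:</para>
    <programlisting language="ini">my_ip=<replaceable>192.168.206.131</replaceable>
vncserver_listen=<replaceable>192.168.206.131</replaceable>
vncserver_proxyclient_address=<replaceable>192.168.206.131</replaceable></programlisting>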
    <para>By default, Nova sets the bridge device based on the
        setting in <literal>flat_network_bridge</literal>. Now you
        can edit <filename>/etc/network/interfaces</filename> with
        the following template, updated with your IP
        information:</para>
    <programlisting language="bash"># The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
auto br100
iface br100 inet static
    bridge_ports    eth0
    bridge_stp      off
    bridge_maxwait  0
    bridge_fd       0
    address <replaceable>xxx.xxx.xxx.xxx</replaceable>
    netmask <replaceable>xxx.xxx.xxx.xxx</replaceable>
    network <replaceable>xxx.xxx.xxx.xxx</replaceable>
    broadcast <replaceable>xxx.xxx.xxx.xxx</replaceable>
    gateway <replaceable>xxx.xxx.xxx.xxx</replaceable>
    # dns-* options are implemented by the resolvconf package, if installed
    dns-nameservers <replaceable>xxx.xxx.xxx.xxx</replaceable></programlisting>
    <para>Restart networking:</para>
    <screen><prompt>$</prompt> <userinput>sudo service networking restart</userinput></screen>
    <para>With <filename>nova.conf</filename> updated and networking
        set, configuration is nearly complete. First, restart the
        relevant services to pick up the latest updates:</para>
    <screen><prompt>$</prompt> <userinput>sudo service libvirtd restart</userinput>
<prompt>$</prompt> <userinput>sudo service nova-compute restart</userinput></screen>
    <para>To avoid issues with KVM and permissions with Nova, run
        the following commands to ensure that your VMs run
        optimally:</para>
    <screen><prompt>#</prompt> <userinput>chgrp kvm /dev/kvm</userinput>
<prompt>#</prompt> <userinput>chmod g+rwx /dev/kvm</userinput></screen>
    <para>If you want to use the 10.04 Ubuntu Enterprise Cloud
        images that are available at <link
        xlink:href="http://uec-images.ubuntu.com/releases/10.04/release/"
        >http://uec-images.ubuntu.com/releases/10.04/release/</link>,
        you might run into delays with booting. Any server that does
        not run <systemitem class="service">nova-api</systemitem>
        needs this iptables entry so that UEC images can get
        metadata information. On compute nodes, configure iptables
        as follows:</para>
    <screen><prompt>#</prompt> <userinput>iptables -t nat -A PREROUTING -d 169.254.169.254/32 -p tcp -m tcp --dport 80 -j DNAT --to-destination <replaceable>$NOVA_API_IP</replaceable>:8773</userinput></screen>
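    <para>To confirm that the rule was added, you can list the NAT
        table; the DNAT entry for 169.254.169.254 should appear in
        the output:</para>
    <screen><prompt>#</prompt> <userinput>iptables -t nat -L PREROUTING -n</userinput></screen>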
    <para>Lastly, confirm that your compute node is talking to your
        cloud controller. From the cloud controller, run this
        database query:</para>
    <screen><prompt>$</prompt> <userinput>mysql -u<replaceable>$MYSQL_USER</replaceable> -p<replaceable>$MYSQL_PASS</replaceable> nova -e 'select * from services;'</userinput></screen>
    <para>In return, you should see something similar to
        this:</para>
    <screen><computeroutput>+---------------------+---------------------+------------+---------+----+----------+----------------+-----------+--------------+----------+-------------------+
| created_at          | updated_at          | deleted_at | deleted | id | host     | binary         | topic     | report_count | disabled | availability_zone |
+---------------------+---------------------+------------+---------+----+----------+----------------+-----------+--------------+----------+-------------------+
| 2011-01-28 22:52:46 | 2011-02-03 06:55:48 | NULL       |       0 |  1 | osdemo02 | nova-network   | network   |        46064 |        0 | nova              |
| 2011-01-28 22:52:48 | 2011-02-03 06:55:57 | NULL       |       0 |  2 | osdemo02 | nova-compute   | compute   |        46056 |        0 | nova              |
| 2011-01-28 22:52:52 | 2011-02-03 06:55:50 | NULL       |       0 |  3 | osdemo02 | nova-scheduler | scheduler |        46065 |        0 | nova              |
| 2011-01-29 23:49:29 | 2011-02-03 06:54:26 | NULL       |       0 |  4 | osdemo01 | nova-compute   | compute   |        37050 |        0 | nova              |
| 2011-01-30 23:42:24 | 2011-02-03 06:55:44 | NULL       |       0 |  9 | osdemo04 | nova-compute   | compute   |        28484 |        0 | nova              |
| 2011-01-30 21:27:28 | 2011-02-03 06:54:23 | NULL       |       0 |  8 | osdemo05 | nova-compute   | compute   |        29284 |        0 | nova              |
+---------------------+---------------------+------------+---------+----+----------+----------------+-----------+--------------+----------+-------------------+</computeroutput></screen>
    <para>You can see that <literal>osdemo0{1,2,4,5}</literal> are
        all running <systemitem class="service"
        >nova-compute</systemitem>. When you start spinning up
        instances, they are allocated to any node that runs
        <systemitem class="service">nova-compute</systemitem> from
        this list.</para>
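    <para>Alternatively, if you prefer not to query the database
        directly, you can list the same services with the
        <command>nova-manage</command> tool; a smiley face in the
        State column indicates a healthy service:</para>
    <screen><prompt>$</prompt> <userinput>sudo nova-manage service list</userinput></screen>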
</section>