Installing Additional Compute Nodes

There are many ways to perform a multi-node install of Compute in order to scale out your deployment and run more compute nodes, enabling more virtual machines to run simultaneously. If your goal is to split your VM load across more than one server, you can connect an additional nova-compute node to a cloud controller node. This configuration can be reproduced on multiple compute servers to start building a true multi-node OpenStack Compute cluster.

To build out and scale the Compute platform, you spread out services among many servers. While there are additional ways to accomplish the build-out, this section describes adding compute nodes; the service being scaled out is nova-compute. Ensure that the networking on each compute node is configured as documented in the Pre-configuring the network section.

In this case, you can install all the nova- packages and dependencies as you did for the cloud controller node, or just install nova-compute. Your installation can run any nova- services anywhere, as long as the service can access nova.conf so it knows where the RabbitMQ or Qpid messaging server is installed.

When running in high-availability mode for networking, the compute node is where you configure the compute network, that is, the networking between your instances. Learn more about high availability for networking in the Compute Administration manual.

Because you may need to query the database from the compute node and learn more information about instances, the nova client and the MySQL or PostgreSQL client packages should be installed on any additional compute nodes.

Copy nova.conf from your controller node to all additional compute nodes. Modify the following configuration options so that they match the IP address of the compute host:

my_ip
vncserver_listen
vncserver_proxyclient_address

By default, Nova sets the bridge device based on the setting in flat_network_bridge. Now you can edit /etc/network/interfaces with the following template, updated with your IP information:

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
auto br100
iface br100 inet static
    bridge_ports    eth0
    bridge_stp      off
    bridge_maxwait  0
    bridge_fd       0
    address         xxx.xxx.xxx.xxx
    netmask         xxx.xxx.xxx.xxx
    network         xxx.xxx.xxx.xxx
    broadcast       xxx.xxx.xxx.xxx
    gateway         xxx.xxx.xxx.xxx
    # dns-* options are implemented by the resolvconf package, if installed
    dns-nameservers xxx.xxx.xxx.xxx

Restart networking:

$ sudo service networking restart

With nova.conf updated and networking set, configuration is nearly complete. First, bounce the relevant services to take the latest updates:

$ sudo service libvirtd restart
$ sudo service nova-compute restart

To avoid issues with KVM and permissions with Nova, run the following commands to ensure your VMs run optimally:

# chgrp kvm /dev/kvm
# chmod g+rwx /dev/kvm

If you want to use the 10.04 Ubuntu Enterprise Cloud images that are readily available at http://uec-images.ubuntu.com/releases/10.04/release/, you may run into delays with booting. Any server that does not have nova-api running on it needs this iptables entry so that UEC images can get metadata information. On compute nodes, configure iptables with this next step:

# iptables -t nat -A PREROUTING -d 169.254.169.254/32 -p tcp -m tcp --dport 80 -j DNAT --to-destination $NOVA_API_IP:8773
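Note that this rule is added only to the running iptables configuration and will not survive a reboot. As a minimal sketch of one way to persist it, assuming you manage rules with iptables-save and iptables-restore (the path /etc/iptables.rules is an example, not a requirement), you could save the current rules and restore them when the bridge interface comes up:

# iptables-save > /etc/iptables.rules

Then, in the br100 stanza of /etc/network/interfaces:

    # restore the saved NAT rule (including the metadata redirect) at boot
    pre-up iptables-restore < /etc/iptables.rules

A package such as iptables-persistent can accomplish the same thing; use whichever mechanism fits your environment.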
Lastly, confirm that your compute node is talking to your cloud controller. From the cloud controller, run this database query:

$ mysql -u$MYSQL_USER -p$MYSQL_PASS nova -e 'select * from services;'

In return, you should see something similar to this:

+---------------------+---------------------+------------+---------+----+----------+----------------+-----------+--------------+----------+-------------------+
| created_at          | updated_at          | deleted_at | deleted | id | host     | binary         | topic     | report_count | disabled | availability_zone |
+---------------------+---------------------+------------+---------+----+----------+----------------+-----------+--------------+----------+-------------------+
| 2011-01-28 22:52:46 | 2011-02-03 06:55:48 | NULL       |       0 |  1 | osdemo02 | nova-network   | network   |        46064 |        0 | nova              |
| 2011-01-28 22:52:48 | 2011-02-03 06:55:57 | NULL       |       0 |  2 | osdemo02 | nova-compute   | compute   |        46056 |        0 | nova              |
| 2011-01-28 22:52:52 | 2011-02-03 06:55:50 | NULL       |       0 |  3 | osdemo02 | nova-scheduler | scheduler |        46065 |        0 | nova              |
| 2011-01-29 23:49:29 | 2011-02-03 06:54:26 | NULL       |       0 |  4 | osdemo01 | nova-compute   | compute   |        37050 |        0 | nova              |
| 2011-01-30 23:42:24 | 2011-02-03 06:55:44 | NULL       |       0 |  9 | osdemo04 | nova-compute   | compute   |        28484 |        0 | nova              |
| 2011-01-30 21:27:28 | 2011-02-03 06:54:23 | NULL       |       0 |  8 | osdemo05 | nova-compute   | compute   |        29284 |        0 | nova              |
+---------------------+---------------------+------------+---------+----+----------+----------------+-----------+--------------+----------+-------------------+

You can see that osdemo0{1,2,4,5} are all running nova-compute. When you start spinning up instances, they can be scheduled onto any node in this list that is running nova-compute.
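If the nova-manage administrative tool is installed on the cloud controller, you can perform a similar check without querying the database directly. This is a sketch of the equivalent command; the exact output format varies between releases:

$ sudo nova-manage service list

Services that have reported in recently are typically shown with a smiley face (:-)) in the State column, while services that have stopped checking in are marked XXX.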