Merge "Remove obsolete files from Install Guide"

This commit is contained in:
Jenkins 2013-10-17 15:06:21 +00:00 committed by Gerrit Code Review
commit d72c1cc5ec
14 changed files with 0 additions and 795 deletions


@@ -1,24 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xml:id="add-volume-node" xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude" xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0">
<title>Adding Block Storage Nodes</title>
<para>To offer more storage to your tenants' VMs, add another volume node that runs the cinder services by following these steps.</para>
<orderedlist>
<listitem><para>Install the required packages for cinder.</para></listitem>
<listitem><para>Create a volume group called cinder-volumes (configurable using the
            <literal>volume_group</literal> parameter in <filename>cinder.conf</filename>).</para></listitem>
<listitem><para>Configure tgtd with its <filename>targets.conf</filename> file and start the
<literal>tgtd</literal> service.</para></listitem>
<listitem><para>Connect the node to the Block Storage (cinder) database by configuring the
<filename>cinder.conf</filename> file with the connection information.</para></listitem>
<listitem><para>Make sure the <literal>iscsi_ip_address</literal> setting in <filename>cinder.conf</filename>
matches the public IP of the node you're installing, then restart
the cinder services.</para></listitem>
</orderedlist>
<para>When you issue a <command>cinder-manage host list</command> command, the new volume node should appear in the output. If it does not, check <filename>/var/log/cinder/volume.log</filename> for errors.</para>
</section>


@@ -1,112 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xml:id="cinder-install"
xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0">
<title>Installing and configuring Block Storage</title>
<para>Install the packages for OpenStack Block Storage on the cloud controller.</para>
<screen os="ubuntu;debian"><prompt>#</prompt> <userinput>apt-get install cinder-api cinder-scheduler cinder-volume</userinput></screen>
<screen os="centos;rhel;fedora"><prompt>#</prompt> <userinput>yum install openstack-cinder openstack-utils openstack-selinux</userinput></screen>
<screen os="opensuse"><prompt>#</prompt> <userinput>zypper install openstack-cinder-api openstack-cinder-scheduler \
openstack-cinder-volume</userinput></screen>
<note><para>If you use XenServer-type VHD images, you also need the <command>vhd-util</command>
        binary to create volumes from uploaded images.</para>
        <para>Install it by running:</para>
<screen os="ubuntu"><prompt>#</prompt> <userinput>apt-get install blktap-utils</userinput></screen>
<screen os="opensuse"><prompt>#</prompt> <userinput>zypper in xen-tools</userinput></screen>
<screen os="rhel;centos;fedora"><prompt>#</prompt> <userinput>yum install blktap</userinput></screen>
</note>
<para>Edit <filename>/etc/cinder/api-paste.ini</filename> (filter
authtoken).<programlisting>[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
service_protocol = http
service_host = 192.168.206.130
service_port = 5000
auth_host = 192.168.206.130
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = cinder
admin_password = openstack</programlisting></para>
<para>Edit <filename>/etc/cinder/cinder.conf</filename> to reflect your settings.</para>
<programlisting>
[DEFAULT]
rootwrap_config=/etc/cinder/rootwrap.conf
sql_connection = mysql://cinder:openstack@localhost/cinder
api_paste_config = /etc/cinder/api-paste.ini
iscsi_helper=tgtadm
volume_name_template = volume-%s
volume_group = cinder-volumes
verbose = True
auth_strategy = keystone
#osapi_volume_listen_port=5900</programlisting>
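The <literal>volume_name_template</literal> value above is an ordinary Python format string into which cinder substitutes the volume ID. A quick sketch of the resulting name (the ID here is just an example):

```python
# volume_name_template = volume-%s -- cinder substitutes the volume ID.
# The ID below is an illustrative example, not one from a real deployment.
template = "volume-%s"
volume_id = "5bbad3f9-50ad-42c5-b58c-9b6b63ef3532"
print(template % volume_id)  # volume-5bbad3f9-50ad-42c5-b58c-9b6b63ef3532
```

This is the name the LVM logical volume receives inside the cinder-volumes volume group.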
<para>Also configure messaging in <filename>/etc/cinder/cinder.conf</filename>.</para>
<programlisting os="ubuntu">
rabbit_host = controller
rabbit_port = 5672
# Change the following settings if you're not using the default RabbitMQ configuration
#rabbit_userid = guest
#rabbit_password = guest
#rabbit_virtual_host = /nova</programlisting>
<programlisting os="rhel;centos;fedora">
qpid_hostname = controller</programlisting>
<programlisting os="opensuse">
rabbit_host = controller
rabbit_port = 5672
# Change the following settings if you're not using the default RabbitMQ configuration
#rabbit_userid = guest
#rabbit_password = guest
#rabbit_virtual_host = /nova</programlisting>
<para>Verify the entries in <filename>/etc/nova/nova.conf</filename>. The
            <literal>volume_api_class</literal> value shown here has been the default since
            Grizzly.</para>
<programlisting>volume_api_class=nova.volume.cinder.API</programlisting>
<para>Set up the cinder database.</para>
<programlisting>CREATE DATABASE cinder;
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' \
IDENTIFIED BY '<replaceable>CINDER_DBPASS</replaceable>';
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' \
IDENTIFIED BY '<replaceable>CINDER_DBPASS</replaceable>';
FLUSH PRIVILEGES;</programlisting>
<para>Add a filter entry to the devices section of
            <filename>/etc/lvm/lvm.conf</filename> to keep LVM from scanning devices
            used by virtual machines.</para>
<note><para>You must add every physical volume that is
needed for LVM on the Cinder host. You can get a list by running
<command>pvdisplay</command>.</para></note>
<para>Each item in the filter array starts with either an
            "<literal>a</literal>" for accept, or an "<literal>r</literal>" for reject.
            Physical volumes that are needed on the cinder host begin with
            "<literal>a</literal>". The array must end with
            "<literal>r/.*/</literal>" to reject any remaining devices.</para>
<programlisting>devices {
...
filter = [ "a/sda1/", "a/sdb1/", "r/.*/"]
...
}</programlisting>
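As an informal illustration of how the filter array is evaluated (a Python sketch of LVM's documented first-match-wins behavior, not LVM code):

```python
import re

def lvm_scans(device, patterns):
    """Return True if a device passes the lvm.conf filter.

    Each pattern is "a/regex/" (accept) or "r/regex/" (reject); the
    first pattern whose regex matches the device name decides, and a
    device matching no pattern is accepted by default.
    """
    for pat in patterns:
        action, regex = pat[0], pat[2:-1]
        if re.search(regex, device):
            return action == "a"
    return True

filters = ["a/sda1/", "a/sdb1/", "r/.*/"]
print(lvm_scans("/dev/sda1", filters))  # True: physical volume is scanned
print(lvm_scans("/dev/vdb", filters))   # False: caught by the trailing r/.*/
```

This shows why the trailing `r/.*/` matters: any device not explicitly accepted earlier is rejected by it.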
<para>Set up the target file.</para>
<note><para><literal>$state_path=/var/lib/cinder/</literal> and
<literal>$volumes_dir=$state_path/volumes</literal> are the
default values used by the Block Storage service.
<emphasis>These directories MUST exist!</emphasis></para></note>
<screen><prompt>$</prompt> <userinput>sudo sh -c "echo 'include /var/lib/cinder/volumes/*' >> /etc/tgt/targets.conf"</userinput></screen>
<para>Restart the <command>tgt</command> service.
<screen><prompt>$</prompt> <userinput>sudo restart tgt</userinput></screen></para>
<para>Populate the database.
<screen><prompt>$</prompt> <userinput>sudo cinder-manage db sync</userinput></screen></para>
<para>Restart the services.
<screen><prompt>$</prompt> <userinput>sudo service cinder-volume restart</userinput>
<prompt>$</prompt> <userinput>sudo service cinder-api restart</userinput>
<prompt>$</prompt> <userinput>sudo service cinder-scheduler restart</userinput></screen></para>
<para>Create a 1 GB test volume.
<screen><prompt>$</prompt> <userinput>cinder create --display_name test 1</userinput>
<prompt>$</prompt> <userinput>cinder list</userinput></screen></para>
<programlisting>+--------------------------------------+-----------+--------------+------+-------------+-------------+
| ID | Status | Display Name | Size | Volume Type | Attached to |
+--------------------------------------+-----------+--------------+------+-------------+-------------+
| 5bbad3f9-50ad-42c5-b58c-9b6b63ef3532 | available | test | 1 | None | |
+--------------------------------------+-----------+--------------+------+-------------+-------------+</programlisting>
</section>


@@ -1,25 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xml:id="installing-the-cloud-controller"
xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0">
<title>Installing Compute Services</title>
<para>Install the required nova packages on both the controller and compute nodes;
        dependencies are installed automatically.</para>
<para os="ubuntu">On the controller node:</para>
<screen os="ubuntu"><prompt>$</prompt> <userinput>sudo apt-get install nova-novncproxy novnc nova-api nova-ajax-console-proxy nova-cert nova-conductor nova-consoleauth nova-doc nova-scheduler nova-network</userinput></screen>
<para os="ubuntu">On the compute node:</para>
<screen os="ubuntu"><prompt>$</prompt> <userinput>sudo apt-get install nova-compute nova-network</userinput></screen>
<para os="ubuntu">If you see the error:
<screen>E: Unable to locate package nova-novncproxy</screen>ensure
that you have installed the Ubuntu Cloud Archive packages by
adding the following to
<filename>/etc/apt/sources.list.d/grizzly.list</filename>:
<screen>deb http://ubuntu-cloud.archive.canonical.com/ubuntu precise-updates/grizzly main</screen>Prior
to running apt-get update and apt-get upgrade, install the
keyring:
<screen><prompt>$</prompt> <userinput>sudo apt-get install ubuntu-cloud-keyring</userinput></screen>
</para>
<screen os="centos;rhel;fedora"><prompt>$</prompt> <userinput>sudo yum install openstack-nova</userinput></screen>
<screen os="opensuse;sles"><prompt>$</prompt> <userinput>sudo zypper install openstack-nova</userinput></screen>
</section>


@@ -1,84 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xml:id="compute-configuring-guest-network"
xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0">
<title>Pre-configuring the network</title>
<para>These instructions are for using the FlatDHCP networking mode with a single network
interface. More complex configurations are described in the networking section, but this
configuration is known to work. These configuration options should be set on all compute
nodes.</para>
<para>Set your network interface to promiscuous mode so that it can receive packets that are
        intended for virtual machines. As
        root:<screen><prompt>#</prompt> <userinput>ip link set eth0 promisc on</userinput></screen></para>
<para os="ubuntu">Set up your <filename>/etc/network/interfaces</filename> file with these settings:</para>
<itemizedlist os="ubuntu"><listitem><para>eth0: public IP, gateway</para></listitem>
<listitem><para>br100: no ports, stp off, fd 0, first address from your defined network range.</para></listitem></itemizedlist>
<para os="ubuntu">Here's an Ubuntu/Debian example:</para>
<para os="ubuntu"><programlisting># The loopback network interface
auto lo
iface lo inet loopback
# The primary network interface
auto eth0
iface eth0 inet dhcp
# Bridge network interface for VM networks
auto br100
iface br100 inet static
address 192.168.100.1
netmask 255.255.255.0
bridge_stp off
bridge_fd 0</programlisting></para>
<para os="centos;fedora;rhel">Here's an example network setup for RHEL, Fedora, or CentOS. Create
<filename>/etc/sysconfig/network-scripts/ifcfg-br100</filename>:</para>
<programlisting os="centos;fedora;rhel">DEVICE=br100
TYPE=Bridge
ONBOOT=yes
DELAY=0
BOOTPROTO=static
IPADDR=192.168.100.1
NETMASK=255.255.255.0</programlisting>
<para os="opensuse">Here's an example network setup for openSUSE. Create
<filename>/etc/sysconfig/network/ifcfg-br100</filename>:</para>
<programlisting os="opensuse">DEVICE=br100
TYPE=Bridge
ONBOOT=yes
DELAY=0
BOOTPROTO=static
STARTMODE=auto
IPADDR=192.168.100.1
NETMASK=255.255.255.0</programlisting>
<para>Also install bridge-utils:</para>
<screen os="ubuntu"><prompt>$</prompt> <userinput>sudo apt-get install bridge-utils</userinput></screen>
<screen os="rhel;centos;fedora"><prompt>$</prompt> <userinput>sudo yum install bridge-utils</userinput></screen>
<screen os="opensuse"><prompt>$</prompt> <userinput>sudo zypper install bridge-utils</userinput></screen>
<para>Set up the bridge. Note that if you use flat_network_bridge=br100 in your
        <filename>nova.conf</filename> file, nova sets up the bridge for you when you run
        the <command>nova network-create</command> command.</para>
<screen><prompt>$</prompt> <userinput>sudo brctl addbr br100</userinput></screen>
<para>Lastly, restart networking for these changes to take
        effect. (This method is deprecated, but "<code>restart
        networking</code>" does not always work.)</para>
<screen><prompt>$</prompt> <userinput>sudo /etc/init.d/networking restart</userinput></screen>
<section xml:id="config-requirements-rhel" os="rhel;centos">
<title>Configuration requirements with
RHEL</title>
<para>Set SELinux to permissive mode:</para>
<screen><prompt>$</prompt> <userinput>sudo setenforce permissive</userinput></screen>
<para>Otherwise, you will get errors such as <literal>/usr/bin/nova-dhcpbridge: No such file or
            directory</literal>; see <link
            xlink:href="https://bugzilla.redhat.com/show_bug.cgi?id=734346"
            >https://bugzilla.redhat.com/show_bug.cgi?id=734346</link>.</para>
<para>If you are using a distribution based on RHEL 6.2 or earlier, use the <command>openstack-config</command> utility to
turn off forced DHCP releases:</para>
<screen><prompt>$</prompt> <userinput>sudo openstack-config --set /etc/nova/nova.conf DEFAULT force_dhcp_release False</userinput></screen>
<para>If you are using a distribution based on RHEL 6.3 or later, install the dnsmasq utilities (<package>dnsmasq-utils</package>) package, which provides support for forced DHCP releases:</para>
<screen><prompt>$</prompt> <userinput>sudo yum install dnsmasq-utils</userinput></screen>
<para os="rhel;centos;fedora">If you intend to use guest images that do not have a single partition, allow libguestfs to inspect the image so that files can be injected, by setting:
        </para>
        <screen><prompt>$</prompt> <userinput>sudo openstack-config --set /etc/nova/nova.conf DEFAULT libvirt_inject_partition -1</userinput></screen>
</section></section>


@@ -1,14 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xml:id="compute-create-network"
xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0">
<title>Creating the Network for Compute VMs</title>
<para>Create the network that the virtual machines use by running the command
        that creates the network and the bridge, using the br100 bridge specified in the <filename>nova.conf</filename> file. This example uses
        <literal>192.168.100.0/24</literal> as the fixed range for the guest VMs, but you can substitute any range you have
        available. The network is labeled <literal>private</literal> in this case.</para>
<screen><prompt>$</prompt> <userinput>nova-manage network create private --fixed_range_v4=192.168.100.0/24 --bridge_interface=br100</userinput></screen>
<note><para>To find out more about the <command>nova-manage network create</command> command, run <userinput>nova-manage
        network create -h</userinput>.</para></note>
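As a quick sanity check of the fixed range, Python's ipaddress module shows what the command hands out (a sketch independent of nova itself):

```python
import ipaddress

# The fixed range passed to `nova-manage network create` above.
fixed = ipaddress.ip_network("192.168.100.0/24")
print(fixed.num_addresses)   # 256 addresses in the block
print(next(fixed.hosts()))   # 192.168.100.1, the first usable host
```

The first usable host is typically taken by the br100 bridge; the remaining addresses are available for guest VMs.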
</section>


@@ -1,60 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xml:id="setting-up-sql-database-mysql"
xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0">
<title>Configuring the SQL Database (MySQL) on the Cloud Controller</title>
<para>Start the mysql command line client by running:</para>
<para>
<screen><prompt>$</prompt> <userinput>mysql -u root -p</userinput></screen>
</para>
<para>Enter the MySQL root user's password when prompted.</para>
<para>To configure the MySQL database, create the nova database.</para>
<para>
<screen><prompt>mysql></prompt> <userinput>CREATE DATABASE nova;</userinput></screen></para>
<para>Create a MySQL user and password, with full control
        of the newly created nova database.</para>
<screen><prompt>mysql></prompt> <userinput>GRANT ALL ON nova.* TO 'nova'@'%' IDENTIFIED BY '<replaceable>[YOUR_NOVADB_PASSWORD]</replaceable>';</userinput>
<prompt>mysql></prompt> <userinput>GRANT ALL ON nova.* TO 'nova'@'localhost' IDENTIFIED BY '<replaceable>[YOUR_NOVADB_PASSWORD]</replaceable>';</userinput></screen>
<note>
<para>In the above commands, even though <literal>'nova'@'%'</literal> also matches
            <literal>'nova'@'localhost'</literal>, you must explicitly specify the
            <literal>'nova'@'localhost'</literal> entry.</para>
<para>By default, MySQL will create entries in the user table with
<literal>User=''</literal> and <literal>Host='localhost'</literal>. The
<literal>User=''</literal> acts as a wildcard, matching all users. If you do not
have the <literal>'nova'@'localhost'</literal> account, and you try to log in as the
nova user, the precedence rules of MySQL will match against the <literal>User=''
Host='localhost'</literal> account before it matches against the
<literal>User='nova' Host='%'</literal> account. This will result in an error
message that looks like:</para>
<para>
<screen><computeroutput>ERROR 1045 (28000): Access denied for user 'nova'@'localhost' (using password: YES)</computeroutput></screen>
</para>
<para>Thus, we create a separate <literal>User='nova' Host='localhost'</literal> entry that
will match with higher precedence.</para>
<para>See the <link xlink:href="http://dev.mysql.com/doc/refman/5.5/en/connection-access.html"
>MySQL documentation on connection verification</link> for more details on how MySQL
determines which row in the user table it uses when authenticating connections.</para>
</note>
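The precedence behavior described in this note can be illustrated with a small Python sketch (an informal model of the documented sort order, not MySQL's actual implementation):

```python
def matching_row(user, host, accounts):
    """Pick the user-table row MySQL would use for a connection.

    MySQL sorts rows from most specific to least specific: literal
    hosts before the '%' wildcard, then named users before the ''
    (anonymous) wildcard, and uses the first row that matches.
    """
    def sort_key(row):
        u, h = row
        return (h == "%", u == "")  # False sorts before True
    for u, h in sorted(accounts, key=sort_key):
        if (u == user or u == "") and (h == host or h == "%"):
            return (u, h)
    return None

# Default anonymous-local row plus only the 'nova'@'%' grant:
rows = [("nova", "%"), ("", "localhost")]
print(matching_row("nova", "localhost", rows))  # ('', 'localhost') wins

# Adding the explicit 'nova'@'localhost' row fixes the precedence:
rows.append(("nova", "localhost"))
print(matching_row("nova", "localhost", rows))  # ('nova', 'localhost')
```

Without the explicit local row, the anonymous `''@'localhost'` entry is matched first and the login fails with the access-denied error shown above.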
<para>Enter <userinput>quit</userinput> at the <prompt>mysql></prompt> prompt to exit MySQL.</para>
<para>
<screen><prompt>mysql></prompt> <userinput>quit</userinput></screen>
</para>
<para>The command to populate the database is described later in the
documentation, in the Section entitled <link
linkend="compute-db-sync">Configuring the Database for Compute</link>.
</para>
<note><title>Securing MySQL</title>
<para>
Additional steps are required to configure MySQL for production mode.
In particular, anonymous accounts should be removed. On several distributions,
these accounts can be removed by running the following script
after installing MySQL:
<command>/usr/bin/mysql_secure_installation</command>
</para>
</note>
</section>


@@ -1,42 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xml:id="setting-up-sql-database-postgresql"
xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0">
<title>Configuring the SQL Database (PostgreSQL) on the Cloud Controller</title>
<para>Optionally, if you choose not to use MySQL, you can install
and configure PostgreSQL for all your databases. Here's a
walkthrough for the Nova database:</para>
<screen os="ubuntu"><prompt>$</prompt> <userinput>sudo apt-get install postgresql postgresql-client</userinput></screen>
<screen os="rhel;centos;fedora"><prompt>$</prompt> <userinput>sudo yum install postgresql postgresql-server</userinput></screen>
<screen os="opensuse"><prompt>$</prompt> <userinput>sudo zypper install postgresql postgresql-server</userinput></screen>
<para>Switch to the postgres system account, which can run the PostgreSQL command line client:</para>
    <screen><prompt>$</prompt> <userinput>sudo su - postgres</userinput></screen>
    <para>Enter the postgres user's password if prompted.</para>
<para>To configure the database, create the nova database.</para>
<para><screen>postgres> psql
postgres=# CREATE USER novadbadmin;
postgres=# ALTER USER novadbadmin WITH PASSWORD '<replaceable>[YOUR_NOVADB_PASSWORD]</replaceable>';
postgres=# CREATE DATABASE nova;
postgres=# GRANT ALL PRIVILEGES ON DATABASE nova TO novadbadmin;
postgres=# \q
postgres> exit</screen></para>
<para>The database is created, and you have a privileged user that
    controls it. Now install the packages that
    enable Nova to access the database.</para>
<screen os="ubuntu"><prompt>$</prompt> <userinput>sudo apt-get install python-sqlalchemy python-psycopg2</userinput></screen>
<screen os="rhel;centos;fedora"><prompt>$</prompt> <userinput>sudo yum install python-sqlalchemy python-psycopg2</userinput></screen>
<screen os="opensuse"><prompt>$</prompt> <userinput>sudo zypper install python-SQLAlchemy python-psycopg2</userinput></screen>
<para>Configure the <filename>/etc/nova/nova.conf</filename> file
    to use the PostgreSQL database:</para>
    <literallayout class="monospaced">[database]
connection = postgres://novadbadmin:<replaceable>[YOUR_NOVADB_PASSWORD]</replaceable>@127.0.0.1/nova</literallayout>
<para>The command to populate the database is described later in the
documentation, in the section entitled <link
linkend="compute-db-sync">Configuring the Database for Compute</link>.
</para>
</section>


@@ -1,46 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xml:id="compute-db-sync"
xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0">
<title>Configuring the Database for Compute</title>
<para>Create the tables in your backend data store by running
the following command:</para>
<screen os="ubuntu"><prompt>$</prompt> <userinput>sudo nova-manage db sync</userinput></screen>
<screen os="rhel;fedora;centos;opensuse"><prompt>#</prompt> <userinput>nova-manage db sync</userinput></screen>
<para>If the command produces any output, check
        <filename>/var/log/nova/nova-manage.log</filename> to identify the problem. No
        output means that the command completed correctly and your
        nova database is now populated.</para>
<note><title>Deprecation warnings</title>
<para>If, while running this command, you see warnings
    such as <literal>SADeprecationWarning: The 'listeners' argument to
    Pool (and create_engine()) is deprecated. Use
    event.listen().</literal>, these will be fixed in a future version
    of the libraries and can be safely ignored.</para>
</note>
<para>Restart all services so that every component picks up the changes. On the controller node,
        run:</para>
<para>
<screen os="ubuntu">sudo start nova-api
sudo start nova-conductor
sudo start nova-network
sudo start nova-scheduler
sudo start nova-novncproxy
sudo start libvirt-bin
sudo /etc/init.d/rabbitmq-server restart </screen>
</para>
<screen os="rhel;fedora;centos;opensuse"><prompt>#</prompt> <userinput>for svc in api objectstore conductor network volume scheduler cert; do sudo service openstack-nova-$svc start; done</userinput></screen>
<para>On the compute node run:</para>
<para>
<screen os="ubuntu">sudo start nova-compute
sudo start nova-network</screen>
</para>
<screen os="rhel;fedora;centos;opensuse"><prompt>#</prompt> <userinput>for svc in compute network; do sudo service openstack-nova-$svc start; done</userinput></screen>
<para>All nova services are now installed and started. If a
        <command>start</command> command does not work, the service may not be
        running correctly (or not at all). Review the logs in
        <filename>/var/log/nova</filename> to look for clues.
    </para>
</section>


@@ -1,124 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xml:id="compute-minimum-configuration-settings"
xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0">
<title>Configuring OpenStack Compute</title>
<para>This section describes the relevant
<filename>nova.conf</filename> settings for getting a
minimal install running. Refer to the OpenStack Compute
Administration Manual for guidance on more configuration
options.</para>
<para>In general, you can use the same
<filename>nova.conf</filename> file across the controller
and compute nodes. However, the following configuration
options need to be changed on each compute host: <itemizedlist>
<listitem>
<para><literal>my_ip</literal></para>
</listitem>
<listitem>
<para><literal>vncserver_listen</literal></para>
</listitem>
<listitem>
<para><literal>vncserver_proxyclient_address</literal></para>
</listitem>
</itemizedlist>For the above configuration options, you must
use the IP address of the specific compute host, not the cloud
controller.</para>
<para>The packages automatically do these steps for a user named
        nova, but if you are installing as another user you should
        ensure that the <filename>nova.conf</filename> file
        has its owner set to <literal>root:nova</literal> and its mode
        set to <literal>0640</literal>, because the file contains your
        MySQL server's username and password.</para>
<note>
<para>The packaged install ensures that
            the nova user belongs to the nova group and that the <filename>.conf</filename>
            file permissions are set. If you are installing as another
            user, set permissions manually with the following
            commands, run as root:</para>
<screen>
<prompt>#</prompt> <userinput>groupadd nova</userinput>
<prompt>#</prompt> <userinput>usermod -g nova nova</userinput>
<prompt>#</prompt> <userinput>chown -R nova:nova /etc/nova</userinput>
<prompt>#</prompt> <userinput>chmod 640 /etc/nova/nova.conf</userinput>
</screen>
</note>
<para>The hypervisor is set by editing
<filename>/etc/nova/nova.conf</filename>. The hypervisor
defaults to <literal>kvm</literal>, but if you are working
within a VM already, switch to <literal>qemu</literal> on the
<literal>libvirt_type=</literal> line. To use Xen, refer
to the overview in this book for where to install nova
components.</para>
<note>
<para>You can also configure the <systemitem class="service"
            >nova-compute</systemitem> service (for example, to
            configure a hypervisor per compute node) with a separate
            <filename>nova-compute.conf</filename> file, and then
            refer to <filename>nova-compute.conf</filename> in the
            <filename>nova.conf</filename> file.</para>
</note>
<para>Ensure the database connection defines your backend data
store by adding a <literal>connection</literal> line to the
<literal>[database]</literal> section in
<filename>nova.conf</filename>:
<literal>connection=mysql://<replaceable>[user]</replaceable>:<replaceable>[pass]</replaceable>@<replaceable>[primary
IP]</replaceable>/<replaceable>[db
name]</replaceable></literal>, such as
<literal>connection=mysql://nova:yourpassword@192.168.206.130/nova</literal>.</para>
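The URL is simply assembled from the user, password, host, and database name; a trivial sketch using the example values above:

```python
# Building the SQLAlchemy connection URL from its parts, using the
# example values from the text (the password is a placeholder).
user, password, host, db = "nova", "yourpassword", "192.168.206.130", "nova"
connection = "mysql://%s:%s@%s/%s" % (user, password, host, db)
print(connection)  # mysql://nova:yourpassword@192.168.206.130/nova
```

Because the password appears in clear text here, the file-permission advice above (owner root:nova, mode 0640) applies.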
<para>Add these settings to
<filename>/etc/nova/nova.conf</filename> for the network
configuration assumptions made for this installation scenario.
You can place comments in the <filename>nova.conf</filename>
file by entering a new line with a <literal>#</literal> sign
at the beginning of the line. To see a listing of all possible
configuration option settings, see <link
xlink:href="http://docs.openstack.org/trunk/config-reference/content/ch_configuring-openstack-compute.html"
>the reference in the <citetitle>OpenStack Configuration Reference</citetitle></link>.</para>
<programlisting>auth_strategy=keystone
network_manager=nova.network.manager.FlatDHCPManager
public_interface=eth0
flat_interface=eth0
flat_network_bridge=br100</programlisting>
<para>Here is an example <filename>nova.conf</filename> with
commented sections:</para>
<para>
<programlisting os="ubuntu;opensuse"><xi:include parse="text" href="../common/samples/nova.conf"/>
</programlisting>
<programlisting os="rhel;centos;fedora"><xi:include parse="text" href="../common/samples/nova.conf-yum"/>
</programlisting>
</para>
<para>
<note>
<para>The <literal>my_ip</literal> configuration option
                differs for each host; edit it
                accordingly.</para>
</note>
</para>
<para>The controller node will run the <systemitem class="service"
>nova-api</systemitem>, <systemitem class="service"
>nova-scheduler</systemitem>, <systemitem class="service"
>nova-cert</systemitem>, <systemitem class="service"
>nova-consoleauth</systemitem>, <systemitem
class="service">nova-conductor</systemitem> and optionally
nova-network services. The compute node will run the
<systemitem class="service">nova-compute</systemitem> and
nova-network services. Stop the nova services before you
        run <command>db sync</command>, by running stop commands as root. Otherwise,
        your logs show errors because the database has not yet been
        populated. On the controller node, run:</para>
<screen os="ubuntu">
<prompt>#</prompt> <userinput>stop nova-api</userinput>
<prompt>#</prompt> <userinput>stop nova-conductor</userinput>
<prompt>#</prompt> <userinput>stop nova-network</userinput>
<prompt>#</prompt> <userinput>stop nova-scheduler</userinput>
<prompt>#</prompt> <userinput>stop nova-novncproxy</userinput>
</screen>
<screen os="rhel;fedora;centos;opensuse"><prompt>$</prompt> <userinput>for svc in api objectstore conductor network volume scheduler cert; do sudo service openstack-nova-$svc stop ; sudo chkconfig openstack-nova-$svc on ; done</userinput></screen>
<para>On the compute node run:</para>
<screen os="ubuntu"><prompt>#</prompt> <userinput>stop nova-compute</userinput>
<prompt>#</prompt> <userinput>stop nova-network</userinput></screen>
<screen os="rhel;fedora;centos;opensuse"><prompt>$</prompt> <userinput>for svc in api compute network; do sudo service openstack-nova-$svc stop ; sudo chkconfig openstack-nova-$svc on ; done</userinput>
</screen>
</section>


@@ -1,33 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xml:id="compute-network-planning"
xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"><title>Compute Network Planning</title>
<para>To conserve network resources and to help network
        administrators understand the networks and
        public IP addresses needed for accessing the APIs and VMs,
        this section offers recommendations and required
        minimum sizes. Throughput of at least 1000 Mbps is suggested.
        This walkthrough shows network configurations for a single
        server.</para>
<para>For OpenStack Compute, networking is configured on
multi-node installations between the physical machines on
a single subnet. For networking between virtual machine
instances, three network options are available: flat,
DHCP, and VLAN. Two NICs (Network Interface Cards) are
recommended on the server running nova-network.</para>
<para>Management Network (RFC1918 IP Range, not publicly
routable): This network is utilized for all inter-server
communications within the cloud infrastructure. Recommended
size: 255 IPs (CIDR /24)</para>
<para>Public Network (Publicly routable IP range): This network is
utilized for providing Public IP accessibility to the API
endpoints within the cloud infrastructure. Minimum size: 8 IPs
(CIDR /29)</para>
<para>VM Network (RFC1918 IP Range, not publicly routable): This
network is utilized for providing primary IP addresses to the
cloud instances. Recommended size: 1024 IPs (CIDR /22)</para>
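The recommended sizes map directly to CIDR prefix lengths, which a quick check with Python's ipaddress module confirms (the address ranges below are illustrative placeholders, not recommended allocations):

```python
import ipaddress

# Illustrative ranges only; substitute your own allocations.
networks = {
    "management (/24)": "10.0.0.0/24",    # 256 IPs
    "public (/29)": "203.0.113.0/29",     # 8 IPs
    "vm (/22)": "10.1.0.0/22",            # 1024 IPs
    "floating (/28)": "203.0.113.16/28",  # 16 IPs
}
for name, cidr in networks.items():
    print(name, ipaddress.ip_network(cidr).num_addresses)
```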
<para>Floating IP network (Publicly routable IP Range): This
network is utilized for providing Public IP accessibility to
selected cloud instances. Minimum size: 16 IPs (CIDR
/28)</para></section>


@@ -1,112 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xml:id="compute-system-requirements"
xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0">
<title>Compute and Image System Requirements</title>
<para><emphasis role="bold">Hardware</emphasis>: OpenStack
        components are intended to run on standard hardware.
        The following are recommended hardware configurations for a minimum
        production deployment: cloud
        controller nodes and compute nodes for Compute and the
        Image Service, and object, account, container, and proxy
        servers for Object Storage.</para>
<table rules="all">
<caption>Hardware Recommendations</caption>
<col width="20%"/>
<col width="23%"/>
<col width="57%"/>
<thead>
<tr>
<td>Server</td>
<td>Recommended Hardware</td>
<td>Notes</td>
</tr>
</thead>
<tbody>
<tr>
<td>Cloud Controller node (runs network, volume, API, scheduler and image
services)</td>
<td>
<para>Processor: 64-bit x86</para>
<para>Memory: 12 GB RAM</para>
<para>Disk space: 30 GB (SATA, SAS or SSD)</para>
<para>Volume storage: two disks with 2 TB (SATA) for volumes attached to the
compute nodes</para>
<para>Network: one 1 Gbps Network Interface Card (NIC)</para>
</td>
<td>
<para>Two NICs are recommended but not required. A quad-core server with 12
                        GB of RAM would be more than sufficient for a cloud controller node.</para>
</td>
</tr>
<tr>
<td>Compute nodes (runs virtual instances)</td>
<td>
<para>Processor: 64-bit x86</para>
<para>Memory: 32 GB RAM</para>
<para>Disk space: 30 GB (SATA)</para>
<para>Network: two 1 Gbps NICs</para>
</td>
<td>
<para>With 2 GB of RAM you can run one m1.small instance or three
m1.tiny instances on a node without memory swapping, so 2 GB of RAM is a
practical minimum for a test-environment compute node. As an example, Rackspace
Cloud Builders use 96 GB of RAM for compute nodes in OpenStack
deployments.</para>
<para>Specifically for virtualization on certain hypervisors on the node or
nodes running <systemitem class="service">nova-compute</systemitem>, you need an x86 machine with an AMD processor
with SVM extensions (also called AMD-V) or an Intel processor with VT
(virtualization technology) extensions.</para>
<para>For XenServer and XCP refer to the <link
xlink:href="http://docs.vmd.citrix.com/XenServer/6.0.0/1.0/en_gb/installation.html#sys_requirements">
XenServer installation guide</link> and the <link
xlink:href="http://hcl.vmd.citrix.com/">
XenServer hardware compatibility list</link>.</para>
<para>For LXC, the VT extensions are not required.</para>
</td>
</tr>
</tbody>
</table>
<note>
<para>While certain parts of OpenStack are known to work on
various operating systems, currently the only
feature-complete, production-supported host environment is
64-bit Linux.</para></note>
<para><emphasis role="bold">Operating System</emphasis>: OpenStack
currently has packages for the following distributions:
CentOS, Debian, Fedora, RHEL, openSUSE, SLES, and Ubuntu. These packages are
maintained by community members; refer to <link
xlink:href="http://wiki.openstack.org/Packaging"
>http://wiki.openstack.org/Packaging</link> for additional
links. <note>
<para os="ubuntu">The Grizzly version is available on the
most recent LTS (Long Term Support) release, which is
12.04 (Precise Pangolin), via the Ubuntu Cloud
Archive. At this time, no packages are
available for 12.10. It is also available on
the current Ubuntu development series, which is 13.04
(Raring Ringtail).</para>
<para os="fedora">The Grizzly release of OpenStack Compute
requires Fedora 16 or later.</para>
<para os="opensuse">Packages for openSUSE are available in
the Open Build Service.</para>
</note></para>
<para><emphasis role="bold">Database</emphasis>: For
OpenStack Compute, you need access to either a PostgreSQL
or MySQL database, or you can install it as part of the
OpenStack Compute installation process.</para>
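Compute reaches that database through a connection URL in its configuration file. As an illustration only (the user, password, host, and database name below are assumptions, not values from this guide), the URL is assembled like this:

```shell
# Build the MySQL connection URL that Compute's config typically uses.
# All four values here are hypothetical placeholders.
db_user=nova
db_pass=NOVA_DBPASS
db_host=192.168.1.10
db_name=nova
echo "sql_connection = mysql://${db_user}:${db_pass}@${db_host}/${db_name}"
```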
<para><emphasis role="bold">Permissions</emphasis>: You can
install OpenStack services either as root or as a user
with sudo permissions, provided the sudoers file is
configured to grant the required permissions.</para>
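As a hedged illustration of the sudoers configuration the paragraph describes, an entry of this kind could be dropped into <filename>/etc/sudoers.d/</filename> (the username is hypothetical, and granting passwordless full sudo is a convenience trade-off, not a recommendation):

```
# /etc/sudoers.d/openstack -- illustrative entry; username is an assumption
openstack ALL=(ALL) NOPASSWD: ALL
```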
<para><emphasis role="bold">Network Time Protocol</emphasis>:
You must install a time synchronization program such as
NTP. For Compute, time synchronization avoids problems
when scheduling VM launches on compute
nodes. For Object Storage, time synchronization ensures the
object replications are accurately updating objects when
needed so that the freshest content is served.</para>
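To see why synchronization matters, here is a minimal sketch of the kind of comparison a scheduler or replicator implicitly makes between host clocks. The epoch values are simulated; a real check would query NTP (for example with <command>ntpq -p</command>):

```shell
# Simulated clocks on two hosts, in epoch seconds (values are made up).
ref_time=1365000000        # reference host
node_time=1365000042       # compute node
drift=$((node_time - ref_time))
abs_drift=${drift#-}       # strip any leading minus sign: absolute value
if [ "$abs_drift" -le 60 ]; then
    echo "clocks in sync"
else
    echo "clocks out of sync"
fi
```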
</section>


@ -1,30 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xml:id="compute-verifying-install"
xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0">
<title>Verifying the Compute Installation</title>
<para>You can ensure all the Compute services are running by using the
<command>nova-manage</command> command, as root:</para>
<screen><prompt>#</prompt> <userinput>nova-manage service list</userinput></screen>
<para>The output should show "smiley faces" (<literal>:-)</literal>) rather than <literal>XXX</literal> symbols. Here is an example:</para>
<screen><computeroutput>Binary Host Zone Status State Updated_At
nova-compute myhost nova enabled :-) 2012-04-02 14:06:15
nova-cert myhost nova enabled :-) 2012-04-02 14:06:16
nova-scheduler myhost nova enabled :-) 2012-04-02 14:06:11
nova-network myhost nova enabled :-) 2012-04-02 14:06:13
nova-consoleauth myhost nova enabled :-) 2012-04-02 14:06:10</computeroutput></screen>
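Output like the sample above can also be checked mechanically; a minimal sketch, assuming the column layout shown (the state is the fifth whitespace-separated field):

```shell
# Report any service whose state column is XXX in sample output
# shaped like `nova-manage service list` (the sample here is contrived).
sample='nova-compute     myhost  nova  enabled  :-)  2012-04-02 14:06:15
nova-scheduler   myhost  nova  enabled  XXX  2012-04-02 14:06:11'
down=$(printf '%s\n' "$sample" | awk '$5 == "XXX" { print $1 }')
echo "${down:-all services up}"
```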
<para><note>
<para>If you see <literal>XXX</literal> symbols and run
services on separate hosts, ensure that NTP is
synchronizing time correctly and that the clocks on all
servers agree. Out-of-sync time stamps are the most
common cause of the <literal>XXX</literal> state.</para>
</note>You can find the version of the installation by using
the <command>nova-manage</command> command, as root:</para>
<screen><prompt>#</prompt> <userinput>nova-manage version</userinput></screen>
<para>The version number 2013.2 corresponds to the Havana
release of Compute.</para>
<literallayout class="monospaced">2013.2</literallayout>
</section>


@ -1,69 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<section
xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="example-installation-architecture">
<title>Example Installation Architectures</title>
<para>OpenStack Compute uses a shared-nothing, messaging-based
architecture. This design is very flexible: because you can install each
nova- service on an independent server, there are many possible
ways to install OpenStack Compute. These are the common
installation architectures:</para>
<itemizedlist>
<listitem>
<para>Single node: Only one server runs all nova- services and also
drives all the virtual instances. Use this configuration only for
trying out OpenStack Compute, or for development purposes.</para>
</listitem>
<listitem>
<para>Two nodes: A cloud controller node runs the nova- services
except for <systemitem class="service">nova-compute</systemitem>, and a compute node runs
<systemitem class="service">nova-compute</systemitem>. A client computer is typically used to
bundle images and interface with the servers, but a client is not
required. Use this configuration for proofs of concept or development
environments.</para>
</listitem>
<listitem>
<para>Multiple nodes: You can add more compute nodes to the two-node
installation by installing <systemitem class="service">nova-compute</systemitem> on
an additional server and copying a <filename>nova.conf</filename> file
to the added node. You can also add a volume controller and a network
controller as additional nodes in a more complex multiple-node
installation. A minimum of four nodes is recommended for running
multiple virtual instances that require significant processing
power.</para>
</listitem>
</itemizedlist>
<para>This is an illustration of one possible multiple server installation
of OpenStack Compute. Virtual server networking in the cluster may
vary.</para>
<para><inlinemediaobject>
<imageobject>
<imagedata fileref="figures/NOVA_install_arch.png" scale="80"/>
</imageobject>
</inlinemediaobject></para>
<para>An alternative architecture is to add more messaging servers if a
backlog in the messaging queue is causing performance problems. In that case, add a
messaging server in addition to, or instead of, scaling up the database server.
Your installation can run any nova- service on any server as long as the
<filename>nova.conf</filename> file points to the messaging server and the
server can send messages to it.</para>
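As a hedged illustration of the kind of <filename>nova.conf</filename> fragment this implies (option names are from the Grizzly/Havana era; the host address and password are assumptions):

```
[DEFAULT]
# Point every nova- service at the shared messaging and database servers.
rabbit_host = 192.168.1.10
sql_connection = mysql://nova:NOVA_DBPASS@192.168.1.10/nova
```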
<para>Multiple installation architectures are possible. Here is another
example illustration.</para>
<para><inlinemediaobject>
<imageobject>
<imagedata fileref="figures/NOVA_compute_nodes.png" scale="40"/>
</imageobject>
</inlinemediaobject></para>
</section>


@ -1,20 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<section
xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="service-architecture">
<title>Service Architecture</title>
<para>Because Compute has multiple services and many configurations are
possible, the following diagram shows the overall service architecture and
the communication paths between the services.</para>
<para><inlinemediaobject>
<imageobject>
<imagedata fileref="figures/NOVA_ARCH.png" scale="80"/>
</imageobject>
</inlinemediaobject></para>
</section>