<?xml version="1.0" encoding="UTF-8"?>
<chapter xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
xml:id="ch_install">
<title>OpenStack Networking Installation</title>
<para>Learn how to install and get the OpenStack Networking service
up and running.</para>
<section xml:id="install_prereqs">
<title>Initial Prerequisites</title>
<para>
<itemizedlist>
<listitem>
<para>If you are building a host from scratch and will use
OpenStack Networking, we strongly recommend that you use
Ubuntu 12.04 or 12.10, or Fedora 17 or 18. These platforms
provide OpenStack Networking packages and receive significant
testing.</para>
</listitem>
<listitem>
<para>OpenStack Networking requires at least dnsmasq 2.59,
which contains all the necessary options (see the version
check after this list).</para>
</listitem>
</itemizedlist>
</para>
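<para>For example, you can check which dnsmasq version is
installed on a host before you proceed; the command prints
the installed version:</para>
<screen><prompt>$</prompt><userinput>dnsmasq --version</userinput></screen>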
</section>
<section xml:id="install_ubuntu">
<title>Install Packages (Ubuntu)</title>
<note>
<para>This procedure uses the Cloud Archive for Ubuntu. You can
read more about it at <link
xlink:href="http://blog.canonical.com/2012/09/14/now-you-can-have-your-openstack-cake-and-eat-it/"
>http://blog.canonical.com/2012/09/14/now-you-can-have-your-openstack-cake-and-eat-it/</link>.</para>
</note>
<para>Point to Grizzly PPAs:</para>
<screen><prompt>$</prompt><userinput>echo deb http://ubuntu-cloud.archive.canonical.com/ubuntu precise-updates/grizzly main | sudo tee /etc/apt/sources.list.d/grizzly.list</userinput>
<prompt>$</prompt><userinput>sudo apt-get install ubuntu-cloud-keyring</userinput>
<prompt>$</prompt><userinput>sudo apt-get update</userinput>
<prompt>$</prompt><userinput>sudo apt-get upgrade</userinput> </screen>
<section xml:id="install_neutron_server">
<title>Install neutron-server</title>
<para>The <systemitem class="service"
>neutron-server</systemitem> handles OpenStack Networking's
user requests and exposes the API.<procedure>
<title>To install the neutron-server</title>
<step>
<para>Install neutron-server and CLI for accessing the
API:</para>
<screen><prompt>$</prompt><userinput>sudo apt-get install neutron-server python-neutronclient</userinput> </screen>
</step>
<step>
<para>You must also install the plugin you choose to use,
for example:</para>
<screen><prompt>$</prompt><userinput>sudo apt-get install neutron-plugin-&lt;plugin-name&gt;</userinput></screen>
<!--<para>See
<xref linkend="flexibility"/>.</para>-->
</step>
<step>
<para>Most plugins require that you install a database and
configure it in a plugin configuration file. For
example:</para>
<screen><prompt>$</prompt><userinput>sudo apt-get install mysql-server python-mysqldb python-sqlalchemy</userinput></screen>
<para>If you already use a database for other OpenStack
services, you only need to create a neutron
database:</para>
<screen><prompt>$</prompt><userinput>mysql -u &lt;user&gt; -p&lt;password&gt; -e "create database neutron"</userinput></screen>
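<para>If you prefer a dedicated database account instead of an
existing user, a minimal sketch is the following; the
<systemitem>neutron</systemitem> user name and the password are
placeholders that you choose:</para>
<screen><prompt>$</prompt><userinput>mysql -u root -p -e "GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY '&lt;password&gt;';"</userinput></screen>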
</step>
<step>
<para>Configure the database in the plugin's configuration
file:</para>
<substeps>
<step>
<para>Find the plugin configuration file in
<filename>/etc/neutron/plugins/&lt;plugin-name&gt;</filename>
(for example,
<filename>/etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini</filename>).
</para>
</step>
<step>
<para>Set the database connection in the file:</para>
<screen><computeroutput>sql_connection = mysql://&lt;user&gt;:&lt;password&gt;@localhost/neutron?charset=utf8</computeroutput></screen>
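<para>For example, with the hypothetical
<systemitem>neutron</systemitem> database user suggested
earlier, the line would read:</para>
<screen><computeroutput>sql_connection = mysql://neutron:&lt;password&gt;@localhost/neutron?charset=utf8</computeroutput></screen>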
</step>
</substeps>
</step>
</procedure></para>
<section xml:id="rpc_setup">
<title>RPC Setup</title>
<para>Many OpenStack Networking plugins use RPC to enable
agents to communicate with the main <systemitem
class="service">neutron-server</systemitem> process. If
your plugin requires agents, they can use the same RPC
mechanism used by other OpenStack components like Nova.</para>
<para>
<procedure>
<title>To use RabbitMQ as the message bus for RPC</title>
<step>
<para>Install RabbitMQ on a host reachable through the
management network (this step is not necessary if
RabbitMQ has already been installed for another
service, like Compute):</para>
<screen><prompt>$</prompt><userinput>sudo apt-get install rabbitmq-server</userinput>
<prompt>$</prompt><userinput>rabbitmqctl change_password guest &lt;password&gt;</userinput></screen>
</step>
<step>
<para>Update
<filename>/etc/neutron/neutron.conf</filename> with
the following values:</para>
<screen><computeroutput>rabbit_host=&lt;mgmt-IP-of-rabbit-host&gt;
rabbit_password=&lt;password&gt;
rabbit_userid=guest</computeroutput></screen>
</step>
</procedure>
</para>
<important>
<para>The <filename>/etc/neutron/neutron.conf</filename>
file should be copied to and used on all hosts running
<systemitem class="service">neutron-server</systemitem>
or any <systemitem class="service"
>neutron-*-agent</systemitem> binaries.</para>
</important>
</section>
<section xml:id="openvswitch_plugin">
<title>Plugin Configuration: OVS Plugin</title>
<para>If you use the Open vSwitch (OVS) plugin in a deployment
with multiple hosts, you must use either tunneling
or VLANs to isolate traffic from multiple networks.
Tunneling is easier to deploy because it does not require
configuring VLANs on network switches.</para>
<para>The following procedure uses tunneling:</para>
<!--<para>See
<xref linkend="ch_adv_config"/> for more
advanced deployment options:
</para>-->
<para>
<procedure>
<title>To configure OpenStack Networking to use the OVS
plugin</title>
<step>
<para>Edit
<filename>/etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini
</filename> to specify the following values (for
database configuration, see <xref
linkend="install_neutron_server"/>):</para>
<screen><computeroutput>enable_tunneling=True
tenant_network_type=gre
tunnel_id_ranges=1:1000
# only required for nodes running agents
local_ip=&lt;data-net-IP-address-of-node&gt;</computeroutput></screen>
</step>
<step>
<para>If you are using the neutron DHCP agent, add the
following to
<filename>/etc/neutron/dhcp_agent.ini</filename>:</para>
<screen><computeroutput>dnsmasq_config_file=/etc/neutron/dnsmasq-neutron.conf</computeroutput></screen>
</step>
<step>
<para>Create
<filename>/etc/neutron/dnsmasq-neutron.conf</filename>,
and add the following values to lower the MTU size on
instances and prevent packet fragmentation over the
GRE tunnel:</para>
<screen><computeroutput>dhcp-option-force=26,1400</computeroutput></screen>
</step>
<step>
<para>After you make these changes on the node running
<systemitem class="service"
>neutron-server</systemitem>, restart <systemitem
class="service">neutron-server</systemitem> to pick
up the new settings:</para>
<screen><prompt>$</prompt><userinput>sudo service neutron-server restart</userinput></screen>
</step>
</procedure>
</para>
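<para>Putting these settings together, a minimal
<filename>ovs_neutron_plugin.ini</filename> for a GRE tunneling
deployment might look like the following sketch. The section
names, IP address, and database credentials shown here are
illustrative assumptions; adjust them to match your
environment:</para>
<programlisting language="ini">[DATABASE]
sql_connection = mysql://neutron:&lt;password&gt;@localhost/neutron?charset=utf8

[OVS]
enable_tunneling = True
tenant_network_type = gre
tunnel_id_ranges = 1:1000
# only required for nodes running agents
local_ip = 10.0.1.10</programlisting>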
</section>
<section xml:id="nvp_plugin">
<title>Plugin Configuration: Nicira NVP Plugin</title>
<para>
<procedure>
<title>To configure OpenStack Networking to use the NVP
plugin</title>
<step>
<para>Install the NVP plugin, as follows:</para>
<screen><prompt>$</prompt><userinput>sudo apt-get install neutron-plugin-nicira</userinput></screen>
</step>
<step>
<para>Edit
<filename>/etc/neutron/neutron.conf</filename> and
set:</para>
<screen><computeroutput>core_plugin = neutron.plugins.nicira.NeutronPlugin.NvpPluginV2</computeroutput></screen>
<para>Example <filename>neutron.conf</filename> file for
NVP:</para>
<screen><computeroutput>core_plugin = neutron.plugins.nicira.NeutronPlugin.NvpPluginV2</computeroutput>
<computeroutput>rabbit_host = 192.168.203.10</computeroutput>
<computeroutput>allow_overlapping_ips = True</computeroutput></screen>
</step>
<step>
<para>To tell OpenStack Networking about a controller
cluster, create a new [cluster:&lt;name&gt;] section
in the
<filename>/etc/neutron/plugins/nicira/nvp.ini</filename>
file, and add the following entries (for database
configuration, see <xref
linkend="install_neutron_server"/>): <itemizedlist>
<listitem>
<para>The UUID of the NVP Transport Zone that
should be used by default when a tenant creates
a network. This value can be retrieved from the
NVP Manager Transport Zones page:</para>
<screen><computeroutput>default_tz_uuid = &lt;uuid_of_the_transport_zone&gt;</computeroutput></screen>
</listitem>
<listitem>
<para>A connection string indicating parameters to
be used by the NVP plugin when connecting to the
NVP webservice API. There will be one of these
lines in the file for each NVP controller in
your deployment. An NVP operator will likely
want to update the NVP controller IP and
password, but the remaining fields can be the
defaults:</para>
<screen><computeroutput>nvp_controller_connection = &lt;controller_node_ip&gt;:&lt;controller_port&gt;:&lt;api_user&gt;:&lt;api_password&gt;:&lt;request_timeout&gt;:&lt;http_timeout&gt;:&lt;retries&gt;:&lt;redirects&gt;</computeroutput></screen>
</listitem>
<listitem>
<para>The UUID of an NVP L3 Gateway Service that
should be used by default when a tenant creates
a router. This value can be retrieved from the
NVP Manager Gateway Services page:</para>
<screen><computeroutput>default_l3_gw_service_uuid = &lt;uuid_of_the_gateway_service&gt;</computeroutput></screen>
<warning>
<para>Ubuntu packaging currently does not update
the neutron init script to point to the NVP
configuration file. Instead, you must manually
update
<filename>/etc/default/neutron-server</filename>
with the following:</para>
<screen><computeroutput>NEUTRON_PLUGIN_CONFIG = /etc/neutron/plugins/nicira/nvp.ini</computeroutput> </screen>
</warning>
</listitem>
</itemizedlist></para>
</step>
<step>
<para>Restart neutron-server to pick up the new
settings:</para>
<screen><prompt>$</prompt><userinput>sudo service neutron-server restart</userinput></screen>
</step>
</procedure>
</para>
<para>Example <filename>nvp.ini</filename> file:</para>
<screen><computeroutput>[database]</computeroutput>
<computeroutput>sql_connection=mysql://root:root@127.0.0.1/neutron </computeroutput>
<computeroutput>[cluster:main]</computeroutput>
<computeroutput>default_tz_uuid = d3afb164-b263-4aaa-a3e4-48e0e09bb33c</computeroutput>
<computeroutput>default_l3_gw_service_uuid=5c8622cc-240a-40a1-9693-e6a5fca4e3cf</computeroutput>
<computeroutput>nvp_controller_connection=10.0.0.2:443:admin:admin:30:10:2:2</computeroutput>
<computeroutput>nvp_controller_connection=10.0.0.3:443:admin:admin:30:10:2:2</computeroutput>
<computeroutput>nvp_controller_connection=10.0.0.4:443:admin:admin:30:10:2:2 </computeroutput></screen>
<note>
<para>To debug <filename>nvp.ini</filename> configuration
issues, run the following command from the host running
neutron-server:
<screen><prompt>$</prompt><userinput>check-nvp-config &lt;path/to/nvp.ini&gt;</userinput></screen>This
command tests whether <systemitem class="service"
>neutron-server</systemitem> can log into all of the NVP
Controllers and the SQL server, and whether all of the UUID
values are correct.</para>
</note>
</section>
<section xml:id="bigswitch_floodlight_plugin">
<title>Plugin Configuration: Big Switch, Floodlight REST Proxy
Plugin</title>
<para>
<procedure>
<title>To use the REST Proxy plugin with OpenStack
Networking</title>
<step>
<para>Edit
<filename>/etc/neutron/neutron.conf</filename> and
set:</para>
<screen><computeroutput>core_plugin = neutron.plugins.bigswitch.plugin.NeutronRestProxyV2</computeroutput></screen>
</step>
<step>
<para>Edit the plugin configuration file,
<filename>/etc/neutron/plugins/bigswitch/restproxy.ini</filename>,
and specify a comma-separated list of
<systemitem>controller_ip:port</systemitem> pairs:
<screen><computeroutput>server = &lt;controller-ip&gt;:&lt;port&gt;</computeroutput></screen>For
database configuration, see <xref
linkend="install_neutron_server"/>.</para>
</step>
<step>
<para>Restart <systemitem class="service"
>neutron-server</systemitem> to pick up the new
settings:</para>
<screen><prompt>$</prompt><userinput>sudo service neutron-server restart</userinput></screen>
</step>
</procedure>
</para>
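<para>For example, to point the plugin at two controllers, the
line could read as follows; the addresses and port are
placeholders:</para>
<screen><computeroutput>server = 10.10.10.10:80,10.10.10.11:80</computeroutput></screen>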
</section>
<section xml:id="ryu_plugin">
<title>Plugin Configuration: Ryu Plugin</title>
<para>
<procedure>
<title>To use the Ryu plugin with OpenStack
Networking</title>
<step>
<para>Install the Ryu plugin, as follows:</para>
<screen><prompt>$</prompt><userinput>sudo apt-get install neutron-plugin-ryu</userinput> </screen>
</step>
<step>
<para>Edit
<filename>/etc/neutron/neutron.conf</filename> and
set:</para>
<screen><computeroutput>core_plugin = neutron.plugins.ryu.ryu_neutron_plugin.RyuNeutronPluginV2</computeroutput></screen>
</step>
<step>
<para>Edit
<filename>/etc/neutron/plugins/ryu/ryu.ini</filename>
(for database configuration, see <xref
linkend="install_neutron_server"/>), and update the
following in the <systemitem>[ovs]</systemitem>
section for the
<systemitem>ryu-neutron-agent</systemitem>: <itemizedlist>
<listitem>
<para>The
<systemitem>openflow_rest_api</systemitem> option
specifies where Ryu listens for REST API
requests. Substitute
<systemitem>ip-address</systemitem> and
<systemitem>port-no</systemitem> based on your
Ryu setup.</para>
</listitem>
<listitem>
<para>Ryu uses the <literal>ovsdb_interface</literal>
option to access the
<systemitem>ovsdb-server</systemitem>.
Substitute eth0 based on your setup; the IP
address is derived from the interface name. To
set the IP address directly, irrespective of
the interface name, specify
<systemitem>ovsdb_ip</systemitem> instead. If
<systemitem>ovsdb-server</systemitem> listens on
a non-default port, specify it with
<systemitem>ovsdb_port</systemitem>.</para>
</listitem>
<listitem>
<para>Set <systemitem>tunnel_interface</systemitem>
to indicate which IP address is used
for tunneling (if tunneling is not used, this
value is ignored). The IP address is derived
from the network interface name.</para>
</listitem>
</itemizedlist></para>
<para>Because these options reference a network interface
name rather than a fixed IP address, you can use the same
configuration file on many compute nodes:</para>
<screen><computeroutput>openflow_rest_api = &lt;ip-address&gt;:&lt;port-no&gt;
ovsdb_interface = &lt;eth0&gt;
tunnel_interface = &lt;eth0&gt;</computeroutput></screen>
</step>
<step>
<para>Restart <systemitem class="service">neutron-server</systemitem> to pick up the
new settings:</para>
<screen><prompt>$</prompt><userinput>sudo service neutron-server restart</userinput></screen>
</step>
</procedure>
</para>
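<para>For instance, assuming the Ryu controller listens on
192.168.0.10:8080 and eth0 faces the data network (both values
are placeholders in this sketch), the
<systemitem>[ovs]</systemitem> section would contain:</para>
<screen><computeroutput>[ovs]
openflow_rest_api = 192.168.0.10:8080
ovsdb_interface = eth0
tunnel_interface = eth0</computeroutput></screen>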
</section>
<section xml:id="PLUMgridplugin">
<title>Plugin Configuration: PLUMgrid Plugin</title>
<para>
<procedure>
<title>To use the PLUMgrid plugin with OpenStack
Networking</title>
<step>
<para>Edit
<filename>/etc/neutron/neutron.conf</filename> and
set:</para>
<screen><computeroutput>core_plugin = neutron.plugins.plumgrid.plumgrid_plugin.plumgrid_plugin.NeutronPluginPLUMgridV2</computeroutput></screen>
</step>
<step>
<para>Edit <filename>/etc/neutron/plugins/plumgrid/plumgrid.ini</filename> under the
<systemitem>[PLUMgridDirector]</systemitem> section, and specify the IP address,
port, admin user name, and password of the PLUMgrid Director:
<programlisting language="ini">[PLUMgridDirector]
director_server = "PLUMgrid-director-ip-address"
director_server_port = "PLUMgrid-director-port"
username = "PLUMgrid-director-admin-username"
password = "PLUMgrid-director-admin-password"</programlisting>
For database configuration, see
<xref linkend="install_neutron_server"/>.</para>
</step>
<step>
<para>Restart <systemitem class="service"
>neutron-server</systemitem> to pick up the new
settings:</para>
<screen><prompt>$</prompt><userinput>sudo service neutron-server restart</userinput></screen>
</step>
</procedure>
</para>
</section>
</section>
<section xml:id="install_neutron_agent">
<title>Install Software on Data Forwarding Nodes</title>
<para>Plugins typically have requirements for particular
software that must be run on each node that handles data
packets. This includes any node running <systemitem
class="service">nova-compute</systemitem>, as well as nodes
running dedicated OpenStack Networking service agents like
<systemitem>neutron-dhcp-agent</systemitem>,
<systemitem>neutron-l3-agent</systemitem>, or
<systemitem>neutron-lbaas-agent</systemitem> (see below for
more information about individual service agents).</para>
<para>A data-forwarding node typically has a network interface
with an IP address on the "management network" and another
interface on the "data network".</para>
<para>This section shows you how to install and configure a
subset of the available plugins, which might include the
installation of switching software (for example, Open
vSwitch) as well as agents used to communicate with the
<systemitem class="service">neutron-server</systemitem>
process running elsewhere in the data center.</para>
<section xml:id="install_neutron_agent_ovs">
<title>Node Setup: OVS Plugin</title>
<para>If you use the Open vSwitch plugin, you must install
both Open vSwitch and the
<systemitem>neutron-plugin-openvswitch-agent</systemitem>
agent on each data-forwarding node:</para>
<warning>
<para>Do not install the openvswitch-brcompat package as
it breaks the security groups functionality.</para>
</warning>
<para>
<procedure>
<title>To set up each node for the OVS plugin</title>
<step>
<para>Install the OVS agent package (this pulls in the
Open vSwitch software as a dependency):</para>
<screen><prompt>$</prompt><userinput>sudo apt-get install neutron-plugin-openvswitch-agent</userinput></screen>
</step>
<step>
<para>On each node running
<systemitem>neutron-plugin-openvswitch-agent</systemitem>: <itemizedlist>
<listitem>
<para>Replicate the
<filename>ovs_neutron_plugin.ini</filename>
file created in the first step onto the node.
</para>
</listitem>
<listitem>
<para>If you use tunneling, also update the node's
<filename>ovs_neutron_plugin.ini</filename>
file so that the
<systemitem>local_ip</systemitem> value is set to
the node's IP address on the data network (see
the example after this list).
</para>
</listitem>
</itemizedlist></para>
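<para>For example, if this node's data-network interface is
configured with the illustrative address 10.0.1.12, the
tunneling-related lines on this node would read:</para>
<screen><computeroutput>enable_tunneling = True
local_ip = 10.0.1.12</computeroutput></screen>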
</step>
<step>
<para>Restart Open vSwitch to properly load the kernel
module:</para>
<screen><prompt>$</prompt><userinput>sudo service openvswitch-switch restart</userinput></screen>
</step>
<step>
<para>Restart the agent:</para>
<screen><prompt>$</prompt><userinput>sudo service neutron-plugin-openvswitch-agent restart</userinput></screen>
</step>
<step>
<para>All nodes running
<systemitem>neutron-plugin-openvswitch-agent</systemitem>
must have an OVS bridge named "br-int". To create the
bridge, run:</para>
<screen><prompt>$</prompt><userinput>sudo ovs-vsctl add-br br-int</userinput></screen>
</step>
</procedure>
</para>
</section>
<section xml:id="install_neutron_agent_nvp">
<title>Node Setup: Nicira NVP Plugin</title>
<para>If you use the Nicira NVP plugin, you must also install
Open vSwitch on each data-forwarding node. However, you do
not need to install an additional agent on each node.</para>
<warning>
<para>It is critical that you are running a version of Open
vSwitch that is compatible with the current version of the
NVP Controller software. Do not use the version of Open
vSwitch installed by default on Ubuntu. Instead, use the
version of Open vSwitch provided on the Nicira support
portal for your version of the NVP Controller.</para>
</warning>
<para>
<procedure>
<title>To set up each node for the Nicira NVP
plugin</title>
<step>
<para>Ensure each data-forwarding node has an IP address
on the "management network", as well as an IP address
on the "data network" used for tunneling data traffic.
For full details on configuring your forwarding node,
please see the <citetitle>NVP Administrator
Guide</citetitle>.</para>
</step>
<step>
<para>Use the <citetitle>NVP Administrator
Guide</citetitle> to add the node as a "Hypervisor"
using the NVP Manager GUI. Even if your forwarding
node has no VMs and is only used for services agents
like <systemitem>neutron-dhcp-agent</systemitem> or
<systemitem>neutron-lbaas-agent</systemitem>, it
should still be added to NVP as a Hypervisor.</para>
</step>
<step>
<para>After following the <citetitle>NVP Administrator
Guide</citetitle>, use the page for this Hypervisor
in the NVP Manager GUI to confirm that the node is
properly connected to the NVP Controller Cluster and
that the NVP Controller Cluster can see the
integration bridge "br-int".</para>
</step>
</procedure>
</para>
</section>
<section xml:id="install_neutron_agent_ryu">
<title>Node Setup: Ryu Plugin</title>
<para>If you use the Ryu plugin, you must install both Open
vSwitch and Ryu, in addition to the Ryu agent package: <procedure>
<title>To set up each node for the Ryu plugin</title>
<step>
<para>Install Ryu (there is currently no Ryu package
for Ubuntu):</para>
<screen><prompt>$</prompt><userinput>sudo pip install ryu</userinput></screen>
</step>
<step>
<para>Install the Ryu agent and Open vSwitch
packages:</para>
<screen><prompt>$</prompt><userinput>sudo apt-get install neutron-plugin-ryu-agent openvswitch-switch python-openvswitch openvswitch-datapath-dkms</userinput></screen>
</step>
<step>
<para>Replicate the
<filename>ovs_ryu_plugin.ini</filename> and
<filename>neutron.conf</filename> files created in
the above step on all nodes running
<systemitem>neutron-plugin-ryu-agent</systemitem>.
</para>
</step>
<step>
<para>Restart Open vSwitch to properly load the kernel
module:</para>
<screen><prompt>$</prompt><userinput>sudo service openvswitch-switch restart</userinput></screen>
</step>
<step>
<para>Restart the agent:</para>
<screen><prompt>$</prompt><userinput>sudo service neutron-plugin-ryu-agent restart</userinput> </screen>
</step>
<step>
<para>All nodes running
<systemitem>neutron-plugin-ryu-agent</systemitem>
also require an OVS bridge named "br-int".
To create the bridge, run:</para>
<screen><prompt>$</prompt><userinput>sudo ovs-vsctl add-br br-int</userinput></screen>
</step>
</procedure>
</para>
</section>
</section>
<section xml:id="install_neutron_dhcp">
<title>Install DHCP Agent</title>
<para>The DHCP service agent is compatible with all existing
plugins and is required for all deployments where VMs should
automatically receive IP addresses via DHCP. <procedure>
<title>To install and configure the DHCP agent</title>
<step>
<para>You must configure the host running the
<systemitem>neutron-dhcp-agent</systemitem> as a "data
forwarding node" according to your plugin's requirements
(see <xref linkend="install_neutron_agent"/>).</para>
</step>
<step>
<para>Install the DHCP agent:</para>
<screen><prompt>$</prompt><userinput>sudo apt-get install neutron-dhcp-agent</userinput></screen>
</step>
<step>
<para>Finally, update any options in
<filename>/etc/neutron/dhcp_agent.ini</filename> that
depend on the plugin in use (see the sub-sections).
</para>
</step>
</procedure></para>
<section xml:id="dhcp_agent_ovs">
<title>DHCP Agent Setup: OVS Plugin</title>
<para>The following DHCP agent options are required in the
<filename>/etc/neutron/dhcp_agent.ini</filename> file for
the OVS plugin:</para>
<programlisting language="bash">[DEFAULT]
ovs_use_veth = True
enable_isolated_metadata = True
use_namespaces = True
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver</programlisting>
</section>
<section xml:id="dhcp_agent_nvp">
<title>DHCP Agent Setup: NVP Plugin</title>
<para>The following DHCP agent options are required in the
<filename>/etc/neutron/dhcp_agent.ini</filename> file for
the NVP plugin:</para>
<programlisting language="bash">[DEFAULT]
ovs_use_veth = True
enable_metadata_network = True
enable_isolated_metadata = True
use_namespaces = True
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver</programlisting>
</section>
<section xml:id="dhcp_agent_ryu">
<title>DHCP Agent Setup: Ryu Plugin</title>
<para>The following DHCP agent options are required in the
<filename>/etc/neutron/dhcp_agent.ini</filename> file for
the Ryu plugin:</para>
<programlisting language="bash">[DEFAULT]
ovs_use_veth = True
use_namespaces = True
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver</programlisting>
</section>
</section>
<section xml:id="install_neutron-l3">
<title>Install L3 Agent</title>
<para>Neutron has a widely used API extension to allow
administrators and tenants to create "routers" that connect to
L2 networks.</para>
<para>Many plugins rely on the L3 service agent to implement the
L3 functionality. However, the following plugins already have
built-in L3 capabilities:</para>
<para>
<itemizedlist>
<listitem>
<para>Nicira NVP Plugin</para>
</listitem>
<listitem>
<para>Floodlight/BigSwitch Plugin, which supports both the
open source <link
xlink:href="http://www.projectfloodlight.org/floodlight/"
>Floodlight</link> controller and the proprietary
BigSwitch controller.</para>
<note>
<para>Only the proprietary BigSwitch controller
implements L3 functionality. When using Floodlight as
your OpenFlow controller, L3 functionality is not
available.</para>
</note>
</listitem>
<listitem>
<para>PLUMgrid Plugin</para>
</listitem>
</itemizedlist>
<warning>
<para>Do not configure or use
<filename>neutron-l3-agent</filename> if you use one of
these plugins.</para>
</warning>
</para>
<para>
<procedure>
<title>To install the L3 Agent for all other plugins</title>
<step>
<para>Install the
<systemitem>neutron-l3-agent</systemitem> binary on
the network node:</para>
<screen><prompt>$</prompt><userinput>sudo apt-get install neutron-l3-agent</userinput> </screen>
</step>
<step>
<para>To uplink the node that runs
<systemitem>neutron-l3-agent</systemitem> to the
external network, create a bridge named "br-ex" and
attach the NIC for the external network to this bridge.</para>
<para>For example, with Open vSwitch and NIC eth1
connected to the external network, run:</para>
<screen><prompt>$</prompt><userinput>sudo ovs-vsctl add-br br-ex</userinput>
<prompt>$</prompt><userinput>sudo ovs-vsctl add-port br-ex eth1</userinput></screen>
<para>Do not manually configure an IP address on the NIC
connected to the external network for the node running
<systemitem>neutron-l3-agent</systemitem>. Rather, you
must have a range of IP addresses from the external
network that can be used by OpenStack Networking for
routers that uplink to the external network. This range
must be large enough to have an IP address for each
router in the deployment, as well as each floating IP.
</para>
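<para>For example, one common way to hand such a range to
OpenStack Networking is to create the external network with an
allocation pool; this is only a sketch, and the network name,
CIDR, and address range are placeholders:</para>
<screen><prompt>$</prompt><userinput>neutron net-create ext-net --router:external=True</userinput>
<prompt>$</prompt><userinput>neutron subnet-create ext-net 203.0.113.0/24 --allocation-pool start=203.0.113.101,end=203.0.113.200 --disable-dhcp</userinput></screen>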
</step>
<step>
<para>The <systemitem>neutron-l3-agent</systemitem> uses
the Linux IP stack and iptables to perform L3 forwarding
and NAT. In order to support multiple routers with
potentially overlapping IP addresses,
<systemitem>neutron-l3-agent</systemitem> defaults to
using Linux network namespaces to provide isolated
forwarding contexts. As a result, the IP addresses of
routers will not be visible simply by running
<command>ip addr list</command> or
<command>ifconfig</command> on the node. Similarly,
you will not be able to directly <command>ping</command>
fixed IPs.</para>
<para>To do either of these things, you must run the
command within a particular router's network namespace.
The namespace has the name "qrouter-&lt;UUID of
the router&gt;". The following commands are examples of
running commands in the namespace of a router with UUID
47af3868-0fa8-4447-85f6-1304de32153b:</para>
<screen><prompt>$</prompt><userinput>ip netns exec qrouter-47af3868-0fa8-4447-85f6-1304de32153b ip addr list</userinput>
<prompt>$</prompt><userinput>ip netns exec qrouter-47af3868-0fa8-4447-85f6-1304de32153b ping &lt;fixed-ip&gt;</userinput></screen>
</step>
</procedure>
</para>
</section>
<section xml:id="install_neutron-lbaas-agent">
<title>Install LBaaS Agent</title>
<para>If you use the reference implementation of
Load-Balancer-as-a-Service (LBaaS), you must run
<systemitem>neutron-lbaas-agent</systemitem> on the network
node. <procedure>
<title>To install the LBaaS agent and configure the
node</title>
<step>
<para>Install the agent by running:</para>
<screen><prompt>$</prompt><userinput>sudo apt-get install neutron-lbaas-agent</userinput></screen>
</step>
<step>
<para>If you are using: <itemizedlist>
<listitem>
<para>An OVS-based plugin (OVS, NVP, Ryu, NEC,
BigSwitch/Floodlight), you must set:</para>
<screen><computeroutput>interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver</computeroutput></screen>
</listitem>
<listitem>
<para>A plugin that uses LinuxBridge, you must
set:</para>
<screen><computeroutput>interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver</computeroutput></screen>
</listitem>
</itemizedlist></para>
</step>
<step>
<para>To use the reference implementation, you must also
set:</para>
<screen><computeroutput>device_driver = neutron.plugins.services.agent_loadbalancer.drivers.haproxy.namespace_driver.HaproxyNSDriver</computeroutput></screen>
</step>
<step>
<para>Make sure to set the following parameter in
<filename>neutron.conf</filename> on the host that
runs <systemitem class="service"
>neutron-server</systemitem>:</para>
<screen><computeroutput>service_plugins = neutron.plugins.services.agent_loadbalancer.plugin.LoadBalancerPlugin</computeroutput></screen>
</step>
</procedure></para>
</section>
<section xml:id="install_neutron_client">
<title>Install OpenStack Networking CLI Client</title>
<para>Install the OpenStack Networking CLI client by
running:</para>
<screen><prompt>$</prompt><userinput>sudo apt-get install python-pyparsing python-cliff python-neutronclient</userinput></screen>
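<para>After installation, you can verify that the client reaches
your <systemitem class="service">neutron-server</systemitem>
endpoint, for example (assuming your OpenStack credentials are
already exported in the environment):</para>
<screen><prompt>$</prompt><userinput>neutron net-list</userinput></screen>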
</section>
<section xml:id="init_config">
<title>Initialization and File Locations</title>
<para>You can start and stop OpenStack Networking services by using
the <command>service</command> command. For example:</para>
<screen><prompt>$</prompt><userinput>sudo service neutron-server stop</userinput>
<prompt>$</prompt><userinput>sudo service neutron-server status</userinput>
<prompt>$</prompt><userinput>sudo service neutron-server start</userinput>
<prompt>$</prompt><userinput>sudo service neutron-server restart</userinput></screen>
<para>Log files are in the
<systemitem>/var/log/neutron</systemitem> directory.</para>
<para>Configuration files are in the
<systemitem>/etc/neutron</systemitem> directory.</para>
</section>
</section>
<section xml:id="install_fedora">
<title>Install Packages (Fedora)</title>
<para>You can retrieve the OpenStack packages for Fedora from:
<link
xlink:href="https://apps.fedoraproject.org/packages/s/openstack"
>https://apps.fedoraproject.org/packages/s/openstack</link>
</para>
<para>You can find additional information here: <link
xlink:href="https://fedoraproject.org/wiki/OpenStack"
>https://fedoraproject.org/wiki/OpenStack</link>
</para>
<section xml:id="fedora_rpc_setup">
<title xml:id="qpid_rpc_setup">RPC Setup</title>
<para>OpenStack Networking uses RPC to allow DHCP agents and any
plugin agents to communicate with the main <systemitem
class="service">neutron-server</systemitem> process.
Typically, the agents can use the same RPC mechanism used by
other OpenStack components like Nova.</para>
<para>
<procedure>
<title>To use Qpid AMQP as the message bus for RPC</title>
<step>
<para>Ensure that Qpid is installed on a host reachable
through the management network. If Qpid is already
deployed for another service, such as Nova, the
existing Qpid setup is sufficient:</para>
<screen><prompt>$</prompt><userinput>sudo yum install qpid-cpp-server qpid-cpp-server-daemon</userinput>
<prompt>$</prompt><userinput>sudo chkconfig qpidd on</userinput>
<prompt>$</prompt><userinput>sudo service qpidd start</userinput> </screen>
</step>
<step>
<para>Update
<filename>/etc/neutron/neutron.conf</filename> with
the following values:</para>
<screen><computeroutput>rpc_backend = neutron.openstack.common.rpc.impl_qpid
qpid_hostname = &lt;mgmt-IP-of-qpid-host&gt;</computeroutput></screen>
</step>
<step>
<para>The Fedora packaging includes utility scripts that set
up the necessary configuration files and that you can
also use to understand how each OpenStack
Networking service is configured. The scripts require the
<systemitem>openstack-utils</systemitem> package. To
install the package, run:</para>
<screen><prompt>$</prompt><userinput>sudo yum install openstack-utils</userinput></screen>
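<para>As an illustration of how these utilities work, the
<command>openstack-config</command> tool from the same package
can set individual options. For example, the Qpid settings shown
earlier could be written as follows, assuming those options live
in the <systemitem>DEFAULT</systemitem> section:</para>
<screen><prompt>$</prompt><userinput>sudo openstack-config --set /etc/neutron/neutron.conf DEFAULT rpc_backend neutron.openstack.common.rpc.impl_qpid</userinput>
<prompt>$</prompt><userinput>sudo openstack-config --set /etc/neutron/neutron.conf DEFAULT qpid_hostname &lt;mgmt-IP-of-qpid-host&gt;</userinput></screen>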
</step>
</procedure>
</para>
</section>
<section xml:id="fedora_q_server">
<title>Install neutron-server and plugin</title>
<para>
<procedure>
<title>To install and configure the Neutron server and
plugin</title>
<step>
<para>Install the server and relevant plugin.</para>
<note>
<para>The client is installed as a dependency for the
OpenStack Networking service. Each plugin has its own
package, named openstack-neutron-&lt;plugin&gt;. A
complete list of the supported plugins can be seen at:
<link
xlink:href="https://fedoraproject.org/wiki/Neutron#Neutron_Plugins"
>https://fedoraproject.org/wiki/Neutron#Neutron_Plugins</link>.
</para>
</note>
<para>The following examples use the Open vSwitch
plugin:</para>
<screen><prompt>$</prompt><userinput>sudo yum install openstack-neutron</userinput>
<prompt>$</prompt><userinput>sudo yum install openstack-neutron-openvswitch</userinput></screen>
</step>
<step>
<para>Most plugins require that you install a database and
configure it in a plugin configuration file. The Fedora
packaging for OpenStack Networking includes server-setup
utility scripts that take care of this. For
example:</para>
<screen><prompt>$</prompt><userinput>sudo neutron-server-setup --plugin openvswitch</userinput></screen>
</step>
<step>
<para>Enable and start the service:</para>
<screen><prompt>$</prompt><userinput>sudo chkconfig neutron-server on</userinput>
<prompt>$</prompt><userinput>sudo service neutron-server start</userinput></screen>
</step>
</procedure>
</para>
</section>
<section xml:id="fedora_q_plugin">
<title>Install neutron-plugin-*-agent</title>
<para>Some plugins use an agent that runs on any node that
handles data packets. This includes any node running
<systemitem class="service">nova-compute</systemitem>, as
well as nodes running dedicated OpenStack Networking agents
like <systemitem>neutron-dhcp-agent</systemitem> and
<systemitem>neutron-l3-agent</systemitem> (see below). If
your plugin uses an agent, this section describes how to run
the agent for this plugin, as well as the basic configuration
options.</para>
<section xml:id="fedora_q_agent">
<title>Open vSwitch Agent</title>
<procedure>
<title>To install and configure the Open vSwitch
agent</title>
<step>
<para>Install the OVS agent:</para>
<screen><prompt>$</prompt><userinput>sudo yum install openstack-neutron-openvswitch</userinput></screen>
</step>
<step>
<para>Run the agent setup script:</para>
<screen><prompt>$</prompt><userinput>sudo neutron-node-setup --plugin openvswitch</userinput></screen>
</step>
<step>
<para>All hosts running
<systemitem>neutron-plugin-openvswitch-agent</systemitem>
require the OVS bridge named "br-int". To create the
bridge, run:</para>
<screen><prompt>$</prompt><userinput>sudo ovs-vsctl add-br br-int</userinput></screen>
</step>
<step>
<para>Enable and start the agent:</para>
<screen><prompt>$</prompt><userinput>sudo chkconfig neutron-openvswitch-agent on</userinput>
<prompt>$</prompt><userinput>sudo service neutron-openvswitch-agent start</userinput>
<prompt>$</prompt><userinput>sudo chkconfig openvswitch on</userinput>
<prompt>$</prompt><userinput>sudo service openvswitch start</userinput> </screen>
</step>
<step>
<para>Enable the OVS cleanup utility:</para>
<screen><prompt>$</prompt><userinput>sudo chkconfig neutron-ovs-cleanup on</userinput></screen>
</step>
</procedure>
</section>
</section>
<section xml:id="fedora_q_dhcp">
<title>Install DHCP Agent</title>
<para>
<procedure>
<title>To install and configure the DHCP agent</title>
<step>
<para>The DHCP agent is part of the
<systemitem>openstack-neutron</systemitem> package;
install the package using:</para>
<screen><prompt>$</prompt><userinput>sudo yum install openstack-neutron</userinput></screen>
</step>
<step>
<para>Run the agent setup script:</para>
<screen><prompt>$</prompt><userinput>sudo neutron-dhcp-setup --plugin openvswitch</userinput></screen>
</step>
<step>
<para>Enable and start the agent:</para>
<screen><prompt>$</prompt><userinput>sudo chkconfig neutron-dhcp-agent on</userinput>
<prompt>$</prompt><userinput>sudo service neutron-dhcp-agent start</userinput></screen>
</step>
</procedure>
</para>
</section>
<section xml:id="fedora_q_l3">
<title>Install L3 Agent</title>
<para>
<procedure>
<title>To install and configure the L3 agent</title>
<step>
<para>Create a bridge "br-ex" that will be used to uplink
this node running
<systemitem>neutron-l3-agent</systemitem> to the
external network, then attach the NIC attached to the
external network to this bridge. For example, with Open
vSwitch and NIC eth1 connected to the external network,
run:</para>
<screen><prompt>$</prompt><userinput>sudo ovs-vsctl add-br br-ex</userinput>
<prompt>$</prompt><userinput>sudo ovs-vsctl add-port br-ex eth1</userinput></screen>
<para>The node running neutron-l3-agent should not have an
IP address manually configured on the NIC connected to
the external network. Rather, you must have a range of
IP addresses from the external network that can be used
by OpenStack Networking for routers that uplink to the
external network. This range must be large enough to
have an IP address for each router in the deployment, as
well as each floating IP.</para>
</step>
<step>
<para>The L3 agent is part of the
<systemitem>openstack-neutron</systemitem> package;
install the package using:</para>
<screen><prompt>$</prompt><userinput>sudo yum install openstack-neutron</userinput> </screen>
</step>
<step>
<para>Run the agent setup script:</para>
<screen><prompt>$</prompt><userinput>sudo neutron-l3-setup --plugin openvswitch</userinput> </screen>
</step>
<step>
<para>Enable and start the agent:</para>
<screen><prompt>$</prompt><userinput>sudo chkconfig neutron-l3-agent on</userinput>
<prompt>$</prompt><userinput>sudo service neutron-l3-agent start</userinput></screen>
</step>
<step>
<para>Enable and start the metadata agent:</para>
<screen><prompt>$</prompt><userinput>sudo chkconfig neutron-metadata-agent on</userinput>
<prompt>$</prompt><userinput>sudo service neutron-metadata-agent start</userinput></screen>
</step>
</procedure>
</para>
</section>
<section xml:id="fedora_q_client">
<title>Install OpenStack Networking CLI client</title>
<para>Install the OpenStack Networking CLI client:</para>
<screen><prompt>$</prompt><userinput>sudo yum install python-neutronclient</userinput></screen>
</section>
<section xml:id="fedora_misc">
<title>Initialization and File Locations</title>
<para>You can start and stop services by using the
<command>service</command> command. For example:</para>
<screen><prompt>$</prompt><userinput>sudo service neutron-server stop</userinput>
<prompt>$</prompt><userinput>sudo service neutron-server status</userinput>
<prompt>$</prompt><userinput>sudo service neutron-server start</userinput>
<prompt>$</prompt><userinput>sudo service neutron-server restart</userinput></screen>
<para>Log files are in the
<systemitem>/var/log/neutron</systemitem> directory.</para>
<para>Configuration files are in the
<systemitem>/etc/neutron</systemitem> directory.</para>
</section>
</section>
</chapter>