openstack-manuals/doc/admin-guide-cloud/section_networking_introduction.xml
<?xml version="1.0" encoding="UTF-8"?>
<section xml:id="section_networking-intro"
xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0">
<title>Introduction to Networking</title>
<para>The Networking service, code-named Neutron, provides an API
that lets you define network connectivity and addressing in
the cloud. The Networking service enables operators to
leverage different networking technologies to power their
cloud networking. The Networking service also provides an API
to configure and manage a variety of network services ranging
from L3 forwarding and NAT to load balancing, edge firewalls,
and IPsec VPN.</para>
<para>For a detailed description of the Networking API
abstractions and their attributes, see the <link
xlink:href="http://docs.openstack.org/api/openstack-network/2.0/content/"
><citetitle>OpenStack Networking API v2.0
Reference</citetitle></link>.</para>
<section xml:id="section_networking-api">
<title>Networking API</title>
<para>Networking is a virtual network service that provides a
powerful API to define the network connectivity and IP
addressing that devices from other services, such as
Compute, use.</para>
<para>The Compute API has a virtual server abstraction to
describe computing resources. Similarly, the Networking
API has virtual network, subnet, and port abstractions to
describe networking resources.</para>
<table rules="all">
<caption>Networking resources</caption>
<col width="10%"/>
<col width="90%"/>
<thead>
<tr>
<th>Resource</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td><emphasis role="bold">Network</emphasis></td>
<td>An isolated L2 segment, analogous to a VLAN
in the physical networking world.</td>
</tr>
<tr>
<td><emphasis role="bold">Subnet</emphasis></td>
<td>A block of v4 or v6 IP addresses and
associated configuration state.</td>
</tr>
<tr>
<td><emphasis role="bold">Port</emphasis></td>
<td>A connection point for attaching a single
device, such as the NIC of a virtual server,
to a virtual network. Also describes the
associated network configuration, such as the
MAC and IP addresses to be used on that
port.</td>
</tr>
</tbody>
</table>
<para>You can configure rich network topologies by creating
and configuring networks and subnets, and then instructing
other OpenStack services like Compute to attach virtual
devices to ports on these networks.</para>
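<para>For example, assuming the <systemitem>neutron</systemitem> and
<systemitem>nova</systemitem> command-line clients are installed
and configured with credentials, a tenant might create a network
and a subnet and then boot a server attached to it (the names and
CIDR are illustrative):</para>
<screen><prompt>$</prompt> <userinput>neutron net-create net1</userinput>
<prompt>$</prompt> <userinput>neutron subnet-create net1 10.0.10.0/24 --name subnet1</userinput>
<prompt>$</prompt> <userinput>nova boot --image &lt;image&gt; --flavor &lt;flavor&gt; --nic net-id=&lt;net1-uuid&gt; vm1</userinput></screen>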
<para>In particular, Networking supports each tenant having
multiple private networks, and allows tenants to choose
their own IP addressing scheme (even if those IP addresses
overlap with those that other tenants use). The Networking
service:</para>
<itemizedlist>
<listitem>
<para>Enables advanced cloud networking use cases,
such as building multi-tiered web applications and
enabling migration of applications to the cloud
without changing IP addresses.</para>
</listitem>
<listitem>
<para>Offers flexibility for the cloud administrator
to customize network offerings.</para>
</listitem>
<listitem>
<para>Enables developers to extend the Networking API.
Over time, the extended functionality becomes part
of the core Networking API.</para>
</listitem>
</itemizedlist>
</section>
<section xml:id="section_plugin-arch">
<title>Plug-in architecture</title>
<para>The original Compute network implementation assumed a
basic model of isolation through Linux VLANs and
iptables. Networking introduces support for vendor
<glossterm>plug-in</glossterm>s, which offer a custom
back-end implementation of the Networking API. A plug-in
can use a variety of technologies to implement the logical
API requests. Some Networking plug-ins might use basic
Linux VLANs and iptables, while others might use more
advanced technologies, such as L2-in-L3 tunneling or
OpenFlow, to provide similar benefits.</para>
<table rules="all">
<caption>Available networking plug-ins</caption>
<col width="40%"/>
<col width="60%"/>
<thead>
<tr>
<th>Plug-in</th>
<th>Documentation</th>
</tr>
</thead>
<tbody>
<tr>
<td><emphasis role="bold">Big Switch Plug-in
(Floodlight REST Proxy)</emphasis></td>
<td>This guide and <link
xlink:href="http://www.openflowhub.org/display/floodlightcontroller/Neutron+REST+Proxy+Plugin"
>http://www.openflowhub.org/display/floodlightcontroller/Neutron+REST+Proxy+Plugin</link>
</td>
</tr>
<tr>
<td><emphasis role="bold">Brocade
Plug-in</emphasis></td>
<td>This guide</td>
</tr>
<tr>
<td><emphasis role="bold">Cisco</emphasis></td>
<td><link
xlink:href="http://wiki.openstack.org/cisco-neutron"
>http://wiki.openstack.org/cisco-neutron</link></td>
</tr>
<tr>
<td><emphasis role="bold">Cloudbase Hyper-V
Plug-in</emphasis></td>
<td><link
xlink:href="http://www.cloudbase.it/quantum-hyper-v-plugin/"
>http://www.cloudbase.it/quantum-hyper-v-plugin/</link></td>
</tr>
<tr>
<td><emphasis role="bold">Linux Bridge
Plug-in</emphasis></td>
<td><link
xlink:href="http://wiki.openstack.org/Quantum-Linux-Bridge-Plugin"
>http://wiki.openstack.org/Quantum-Linux-Bridge-Plugin</link></td>
</tr>
<tr>
<td><emphasis role="bold">Mellanox
Plug-in</emphasis></td>
<td><link
xlink:href="https://wiki.openstack.org/wiki/Mellanox-Neutron/"
>https://wiki.openstack.org/wiki/Mellanox-Neutron/</link></td>
</tr>
<tr>
<td><emphasis role="bold">Midonet
Plug-in</emphasis></td>
<td><link xlink:href="http://www.midokura.com/"
>http://www.midokura.com/</link></td>
</tr>
<tr>
<td><emphasis role="bold">ML2 (Modular Layer 2)
Plug-in</emphasis></td>
<td><link
xlink:href="https://wiki.openstack.org/wiki/Neutron/ML2"
>https://wiki.openstack.org/wiki/Neutron/ML2</link></td>
</tr>
<tr>
<td><emphasis role="bold">NEC OpenFlow
Plug-in</emphasis></td>
<td><link
xlink:href="http://wiki.openstack.org/Quantum-NEC-OpenFlow-Plugin"
>http://wiki.openstack.org/Quantum-NEC-OpenFlow-Plugin</link></td>
</tr>
<tr>
<td><emphasis role="bold">Open vSwitch
Plug-in</emphasis></td>
<td>This guide</td>
</tr>
<tr>
<td><emphasis role="bold">PLUMgrid</emphasis></td>
<td>This guide and <link
xlink:href="https://wiki.openstack.org/wiki/PLUMgrid-Neutron"
>https://wiki.openstack.org/wiki/PLUMgrid-Neutron</link></td>
</tr>
<tr>
<td><emphasis role="bold">Ryu
Plug-in</emphasis></td>
<td>This guide and <link
xlink:href="https://github.com/osrg/ryu/wiki/OpenStack"
>https://github.com/osrg/ryu/wiki/OpenStack</link></td>
</tr>
<tr>
<!-- TODO: update support link, when available -->
<td><emphasis role="bold">VMware NSX
Plug-in</emphasis></td>
<td>This guide and <link
xlink:href="http://www.vmware.com/nsx"
>NSX Product Overview</link>, <link
xlink:href="http://www.nicira.com/support"
>NSX Product Support</link></td>
</tr>
</tbody>
</table>
<para>Plug-ins can have different properties for hardware
requirements, features, performance, scale, or operator
tools. Because Networking supports a large number of
plug-ins, the cloud administrator can weigh options to
decide on the right networking technology for the
deployment.</para>
<para>In the Havana release, OpenStack Networking introduces
the <glossterm
baseform="Modular Layer 2 (ML2) neutron plug-in">
Modular Layer 2 (ML2) plug-in</glossterm> that enables
the use of multiple concurrent mechanism drivers. This
capability aligns with the complex requirements typically
found in large heterogeneous environments. It currently
works with the existing Open vSwitch, Linux Bridge, and
Hyper-V L2 agents. The ML2 framework simplifies the
addition of support for new L2 technologies and reduces
the effort that is required to add and maintain them
compared to earlier large plug-ins.</para>
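<para>For example, a minimal ML2 configuration that provides GRE
tenant networks through the Open vSwitch mechanism driver
might look like this (a sketch with illustrative values; the
file is typically
<filename>/etc/neutron/plugins/ml2/ml2_conf.ini</filename>):</para>
<programlisting language="ini">[ml2]
type_drivers = gre
tenant_network_types = gre
mechanism_drivers = openvswitch

[ml2_type_gre]
tunnel_id_ranges = 1:1000</programlisting>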
<note>
<title>Plug-in deprecation notice</title>
<para>The Open vSwitch and Linux Bridge plug-ins are
deprecated in the Havana release and will be removed
in the Icehouse release. The features in these
plug-ins are now part of the ML2 plug-in in the form
of mechanism drivers.</para>
</note>
<para>Not all Networking plug-ins are compatible with all
possible Compute drivers:</para>
<table rules="all">
<caption>Plug-in compatibility with Compute
drivers</caption>
<thead>
<tr>
<th>Plug-in</th>
<th>Libvirt (KVM/QEMU)</th>
<th>XenServer</th>
<th>VMware</th>
<th>Hyper-V</th>
<th>Bare-metal</th>
</tr>
</thead>
<tbody>
<tr>
<td>Big Switch / Floodlight</td>
<td>Yes</td>
<td/>
<td/>
<td/>
<td/>
</tr>
<tr>
<td>Brocade</td>
<td>Yes</td>
<td/>
<td/>
<td/>
<td/>
</tr>
<tr>
<td>Cisco</td>
<td>Yes</td>
<td/>
<td/>
<td/>
<td/>
</tr>
<tr>
<td>Cloudbase Hyper-V</td>
<td/>
<td/>
<td/>
<td>Yes</td>
<td/>
</tr>
<tr>
<td>Linux Bridge</td>
<td>Yes</td>
<td/>
<td/>
<td/>
<td/>
</tr>
<tr>
<td>Mellanox</td>
<td>Yes</td>
<td/>
<td/>
<td/>
<td/>
</tr>
<tr>
<td>Midonet</td>
<td>Yes</td>
<td/>
<td/>
<td/>
<td/>
</tr>
<tr>
<td>ML2</td>
<td>Yes</td>
<td/>
<td/>
<td>Yes</td>
<td/>
</tr>
<tr>
<td>NEC OpenFlow</td>
<td>Yes</td>
<td/>
<td/>
<td/>
<td/>
</tr>
<tr>
<td>Open vSwitch</td>
<td>Yes</td>
<td/>
<td/>
<td/>
<td/>
</tr>
<tr>
<td>PLUMgrid</td>
<td>Yes</td>
<td/>
<td>Yes</td>
<td/>
<td/>
</tr>
<tr>
<td>Ryu</td>
<td>Yes</td>
<td/>
<td/>
<td/>
<td/>
</tr>
<tr>
<td>VMware NSX</td>
<td>Yes</td>
<td>Yes</td>
<td>Yes</td>
<td/>
<td/>
</tr>
</tbody>
</table>
<section xml:id="section_plugin-config">
<title>Plug-in configurations</title>
<para>For configuration options, see <link
xlink:href="http://docs.openstack.org/havana/config-reference/content/section_networking-options-reference.html"
>Networking configuration options</link> in
the <citetitle>Configuration Reference</citetitle>.
These sections explain how to configure specific
plug-ins.</para>
<section xml:id="bigswitch_floodlight_plugin">
<title>Configure Big Switch, Floodlight REST Proxy
plug-in</title>
<procedure>
<title>To use the REST Proxy plug-in with
OpenStack Networking</title>
<step>
<para>Edit the
<filename>/etc/neutron/neutron.conf</filename> file and add this line:</para>
<programlisting language="ini">core_plugin = neutron.plugins.bigswitch.plugin.NeutronRestProxyV2</programlisting>
</step>
<step>
<para>Edit the plug-in configuration file,
<filename>/etc/neutron/plugins/bigswitch/restproxy.ini</filename>,
and specify a comma-separated list of
<systemitem>controller_ip:port</systemitem>
pairs:</para>
<programlisting language="ini">server = &lt;controller-ip&gt;:&lt;port&gt;</programlisting>
<para>For database configuration, see <link
xlink:href="http://docs.openstack.org/havana/install-guide/install/apt/content/neutron-install-network-node.html"
>Install Networking Services</link> in
the <citetitle>Installation
Guide</citetitle> in the <link
xlink:href="http://docs.openstack.org"
>OpenStack Documentation index</link>.
(The link defaults to the Ubuntu
version.)</para>
</step>
<step>
<para>Restart <systemitem class="service"
>neutron-server</systemitem> to apply
the new settings:</para>
<screen><prompt>$</prompt> <userinput>sudo service neutron-server restart</userinput></screen>
</step>
</procedure>
</section>
<section xml:id="brocade_plugin">
<title>Configure Brocade plug-in</title>
<procedure>
<title>To use the Brocade plug-in with OpenStack
Networking</title>
<step>
<para>Install the Brocade-modified Python
netconf client (ncclient) library, which
is available at <link
xlink:href="https://github.com/brocade/ncclient"
>https://github.com/brocade/ncclient</link>:</para>
<screen><prompt>$</prompt> <userinput>git clone https://github.com/brocade/ncclient</userinput>
<prompt>$</prompt> <userinput>cd ncclient; sudo python ./setup.py install</userinput></screen>
</step>
<step>
<para>Edit the
<filename>/etc/neutron/neutron.conf</filename>
file and set the following option:</para>
<programlisting language="ini">core_plugin = neutron.plugins.brocade.NeutronPlugin.BrocadePluginV2</programlisting>
</step>
<step>
<para>Edit the
<filename>/etc/neutron/plugins/brocade/brocade.ini</filename>
configuration file for the Brocade plug-in
and specify the admin user name, password,
and IP address of the Brocade
switch:</para>
<programlisting language="ini">[SWITCH]
username = <replaceable>admin</replaceable>
password = <replaceable>password</replaceable>
address = <replaceable>switch mgmt ip address</replaceable>
ostype = NOS</programlisting>
<para>For database configuration, see <link
xlink:href="http://docs.openstack.org/havana/install-guide/install/apt/content/neutron-install-network-node.html"
>Install Networking Services</link> in
any of the <citetitle>Installation
Guides</citetitle> in the <link
xlink:href="http://docs.openstack.org"
>OpenStack Documentation index</link>.
(The link defaults to the Ubuntu
version.)</para>
</step>
<step>
<para>Restart the
<systemitem class="service"
>neutron-server</systemitem>
service to apply the new settings:</para>
<screen><prompt>#</prompt> <userinput>service neutron-server restart</userinput></screen>
</step>
</procedure>
</section>
<section xml:id="openvswitch_plugin">
<title>Configure OVS plug-in</title>
<para>If you use the Open vSwitch (OVS) plug-in in a
deployment with multiple hosts, you must use
either tunneling or VLANs to isolate traffic from
multiple networks. Tunneling is easier to deploy
because it does not require configuring VLANs on
network switches.</para>
<para>This procedure uses tunneling:</para>
<procedure>
<title>To configure OpenStack Networking to use
the OVS plug-in</title>
<step>
<para>Edit
<filename>/etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini
</filename> to specify these values (for
database configuration, see <link
xlink:href="http://docs.openstack.org/havana/install-guide/install/apt/content/neutron-install-network-node.html"
>Install Networking Services</link> in
<citetitle>Installation
Guide</citetitle>):</para>
<programlisting language="ini">enable_tunneling=True
tenant_network_type=gre
tunnel_id_ranges=1:1000
# only required for nodes running agents
local_ip=&lt;data-net-IP-address-of-node&gt;</programlisting>
</step>
<step>
<para>If you use the neutron DHCP agent, add
these lines to the
<filename>/etc/neutron/dhcp_agent.ini</filename>
file:</para>
<programlisting language="ini">dnsmasq_config_file=/etc/neutron/dnsmasq-neutron.conf</programlisting>
</step>
<step>
<para>Create
<filename>/etc/neutron/dnsmasq-neutron.conf</filename>,
and add these values to lower the MTU size
on instances and prevent packet
fragmentation over the GRE tunnel:</para>
<programlisting language="ini">dhcp-option-force=26,1400</programlisting>
</step>
<step>
<para>Restart to apply the new
settings:</para>
<screen><prompt>$</prompt> <userinput>sudo service neutron-server restart</userinput></screen>
</step>
</procedure>
</section>
<section xml:id="nsx_plugin">
<title>Configure NSX plug-in</title>
<procedure>
<title>To configure OpenStack Networking to use
the NSX plug-in</title>
<para>The instructions in this section refer to the
VMware NSX platform, which was formerly
known as Nicira NVP.</para>
<step>
<para>Install the NSX plug-in, as
follows:</para>
<screen><prompt>$</prompt> <userinput>sudo apt-get install neutron-plugin-vmware</userinput></screen>
</step>
<step>
<para>Edit
<filename>/etc/neutron/neutron.conf</filename>
and set:</para>
<programlisting language="ini">core_plugin = neutron.plugins.vmware.NsxPlugin</programlisting>
<para>Example
<filename>neutron.conf</filename> file
for NSX:</para>
<programlisting language="ini">core_plugin = neutron.plugins.vmware.NsxPlugin
rabbit_host = 192.168.203.10
allow_overlapping_ips = True</programlisting>
</step>
<step>
<para>To configure the NSX controller cluster
for the OpenStack Networking Service,
locate the <literal>[DEFAULT]</literal>
section in the
<filename>/etc/neutron/plugins/vmware/nsx.ini</filename>
file, and add the following entries (for
database configuration, see <link
xlink:href="http://docs.openstack.org/havana/install-guide/install/apt/content/neutron-install-network-node.html"
>Install Networking Services</link> in
<citetitle>Installation
Guide</citetitle>):</para>
<itemizedlist>
<listitem>
<para>To establish and configure the
connection with the controller
cluster you must set some
parameters, including NSX API
endpoints, access credentials, and
settings for HTTP redirects and
retries in case of connection
failures:</para>
<programlisting language="ini">nsx_user = &lt;admin user name>
nsx_password = &lt;password for nsx_user>
req_timeout = &lt;timeout in seconds for NSX_requests> # default 30 seconds
http_timeout = &lt;tiemout in seconds for single HTTP request> # default 10 seconds
retries = &lt;number of HTTP request retries> # default 2
redirects = &lt;maximum allowed redirects for a HTTP request> # default 3
nsx_controllers = &lt;comma separated list of API endpoints></programlisting>
<para>To ensure correct operations,
the <literal>nsx_user</literal>
user must have administrator
credentials on the NSX
platform.</para>
<para>A controller API endpoint
consists of the IP address and port
for the controller; if you omit the
port, port 443 is used. If multiple
API endpoints are specified, it is
up to the user to ensure that all
these endpoints belong to the same
controller cluster. The OpenStack
Networking VMware NSX plug-in does
not perform this check, and results
might be unpredictable.</para>
<para>When you specify multiple API
endpoints, the plug-in
load-balances requests on the
various API endpoints.</para>
</listitem>
<listitem>
<para>The UUID of the NSX Transport
Zone that should be used by default
when a tenant creates a network.
You can get this value from the NSX
Manager's Transport Zones
page:</para>
<programlisting language="ini">default_tz_uuid = &lt;uuid_of_the_transport_zone&gt;</programlisting>
</listitem>
<listitem>
<para>The UUID of the NSX L3 gateway
service to use by default when a
tenant creates a router:</para>
<programlisting language="ini">default_l3_gw_service_uuid = &lt;uuid_of_the_gateway_service&gt;</programlisting>
<warning>
<para>Ubuntu packaging currently
does not update the Neutron init
script to point to the NSX
configuration file. Instead, you
must manually update
<filename>/etc/default/neutron-server</filename>
to add this line:</para>
<programlisting language="ini">NEUTRON_PLUGIN_CONFIG = /etc/neutron/plugins/vmware/nsx.ini</programlisting>
</warning>
</listitem>
</itemizedlist>
</step>
<step>
<para>Restart <systemitem class="service"
>neutron-server</systemitem> to apply
new settings:</para>
<screen><prompt>$</prompt> <userinput>sudo service neutron-server restart</userinput></screen>
</step>
</procedure>
<para>Example <filename>nsx.ini</filename>
file:</para>
<programlisting language="ini">[DEFAULT]
default_tz_uuid = d3afb164-b263-4aaa-a3e4-48e0e09bb33c
default_l3_gw_service_uuid=5c8622cc-240a-40a1-9693-e6a5fca4e3cf
nsx_user=admin
nsx_password=changeme
nsx_controllers=10.127.0.100,10.127.0.200:8888</programlisting>
<note>
<para>To debug <filename>nsx.ini</filename>
configuration issues, run this command from
the host that runs <systemitem class="service"
>neutron-server</systemitem>:</para>
<screen><prompt>#</prompt> <userinput>neutron-check-nsx-config &lt;path/to/nsx.ini&gt;</userinput></screen>
<para>This command tests whether <systemitem
class="service"
>neutron-server</systemitem> can log into
all of the NSX Controllers and the SQL server,
and whether all UUID values are
correct.</para>
</note>
<section xml:id="LBaaS_and_FWaaS">
<title>Load Balancer-as-a-Service and
Firewall-as-a-Service</title>
<para>The NSX LBaaS and FWaaS services use the
standard OpenStack API with the exception of
requiring routed-insertion extension
support.</para>
<para>The main differences between the NSX
implementation and the community reference
implementation of these services are:</para>
<orderedlist>
<listitem>
<para>The NSX LBaaS and FWaaS plug-ins
require the routed-insertion
extension, which adds the
<code>router_id</code> attribute to
the VIP (Virtual IP address) and
firewall resources and binds these
services to a logical router.</para>
</listitem>
<listitem>
<para>The community reference
implementation of LBaaS only supports
a one-arm model, which restricts the
VIP to be on the same subnet as the
back-end servers. The NSX LBaaS
plug-in supports only a two-arm model
for north-south traffic, which
means that you can create the VIP
only on the external (physical)
network.</para>
</listitem>
<listitem>
<para>The community reference
implementation of FWaaS applies
firewall rules to all logical routers
in a tenant, while the NSX FWaaS
plug-in applies firewall rules only to
one logical router according to the
<code>router_id</code> of the
firewall entity.</para>
</listitem>
</orderedlist>
<procedure>
<title>To configure Load Balancer-as-a-Service
and Firewall-as-a-Service with
NSX</title>
<step>
<para>Edit
<filename>/etc/neutron/neutron.conf</filename>
file:</para>
<programlisting language="ini">core_plugin = neutron.plugins.vmware.NsxServicePlugin
# Note: comment out service_plugins. LBaaS &amp; FWaaS are supported by the core_plugin NsxServicePlugin
# service_plugins = </programlisting>
</step>
<step>
<para>Edit
<filename>/etc/neutron/plugins/vmware/nsx.ini</filename>
file:</para>
<para>In addition to the original NSX
configuration, the
<code>default_l3_gw_service_uuid</code>
is required for the NSX Advanced
plug-in and you must add a <code>vcns</code>
section:</para>
<programlisting language="ini">[DEFAULT]
nsx_password = <replaceable>admin</replaceable>
nsx_user = <replaceable>admin</replaceable>
nsx_controllers = <replaceable>10.37.1.137:443</replaceable>
default_l3_gw_service_uuid = <replaceable>aae63e9b-2e4e-4efe-81a1-92cf32e308bf</replaceable>
default_tz_uuid = <replaceable>2702f27a-869a-49d1-8781-09331a0f6b9e</replaceable>
[vcns]
# VSM management URL
manager_uri = <replaceable>https://10.24.106.219</replaceable>
# VSM admin user name
user = <replaceable>admin</replaceable>
# VSM admin password
password = <replaceable>default</replaceable>
# UUID of a logical switch on NSX which has physical network connectivity (currently using bridge transport type)
external_network = <replaceable>f2c023cf-76e2-4625-869b-d0dabcfcc638</replaceable>
# ID of deployment_container on VSM. Optional, if not specified, a default global deployment container is used
# deployment_container_id =
# task_status_check_interval configures status check interval for vCNS asynchronous API. Default is 2000 msec.
# task_status_check_interval =</programlisting>
</step>
</procedure>
</section>
</section>
<section xml:id="PLUMgridplugin">
<title>Configure PLUMgrid plug-in</title>
<procedure>
<title>To use the PLUMgrid plug-in with OpenStack
Networking</title>
<step>
<para>Edit
<filename>/etc/neutron/neutron.conf</filename>
and set:</para>
<programlisting language="ini">core_plugin = neutron.plugins.plumgrid.plumgrid_plugin.plumgrid_plugin.NeutronPluginPLUMgridV2</programlisting>
</step>
<step>
<para>Edit
<filename>/etc/neutron/plugins/plumgrid/plumgrid.ini</filename>
under the
<systemitem>[PLUMgridDirector]</systemitem>
section, and specify the IP address, port,
admin user name, and password of the
PLUMgrid Director:</para>
<programlisting language="ini">[PLUMgridDirector]
director_server = "PLUMgrid-director-ip-address"
director_server_port = "PLUMgrid-director-port"
username = "PLUMgrid-director-admin-username"
password = "PLUMgrid-director-admin-password"</programlisting>
<para>For database configuration, see <link
xlink:href="http://docs.openstack.org/havana/install-guide/install/apt/content/neutron-install-network-node.html"
>Install Networking Services</link> in
the <citetitle>Installation
Guide</citetitle>.</para>
</step>
<step>
<para>Restart
<systemitem class="service"
>neutron-server</systemitem> to apply the new settings:</para>
<screen><prompt>$</prompt> <userinput>sudo service neutron-server restart</userinput></screen>
</step>
</procedure>
</section>
<section xml:id="ryu_plugin">
<title>Configure Ryu plug-in</title>
<procedure>
<title>To use the Ryu plug-in with OpenStack
Networking</title>
<step>
<para>Install the Ryu plug-in, as
follows:</para>
<screen><prompt>$</prompt> <userinput>sudo apt-get install neutron-plugin-ryu</userinput> </screen>
</step>
<step>
<para>Edit
<filename>/etc/neutron/neutron.conf</filename>
and set:</para>
<programlisting language="ini">core_plugin = neutron.plugins.ryu.ryu_neutron_plugin.RyuNeutronPluginV2</programlisting>
</step>
<step>
<para>Edit the
<filename>/etc/neutron/plugins/ryu/ryu.ini</filename>
file and update these options in the
<systemitem>[ovs]</systemitem> section
for the
<systemitem>ryu-neutron-agent</systemitem>:</para>
<itemizedlist>
<listitem>
<para><systemitem>openflow_rest_api</systemitem>.
Defines where Ryu listens for the
REST API. Substitute
<systemitem>ip-address</systemitem>
and
<systemitem>port-no</systemitem>
based on your Ryu setup.</para>
</listitem>
<listitem>
<para><literal>ovsdb_interface</literal>.
Enables Ryu to access the
<systemitem>ovsdb-server</systemitem>.
Substitute <literal>eth0</literal>
based on your setup. The IP address
is derived from the interface name.
If you want to change this value
irrespective of the interface name,
you can specify
<systemitem>ovsdb_ip</systemitem>.
If you use a non-default port for
<systemitem>ovsdb-server</systemitem>,
you can specify
<systemitem>ovsdb_port</systemitem>.</para>
</listitem>
<listitem>
<para><systemitem>tunnel_interface</systemitem>.
Defines which IP address is used
for tunneling. If you do not use
tunneling, this value is ignored.
The IP address is derived from the
network interface name.</para>
</listitem>
</itemizedlist>
<para>For database configuration, see <link
xlink:href="http://docs.openstack.org/havana/install-guide/install/apt/content/neutron-install-network-node.html"
>Install Networking Services</link> in
<citetitle>Installation
Guide</citetitle>.</para>
<para>You can use the same configuration file
for many compute nodes by using a network
interface name with a different IP
address:</para>
<programlisting language="ini">openflow_rest_api = &lt;ip-address&gt;:&lt;port-no&gt; ovsdb_interface = &lt;eth0&gt; tunnel_interface = &lt;eth0&gt;</programlisting>
</step>
<step>
<para>Restart <systemitem class="service"
>neutron-server</systemitem> to apply
the new settings:</para>
<screen><prompt>$</prompt> <userinput>sudo service neutron-server restart</userinput></screen>
</step>
</procedure>
</section>
</section>
</section>
<section xml:id="install_neutron_agent">
<title>Configure neutron agents</title>
<para>Plug-ins typically have requirements for particular
software that must be run on each node that handles data
packets. This includes any node that runs <systemitem
class="service">nova-compute</systemitem> and nodes
that run dedicated OpenStack Networking service agents
such as <systemitem>neutron-dhcp-agent</systemitem>,
<systemitem>neutron-l3-agent</systemitem>,
<systemitem>neutron-metering-agent</systemitem> or
<systemitem>neutron-lbaas-agent</systemitem>.</para>
<para>A data-forwarding node typically has a network interface
with an IP address on the "management network" and another
interface on the "data network".</para>
<para>This section shows you how to install and configure a
subset of the available plug-ins, which might include the
installation of switching software (for example, Open
vSwitch) as well as agents used to communicate with the
<systemitem class="service"
>neutron-server</systemitem> process running elsewhere
in the data center.</para>
<section xml:id="config_neutron_data_fwd_node">
<title>Configure data-forwarding nodes</title>
<section xml:id="install_neutron_agent_ovs">
<title>Node set up: OVS plug-in</title>
<para>
<note>
<para>This section also applies to the ML2
plug-in when Open vSwitch is used as a
mechanism driver.</para>
</note>If you use the Open vSwitch plug-in, you
must install Open vSwitch and the
<systemitem>neutron-plugin-openvswitch-agent</systemitem>
agent on each data-forwarding node:</para>
<warning>
<para>Do not install the
<package>openvswitch-brcompat</package>
package because it prevents the security group
functionality from operating correctly.</para>
</warning>
<procedure>
<title>To set up each node for the OVS
plug-in</title>
<step>
<para>Install the OVS agent package. This
action also installs the Open vSwitch
software as a dependency:</para>
<screen><prompt>$</prompt> <userinput>sudo apt-get install neutron-plugin-openvswitch-agent</userinput></screen>
</step>
<step>
<para>On each node that runs the
<systemitem>neutron-plugin-openvswitch-agent</systemitem>, complete these steps:</para>
<itemizedlist>
<listitem>
<para>Replicate the
<filename>ovs_neutron_plugin.ini</filename>
file that you created earlier
onto the node.</para>
</listitem>
<listitem>
<para>If you use tunneling, update the
<filename>ovs_neutron_plugin.ini</filename>
file for the node with the
IP address that is configured on the
data network for the node by using the
<systemitem>local_ip</systemitem>
value.</para>
</listitem>
</itemizedlist>
</step>
<step>
<para>Restart Open vSwitch to properly load
the kernel module:</para>
<screen><prompt>$</prompt> <userinput>sudo service openvswitch-switch restart</userinput></screen>
</step>
<step>
<para>Restart the agent:</para>
<screen><prompt>$</prompt> <userinput>sudo service neutron-plugin-openvswitch-agent restart</userinput></screen>
</step>
<step>
<para>All nodes that run
<systemitem>neutron-plugin-openvswitch-agent</systemitem>
must have an OVS <literal>br-int</literal>
bridge. To create the bridge,
run:</para>
<screen><prompt>$</prompt> <userinput>sudo ovs-vsctl add-br br-int</userinput></screen>
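<para>To confirm that the bridge exists, list the
Open vSwitch configuration; the output
should include <literal>br-int</literal>:</para>
<screen><prompt>$</prompt> <userinput>sudo ovs-vsctl show</userinput></screen>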
</step>
</procedure>
</section>
<section xml:id="install_neutron_agent_nsx">
<title>Node set up: NSX plug-in</title>
<para>If you use the NSX plug-in, you must also
install Open vSwitch on each data-forwarding node.
However, you do not need to install an additional
agent on each node.</para>
<warning>
<para>It is critical that you are running an Open
vSwitch version that is compatible with the
current version of the NSX Controller
software. Do not use the Open vSwitch version
that is installed by default on Ubuntu.
Instead, use the Open vSwitch version that is
provided on the VMware support portal for your
NSX Controller version.</para>
</warning>
<procedure>
<title>To set up each node for the NSX
plug-in</title>
<step>
<para>Ensure that each data-forwarding node has an
IP address on the management network,
and an IP address on the "data network"
that is used for tunneling data traffic.
For full details on configuring your
forwarding node, see the <citetitle>NSX
Administrator
Guide</citetitle>.</para>
</step>
<step>
<para>Use the <citetitle>NSX Administrator
Guide</citetitle> to add the node as a
Hypervisor by using the NSX Manager GUI.
Even if your forwarding node has no VMs
and is only used for services agents like
<systemitem>neutron-dhcp-agent</systemitem>
or
<systemitem>neutron-lbaas-agent</systemitem>,
it should still be added to NSX as a
Hypervisor.</para>
</step>
<step>
<para>After following the <citetitle>NSX
Administrator Guide</citetitle>, use
the page for this Hypervisor in the NSX
Manager GUI to confirm that the node is
properly connected to the NSX Controller
Cluster and that the NSX Controller
Cluster can see the
<literal>br-int</literal> integration
bridge.</para>
</step>
</procedure>
</section>
<section xml:id="install_neutron_agent_ryu">
<title>Node set up: Ryu plug-in</title>
<para>If you use the Ryu plug-in, you must install
both Open vSwitch and Ryu, in addition to the Ryu
agent package:</para>
<procedure>
<title>To set up each node for the Ryu
plug-in</title>
<step>
<para>Install Ryu (currently there is no
Ryu package for Ubuntu):</para>
<screen><prompt>$</prompt> <userinput>sudo pip install ryu</userinput></screen>
</step>
<step>
<para>Install the Ryu agent and Open vSwitch
packages:</para>
<screen><prompt>$</prompt> <userinput>sudo apt-get install neutron-plugin-ryu-agent openvswitch-switch python-openvswitch openvswitch-datapath-dkms</userinput></screen>
</step>
<step>
<para>Replicate the
<filename>ovs_ryu_plugin.ini</filename>
and <filename>neutron.conf</filename>
files that you created earlier to all
nodes that run
<systemitem>neutron-plugin-ryu-agent</systemitem>.</para>
</step>
<step>
<para>Restart Open vSwitch to properly load
the kernel module:</para>
<screen><prompt>$</prompt> <userinput>sudo service openvswitch-switch restart</userinput></screen>
</step>
<step>
<para>Restart the agent:</para>
<screen><prompt>$</prompt> <userinput>sudo service neutron-plugin-ryu-agent restart</userinput> </screen>
</step>
<step>
<para>All nodes running
<systemitem>neutron-plugin-ryu-agent</systemitem>
also require that an OVS bridge named
"br-int" exists on each node. To create
the bridge, run:</para>
<screen><prompt>$</prompt> <userinput>sudo ovs-vsctl add-br br-int</userinput></screen>
</step>
</procedure>
</section>
</section>
<section xml:id="install_neutron_dhcp">
<title>Configure DHCP agent</title>
<para>The DHCP service agent is compatible with all
existing plug-ins and is required for all deployments
where VMs should automatically receive IP addresses
through DHCP.</para>
<procedure>
<title>To install and configure the DHCP agent</title>
<step>
<para>You must configure the host running the
<systemitem>neutron-dhcp-agent</systemitem>
as a "data forwarding node" according to the
requirements for your plug-in (see <xref
linkend="install_neutron_agent"/>).</para>
</step>
<step>
<para>Install the DHCP agent:</para>
<screen><prompt>$</prompt> <userinput>sudo apt-get install neutron-dhcp-agent</userinput></screen>
</step>
<step>
<para>Finally, update any options in the
<filename>/etc/neutron/dhcp_agent.ini</filename>
file that depend on the plug-in in use (see
the sub-sections).</para>
</step>
</procedure>
<important>
<para>If you reboot a node that runs the DHCP agent,
you must run the
<command>neutron-ovs-cleanup</command> command
before the <systemitem class="service"
>neutron-dhcp-agent</systemitem> service
starts.</para>
<para>On Red Hat-based systems, the <systemitem
class="service">
neutron-ovs-cleanup</systemitem> service runs
the <command>neutron-ovs-cleanup</command> command
automatically. However, on Debian-based systems
such as Ubuntu, you must manually run this command
or write your own system script that runs on boot
before the <systemitem class="service">
neutron-dhcp-agent</systemitem> service
starts.</para>
</important>
<section xml:id="dhcp_agent_ovs">
<title>DHCP agent setup: OVS plug-in</title>
<para>These DHCP agent options are required in the
<filename>/etc/neutron/dhcp_agent.ini</filename>
file for the OVS plug-in:</para>
<programlisting language="bash">[DEFAULT]
enable_isolated_metadata = True
use_namespaces = True
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver</programlisting>
</section>
<section xml:id="dhcp_agent_nsx">
<title>DHCP agent setup: NSX plug-in</title>
<para>These DHCP agent options are required in the
<filename>/etc/neutron/dhcp_agent.ini</filename>
file for the NSX plug-in:</para>
<programlisting language="bash">[DEFAULT]
enable_metadata_network = True
enable_isolated_metadata = True
use_namespaces = True
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver</programlisting>
</section>
<section xml:id="dhcp_agent_ryu">
<title>DHCP agent setup: Ryu plug-in</title>
<para>These DHCP agent options are required in the
<filename>/etc/neutron/dhcp_agent.ini</filename>
file for the Ryu plug-in:</para>
<programlisting language="bash">[DEFAULT]
use_namespace = True
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver</programlisting>
</section>
</section>
<section xml:id="install_neutron-l3">
<title>Configure L3 agent</title>
<para>The OpenStack Networking Service has a widely used
API extension to allow administrators and tenants to
create routers to interconnect L2 networks, and
floating IPs to make ports on private networks
publicly accessible.</para>
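<para>For example, assuming an external network named
<literal>ext-net</literal> and a tenant subnet named
<literal>subnet1</literal> already exist, a tenant might
create a router, attach it to the subnet, set its gateway,
and allocate a floating IP (names are illustrative):</para>
<screen><prompt>$</prompt> <userinput>neutron router-create router1</userinput>
<prompt>$</prompt> <userinput>neutron router-interface-add router1 subnet1</userinput>
<prompt>$</prompt> <userinput>neutron router-gateway-set router1 ext-net</userinput>
<prompt>$</prompt> <userinput>neutron floatingip-create ext-net</userinput></screen>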
<para>Many plug-ins rely on the L3 service agent to
implement the L3 functionality. However, the following
plug-ins already have built-in L3 capabilities:</para>
<itemizedlist>
<listitem>
<para>NSX plug-in</para>
</listitem>
<listitem>
<para>Big Switch/Floodlight plug-in, which
supports both the open source <link
xlink:href="http://www.projectfloodlight.org/floodlight/"
>Floodlight</link> controller and the
proprietary Big Switch controller.</para>
<note>
<para>Only the proprietary Big Switch
controller implements L3 functionality.
When using Floodlight as your OpenFlow
controller, L3 functionality is not
available.</para>
</note>
</listitem>
<listitem>
<para>PLUMgrid plug-in</para>
</listitem>
</itemizedlist>
<warning>
<para>Do not configure or use
<systemitem>neutron-l3-agent</systemitem> if you
use one of these plug-ins.</para>
</warning>
<procedure>
<title>To install the L3 agent for all other
plug-ins</title>
<step>
<para>Install the
<systemitem>neutron-l3-agent</systemitem>
binary on the network node:</para>
<screen><prompt>$</prompt> <userinput>sudo apt-get install neutron-l3-agent</userinput></screen>
</step>
<step>
<para>To uplink the node that runs
<systemitem>neutron-l3-agent</systemitem>
to the external network, create a bridge named
"br-ex" and attach the NIC for the external
network to this bridge.</para>
<para>For example, with Open vSwitch and NIC eth1
connected to the external network, run:</para>
<screen><prompt>$</prompt> <userinput>sudo ovs-vsctl add-br br-ex</userinput></screen>
<screen><prompt>$</prompt> <userinput>sudo ovs-vsctl add-port br-ex eth1</userinput></screen>
<para>Do not manually configure an IP address on
the NIC connected to the external network for
the node running
<systemitem>neutron-l3-agent</systemitem>.
Rather, you must have a range of IP addresses
from the external network that can be used by
OpenStack Networking for routers that uplink
to the external network. This range must be
large enough to have an IP address for each
router in the deployment, as well as each
floating IP.</para>
</step>
<step>
<para>The
<systemitem>neutron-l3-agent</systemitem>
uses the Linux IP stack and iptables to
perform L3 forwarding and NAT. In order to
support multiple routers with potentially
overlapping IP addresses,
<systemitem>neutron-l3-agent</systemitem>
defaults to using Linux network namespaces to
provide isolated forwarding contexts. As a
result, the IP addresses of routers are not
visible simply by running the <command>ip addr
list</command> or
<command>ifconfig</command> command on the
node. Similarly, you cannot directly
<command>ping</command> fixed IPs.</para>
<para>To do either of these things, you must run
the command within a particular network
namespace for the router. The namespace has
the name "qrouter-&lt;UUID of the router&gt;.
These example commands run in the router
namespace with UUID
47af3868-0fa8-4447-85f6-1304de32153b:</para>
<screen><prompt>#</prompt> <userinput>ip netns exec qrouter-47af3868-0fa8-4447-85f6-1304de32153b ip addr list</userinput></screen>
<screen><prompt>#</prompt> <userinput>ip netns exec qrouter-47af3868-0fa8-4447-85f6-1304de32153b ping &lt;fixed-ip&gt;</userinput></screen>
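<para>To list the network namespaces on the node, run
the following command. Router namespaces have names
that start with <literal>qrouter-</literal>:</para>
<screen><prompt>#</prompt> <userinput>ip netns list</userinput></screen>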
</step>
</procedure>
<important>
<para>If you reboot a node that runs the L3 agent, you
must run the
<command>neutron-ovs-cleanup</command> command
before the <systemitem class="service"
>neutron-l3-agent</systemitem> service
starts.</para>
<para>On Red Hat-based systems, the <systemitem
class="service"
>neutron-ovs-cleanup</systemitem> service runs
the <command>neutron-ovs-cleanup</command> command
automatically. However, on Debian-based systems
such as Ubuntu, you must manually run this command
or write your own system script that runs on boot
before the <systemitem class="service"
>neutron-l3-agent</systemitem> service
starts.</para>
</important>
</section>
<section xml:id="install_neutron-metering-agent">
<title>Configure metering agent</title>
<para>Starting with the Havana release, the Neutron metering agent resides
beside <systemitem>neutron-l3-agent</systemitem>.</para>
<procedure>
<title>To install the metering agent and configure the
node</title>
<step>
<para>Install the agent by running:</para>
<screen><prompt>$</prompt> <userinput>sudo apt-get install neutron-plugin-metering-agent</userinput></screen>
</step>
<step>
<para>If you use one of the following plug-ins, you must also configure the metering agent with these lines:</para>
<itemizedlist>
<listitem>
<para>An OVS-based plug-in such as OVS,
NSX, Ryu, NEC,
BigSwitch/Floodlight:</para>
<programlisting language="ini">interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver</programlisting>
</listitem>
<listitem>
<para>A plug-in that uses LinuxBridge:</para>
<programlisting language="ini">interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver</programlisting>
</listitem>
</itemizedlist>
</step>
<step>
<para>To use the reference implementation, you
must set:</para>
<programlisting language="ini">driver = neutron.services.metering.drivers.iptables.iptables_driver.IptablesMeteringDriver</programlisting>
</step>
<step>
<para>Set this parameter in the
<filename>neutron.conf</filename> file on
the host that runs <systemitem class="service">neutron-server</systemitem>:</para>
<programlisting language="ini">service_plugins = neutron.services.metering.metering_plugin.MeteringPlugin</programlisting>
</step>
</procedure>
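<para>Apply the new settings by restarting the <systemitem
class="service">neutron-server</systemitem> service and
the metering agent. For example, on Ubuntu (assuming the
service name matches the package installed above):</para>
<screen><prompt>$</prompt> <userinput>sudo service neutron-server restart</userinput>
<prompt>$</prompt> <userinput>sudo service neutron-plugin-metering-agent restart</userinput></screen>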
</section>
<section xml:id="install_neutron-lbaas-agent">
<title>Configure Load Balancing as a Service (LBaaS)</title>
<para>Configure Load Balancing as a Service (LBaaS) with the
Open vSwitch or Linux Bridge plug-in. The Open vSwitch LBaaS
driver is required when enabling LBaaS for OVS-based
plug-ins, including BigSwitch, Floodlight, NEC, NSX, and
Ryu.</para>
<orderedlist>
<listitem>
<para>Install the agent by running:</para>
<para>
<screen><prompt>$</prompt> <userinput>sudo apt-get install neutron-lbaas-agent</userinput></screen>
</para>
</listitem>
<listitem>
<para>Enable the <productname>HAProxy</productname>
plug-in using the <option>service_provider</option>
parameter in the <filename>/usr/share/neutron/neutron-dist.conf</filename>
file:</para>
<programlisting language="ini">
service_provider = LOADBALANCER:Haproxy:neutron.services.loadbalancer.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default</programlisting>
</listitem>
<listitem>
<para>Enable the load balancer plug-in using <option>service_plugins</option> in
the <filename>/etc/neutron/neutron.conf</filename> file:</para>
<programlisting language="ini">
service_plugins = neutron.services.loadbalancer.plugin.LoadBalancerPlugin</programlisting>
</listitem>
<listitem>
<para>Enable the <productname>HAProxy</productname> load
balancer in the <filename>/etc/neutron/lbaas_agent.ini</filename> file:</para>
<programlisting language="ini">
device_driver = neutron.services.loadbalancer.drivers.haproxy.namespace_driver.HaproxyNSDriver</programlisting>
</listitem>
<listitem>
<para>Select the required driver in
the <filename>/etc/neutron/lbaas_agent.ini</filename> file:</para>
<para>Enable the Open vSwitch LBaaS driver:</para>
<para>
<programlisting language="ini">
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver</programlisting>
Or enable the Linux Bridge LBaaS driver:
</para>
<para>
<programlisting language="ini">
interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver</programlisting>
Apply the new settings by restarting the
<systemitem>neutron-server</systemitem> and
<systemitem>neutron-lbaas-agent</systemitem>
services.</para>
</listitem>
<listitem>
<para>Enable Load Balancing in the <guimenu>Project</guimenu>
section of the Dashboard user interface:</para>
<para>Change the <option>enable_lb</option> option to
<parameter>True</parameter> in the
<filename>/etc/openstack-dashboard/local_settings</filename>
file:</para>
<para>
<programlisting language="python">
OPENSTACK_NEUTRON_NETWORK = {
'enable_lb': True,</programlisting>
</para>
<para>Apply the new settings by restarting the
<systemitem>httpd</systemitem> service. You can
now view the Load Balancer management options in
the dashboard's <guimenu>Project</guimenu> view.</para>
</listitem>
</orderedlist>
</section>
<section xml:id="install_neutron-fwaas-agent">
<title>Configure FWaaS agent</title>
<para>The Firewall-as-a-Service (FWaaS) agent is
co-located with the Neutron L3 agent and does not
require any additional packages apart from those
required for the Neutron L3 agent. You can enable the
FWaaS functionality by setting the configuration, as
follows.</para>
<procedure>
<title>To configure FWaaS service and agent</title>
<step>
<para>Set this parameter in the
<filename>neutron.conf</filename> file on
the host that runs <systemitem class="service"
>neutron-server</systemitem>:</para>
<programlisting language="ini">service_plugins = neutron.services.firewall.fwaas_plugin.FirewallPlugin</programlisting>
</step>
<step>
<para>To use the reference implementation, you
must also add a FWaaS driver configuration to
the <filename>neutron.conf</filename> file on
every node where the Neutron L3 agent is
deployed:</para>
<programlisting language="ini">[fwaas]
driver = neutron.services.firewall.drivers.linux.iptables_fwaas.IptablesFwaasDriver
enabled = True</programlisting>
</step>
</procedure>
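<para>Apply the new settings by restarting the <systemitem
class="service">neutron-server</systemitem> and
<systemitem class="service">neutron-l3-agent</systemitem>
services. For example, on Ubuntu:</para>
<screen><prompt>$</prompt> <userinput>sudo service neutron-server restart</userinput>
<prompt>$</prompt> <userinput>sudo service neutron-l3-agent restart</userinput></screen>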
</section>
</section>
</section>