<?xml version="1.0" encoding="UTF-8"?>
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="neutron-network-node">
<title>Install and configure network node</title>
<para>The network node primarily handles internal and external routing
and <glossterm>DHCP</glossterm> services for virtual networks.</para>
<procedure>
<title>To configure prerequisites</title>
<para>Before you install and configure OpenStack Networking, you
must configure certain kernel networking parameters.</para>
<step>
<para>Edit the <filename>/etc/sysctl.conf</filename> file to
contain the following parameters:</para>
<programlisting>net.ipv4.ip_forward=1
net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0</programlisting>
</step>
<step>
<para>Implement the changes:</para>
<screen><prompt>#</prompt> <userinput>sysctl -p</userinput></screen>
</step>
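<step>
<para>(Optional) As a quick check, you can ask the kernel for the values
it now applies. The following command and output are illustrative; each
parameter should report the value you set in
<filename>/etc/sysctl.conf</filename>:</para>
<screen><prompt>#</prompt> <userinput>sysctl net.ipv4.ip_forward net.ipv4.conf.all.rp_filter net.ipv4.conf.default.rp_filter</userinput>
<computeroutput>net.ipv4.ip_forward = 1
net.ipv4.conf.all.rp_filter = 0
net.ipv4.conf.default.rp_filter = 0</computeroutput></screen>
</step>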
</procedure>
<procedure os="ubuntu;rhel;centos;fedora;sles;opensuse">
<title>To install the Networking components</title>
<step>
<screen os="ubuntu"><prompt>#</prompt> <userinput>apt-get install neutron-plugin-ml2 neutron-plugin-openvswitch-agent \
neutron-l3-agent neutron-dhcp-agent neutron-metadata-agent</userinput></screen>
<screen os="rhel;centos;fedora"><prompt>#</prompt> <userinput>yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-openvswitch</userinput></screen>
<screen os="sles;opensuse"><prompt>#</prompt> <userinput>zypper install --no-recommends openstack-neutron-openvswitch-agent openstack-neutron-l3-agent \
openstack-neutron-dhcp-agent openstack-neutron-metadata-agent ipset</userinput></screen>
<note os="sles;opensuse">
<para>SUSE does not use a separate ML2 plug-in package.</para>
</note>
</step>
</procedure>
<procedure os="debian">
<title>To install and configure the Networking components</title>
<step>
<screen><prompt>#</prompt> <userinput>apt-get install neutron-plugin-openvswitch-agent openvswitch-datapath-dkms \
neutron-l3-agent neutron-dhcp-agent neutron-metadata-agent</userinput></screen>
<note>
<para>Debian does not use a separate ML2 plug-in package.</para>
</note>
</step>
<step>
<para>Respond to prompts for
<link linkend="debconf-dbconfig-common">database management</link>,
<link linkend="debconf-keystone_authtoken">Identity service
credentials</link>,
<link linkend="debconf-api-endpoints">service endpoint
registration</link>, and
<link linkend="debconf-rabbitmq">message queue
credentials</link>.</para>
</step>
<step>
<para>Select the ML2 plug-in:</para>
<informalfigure>
<mediaobject>
<imageobject>
<imagedata scale="50"
fileref="figures/debconf-screenshots/neutron_1_plugin_selection.png"
/>
</imageobject>
</mediaobject>
</informalfigure>
<note>
<para>Selecting the ML2 plug-in also populates the
<option>service_plugins</option> and
<option>allow_overlapping_ips</option> options in the
<filename>/etc/neutron/neutron.conf</filename> file with the
appropriate values.</para>
</note>
</step>
</procedure>
<procedure os="ubuntu;rhel;centos;fedora;sles;opensuse">
<title>To configure the Networking common components</title>
<para>The Networking common component configuration includes the
authentication mechanism, message queue, and plug-in.</para>
<note>
<para>Default configuration files vary by distribution. You might need
to add these sections and options rather than modifying existing
sections and options. Also, an ellipsis (...) in the configuration
snippets indicates potential default configuration options that you
should retain.</para>
</note>
<step>
<para>Edit the <filename>/etc/neutron/neutron.conf</filename> file
and complete the following actions:</para>
<substeps>
<step>
<para>In the <literal>[database]</literal> section, comment out
any <literal>connection</literal> options because network nodes
do not directly access the database.</para>
</step>
<step>
<para>In the <literal>[DEFAULT]</literal> and
<literal>[oslo_messaging_rabbit]</literal> sections, configure
<application>RabbitMQ</application> message queue access:</para>
<programlisting language="ini">[DEFAULT]
...
rpc_backend = rabbit
[oslo_messaging_rabbit]
...
rabbit_host = <replaceable>controller</replaceable>
rabbit_userid = openstack
rabbit_password = <replaceable>RABBIT_PASS</replaceable></programlisting>
<para>Replace <replaceable>RABBIT_PASS</replaceable> with the
password you chose for the <literal>openstack</literal> account
in <application>RabbitMQ</application>.</para>
</step>
<step>
<para>In the <literal>[DEFAULT]</literal> and
<literal>[keystone_authtoken]</literal> sections,
configure Identity service access:</para>
<programlisting language="ini">[DEFAULT]
...
auth_strategy = keystone
[keystone_authtoken]
...
auth_uri = http://<replaceable>controller</replaceable>:5000
auth_url = http://<replaceable>controller</replaceable>:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = neutron
password = <replaceable>NEUTRON_PASS</replaceable></programlisting>
<para>Replace <replaceable>NEUTRON_PASS</replaceable> with the
password you chose for the <literal>neutron</literal> user in the
Identity service.</para>
<note>
<para>Comment out or remove any other options in the
<literal>[keystone_authtoken]</literal> section.</para>
</note>
</step>
<step>
<para>In the <literal>[DEFAULT]</literal> section,
enable the Modular Layer 2 (ML2) plug-in,
router service, and overlapping IP addresses:</para>
<programlisting language="ini">[DEFAULT]
...
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = True</programlisting>
</step>
<step>
<para>(Optional) To assist with troubleshooting,
enable verbose logging in the <literal>[DEFAULT]</literal>
section:</para>
<programlisting language="ini">[DEFAULT]
...
verbose = True</programlisting>
</step>
</substeps>
</step>
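<step>
<para>(Optional) To review the options now in effect, you can strip
comments and blank lines from the file. This is only a convenience
check, and the exact output depends on your distribution's default
configuration file:</para>
<screen><prompt>#</prompt> <userinput>egrep -v '^#|^$' /etc/neutron/neutron.conf</userinput></screen>
</step>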
</procedure>
<procedure>
<title>To configure the Modular Layer 2 (ML2) plug-in</title>
<para>The ML2 plug-in uses the
<glossterm baseform="Open vSwitch">Open vSwitch (OVS)</glossterm>
mechanism (agent) to build the virtual networking framework for
instances.</para>
<step>
<para>Edit the
<filename>/etc/neutron/plugins/ml2/ml2_conf.ini</filename>
file and complete the following actions:</para>
<substeps>
<step os="ubuntu;rhel;centos;fedora;sles;opensuse">
<para>In the <literal>[ml2]</literal> section, enable the
<glossterm baseform="flat network">flat</glossterm>,
<glossterm baseform="VLAN network">VLAN</glossterm>,
<glossterm>generic routing encapsulation (GRE)</glossterm>, and
<glossterm>virtual extensible LAN (VXLAN)</glossterm>
network type drivers, GRE tenant networks, and the OVS
mechanism driver:</para>
<programlisting language="ini">[ml2]
...
type_drivers = flat,vlan,gre,vxlan
tenant_network_types = gre
mechanism_drivers = openvswitch</programlisting>
</step>
<step>
<para>In the <literal>[ml2_type_flat]</literal> section,
configure the external flat provider network:</para>
<programlisting language="ini">[ml2_type_flat]
...
flat_networks = external</programlisting>
</step>
<step os="ubuntu;rhel;centos;fedora;sles;opensuse">
<para>In the <literal>[ml2_type_gre]</literal> section,
configure the tunnel identifier (id) range:</para>
<programlisting language="ini">[ml2_type_gre]
...
tunnel_id_ranges = 1:1000</programlisting>
</step>
<step os="ubuntu;rhel;centos;fedora;sles;opensuse">
<para>In the <literal>[securitygroup]</literal>
section, enable security groups, enable
<glossterm>ipset</glossterm>, and configure
the OVS <glossterm>iptables</glossterm> firewall
driver:</para>
<programlisting language="ini">[securitygroup]
...
enable_security_group = True
enable_ipset = True
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver</programlisting>
</step>
<step>
<para>In the <literal>[ovs]</literal> section, enable tunnels,
configure the local tunnel endpoint, and map the external flat
provider network to the <literal>br-ex</literal> external
network bridge:</para>
<programlisting language="ini">[ovs]
...
local_ip = <replaceable>INSTANCE_TUNNELS_INTERFACE_IP_ADDRESS</replaceable>
bridge_mappings = external:br-ex</programlisting>
<para>Replace
<replaceable>INSTANCE_TUNNELS_INTERFACE_IP_ADDRESS</replaceable>
with the IP address of the instance
tunnels network interface on your network node.</para>
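<para>If you are unsure which address to use, you can list the
addresses assigned to your interfaces and choose the one on the
instance tunnels network. The interface name below is only a
placeholder; substitute the name of your instance tunnels
interface:</para>
<screen><prompt>#</prompt> <userinput>ip addr show <replaceable>TUNNEL_INTERFACE_NAME</replaceable></userinput></screen>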
</step>
<step>
<para>In the <literal>[agent]</literal> section, enable GRE
tunnels:</para>
<programlisting language="ini">[agent]
...
tunnel_types = gre</programlisting>
</step>
</substeps>
</step>
</procedure>
<procedure os="ubuntu;rhel;centos;fedora;sles;opensuse">
<title>To configure the Layer-3 (L3) agent</title>
<para>The <glossterm>Layer-3 (L3) agent</glossterm> provides
routing services for virtual networks.</para>
<step>
<para>Edit the <filename>/etc/neutron/l3_agent.ini</filename>
file and complete the following actions:</para>
<substeps>
<step>
<para>In the <literal>[DEFAULT]</literal> section, configure
the interface driver and external network bridge, and enable
deletion of defunct router namespaces:</para>
<programlisting language="ini">[DEFAULT]
...
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
external_network_bridge =
router_delete_namespaces = True</programlisting>
<note>
<para>The <literal>external_network_bridge</literal> option
intentionally lacks a value to enable multiple external
networks on a single agent.</para>
</note>
</step>
<step>
<para>(Optional) To assist with troubleshooting,
enable verbose logging in the <literal>[DEFAULT]</literal>
section:</para>
<programlisting language="ini">[DEFAULT]
...
verbose = True</programlisting>
</step>
</substeps>
</step>
</procedure>
<procedure>
<title>To configure the DHCP agent</title>
<para>The <glossterm>DHCP agent</glossterm> provides DHCP
services for virtual networks.</para>
<step os="ubuntu;rhel;centos;fedora;sles;opensuse">
<para>Edit the
<filename>/etc/neutron/dhcp_agent.ini</filename> file
and complete the following actions:</para>
<substeps>
<step>
<para>In the <literal>[DEFAULT]</literal> section, configure
the interface and DHCP drivers and enable deletion of defunct
DHCP namespaces:</para>
<programlisting language="ini">[DEFAULT]
...
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
dhcp_delete_namespaces = True</programlisting>
</step>
<step>
<para>(Optional) To assist with troubleshooting,
enable verbose logging in the <literal>[DEFAULT]</literal>
section:</para>
<programlisting language="ini">[DEFAULT]
...
verbose = True</programlisting>
</step>
</substeps>
</step>
<step>
<para>(Optional)</para>
<para>Tunneling protocols such as GRE include additional packet
headers that increase overhead and decrease space available for the
payload or user data. Without knowledge of the virtual network
infrastructure, instances attempt to send packets using the default
Ethernet <glossterm>maximum transmission unit (MTU)</glossterm> of
1500 bytes. <glossterm>Internet protocol (IP)</glossterm> networks
contain the <glossterm>path MTU discovery (PMTUD)</glossterm>
mechanism to detect end-to-end MTU and adjust packet size
accordingly. However, some operating systems and networks block or
otherwise lack support for PMTUD, which causes performance degradation
or connectivity failure.</para>
<para>Ideally, you can prevent these problems by enabling
<glossterm baseform="jumbo frame">jumbo frames</glossterm> on the
physical network that contains your tenant virtual networks.
Jumbo frames support MTUs up to approximately 9000 bytes, which
negates the impact of GRE overhead on virtual networks. However,
many network devices lack support for jumbo frames and OpenStack
administrators often lack control over the network infrastructure.
Given these limitations, you can also prevent MTU problems by
reducing the instance MTU to account for GRE overhead. Determining
the proper MTU value often takes experimentation, but 1454 bytes
works in most environments. You can configure the DHCP server that
assigns IP addresses to your instances to also adjust the MTU.</para>
<note>
<para>Some cloud images ignore the DHCP MTU option, in which case
you should configure it using metadata, a script, or another
suitable method.</para>
</note>
<substeps>
<step>
<para>Edit the <filename>/etc/neutron/dhcp_agent.ini</filename>
file and complete the following action:</para>
<substeps>
<step>
<para>In the <literal>[DEFAULT]</literal> section, enable the
<glossterm>dnsmasq</glossterm> configuration file:</para>
<programlisting language="ini">[DEFAULT]
...
dnsmasq_config_file = /etc/neutron/dnsmasq-neutron.conf</programlisting>
</step>
</substeps>
</step>
<step>
<para>Create and edit the
<filename>/etc/neutron/dnsmasq-neutron.conf</filename> file and
complete the following action:</para>
<substeps>
<step>
<para>Enable the DHCP MTU option (26) and configure it to
1454 bytes:</para>
<programlisting language="ini">dhcp-option-force=26,1454</programlisting>
</step>
</substeps>
</step>
<step>
<para>Kill any existing
<systemitem role="process">dnsmasq</systemitem> processes:</para>
<screen><prompt>#</prompt> <userinput>pkill dnsmasq</userinput></screen>
</step>
</substeps>
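<para>After you launch an instance later in this guide, you can confirm
that it received the reduced MTU. For example, from within an instance
whose image honors the DHCP MTU option (assuming its interface is named
<literal>eth0</literal>), the interface should report an MTU of 1454
bytes:</para>
<screen><prompt>$</prompt> <userinput>ip link show eth0</userinput></screen>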
</step>
</procedure>
<procedure>
<title>To configure the metadata agent</title>
<para>The <glossterm baseform="Metadata agent">metadata agent</glossterm>
provides configuration information such as credentials to
instances.</para>
<step>
<para>Edit the <filename>/etc/neutron/metadata_agent.ini</filename>
file and complete the following actions:</para>
<substeps>
<step>
<para>In the <literal>[DEFAULT]</literal> section, configure
access parameters:</para>
<programlisting language="ini">[DEFAULT]
...
auth_uri = http://<replaceable>controller</replaceable>:5000
auth_url = http://<replaceable>controller</replaceable>:35357
auth_region = RegionOne
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = neutron
password = <replaceable>NEUTRON_PASS</replaceable></programlisting>
<para>Replace <replaceable>NEUTRON_PASS</replaceable> with the
password you chose for the <literal>neutron</literal> user in
the Identity service.</para>
</step>
<step>
<para>In the <literal>[DEFAULT]</literal> section, configure the
metadata host:</para>
<programlisting language="ini">[DEFAULT]
...
nova_metadata_ip = <replaceable>controller</replaceable></programlisting>
</step>
<step>
<para>In the <literal>[DEFAULT]</literal> section, configure the
metadata proxy shared secret:</para>
<programlisting language="ini">[DEFAULT]
...
metadata_proxy_shared_secret = <replaceable>METADATA_SECRET</replaceable></programlisting>
<para>Replace <replaceable>METADATA_SECRET</replaceable> with a
suitable secret for the metadata proxy.</para>
</step>
<step>
<para>(Optional) To assist with troubleshooting,
enable verbose logging in the <literal>[DEFAULT]</literal>
section:</para>
<programlisting language="ini">[DEFAULT]
...
verbose = True</programlisting>
</step>
</substeps>
</step>
<step>
<para>On the <emphasis>controller</emphasis> node, edit the
<filename>/etc/nova/nova.conf</filename> file and complete the
following action:</para>
<substeps>
<step>
<para>In the <literal>[neutron]</literal> section, enable the
metadata proxy and configure the secret:</para>
<programlisting language="ini">[neutron]
...
service_metadata_proxy = True
metadata_proxy_shared_secret = <replaceable>METADATA_SECRET</replaceable></programlisting>
<para>Replace <replaceable>METADATA_SECRET</replaceable> with
the secret you chose for the metadata proxy.</para>
</step>
</substeps>
</step>
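<step>
<para>(Optional) Because the metadata agent and the Compute service
must share the same secret, you might want to confirm that both files
contain an identical value before restarting services. Run the first
command on the network node and the second on the controller node:</para>
<screen><prompt>#</prompt> <userinput>grep metadata_proxy_shared_secret /etc/neutron/metadata_agent.ini</userinput>
<prompt>#</prompt> <userinput>grep metadata_proxy_shared_secret /etc/nova/nova.conf</userinput></screen>
</step>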
<step>
<para>On the <emphasis>controller</emphasis> node, restart the
Compute <glossterm>API</glossterm> service:</para>
<screen os="rhel;centos;fedora;sles;opensuse"><prompt>#</prompt> <userinput>systemctl restart openstack-nova-api.service</userinput></screen>
<screen os="ubuntu;debian"><prompt>#</prompt> <userinput>service nova-api restart</userinput></screen>
</step>
</procedure>
<procedure>
<title>To configure the Open vSwitch (OVS) service</title>
<para>The OVS service provides the underlying virtual networking
framework for instances. The integration bridge
<literal>br-int</literal> handles internal instance network
traffic within OVS. The external bridge <literal>br-ex</literal>
handles external instance network traffic within OVS. The
external bridge requires a port on the physical external network
interface to provide instances with external network access. In
essence, this port connects the virtual and physical external
networks in your environment.</para>
<step os="rhel;centos;fedora;sles;opensuse">
<para>Start the OVS service and configure it to start when the
system boots:</para>
<screen><prompt>#</prompt> <userinput>systemctl enable openvswitch.service</userinput>
<prompt>#</prompt> <userinput>systemctl start openvswitch.service</userinput></screen>
</step>
<step os="debian;ubuntu">
<para>Restart the OVS service:</para>
<screen><prompt>#</prompt> <userinput>service openvswitch-switch restart</userinput></screen>
</step>
<step>
<para>Add the external bridge:</para>
<screen><prompt>#</prompt> <userinput>ovs-vsctl add-br br-ex</userinput></screen>
</step>
<step>
<para>Add a port to the external bridge that connects to the
physical external network interface:</para>
<para>Replace <replaceable>INTERFACE_NAME</replaceable> with the
actual interface name. For example, <emphasis>eth2</emphasis>
or <emphasis>ens256</emphasis>.</para>
<screen><prompt>#</prompt> <userinput>ovs-vsctl add-port br-ex <replaceable>INTERFACE_NAME</replaceable></userinput></screen>
<note>
<para>Depending on your network interface driver, you may need
to disable <glossterm>generic receive offload
(GRO)</glossterm> to achieve suitable throughput between
your instances and the external network.</para>
<para>To temporarily disable GRO on the external network
interface while testing your environment:</para>
<screen><prompt>#</prompt> <userinput>ethtool -K <replaceable>INTERFACE_NAME</replaceable> gro off</userinput></screen>
</note>
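<para>To confirm that the port was added, you can list the ports
attached to the external bridge; the output should include the
interface name you added:</para>
<screen><prompt>#</prompt> <userinput>ovs-vsctl list-ports br-ex</userinput></screen>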
</step>
</procedure>
<procedure>
<title>To finalize the installation</title>
<step os="rhel;centos;fedora">
<para>The Networking service initialization scripts expect a
symbolic link <filename>/etc/neutron/plugin.ini</filename>
pointing to the ML2 plug-in configuration file,
<filename>/etc/neutron/plugins/ml2/ml2_conf.ini</filename>.
If this symbolic link does not exist, create it using the
following command:</para>
<screen><prompt>#</prompt> <userinput>ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini</userinput></screen>
<para>Due to a packaging bug, the Open vSwitch agent initialization
script explicitly looks for the Open vSwitch plug-in configuration
file rather than a symbolic link
<filename>/etc/neutron/plugin.ini</filename> pointing to the ML2
plug-in configuration file. Run the following commands to resolve this
issue:</para>
<screen><prompt>#</prompt> <userinput>cp /usr/lib/systemd/system/neutron-openvswitch-agent.service \
/usr/lib/systemd/system/neutron-openvswitch-agent.service.orig</userinput>
<prompt>#</prompt> <userinput>sed -i 's,plugins/openvswitch/ovs_neutron_plugin.ini,plugin.ini,g' \
/usr/lib/systemd/system/neutron-openvswitch-agent.service</userinput></screen>
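<para>Because this change modifies a systemd unit file, reload the
systemd configuration so that the change takes effect before you start
the agent:</para>
<screen><prompt>#</prompt> <userinput>systemctl daemon-reload</userinput></screen>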
</step>
<step os="sles;opensuse">
<para>The Networking service initialization scripts expect the
variable <literal>NEUTRON_PLUGIN_CONF</literal> in the
<filename>/etc/sysconfig/neutron</filename> file to
reference the ML2 plug-in configuration file. Edit the
<filename>/etc/sysconfig/neutron</filename> file and add the
following:</para>
<programlisting>NEUTRON_PLUGIN_CONF="/etc/neutron/plugins/ml2/ml2_conf.ini"</programlisting>
</step>
<step os="rhel;centos;fedora">
<para>Start the Networking services and configure them to start
when the system boots:</para>
<screen><prompt>#</prompt> <userinput>systemctl enable neutron-openvswitch-agent.service neutron-l3-agent.service \
neutron-dhcp-agent.service neutron-metadata-agent.service \
neutron-ovs-cleanup.service</userinput>
<prompt>#</prompt> <userinput>systemctl start neutron-openvswitch-agent.service neutron-l3-agent.service \
neutron-dhcp-agent.service neutron-metadata-agent.service</userinput></screen>
<note>
<para>Do not explicitly start the
<systemitem class="service">neutron-ovs-cleanup</systemitem>
service.</para>
</note>
</step>
<step os="sles;opensuse">
<para>Start the Networking services and configure them to start
when the system boots:</para>
<screen><prompt>#</prompt> <userinput>systemctl enable openstack-neutron-openvswitch-agent.service openstack-neutron-l3-agent.service \
openstack-neutron-dhcp-agent.service openstack-neutron-metadata-agent.service \
openstack-neutron-ovs-cleanup.service</userinput>
<prompt>#</prompt> <userinput>systemctl start openstack-neutron-openvswitch-agent.service openstack-neutron-l3-agent.service \
openstack-neutron-dhcp-agent.service openstack-neutron-metadata-agent.service</userinput></screen>
<note>
<para>Do not explicitly start the
<systemitem class="service">openstack-neutron-ovs-cleanup</systemitem>
service.</para>
</note>
</step>
<step os="ubuntu;debian">
<para>Restart the Networking services:</para>
<screen><prompt>#</prompt> <userinput>service neutron-plugin-openvswitch-agent restart</userinput>
<prompt>#</prompt> <userinput>service neutron-l3-agent restart</userinput>
<prompt>#</prompt> <userinput>service neutron-dhcp-agent restart</userinput>
<prompt>#</prompt> <userinput>service neutron-metadata-agent restart</userinput></screen>
</step>
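<step>
<para>(Optional) If any agent fails to start, its log file under
<filename>/var/log/neutron/</filename> usually indicates the cause.
A quick way to scan all of the agent logs for errors is:</para>
<screen><prompt>#</prompt> <userinput>grep ERROR /var/log/neutron/*.log</userinput></screen>
</step>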
</procedure>
<procedure>
<title>Verify operation</title>
<note>
<para>Perform these commands on the controller node.</para>
</note>
<step>
<para>Source the <literal>admin</literal> credentials to gain access to
admin-only CLI commands:</para>
<screen><prompt>$</prompt> <userinput>source admin-openrc.sh</userinput></screen>
</step>
<step>
<para>List agents to verify successful launch of the
neutron agents:</para>
<screen><prompt>$</prompt> <userinput>neutron agent-list</userinput>
<computeroutput>+--------------------------------------+--------------------+---------+-------+----------------+---------------------------+
| id | agent_type | host | alive | admin_state_up | binary |
+--------------------------------------+--------------------+---------+-------+----------------+---------------------------+
| 30275801-e17a-41e4-8f53-9db63544f689 | Metadata agent | network | :-) | True | neutron-metadata-agent |
| 4bd8c50e-7bad-4f3b-955d-67658a491a15 | Open vSwitch agent | network | :-) | True | neutron-openvswitch-agent |
| 756e5bba-b70f-4715-b80e-e37f59803d20 | L3 agent | network | :-) | True | neutron-l3-agent |
| 9c45473c-6d6d-4f94-8df1-ebd0b6838d5f | DHCP agent | network | :-) | True | neutron-dhcp-agent |
+--------------------------------------+--------------------+---------+-------+----------------+---------------------------+</computeroutput></screen>
</step>
</procedure>
</section>