Merge "[network-guide] Update provider LB scenario"

This commit is contained in:
Jenkins 2015-07-09 05:21:02 +00:00 committed by Gerrit Code Review
commit 7ed465afd0
28 changed files with 184 additions and 172 deletions


@ -1,6 +1,6 @@
================================================
Scenario 4b: Provider networks with Linux bridge
================================================
=============================================
Scenario: Provider networks with Linux bridge
=============================================
This scenario describes a provider networks implementation of the
OpenStack Networking service using the ML2 plug-in with Linux bridge.
@ -8,7 +8,9 @@ OpenStack Networking service using the ML2 plug-in with Linux bridge.
Provider networks generally offer simplicity, performance, and reliability at
the cost of flexibility. Unlike other scenarios, only administrators can
manage provider networks because they require configuration of physical
network infrastructure.
network infrastructure. Also, provider networks lack the concept of fixed
and floating IP addresses because they only handle layer-2 connectivity for
instances.
In many cases, operators who are already familiar with virtual networking
architectures that rely on physical network infrastructure for layer-2,
@ -19,7 +21,7 @@ Networking service. Over time, operators can build on this minimal
deployment to enable more cloud networking features.
Before OpenStack Networking introduced Distributed Virtual Routers (DVR), all
network traffic traversed one or more dedicated network nodes, which limited
network traffic traversed one or more dedicated network nodes which limited
performance and reliability. Physical network infrastructures typically offer
better performance and reliability than general-purpose hosts that handle
various network operations in software.
@ -39,6 +41,9 @@ In comparison to provider networks with Open vSwitch (OVS), this scenario
relies completely on native Linux networking services which makes it the
simplest of all scenarios in this guide.
The example configuration creates a VLAN provider network. However, it also
supports flat (untagged or native) provider networks.
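For reference, a flat provider network could be created with a command similar
to the following sketch (illustrative only; the ``provider`` physical network
name matches the ML2 configuration shown later in this scenario):
.. code-block:: console
$ # Example only; the remaining steps in this scenario use the VLAN network.
$ neutron net-create provider-flat --shared \
--provider:physical_network provider --provider:network_type flat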
Prerequisites
~~~~~~~~~~~~~
@ -48,16 +53,24 @@ Networking service immediately depends on the Identity service, and the Compute
service immediately depends on the Networking service. These dependencies lack
services such as the Image service because the Networking service does not
immediately depend on it. However, the Compute service depends on the Image
service to launch an instance.
service to launch an instance. The example configuration in this scenario
assumes basic configuration knowledge of Networking service components.
For illustration purposes, the management network uses 10.0.0.0/24 and
provider networks use 192.0.2.0/24, 198.51.100.0/24, and 203.0.113.0/24.
Infrastructure
--------------
#. One controller node with two network interfaces: management and
external (typically the Internet).
provider. The provider interface connects to a generic network that the
physical network infrastructure switches/routes to external networks
(typically the Internet).
#. At least two compute nodes with two network interfaces: management
and external (typically the Internet).
and provider. The provider interface connects to a generic network that the
physical network infrastructure switches/routes to external networks
(typically the Internet).
.. image:: figures/scenario-provider-hw.png
:alt: Hardware layout
@ -72,17 +85,13 @@ OpenStack services - controller node
------------------------------------
#. Operational SQL server with ``neutron`` database and appropriate
configuration in the :file:`neutron-server.conf` file.
configuration in the :file:`neutron.conf` file.
#. Operational message queue service with appropriate configuration in
the :file:`neutron-server.conf` file.
the :file:`neutron.conf` file.
#. Operational OpenStack Identity service with appropriate
configuration in the :file:`neutron-server.conf` file.
configuration in the :file:`neutron.conf` file.
#. Operational OpenStack Compute controller/management service with
appropriate configuration to use neutron in the :file:`nova.conf` file.
#. Neutron server service, ML2 plug-in, Linux bridge agent, DHCP agent,
and any dependencies.
@ -90,11 +99,9 @@ OpenStack services - compute nodes
----------------------------------
#. Operational OpenStack Identity service with appropriate
configuration in the :file:`neutron-server.conf` file.
configuration in the :file:`neutron.conf` file.
#. Operational OpenStack Compute controller/management service with
appropriate configuration to use neutron in the :file:`nova.conf` file.
#. ML2 plug-in, Linux bridge agent, and any dependencies.
Architecture
@ -124,7 +131,8 @@ The controller node contains the following network components:
:alt: Controller node components - connectivity
.. note::
The diagram contains two different provider networks.
For illustration purposes, the diagram contains two different provider
networks.
The compute nodes contain the following network components:
@ -139,14 +147,12 @@ The compute nodes contain the following network components:
:alt: Compute node components - connectivity
.. note::
The diagram contains two different provider networks.
For illustration purposes, the diagram contains two different provider
networks.
Packet flow
~~~~~~~~~~~
For all cases, the physical network infrastructure handles routing and
switching for *north-south* and *east-west* network traffic.
.. note::
*North-south* network traffic travels between an instance and
external network, typically the Internet. *East-west* network
@ -155,70 +161,52 @@ switching for *north-south* and *east-west* network traffic.
Case 1: North-south
-------------------
Instance 1 resides on compute node 1 and uses provider network 1.
The physical network infrastructure handles routing and potentially other
services between the provider and external network. In this case, *provider*
and *external* simply differentiate between a network available to instances
and a network only accessible via router, respectively, to illustrate that
the physical network infrastructure handles routing. However, provider
networks support direct connection to *external* networks such as the
Internet.
The instance sends a packet to a host on the external network.
The physical network infrastructure handles routing (and potentially SNAT/DNAT)
between the provider and external network. In this example, external network
1 contains a different IP network than the provider networks to illustrate
that the physical network infrastructure can handle routing. However, provider
networks also support switching to external networks.
* External network 1
* External network
* Network 203.0.113.0/24
* Gateway 203.0.113.1 with MAC address *EG1*
* Provider network 1
* Provider network (VLAN)
* Network 192.0.2.0/24
* Gateway 192.0.2.1 with MAC address *TG1*
* Gateway 192.0.2.1 with MAC address *TG*
* Compute node 1
* Instance 1 192.0.2.11 with MAC address *I1*
* Instance 1 resides on compute node 1 and uses a provider network.
* The instance sends a packet to a host on the external network.
The following steps involve compute node 1.
#. Upon launch, instance 1 gets an IP address from the DHCP server on the
controller node and gets metadata by using a configuration drive. After
initial configuration, only DHCP renewal traffic interacts with the
controller node.
.. note::
The lack of L3 agents in this scenario prevents operation of the
conventional metadata agent. You must use a configuration drive to
provide instance metadata.
#. The instance 1 ``tap`` interface (1) forwards the packet to the tunnel
bridge ``qbr``. The packet contains destination MAC address *TG1*
#. The instance 1 ``tap`` interface (1) forwards the packet to the VLAN
bridge ``qbr``. The packet contains destination MAC address *TG*
because the destination resides on another network.
#. Security group rules (2) on the provider bridge ``qbr`` handle state
tracking for the packet.
#. The provider bridge ``qbr`` forwards the packet to the logical VLAN
interface ``device.sid`` where *device* references the underlying
physical VLAN interface and *sid* contains the provider network
physical provider interface and *sid* contains the provider network
segmentation ID.
#. The logical VLAN interface ``device.sid`` forwards the packet to the
physical network via the physical VLAN interface.
physical network via the physical provider interface.
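To observe this chain on a compute node, you can list the bridges and the
logical VLAN interface that the Linux bridge agent creates (a sketch; the
name ``eth1.101`` assumes a provider interface of ``eth1`` and segmentation
ID 101):
.. code-block:: console
$ # eth1.101 is illustrative: provider interface eth1, segmentation ID 101
$ brctl show
$ ip -d link show eth1.101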
The following steps involve the physical network infrastructure:
#. A switch (3) handles any VLAN tag operations between provider network 1
#. A switch (3) handles any VLAN tag operations between the provider network
and the router (4).
#. A router (4) routes the packet from provider network 1 to the external
#. A router (4) routes the packet from the provider network to the external
network.
#. A switch (3) handles any VLAN tag operations between the router (4) and
the external network.
#. A switch (3) forwards the packet to the external network.
.. note::
@ -230,24 +218,17 @@ The following steps involve the physical network infrastructure:
Case 2: East-west for instances on different networks
-----------------------------------------------------
Instance 1 resides on compute node 1 and uses provider network 1. Instance 2
resides on compute node 2 and uses provider network 2.
Instance 1 sends a packet to instance 2.
The physical network infrastructure handles routing between the provider
networks.
* Provider network 1
* Network: 192.0.2.0/24
* Gateway: 192.0.2.1 with MAC address *TG1*
* Provider network 2
* Network: 198.51.100.0/2
* Network: 198.51.100.0/24
* Gateway: 198.51.100.1 with MAC address *TG2*
* Compute node 1
@ -258,49 +239,44 @@ networks.
* Instance 2: 198.51.100.11 with MAC address *I2*
* Instance 1 resides on compute node 1 and uses provider network 1.
* Instance 2 resides on compute node 2 and uses provider network 2.
* Instance 1 sends a packet to instance 2.
The following steps involve compute node 1:
#. The instance 1 ``tap`` interface forwards the packet to the VLAN
bridge ``qbr``. The packet contains destination MAC address *TG1*
because the destination resides on another network.
#. Security group rules on the provider bridge ``qbr`` handle state tracking
for the packet.
#. The provider bridge ``qbr`` forwards the packet to the logical VLAN
interface ``device.sid`` where *device* references the underlying
physical VLAN interface and *sid* contains the provider network
physical provider interface and *sid* contains the provider network
segmentation ID.
#. The logical VLAN interface ``device.sid`` forwards the packet to the
physical network infrastructure via the physical VLAN interface.
physical network infrastructure via the physical provider interface.
The following steps involve the physical network infrastructure:
#. A switch (3) handles any VLAN tag operations between provider network 1
and the router (4).
#. A router (4) routes the packet from provider network 1 to provider
network 2.
#. A switch (3) handles any VLAN tag operations between the router (4) and
provider network 2.
#. A switch (3) forwards the packet to compute node 2.
The following steps involve compute node 2:
#. The physical VLAN interface forwards the packet to the logical VLAN
#. The physical provider interface forwards the packet to the logical VLAN
interface ``device.sid`` where *device* references the underlying
physical VLAN interface and *sid* contains the provider network
physical provider interface and *sid* contains the provider network
segmentation ID.
#. The logical VLAN interface ``device.sid`` forwards the packet to the
provider bridge ``qbr``.
#. Security group rules (5) on the provider bridge ``qbr`` handle
firewalling and state tracking for the packet.
#. The provider bridge ``qbr`` forwards the packet to the ``tap`` interface (6)
on instance 2.
@ -312,12 +288,10 @@ The following steps involve compute node 2:
Case 3: East-west for instances on the same network
---------------------------------------------------
Instance 1 resides on compute node 1 and uses provider network 1. Instance
2 resides on compute node 2 and uses provider network 1. Instance 1 sends
a packet to instance 2. The physical network infrastructure handles switching
within the provider network.
The physical network infrastructure handles switching within the provider
network.
* Provider network 1
* Provider network
* Network: 192.0.2.0/24
@ -329,22 +303,24 @@ within the provider network.
* Instance 2: 192.0.2.12 with MAC address *I2*
* Instance 1 resides on compute node 1.
* Instance 2 resides on compute node 2.
* Both instances use the same provider network.
* Instance 1 sends a packet to instance 2.
The following steps involve compute node 1:
#. The instance 1 ``tap`` interface (1) forwards the packet to the VLAN
bridge ``qbr``. The packet contains destination MAC address *I2*
because the destination resides on the same network.
#. Security group rules (2) on the provider bridge ``qbr`` handle
state tracking for the packet.
#. The provider bridge ``qbr`` forwards the packet to the logical VLAN
interface ``device.sid`` where *device* references the underlying
physical VLAN interface and *sid* contains the provider network
physical provider interface and *sid* contains the provider network
segmentation ID.
#. The logical VLAN interface ``device.sid`` forwards the packet to the
physical network infrastructure via the physical VLAN interface.
physical network infrastructure via the physical provider interface.
The following steps involve the physical network infrastructure:
@ -352,17 +328,14 @@ The following steps involve the physical network infrastructure:
The following steps involve compute node 2:
#. The physical VLAN interface forwards the packet to the logical VLAN
#. The physical provider interface forwards the packet to the logical VLAN
interface ``device.sid`` where *device* references the underlying
physical VLAN interface and *sid* contains the provider network
physical provider interface and *sid* contains the provider network
segmentation ID.
#. The logical VLAN interface ``device.sid`` forwards the packet to the
provider bridge ``qbr``.
#. Security group rules (4) on the provider bridge ``qbr`` handle
firewalling and state tracking for the packet.
#. The provider bridge ``qbr`` forwards the packet to the instance 2 ``tap``
interface (5).
@ -378,22 +351,34 @@ Example configuration
Use the following example configuration as a template to deploy this
scenario in your environment.
.. note::
The lack of L3 agents in this scenario prevents operation of the
conventional metadata agent. You must use a configuration drive to
provide instance metadata.
Controller node
---------------
#. Configure the kernel to disable reverse path filtering. Edit the
:file:`/etc/sysctl.conf` file::
:file:`/etc/sysctl.conf` file:
.. code-block:: ini
net.ipv4.conf.default.rp_filter=0
net.ipv4.conf.all.rp_filter=0
#. Load the new kernel configuration::
#. Load the new kernel configuration:
.. code-block:: console
$ sysctl -p
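You can optionally confirm the settings by querying them (expected values
shown):
.. code-block:: console
$ sysctl net.ipv4.conf.default.rp_filter net.ipv4.conf.all.rp_filter
net.ipv4.conf.default.rp_filter = 0
net.ipv4.conf.all.rp_filter = 0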
#. Configure base options. Edit the :file:`/etc/neutron/neutron.conf` file::
#. Configure common options. Edit the :file:`/etc/neutron/neutron.conf` file:
.. code-block:: ini
[DEFAULT]
verbose = True
core_plugin = ml2
service_plugins =
@ -403,7 +388,9 @@ Controller node
routing.
#. Configure the ML2 plug-in. Edit the
:file:`/etc/neutron/plugins/ml2/ml2\_conf.ini` file::
:file:`/etc/neutron/plugins/ml2/ml2_conf.ini` file:
.. code-block:: ini
[ml2]
type_drivers = flat,vlan
@ -411,28 +398,37 @@ Controller node
mechanism_drivers = linuxbridge
[ml2_type_flat]
flat_networks = external
flat_networks = provider
[ml2_type_vlan]
network_vlan_ranges = external
network_vlan_ranges = provider
[linux_bridge]
physical_interface_mappings = provider:PROVIDER_INTERFACE
[vxlan]
enable_vxlan = False
[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
enable_security_group = True
enable_ipset = True
[linux_bridge]
physical_interface_mappings = external:EXTERNAL_NETWORK_INTERFACE
Replace ``EXTERNAL_NETWORK_INTERFACE`` with the respective
underlying network interface name. For example, ``eth1``.
Replace ``PROVIDER_INTERFACE`` with the name of the underlying interface
that handles provider networks. For example, ``eth1``.
.. note::
The ``tenant_network_types`` option contains no value because the
architecture does not support project (private) networks.
#. Configure the DHCP agent. Edit the :file:`/etc/neutron/dhcp\_agent.ini`
file::
.. note::
The ``provider`` value in the ``network_vlan_ranges`` option lacks VLAN
ID ranges to support use of arbitrary VLAN IDs.
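For comparison, a deployment that needs to restrict the usable VLAN IDs
could specify a range instead (illustrative values):
.. code-block:: ini
[ml2_type_vlan]
# Illustrative range; this scenario omits the range to allow arbitrary VLAN IDs.
network_vlan_ranges = provider:1001:2000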
#. Configure the DHCP agent. Edit the :file:`/etc/neutron/dhcp_agent.ini`
file:
.. code-block:: ini
[DEFAULT]
verbose = True
@ -442,73 +438,64 @@ Controller node
#. Start the following services:
- Server
- Linux bridge agent
- DHCP agent
* Server
* Linux bridge agent
* DHCP agent
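Service names vary by distribution. On Ubuntu-based packages, for example,
the commands might resemble the following:
.. code-block:: console
# service neutron-server restart
# service neutron-plugin-linuxbridge-agent restart
# service neutron-dhcp-agent restart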
Compute nodes
-------------
#. Configure the kernel to disable reverse path filtering. Edit the
:file:`/etc/sysctl.conf` file::
:file:`/etc/sysctl.conf` file:
.. code-block:: ini
net.ipv4.conf.default.rp_filter=0
net.ipv4.conf.all.rp_filter=0
#. Load the new kernel configuration::
#. Load the new kernel configuration:
.. code-block:: console
$ sysctl -p
#. Configure base options. Edit the :file:`/etc/neutron/neutron.conf` file::
#. Configure base options. Edit the :file:`/etc/neutron/neutron.conf` file:
.. code-block:: ini
[DEFAULT]
core_plugin = ml2
service_plugins =
verbose = True
.. note::
The ``service_plugins`` option contains no value because the
Networking service does not provide layer-3 services such as
routing.
#. Configure the Linux bridge agent. Edit the
:file:`/etc/neutron/plugins/ml2/ml2_conf.ini` file:
#. Configure the ML2 plug-in. Edit the
:file:`/etc/neutron/plugins/ml2/ml2\_conf.ini` file::
.. code-block:: ini
[ml2]
type_drivers = flat,vlan
tenant_network_types =
mechanism_drivers = linuxbridge
[linux_bridge]
physical_interface_mappings = provider:PROVIDER_INTERFACE
[ml2_type_flat]
flat_networks = external
[ml2_type_vlan]
network_vlan_ranges = external
[vxlan]
enable_vxlan = False
[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
enable_security_group = True
enable_ipset = True
[linux_bridge]
physical_interface_mappings = external:EXTERNAL_NETWORK_INTERFACE
Replace ``EXTERNAL_NETWORK_INTERFACE`` with the respective
underlying network interface name. For example, ``eth1``.
.. note::
The ``tenant_network_types`` option contains no value because the
architecture does not support project (private) networks.
Replace ``PROVIDER_INTERFACE`` with the name of the underlying interface
that handles provider networks. For example, ``eth1``.
#. Start the following services:
- Linux bridge agent
* Linux bridge agent
Verify service operation
------------------------
#. Source the administrative project credentials.
#. Verify presence and operation of the agents:
#. Verify presence and operation of the agents::
.. code-block:: console
$ neutron agent-list
+--------------------------------------+--------------------+------------+-------+----------------+---------------------------+
@ -523,17 +510,17 @@ Verify service operation
Create initial networks
-----------------------
This example creates a provider network using VLAN 101 and IP network
203.0.113.0/24. Change the VLAN ID and IP network to values appropriate
for your environment.
This example creates a VLAN provider network. Change the VLAN ID and IP
address range to values suitable for your environment.
#. Source the administrative project credentials.
#. Create a provider network:
#. Create a provider network::
.. code-block:: console
$ neutron net-create provider-101 --shared \
--provider:physical_network external --provider:network_type vlan \
--provider:segmentation_id 101
--provider:physical_network provider --provider:network_type vlan \
--provider:segmentation_id 101
Created a new network:
+---------------------------+--------------------------------------+
| Field | Value |
@ -542,7 +529,7 @@ for your environment.
| id | 572a3fc9-ad1f-4e54-a63a-4bf5047c1a4a |
| name | provider-101 |
| provider:network_type | vlan |
| provider:physical_network | external |
| provider:physical_network | provider |
| provider:segmentation_id | 101 |
| router:external | False |
| shared | True |
@ -554,9 +541,12 @@ for your environment.
.. note::
The ``shared`` option allows any project to use this network.
#. Create a subnet on the provider network::
#. Create a subnet on the provider network:
$ neutron subnet-create provider-101 203.0.113.0/24 --gateway 203.0.113.1
.. code-block:: console
$ neutron subnet-create provider-101 203.0.113.0/24 \
--name provider-101-subnet --gateway 203.0.113.1
Created a new subnet:
+-------------------+--------------------------------------------------+
| Field | Value |
@ -571,15 +561,17 @@ for your environment.
| ip_version | 4 |
| ipv6_address_mode | |
| ipv6_ra_mode | |
| name | |
| name | provider-101-subnet |
| network_id | 572a3fc9-ad1f-4e54-a63a-4bf5047c1a4a |
| tenant_id | e0bddbc9210d409795887175341b7098 |
+-------------------+--------------------------------------------------+
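Optionally, confirm that the network and subnet exist before proceeding
(output omitted):
.. code-block:: console
$ neutron net-list
$ neutron subnet-list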
Verify operation
----------------
Verify network operation
------------------------
#. On the controller node, verify creation of the ``qdhcp`` namespace::
#. On the controller node, verify creation of the ``qdhcp`` namespace:
.. code-block:: console
$ ip netns
qdhcp-8b868082-e312-4110-8627-298109d4401c
@ -587,15 +579,34 @@ Verify operation
.. note::
The ``qdhcp`` namespace might not exist until launching an instance.
#. Source the credentials for a non-privileged project. The following
steps use the ``demo`` project.
#. Source the regular project credentials. The following steps use the
``demo`` project.
#. Create the appropriate security group rules to allow ping and SSH
access to the instance.
access to the instance. For example:
.. code-block:: console
$ nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
+-------------+-----------+---------+-----------+--------------+
| IP Protocol | From Port | To Port | IP Range | Source Group |
+-------------+-----------+---------+-----------+--------------+
| icmp | -1 | -1 | 0.0.0.0/0 | |
+-------------+-----------+---------+-----------+--------------+
$ nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
+-------------+-----------+---------+-----------+--------------+
| IP Protocol | From Port | To Port | IP Range | Source Group |
+-------------+-----------+---------+-----------+--------------+
| tcp | 22 | 22 | 0.0.0.0/0 | |
+-------------+-----------+---------+-----------+--------------+
#. Launch an instance with an interface on the provider network.
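For example, the following sketch assumes a CirrOS image named
``cirros-0.3.4-x86_64``, the ``m1.tiny`` flavor, and ``NET_ID`` containing
the ID of the provider network; the configuration drive replaces the
metadata agent in this scenario:
.. code-block:: console
$ # NET_ID, image, and flavor names below are examples
$ nova boot --flavor m1.tiny --image cirros-0.3.4-x86_64 \
--nic net-id=NET_ID --security-groups default \
--config-drive true provider-instance1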
#. Determine the IP address of the instance. The following step uses
203.0.113.2.
#. On the controller node or any host with access to the provider network,
ping the IP address of the instance:
#. Test connectivity to the instance::
.. code-block:: console
$ ping -c 4 203.0.113.2
PING 203.0.113.2 (203.0.113.2) 56(84) bytes of data.
@ -609,8 +620,9 @@ Verify operation
rtt min/avg/max/mdev = 0.929/1.539/3.183/0.951 ms
#. Obtain access to the instance.
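For example, assuming a CirrOS-based instance with the default ``cirros``
user:
.. code-block:: console
$ # Assumes the CirrOS default user
$ ssh cirros@203.0.113.2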
#. Test connectivity to the Internet:
#. Test connectivity to the Internet from the instance::
.. code-block:: console
$ ping -c 4 openstack.org
PING openstack.org (174.143.194.225) 56(84) bytes of data.
