[networking] Reorganize deployment examples
Major reorganization and rewriting of deployment examples to use
"building blocks" that enable the audience to begin with a simple
architecture and add complexity as necessary. Also reduces duplication
of content.

Configuration and commands reference Mitaka to ease backporting to
Mitaka. After merge, I will submit another patch to update the
configuration and commands for Newton.

Macvtap content needs a significant rewrite to integrate better with
the deployment examples. Will address in Ocata.

Change-Id: I2ae95d726aa053a0f30507a3e68907ac56296e6b
backport: mitaka
@ -64,7 +64,7 @@ The example configuration involves the following components:
The example configuration assumes sufficient knowledge about the
Networking service, routing, and BGP. For basic deployment of the
Networking service, consult one of the
:ref:`deploy`. For more information on BGP, see
`RFC 4271 <https://tools.ietf.org/html/rfc4271>`_.

Controller node
@ -1,20 +1,20 @@
.. _config-dvr-snat-ha-ovs:

=====================================
Distributed Virtual Routing with VRRP
=====================================

:ref:`deploy-ovs-ha-dvr` supports augmentation
using Virtual Router Redundancy Protocol (VRRP). Using this configuration,
virtual routers support both the ``--distributed`` and ``--ha`` options.
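
For example, a privileged user could create a router that is both
distributed and highly available. This is a minimal sketch; the router
name ``router1`` is illustrative:

.. code-block:: console

   $ neutron router-create router1 --distributed True --ha True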

Similar to legacy HA routers, DVR/SNAT HA routers provide quick failover of
the SNAT service to a backup DVR/SNAT router on an l3-agent running on a
different node.

SNAT high availability is implemented in a manner similar to the
:ref:`deploy-lb-ha-vrrp` and :ref:`deploy-ovs-ha-vrrp` examples where
``keepalived`` uses VRRP to provide quick failover of SNAT services.

During normal operation, the master router periodically transmits *heartbeat*
packets over a hidden project network that connects all HA routers for a
@ -54,8 +54,6 @@ Controller node configuration
max_l3_agents_per_router = 3
min_l3_agents_per_router = 2

When the ``router_distributed = True`` flag is configured, routers created
by all users are distributed. Without it, only privileged users can create
distributed routers by using :option:`--distributed True`.
doc/networking-guide/source/config-macvtap.rst (new file, 178 lines)
@ -0,0 +1,178 @@
.. _config-macvtap:

========================
Macvtap mechanism driver
========================

The Macvtap mechanism driver for the ML2 plug-in generally increases
network performance of instances.

Consider the following attributes of this mechanism driver to determine
practicality in your environment:

* Supports only instance ports. Ports for DHCP and layer-3 (routing)
  services must use another mechanism driver such as Linux bridge or
  Open vSwitch (OVS).

* Supports only untagged (flat) and tagged (VLAN) networks.

* Lacks support for security groups including basic (sanity) and
  anti-spoofing rules.

* Lacks support for layer-3 high-availability mechanisms such as
  Virtual Router Redundancy Protocol (VRRP) and Distributed Virtual
  Routing (DVR).

* Only supports attaching compute resources via Macvtap. Attaching other
  resources such as DHCP and routers is not supported. Therefore, run
  either OVS or Linux bridge in VLAN or flat mode on the controller node.

* Instance migration requires the same values for the
  ``physical_interface_mappings`` configuration option on each compute node.
  For more information, see
  `<https://bugs.launchpad.net/neutron/+bug/1550400>`_.

Prerequisites
~~~~~~~~~~~~~

You can add this mechanism driver to an existing environment using either
the Linux bridge or OVS mechanism drivers with only provider networks or
provider and self-service networks. You can change the configuration of
existing compute nodes or add compute nodes with the Macvtap mechanism
driver. The example configuration assumes addition of compute nodes with
the Macvtap mechanism driver to the :ref:`deploy-lb-selfservice` or
:ref:`deploy-ovs-selfservice` deployment examples.

Add one or more compute nodes with the following components:

* Three network interfaces: management, provider, and overlay.
* OpenStack Networking Macvtap layer-2 agent and any dependencies.

.. note::

   To support integration with the deployment examples, this content
   configures the Macvtap mechanism driver to use the overlay network
   for untagged (flat) or tagged (VLAN) networks in addition to overlay
   networks such as VXLAN. Your physical network infrastructure
   must support VLAN (802.1q) tagging on the overlay network.

Architecture
~~~~~~~~~~~~

The Macvtap mechanism driver only applies to compute nodes. Otherwise,
the environment resembles the prerequisite deployment example.

.. image:: figures/config-macvtap-compute1.png
   :alt: Macvtap mechanism driver - compute node components

.. image:: figures/config-macvtap-compute2.png
   :alt: Macvtap mechanism driver - compute node connectivity

Example configuration
~~~~~~~~~~~~~~~~~~~~~

Use the following example configuration as a template to add support for
the Macvtap mechanism driver to an existing operational environment.

Controller node
---------------

#. In the ``ml2_conf.ini`` file:

   * Add ``macvtap`` to mechanism drivers.

     .. code-block:: ini

        [ml2]
        mechanism_drivers = macvtap

   * Configure network mappings.

     .. code-block:: ini

        [ml2_type_flat]
        flat_networks = provider,macvtap

        [ml2_type_vlan]
        network_vlan_ranges = provider,macvtap:VLAN_ID_START:VLAN_ID_END

     .. note::

        Use of ``macvtap`` is arbitrary. Only the self-service deployment
        examples require VLAN ID ranges. Replace ``VLAN_ID_START`` and
        ``VLAN_ID_END`` with appropriate numerical values.
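
        For example, ``network_vlan_ranges = provider,macvtap:2000:2099``
        would permit VLAN IDs 2000 through 2099 (illustrative values).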

#. Restart the following services:

   * Server

Network nodes
-------------

No changes.

Compute nodes
-------------

#. Install the Networking service Macvtap layer-2 agent.

#. In the ``neutron.conf`` file, configure common options:

   .. include:: shared/deploy-config-neutron-common.txt

#. In the ``macvtap_agent.ini`` file, configure the layer-2 agent.

   .. code-block:: ini

      [macvtap]
      physical_interface_mappings = macvtap:MACVTAP_INTERFACE

      [securitygroup]
      firewall_driver = noop

   Replace ``MACVTAP_INTERFACE`` with the name of the underlying
   interface that handles Macvtap mechanism driver interfaces.
   If using a prerequisite deployment example, replace
   ``MACVTAP_INTERFACE`` with the name of the underlying interface
   that handles overlay networks. For example, ``eth1``.

#. Start the following services:

   * Macvtap agent

Verify service operation
------------------------

#. Source the administrative project credentials.
#. Verify presence and operation of the agents:

   .. code-block:: console

      $ neutron agent-list
      +--------------------------------------+---------------+----------+-------------------+-------+----------------+---------------------------+
      | id                                   | agent_type    | host     | availability_zone | alive | admin_state_up | binary                    |
      +--------------------------------------+---------------+----------+-------------------+-------+----------------+---------------------------+
      | 7af923a4-8be6-11e6-afc3-3762f3c3cf6e | Macvtap agent | compute1 |                   | :-)   | True           | neutron-macvtap-agent     |
      | 80af6934-8be6-11e6-a046-7b842f93bb23 | Macvtap agent | compute2 |                   | :-)   | True           | neutron-macvtap-agent     |
      +--------------------------------------+---------------+----------+-------------------+-------+----------------+---------------------------+

Create initial networks
-----------------------

This mechanism driver simply changes the virtual network interface driver
for instances. Thus, you can reference the ``Create initial networks``
content for the prerequisite deployment example.

Verify network operation
------------------------

This mechanism driver simply changes the virtual network interface driver
for instances. Thus, you can reference the ``Verify network operation``
content for the prerequisite deployment example.

Network traffic flow
~~~~~~~~~~~~~~~~~~~~

This mechanism driver simply removes the Linux bridge handling security
groups on the compute nodes. Thus, you can reference the network traffic
flow scenarios for the prerequisite deployment example.
@ -19,8 +19,9 @@ Configuration

   config-ipam
   config-ipv6
   config-lbaas
   config-macvtap
   config-mtu
   config-ovs-dpdk
   config-ovsfwdriver
   config-qos
   config-rbac
doc/networking-guide/source/deploy-lb-ha-vrrp.rst (new file, 173 lines)
@ -0,0 +1,173 @@
.. _deploy-lb-ha-vrrp:

==========================================
Linux bridge: High availability using VRRP
==========================================

.. include:: shared/deploy-ha-vrrp.txt

.. warning::

   This high-availability mechanism is not compatible with the layer-2
   population mechanism. You must disable layer-2 population in the
   ``linuxbridge_agent.ini`` file and restart the Linux bridge agent
   on all existing network and compute nodes prior to deploying the example
   configuration.
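
For example, a minimal sketch of disabling layer-2 population, assuming
VXLAN is enabled in the ``[vxlan]`` section of ``linuxbridge_agent.ini``:

.. code-block:: ini

   [vxlan]
   l2_population = False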

Prerequisites
~~~~~~~~~~~~~

Add one network node with the following components:

* Three network interfaces: management, provider, and overlay.
* OpenStack Networking layer-2 agent, layer-3 agent, and any
  dependencies.

.. note::

   You can keep the DHCP and metadata agents on each compute node or
   move them to the network nodes.

Architecture
~~~~~~~~~~~~

.. image:: figures/deploy-lb-ha-vrrp-overview.png
   :alt: High-availability using Linux bridge with VRRP - overview

The following figure shows components and connectivity for one self-service
network and one untagged (flat) network. The master router resides on network
node 1. In this particular case, the instance resides on the same compute
node as the DHCP agent for the network. If the DHCP agent resides on another
compute node, the latter only contains a DHCP namespace and Linux bridge
with a port on the overlay physical network interface.

.. image:: figures/deploy-lb-ha-vrrp-compconn1.png
   :alt: High-availability using Linux bridge with VRRP - components and connectivity - one network

Example configuration
~~~~~~~~~~~~~~~~~~~~~

Use the following example configuration as a template to add support for
high-availability using VRRP to an existing operational environment that
supports self-service networks.

Controller node
---------------

#. In the ``neutron.conf`` file:

   * Enable VRRP.

     .. code-block:: ini

        [DEFAULT]
        l3_ha = True
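
   * Optionally, constrain the number of layer-3 agents that can host each
     HA router. This is a sketch using the values shown in the DVR/VRRP
     section of this guide:

     .. code-block:: ini

        [DEFAULT]
        max_l3_agents_per_router = 3
        min_l3_agents_per_router = 2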

#. Restart the following services:

   * Server

Network node 1
--------------

No changes.

Network node 2
--------------

#. Install the Networking service Linux bridge layer-2 agent and layer-3
   agent.

#. In the ``neutron.conf`` file, configure common options:

   .. include:: shared/deploy-config-neutron-common.txt

#. In the ``linuxbridge_agent.ini`` file, configure the layer-2 agent.

   .. code-block:: ini

      [linux_bridge]
      physical_interface_mappings = provider:PROVIDER_INTERFACE

      [vxlan]
      enable_vxlan = True
      local_ip = OVERLAY_INTERFACE_IP_ADDRESS

      [securitygroup]
      firewall_driver = iptables

   Replace ``PROVIDER_INTERFACE`` with the name of the underlying interface
   that handles provider networks. For example, ``eth1``.

   Replace ``OVERLAY_INTERFACE_IP_ADDRESS`` with the IP address of the
   interface that handles VXLAN overlays for self-service networks.

#. In the ``l3_agent.ini`` file, configure the layer-3 agent.

   .. code-block:: ini

      [DEFAULT]
      interface_driver = linuxbridge
      external_network_bridge =

   .. note::

      The ``external_network_bridge`` option intentionally contains
      no value.

#. Start the following services:

   * Linux bridge agent
   * Layer-3 agent

Compute nodes
-------------

No changes.

Verify service operation
------------------------

#. Source the administrative project credentials.
#. Verify presence and operation of the agents.

   .. code-block:: console

      $ neutron agent-list
      +--------------------------------------+--------------------+----------+-------------------+-------+----------------+---------------------------+
      | id                                   | agent_type         | host     | availability_zone | alive | admin_state_up | binary                    |
      +--------------------------------------+--------------------+----------+-------------------+-------+----------------+---------------------------+
      | 09de6af6-c5f1-4548-8b09-18801f068c57 | Linux bridge agent | compute2 |                   | :-)   | True           | neutron-linuxbridge-agent |
      | 188945d1-9e70-4803-a276-df924e0788a4 | Linux bridge agent | compute1 |                   | :-)   | True           | neutron-linuxbridge-agent |
      | e76c440d-d5f6-4316-a674-d689630b629e | DHCP agent         | compute1 | nova              | :-)   | True           | neutron-dhcp-agent        |
      | e67367de-6657-11e6-86a4-931cd04404bb | DHCP agent         | compute2 | nova              | :-)   | True           | neutron-dhcp-agent        |
      | e8174cae-6657-11e6-89f0-534ac6d0cb5c | Metadata agent     | compute1 |                   | :-)   | True           | neutron-metadata-agent    |
      | ece49ec6-6657-11e6-bafb-c7560f19197d | Metadata agent     | compute2 |                   | :-)   | True           | neutron-metadata-agent    |
      | 598f6357-4331-4da5-a420-0f5be000bec9 | L3 agent           | network1 | nova              | :-)   | True           | neutron-l3-agent          |
      | f4734e0f-bcd5-4922-a19d-e31d56b0a7ae | Linux bridge agent | network1 |                   | :-)   | True           | neutron-linuxbridge-agent |
      | 670e5805-340b-4182-9825-fa8319c99f23 | Linux bridge agent | network2 |                   | :-)   | True           | neutron-linuxbridge-agent |
      | 96224e89-7c15-42e9-89c4-8caac7abdd54 | L3 agent           | network2 | nova              | :-)   | True           | neutron-l3-agent          |
      +--------------------------------------+--------------------+----------+-------------------+-------+----------------+---------------------------+

Create initial networks
-----------------------

.. include:: shared/deploy-ha-vrrp-initialnetworks.txt

Verify network operation
------------------------

.. include:: shared/deploy-ha-vrrp-verifynetworkoperation.txt

Verify failover operation
-------------------------

.. include:: shared/deploy-ha-vrrp-verifyfailoveroperation.txt
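
For example, you can check which layer-3 agent currently hosts the master
instance of an HA router (a sketch; ``router1`` is an illustrative router
name):

.. code-block:: console

   $ neutron l3-agent-list-hosting-router router1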

Network traffic flow
~~~~~~~~~~~~~~~~~~~~

This high-availability mechanism simply augments :ref:`deploy-lb-selfservice`
with failover of layer-3 services to another router if the master router
fails. Thus, you can reference :ref:`Self-service network traffic flow
<deploy-lb-selfservice-networktrafficflow>` for normal operation.
doc/networking-guide/source/deploy-lb-provider.rst (new file, 363 lines)
@ -0,0 +1,363 @@
.. _deploy-lb-provider:

===============================
Linux bridge: Provider networks
===============================

The provider networks architecture example provides layer-2 connectivity
between instances and the physical network infrastructure using VLAN
(802.1q) tagging. It supports one untagged (flat) network and up to
4095 tagged (VLAN) networks. The actual quantity of VLAN networks depends
on the physical network infrastructure. For more information on provider
networks, see :ref:`intro-os-networking-provider`.

Prerequisites
~~~~~~~~~~~~~

One controller node with the following components:

* Two network interfaces: management and provider.
* OpenStack Networking server service and ML2 plug-in.

Two compute nodes with the following components:

* Two network interfaces: management and provider.
* OpenStack Networking Linux bridge layer-2 agent, DHCP agent, metadata agent,
  and any dependencies.

.. note::

   Larger deployments typically deploy the DHCP and metadata agents on a
   subset of compute nodes to increase performance and redundancy. However,
   too many agents can overwhelm the message bus. Also, to further simplify
   any deployment, you can omit the metadata agent and use a configuration
   drive to provide metadata to instances.
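
   For example, enabling a configuration drive is a matter of one
   ``nova.conf`` option on the compute nodes (a sketch; consult the Compute
   service documentation for details):

   .. code-block:: ini

      [DEFAULT]
      force_config_drive = True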

Architecture
~~~~~~~~~~~~

.. image:: figures/deploy-lb-provider-overview.png
   :alt: Provider networks using Linux bridge - overview

The following figure shows components and connectivity for one untagged
(flat) network. In this particular case, the instance resides on the
same compute node as the DHCP agent for the network. If the DHCP agent
resides on another compute node, the latter only contains a DHCP namespace
and Linux bridge with a port on the provider physical network interface.

.. image:: figures/deploy-lb-provider-compconn1.png
   :alt: Provider networks using Linux bridge - components and connectivity - one network

The following figure describes virtual connectivity among components for
two tagged (VLAN) networks. Essentially, each network uses a separate
bridge that contains a port on the VLAN sub-interface on the provider
physical network interface. Similar to the single untagged network case,
the DHCP agent may reside on a different compute node.

.. image:: figures/deploy-lb-provider-compconn2.png
   :alt: Provider networks using Linux bridge - components and connectivity - multiple networks

.. note::

   These figures omit the controller node because it does not handle instance
   network traffic.

Example configuration
~~~~~~~~~~~~~~~~~~~~~

Use the following example configuration as a template to deploy provider
networks in your environment.

Controller node
---------------

#. Install the Networking service components that provide the
   ``neutron-server`` service and ML2 plug-in.

#. In the ``neutron.conf`` file:

   * Configure common options:

     .. include:: shared/deploy-config-neutron-common.txt

   * Disable service plug-ins because provider networks do not require
     any. However, this breaks portions of the dashboard that manage
     the Networking service. See the
     `Installation Guide <http://docs.openstack.org>`__ for more
     information.

     .. code-block:: ini

        [DEFAULT]
        service_plugins =

   * Enable two DHCP agents per network so both compute nodes can
     provide DHCP service for provider networks.

     .. code-block:: ini

        [DEFAULT]
        dhcp_agents_per_network = 2

   * If necessary, :ref:`configure MTU <config-mtu>`.

#. In the ``ml2_conf.ini`` file:

   * Configure drivers and network types:

     .. code-block:: ini

        [ml2]
        type_drivers = flat,vlan
        tenant_network_types =
        mechanism_drivers = linuxbridge
        extension_drivers = port_security

   * Configure network mappings:

     .. code-block:: ini

        [ml2_type_flat]
        flat_networks = provider

        [ml2_type_vlan]
        network_vlan_ranges = provider

     .. note::

        The ``tenant_network_types`` option contains no value because the
        architecture does not support self-service networks.

     .. note::

        The ``provider`` value in the ``network_vlan_ranges`` option lacks VLAN
        ID ranges to support use of arbitrary VLAN IDs.
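
        For example, with this setting an administrator can later create a
        VLAN provider network using any ID that the physical infrastructure
        supports. A sketch, where the name ``provider-101`` and VLAN ID
        ``101`` are illustrative:

        .. code-block:: console

           $ neutron net-create provider-101 --shared \
             --provider:physical_network provider \
             --provider:network_type vlan \
             --provider:segmentation_id 101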

   * Configure the security group driver:

     .. code-block:: ini

        [securitygroup]
        firewall_driver = iptables

#. Populate the database.

   .. code-block:: console

      # su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
        --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron

#. Start the following services:

   * Server

Compute nodes
-------------

#. Install the Networking service Linux bridge layer-2 agent.

#. In the ``neutron.conf`` file, configure common options:

   .. include:: shared/deploy-config-neutron-common.txt

#. In the ``linuxbridge_agent.ini`` file, configure the Linux bridge agent:

   .. code-block:: ini

      [linux_bridge]
      physical_interface_mappings = provider:PROVIDER_INTERFACE

      [vxlan]
      enable_vxlan = False

      [securitygroup]
      firewall_driver = iptables

   Replace ``PROVIDER_INTERFACE`` with the name of the underlying interface
   that handles provider networks. For example, ``eth1``.

#. In the ``dhcp_agent.ini`` file, configure the DHCP agent:

   .. code-block:: ini

      [DEFAULT]
      interface_driver = linuxbridge
      enable_isolated_metadata = True

#. In the ``metadata_agent.ini`` file, configure the metadata agent:

   .. code-block:: ini

      [DEFAULT]
      nova_metadata_ip = controller
      metadata_proxy_shared_secret = METADATA_SECRET

   The value of ``METADATA_SECRET`` must match the value of the same option
   in the ``[neutron]`` section of the ``nova.conf`` file.
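
   For example, the corresponding ``nova.conf`` configuration might resemble
   the following sketch (option names from the Compute service; the value is
   illustrative):

   .. code-block:: ini

      [neutron]
      service_metadata_proxy = True
      metadata_proxy_shared_secret = METADATA_SECRET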

#. Start the following services:

   * Linux bridge agent
   * DHCP agent
   * Metadata agent

Verify service operation
------------------------

#. Source the administrative project credentials.
#. Verify presence and operation of the agents:

   .. code-block:: console

      $ neutron agent-list
      +--------------------------------------+--------------------+----------+-------------------+-------+----------------+---------------------------+
      | id                                   | agent_type         | host     | availability_zone | alive | admin_state_up | binary                    |
      +--------------------------------------+--------------------+----------+-------------------+-------+----------------+---------------------------+
      | 09de6af6-c5f1-4548-8b09-18801f068c57 | Linux bridge agent | compute2 |                   | :-)   | True           | neutron-linuxbridge-agent |
      | 188945d1-9e70-4803-a276-df924e0788a4 | Linux bridge agent | compute1 |                   | :-)   | True           | neutron-linuxbridge-agent |
      | e76c440d-d5f6-4316-a674-d689630b629e | DHCP agent         | compute1 | nova              | :-)   | True           | neutron-dhcp-agent        |
      | e67367de-6657-11e6-86a4-931cd04404bb | DHCP agent         | compute2 | nova              | :-)   | True           | neutron-dhcp-agent        |
      | e8174cae-6657-11e6-89f0-534ac6d0cb5c | Metadata agent     | compute1 |                   | :-)   | True           | neutron-metadata-agent    |
      | ece49ec6-6657-11e6-bafb-c7560f19197d | Metadata agent     | compute2 |                   | :-)   | True           | neutron-metadata-agent    |
      +--------------------------------------+--------------------+----------+-------------------+-------+----------------+---------------------------+

Create initial networks
-----------------------

.. include:: shared/deploy-provider-initialnetworks.txt

Verify network operation
------------------------

.. include:: shared/deploy-provider-verifynetworkoperation.txt

Network traffic flow
~~~~~~~~~~~~~~~~~~~~

.. include:: shared/deploy-provider-networktrafficflow.txt

North-south scenario: Instance with a fixed IP address
------------------------------------------------------

* The instance resides on compute node 1 and uses provider network 1.
* The instance sends a packet to a host on the Internet.

The following steps involve compute node 1:

#. The instance interface (1) forwards the packet to the provider
   bridge instance port (2) via ``veth`` pair.
#. Security group rules (3) on the provider bridge handle firewalling
   and connection tracking for the packet.
#. The VLAN sub-interface port (4) on the provider bridge forwards
   the packet to the physical network interface (5).
#. The physical network interface (5) adds VLAN tag 101 to the packet and
   forwards it to the physical network infrastructure switch (6).

The following steps involve the physical network infrastructure:

#. The switch removes VLAN tag 101 from the packet and forwards it to the
   router (7).
#. The router routes the packet from the provider network (8) to the
   external network (9) and forwards the packet to the switch (10).
#. The switch forwards the packet to the external network (11).
#. The external network (12) receives the packet.

.. image:: figures/deploy-lb-provider-flowns1.png
   :alt: Provider networks using Linux bridge - network traffic flow - north/south

.. note::

   Return traffic follows similar steps in reverse.

East-west scenario 1: Instances on the same network
---------------------------------------------------

Instances on the same network communicate directly between compute nodes
containing those instances.

* Instance 1 resides on compute node 1 and uses provider network 1.
* Instance 2 resides on compute node 2 and uses provider network 1.
* Instance 1 sends a packet to instance 2.

The following steps involve compute node 1:

#. The instance 1 interface (1) forwards the packet to the provider
   bridge instance port (2) via ``veth`` pair.
#. Security group rules (3) on the provider bridge handle firewalling
   and connection tracking for the packet.
#. The VLAN sub-interface port (4) on the provider bridge forwards
   the packet to the physical network interface (5).
#. The physical network interface (5) adds VLAN tag 101 to the packet and
   forwards it to the physical network infrastructure switch (6).

The following steps involve the physical network infrastructure:

#. The switch forwards the packet from compute node 1 to compute node 2 (7).

The following steps involve compute node 2:

#. The physical network interface (8) removes VLAN tag 101 from the packet
   and forwards it to the VLAN sub-interface port (9) on the provider bridge.
#. Security group rules (10) on the provider bridge handle firewalling
   and connection tracking for the packet.
#. The provider bridge instance port (11) forwards the packet to
   the instance 2 interface (12) via ``veth`` pair.

.. image:: figures/deploy-lb-provider-flowew1.png
   :alt: Provider networks using Linux bridge - network traffic flow - east/west scenario 1

.. note::

   Return traffic follows similar steps in reverse.

East-west scenario 2: Instances on different networks
-----------------------------------------------------

Instances communicate via a router on the physical network infrastructure.

* Instance 1 resides on compute node 1 and uses provider network 1.
* Instance 2 resides on compute node 1 and uses provider network 2.
* Instance 1 sends a packet to instance 2.

.. note::

   Both instances reside on the same compute node to illustrate how VLAN
   tagging enables multiple logical layer-2 networks to use the same
   physical layer-2 network.

The following steps involve the compute node:

#. The instance 1 interface (1) forwards the packet to the provider
   bridge instance port (2) via ``veth`` pair.
#. Security group rules (3) on the provider bridge handle firewalling
   and connection tracking for the packet.
#. The VLAN sub-interface port (4) on the provider bridge forwards
   the packet to the physical network interface (5).
#. The physical network interface (5) adds VLAN tag 101 to the packet and
   forwards it to the physical network infrastructure switch (6).

The following steps involve the physical network infrastructure:

#. The switch removes VLAN tag 101 from the packet and forwards it to the
   router (7).
#. The router routes the packet from provider network 1 (8) to provider
   network 2 (9).
#. The router forwards the packet to the switch (10).
#. The switch adds VLAN tag 102 to the packet and forwards it to compute
   node 1 (11).

The following steps involve the compute node:

#. The physical network interface (12) removes VLAN tag 102 from the packet
   and forwards it to the VLAN sub-interface port (13) on the provider bridge.
#. Security group rules (14) on the provider bridge handle firewalling
   and connection tracking for the packet.
#. The provider bridge instance port (15) forwards the packet to
   the instance 2 interface (16) via ``veth`` pair.

.. image:: figures/deploy-lb-provider-flowew2.png
   :alt: Provider networks using Linux bridge - network traffic flow - east/west scenario 2

.. note::

   Return traffic follows similar steps in reverse.
doc/networking-guide/source/deploy-lb-selfservice.rst (new file, 422 lines)
@ -0,0 +1,422 @@
.. _deploy-lb-selfservice:

===================================
Linux bridge: Self-service networks
===================================

This architecture example augments :ref:`deploy-lb-provider` to support
a nearly limitless quantity of entirely virtual networks. Although the
Networking service supports VLAN self-service networks, this example
focuses on VXLAN self-service networks. For more information on
self-service networks, see :ref:`intro-os-networking-selfservice`.

.. note::

   The Linux bridge agent lacks support for other overlay protocols such
   as GRE and Geneve.

Prerequisites
~~~~~~~~~~~~~

Add one network node with the following components:

* Three network interfaces: management, provider, and overlay.
* OpenStack Networking Linux bridge layer-2 agent, layer-3 agent, and any
  dependencies.

Modify the compute nodes with the following components:

* Add one network interface: overlay.

.. note::

   You can keep the DHCP and metadata agents on each compute node or
   move them to the network node.

Architecture
~~~~~~~~~~~~

.. image:: figures/deploy-lb-selfservice-overview.png
   :alt: Self-service networks using Linux bridge - overview

The following figure shows components and connectivity for one self-service
network and one untagged (flat) provider network. In this particular case, the
instance resides on the same compute node as the DHCP agent for the network.
If the DHCP agent resides on another compute node, the latter only contains
a DHCP namespace and Linux bridge with a port on the overlay physical network
interface.

.. image:: figures/deploy-lb-selfservice-compconn1.png
   :alt: Self-service networks using Linux bridge - components and connectivity - one network

Example configuration
~~~~~~~~~~~~~~~~~~~~~

Use the following example configuration as a template to add support for
self-service networks to an existing operational environment that supports
provider networks.

Controller node
---------------

#. In the ``neutron.conf`` file:

   * Enable routing and allow overlapping IP address ranges.

     .. code-block:: ini

        [DEFAULT]
        service_plugins = router
        allow_overlapping_ips = True

#. In the ``ml2_conf.ini`` file:

   * Add ``vxlan`` to type drivers and project network types.

     .. code-block:: ini

        [ml2]
        type_drivers = flat,vlan,vxlan
        tenant_network_types = vxlan

   * Enable the layer-2 population mechanism driver.

     .. code-block:: ini

        [ml2]
        mechanism_drivers = linuxbridge,l2population

   * Configure the VXLAN network ID (VNI) range.

     .. code-block:: ini

        [ml2_type_vxlan]
        vni_ranges = VNI_START:VNI_END

     Replace ``VNI_START`` and ``VNI_END`` with appropriate numerical
     values.
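
     For example, ``vni_ranges = 1:1000`` allocates VNIs 1 through 1000 to
     self-service networks (illustrative values).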

#. Restart the following services:

   * Server

Network node
------------

#. Install the Networking service layer-3 agent.

#. In the ``neutron.conf`` file, configure common options:

   .. include:: shared/deploy-config-neutron-common.txt

#. In the ``linuxbridge_agent.ini`` file, configure the layer-2 agent.

   .. code-block:: ini

      [linux_bridge]
      physical_interface_mappings = provider:PROVIDER_INTERFACE

      [vxlan]
      enable_vxlan = True
      l2_population = True
      local_ip = OVERLAY_INTERFACE_IP_ADDRESS

      [securitygroup]
      firewall_driver = iptables

   Replace ``PROVIDER_INTERFACE`` with the name of the underlying interface
   that handles provider networks. For example, ``eth1``.

   Replace ``OVERLAY_INTERFACE_IP_ADDRESS`` with the IP address of the
   interface that handles VXLAN overlays for self-service networks.

#. In the ``l3_agent.ini`` file, configure the layer-3 agent.

   .. code-block:: ini

      [DEFAULT]
      interface_driver = linuxbridge
      external_network_bridge =

   .. note::

      The ``external_network_bridge`` option intentionally contains
      no value.

#. Start the following services:

   * Linux bridge agent
   * Layer-3 agent

Compute nodes
-------------

#. In the ``linuxbridge_agent.ini`` file, enable VXLAN support including
   layer-2 population.

   .. code-block:: ini

      [vxlan]
      enable_vxlan = True
      l2_population = True
      local_ip = OVERLAY_INTERFACE_IP_ADDRESS

   Replace ``OVERLAY_INTERFACE_IP_ADDRESS`` with the IP address of the
   interface that handles VXLAN overlays for self-service networks.

#. Restart the following services:

   * Linux bridge agent

Verify service operation
------------------------

#. Source the administrative project credentials.
#. Verify presence and operation of the agents.

   .. code-block:: console

      $ neutron agent-list
      +--------------------------------------+--------------------+----------+-------------------+-------+----------------+---------------------------+
      | id                                   | agent_type         | host     | availability_zone | alive | admin_state_up | binary                    |
      +--------------------------------------+--------------------+----------+-------------------+-------+----------------+---------------------------+
      | 09de6af6-c5f1-4548-8b09-18801f068c57 | Linux bridge agent | compute2 |                   | :-)   | True           | neutron-linuxbridge-agent |
      | 188945d1-9e70-4803-a276-df924e0788a4 | Linux bridge agent | compute1 |                   | :-)   | True           | neutron-linuxbridge-agent |
      | e76c440d-d5f6-4316-a674-d689630b629e | DHCP agent         | compute1 | nova              | :-)   | True           | neutron-dhcp-agent        |
      | e67367de-6657-11e6-86a4-931cd04404bb | DHCP agent         | compute2 | nova              | :-)   | True           | neutron-dhcp-agent        |
      | e8174cae-6657-11e6-89f0-534ac6d0cb5c | Metadata agent     | compute1 |                   | :-)   | True           | neutron-metadata-agent    |
      | ece49ec6-6657-11e6-bafb-c7560f19197d | Metadata agent     | compute2 |                   | :-)   | True           | neutron-metadata-agent    |
      | 598f6357-4331-4da5-a420-0f5be000bec9 | L3 agent           | network1 | nova              | :-)   | True           | neutron-l3-agent          |
      | f4734e0f-bcd5-4922-a19d-e31d56b0a7ae | Linux bridge agent | network1 |                   | :-)   | True           | neutron-linuxbridge-agent |
      +--------------------------------------+--------------------+----------+-------------------+-------+----------------+---------------------------+

Create initial networks
-----------------------

.. include:: shared/deploy-selfservice-initialnetworks.txt

Verify network operation
------------------------

.. include:: shared/deploy-selfservice-verifynetworkoperation.txt

.. _deploy-lb-selfservice-networktrafficflow:

Network traffic flow
~~~~~~~~~~~~~~~~~~~~

.. include:: shared/deploy-selfservice-networktrafficflow.txt

North-south scenario 1: Instance with a fixed IP address
--------------------------------------------------------

For instances with a fixed IPv4 address, the network node performs SNAT
on north-south traffic passing from self-service to external networks
such as the Internet. For instances with a fixed IPv6 address, the network
node performs conventional routing of traffic between self-service and
external networks.

* The instance resides on compute node 1 and uses self-service network 1.
* The instance sends a packet to a host on the Internet.

The following steps involve compute node 1:

#. The instance interface (1) forwards the packet to the self-service
   bridge instance port (2) via ``veth`` pair.
#. Security group rules (3) on the self-service bridge handle
   firewalling and connection tracking for the packet.
#. The self-service bridge forwards the packet to the VXLAN interface (4)
   which wraps the packet using VNI 101.
#. The underlying physical interface (5) for the VXLAN interface forwards
   the packet to the network node via the overlay network (6).

The following steps involve the network node:

#. The underlying physical interface (7) for the VXLAN interface forwards
   the packet to the VXLAN interface (8) which unwraps the packet.
#. The self-service bridge router port (9) forwards the packet to the
   self-service network interface (10) in the router namespace.

   * For IPv4, the router performs SNAT on the packet which changes the
     source IP address to the router IP address on the provider network
     and sends it to the gateway IP address on the provider network via
     the gateway interface on the provider network (11).
   * For IPv6, the router sends the packet to the next-hop IP address,
     typically the gateway IP address on the provider network, via the
     provider gateway interface (11).

#. The router forwards the packet to the provider bridge router
   port (12).
#. The VLAN sub-interface port (13) on the provider bridge forwards
   the packet to the provider physical network interface (14).
#. The provider physical network interface (14) adds VLAN tag 101 to the packet
   and forwards it to the Internet via physical network infrastructure (15).

.. note::

   Return traffic follows similar steps in reverse. However, without a
   floating IPv4 address, hosts on the provider or external networks cannot
   originate connections to instances on the self-service network.

.. image:: figures/deploy-lb-selfservice-flowns1.png
   :alt: Self-service networks using Linux bridge - network traffic flow - north/south scenario 1

North-south scenario 2: Instance with a floating IPv4 address
-------------------------------------------------------------

For instances with a floating IPv4 address, the network node performs SNAT
on north-south traffic passing from the instance to external networks
such as the Internet and DNAT on north-south traffic passing from external
networks to the instance. Floating IP addresses and NAT do not apply to IPv6.
Thus, the network node routes IPv6 traffic in this scenario.

* The instance resides on compute node 1 and uses self-service network 1.
* A host on the Internet sends a packet to the instance.

The following steps involve the network node:

#. The physical network infrastructure (1) forwards the packet to the
   provider physical network interface (2).
#. The provider physical network interface removes VLAN tag 101 and forwards
   the packet to the VLAN sub-interface on the provider bridge.
#. The provider bridge forwards the packet to the self-service
   router gateway port on the provider network (5).

   * For IPv4, the router performs DNAT on the packet which changes the
     destination IP address to the instance IP address on the self-service
     network and sends it to the gateway IP address on the self-service
     network via the self-service interface (6).
   * For IPv6, the router sends the packet to the next-hop IP address,
     typically the gateway IP address on the self-service network, via
     the self-service interface (6).

#. The router forwards the packet to the self-service bridge router
   port (7).
#. The self-service bridge forwards the packet to the VXLAN interface (8)
   which wraps the packet using VNI 101.
#. The underlying physical interface (9) for the VXLAN interface forwards
   the packet to the compute node via the overlay network (10).

The following steps involve the compute node:

#. The underlying physical interface (11) for the VXLAN interface forwards
   the packet to the VXLAN interface (12) which unwraps the packet.
#. Security group rules (13) on the self-service bridge handle firewalling
   and connection tracking for the packet.
#. The self-service bridge instance port (14) forwards the packet to
   the instance interface (15) via ``veth`` pair.

.. note::

   Egress instance traffic flows similar to north-south scenario 1, except SNAT
   changes the source IP address of the packet to the floating IPv4 address
   rather than the router IP address on the provider network.

.. image:: figures/deploy-lb-selfservice-flowns2.png
   :alt: Self-service networks using Linux bridge - network traffic flow - north/south scenario 2

East-west scenario 1: Instances on the same network
---------------------------------------------------

Instances with a fixed IPv4/IPv6 or floating IPv4 address on the same network
communicate directly between compute nodes containing those instances.

By default, the VXLAN protocol lacks knowledge of target location
and uses multicast to discover it. After discovery, it stores the
location in the local forwarding database. In large deployments,
the discovery process can generate a significant amount of network
traffic that all nodes must process. To eliminate the latter and generally
increase efficiency, the Networking service includes the layer-2
population mechanism driver that automatically populates the
forwarding database for VXLAN interfaces. The example configuration
enables this driver. For more information, see :ref:`config-plugin-ml2`.
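
For instance, on a compute node you can inspect the populated VXLAN
forwarding database with the ``bridge`` utility from iproute2 (a sketch;
the interface name ``vxlan-101`` assumes VNI 101):

.. code-block:: console

   # bridge fdb show dev vxlan-101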

* Instance 1 resides on compute node 1 and uses self-service network 1.
* Instance 2 resides on compute node 2 and uses self-service network 1.
* Instance 1 sends a packet to instance 2.

The following steps involve compute node 1:

#. The instance 1 interface (1) forwards the packet to the
   self-service bridge instance port (2) via ``veth`` pair.
#. Security group rules (3) on the self-service bridge handle firewalling
   and connection tracking for the packet.
#. The self-service bridge forwards the packet to the VXLAN interface (4)
   which wraps the packet using VNI 101.
#. The underlying physical interface (5) for the VXLAN interface forwards
   the packet to compute node 2 via the overlay network (6).

The following steps involve compute node 2:

#. The underlying physical interface (7) for the VXLAN interface forwards
   the packet to the VXLAN interface (8) which unwraps the packet.
#. Security group rules (9) on the self-service bridge handle firewalling
   and connection tracking for the packet.
#. The self-service bridge instance port (10) forwards the packet to
   the instance 2 interface (11) via ``veth`` pair.

.. note::

   Return traffic follows similar steps in reverse.

.. image:: figures/deploy-lb-selfservice-flowew1.png
   :alt: Self-service networks using Linux bridge - network traffic flow - east/west scenario 1

East-west scenario 2: Instances on different networks
-----------------------------------------------------

Instances using a fixed IPv4/IPv6 address or floating IPv4 address communicate
via a router on the network node. The self-service networks must reside on the
same router.

* Instance 1 resides on compute node 1 and uses self-service network 1.
* Instance 2 resides on compute node 1 and uses self-service network 2.
* Instance 1 sends a packet to instance 2.

.. note::

   Both instances reside on the same compute node to illustrate how VXLAN
   enables multiple overlays to use the same layer-3 network.

The following steps involve the compute node:

#. The instance 1 interface (1) forwards the packet to the self-service
   bridge instance port (2) via ``veth`` pair.
#. Security group rules (3) on the self-service bridge handle
   firewalling and connection tracking for the packet.
#. The self-service bridge forwards the packet to the VXLAN interface (4)
   which wraps the packet using VNI 101.
#. The underlying physical interface (5) for the VXLAN interface forwards
   the packet to the network node via the overlay network (6).

The following steps involve the network node:

#. The underlying physical interface (7) for the VXLAN interface forwards
   the packet to the VXLAN interface (8) which unwraps the packet.
#. The self-service bridge router port (9) forwards the packet to the
   self-service network 1 interface (10) in the router namespace.
#. The router sends the packet to the next-hop IP address, typically the
   gateway IP address on self-service network 2, via the self-service
   network 2 interface (11).
#. The router forwards the packet to the self-service network 2 bridge router
   port (12).
#. The self-service network 2 bridge forwards the packet to the VXLAN
   interface (13) which wraps the packet using VNI 102.
#. The physical network interface (14) for the VXLAN interface sends the
   packet to the compute node via the overlay network (15).

The following steps involve the compute node:

#. The underlying physical interface (16) for the VXLAN interface sends
   the packet to the VXLAN interface (17) which unwraps the packet.
#. Security group rules (18) on the self-service bridge handle firewalling
   and connection tracking for the packet.
#. The self-service bridge instance port (19) forwards the packet to
   the instance 2 interface (20) via ``veth`` pair.

.. note::

   Return traffic follows similar steps in reverse.

.. image:: figures/deploy-lb-selfservice-flowew2.png
   :alt: Self-service networks using Linux bridge - network traffic flow - east/west scenario 2
doc/networking-guide/source/deploy-lb.rst (new file, 17 lines)
@ -0,0 +1,17 @@
.. _deploy-lb:

=============================
Linux bridge mechanism driver
=============================

The Linux bridge mechanism driver uses only Linux bridges and ``veth`` pairs
as interconnection devices. A layer-2 agent manages Linux bridges on each
compute node and any other node that provides layer-3 (routing), DHCP,
metadata, or other network services.

.. toctree::
   :maxdepth: 2

   deploy-lb-provider
   deploy-lb-selfservice
   deploy-lb-ha-vrrp
doc/networking-guide/source/deploy-ovs-ha-dvr.rst (new file, 535 lines)
@ -0,0 +1,535 @@
|
||||
.. _deploy-ovs-ha-dvr:
|
||||
|
||||
=========================================
|
||||
Open vSwitch: High availability using DVR
|
||||
=========================================
|
||||
|
||||
This architecture example augments the self-service deployment example
|
||||
with the Distributed Virtual Router (DVR) high-availability mechanism that
|
||||
provides connectivity between self-service and provider networks on compute
|
||||
nodes rather than network nodes for specific scenarios. For instances with a
|
||||
floating IPv4 address, routing between self-service and provider networks
|
||||
resides completely on the compute nodes to eliminate single point of
|
||||
failure and performance issues with network nodes. Routing also resides
|
||||
completely on the compute nodes for instances with a fixed or floating IPv4
|
||||
address using self-service networks on the same distributed virtual router.
|
||||
However, instances with a fixed IP address still rely on the network node for
|
||||
routing and SNAT services between self-service and provider networks.
|
||||
|
||||
Consider the following attributes of this high-availability mechanism to
|
||||
determine practicality in your environment:
|
||||
|
||||
* Only provides connectivity to an instance via the compute node on which
|
||||
the instance resides if the instance resides on a self-service network
|
||||
with a floating IPv4 address. Instances on self-service networks with
|
||||
only an IPv6 address or both IPv4 and IPv6 addresses rely on the network
|
||||
node for IPv6 connectivity.
|
||||
|
||||
* The instance of a router on each compute node consumes an IPv4 address
|
||||
on the provider network on which it contains a gateway.
|
||||
|
||||
Prerequisites
|
||||
~~~~~~~~~~~~~
|
||||
|
||||
Modify the compute nodes with the following components:
|
||||
|
||||
* Install the OpenStack Networking layer-3 agent.
|
||||
|
||||
.. note::
|
||||
|
||||
Consider adding at least one additional network node to provide
|
||||
high-availability for instances with a fixed IP address. See
|
||||
See :ref:`config-dvr-snat-ha-ovs` for more information.
|
||||
|
||||
Architecture
|
||||
~~~~~~~~~~~~
|
||||
|
||||
.. image:: figures/deploy-ovs-ha-dvr-overview.png
|
||||
:alt: High-availability using Open vSwitch with DVR - overview
|
||||
|
||||
The following figure shows components and connectivity for one self-service
|
||||
network and one untagged (flat) network. In this particular case, the
|
||||
instance resides on the same compute node as the DHCP agent for the network.
|
||||
If the DHCP agent resides on another compute node, the latter only contains
|
||||
a DHCP namespace with a port on the OVS integration bridge.
|
||||
|
||||
.. image:: figures/deploy-ovs-ha-dvr-compconn1.png
|
||||
:alt: High-availability using Open vSwitch with DVR - components and connectivity - one network
|
||||
|
||||
Example configuration
|
||||
~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
Use the following example configuration as a template to add support for
|
||||
high-availability using DVR to an existing operational environment that
|
||||
supports self-service networks.
|
||||
|
||||
Controller node
|
||||
---------------
|
||||
|
||||
#. In the ``neutron.conf`` file:
|
||||
|
||||
* Enable distributed routing by default for all routers.
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[DEFAULT]
|
||||
router_distributed = true
|
||||
|
||||
#. Restart the following services:
|
||||
|
||||
* Server
|
||||
|
||||
Network node
|
||||
------------
|
||||
|
||||
#. In the ``openswitch_agent.ini`` file, enable distributed routing.
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[DEFAULT]
|
||||
enable_distributed_routing = true
|
||||
|
||||
#. In the ``l3_agent.ini`` file, configure the layer-3 agent to provide
|
||||
SNAT services.
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[DEFAULT]
|
||||
agent_mode = dvr_snat
|
||||
|
||||
.. note::
|
||||
|
||||
The ``external_network_bridge`` option intentionally contains
|
||||
no value.
|
||||
|
||||
#. Restart the following services:
|
||||
|
||||
* Open vSwitch agent
|
||||
* Layer-3 agent
|
||||
|
||||
Compute nodes
|
||||
-------------
|
||||
|
||||
#. Install the Networking service layer-3 agent.
|
||||
|
||||
#. In the ``openswitch_agent.ini`` file, enable distributed routing.
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[DEFAULT]
|
||||
enable_distributed_routing = true
|
||||
|
||||
#. Restart the following services:
|
||||
|
||||
* Open vSwitch agent
|
||||
|
||||
#. In the ``l3_agent.ini`` file, configure the layer-3 agent.
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[DEFAULT]
|
||||
interface_driver = openvswitch
|
||||
external_network_bridge =
|
||||
agent_mode = dvr
|
||||
|
||||
.. note::
|
||||
|
||||
The ``external_network_bridge`` option intentionally contains
|
||||
no value.
|
||||
|
||||

Verify service operation
------------------------

#. Source the administrative project credentials.
#. Verify presence and operation of the agents.

   .. code-block:: console

      $ neutron agent-list
      +--------------------------------------+--------------------+----------+-------------------+-------+----------------+---------------------------+
      | id                                   | agent_type         | host     | availability_zone | alive | admin_state_up | binary                    |
      +--------------------------------------+--------------------+----------+-------------------+-------+----------------+---------------------------+
      | 05d980f2-a4fc-4815-91e7-a7f7e118c0db | L3 agent           | compute1 | nova              | :-)   | True           | neutron-l3-agent          |
      | 1236bbcb-e0ba-48a9-80fc-81202ca4fa51 | Metadata agent     | compute2 |                   | :-)   | True           | neutron-metadata-agent    |
      | 2a2e9a90-51b8-4163-a7d6-3e199ba2374b | L3 agent           | compute2 | nova              | :-)   | True           | neutron-l3-agent          |
      | 457d6898-b373-4bb3-b41f-59345dcfb5c5 | Open vSwitch agent | compute2 |                   | :-)   | True           | neutron-openvswitch-agent |
      | 513caa68-0391-4e53-a530-082e2c23e819 | Linux bridge agent | compute1 |                   | :-)   | True           | neutron-linuxbridge-agent |
      | 71f15e84-bc47-4c2a-b9fb-317840b2d753 | DHCP agent         | compute2 | nova              | :-)   | True           | neutron-dhcp-agent        |
      | 8805b962-de95-4e40-bdc2-7a0add7521e8 | L3 agent           | network1 | nova              | :-)   | True           | neutron-l3-agent          |
      | a33cac5a-0266-48f6-9cac-4cef4f8b0358 | Open vSwitch agent | network1 |                   | :-)   | True           | neutron-openvswitch-agent |
      | a6c69690-e7f7-4e56-9831-1282753e5007 | Metadata agent     | compute1 |                   | :-)   | True           | neutron-metadata-agent    |
      | af11f22f-a9f4-404f-9fd8-cd7ad55c0f68 | DHCP agent         | compute1 | nova              | :-)   | True           | neutron-dhcp-agent        |
      | bcfc977b-ec0e-4ba9-be62-9489b4b0e6f1 | Open vSwitch agent | compute1 |                   | :-)   | True           | neutron-openvswitch-agent |
      +--------------------------------------+--------------------+----------+-------------------+-------+----------------+---------------------------+

Create initial networks
-----------------------

Similar to the self-service deployment example, this configuration supports
multiple VXLAN self-service networks. After enabling high-availability, all
additional routers use distributed routing. The following procedure creates
an additional self-service network and router. The Networking service also
supports adding distributed routing to existing routers.
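
For example, adding distributed routing to an existing router requires
temporarily disabling it. The following commands are a sketch that assumes
an existing router named ``router1``:

.. code-block:: console

   $ neutron router-update router1 --admin_state_up=False
   $ neutron router-update router1 --distributed=True
   $ neutron router-update router1 --admin_state_up=True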

#. Source the credentials for a regular (non-administrative) project.
#. Create a self-service network.

   .. code-block:: console

      $ neutron net-create selfservice2
      Created a new network:
      +-------------------------+--------------------------------------+
      | Field                   | Value                                |
      +-------------------------+--------------------------------------+
      | admin_state_up          | True                                 |
      | availability_zone_hints |                                      |
      | availability_zones      |                                      |
      | description             |                                      |
      | id                      | 7ebc353c-6c8f-461f-8ada-01b9f14beb18 |
      | ipv4_address_scope      |                                      |
      | ipv6_address_scope      |                                      |
      | mtu                     | 1450                                 |
      | name                    | selfservice2                         |
      | port_security_enabled   | True                                 |
      | router:external         | False                                |
      | shared                  | False                                |
      | status                  | ACTIVE                               |
      | subnets                 |                                      |
      | tags                    |                                      |
      | tenant_id               | f986edf55ae945e2bef3cb4bfd589928     |
      +-------------------------+--------------------------------------+

#. Create an IPv4 subnet on the self-service network.

   .. code-block:: console

      $ neutron subnet-create --name selfservice2-v4 --ip-version 4 \
        --dns-nameserver 8.8.4.4 selfservice2 192.168.2.0/24
      Created a new subnet:
      +-------------------+--------------------------------------------------+
      | Field             | Value                                            |
      +-------------------+--------------------------------------------------+
      | allocation_pools  | {"start": "192.168.2.2", "end": "192.168.2.254"} |
      | cidr              | 192.168.2.0/24                                   |
      | description       |                                                  |
      | dns_nameservers   | 8.8.4.4                                          |
      | enable_dhcp       | True                                             |
      | gateway_ip        | 192.168.2.1                                      |
      | host_routes       |                                                  |
      | id                | 12a41804-18bf-4cec-bde8-174cbdbf1573             |
      | ip_version        | 4                                                |
      | ipv6_address_mode |                                                  |
      | ipv6_ra_mode      |                                                  |
      | name              | selfservice2-v4                                  |
      | network_id        | 7ebc353c-6c8f-461f-8ada-01b9f14beb18             |
      | subnetpool_id     |                                                  |
      | tenant_id         | f986edf55ae945e2bef3cb4bfd589928                 |
      +-------------------+--------------------------------------------------+

#. Create an IPv6 subnet on the self-service network.

   .. code-block:: console

      $ neutron subnet-create --name selfservice2-v6 --ip-version 6 \
        --ipv6-address-mode slaac --ipv6-ra-mode slaac \
        --dns-nameserver 2001:4860:4860::8844 selfservice2 \
        fd00:192:168:2::/64
      Created a new subnet:
      +-------------------+------------------------------------------------------------------------------+
      | Field             | Value                                                                        |
      +-------------------+------------------------------------------------------------------------------+
      | allocation_pools  | {"start": "fd00:192:168:2::2", "end": "fd00:192:168:2:ffff:ffff:ffff:ffff"} |
      | cidr              | fd00:192:168:2::/64                                                          |
      | description       |                                                                              |
      | dns_nameservers   | 2001:4860:4860::8844                                                         |
      | enable_dhcp       | True                                                                         |
      | gateway_ip        | fd00:192:168:2::1                                                            |
      | host_routes       |                                                                              |
      | id                | b0f122fe-0bf9-4f31-975d-a47e58aa88e3                                         |
      | ip_version        | 6                                                                            |
      | ipv6_address_mode | slaac                                                                        |
      | ipv6_ra_mode      | slaac                                                                        |
      | name              | selfservice2-v6                                                              |
      | network_id        | 7ebc353c-6c8f-461f-8ada-01b9f14beb18                                         |
      | subnetpool_id     |                                                                              |
      | tenant_id         | f986edf55ae945e2bef3cb4bfd589928                                             |
      +-------------------+------------------------------------------------------------------------------+

#. Create a router.

   .. code-block:: console

      $ neutron router-create router2
      Created a new router:
      +-------------------------+--------------------------------------+
      | Field                   | Value                                |
      +-------------------------+--------------------------------------+
      | admin_state_up          | True                                 |
      | availability_zone_hints |                                      |
      | availability_zones      |                                      |
      | description             |                                      |
      | external_gateway_info   |                                      |
      | id                      | b6206312-878e-497c-8ef7-eb384f8add96 |
      | name                    | router2                              |
      | routes                  |                                      |
      | status                  | ACTIVE                               |
      | tenant_id               | f986edf55ae945e2bef3cb4bfd589928     |
      +-------------------------+--------------------------------------+

#. Add the IPv4 and IPv6 subnets as interfaces on the router.

   .. code-block:: console

      $ neutron router-interface-add router2 selfservice2-v4
      Added interface da3504ad-ba70-4b11-8562-2e6938690878 to router router2.

      $ neutron router-interface-add router2 selfservice2-v6
      Added interface 442e36eb-fce3-4cb5-b179-4be6ace595f0 to router router2.

#. Add the provider network as a gateway on the router.

   .. code-block:: console

      $ neutron router-gateway-set router2 provider1
      Set gateway for router router2

Verify network operation
------------------------

#. Source the administrative project credentials.
#. Verify distributed routing on the router.

   .. code-block:: console

      $ neutron router-show router2
      +-------------------------+-------+
      | Field                   | Value |
      +-------------------------+-------+
      | distributed             | True  |
      +-------------------------+-------+

#. On each compute node, verify creation of a ``qrouter`` namespace with
   the same ID.

   Compute node 1:

   .. code-block:: console

      # ip netns
      qrouter-78d2f628-137c-4f26-a257-25fc20f203c1

   Compute node 2:

   .. code-block:: console

      # ip netns
      qrouter-78d2f628-137c-4f26-a257-25fc20f203c1

#. On the network node, verify creation of the ``snat`` and ``qrouter``
   namespaces with the same ID.

   .. code-block:: console

      # ip netns
      snat-78d2f628-137c-4f26-a257-25fc20f203c1
      qrouter-78d2f628-137c-4f26-a257-25fc20f203c1

   .. note::

      The namespace for router 1 from :ref:`deploy-ovs-selfservice` should
      also appear on network node 1 because of creation prior to enabling
      distributed routing.

#. Launch an instance with an interface on the additional self-service
   network. For example, a CirrOS image using flavor ID 1.

   .. code-block:: console

      $ openstack server create --flavor 1 --image cirros --nic net-id=NETWORK_ID selfservice-instance2

   Replace ``NETWORK_ID`` with the ID of the additional self-service
   network.

#. Determine the IPv4 and IPv6 addresses of the instance.

   .. code-block:: console

      $ openstack server list
      +--------------------------------------+-----------------------+--------+--------------------------------------------------------------+
      | ID                                   | Name                  | Status | Networks                                                     |
      +--------------------------------------+-----------------------+--------+--------------------------------------------------------------+
      | bde64b00-77ae-41b9-b19a-cd8e378d9f8b | selfservice-instance2 | ACTIVE | selfservice2=fd00:192:168:2:f816:3eff:fe71:e93e, 192.168.2.4 |
      +--------------------------------------+-----------------------+--------+--------------------------------------------------------------+

#. Create a floating IPv4 address on the provider network.

   .. code-block:: console

      $ openstack ip floating create provider1
      +-------------+--------------------------------------+
      | Field       | Value                                |
      +-------------+--------------------------------------+
      | fixed_ip    | None                                 |
      | id          | 0174056a-fa56-4403-b1ea-b5151a31191f |
      | instance_id | None                                 |
      | ip          | 203.0.113.17                         |
      | pool        | provider1                            |
      +-------------+--------------------------------------+

#. Associate the floating IPv4 address with the instance.

   .. code-block:: console

      $ openstack ip floating add 203.0.113.17 selfservice-instance2

   .. note::

      This command provides no output.

#. On the compute node containing the instance, verify creation of the
   ``fip`` namespace with the same ID as the provider network.

   .. code-block:: console

      # ip netns
      fip-4bfa3075-b4b2-4f7d-b88e-df1113942d43
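
#. Optionally, verify connectivity to the instance by pinging its floating
   IPv4 address from a host with access to the provider network. The address
   below is the one allocated earlier in this procedure.

   .. code-block:: console

      $ ping -c 4 203.0.113.17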

Network traffic flow
~~~~~~~~~~~~~~~~~~~~

.. include:: shared/deploy-selfservice-networktrafficflow.txt

This section only contains flow scenarios that benefit from distributed
virtual routing or that differ from conventional operation. For other
flow scenarios, see :ref:`deploy-ovs-selfservice-networktrafficflow`.

North-south scenario 1: Instance with a fixed IP address
--------------------------------------------------------

Similar to :ref:`deploy-ovs-selfservice-networktrafficflow-ns1`, except
the router namespace on the network node becomes the SNAT namespace. The
network node still contains the router namespace, but it serves no purpose
in this case.

.. image:: figures/deploy-ovs-ha-dvr-flowns1.png
   :alt: High-availability using Open vSwitch with DVR - network traffic flow - north/south scenario 1
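
To observe the SNAT conversion this scenario describes, you can list the NAT
rules inside the ``snat`` namespace on the network node. This is a diagnostic
sketch; substitute the namespace ID reported by ``ip netns`` in your
environment.

.. code-block:: console

   # ip netns exec snat-78d2f628-137c-4f26-a257-25fc20f203c1 iptables -t nat -S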

North-south scenario 2: Instance with a floating IPv4 address
-------------------------------------------------------------

For instances with a floating IPv4 address using a self-service network
on a distributed router, the compute node containing the instance performs
SNAT on north-south traffic passing from the instance to external networks
such as the Internet and DNAT on north-south traffic passing from external
networks to the instance. Floating IP addresses and NAT do not apply to
IPv6. Thus, the network node routes IPv6 traffic in this scenario.

* Instance 1 resides on compute node 1 and uses self-service network 1.
* A host on the Internet sends a packet to the instance.

The following steps involve the compute node:

#. The physical network infrastructure (1) forwards the packet to the
   provider physical network interface (2).
#. The provider physical network interface forwards the packet to the
   OVS provider bridge provider network port (3).
#. The OVS provider bridge swaps actual VLAN tag 101 with the internal
   VLAN tag.
#. The OVS provider bridge ``phy-br-provider`` port (4) forwards the
   packet to the OVS integration bridge ``int-br-provider`` port (5).
#. The OVS integration bridge port for the provider network (6) removes
   the internal VLAN tag and forwards the packet to the provider network
   interface (7) in the floating IP namespace. This interface responds
   to any ARP requests for the instance floating IPv4 address.
#. The floating IP namespace routes the packet (8) to the distributed
   router namespace (9) using a pair of IP addresses on the DVR internal
   network. This namespace contains the instance floating IPv4 address.
#. The router performs DNAT on the packet which changes the destination
   IP address to the instance IP address on the self-service network via
   the self-service network interface (10).
#. The router forwards the packet to the OVS integration bridge port for
   the self-service network (11).
#. The OVS integration bridge adds an internal VLAN tag to the packet.
#. The OVS integration bridge removes the internal VLAN tag from the packet.
#. The OVS integration bridge security group port (12) forwards the packet
   to the security group bridge OVS port (13) via ``veth`` pair.
#. Security group rules (14) on the security group bridge handle firewalling
   and connection tracking for the packet.
#. The security group bridge instance port (15) forwards the packet to the
   instance interface (16) via ``veth`` pair.

.. image:: figures/deploy-ovs-ha-dvr-flowns2.png
   :alt: High-availability using Open vSwitch with DVR - network traffic flow - north/south scenario 2

.. note::

   Egress traffic follows similar steps in reverse, except SNAT changes
   the source IPv4 address of the packet to the floating IPv4 address.

East-west scenario 1: Instances on different networks on the same router
------------------------------------------------------------------------

Instances with a fixed IPv4/IPv6 address or floating IPv4 address on the
same compute node communicate via the router on that compute node. Instances
on different compute nodes communicate via an instance of the router on
each compute node.

.. note::

   This scenario places the instances on different compute nodes to
   show the most complex situation.

The following steps involve compute node 1:

#. The instance interface (1) forwards the packet to the security group
   bridge instance port (2) via ``veth`` pair.
#. Security group rules (3) on the security group bridge handle firewalling
   and connection tracking for the packet.
#. The security group bridge OVS port (4) forwards the packet to the OVS
   integration bridge security group port (5) via ``veth`` pair.
#. The OVS integration bridge adds an internal VLAN tag to the packet.
#. The OVS integration bridge port for self-service network 1 (6) removes the
   internal VLAN tag and forwards the packet to the self-service network 1
   interface in the distributed router namespace (7).
#. The distributed router namespace routes the packet to self-service
   network 2.
#. The self-service network 2 interface in the distributed router namespace
   (8) forwards the packet to the OVS integration bridge port for
   self-service network 2 (9).
#. The OVS integration bridge adds an internal VLAN tag to the packet.
#. The OVS integration bridge exchanges the internal VLAN tag for an
   internal tunnel ID.
#. The OVS integration bridge ``patch-tun`` port (10) forwards the packet
   to the OVS tunnel bridge ``patch-int`` port (11).
#. The OVS tunnel bridge (12) wraps the packet using VNI 101.
#. The underlying physical interface (13) for overlay networks forwards
   the packet to compute node 2 via the overlay network (14).

The following steps involve compute node 2:

#. The underlying physical interface (15) for overlay networks forwards
   the packet to the OVS tunnel bridge (16).
#. The OVS tunnel bridge unwraps the packet and adds an internal tunnel ID
   to it.
#. The OVS tunnel bridge exchanges the internal tunnel ID for an internal
   VLAN tag.
#. The OVS tunnel bridge ``patch-int`` patch port (17) forwards the packet
   to the OVS integration bridge ``patch-tun`` patch port (18).
#. The OVS integration bridge removes the internal VLAN tag from the packet.
#. The OVS integration bridge security group port (19) forwards the packet
   to the security group bridge OVS port (20) via ``veth`` pair.
#. Security group rules (21) on the security group bridge handle firewalling
   and connection tracking for the packet.
#. The security group bridge instance port (22) forwards the packet to the
   instance 2 interface (23) via ``veth`` pair.

.. note::

   Routing between self-service networks occurs on the compute node containing
   the instance sending the packet. In this scenario, routing occurs on
   compute node 1 for packets from instance 1 to instance 2 and on compute
   node 2 for packets from instance 2 to instance 1.

.. image:: figures/deploy-ovs-ha-dvr-flowew1.png
   :alt: High-availability using Open vSwitch with DVR - network traffic flow - east/west scenario 1

174
doc/networking-guide/source/deploy-ovs-ha-vrrp.rst
Normal file
@ -0,0 +1,174 @@

.. _deploy-ovs-ha-vrrp:

==========================================
Open vSwitch: High availability using VRRP
==========================================

.. include:: shared/deploy-ha-vrrp.txt

Prerequisites
~~~~~~~~~~~~~

Add one network node with the following components:

* Three network interfaces: management, provider, and overlay.
* OpenStack Networking layer-2 agent, layer-3 agent, and any
  dependencies.

.. note::

   You can keep the DHCP and metadata agents on each compute node or
   move them to the network nodes.

Architecture
~~~~~~~~~~~~

.. image:: figures/deploy-ovs-ha-vrrp-overview.png
   :alt: High-availability using VRRP with Open vSwitch - overview

The following figure shows components and connectivity for one self-service
network and one untagged (flat) network. The master router resides on network
node 1. In this particular case, the instance resides on the same compute
node as the DHCP agent for the network. If the DHCP agent resides on another
compute node, the latter only contains a DHCP namespace with a port on the
OVS integration bridge.

.. image:: figures/deploy-ovs-ha-vrrp-compconn1.png
   :alt: High-availability using VRRP with Open vSwitch - components and connectivity - one network

Example configuration
~~~~~~~~~~~~~~~~~~~~~

Use the following example configuration as a template to add support for
high-availability using VRRP to an existing operational environment that
supports self-service networks.

Controller node
---------------

#. In the ``neutron.conf`` file:

   * Enable VRRP.

     .. code-block:: ini

        [DEFAULT]
        l3_ha = True

#. Restart the following services:

   * Server
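
With ``l3_ha = True``, routers that users create are highly available by
default. An administrative user can also request the behavior explicitly for
a single router; a sketch, assuming a router named ``router-ha``:

.. code-block:: console

   $ neutron router-create router-ha --ha True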

Network node 1
--------------

No changes.

Network node 2
--------------

#. Install the Networking service OVS layer-2 agent and layer-3 agent.

#. Install OVS.

#. In the ``neutron.conf`` file, configure common options:

   .. include:: shared/deploy-config-neutron-common.txt

#. Start the following services:

   * OVS

#. Create the OVS provider bridge ``br-provider``:

   .. code-block:: console

      $ ovs-vsctl add-br br-provider

#. In the ``openvswitch_agent.ini`` file, configure the layer-2 agent.

   .. code-block:: ini

      [ovs]
      bridge_mappings = provider:br-provider
      local_ip = OVERLAY_INTERFACE_IP_ADDRESS

      [agent]
      tunnel_types = vxlan
      l2_population = true

      [securitygroup]
      firewall_driver = iptables_hybrid

   Replace ``OVERLAY_INTERFACE_IP_ADDRESS`` with the IP address of the
   interface that handles VXLAN overlays for self-service networks.

#. In the ``l3_agent.ini`` file, configure the layer-3 agent.

   .. code-block:: ini

      [DEFAULT]
      interface_driver = openvswitch
      external_network_bridge =

   .. note::

      The ``external_network_bridge`` option intentionally contains
      no value.

#. Start the following services:

   * Open vSwitch agent
   * Layer-3 agent

Compute nodes
-------------

No changes.

Verify service operation
------------------------

#. Source the administrative project credentials.
#. Verify presence and operation of the agents.

   .. code-block:: console

      $ neutron agent-list
      +--------------------------------------+--------------------+----------+-------------------+-------+----------------+---------------------------+
      | id                                   | agent_type         | host     | availability_zone | alive | admin_state_up | binary                    |
      +--------------------------------------+--------------------+----------+-------------------+-------+----------------+---------------------------+
      | 1236bbcb-e0ba-48a9-80fc-81202ca4fa51 | Metadata agent     | compute2 |                   | :-)   | True           | neutron-metadata-agent    |
      | 457d6898-b373-4bb3-b41f-59345dcfb5c5 | Open vSwitch agent | compute2 |                   | :-)   | True           | neutron-openvswitch-agent |
      | 71f15e84-bc47-4c2a-b9fb-317840b2d753 | DHCP agent         | compute2 | nova              | :-)   | True           | neutron-dhcp-agent        |
      | 8805b962-de95-4e40-bdc2-7a0add7521e8 | L3 agent           | network1 | nova              | :-)   | True           | neutron-l3-agent          |
      | a33cac5a-0266-48f6-9cac-4cef4f8b0358 | Open vSwitch agent | network1 |                   | :-)   | True           | neutron-openvswitch-agent |
      | a6c69690-e7f7-4e56-9831-1282753e5007 | Metadata agent     | compute1 |                   | :-)   | True           | neutron-metadata-agent    |
      | af11f22f-a9f4-404f-9fd8-cd7ad55c0f68 | DHCP agent         | compute1 | nova              | :-)   | True           | neutron-dhcp-agent        |
      | bcfc977b-ec0e-4ba9-be62-9489b4b0e6f1 | Open vSwitch agent | compute1 |                   | :-)   | True           | neutron-openvswitch-agent |
      | 7f00d759-f2c9-494a-9fbf-fd9118104d03 | Open vSwitch agent | network2 |                   | :-)   | True           | neutron-openvswitch-agent |
      | b28d8818-9e32-4888-930b-29addbdd2ef9 | L3 agent           | network2 | nova              | :-)   | True           | neutron-l3-agent          |
      +--------------------------------------+--------------------+----------+-------------------+-------+----------------+---------------------------+
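
After creating an HA router with the following procedure, you can also see
which layer-3 agents host it. This is a sketch; replace ``router1`` with the
name of your router.

.. code-block:: console

   $ neutron l3-agent-list-hosting-router router1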

Create initial networks
-----------------------

.. include:: shared/deploy-ha-vrrp-initialnetworks.txt

Verify network operation
------------------------

.. include:: shared/deploy-ha-vrrp-verifynetworkoperation.txt

Verify failover operation
-------------------------

.. include:: shared/deploy-ha-vrrp-verifyfailoveroperation.txt

Network traffic flow
~~~~~~~~~~~~~~~~~~~~

This high-availability mechanism simply augments :ref:`deploy-ovs-selfservice`
with failover of layer-3 services to another router if the master router
fails. Thus, you can reference :ref:`Self-service network traffic flow
<deploy-ovs-selfservice-networktrafficflow>` for normal operation.

425
doc/networking-guide/source/deploy-ovs-provider.rst
Normal file
@ -0,0 +1,425 @@

.. _deploy-ovs-provider:

===============================
Open vSwitch: Provider networks
===============================

This architecture example provides layer-2 connectivity between instances
and the physical network infrastructure using VLAN (802.1q) tagging. It
supports one untagged (flat) network and up to 4095 tagged (VLAN) networks.
The actual quantity of VLAN networks depends on the physical network
infrastructure. For more information on provider networks, see
:ref:`intro-os-networking-provider`.

.. warning::

   Linux distributions often package older releases of Open vSwitch that can
   introduce issues during operation with the Networking service. We recommend
   using at least the latest long-term stable (LTS) release of Open vSwitch
   for the best experience and support from Open vSwitch. See
   `<http://www.openvswitch.org>`__ for available releases and the
   `installation instructions
   <https://github.com/openvswitch/ovs/blob/master/INSTALL.md>`__ for
   building a recent release from source.
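
To determine which Open vSwitch release a node currently runs, for example:

.. code-block:: console

   $ ovs-vsctl --version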

Prerequisites
~~~~~~~~~~~~~

One controller node with the following components:

* Two network interfaces: management and provider.
* OpenStack Networking server service and ML2 plug-in.

Two compute nodes with the following components:

* Two network interfaces: management and provider.
* OpenStack Networking Open vSwitch (OVS) layer-2 agent, DHCP agent, metadata
  agent, and any dependencies including OVS.

.. note::

   Larger deployments typically deploy the DHCP and metadata agents on a
   subset of compute nodes to increase performance and redundancy. However,
   too many agents can overwhelm the message bus. Also, to further simplify
   any deployment, you can omit the metadata agent and use a configuration
   drive to provide metadata to instances.

Architecture
~~~~~~~~~~~~

.. image:: figures/deploy-ovs-provider-overview.png
   :alt: Provider networks using OVS - overview

The following figure shows components and connectivity for one untagged
(flat) network. In this particular case, the instance resides on the
same compute node as the DHCP agent for the network. If the DHCP agent
resides on another compute node, the latter only contains a DHCP namespace
with a port on the OVS integration bridge.

.. image:: figures/deploy-ovs-provider-compconn1.png
   :alt: Provider networks using OVS - components and connectivity - one network

The following figure describes virtual connectivity among components for
two tagged (VLAN) networks. Essentially, all networks use a single OVS
integration bridge with different internal VLAN tags. The internal VLAN
tags almost always differ from the network VLAN assignment in the Networking
service. Similar to the untagged network case, the DHCP agent may reside on
a different compute node.

.. image:: figures/deploy-ovs-provider-compconn2.png
   :alt: Provider networks using OVS - components and connectivity - multiple networks

.. note::

   These figures omit the controller node because it does not handle instance
   network traffic.

Example configuration
~~~~~~~~~~~~~~~~~~~~~

Use the following example configuration as a template to deploy provider
networks in your environment.

Controller node
---------------

#. Install the Networking service components that provide the
   ``neutron-server`` service and ML2 plug-in.

#. In the ``neutron.conf`` file:

   * Configure common options:

     .. include:: shared/deploy-config-neutron-common.txt

   * Disable service plug-ins because provider networks do not require
     any. However, this breaks portions of the dashboard that manage
     the Networking service. See the
     `Installation Guide <http://docs.openstack.org>`__ for more
     information.

     .. code-block:: ini

        [DEFAULT]
        service_plugins =

   * Enable two DHCP agents per network so both compute nodes can
     provide DHCP service for provider networks.

     .. code-block:: ini

        [DEFAULT]
        dhcp_agents_per_network = 2

   * If necessary, :ref:`configure MTU <config-mtu>`.

#. In the ``ml2_conf.ini`` file:

   * Configure drivers and network types:

     .. code-block:: ini

        [ml2]
        type_drivers = flat,vlan
        tenant_network_types =
        mechanism_drivers = openvswitch
        extension_drivers = port_security

   * Configure network mappings:

     .. code-block:: ini

        [ml2_type_flat]
        flat_networks = provider

        [ml2_type_vlan]
        network_vlan_ranges = provider

     .. note::

        The ``tenant_network_types`` option contains no value because the
        architecture does not support self-service networks.

     .. note::

        The ``provider`` value in the ``network_vlan_ranges`` option lacks VLAN
        ID ranges to support use of arbitrary VLAN IDs. An example of appending
        a range follows this procedure.

   * Configure the security group driver:

     .. code-block:: ini

        [securitygroup]
        firewall_driver = iptables_hybrid

#. Populate the database.

   .. code-block:: console

      # su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
        --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron

#. Start the following services:

   * Server
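
As referenced in the ``network_vlan_ranges`` note above, if you later need
automatic VLAN allocation for VLAN networks, you can append an ID range to
the physical network name. The range below is purely illustrative; align it
with your physical network configuration.

.. code-block:: ini

   [ml2_type_vlan]
   network_vlan_ranges = provider:1001:2000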

Compute nodes
-------------

#. Install the Networking service OVS layer-2 agent, DHCP agent, and
   metadata agent.

#. Install OVS.

#. In the ``neutron.conf`` file, configure common options:

   .. include:: shared/deploy-config-neutron-common.txt

#. In the ``openvswitch_agent.ini`` file, configure the OVS agent:

   .. code-block:: ini

      [ovs]
      bridge_mappings = provider:br-provider

      [securitygroup]
      firewall_driver = iptables_hybrid

#. In the ``dhcp_agent.ini`` file, configure the DHCP agent:

   .. code-block:: ini

      [DEFAULT]
      interface_driver = openvswitch
      enable_isolated_metadata = True

#. In the ``metadata_agent.ini`` file, configure the metadata agent:

   .. code-block:: ini

      [DEFAULT]
      nova_metadata_ip = controller
      metadata_proxy_shared_secret = METADATA_SECRET

   The value of ``METADATA_SECRET`` must match the value of the same option
   in the ``[neutron]`` section of the ``nova.conf`` file.

#. Start the following services:

   * OVS

#. Create the OVS provider bridge ``br-provider``:

   .. code-block:: console

      $ ovs-vsctl add-br br-provider

#. Add the provider network interface as a port on the OVS provider
   bridge ``br-provider``:

   .. code-block:: console

      $ ovs-vsctl add-port br-provider PROVIDER_INTERFACE

   Replace ``PROVIDER_INTERFACE`` with the name of the underlying interface
   that handles provider networks. For example, ``eth1``.

#. Start the following services:

   * OVS agent
   * DHCP agent
   * Metadata agent

Verify service operation
------------------------

#. Source the administrative project credentials.
#. Verify presence and operation of the agents:

   .. code-block:: console

      $ neutron agent-list
      +--------------------------------------+--------------------+----------+-------------------+-------+----------------+---------------------------+
      | id                                   | agent_type         | host     | availability_zone | alive | admin_state_up | binary                    |
      +--------------------------------------+--------------------+----------+-------------------+-------+----------------+---------------------------+
      | 1236bbcb-e0ba-48a9-80fc-81202ca4fa51 | Metadata agent     | compute2 |                   | :-)   | True           | neutron-metadata-agent    |
      | 457d6898-b373-4bb3-b41f-59345dcfb5c5 | Open vSwitch agent | compute2 |                   | :-)   | True           | neutron-openvswitch-agent |
      | 71f15e84-bc47-4c2a-b9fb-317840b2d753 | DHCP agent         | compute2 | nova              | :-)   | True           | neutron-dhcp-agent        |
      | a6c69690-e7f7-4e56-9831-1282753e5007 | Metadata agent     | compute1 |                   | :-)   | True           | neutron-metadata-agent    |
      | af11f22f-a9f4-404f-9fd8-cd7ad55c0f68 | DHCP agent         | compute1 | nova              | :-)   | True           | neutron-dhcp-agent        |
      | bcfc977b-ec0e-4ba9-be62-9489b4b0e6f1 | Open vSwitch agent | compute1 |                   | :-)   | True           | neutron-openvswitch-agent |
      +--------------------------------------+--------------------+----------+-------------------+-------+----------------+---------------------------+

Create initial networks
-----------------------

.. include:: shared/deploy-provider-initialnetworks.txt

Verify network operation
------------------------

.. include:: shared/deploy-provider-verifynetworkoperation.txt

Network traffic flow
~~~~~~~~~~~~~~~~~~~~

.. include:: shared/deploy-provider-networktrafficflow.txt

North-south
-----------

* The instance resides on compute node 1 and uses provider network 1.
* The instance sends a packet to a host on the Internet.

The following steps involve compute node 1.

#. The instance interface (1) forwards the packet to the security group
   bridge instance port (2) via ``veth`` pair.
#. Security group rules (3) on the security group bridge handle firewalling
   and connection tracking for the packet.
#. The security group bridge OVS port (4) forwards the packet to the OVS
   integration bridge security group port (5) via ``veth`` pair.
#. The OVS integration bridge adds an internal VLAN tag to the packet.
#. The OVS integration bridge ``int-br-provider`` patch port (6) forwards
   the packet to the OVS provider bridge ``phy-br-provider`` patch port (7).
#. The OVS provider bridge swaps the internal VLAN tag with actual VLAN tag
   101.
#. The OVS provider bridge provider network port (8) forwards the packet to
   the physical network interface (9).
#. The physical network interface forwards the packet to the physical
   network infrastructure switch (10).

The following steps involve the physical network infrastructure:

#. The switch removes VLAN tag 101 from the packet and forwards it to the
   router (11).
#. The router routes the packet from the provider network (12) to the
   external network (13) and forwards the packet to the switch (14).
#. The switch forwards the packet to the external network (15).
#. The external network (16) receives the packet.

.. image:: figures/deploy-ovs-provider-flowns1.png
   :alt: Provider networks using Open vSwitch - network traffic flow - north/south

.. note::

   Return traffic follows similar steps in reverse.
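
If you want to confirm this flow on the wire, you can capture the tagged
traffic on the provider physical network interface. The interface name
``eth1`` is an assumption; use the interface from your configuration.

.. code-block:: console

   # tcpdump -e -n -i eth1 vlan 101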

East-west scenario 1: Instances on the same network
---------------------------------------------------

Instances on the same network communicate directly between compute nodes
containing those instances.

* Instance 1 resides on compute node 1 and uses provider network 1.
* Instance 2 resides on compute node 2 and uses provider network 1.
* Instance 1 sends a packet to instance 2.

The following steps involve compute node 1:

#. The instance 1 interface (1) forwards the packet to the security group
   bridge instance port (2) via ``veth`` pair.
#. Security group rules (3) on the security group bridge handle firewalling
   and connection tracking for the packet.
#. The security group bridge OVS port (4) forwards the packet to the OVS
   integration bridge security group port (5) via ``veth`` pair.
#. The OVS integration bridge adds an internal VLAN tag to the packet.
#. The OVS integration bridge ``int-br-provider`` patch port (6) forwards
   the packet to the OVS provider bridge ``phy-br-provider`` patch port (7).
#. The OVS provider bridge swaps the internal VLAN tag with actual VLAN tag
   101.
#. The OVS provider bridge provider network port (8) forwards the packet to
   the physical network interface (9).
#. The physical network interface forwards the packet to the physical
   network infrastructure switch (10).

The following steps involve the physical network infrastructure:

#. The switch forwards the packet from compute node 1 to compute node 2 (11).

The following steps involve compute node 2:

#. The physical network interface (12) forwards the packet to the OVS
   provider bridge provider network port (13).
#. The OVS provider bridge ``phy-br-provider`` patch port (14) forwards the
   packet to the OVS integration bridge ``int-br-provider`` patch port (15).
#. The OVS integration bridge swaps the actual VLAN tag 101 with the internal
   VLAN tag.
#. The OVS integration bridge security group port (16) forwards the packet
   to the security group bridge OVS port (17).
#. Security group rules (18) on the security group bridge handle firewalling
   and connection tracking for the packet.
#. The security group bridge instance port (19) forwards the packet to the
   instance 2 interface (20) via ``veth`` pair.

.. image:: figures/deploy-ovs-provider-flowew1.png
   :alt: Provider networks using Open vSwitch - network traffic flow - east/west scenario 1

.. note::

   Return traffic follows similar steps in reverse.

East-west scenario 2: Instances on different networks
-----------------------------------------------------

Instances communicate via a router on the physical network infrastructure.

* Instance 1 resides on compute node 1 and uses provider network 1.
* Instance 2 resides on compute node 1 and uses provider network 2.
* Instance 1 sends a packet to instance 2.

.. note::

   Both instances reside on the same compute node to illustrate how VLAN
   tagging enables multiple logical layer-2 networks to use the same
   physical layer-2 network.

The following steps involve the compute node:

#. The instance 1 interface (1) forwards the packet to the security group
   bridge instance port (2) via ``veth`` pair.
#. Security group rules (3) on the security group bridge handle firewalling
   and connection tracking for the packet.
#. The security group bridge OVS port (4) forwards the packet to the OVS
   integration bridge security group port (5) via ``veth`` pair.
#. The OVS integration bridge adds an internal VLAN tag to the packet.
#. The OVS integration bridge ``int-br-provider`` patch port (6) forwards
   the packet to the OVS provider bridge ``phy-br-provider`` patch port (7).
#. The OVS provider bridge swaps the internal VLAN tag with actual VLAN tag
   101.
#. The OVS provider bridge provider network port (8) forwards the packet to
   the physical network interface (9).
#. The physical network interface forwards the packet to the physical
   network infrastructure switch (10).

The following steps involve the physical network infrastructure:

#. The switch removes VLAN tag 101 from the packet and forwards it to the
   router (11).
#. The router routes the packet from provider network 1 (12) to provider
   network 2 (13).
#. The router forwards the packet to the switch (14).
#. The switch adds VLAN tag 102 to the packet and forwards it to compute
   node 1 (15).

The following steps involve the compute node:

#. The physical network interface (16) forwards the packet to the OVS
   provider bridge provider network port (17).
#. The OVS provider bridge ``phy-br-provider`` patch port (18) forwards the
   packet to the OVS integration bridge ``int-br-provider`` patch port (19).
#. The OVS integration bridge swaps the actual VLAN tag 102 with the internal
   VLAN tag.
#. The OVS integration bridge security group port (20) removes the internal
   VLAN tag and forwards the packet to the security group bridge OVS port
   (21).
#. Security group rules (22) on the security group bridge handle firewalling
   and connection tracking for the packet.
#. The security group bridge instance port (23) forwards the packet to the
   instance 2 interface (24) via ``veth`` pair.

.. image:: figures/deploy-ovs-provider-flowew2.png
   :alt: Provider networks using Open vSwitch - network traffic flow - east/west scenario 2

.. note::

   Return traffic follows similar steps in reverse.

507
doc/networking-guide/source/deploy-ovs-selfservice.rst
Normal file
@ -0,0 +1,507 @@

.. _deploy-ovs-selfservice:

===================================
Open vSwitch: Self-service networks
===================================

This architecture example augments :ref:`deploy-ovs-provider` to support
a nearly limitless quantity of entirely virtual networks. Although the
Networking service supports VLAN self-service networks, this example
focuses on VXLAN self-service networks. For more information on
self-service networks, see :ref:`intro-os-networking-selfservice`.

Prerequisites
~~~~~~~~~~~~~

Add one network node with the following components:

* Three network interfaces: management, provider, and overlay.
* OpenStack Networking Open vSwitch (OVS) layer-2 agent, layer-3 agent, and
  any dependencies including OVS.

Modify the compute nodes with the following components:

* Add one network interface: overlay.

.. note::

   You can keep the DHCP and metadata agents on each compute node or
   move them to the network node.

Architecture
~~~~~~~~~~~~

.. image:: figures/deploy-ovs-selfservice-overview.png
   :alt: Self-service networks using OVS - overview

The following figure shows components and connectivity for one self-service
network and one untagged (flat) provider network. In this particular case, the
instance resides on the same compute node as the DHCP agent for the network.
If the DHCP agent resides on another compute node, the latter only contains
a DHCP namespace with a port on the OVS integration bridge.

.. image:: figures/deploy-ovs-selfservice-compconn1.png
   :alt: Self-service networks using OVS - components and connectivity - one network

Example configuration
~~~~~~~~~~~~~~~~~~~~~

Use the following example configuration as a template to add support for
self-service networks to an existing operational environment that supports
provider networks.

Controller node
---------------

#. In the ``neutron.conf`` file:

   * Enable routing and allow overlapping IP address ranges.

     .. code-block:: ini

        [DEFAULT]
        service_plugins = router
        allow_overlapping_ips = True

#. In the ``ml2_conf.ini`` file:

   * Add ``vxlan`` to type drivers and project network types.

     .. code-block:: ini

        [ml2]
        type_drivers = flat,vlan,vxlan
        tenant_network_types = vxlan

   * Enable the layer-2 population mechanism driver.

     .. code-block:: ini

        [ml2]
        mechanism_drivers = openvswitch,l2population

   * Configure the VXLAN network ID (VNI) range.

     .. code-block:: ini

        [ml2_type_vxlan]
        vni_ranges = VNI_START:VNI_END

     Replace ``VNI_START`` and ``VNI_END`` with appropriate numerical
     values.

#. Restart the following services:

   * Server
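
For example, a deployment that reserves the first thousand VNIs for
self-service networks might use the following values; the range is
illustrative only.

.. code-block:: ini

   [ml2_type_vxlan]
   vni_ranges = 1:1000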

Network node
------------

#. Install the Networking service OVS layer-2 agent and layer-3 agent.

#. Install OVS.

#. In the ``neutron.conf`` file, configure common options:

   .. include:: shared/deploy-config-neutron-common.txt

#. Start the following services:

   * OVS

#. Create the OVS provider bridge ``br-provider``:

   .. code-block:: console

      $ ovs-vsctl add-br br-provider

#. In the ``openvswitch_agent.ini`` file, configure the layer-2 agent.

   .. code-block:: ini

      [ovs]
      bridge_mappings = provider:br-provider
      local_ip = OVERLAY_INTERFACE_IP_ADDRESS

      [agent]
      tunnel_types = vxlan
      l2_population = true

      [securitygroup]
      firewall_driver = iptables_hybrid

   Replace ``OVERLAY_INTERFACE_IP_ADDRESS`` with the IP address of the
   interface that handles VXLAN overlays for self-service networks.

#. In the ``l3_agent.ini`` file, configure the layer-3 agent.

   .. code-block:: ini

      [DEFAULT]
      interface_driver = openvswitch
      external_network_bridge =

   .. note::

      The ``external_network_bridge`` option intentionally contains
      no value.

#. Start the following services:

   * Open vSwitch agent
   * Layer-3 agent

Compute nodes
-------------

#. In the ``openvswitch_agent.ini`` file, enable VXLAN support including
   layer-2 population.

   .. code-block:: ini

      [ovs]
      local_ip = OVERLAY_INTERFACE_IP_ADDRESS

      [agent]
      tunnel_types = vxlan
      l2_population = true

   Replace ``OVERLAY_INTERFACE_IP_ADDRESS`` with the IP address of the
   interface that handles VXLAN overlays for self-service networks.

#. Restart the following services:

   * Open vSwitch agent
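
After restarting the agent, you can confirm that VXLAN tunnel ports appear
on the tunnel bridge; a diagnostic sketch:

.. code-block:: console

   # ovs-vsctl show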

Verify service operation
------------------------

#. Source the administrative project credentials.
#. Verify presence and operation of the agents.

   .. code-block:: console

      $ neutron agent-list
      +--------------------------------------+--------------------+----------+-------------------+-------+----------------+---------------------------+
      | id                                   | agent_type         | host     | availability_zone | alive | admin_state_up | binary                    |
      +--------------------------------------+--------------------+----------+-------------------+-------+----------------+---------------------------+
      | 1236bbcb-e0ba-48a9-80fc-81202ca4fa51 | Metadata agent     | compute2 |                   | :-)   | True           | neutron-metadata-agent    |
      | 457d6898-b373-4bb3-b41f-59345dcfb5c5 | Open vSwitch agent | compute2 |                   | :-)   | True           | neutron-openvswitch-agent |
      | 71f15e84-bc47-4c2a-b9fb-317840b2d753 | DHCP agent         | compute2 | nova              | :-)   | True           | neutron-dhcp-agent        |
      | 8805b962-de95-4e40-bdc2-7a0add7521e8 | L3 agent           | network1 | nova              | :-)   | True           | neutron-l3-agent          |
      | a33cac5a-0266-48f6-9cac-4cef4f8b0358 | Open vSwitch agent | network1 |                   | :-)   | True           | neutron-openvswitch-agent |
      | a6c69690-e7f7-4e56-9831-1282753e5007 | Metadata agent     | compute1 |                   | :-)   | True           | neutron-metadata-agent    |
      | af11f22f-a9f4-404f-9fd8-cd7ad55c0f68 | DHCP agent         | compute1 | nova              | :-)   | True           | neutron-dhcp-agent        |
      | bcfc977b-ec0e-4ba9-be62-9489b4b0e6f1 | Open vSwitch agent | compute1 |                   | :-)   | True           | neutron-openvswitch-agent |
      +--------------------------------------+--------------------+----------+-------------------+-------+----------------+---------------------------+

Create initial networks
-----------------------

.. include:: shared/deploy-selfservice-initialnetworks.txt

Verify network operation
------------------------

.. include:: shared/deploy-selfservice-verifynetworkoperation.txt

.. _deploy-ovs-selfservice-networktrafficflow:

Network traffic flow
~~~~~~~~~~~~~~~~~~~~

.. include:: shared/deploy-selfservice-networktrafficflow.txt

.. _deploy-ovs-selfservice-networktrafficflow-ns1:

North-south scenario 1: Instance with a fixed IP address
--------------------------------------------------------

For instances with a fixed IPv4 address, the network node performs SNAT
on north-south traffic passing from self-service to external networks
such as the Internet. For instances with a fixed IPv6 address, the network
node performs conventional routing of traffic between self-service and
external networks.

* The instance resides on compute node 1 and uses self-service network 1.
* The instance sends a packet to a host on the Internet.

The following steps involve compute node 1:

#. The instance interface (1) forwards the packet to the security group
   bridge instance port (2) via ``veth`` pair.
#. Security group rules (3) on the security group bridge handle firewalling
   and connection tracking for the packet.
#. The security group bridge OVS port (4) forwards the packet to the OVS
   integration bridge security group port (5) via ``veth`` pair.
#. The OVS integration bridge adds an internal VLAN tag to the packet.
#. The OVS integration bridge exchanges the internal VLAN tag for an internal
   tunnel ID.
#. The OVS integration bridge patch port (6) forwards the packet to the
   OVS tunnel bridge patch port (7).
#. The OVS tunnel bridge (8) wraps the packet using VNI 101.
#. The underlying physical interface (9) for overlay networks forwards
   the packet to the network node via the overlay network (10).

The following steps involve the network node:

#. The underlying physical interface (11) for overlay networks forwards
   the packet to the OVS tunnel bridge (12).
#. The OVS tunnel bridge unwraps the packet and adds an internal tunnel ID
   to it.
#. The OVS tunnel bridge exchanges the internal tunnel ID for an internal
   VLAN tag.
#. The OVS tunnel bridge patch port (13) forwards the packet to the OVS
   integration bridge patch port (14).
#. The OVS integration bridge port for the self-service network (15)
   removes the internal VLAN tag and forwards the packet to the self-service
   network interface (16) in the router namespace.

   * For IPv4, the router performs SNAT on the packet which changes the
     source IP address to the router IP address on the provider network
     and sends it to the gateway IP address on the provider network via
     the gateway interface on the provider network (17).
   * For IPv6, the router sends the packet to the next-hop IP address,
     typically the gateway IP address on the provider network, via the
     provider gateway interface (17).

#. The router forwards the packet to the OVS integration bridge port for
   the provider network (18).
#. The OVS integration bridge adds the internal VLAN tag to the packet.
#. The OVS integration bridge ``int-br-provider`` patch port (19) forwards
   the packet to the OVS provider bridge ``phy-br-provider`` patch port (20).
#. The OVS provider bridge swaps the internal VLAN tag with actual VLAN tag
   101.
#. The OVS provider bridge provider network port (21) forwards the packet to
   the physical network interface (22).
#. The physical network interface forwards the packet to the Internet via
   physical network infrastructure (23).

.. note::

   Return traffic follows similar steps in reverse. However, without a
   floating IPv4 address, hosts on the provider or external networks cannot
   originate connections to instances on the self-service network.

.. image:: figures/deploy-ovs-selfservice-flowns1.png
   :alt: Self-service networks using Open vSwitch - network traffic flow - north/south scenario 1
North-south scenario 2: Instance with a floating IPv4 address
|
||||
-------------------------------------------------------------
|
||||
|
||||
For instances with a floating IPv4 address, the network node performs SNAT
|
||||
on north-south traffic passing from the instance to external networks
|
||||
such as the Internet and DNAT on north-south traffic passing from external
|
||||
networks to the instance. Floating IP addresses and NAT do not apply to IPv6.
|
||||
Thus, the network node routes IPv6 traffic in this scenario.
|
||||
|
||||
* The instance resides on compute node 1 and uses self-service network 1.
|
||||
* A host on the Internet sends a packet to the instance.
|
||||
|
||||
The following steps involve the network node:
|
||||
|
||||
#. The physical network infrastructure (1) forwards the packet to the
   provider physical network interface (2).
#. The provider physical network interface forwards the packet to the
   OVS provider bridge provider network port (3).
#. The OVS provider bridge swaps actual VLAN tag 101 with the internal
   VLAN tag.
#. The OVS provider bridge ``phy-br-provider`` port (4) forwards the
   packet to the OVS integration bridge ``int-br-provider`` port (5).
#. The OVS integration bridge port for the provider network (6) removes
   the internal VLAN tag and forwards the packet to the provider network
   interface (7) in the router namespace.

   * For IPv4, the router performs DNAT on the packet which changes the
     destination IP address to the instance IP address on the self-service
     network and sends it to the gateway IP address on the self-service
     network via the self-service interface (8).
   * For IPv6, the router sends the packet to the next-hop IP address,
     typically the gateway IP address on the self-service network, via
     the self-service interface (8).

#. The router forwards the packet to the OVS integration bridge port for
   the self-service network (9).
#. The OVS integration bridge adds an internal VLAN tag to the packet.
#. The OVS integration bridge exchanges the internal VLAN tag for an internal
   tunnel ID.
#. The OVS integration bridge ``patch-tun`` patch port (10) forwards the
   packet to the OVS tunnel bridge ``patch-int`` patch port (11).
#. The OVS tunnel bridge (12) wraps the packet using VNI 101.
#. The underlying physical interface (13) for overlay networks forwards
   the packet to the compute node via the overlay network (14).

The following steps involve the compute node:

#. The underlying physical interface (15) for overlay networks forwards
   the packet to the OVS tunnel bridge (16).
#. The OVS tunnel bridge unwraps the packet and adds an internal tunnel ID
   to it.
#. The OVS tunnel bridge exchanges the internal tunnel ID for an internal
   VLAN tag.
#. The OVS tunnel bridge ``patch-int`` patch port (17) forwards the packet
   to the OVS integration bridge ``patch-tun`` patch port (18).
#. The OVS integration bridge removes the internal VLAN tag from the packet.
#. The OVS integration bridge security group port (19) forwards the packet
   to the security group bridge OVS port (20) via ``veth`` pair.
#. Security group rules (21) on the security group bridge handle firewalling
   and connection tracking for the packet.
#. The security group bridge instance port (22) forwards the packet to the
   instance interface (23) via ``veth`` pair.

.. image:: figures/deploy-ovs-selfservice-flowns2.png
   :alt: Self-service networks using Open vSwitch - network traffic flow - north/south scenario 2

.. note::

   Egress instance traffic flows similarly to north-south scenario 1, except
   SNAT changes the source IP address of the packet to the floating IPv4
   address rather than the router IP address on the provider network.

East-west scenario 1: Instances on the same network
---------------------------------------------------

Instances with a fixed IPv4/IPv6 address or floating IPv4 address on the
same network communicate directly between compute nodes containing those
instances.

By default, the VXLAN protocol lacks knowledge of target location
and uses multicast to discover it. After discovery, it stores the
location in the local forwarding database. In large deployments,
the discovery process can generate a significant amount of network
traffic that all nodes must process. To eliminate this traffic and
generally increase efficiency, the Networking service includes the
layer-2 population mechanism driver that automatically populates the
forwarding database for VXLAN interfaces. The example configuration
enables this driver. For more information, see :ref:`config-plugin-ml2`.
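
As a rough sketch of what enabling this driver involves, the following
fragments show the relevant Mitaka-era options; file paths and the existing
contents of ``mechanism_drivers`` may differ in your environment.

.. code-block:: ini

   # /etc/neutron/plugins/ml2/ml2_conf.ini on the controller node
   [ml2]
   mechanism_drivers = openvswitch,l2population

   # /etc/neutron/plugins/ml2/openvswitch_agent.ini on network and
   # compute nodes
   [agent]
   l2_population = True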

* Instance 1 resides on compute node 1 and uses self-service network 1.
* Instance 2 resides on compute node 2 and uses self-service network 1.
* Instance 1 sends a packet to instance 2.

The following steps involve compute node 1:

#. The instance 1 interface (1) forwards the packet to the security group
   bridge instance port (2) via ``veth`` pair.
#. Security group rules (3) on the security group bridge handle firewalling
   and connection tracking for the packet.
#. The security group bridge OVS port (4) forwards the packet to the OVS
   integration bridge security group port (5) via ``veth`` pair.
#. The OVS integration bridge adds an internal VLAN tag to the packet.
#. The OVS integration bridge exchanges the internal VLAN tag for an internal
   tunnel ID.
#. The OVS integration bridge patch port (6) forwards the packet to the
   OVS tunnel bridge patch port (7).
#. The OVS tunnel bridge (8) wraps the packet using VNI 101.
#. The underlying physical interface (9) for overlay networks forwards
   the packet to compute node 2 via the overlay network (10).

The following steps involve compute node 2:

#. The underlying physical interface (11) for overlay networks forwards
   the packet to the OVS tunnel bridge (12).
#. The OVS tunnel bridge unwraps the packet and adds an internal tunnel ID
   to it.
#. The OVS tunnel bridge exchanges the internal tunnel ID for an internal
   VLAN tag.
#. The OVS tunnel bridge ``patch-int`` patch port (13) forwards the packet
   to the OVS integration bridge ``patch-tun`` patch port (14).
#. The OVS integration bridge removes the internal VLAN tag from the packet.
#. The OVS integration bridge security group port (15) forwards the packet
   to the security group bridge OVS port (16) via ``veth`` pair.
#. Security group rules (17) on the security group bridge handle firewalling
   and connection tracking for the packet.
#. The security group bridge instance port (18) forwards the packet to the
   instance 2 interface (19) via ``veth`` pair.

.. image:: figures/deploy-ovs-selfservice-flowew1.png
   :alt: Self-service networks using Open vSwitch - network traffic flow - east/west scenario 1

.. note::

   Return traffic follows similar steps in reverse.

East-west scenario 2: Instances on different networks
-----------------------------------------------------

Instances using a fixed IPv4/IPv6 address or floating IPv4 address communicate
via a router on the network node. The self-service networks must reside on the
same router.

* Instance 1 resides on compute node 1 and uses self-service network 1.
* Instance 2 resides on compute node 1 and uses self-service network 2.
* Instance 1 sends a packet to instance 2.

.. note::

   Both instances reside on the same compute node to illustrate how VXLAN
   enables multiple overlays to use the same layer-3 network.
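
For reference, attaching both self-service networks to a single router with
the Mitaka-era ``neutron`` client might look like the following sketch; the
router and subnet names are hypothetical placeholders.

.. code-block:: console

   $ neutron router-create router1
   $ neutron router-interface-add router1 selfservice1-subnet
   $ neutron router-interface-add router1 selfservice2-subnet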

The following steps involve the compute node:

#. The instance interface (1) forwards the packet to the security group
   bridge instance port (2) via ``veth`` pair.
#. Security group rules (3) on the security group bridge handle firewalling
   and connection tracking for the packet.
#. The security group bridge OVS port (4) forwards the packet to the OVS
   integration bridge security group port (5) via ``veth`` pair.
#. The OVS integration bridge adds an internal VLAN tag to the packet.
#. The OVS integration bridge exchanges the internal VLAN tag for an internal
   tunnel ID.
#. The OVS integration bridge ``patch-tun`` patch port (6) forwards the
   packet to the OVS tunnel bridge ``patch-int`` patch port (7).
#. The OVS tunnel bridge (8) wraps the packet using VNI 101.
#. The underlying physical interface (9) for overlay networks forwards
   the packet to the network node via the overlay network (10).

The following steps involve the network node:

#. The underlying physical interface (11) for overlay networks forwards
   the packet to the OVS tunnel bridge (12).
#. The OVS tunnel bridge unwraps the packet and adds an internal tunnel ID
   to it.
#. The OVS tunnel bridge exchanges the internal tunnel ID for an internal
   VLAN tag.
#. The OVS tunnel bridge ``patch-int`` patch port (13) forwards the packet to
   the OVS integration bridge ``patch-tun`` patch port (14).
#. The OVS integration bridge port for self-service network 1 (15)
   removes the internal VLAN tag and forwards the packet to the self-service
   network 1 interface (16) in the router namespace.
#. The router sends the packet to the next-hop IP address, typically the
   gateway IP address on self-service network 2, via the self-service
   network 2 interface (17).
#. The router forwards the packet to the OVS integration bridge port for
   self-service network 2 (18).
#. The OVS integration bridge adds the internal VLAN tag to the packet.
#. The OVS integration bridge exchanges the internal VLAN tag for an internal
   tunnel ID.
#. The OVS integration bridge ``patch-tun`` patch port (19) forwards the
   packet to the OVS tunnel bridge ``patch-int`` patch port (20).
#. The OVS tunnel bridge (21) wraps the packet using VNI 102.
#. The underlying physical interface (22) for overlay networks forwards
   the packet to the compute node via the overlay network (23).

The following steps involve the compute node:

#. The underlying physical interface (24) for overlay networks forwards
   the packet to the OVS tunnel bridge (25).
#. The OVS tunnel bridge unwraps the packet and adds an internal tunnel
   ID to it.
#. The OVS tunnel bridge exchanges the internal tunnel ID for an internal
   VLAN tag.
#. The OVS tunnel bridge ``patch-int`` patch port (26) forwards the packet
   to the OVS integration bridge ``patch-tun`` patch port (27).
#. The OVS integration bridge removes the internal VLAN tag from the packet.
#. The OVS integration bridge security group port (28) forwards the packet
   to the security group bridge OVS port (29) via ``veth`` pair.
#. Security group rules (30) on the security group bridge handle firewalling
   and connection tracking for the packet.
#. The security group bridge instance port (31) forwards the packet to the
   instance interface (32) via ``veth`` pair.

.. note::

   Return traffic follows similar steps in reverse.

.. image:: figures/deploy-ovs-selfservice-flowew2.png
   :alt: Self-service networks using Open vSwitch - network traffic flow - east/west scenario 2

21
doc/networking-guide/source/deploy-ovs.rst
Normal file
@ -0,0 +1,21 @@

.. _deploy-ovs:

=============================
Open vSwitch mechanism driver
=============================

The Open vSwitch (OVS) mechanism driver uses a combination of OVS and Linux
bridges as interconnection devices. However, optionally enabling the OVS
native implementation of security groups removes the dependency on Linux
bridges.

We recommend using Open vSwitch version 2.4 or higher. Optional features
may require a higher minimum version.
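
As a quick sanity check before enabling optional features, you can confirm
the installed Open vSwitch version; this is an illustrative command, not
part of the example configuration.

.. code-block:: console

   $ ovs-vsctl --version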

.. toctree::
   :maxdepth: 2

   deploy-ovs-provider
   deploy-ovs-selfservice
   deploy-ovs-ha-vrrp
   deploy-ovs-ha-dvr

@ -1,17 +1,137 @@

.. _deployment-scenarios:
.. _deploy:

====================
Deployment scenarios
====================
===================
Deployment examples
===================

The following deployment examples provide building blocks of increasing
architectural complexity using the Networking service reference architecture,
which implements the Modular Layer 2 (ML2) plug-in and either the Open
vSwitch (OVS) or Linux bridge mechanism drivers. Both mechanism drivers support
the same basic features such as provider networks, self-service networks,
and routers. However, more complex features often require a particular
mechanism driver. Thus, you should consider the requirements (or goals) of
your cloud before choosing a mechanism driver.

After choosing a :ref:`mechanism driver <deploy-mechanism-drivers>`, the
deployment examples generally include the following building blocks:

#. Provider (public/external) networks using IPv4 and IPv6

#. Self-service (project/private/internal) networks including routers using
   IPv4 and IPv6

#. High-availability features

#. Other features such as BGP dynamic routing

Prerequisites
~~~~~~~~~~~~~

Prerequisites, typically hardware requirements, generally increase with each
building block. Each building block depends on proper deployment and operation
of prior building blocks. For example, the first building block (provider
networks) only requires one controller and two compute nodes, the second
building block (self-service networks) adds a network node, and the
high-availability building blocks typically add a second network node for a
total of five nodes. Each building block could also require additional
infrastructure or changes to existing infrastructure such as networks.

For basic configuration of prerequisites, see the
`Installation Guide <http://docs.openstack.org>`_ for your OpenStack release.

Nodes
-----

The deployment examples refer to one or more of the following nodes:

* Controller: Contains control plane components of OpenStack services
  and their dependencies.

  * Two network interfaces: management and provider.
  * Operational SQL server with databases necessary for each OpenStack
    service.
  * Operational message queue service.
  * Operational OpenStack Identity (keystone) service.
  * Operational OpenStack Image Service (glance).
  * Operational management components of the OpenStack Compute (nova) service
    with appropriate configuration to use the Networking service.
  * OpenStack Networking (neutron) server service and ML2 plug-in.

* Network: Contains the OpenStack Networking service layer-3 (routing)
  component. High availability options may include additional components.

  * Three network interfaces: management, overlay, and provider.
  * OpenStack Networking layer-2 (switching) agent, layer-3 agent, and any
    dependencies.

* Compute: Contains the hypervisor component of the OpenStack Compute service
  and the OpenStack Networking layer-2, DHCP, and metadata components.
  High-availability options may include additional components.

  * Two network interfaces: management and provider.
  * Operational hypervisor components of the OpenStack Compute (nova) service
    with appropriate configuration to use the Networking service.
  * OpenStack Networking layer-2 agent, DHCP agent, metadata agent, and any
    dependencies.

Each building block defines the quantity and types of nodes including the
components on each node.
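
As an illustrative check after deploying a building block, you can list the
Networking service agents on all nodes with the Mitaka-era client and confirm
that the expected layer-2, layer-3, DHCP, and metadata agents are present and
alive:

.. code-block:: console

   $ neutron agent-list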

.. note::

   You can virtualize these nodes for demonstration, training, or
   proof-of-concept purposes. However, you must use physical hosts for
   evaluation of performance or scaling.

Networks and network interfaces
-------------------------------

The deployment examples refer to one or more of the following networks
and network interfaces:

* Management: Handles API requests from clients and control plane traffic for
  OpenStack services including their dependencies.
* Overlay: Handles self-service networks using an overlay protocol such as
  VXLAN or GRE.
* Provider: Connects virtual and physical networks at layer-2. Typically
  uses physical network infrastructure for switching/routing traffic to
  external networks such as the Internet.

.. note::

   For best performance, 10+ Gbps physical network infrastructure should
   support jumbo frames.
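
As a rough end-to-end check that the physical infrastructure passes jumbo
frames, you can send a non-fragmentable ICMP payload sized for a 9000-byte
MTU between two nodes; the overlay IP address below is a hypothetical
placeholder.

.. code-block:: console

   $ ping -c 4 -M do -s 8972 10.0.1.12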

For illustration purposes, the configuration examples typically reference
the following IP address ranges:

* Management network: 10.0.0.0/24
* Overlay (tunnel) network: 10.0.1.0/24
* Provider network 1:

  * IPv4: 203.0.113.0/24
  * IPv6: fd00:203:0:113::/64

* Provider network 2:

  * IPv4: 192.0.2.0/24
  * IPv6: fd00:192:0:2::/64

* Self-service networks:

  * IPv4: 192.168.0.0/16 in /24 segments
  * IPv6: fd00:192:168::/48 in /64 segments

You may change them to work with your particular network infrastructure.

.. _deploy-mechanism-drivers:

Mechanism drivers
~~~~~~~~~~~~~~~~~

.. toctree::
   :maxdepth: 2
   :maxdepth: 1

   scenario-classic-ovs
   scenario-classic-lb
   scenario-classic-mt
   scenario-dvr-ovs
   scenario-l3ha-ovs
   scenario-l3ha-lb
   scenario-provider-ovs
   scenario-provider-lb
   deploy-lb
   deploy-ovs