Import the networking guide content from openstack-manuals
Change-Id: Ibcedc9389dbea4a5810f2cecf890f6ba9887a07b
doc/source/admin/config-address-scopes.rst (new file, 538 lines)
@@ -0,0 +1,538 @@

.. _config-address-scopes:

==============
Address scopes
==============

Address scopes build from subnet pools. While subnet pools provide a mechanism
for controlling the allocation of addresses to subnets, address scopes show
where addresses can be routed between networks, preventing the use of
overlapping addresses in any two subnets. Because all addresses allocated in
the address scope do not overlap, neutron routers do not NAT between your
projects' network and your external network. As long as the addresses within
an address scope match, the Networking service performs simple routing
between networks.

Accessing address scopes
~~~~~~~~~~~~~~~~~~~~~~~~

Anyone with access to the Networking service can create their own address
scopes. However, network administrators can create shared address scopes,
allowing other projects to create networks within that address scope.

Access to addresses in a scope is managed through subnet pools.
Subnet pools can either be created in an address scope, or updated to belong
to an address scope.
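
For example, a minimal sketch of both approaches, assuming an IPv4 address
scope named ``scope-v4`` and an existing subnet pool named ``pool-v4`` (both
names are placeholders, not used elsewhere in this guide):

.. code-block:: console

   $ openstack address scope create --ip-version 4 scope-v4
   $ openstack subnet pool create --address-scope scope-v4 \
     --pool-prefix 192.0.2.0/24 new-pool-v4
   $ openstack subnet pool set --address-scope scope-v4 pool-v4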

With subnet pools, all addresses in use within the address
scope are unique from the point of view of the address scope owner. Therefore,
add more than one subnet pool to an address scope if the
pools have different owners, allowing for delegation of parts of the
address scope. Delegation prevents address overlap across the
whole scope. Otherwise, you receive an error if two pools have the same
address ranges.

Each router interface is associated with an address scope by looking at
subnets connected to the network. When a router connects
to an external network with matching address scopes, network traffic routes
between them without network address translation (NAT).
The router marks all traffic connections originating from each interface
with its corresponding address scope. If traffic leaves an interface in the
wrong scope, the router blocks the traffic.

Backwards compatibility
~~~~~~~~~~~~~~~~~~~~~~~

Networks created before the Mitaka release do not
contain explicitly named address scopes, unless the network contains
subnets from a subnet pool that belongs to a created or updated
address scope. The Networking service preserves backwards compatibility with
pre-Mitaka networks through special address scope properties so that
these networks can perform advanced routing:

#. Unlimited address overlap is allowed.
#. Neutron routers, by default, will NAT traffic from internal networks
   to external networks.
#. Pre-Mitaka address scopes are not visible through the API. You cannot
   list address scopes or show details. Scopes exist
   implicitly as a catch-all for addresses that are not explicitly scoped.

Create shared address scopes as an administrative user
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

This section shows how to set up shared address scopes to
allow simple routing for project networks with the same subnet pools.

.. note:: Irrelevant fields have been trimmed from the output of
          these commands for brevity.

#. Create IPv6 and IPv4 address scopes:

   .. code-block:: console

      $ openstack address scope create --share --ip-version 6 address-scope-ip6

      +------------+--------------------------------------+
      | Field | Value |
      +------------+--------------------------------------+
      | headers | |
      | id | 28424dfc-9abd-481b-afa3-1da97a8fead7 |
      | ip_version | 6 |
      | name | address-scope-ip6 |
      | project_id | 098429d072d34d3596c88b7dbf7e91b6 |
      | shared | True |
      +------------+--------------------------------------+

   .. code-block:: console

      $ openstack address scope create --share --ip-version 4 address-scope-ip4

      +------------+--------------------------------------+
      | Field | Value |
      +------------+--------------------------------------+
      | headers | |
      | id | 3193bd62-11b5-44dc-acf8-53180f21e9f2 |
      | ip_version | 4 |
      | name | address-scope-ip4 |
      | project_id | 098429d072d34d3596c88b7dbf7e91b6 |
      | shared | True |
      +------------+--------------------------------------+

#. Create subnet pools specifying the name (or UUID) of the address
   scope that the subnet pool belongs to. If you have existing
   subnet pools, use the :command:`openstack subnet pool set` command to put
   them in a new address scope:

   .. code-block:: console

      $ openstack subnet pool create --address-scope address-scope-ip6 \
        --share --pool-prefix 2001:db8:a583::/48 --default-prefix-length 64 \
        subnet-pool-ip6
      +-------------------+--------------------------------------+
      | Field | Value |
      +-------------------+--------------------------------------+
      | address_scope_id | 28424dfc-9abd-481b-afa3-1da97a8fead7 |
      | created_at | 2016-12-13T22:53:30Z |
      | default_prefixlen | 64 |
      | default_quota | None |
      | description | |
      | id | a59ff52b-0367-41ff-9781-6318b927dd0e |
      | ip_version | 6 |
      | is_default | False |
      | max_prefixlen | 128 |
      | min_prefixlen | 64 |
      | name | subnet-pool-ip6 |
      | prefixes | 2001:db8:a583::/48 |
      | project_id | 098429d072d34d3596c88b7dbf7e91b6 |
      | revision_number | 1 |
      | shared | True |
      | updated_at | 2016-12-13T22:53:30Z |
      +-------------------+--------------------------------------+

   .. code-block:: console

      $ openstack subnet pool create --address-scope address-scope-ip4 \
        --share --pool-prefix 203.0.113.0/24 --default-prefix-length 26 \
        subnet-pool-ip4
      +-------------------+--------------------------------------+
      | Field | Value |
      +-------------------+--------------------------------------+
      | address_scope_id | 3193bd62-11b5-44dc-acf8-53180f21e9f2 |
      | created_at | 2016-12-13T22:55:09Z |
      | default_prefixlen | 26 |
      | default_quota | None |
      | description | |
      | id | d02af70b-d622-426f-8e60-ed9df2a8301f |
      | ip_version | 4 |
      | is_default | False |
      | max_prefixlen | 32 |
      | min_prefixlen | 8 |
      | name | subnet-pool-ip4 |
      | prefixes | 203.0.113.0/24 |
      | project_id | 098429d072d34d3596c88b7dbf7e91b6 |
      | revision_number | 1 |
      | shared | True |
      | updated_at | 2016-12-13T22:55:09Z |
      +-------------------+--------------------------------------+

#. Make sure that subnets on an external network are created
   from the subnet pools created above:

   .. code-block:: console

      $ openstack subnet show ipv6-public-subnet
      +-------------------+------------------------------------------+
      | Field | Value |
      +-------------------+------------------------------------------+
      | allocation_pools | 2001:db8:a583::2-2001:db8:a583:0:ffff:ff |
      | | ff:ffff:ffff |
      | cidr | 2001:db8:a583::/64 |
      | created_at | 2016-12-10T21:36:04Z |
      | description | |
      | dns_nameservers | |
      | enable_dhcp | False |
      | gateway_ip | 2001:db8:a583::1 |
      | host_routes | |
      | id | b333bf5a-758c-4b3f-97ec-5f12d9bfceb7 |
      | ip_version | 6 |
      | ipv6_address_mode | None |
      | ipv6_ra_mode | None |
      | name | ipv6-public-subnet |
      | network_id | 05a8d31e-330b-4d96-a3fa-884b04abfa4c |
      | project_id | 098429d072d34d3596c88b7dbf7e91b6 |
      | revision_number | 2 |
      | segment_id | None |
      | service_types | |
      | subnetpool_id | a59ff52b-0367-41ff-9781-6318b927dd0e |
      | updated_at | 2016-12-10T21:36:04Z |
      +-------------------+------------------------------------------+

   .. code-block:: console

      $ openstack subnet show public-subnet
      +-------------------+--------------------------------------+
      | Field | Value |
      +-------------------+--------------------------------------+
      | allocation_pools | 203.0.113.2-203.0.113.62 |
      | cidr | 203.0.113.0/26 |
      | created_at | 2016-12-10T21:35:52Z |
      | description | |
      | dns_nameservers | |
      | enable_dhcp | False |
      | gateway_ip | 203.0.113.1 |
      | host_routes | |
      | id | 7fd48240-3acc-4724-bc82-16c62857edec |
      | ip_version | 4 |
      | ipv6_address_mode | None |
      | ipv6_ra_mode | None |
      | name | public-subnet |
      | network_id | 05a8d31e-330b-4d96-a3fa-884b04abfa4c |
      | project_id | 098429d072d34d3596c88b7dbf7e91b6 |
      | revision_number | 2 |
      | segment_id | None |
      | service_types | |
      | subnetpool_id | d02af70b-d622-426f-8e60-ed9df2a8301f |
      | updated_at | 2016-12-10T21:35:52Z |
      +-------------------+--------------------------------------+

Routing with address scopes for non-privileged users
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

This section shows how non-privileged users can use address scopes to
route straight to an external network without NAT.

#. Create a couple of networks to host subnets:

   .. code-block:: console

      $ openstack network create network1
      +---------------------------+--------------------------------------+
      | Field | Value |
      +---------------------------+--------------------------------------+
      | admin_state_up | UP |
      | availability_zone_hints | |
      | availability_zones | |
      | created_at | 2016-12-13T23:21:01Z |
      | description | |
      | headers | |
      | id | 1bcf3fe9-a0cb-4d88-a067-a4d7f8e635f0 |
      | ipv4_address_scope | None |
      | ipv6_address_scope | None |
      | mtu | 1450 |
      | name | network1 |
      | port_security_enabled | True |
      | project_id | 098429d072d34d3596c88b7dbf7e91b6 |
      | provider:network_type | vxlan |
      | provider:physical_network | None |
      | provider:segmentation_id | 94 |
      | revision_number | 3 |
      | router:external | Internal |
      | shared | False |
      | status | ACTIVE |
      | subnets | |
      | tags | [] |
      | updated_at | 2016-12-13T23:21:01Z |
      +---------------------------+--------------------------------------+

   .. code-block:: console

      $ openstack network create network2
      +---------------------------+--------------------------------------+
      | Field | Value |
      +---------------------------+--------------------------------------+
      | admin_state_up | UP |
      | availability_zone_hints | |
      | availability_zones | |
      | created_at | 2016-12-13T23:21:45Z |
      | description | |
      | headers | |
      | id | 6c583603-c097-4141-9c5c-288b0e49c59f |
      | ipv4_address_scope | None |
      | ipv6_address_scope | None |
      | mtu | 1450 |
      | name | network2 |
      | port_security_enabled | True |
      | project_id | 098429d072d34d3596c88b7dbf7e91b6 |
      | provider:network_type | vxlan |
      | provider:physical_network | None |
      | provider:segmentation_id | 81 |
      | revision_number | 3 |
      | router:external | Internal |
      | shared | False |
      | status | ACTIVE |
      | subnets | |
      | tags | [] |
      | updated_at | 2016-12-13T23:21:45Z |
      +---------------------------+--------------------------------------+

#. Create a subnet not associated with a subnet pool or
   an address scope:

   .. code-block:: console

      $ openstack subnet create --network network1 --subnet-range \
        198.51.100.0/26 subnet-ip4-1
      +-------------------+--------------------------------------+
      | Field | Value |
      +-------------------+--------------------------------------+
      | allocation_pools | 198.51.100.2-198.51.100.62 |
      | cidr | 198.51.100.0/26 |
      | created_at | 2016-12-13T23:24:16Z |
      | description | |
      | dns_nameservers | |
      | enable_dhcp | True |
      | gateway_ip | 198.51.100.1 |
      | headers | |
      | host_routes | |
      | id | 66874039-d31b-4a27-85d7-14c89341bbb7 |
      | ip_version | 4 |
      | ipv6_address_mode | None |
      | ipv6_ra_mode | None |
      | name | subnet-ip4-1 |
      | network_id | 1bcf3fe9-a0cb-4d88-a067-a4d7f8e635f0 |
      | project_id | 098429d072d34d3596c88b7dbf7e91b6 |
      | revision_number | 2 |
      | service_types | |
      | subnetpool_id | None |
      | updated_at | 2016-12-13T23:24:16Z |
      +-------------------+--------------------------------------+

   .. code-block:: console

      $ openstack subnet create --network network1 --ipv6-ra-mode slaac \
        --ipv6-address-mode slaac --ip-version 6 --subnet-range \
        2001:db8:80d2:c4d3::/64 subnet-ip6-1
      +-------------------+-----------------------------------------+
      | Field | Value |
      +-------------------+-----------------------------------------+
      | allocation_pools | 2001:db8:80d2:c4d3::2-2001:db8:80d2:c4d |
      | | 3:ffff:ffff:ffff:ffff |
      | cidr | 2001:db8:80d2:c4d3::/64 |
      | created_at | 2016-12-13T23:28:28Z |
      | description | |
      | dns_nameservers | |
      | enable_dhcp | True |
      | gateway_ip | 2001:db8:80d2:c4d3::1 |
      | headers | |
      | host_routes | |
      | id | a7551b23-2271-4a88-9c41-c84b048e0722 |
      | ip_version | 6 |
      | ipv6_address_mode | slaac |
      | ipv6_ra_mode | slaac |
      | name | subnet-ip6-1 |
      | network_id | 1bcf3fe9-a0cb-4d88-a067-a4d7f8e635f0 |
      | project_id | 098429d072d34d3596c88b7dbf7e91b6 |
      | revision_number | 2 |
      | service_types | |
      | subnetpool_id | None |
      | updated_at | 2016-12-13T23:28:28Z |
      +-------------------+-----------------------------------------+

#. Create a subnet using a subnet pool associated with an address scope
   from an external network:

   .. code-block:: console

      $ openstack subnet create --subnet-pool subnet-pool-ip4 \
        --network network2 subnet-ip4-2
      +-------------------+--------------------------------------+
      | Field | Value |
      +-------------------+--------------------------------------+
      | allocation_pools | 203.0.113.2-203.0.113.62 |
      | cidr | 203.0.113.0/26 |
      | created_at | 2016-12-13T23:32:12Z |
      | description | |
      | dns_nameservers | |
      | enable_dhcp | True |
      | gateway_ip | 203.0.113.1 |
      | headers | |
      | host_routes | |
      | id | 12be8e8f-5871-4091-9e9e-4e0651b9677e |
      | ip_version | 4 |
      | ipv6_address_mode | None |
      | ipv6_ra_mode | None |
      | name | subnet-ip4-2 |
      | network_id | 6c583603-c097-4141-9c5c-288b0e49c59f |
      | project_id | 098429d072d34d3596c88b7dbf7e91b6 |
      | revision_number | 2 |
      | service_types | |
      | subnetpool_id | d02af70b-d622-426f-8e60-ed9df2a8301f |
      | updated_at | 2016-12-13T23:32:12Z |
      +-------------------+--------------------------------------+

   .. code-block:: console

      $ openstack subnet create --ip-version 6 --ipv6-ra-mode slaac \
        --ipv6-address-mode slaac --subnet-pool subnet-pool-ip6 \
        --network network2 subnet-ip6-2
      +-------------------+--------------------------------------+
      | Field | Value |
      +-------------------+--------------------------------------+
      | allocation_pools | 2001:db8:a583::2-2001:db8:a583:0:fff |
      | | f:ffff:ffff:ffff |
      | cidr | 2001:db8:a583::/64 |
      | created_at | 2016-12-13T23:31:17Z |
      | description | |
      | dns_nameservers | |
      | enable_dhcp | True |
      | gateway_ip | 2001:db8:a583::1 |
      | headers | |
      | host_routes | |
      | id | b599c2be-e3cd-449c-ba39-3cfcc744c4be |
      | ip_version | 6 |
      | ipv6_address_mode | slaac |
      | ipv6_ra_mode | slaac |
      | name | subnet-ip6-2 |
      | network_id | 6c583603-c097-4141-9c5c-288b0e49c59f |
      | project_id | 098429d072d34d3596c88b7dbf7e91b6 |
      | revision_number | 2 |
      | service_types | |
      | subnetpool_id | a59ff52b-0367-41ff-9781-6318b927dd0e |
      | updated_at | 2016-12-13T23:31:17Z |
      +-------------------+--------------------------------------+

   By creating subnets from scoped subnet pools, the network is
   associated with the address scope.

   .. code-block:: console

      $ openstack network show network2
      +---------------------------+------------------------------+
      | Field | Value |
      +---------------------------+------------------------------+
      | admin_state_up | UP |
      | availability_zone_hints | |
      | availability_zones | nova |
      | created_at | 2016-12-13T23:21:45Z |
      | description | |
      | id | 6c583603-c097-4141-9c5c- |
      | | 288b0e49c59f |
      | ipv4_address_scope | 3193bd62-11b5-44dc- |
      | | acf8-53180f21e9f2 |
      | ipv6_address_scope | 28424dfc-9abd-481b- |
      | | afa3-1da97a8fead7 |
      | mtu | 1450 |
      | name | network2 |
      | port_security_enabled | True |
      | project_id | 098429d072d34d3596c88b7dbf7e |
      | | 91b6 |
      | provider:network_type | vxlan |
      | provider:physical_network | None |
      | provider:segmentation_id | 81 |
      | revision_number | 10 |
      | router:external | Internal |
      | shared | False |
      | status | ACTIVE |
      | subnets | 12be8e8f-5871-4091-9e9e- |
      | | 4e0651b9677e, b599c2be-e3cd- |
      | | 449c-ba39-3cfcc744c4be |
      | tags | [] |
      | updated_at | 2016-12-13T23:32:12Z |
      +---------------------------+------------------------------+

#. Connect a router to each of the project subnets that have been created, for
   example, using a router called ``router1``:

   .. code-block:: console

      $ openstack router add subnet router1 subnet-ip4-1
      $ openstack router add subnet router1 subnet-ip4-2
      $ openstack router add subnet router1 subnet-ip6-1
      $ openstack router add subnet router1 subnet-ip6-2
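
   For the comparison with the external network's address scopes to apply,
   the router also needs a gateway on that network. A minimal sketch,
   assuming the external network from the previous section is named
   ``public``:

   .. code-block:: console

      $ openstack router set --external-gateway public router1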

Checking connectivity
---------------------

This example shows how to check the connectivity between networks
with address scopes.

#. Launch two instances, ``instance1`` on ``network1`` and
   ``instance2`` on ``network2``. Associate a floating IP address to both
   instances.

#. Adjust security groups to allow pings and SSH (both IPv4 and IPv6):

   .. code-block:: console

      $ openstack server list
      +--------------+-----------+---------------------------------------------------------------------------+------------+
      | ID | Name | Networks | Image Name |
      +--------------+-----------+---------------------------------------------------------------------------+------------+
      | 97e49c8e-... | instance1 | network1=2001:db8:80d2:c4d3:f816:3eff:fe52:b69f, 198.51.100.3, 203.0.113.3 | cirros |
      | ceba9638-... | instance2 | network2=203.0.113.3, 2001:db8:a583:0:f816:3eff:fe42:1eeb, 203.0.113.4 | centos |
      +--------------+-----------+---------------------------------------------------------------------------+------------+
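
   For example, a minimal sketch of the security group changes, assuming both
   instances use the project's ``default`` security group:

   .. code-block:: console

      $ openstack security group rule create --protocol icmp default
      $ openstack security group rule create --protocol tcp --dst-port 22 default
      $ openstack security group rule create --ethertype IPv6 \
        --protocol ipv6-icmp default
      $ openstack security group rule create --ethertype IPv6 \
        --protocol tcp --dst-port 22 default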

   Regardless of address scopes, the floating IPs can be pinged from the
   external network:

   .. code-block:: console

      $ ping -c 1 203.0.113.3
      1 packets transmitted, 1 received, 0% packet loss, time 0ms
      $ ping -c 1 203.0.113.4
      1 packets transmitted, 1 received, 0% packet loss, time 0ms

   You can now ping ``instance2`` directly because ``instance2`` shares the
   same address scope as the external network:

   .. note:: BGP routing can be used to automatically set up a static
             route for your instances.

   .. code-block:: console

      # ip route add 203.0.113.0/26 via 203.0.113.2
      $ ping -c 1 203.0.113.3
      1 packets transmitted, 1 received, 0% packet loss, time 0ms

   .. code-block:: console

      # ip route add 2001:db8:a583::/64 via 2001:db8::1
      $ ping6 -c 1 2001:db8:a583:0:f816:3eff:fe42:1eeb
      1 packets transmitted, 1 received, 0% packet loss, time 0ms

   You cannot ping ``instance1`` directly because the address scopes do not
   match:

   .. code-block:: console

      # ip route add 198.51.100.0/26 via 203.0.113.2
      $ ping -c 1 198.51.100.3
      1 packets transmitted, 0 received, 100% packet loss, time 0ms

   .. code-block:: console

      # ip route add 2001:db8:80d2:c4d3::/64 via 2001:db8::1
      $ ping6 -c 1 2001:db8:80d2:c4d3:f816:3eff:fe52:b69f
      1 packets transmitted, 0 received, 100% packet loss, time 0ms

   If the address scopes match between
   networks then pings and other traffic route directly through. If the
   scopes do not match between networks, the router either drops the
   traffic or applies NAT to cross scope boundaries.
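
   A quick way to see why the scopes differ is to compare the address scope
   fields of the two networks, for example:

   .. code-block:: console

      $ openstack network show network1 -c ipv4_address_scope -c ipv6_address_scope
      $ openstack network show network2 -c ipv4_address_scope -c ipv6_address_scope

   ``network1`` reports ``None`` for both fields, while ``network2`` reports
   the shared scopes created earlier, which is why only ``instance2`` is
   directly reachable from the external network.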

doc/source/admin/config-auto-allocation.rst (new file, 243 lines)
@@ -0,0 +1,243 @@

.. _config-auto-allocation:

==========================================
Automatic allocation of network topologies
==========================================

The auto-allocation feature introduced in Mitaka simplifies the procedure of
setting up external connectivity for end-users, and is also known as **Get
Me A Network**.

Previously, a user had to configure a range of networking resources to boot
a server and get access to the Internet. For example, the following steps
were required:

* Create a network
* Create a subnet
* Create a router
* Uplink the router on an external network
* Downlink the router on the previously created subnet

These steps need to be performed on each logical segment that a VM needs to
be connected to, and may require networking knowledge the user might not
have.

This feature is designed to automate the basic networking provisioning for
projects. The steps to provision a basic network are run during instance
boot, making the networking setup hands-free.

To make this possible, provide a default external network and default
subnetpools (one for IPv4, or one for IPv6, or one of each) so that the
Networking service can choose what to do in lieu of input. Once these are in
place, users can boot their VMs without specifying any networking details.
The Compute service will then use this feature automatically to wire user
VMs.

Enabling the deployment for auto-allocation
-------------------------------------------

To use this feature, the neutron service must have the following extensions
enabled:

* ``auto-allocated-topology``
* ``subnet_allocation``
* ``external-net``
* ``router``

Before the end-user can use the auto-allocation feature, the operator must
create the resources that will be used for the auto-allocated network
topology creation. To perform this task, proceed with the following steps:

#. Set up a default external network

   Setting up an external network is described in the
   `OpenStack Administrator Guide
   <https://docs.openstack.org/admin-guide/networking-adv-features.html>`_.
   Assuming the external network to be used for the auto-allocation feature
   is named ``public``, make it the ``default`` external network
   with the following command:

   .. code-block:: console

      $ openstack network set public --default

   .. note::

      The ``--default`` (and ``--no-default``) flag is only effective
      with external networks and has no effect on regular (or internal)
      networks.

#. Create default subnetpools

   The auto-allocation feature requires at least one default
   subnetpool. One for IPv4, or one for IPv6, or one of each.

   .. code-block:: console

      $ openstack subnet pool create --share --default \
        --pool-prefix 192.0.2.0/24 --default-prefix-length 26 \
        shared-default

      +-------------------+--------------------------------------+
      | Field | Value |
      +-------------------+--------------------------------------+
      | address_scope_id | None |
      | created_at | 2017-01-12T15:10:34Z |
      | default_prefixlen | 26 |
      | default_quota | None |
      | description | |
      | headers | |
      | id | b41b7b9c-de57-4c19-b1c5-731985bceb7f |
      | ip_version | 4 |
      | is_default | True |
      | max_prefixlen | 32 |
      | min_prefixlen | 8 |
      | name | shared-default |
      | prefixes | 192.0.2.0/24 |
      | project_id | 86acdbd1d72745fd8e8320edd7543400 |
      | revision_number | 1 |
      | shared | True |
      | updated_at | 2017-01-12T15:10:34Z |
      +-------------------+--------------------------------------+

      $ openstack subnet pool create --share --default \
        --pool-prefix 2001:db8:8000::/48 --default-prefix-length 64 \
        default-v6

      +-------------------+--------------------------------------+
      | Field | Value |
      +-------------------+--------------------------------------+
      | address_scope_id | None |
      | created_at | 2017-01-12T15:14:35Z |
      | default_prefixlen | 64 |
      | default_quota | None |
      | description | |
      | headers | |
      | id | 6f387016-17f0-4564-96ad-e34775b6ea14 |
      | ip_version | 6 |
      | is_default | True |
      | max_prefixlen | 128 |
      | min_prefixlen | 64 |
      | name | default-v6 |
      | prefixes | 2001:db8:8000::/48 |
      | project_id | 86acdbd1d72745fd8e8320edd7543400 |
      | revision_number | 1 |
      | shared | True |
      | updated_at | 2017-01-12T15:14:35Z |
      +-------------------+--------------------------------------+

Get Me A Network
----------------

In a deployment where the operator has set up the resources as described above,
users can get their auto-allocated network topology as follows:

.. code-block:: console

   $ openstack network auto allocated topology create --or-show
   +------------+--------------------------------------+
   | Field | Value |
   +------------+--------------------------------------+
   | id | a380c780-d6cd-4510-a4c0-1a6ec9b85a29 |
   | name | None |
   | project_id | cfd1889ac7d64ad891d4f20aef9f8d7c |
   +------------+--------------------------------------+

.. note::

   When the ``--or-show`` option is used, the command returns the topology
   information if it already exists.

Operators (and users with admin role) can get the auto-allocated topology for a
project by specifying the project ID:

.. code-block:: console

   $ openstack network auto allocated topology create --project \
     cfd1889ac7d64ad891d4f20aef9f8d7c --or-show
   +------------+--------------------------------------+
   | Field | Value |
   +------------+--------------------------------------+
   | id | a380c780-d6cd-4510-a4c0-1a6ec9b85a29 |
   | name | None |
   | project_id | cfd1889ac7d64ad891d4f20aef9f8d7c |
   +------------+--------------------------------------+

The ID returned by this command is a network which can be used for booting
a VM.

.. code-block:: console

   $ openstack server create --flavor m1.small --image \
     cirros-0.3.5-x86_64-uec --nic \
     net-id=8b835bfb-cae2-4acc-b53f-c16bb5f9a7d0 vm1

The auto-allocated topology for a user never changes. In practice, when a user
boots a server omitting the ``--nic`` option, and there is more than one
network available, the Compute service will invoke the API behind
``auto allocated topology create``, fetch the network UUID, and pass it on
during the boot process.
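
The same flow can also be scripted by hand. A minimal sketch, assuming the
cirros image and ``m1.small`` flavor used above:

.. code-block:: console

   $ NET_ID=$(openstack network auto allocated topology create --or-show \
     -f value -c id)
   $ openstack server create --flavor m1.small --image \
     cirros-0.3.5-x86_64-uec --nic net-id=$NET_ID vm2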

Validating the requirements for auto-allocation
-----------------------------------------------

To validate that the required resources are correctly set up for
auto-allocation, without actually provisioning anything, use
the ``--check-resources`` option:

.. code-block:: console

   $ openstack network auto allocated topology create --check-resources
   Deployment error: No default router:external network.

   $ openstack network set public --default

   $ openstack network auto allocated topology create --check-resources
   Deployment error: No default subnetpools defined.

   $ openstack subnet pool set shared-default --default

   $ openstack network auto allocated topology create --check-resources
   +---------+-------+
   | Field | Value |
   +---------+-------+
   | dry-run | pass |
   +---------+-------+

The validation option behaves identically for all users. However, it
is considered primarily an admin or service utility since it is the
operator who must set up the requirements.

Project resources created by auto-allocation
--------------------------------------------

The auto-allocation feature creates one network topology in every project
where it is used. The auto-allocated network topology for a project contains
the following resources:

+--------------------+------------------------------+
|Resource            |Name                          |
+====================+==============================+
|network             |``auto_allocated_network``    |
+--------------------+------------------------------+
|subnet (IPv4)       |``auto_allocated_subnet_v4``  |
+--------------------+------------------------------+
|subnet (IPv6)       |``auto_allocated_subnet_v6``  |
+--------------------+------------------------------+
|router              |``auto_allocated_router``     |
+--------------------+------------------------------+

Compatibility notes
-------------------

Nova uses the ``auto allocated topology`` feature with API micro
version 2.37 or later. This is because, unlike the neutron feature
which was implemented in the Mitaka release, the integration for
nova was completed during the Newton release cycle. Note that
the CLI option ``--nic`` can be omitted regardless of the microversion
used as long as there is no more than one network available to the
project; otherwise, nova fails with a 400 error because it
does not know which network to use. Furthermore, nova does not start
using the feature, regardless of whether or not a user requests
micro version 2.37 or later, unless all of the ``nova-compute``
services are running Newton-level code.
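
For example, a minimal sketch of a boot that relies on auto-allocation,
assuming both the client and the deployment support compute API
microversion 2.37 or later:

.. code-block:: console

   $ openstack --os-compute-api-version 2.37 server create \
     --flavor m1.small --image cirros-0.3.5-x86_64-uec --nic auto vm3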

doc/source/admin/config-az.rst (new file, 378 lines)
@@ -0,0 +1,378 @@

.. _config-az:

==================
Availability zones
==================

An availability zone groups network nodes that run services like DHCP, L3, FW,
and others. It is defined as an agent's attribute on the network node. This
allows users to associate an availability zone with their resources so that the
resources get high availability.


Use case
--------

An availability zone is used to make network resources highly available. The
operators group the nodes that are attached to different power sources under
separate availability zones and configure scheduling for resources with high
availability so that they are scheduled on different availability zones.


Required extensions
-------------------

The core plug-in must support the ``availability_zone`` extension. The core
plug-in also must support the ``network_availability_zone`` extension to
schedule a network according to availability zones. The ``Ml2Plugin`` supports
it. The router service plug-in must support the ``router_availability_zone``
extension to schedule a router according to the availability zones. The
``L3RouterPlugin`` supports it.

.. code-block:: console

   $ openstack extension list --network -c Alias -c Name
   +---------------------------+---------------------------+
   | Name | Alias |
   +---------------------------+---------------------------+
   ...
   | Network Availability Zone | network_availability_zone |
   ...
   | Availability Zone | availability_zone |
   ...
   | Router Availability Zone | router_availability_zone |
   ...
   +---------------------------+---------------------------+


Availability zone of agents
~~~~~~~~~~~~~~~~~~~~~~~~~~~

The ``availability_zone`` attribute can be defined in ``dhcp-agent`` and
``l3-agent``. To define an availability zone for each agent, set the
value in the ``[AGENT]`` section of ``/etc/neutron/dhcp_agent.ini`` or
``/etc/neutron/l3_agent.ini``:

.. code-block:: ini

   [AGENT]
   availability_zone = zone-1

To confirm the agent's availability zone:

.. code-block:: console

   $ openstack network agent show 116cc128-4398-49af-a4ed-3e95494cd5fc
   +---------------------+---------------------------------------------------+
   | Field | Value |
   +---------------------+---------------------------------------------------+
   | admin_state_up | UP |
   | agent_type | DHCP agent |
   | alive | True |
   | availability_zone | zone-1 |
   | binary | neutron-dhcp-agent |
   | configurations | dhcp_driver='neutron.agent.linux.dhcp.Dnsmasq', |
   | | dhcp_lease_duration='86400', |
   | | log_agent_heartbeats='False', networks='2', |
   | | notifies_port_ready='True', ports='6', subnets='4 |
   | created_at | 2016-12-14 00:25:54 |
   | description | None |
   | heartbeat_timestamp | 2016-12-14 06:20:24 |
   | host | ankur-desktop |
   | id | 116cc128-4398-49af-a4ed-3e95494cd5fc |
   | started_at | 2016-12-14 00:25:54 |
   | topic | dhcp_agent |
   +---------------------+---------------------------------------------------+

   $ openstack network agent show 9632309a-2aa4-4304-8603-c4de02c4a55f
   +---------------------+-------------------------------------------------+
   | Field | Value |
   +---------------------+-------------------------------------------------+
   | admin_state_up | UP |
   | agent_type | L3 agent |
   | alive | True |
   | availability_zone | zone-1 |
   | binary | neutron-l3-agent |
   | configurations | agent_mode='legacy', ex_gw_ports='2', |
   | | external_network_bridge='', floating_ips='0', |
   | | gateway_external_network_id='', |
   | | handle_internal_only_routers='True', |
   | | interface_driver='openvswitch', interfaces='4', |
   | | log_agent_heartbeats='False', routers='2' |
   | created_at | 2016-12-14 00:25:58 |
   | description | None |
   | heartbeat_timestamp | 2016-12-14 06:20:28 |
   | host | ankur-desktop |
   | id | 9632309a-2aa4-4304-8603-c4de02c4a55f |
   | started_at | 2016-12-14 00:25:58 |
   | topic | l3_agent |
   +---------------------+-------------------------------------------------+
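
To check the zone of every agent at once, the agent list output can be
restricted to the relevant columns, for example (column names may vary
slightly between client versions):

.. code-block:: console

   $ openstack network agent list -c ID -c "Agent Type" -c Host \
     -c "Availability Zone" -c Alive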


Availability zone related attributes
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The following attributes are added into network and router:

.. list-table::
   :header-rows: 1
   :widths: 25 10 10 10 50

   * - Attribute name
     - Access
     - Required
     - Input type
     - Description

   * - availability_zone_hints
     - RW(POST only)
     - No
     - list of string
     - availability zone candidates for the resource

   * - availability_zones
     - RO
     - N/A
     - list of string
     - availability zones for the resource

Use ``availability_zone_hints`` to specify the zone in which the resource is
hosted:

.. code-block:: console

   $ openstack network create --availability-zone-hint zone-1 \
     --availability-zone-hint zone-2 net1
   +---------------------------+--------------------------------------+
   | Field | Value |
   +---------------------------+--------------------------------------+
   | admin_state_up | UP |
   | availability_zone_hints | zone-1 |
   | | zone-2 |
   | availability_zones | |
   | created_at | 2016-12-14T06:23:36Z |
   | description | |
   | headers | |
   | id | ad88e059-e7fa-4cf7-8857-6731a2a3a554 |
   | ipv4_address_scope | None |
   | ipv6_address_scope | None |
   | mtu | 1450 |
   | name | net1 |
   | port_security_enabled | True |
   | project_id | cfd1889ac7d64ad891d4f20aef9f8d7c |
   | provider:network_type | vxlan |
   | provider:physical_network | None |
   | provider:segmentation_id | 77 |
   | revision_number | 3 |
   | router:external | Internal |
   | shared | False |
   | status | ACTIVE |
   | subnets | |
   | tags | [] |
   | updated_at | 2016-12-14T06:23:37Z |
   +---------------------------+--------------------------------------+


.. code-block:: console

   $ openstack router create --ha --availability-zone-hint zone-1 \
     --availability-zone-hint zone-2 router1
   +-------------------------+--------------------------------------+
   | Field | Value |
   +-------------------------+--------------------------------------+
   | admin_state_up | UP |
   | availability_zone_hints | zone-1 |
   | | zone-2 |
   | availability_zones | |
   | created_at | 2016-12-14T06:25:40Z |
   | description | |
   | distributed | False |
   | external_gateway_info | null |
   | flavor_id | None |
   | ha | False |
   | headers | |
   | id | ced10262-6cfe-47c1-8847-cd64276a868c |
   | name | router1 |
   | project_id | cfd1889ac7d64ad891d4f20aef9f8d7c |
   | revision_number | 3 |
   | routes | |
   | status | ACTIVE |
   | updated_at | 2016-12-14T06:25:40Z |
   +-------------------------+--------------------------------------+


The availability zone is selected from ``default_availability_zones`` in
``/etc/neutron/neutron.conf`` if a resource is created without
``availability_zone_hints``:

.. code-block:: ini

   default_availability_zones = zone-1,zone-2

To confirm the availability zone defined by the system:

.. code-block:: console

   $ openstack availability zone list
   +-----------+-------------+
   | Zone Name | Zone Status |
   +-----------+-------------+
   | zone-1 | available |
   | zone-2 | available |
   | zone-1 | available |
   | zone-2 | available |
   +-----------+-------------+

Look at the ``availability_zones`` attribute of each resource to confirm in
which zone the resource is hosted:

.. code-block:: console

   $ openstack network show net1
   +---------------------------+--------------------------------------+
   | Field | Value |
   +---------------------------+--------------------------------------+
   | admin_state_up | UP |
   | availability_zone_hints | zone-1 |
   | | zone-2 |
   | availability_zones | zone-1 |
   | | zone-2 |
   | created_at | 2016-12-14T06:23:36Z |
   | description | |
   | headers | |
   | id | ad88e059-e7fa-4cf7-8857-6731a2a3a554 |
   | ipv4_address_scope | None |
   | ipv6_address_scope | None |
   | mtu | 1450 |
   | name | net1 |
   | port_security_enabled | True |
   | project_id | cfd1889ac7d64ad891d4f20aef9f8d7c |
   | provider:network_type | vxlan |
   | provider:physical_network | None |
   | provider:segmentation_id | 77 |
   | revision_number | 3 |
   | router:external | Internal |
   | shared | False |
   | status | ACTIVE |
   | subnets | |
   | tags | [] |
   | updated_at | 2016-12-14T06:23:37Z |
   +---------------------------+--------------------------------------+

.. code-block:: console

   $ openstack router show router1
   +-------------------------+--------------------------------------+
   | Field | Value |
   +-------------------------+--------------------------------------+
   | admin_state_up | UP |
   | availability_zone_hints | zone-1 |
   | | zone-2 |
   | availability_zones | zone-1 |
   | | zone-2 |
   | created_at | 2016-12-14T06:25:40Z |
   | description | |
   | distributed | False |
   | external_gateway_info | null |
   | flavor_id | None |
   | ha | False |
   | headers | |
   | id | ced10262-6cfe-47c1-8847-cd64276a868c |
   | name | router1 |
   | project_id | cfd1889ac7d64ad891d4f20aef9f8d7c |
   | revision_number | 3 |
   | routes | |
   | status | ACTIVE |
   | updated_at | 2016-12-14T06:25:40Z |
   +-------------------------+--------------------------------------+

.. note::

   The ``availability_zones`` attribute does not have a value until the
   resource is scheduled. Once the Networking service schedules the resource
   to zones according to ``availability_zone_hints``, ``availability_zones``
   shows in which zone the resource is actually hosted. The
   ``availability_zones`` may not match ``availability_zone_hints``. For
   example, even if you specify a zone with ``availability_zone_hints``, all
   agents of the zone may be dead before the resource is scheduled. In
   general, they should match, unless there are failures or there is no
   capacity left in the zone requested.


Availability zone aware scheduler
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Network scheduler
-----------------

Set ``AZAwareWeightScheduler`` to ``network_scheduler_driver`` in
``/etc/neutron/neutron.conf`` so that the Networking service schedules a
network according to the availability zone:

.. code-block:: ini

   network_scheduler_driver = neutron.scheduler.dhcp_agent_scheduler.AZAwareWeightScheduler
   dhcp_load_type = networks

The Networking service schedules a network to one of the agents within the
selected zone as with ``WeightScheduler``. In this case, scheduler refers to
``dhcp_load_type`` as well.


Router scheduler
----------------

Set ``AZLeastRoutersScheduler`` to ``router_scheduler_driver`` in file
``/etc/neutron/neutron.conf`` so that the Networking service schedules a router
according to the availability zone:

.. code-block:: ini

   router_scheduler_driver = neutron.scheduler.l3_agent_scheduler.AZLeastRoutersScheduler

The Networking service schedules a router to one of the agents within the
selected zone as with ``LeastRouterScheduler``.


Achieving high availability with availability zone
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Although the Networking service provides high availability for routers and
high availability and fault tolerance for networks' DHCP services, availability
zones provide an extra layer of protection by segmenting a Networking service
deployment in isolated failure domains. By deploying HA nodes across different
availability zones, it is guaranteed that network services remain available in
the face of zone-wide failures that affect the deployment.

This section explains how to get high availability with the availability zone
for L3 and DHCP. You should naturally set the above configuration options for
the availability zone.

L3 high availability
--------------------

Set the following configuration options in file ``/etc/neutron/neutron.conf``
so that you get L3 high availability.

.. code-block:: ini

   l3_ha = True
   max_l3_agents_per_router = 3

HA routers are created on availability zones you selected when creating the
router.

DHCP high availability
----------------------

Set the following configuration options in file ``/etc/neutron/neutron.conf``
so that you get DHCP high availability.

.. code-block:: ini

   dhcp_agents_per_network = 2

DHCP services are created on availability zones you selected when creating the
network.

doc/source/admin/config-bgp-dynamic-routing.rst (new file, 880 lines)
@@ -0,0 +1,880 @@

.. _config-bgp-dynamic-routing:

===================
BGP dynamic routing
===================

BGP dynamic routing enables advertisement of self-service (private) network
prefixes to physical network devices that support BGP such as routers, thus
removing the conventional dependency on static routes. The feature relies
on :ref:`address scopes <config-address-scopes>` and requires knowledge of
their operation for proper deployment.

BGP dynamic routing consists of a service plug-in and an agent. The service
plug-in implements the Networking service extension and the agent manages BGP
peering sessions. A cloud administrator creates and configures a BGP speaker
using the CLI or API and manually schedules it to one or more hosts running
the agent. Agents can reside on hosts with or without other Networking
service agents. Prefix advertisement depends on the binding of external
networks to a BGP speaker and the address scope of external and internal
IP address ranges or subnets.

.. image:: figures/bgp-dynamic-routing-overview.png
   :alt: BGP dynamic routing overview

.. note::

   Although self-service networks generally use private IP address ranges
   (RFC1918) for IPv4 subnets, BGP dynamic routing can advertise any IPv4
   address ranges.

Example configuration
~~~~~~~~~~~~~~~~~~~~~

The example configuration involves the following components:

* One BGP agent.

* One address scope containing IP address range 203.0.113.0/24 for
  provider networks, and IP address ranges 192.0.2.0/25 and 192.0.2.128/25
  for self-service networks.

* One provider network using IP address range 203.0.113.0/24.

* Three self-service networks.

  * Self-service networks 1 and 2 use IP address ranges inside of
    the address scope.

  * Self-service network 3 uses a unique IP address range 198.51.100.0/24 to
    demonstrate that the BGP speaker does not advertise prefixes outside
    of address scopes.

* Three routers. Each router connects one self-service network to the
  provider network.

  * Router 1 contains IP addresses 203.0.113.11 and 192.0.2.1

  * Router 2 contains IP addresses 203.0.113.12 and 192.0.2.129

  * Router 3 contains IP addresses 203.0.113.13 and 198.51.100.1

.. note::

   The example configuration assumes sufficient knowledge about the
   Networking service, routing, and BGP. For basic deployment of the
   Networking service, consult one of the
   :ref:`deploy`. For more information on BGP, see
   `RFC 4271 <https://tools.ietf.org/html/rfc4271>`_.

Controller node
---------------

* In the ``neutron.conf`` file, enable the conventional layer-3 and BGP
  dynamic routing service plug-ins:

  .. code-block:: ini

     [DEFAULT]
     service_plugins = neutron_dynamic_routing.services.bgp.bgp_plugin.BgpPlugin,neutron.services.l3_router.l3_router_plugin.L3RouterPlugin

Agent nodes
-----------

* In the ``bgp_dragent.ini`` file:

  * Configure the driver.

    .. code-block:: ini

       [BGP]
       bgp_speaker_driver = neutron_dynamic_routing.services.bgp.agent.driver.ryu.driver.RyuBgpDriver

    .. note::

       The agent currently only supports the Ryu BGP driver.

  * Configure the router ID.

    .. code-block:: ini

       [BGP]
       bgp_router_id = ROUTER_ID

    Replace ``ROUTER_ID`` with a suitable unique 32-bit number, typically an
    IPv4 address on the host running the agent. For example, 192.0.2.2.
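
    Using that example value, the resulting section would simply read:

    .. code-block:: ini

       [BGP]
       bgp_router_id = 192.0.2.2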

Verify service operation
------------------------

#. Source the administrative project credentials.
#. Verify presence and operation of each BGP dynamic routing agent.

   .. code-block:: console

      $ neutron agent-list --agent-type="BGP dynamic routing agent"
      +--------------------------------------+---------------------------+------------+-------------------+-------+----------------+---------------------------+
      | id | agent_type | host | availability_zone | alive | admin_state_up | binary |
      +--------------------------------------+---------------------------+------------+-------------------+-------+----------------+---------------------------+
      | 37729181-2224-48d8-89ef-16eca8e2f77e | BGP dynamic routing agent | controller | | :-) | True | neutron-bgp-dragent |
      +--------------------------------------+---------------------------+------------+-------------------+-------+----------------+---------------------------+

Create the address scope and subnet pools
-----------------------------------------

#. Create an address scope. The provider (external) and self-service networks
   must belong to the same address scope for the agent to advertise those
   self-service network prefixes.

   .. code-block:: console

      $ openstack address scope create --share --ip-version 4 bgp

      +------------+--------------------------------------+
      | Field | Value |
      +------------+--------------------------------------+
      | headers | |
      | id | f71c958f-dbe8-49a2-8fb9-19c5f52a37f1 |
      | ip_version | 4 |
      | name | bgp |
      | project_id | 86acdbd1d72745fd8e8320edd7543400 |
      | shared | True |
      +------------+--------------------------------------+

#. Create subnet pools. The provider and self-service networks use different
   pools.

   * Create the provider network pool.

     .. code-block:: console

        $ openstack subnet pool create --pool-prefix 203.0.113.0/24 \
          --address-scope bgp provider

        +-------------------+--------------------------------------+
        | Field | Value |
        +-------------------+--------------------------------------+
        | address_scope_id | f71c958f-dbe8-49a2-8fb9-19c5f52a37f1 |
        | created_at | 2017-01-12T14:58:57Z |
        | default_prefixlen | 8 |
        | default_quota | None |
        | description | |
        | headers | |
        | id | 63532225-b9a0-445a-9935-20a15f9f68d1 |
        | ip_version | 4 |
        | is_default | False |
        | max_prefixlen | 32 |
        | min_prefixlen | 8 |
        | name | provider |
        | prefixes | 203.0.113.0/24 |
        | project_id | 86acdbd1d72745fd8e8320edd7543400 |
        | revision_number | 1 |
        | shared | False |
        | updated_at | 2017-01-12T14:58:57Z |
        +-------------------+--------------------------------------+

   * Create the self-service network pool.

     .. code-block:: console

        $ openstack subnet pool create --pool-prefix 192.0.2.0/25 \
          --pool-prefix 192.0.2.128/25 --address-scope bgp \
          --share selfservice

        +-------------------+--------------------------------------+
        | Field | Value |
        +-------------------+--------------------------------------+
        | address_scope_id | f71c958f-dbe8-49a2-8fb9-19c5f52a37f1 |
        | created_at | 2017-01-12T15:02:31Z |
        | default_prefixlen | 8 |
        | default_quota | None |
        | description | |
        | headers | |
        | id | 8d8270b1-b194-4b7e-914c-9c741dcbd49b |
        | ip_version | 4 |
        | is_default | False |
        | max_prefixlen | 32 |
        | min_prefixlen | 8 |
        | name | selfservice |
        | prefixes | 192.0.2.0/25, 192.0.2.128/25 |
        | project_id | 86acdbd1d72745fd8e8320edd7543400 |
        | revision_number | 1 |
        | shared | True |
        | updated_at | 2017-01-12T15:02:31Z |
        +-------------------+--------------------------------------+

Create the provider and self-service networks
---------------------------------------------

#. Create the provider network.

   .. code-block:: console

      $ openstack network create provider --external --provider-physical-network \
        provider --provider-network-type flat
      Created a new network:
      +---------------------------+--------------------------------------+
      | Field | Value |
      +---------------------------+--------------------------------------+
      | admin_state_up | UP |
      | availability_zone_hints | |
      | availability_zones | |
      | created_at | 2016-12-21T08:47:41Z |
      | description | |
      | headers | |
      | id | 190ca651-2ee3-4a4b-891f-dedda47974fe |
      | ipv4_address_scope | None |
      | ipv6_address_scope | None |
      | is_default | False |
      | mtu | 1450 |
      | name | provider |
      | port_security_enabled | True |
      | project_id | c961a8f6d3654657885226378ade8220 |
      | provider:network_type | flat |
      | provider:physical_network | provider |
      | provider:segmentation_id | 66 |
      | revision_number | 3 |
      | router:external | External |
      | shared | False |
      | status | ACTIVE |
      | subnets | |
      | tags | [] |
      | updated_at | 2016-12-21T08:47:41Z |
      +---------------------------+--------------------------------------+

#. Create a subnet on the provider network using an IP address range from
   the provider subnet pool.

   .. code-block:: console

      $ neutron subnet-create --name provider --subnetpool provider \
        --prefixlen 24 --allocation-pool start=203.0.113.11,end=203.0.113.254 \
        --gateway 203.0.113.1 provider
      Created a new subnet:
      +-------------------+---------------------------------------------------+
      | Field | Value |
      +-------------------+---------------------------------------------------+
      | allocation_pools | {"start": "203.0.113.11", "end": "203.0.113.254"} |
      | cidr | 203.0.113.0/24 |
      | created_at | 2016-03-17T23:17:16 |
      | description | |
      | dns_nameservers | |
      | enable_dhcp | True |
      | gateway_ip | 203.0.113.1 |
      | host_routes | |
      | id | 8ed65d41-2b2a-4f3a-9f92-45adb266e01a |
      | ip_version | 4 |
      | ipv6_address_mode | |
      | ipv6_ra_mode | |
      | name | provider |
      | network_id | 68ec148c-181f-4656-8334-8f4eb148689d |
      | subnetpool_id | 3771c0e7-7096-46d3-a3bd-699c58e70259 |
      | tenant_id | b3ac05ef10bf441fbf4aa17f16ae1e6d |
      | updated_at | 2016-03-17T23:17:16 |
      +-------------------+---------------------------------------------------+

   .. note::

      The IP address allocation pool starting at ``.11`` improves clarity of
      the diagrams. You can safely omit it.
|
||||
|
||||
#. Create the self-service networks.
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack network create selfservice1
|
||||
Created a new network:
|
||||
+---------------------------+--------------------------------------+
|
||||
| Field | Value |
|
||||
+---------------------------+--------------------------------------+
|
||||
| admin_state_up | UP |
|
||||
| availability_zone_hints | |
|
||||
| availability_zones | |
|
||||
| created_at | 2016-12-21T08:49:38Z |
|
||||
| description | |
|
||||
| headers | |
|
||||
| id | 9d842606-ef3d-4160-9ed9-e03fa63aed96 |
|
||||
| ipv4_address_scope | None |
|
||||
| ipv6_address_scope | None |
|
||||
| mtu | 1450 |
|
||||
| name | selfservice1 |
|
||||
| port_security_enabled | True |
|
||||
| project_id | c961a8f6d3654657885226378ade8220 |
|
||||
| provider:network_type | vxlan |
|
||||
| provider:physical_network | None |
|
||||
| provider:segmentation_id | 106 |
|
||||
| revision_number | 3 |
|
||||
| router:external | Internal |
|
||||
| shared | False |
|
||||
| status | ACTIVE |
|
||||
| subnets | |
|
||||
| tags | [] |
|
||||
| updated_at | 2016-12-21T08:49:38Z |
|
||||
+---------------------------+--------------------------------------+
|
||||
|
||||
$ openstack network create selfservice2
|
||||
Created a new network:
|
||||
+---------------------------+--------------------------------------+
|
||||
| Field | Value |
|
||||
+---------------------------+--------------------------------------+
|
||||
| admin_state_up | UP |
|
||||
| availability_zone_hints | |
|
||||
| availability_zones | |
|
||||
| created_at | 2016-12-21T08:50:05Z |
|
||||
| description | |
|
||||
| headers | |
|
||||
| id | f85639e1-d23f-438e-b2b1-f40570d86b1c |
|
||||
| ipv4_address_scope | None |
|
||||
| ipv6_address_scope | None |
|
||||
| mtu | 1450 |
|
||||
| name | selfservice2 |
|
||||
| port_security_enabled | True |
|
||||
| project_id | c961a8f6d3654657885226378ade8220 |
|
||||
| provider:network_type | vxlan |
|
||||
| provider:physical_network | None |
|
||||
| provider:segmentation_id | 21 |
|
||||
| revision_number | 3 |
|
||||
| router:external | Internal |
|
||||
| shared | False |
|
||||
| status | ACTIVE |
|
||||
| subnets | |
|
||||
| tags | [] |
|
||||
| updated_at | 2016-12-21T08:50:05Z |
|
||||
+---------------------------+--------------------------------------+
|
||||
|
||||
$ openstack network create selfservice3
|
||||
Created a new network:
|
||||
+---------------------------+--------------------------------------+
|
||||
| Field | Value |
|
||||
+---------------------------+--------------------------------------+
|
||||
| admin_state_up | UP |
|
||||
| availability_zone_hints | |
|
||||
| availability_zones | |
|
||||
| created_at | 2016-12-21T08:50:35Z |
|
||||
| description | |
|
||||
| headers | |
|
||||
| id | eeccdb82-5cf4-4999-8ab3-e7dc99e7d43b |
|
||||
| ipv4_address_scope | None |
|
||||
| ipv6_address_scope | None |
|
||||
| mtu | 1450 |
|
||||
| name | selfservice3 |
|
||||
| port_security_enabled | True |
|
||||
| project_id | c961a8f6d3654657885226378ade8220 |
|
||||
| provider:network_type | vxlan |
|
||||
| provider:physical_network | None |
|
||||
| provider:segmentation_id | 86 |
|
||||
| revision_number | 3 |
|
||||
| router:external | Internal |
|
||||
| shared | False |
|
||||
| status | ACTIVE |
|
||||
| subnets | |
|
||||
| tags | [] |
|
||||
| updated_at | 2016-12-21T08:50:35Z |
|
||||
+---------------------------+--------------------------------------+
|
||||
|
||||
#. Create a subnet on the first two self-service networks using an IP address
|
||||
range from the self-service subnet pool.
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ neutron subnet-create --name selfservice1 --subnetpool selfservice \
|
||||
--prefixlen 25 selfservice1
|
||||
Created a new subnet:
|
||||
+-------------------+----------------------------------------------------+
|
||||
| Field | Value |
|
||||
+-------------------+----------------------------------------------------+
|
||||
| allocation_pools | {"start": "192.0.2.2", "end": "192.0.2.127"} |
|
||||
| cidr | 192.0.2.0/25 |
|
||||
| created_at | 2016-03-17T23:20:20 |
|
||||
| description | |
|
||||
| dns_nameservers | |
|
||||
| enable_dhcp | True |
|
||||
| gateway_ip | 192.0.2.1 |
|
||||
| host_routes | |
|
||||
| id | 8edd3dc2-df40-4d71-816e-a4586d61c809 |
|
||||
| ip_version | 4 |
|
||||
| ipv6_address_mode | |
|
||||
| ipv6_ra_mode | |
|
||||
| name | selfservice1 |
|
||||
| network_id | be79de1e-5f56-11e6-9dfb-233e41cec48c |
|
||||
| subnetpool_id | c7e9737a-cfd3-45b5-a861-d1cee1135a92 |
|
||||
| tenant_id | b3ac05ef10bf441fbf4aa17f16ae1e6d |
|
||||
| updated_at | 2016-03-17T23:20:20 |
|
||||
+-------------------+----------------------------------------------------+
|
||||
|
||||
$ neutron subnet-create --name selfservice2 --subnetpool selfservice \
|
||||
--prefixlen 25 selfservice2
|
||||
Created a new subnet:
|
||||
+-------------------+------------------------------------------------+
|
||||
| Field | Value |
|
||||
+-------------------+------------------------------------------------+
|
||||
| allocation_pools | {"start": "192.0.2.130", "end": "192.0.2.254"} |
|
||||
| cidr | 192.0.2.128/25 |
|
||||
| created_at | 2016-03-17T23:20:20 |
|
||||
| description | |
|
||||
| dns_nameservers | |
|
||||
| enable_dhcp | True |
|
||||
| gateway_ip | 192.0.2.129 |
|
||||
| host_routes | |
|
||||
| id | 8edd3dc2-df40-4d71-816e-a4586d61c809 |
|
||||
| ip_version | 4 |
|
||||
| ipv6_address_mode | |
|
||||
| ipv6_ra_mode | |
|
||||
| name | selfservice2 |
|
||||
| network_id | c1fd9846-5f56-11e6-a8ac-0f998d9cc0a2 |
|
||||
| subnetpool_id | c7e9737a-cfd3-45b5-a861-d1cee1135a92 |
|
||||
| tenant_id | b3ac05ef10bf441fbf4aa17f16ae1e6d |
|
||||
| updated_at | 2016-03-17T23:20:20 |
|
||||
+-------------------+------------------------------------------------+
|
||||
|
||||
#. Create a subnet on the last self-service network using an IP address
|
||||
range outside of the address scope.
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ neutron subnet-create --name selfservice3 selfservice3 198.51.100.0/24
|
||||
Created a new subnet:
|
||||
+-------------------+----------------------------------------------------+
|
||||
| Field | Value |
|
||||
+-------------------+----------------------------------------------------+
|
||||
| allocation_pools | {"start": "198.51.100.2", "end": "198.51.100.254"} |
|
||||
| cidr | 198.51.100.0/24 |
|
||||
| created_at | 2016-03-17T23:20:20 |
|
||||
| description | |
|
||||
| dns_nameservers | |
|
||||
| enable_dhcp | True |
|
||||
| gateway_ip | 198.51.100.1 |
|
||||
| host_routes | |
|
||||
| id | cd9f9156-5f59-11e6-aeec-172ec7ee939a |
|
||||
| ip_version | 4 |
|
||||
| ipv6_address_mode | |
|
||||
| ipv6_ra_mode | |
|
||||
| name | selfservice3 |
|
||||
| network_id | c283dc1c-5f56-11e6-bfb6-efc30e1eb73b |
|
||||
| subnetpool_id | |
|
||||
| tenant_id | b3ac05ef10bf441fbf4aa17f16ae1e6d |
|
||||
| updated_at | 2016-03-17T23:20:20 |
|
||||
+-------------------+----------------------------------------------------+
|
||||
|
||||
Create and configure the routers
|
||||
--------------------------------
|
||||
|
||||
#. Create the routers.
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack router create router1
|
||||
+-------------------------+--------------------------------------+
|
||||
| Field | Value |
|
||||
+-------------------------+--------------------------------------+
|
||||
| admin_state_up | UP |
|
||||
| availability_zone_hints | |
|
||||
| availability_zones | |
|
||||
| created_at | 2017-01-10T13:15:19Z |
|
||||
| description | |
|
||||
| distributed | False |
|
||||
| external_gateway_info | null |
|
||||
| flavor_id | None |
|
||||
| ha | False |
|
||||
| headers | |
|
||||
| id | 3f6f4ef8-63be-11e6-bbb3-2fbcef363ab8 |
|
||||
| name | router1 |
|
||||
| project_id | b3ac05ef10bf441fbf4aa17f16ae1e6d |
|
||||
| revision_number | 1 |
|
||||
| routes | |
|
||||
| status | ACTIVE |
|
||||
| updated_at | 2017-01-10T13:15:19Z |
|
||||
+-------------------------+--------------------------------------+
|
||||
|
||||
$ openstack router create router2
|
||||
+-------------------------+--------------------------------------+
|
||||
| Field | Value |
|
||||
+-------------------------+--------------------------------------+
|
||||
| admin_state_up | UP |
|
||||
| availability_zone_hints | |
|
||||
| availability_zones | |
|
||||
| created_at | 2017-01-10T13:15:19Z |
|
||||
| description | |
|
||||
| distributed | False |
|
||||
| external_gateway_info | null |
|
||||
| flavor_id | None |
|
||||
| ha | False |
|
||||
| headers | |
|
||||
| id | 3fd21a60-63be-11e6-9c95-5714c208c499 |
|
||||
| name | router2 |
|
||||
| project_id | b3ac05ef10bf441fbf4aa17f16ae1e6d |
|
||||
| revision_number | 1 |
|
||||
| routes | |
|
||||
| status | ACTIVE |
|
||||
| updated_at | 2017-01-10T13:15:19Z |
|
||||
+-------------------------+--------------------------------------+
|
||||
|
||||
$ openstack router create router3
|
||||
+-------------------------+--------------------------------------+
|
||||
| Field | Value |
|
||||
+-------------------------+--------------------------------------+
|
||||
| admin_state_up | UP |
|
||||
| availability_zone_hints | |
|
||||
| availability_zones | |
|
||||
| created_at | 2017-01-10T13:15:19Z |
|
||||
| description | |
|
||||
| distributed | False |
|
||||
| external_gateway_info | null |
|
||||
| flavor_id | None |
|
||||
| ha | False |
|
||||
| headers | |
|
||||
| id | 40069a4c-63be-11e6-9ecc-e37c1eaa7e84 |
|
||||
| name | router3 |
|
||||
| project_id | b3ac05ef10bf441fbf4aa17f16ae1e6d |
|
||||
| revision_number | 1 |
|
||||
| routes | |
|
||||
| status | ACTIVE |
|
||||
| updated_at | 2017-01-10T13:15:19Z |
|
||||
+-------------------------+--------------------------------------+
|
||||
|
||||
#. For each router, add one self-service subnet as an interface on the router.
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ neutron router-interface-add router1 selfservice1
|
||||
Added interface 90e3880a-5f5c-11e6-914c-9f3e20c8c151 to router router1.
|
||||
|
||||
$ neutron router-interface-add router2 selfservice2
|
||||
Added interface 91628362-5f5c-11e6-826a-7322fb03a821 to router router2.
|
||||
|
||||
$ neutron router-interface-add router3 selfservice3
|
||||
Added interface 91d51044-5f5c-11e6-bf55-ffd180541cc2 to router router3.
|
||||
|
||||
#. Add the provider network as a gateway on each router.
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ neutron router-gateway-set router1 provider
|
||||
Set gateway for router router1
|
||||
|
||||
$ neutron router-gateway-set router2 provider
|
||||
Set gateway for router router2
|
||||
|
||||
$ neutron router-gateway-set router3 provider
|
||||
Set gateway for router router3
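The examples above use the legacy ``neutron`` client. With the ``openstack``
client, the equivalent commands should look similar to the following sketch
(verify the option names against your client version):

.. code-block:: console

   $ openstack router add subnet router1 selfservice1
   $ openstack router set --external-gateway provider router1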
|
||||
|
||||
Create and configure the BGP speaker
|
||||
------------------------------------
|
||||
|
||||
The BGP speaker advertises the next-hop IP address for eligible self-service
|
||||
networks and floating IP addresses for instances using those networks.
|
||||
|
||||
#. Create the BGP speaker.
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ neutron bgp-speaker-create --ip-version 4 \
|
||||
--local-as LOCAL_AS bgpspeaker
|
||||
Created a new bgp_speaker:
|
||||
+-----------------------------------+--------------------------------------+
|
||||
| Field | Value |
|
||||
+-----------------------------------+--------------------------------------+
|
||||
| advertise_floating_ip_host_routes | True |
|
||||
| advertise_tenant_networks | True |
|
||||
| id | 5f227f14-4f46-4eca-9524-fc5a1eabc358 |
|
||||
| ip_version | 4 |
|
||||
| local_as | 1234 |
|
||||
| name | bgpspeaker |
|
||||
| networks | |
|
||||
| peers | |
|
||||
| tenant_id | b3ac05ef10bf441fbf4aa17f16ae1e6d |
|
||||
+-----------------------------------+--------------------------------------+
|
||||
|
||||
Replace ``LOCAL_AS`` with an appropriate local autonomous system number.
|
||||
The example configuration uses AS 1234.
|
||||
|
||||
#. A BGP speaker requires association with a provider network to determine
|
||||
eligible prefixes. The association builds a list of all virtual routers
|
||||
with gateways on provider and self-service networks in the same address
|
||||
scope so the BGP speaker can advertise self-service network prefixes with
|
||||
the corresponding router as the next-hop IP address. Associate the BGP
|
||||
speaker with the provider network.
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ neutron bgp-speaker-network-add bgpspeaker provider
|
||||
Added network provider to BGP speaker bgpspeaker.
|
||||
|
||||
#. Verify association of the provider network with the BGP speaker.
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ neutron bgp-speaker-show bgpspeaker
|
||||
+-----------------------------------+--------------------------------------+
|
||||
| Field | Value |
|
||||
+-----------------------------------+--------------------------------------+
|
||||
| advertise_floating_ip_host_routes | True |
|
||||
| advertise_tenant_networks | True |
|
||||
| id | 5f227f14-4f46-4eca-9524-fc5a1eabc358 |
|
||||
| ip_version | 4 |
|
||||
| local_as | 1234 |
|
||||
| name | bgpspeaker |
|
||||
| networks | 68ec148c-181f-4656-8334-8f4eb148689d |
|
||||
| peers | |
|
||||
| tenant_id | b3ac05ef10bf441fbf4aa17f16ae1e6d |
|
||||
+-----------------------------------+--------------------------------------+
|
||||
|
||||
#. Verify the prefixes and next-hop IP addresses that the BGP speaker
|
||||
advertises.
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ neutron bgp-speaker-advertiseroute-list bgpspeaker
|
||||
+-----------------+--------------+
|
||||
| destination | next_hop |
|
||||
+-----------------+--------------+
|
||||
| 192.0.2.0/25 | 203.0.113.11 |
|
||||
| 192.0.2.128/25 | 203.0.113.12 |
|
||||
+-----------------+--------------+
|
||||
|
||||
#. Create a BGP peer.
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ neutron bgp-peer-create --peer-ip 192.0.2.1 \
|
||||
--remote-as REMOTE_AS bgppeer
|
||||
Created a new bgp_peer:
|
||||
+-----------+--------------------------------------+
|
||||
| Field | Value |
|
||||
+-----------+--------------------------------------+
|
||||
| auth_type | none |
|
||||
| id | 35c89ca0-ac5a-4298-a815-0b073c2362e9 |
|
||||
| name | bgppeer |
|
||||
| peer_ip | 192.0.2.1 |
|
||||
| remote_as | 4321 |
|
||||
| tenant_id | b3ac05ef10bf441fbf4aa17f16ae1e6d |
|
||||
+-----------+--------------------------------------+
|
||||
|
||||
Replace ``REMOTE_AS`` with an appropriate remote autonomous system number.
|
||||
The example configuration uses AS 4321 which triggers EBGP peering.
|
||||
|
||||
.. note::
|
||||
|
||||
The host containing the BGP agent must have layer-3 connectivity to
|
||||
the provider router.
|
||||
|
||||
#. Add a BGP peer to the BGP speaker.
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ neutron bgp-speaker-peer-add bgpspeaker bgppeer
|
||||
Added BGP peer bgppeer to BGP speaker bgpspeaker.
|
||||
|
||||
#. Verify addition of the BGP peer to the BGP speaker.
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ neutron bgp-speaker-show bgpspeaker
|
||||
+-----------------------------------+--------------------------------------+
|
||||
| Field | Value |
|
||||
+-----------------------------------+--------------------------------------+
|
||||
| advertise_floating_ip_host_routes | True |
|
||||
| advertise_tenant_networks | True |
|
||||
| id | 5f227f14-4f46-4eca-9524-fc5a1eabc358 |
|
||||
| ip_version | 4 |
|
||||
| local_as | 1234 |
|
||||
| name | bgpspeaker |
|
||||
| networks | 68ec148c-181f-4656-8334-8f4eb148689d |
|
||||
| peers | 35c89ca0-ac5a-4298-a815-0b073c2362e9 |
|
||||
| tenant_id | b3ac05ef10bf441fbf4aa17f16ae1e6d |
|
||||
+-----------------------------------+--------------------------------------+
|
||||
|
||||
.. note::
|
||||
|
||||
After creating a peering session, you cannot change the local or remote
|
||||
autonomous system numbers.
|
||||
|
||||
Schedule the BGP speaker to an agent
|
||||
------------------------------------
|
||||
|
||||
#. Unlike most agents, BGP speakers require manual scheduling to an agent.
|
||||
BGP speakers only form peering sessions and begin prefix advertisement
|
||||
after scheduling to an agent. Schedule the BGP speaker to agent
|
||||
``37729181-2224-48d8-89ef-16eca8e2f77e``.
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ neutron bgp-dragent-speaker-add 37729181-2224-48d8-89ef-16eca8e2f77e bgpspeaker
|
||||
Associated BGP speaker bgpspeaker to the Dynamic Routing agent.
|
||||
|
||||
#. Verify scheduling of the BGP speaker to the agent.
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ neutron bgp-dragent-list-hosting-speaker bgpspeaker
|
||||
+--------------------------------------+------------+----------------+-------+
|
||||
| id | host | admin_state_up | alive |
|
||||
+--------------------------------------+------------+----------------+-------+
|
||||
| 37729181-2224-48d8-89ef-16eca8e2f77e | controller | True | :-) |
|
||||
+--------------------------------------+------------+----------------+-------+
|
||||
|
||||
$ neutron bgp-speaker-list-on-dragent 37729181-2224-48d8-89ef-16eca8e2f77e
|
||||
+--------------------------------------+------------+----------+------------+
|
||||
| id | name | local_as | ip_version |
|
||||
+--------------------------------------+------------+----------+------------+
|
||||
| 5f227f14-4f46-4eca-9524-fc5a1eabc358 | bgpspeaker | 1234 | 4 |
|
||||
+--------------------------------------+------------+----------+------------+
|
||||
|
||||
Prefix advertisement
|
||||
~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
BGP dynamic routing advertises prefixes for self-service networks and host
|
||||
routes for floating IP addresses.
|
||||
|
||||
Advertisement of a self-service network requires satisfying the following
|
||||
conditions:
|
||||
|
||||
* The external and self-service network reside in the same address scope.
|
||||
|
||||
* The router contains an interface on the self-service subnet and a gateway
|
||||
on the external network.
|
||||
|
||||
* The BGP speaker associates with the external network that provides a
|
||||
gateway on the router.
|
||||
|
||||
* The BGP speaker has the ``advertise_tenant_networks`` attribute set to
|
||||
``True``.
|
||||
|
||||
.. image:: figures/bgp-dynamic-routing-example1.png
|
||||
:alt: Example of prefix advertisements with self-service networks
|
||||
|
||||
Advertisement of a floating IP address requires satisfying the following
|
||||
conditions:
|
||||
|
||||
* The router with the floating IP address binding contains a gateway on
|
||||
an external network with the BGP speaker association.
|
||||
|
||||
* The BGP speaker has the ``advertise_floating_ip_host_routes`` attribute
|
||||
set to ``True``.
|
||||
|
||||
.. image:: figures/bgp-dynamic-routing-example2.png
|
||||
:alt: Example of prefix advertisements with floating IP addresses
|
||||
|
||||
Operation with Distributed Virtual Routers (DVR)
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
In deployments using DVR, the BGP speaker advertises floating IP
|
||||
addresses and self-service networks differently. For floating IP
|
||||
addresses, the BGP speaker advertises the floating IP agent gateway
|
||||
on the corresponding compute node as the next-hop IP address. For
|
||||
self-service networks using SNAT, the BGP speaker advertises the
|
||||
DVR SNAT node as the next-hop IP address.
|
||||
|
||||
For example, consider the following components:
|
||||
|
||||
#. A provider network using IP address range 203.0.113.0/24, and supporting
|
||||
floating IP addresses 203.0.113.101, 203.0.113.102, and 203.0.113.103.
|
||||
|
||||
#. A self-service network using IP address range 198.51.100.0/24.
|
||||
|
||||
#. The SNAT gateway resides on 203.0.113.11.
|
||||
|
||||
#. The floating IP agent gateways (one per compute node) reside on
|
||||
203.0.113.12, 203.0.113.13, and 203.0.113.14.
|
||||
|
||||
#. Three instances, one per compute node, each with a floating IP
|
||||
address.
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ neutron bgp-speaker-advertiseroute-list bgpspeaker
|
||||
+------------------+--------------+
|
||||
| destination | next_hop |
|
||||
+------------------+--------------+
|
||||
| 198.51.100.0/24 | 203.0.113.11 |
|
||||
| 203.0.113.101/32 | 203.0.113.12 |
|
||||
| 203.0.113.102/32 | 203.0.113.13 |
|
||||
| 203.0.113.103/32 | 203.0.113.14 |
|
||||
+------------------+--------------+
|
||||
|
||||
.. note::
|
||||
|
||||
DVR lacks support for routing directly to a fixed IP address via the
|
||||
floating IP agent gateway port and thus prevents the BGP speaker from
|
||||
advertising fixed IP addresses.
|
||||
|
||||
You can also identify floating IP agent gateways in your environment to
|
||||
assist with verifying operation of the BGP speaker.
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ neutron port-list --device_owner="network:floatingip_agent_gateway"
|
||||
+--------------------------------------+------+-------------------+--------------------------------------------------------------------------------------------------------+
|
||||
| id | name | mac_address | fixed_ips |
|
||||
+--------------------------------------+------+-------------------+--------------------------------------------------------------------------------------------------------+
|
||||
| 87cf2970-4970-462e-939e-00e808295dfa | | fa:16:3e:7c:68:e3 | {"subnet_id": "8ed65d41-2b2a-4f3a-9f92-45adb266e01a", "ip_address": "203.0.113.12"} |
|
||||
| 8d218440-0d2e-49d0-8a7b-3266a6146dc1 | | fa:16:3e:9d:78:cf | {"subnet_id": "8ed65d41-2b2a-4f3a-9f92-45adb266e01a", "ip_address": "203.0.113.13"} |
|
||||
| 87cf2970-4970-462e-939e-00e802281dfa | | fa:16:3e:6b:18:e0 | {"subnet_id": "8ed65d41-2b2a-4f3a-9f92-45adb266e01a", "ip_address": "203.0.113.14"} |
|
||||
+--------------------------------------+------+-------------------+--------------------------------------------------------------------------------------------------------+
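With the ``openstack`` client, a similar listing should be possible using the
``--device-owner`` filter (a sketch; verify that your client version supports
this option):

.. code-block:: console

   $ openstack port list --device-owner network:floatingip_agent_gateway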
|
||||
|
||||
IPv6
|
||||
~~~~
|
||||
|
||||
BGP dynamic routing supports peering via IPv6 and advertising IPv6 prefixes.
|
||||
|
||||
* To enable peering via IPv6, create a BGP peer and use an IPv6 address for
|
||||
``peer_ip``.
|
||||
|
||||
* To enable advertising IPv6 prefixes, create an address scope with
|
||||
``ip_version=6`` and a BGP speaker with ``ip_version=6``.
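For example, a minimal IPv6 setup might look like the following sketch. The
names and the ``2001:db8:1234::/48`` documentation prefix are assumptions for
illustration only:

.. code-block:: console

   $ openstack address scope create --ip-version 6 --share bgp-v6
   $ openstack subnet pool create --pool-prefix 2001:db8:1234::/48 \
     --address-scope bgp-v6 --share selfservice-v6
   $ neutron bgp-speaker-create --ip-version 6 --local-as 1234 bgpspeaker-v6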
|
||||
|
||||
.. note::
|
||||
|
||||
DVR with IPv6 functions similarly to DVR with IPv4.
|
||||
|
||||
High availability
|
||||
~~~~~~~~~~~~~~~~~
|
||||
|
||||
BGP dynamic routing supports scheduling a BGP speaker to multiple agents
|
||||
which effectively multiplies prefix advertisements to the same peer. If
|
||||
an agent fails, the peer continues to receive advertisements from one or
|
||||
more operational agents.
|
||||
|
||||
#. Show available dynamic routing agents.
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ neutron agent-list --agent-type="BGP dynamic routing agent"
|
||||
+--------------------------------------+---------------------------+----------+-------------------+-------+----------------+---------------------------+
|
||||
| id | agent_type | host | availability_zone | alive | admin_state_up | binary |
|
||||
+--------------------------------------+---------------------------+----------+-------------------+-------+----------------+---------------------------+
|
||||
| 37729181-2224-48d8-89ef-16eca8e2f77e | BGP dynamic routing agent | bgp-ha1 | | :-) | True | neutron-bgp-dragent |
|
||||
| 1a2d33bb-9321-30a2-76ab-22eff3d2f56a | BGP dynamic routing agent | bgp-ha2 | | :-) | True | neutron-bgp-dragent |
|
||||
+--------------------------------------+---------------------------+----------+-------------------+-------+----------------+---------------------------+
|
||||
|
||||
#. Schedule BGP speaker to multiple agents.
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ neutron bgp-dragent-speaker-add 37729181-2224-48d8-89ef-16eca8e2f77e bgpspeaker
|
||||
Associated BGP speaker bgpspeaker to the Dynamic Routing agent.
|
||||
|
||||
$ neutron bgp-dragent-speaker-add 1a2d33bb-9321-30a2-76ab-22eff3d2f56a bgpspeaker
|
||||
Associated BGP speaker bgpspeaker to the Dynamic Routing agent.
|
||||
|
||||
$ neutron bgp-dragent-list-hosting-speaker bgpspeaker
|
||||
+--------------------------------------+---------+----------------+-------+
|
||||
| id | host | admin_state_up | alive |
|
||||
+--------------------------------------+---------+----------------+-------+
|
||||
| 37729181-2224-48d8-89ef-16eca8e2f77e | bgp-ha1 | True | :-) |
|
||||
| 1a2d33bb-9321-30a2-76ab-22eff3d2f56a | bgp-ha2 | True | :-) |
|
||||
+--------------------------------------+---------+----------------+-------+
|
||||
|
||||
$ neutron bgp-speaker-list-on-dragent 37729181-2224-48d8-89ef-16eca8e2f77e
|
||||
+--------------------------------------+------------+----------+------------+
|
||||
| id | name | local_as | ip_version |
|
||||
+--------------------------------------+------------+----------+------------+
|
||||
| 5f227f14-4f46-4eca-9524-fc5a1eabc358 | bgpspeaker | 1234 | 4 |
|
||||
+--------------------------------------+------------+----------+------------+
|
||||
|
||||
$ neutron bgp-speaker-list-on-dragent 1a2d33bb-9321-30a2-76ab-22eff3d2f56a
|
||||
+--------------------------------------+------------+----------+------------+
|
||||
| id | name | local_as | ip_version |
|
||||
+--------------------------------------+------------+----------+------------+
|
||||
| 5f227f14-4f46-4eca-9524-fc5a1eabc358 | bgpspeaker | 1234 | 4 |
|
||||
+--------------------------------------+------------+----------+------------+
|
504
doc/source/admin/config-dhcp-ha.rst
Normal file
@ -0,0 +1,504 @@
|
||||
.. _config-dhcp-ha:
|
||||
|
||||
==========================
|
||||
High-availability for DHCP
|
||||
==========================
|
||||
|
||||
This section describes how to use the agent management (alias agent) and
scheduler (alias agent_scheduler) extensions for DHCP agent
scalability and high availability (HA).
|
||||
|
||||
.. note::
|
||||
|
||||
Use the :command:`openstack extension list` command to check if these
|
||||
extensions are enabled. Check that ``agent`` and ``agent_scheduler``
are included in the output.
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack extension list --network -c Name -c Alias
|
||||
+-------------------------------------------------------------+---------------------------+
|
||||
| Name | Alias |
|
||||
+-------------------------------------------------------------+---------------------------+
|
||||
| Default Subnetpools | default-subnetpools |
|
||||
| Network IP Availability | network-ip-availability |
|
||||
| Network Availability Zone | network_availability_zone |
|
||||
| Auto Allocated Topology Services | auto-allocated-topology |
|
||||
| Neutron L3 Configurable external gateway mode | ext-gw-mode |
|
||||
| Port Binding | binding |
|
||||
| Neutron Metering | metering |
|
||||
| agent | agent |
|
||||
| Subnet Allocation | subnet_allocation |
|
||||
| L3 Agent Scheduler | l3_agent_scheduler |
|
||||
| Tag support | tag |
|
||||
| Neutron external network | external-net |
|
||||
| Neutron Service Flavors | flavors |
|
||||
| Network MTU | net-mtu |
|
||||
| Availability Zone | availability_zone |
|
||||
| Quota management support | quotas |
|
||||
| HA Router extension | l3-ha |
|
||||
| Provider Network | provider |
|
||||
| Multi Provider Network | multi-provider |
|
||||
| Address scope | address-scope |
|
||||
| Neutron Extra Route | extraroute |
|
||||
| Subnet service types | subnet-service-types |
|
||||
| Resource timestamps | standard-attr-timestamp |
|
||||
| Neutron Service Type Management | service-type |
|
||||
| Router Flavor Extension | l3-flavors |
|
||||
| Tag support for resources: subnet, subnetpool, port, router | tag-ext |
|
||||
| Neutron Extra DHCP opts | extra_dhcp_opt |
|
||||
| Resource revision numbers | standard-attr-revisions |
|
||||
| Pagination support | pagination |
|
||||
| Sorting support | sorting |
|
||||
| security-group | security-group |
|
||||
| DHCP Agent Scheduler | dhcp_agent_scheduler |
|
||||
| Router Availability Zone | router_availability_zone |
|
||||
| RBAC Policies | rbac-policies |
|
||||
| standard-attr-description | standard-attr-description |
|
||||
| Neutron L3 Router | router |
|
||||
| Allowed Address Pairs | allowed-address-pairs |
|
||||
| project_id field enabled | project-id |
|
||||
| Distributed Virtual Router | dvr |
|
||||
+-------------------------------------------------------------+---------------------------+
|
||||
|
||||
Demo setup
|
||||
~~~~~~~~~~
|
||||
|
||||
.. figure:: figures/demo_multiple_dhcp_agents.png
|
||||
|
||||
There will be three hosts in the setup.
|
||||
|
||||
.. list-table::
|
||||
:widths: 25 50
|
||||
:header-rows: 1
|
||||
|
||||
* - Host
|
||||
- Description
|
||||
* - OpenStack controller host - controlnode
|
||||
- Runs the Networking, Identity, and Compute services that are required
|
||||
to deploy VMs. The node must have at least one network interface that
|
||||
is connected to the Management Network. Note that ``nova-network`` should
|
||||
not be running because it is replaced by Neutron.
|
||||
* - HostA
|
||||
- Runs ``nova-compute``, the Neutron L2 agent and DHCP agent
|
||||
* - HostB
|
||||
- Same as HostA
|
||||
|
||||
Configuration
|
||||
~~~~~~~~~~~~~
|
||||
|
||||
**controlnode: neutron server**
|
||||
|
||||
#. Neutron configuration file ``/etc/neutron/neutron.conf``:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[DEFAULT]
|
||||
core_plugin = linuxbridge
|
||||
rabbit_host = controlnode
|
||||
allow_overlapping_ips = True
|
||||
host = controlnode
|
||||
agent_down_time = 5
|
||||
dhcp_agents_per_network = 1
|
||||
|
||||
.. note::
|
||||
|
||||
In the above configuration, we use ``dhcp_agents_per_network = 1``
|
||||
for this demonstration. In usual deployments, we suggest setting
|
||||
``dhcp_agents_per_network`` to more than one to match the number of
|
||||
DHCP agents in your deployment.
|
||||
See :ref:`conf-dhcp-agents-per-network`.
|
||||
|
||||
#. Update the plug-in configuration file
|
||||
``/etc/neutron/plugins/linuxbridge/linuxbridge_conf.ini``:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[vlans]
|
||||
tenant_network_type = vlan
|
||||
network_vlan_ranges = physnet1:1000:2999
|
||||
[database]
|
||||
connection = mysql://root:root@127.0.0.1:3306/neutron_linux_bridge
|
||||
retry_interval = 2
|
||||
[linux_bridge]
|
||||
physical_interface_mappings = physnet1:eth0
|
||||
|
||||
**HostA and HostB: L2 agent**
|
||||
|
||||
#. Neutron configuration file ``/etc/neutron/neutron.conf``:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[DEFAULT]
|
||||
rabbit_host = controlnode
|
||||
rabbit_password = openstack
|
||||
# host = HostB on hostb
|
||||
host = HostA
|
||||
|
||||
#. Update the plug-in configuration file
|
||||
``/etc/neutron/plugins/linuxbridge/linuxbridge_conf.ini``:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[vlans]
|
||||
tenant_network_type = vlan
|
||||
network_vlan_ranges = physnet1:1000:2999
|
||||
[database]
|
||||
connection = mysql://root:root@127.0.0.1:3306/neutron_linux_bridge
|
||||
retry_interval = 2
|
||||
[linux_bridge]
|
||||
physical_interface_mappings = physnet1:eth0
|
||||
|
||||
#. Update the nova configuration file ``/etc/nova/nova.conf``:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[DEFAULT]
|
||||
use_neutron=True
|
||||
firewall_driver=nova.virt.firewall.NoopFirewallDriver
|
||||
|
||||
[neutron]
|
||||
admin_username=neutron
|
||||
admin_password=servicepassword
|
||||
admin_auth_url=http://controlnode:35357/v2.0/
|
||||
auth_strategy=keystone
|
||||
admin_tenant_name=servicetenant
|
||||
url=http://203.0.113.10:9696/
|
||||
|
||||
**HostA and HostB: DHCP agent**
|
||||
|
||||
- Update the DHCP configuration file ``/etc/neutron/dhcp_agent.ini``:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[DEFAULT]
|
||||
interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
|
||||
|
||||
Prerequisites for demonstration
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
Admin role is required to use the agent management and scheduler extensions.
|
||||
Ensure you run the following commands under a project with an admin role.
|
||||
|
||||
To experiment, you need VMs and a neutron network:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack server list
|
||||
+--------------------------------------+-----------+--------+----------------+------------+
|
||||
| ID | Name | Status | Networks | Image Name |
|
||||
+--------------------------------------+-----------+--------+----------------+------------+
|
||||
| c394fcd0-0baa-43ae-a793-201815c3e8ce | myserver1 | ACTIVE | net1=192.0.2.3 | cirros |
|
||||
| 2d604e05-9a6c-4ddb-9082-8a1fbdcc797d | myserver2 | ACTIVE | net1=192.0.2.4 | ubuntu |
|
||||
| c7c0481c-3db8-4d7a-a948-60ce8211d585 | myserver3 | ACTIVE | net1=192.0.2.5 | centos |
|
||||
+--------------------------------------+-----------+--------+----------------+------------+
|
||||
|
||||
$ openstack network list
|
||||
+--------------------------------------+------+--------------------------------------+
|
||||
| ID | Name | Subnets |
|
||||
+--------------------------------------+------+--------------------------------------+
|
||||
| ad88e059-e7fa-4cf7-8857-6731a2a3a554 | net1 | 8086db87-3a7a-4cad-88c9-7bab9bc69258 |
|
||||
+--------------------------------------+------+--------------------------------------+
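If your environment does not have these resources yet, you can create them
first. The following is a minimal sketch; the ``cirros`` image, ``m1.tiny``
flavor, and 192.0.2.0/24 range are assumptions for this demonstration:

.. code-block:: console

   $ openstack network create net1
   $ openstack subnet create --network net1 --subnet-range 192.0.2.0/24 subnet1
   $ openstack server create --image cirros --flavor m1.tiny --network net1 myserver1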
|
||||
|
||||
Managing agents in neutron deployment
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
#. List all agents:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack network agent list
|
||||
+--------------------------------------+--------------------+-------+-------------------+-------+-------+---------------------------+
|
||||
| ID | Agent Type | Host | Availability Zone | Alive | State | Binary |
|
||||
+--------------------------------------+--------------------+-------+-------------------+-------+-------+---------------------------+
|
||||
| 22467163-01ea-4231-ba45-3bd316f425e6 | Linux bridge agent | HostA | None | True | UP | neutron-linuxbridge-agent |
|
||||
| 2444c54d-0d28-460c-ab0f-cd1e6b5d3c7b | DHCP agent | HostA | None | True | UP | neutron-dhcp-agent |
|
||||
| 3066d20c-9f8f-440c-ae7c-a40ffb4256b6 | Linux bridge agent | HostB | nova | True | UP | neutron-linuxbridge-agent |
|
||||
| 55569f4e-6f31-41a6-be9d-526efce1f7fe | DHCP agent | HostB | nova | True | UP | neutron-dhcp-agent |
|
||||
+--------------------------------------+--------------------+-------+-------------------+-------+-------+---------------------------+
|
||||
|
||||
Every agent that supports these extensions will register itself with the
|
||||
neutron server when it starts up.
|
||||
|
||||
The output shows information for four agents. The ``alive`` field shows
|
||||
``True`` if the agent reported its state within the period defined by the
|
||||
``agent_down_time`` option in the ``neutron.conf`` file. Otherwise,
``alive`` is ``False``.
|
||||
|
||||
#. List DHCP agents that host a specified network:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack network agent list --network net1
|
||||
+--------------------------------------+---------------+----------------+-------+
|
||||
| ID | Host | Admin State Up | Alive |
|
||||
+--------------------------------------+---------------+----------------+-------+
|
||||
| 22467163-01ea-4231-ba45-3bd316f425e6 | HostA | UP | True |
|
||||
+--------------------------------------+---------------+----------------+-------+
|
||||
|
||||
#. List the networks hosted by a given DHCP agent:
|
||||
|
||||
This command shows which networks a given DHCP agent manages.
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack network list --agent 22467163-01ea-4231-ba45-3bd316f425e6
+--------------------------------------+------+--------------------------------------+
| ID                                   | Name | Subnets                              |
+--------------------------------------+------+--------------------------------------+
| ad88e059-e7fa-4cf7-8857-6731a2a3a554 | net1 | 8086db87-3a7a-4cad-88c9-7bab9bc69258 |
+--------------------------------------+------+--------------------------------------+
|
||||
|
||||
#. Show agent details.
|
||||
|
||||
The :command:`openstack network agent show` command shows details for a
|
||||
specified agent:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack network agent show 2444c54d-0d28-460c-ab0f-cd1e6b5d3c7b
|
||||
+---------------------+--------------------------------------------------+
|
||||
| Field | Value |
|
||||
+---------------------+--------------------------------------------------+
|
||||
| admin_state_up | UP |
|
||||
| agent_type | DHCP agent |
|
||||
| alive | True |
|
||||
| availability_zone | nova |
|
||||
| binary | neutron-dhcp-agent |
|
||||
| configurations | dhcp_driver='neutron.agent.linux.dhcp.Dnsmasq', |
|
||||
| | dhcp_lease_duration='86400', |
|
||||
| | log_agent_heartbeats='False', networks='1', |
|
||||
| | notifies_port_ready='True', ports='3', |
|
||||
| | subnets='1' |
|
||||
| created_at | 2016-12-14 00:25:54 |
|
||||
| description | None |
|
||||
| last_heartbeat_at | 2016-12-14 06:53:24 |
|
||||
| host | HostA |
|
||||
| id | 2444c54d-0d28-460c-ab0f-cd1e6b5d3c7b |
|
||||
| started_at | 2016-12-14 00:25:54 |
|
||||
| topic | dhcp_agent |
|
||||
+---------------------+--------------------------------------------------+
|
||||
|
||||
In this output, ``last_heartbeat_at`` is the time on the neutron
|
||||
server. You do not need to synchronize all agents to this time for this
|
||||
extension to run correctly. ``configurations`` describes the static
|
||||
configuration for the agent or run time data. This agent is a DHCP agent
|
||||
and it hosts one network, one subnet, and three ports.
|
||||
|
||||
Different types of agents show different details. The following output
|
||||
shows information for a Linux bridge agent:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack network agent show 22467163-01ea-4231-ba45-3bd316f425e6
|
||||
+---------------------+--------------------------------------+
|
||||
| Field | Value |
|
||||
+---------------------+--------------------------------------+
|
||||
| admin_state_up | UP |
|
||||
| agent_type | Linux bridge agent |
|
||||
| alive | True |
|
||||
| availability_zone | nova |
|
||||
| binary | neutron-linuxbridge-agent |
|
||||
| configurations | { |
|
||||
| | "physnet1": "eth0", |
|
||||
| | "devices": "4" |
|
||||
| | } |
|
||||
| created_at | 2016-12-14 00:26:54 |
|
||||
| description | None |
|
||||
| last_heartbeat_at | 2016-12-14 06:53:24 |
|
||||
| host | HostA |
|
||||
| id | 22467163-01ea-4231-ba45-3bd316f425e6 |
|
||||
| started_at | 2016-12-14T06:48:39.000000 |
|
||||
| topic | N/A |
|
||||
+---------------------+--------------------------------------+
|
||||
|
||||
The output shows the bridge mapping (``physnet1`` mapped to ``eth0``) and the
number of virtual network devices on this L2 agent.
|
||||
|
||||
Managing assignment of networks to DHCP agent
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
A single network can be assigned to more than one DHCP agent, and one
DHCP agent can host more than one network.
You can add a network to a DHCP agent and remove a network from it.
|
||||
|
||||
#. Default scheduling.
|
||||
|
||||
When you create a network with one port, the network is scheduled to an
active DHCP agent. If several active DHCP agents are running, one is
selected randomly. You can design more sophisticated scheduling algorithms,
similar to nova-scheduler, later on.
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack network create net2
|
||||
$ openstack subnet create --network net2 --subnet-range 198.51.100.0/24 subnet2
|
||||
$ openstack port create port2 --network net2
|
||||
$ openstack network agent list --network net2
|
||||
+--------------------------------------+---------------+----------------+-------+
|
||||
| ID | Host | Admin State Up | Alive |
|
||||
+--------------------------------------+---------------+----------------+-------+
|
||||
| 2444c54d-0d28-460c-ab0f-cd1e6b5d3c7b | HostA | UP | True |
|
||||
+--------------------------------------+---------------+----------------+-------+
|
||||
|
||||
The network is allocated to the DHCP agent on HostA. If you want to validate
the behavior through the :command:`dnsmasq` command, you must create a subnet
for the network because the DHCP agent starts the ``dnsmasq`` service only if
there is a subnet with DHCP enabled (see the verification sketch after this
list).
|
||||
|
||||
#. Assign a network to a given DHCP agent.
|
||||
|
||||
To add another DHCP agent to host the network, run this command:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack network agent add network --dhcp \
|
||||
55569f4e-6f31-41a6-be9d-526efce1f7fe net2
|
||||
$ openstack network agent list --network net2
|
||||
+--------------------------------------+-------+----------------+--------+
|
||||
| ID | Host | Admin State Up | Alive |
|
||||
+--------------------------------------+-------+----------------+--------+
|
||||
| 2444c54d-0d28-460c-ab0f-cd1e6b5d3c7b | HostA | UP | True |
|
||||
| 55569f4e-6f31-41a6-be9d-526efce1f7fe | HostB | UP | True |
|
||||
+--------------------------------------+-------+----------------+--------+
|
||||
|
||||
Both DHCP agents host the ``net2`` network.
|
||||
|
||||
#. Remove a network from a specified DHCP agent.
|
||||
|
||||
This command is the sibling command for the previous one. Remove
|
||||
``net2`` from the DHCP agent for HostA:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack network agent remove network --dhcp \
|
||||
2444c54d-0d28-460c-ab0f-cd1e6b5d3c7b net2
|
||||
$ openstack network agent list --network net2
|
||||
+--------------------------------------+-------+----------------+-------+
|
||||
| ID | Host | Admin State Up | Alive |
|
||||
+--------------------------------------+-------+----------------+-------+
|
||||
| 55569f4e-6f31-41a6-be9d-526efce1f7fe | HostB | UP | True |
|
||||
+--------------------------------------+-------+----------------+-------+
|
||||
|
||||
You can see that only the DHCP agent for HostB is hosting the ``net2``
|
||||
network.
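As mentioned in the default scheduling step, you can check that the DHCP agent
actually started a ``dnsmasq`` process for a network on its host. A minimal
verification sketch, assuming the default ``qdhcp-<network-id>`` namespace
naming and the ``net2`` ID shown later in this section:

.. code-block:: console

   $ ip netns list | grep qdhcp
   qdhcp-9b96b14f-71b8-4918-90aa-c5d705606b1a
   $ ps -ef | grep dnsmasq | grep 9b96b14f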
|
||||
|
||||
HA of DHCP agents
|
||||
~~~~~~~~~~~~~~~~~
|
||||
|
||||
Boot a VM on ``net2``. Let both DHCP agents host ``net2``. Fail the agents
|
||||
in turn to see if the VM can still get the desired IP.
|
||||
|
||||
#. Boot a VM on ``net2``:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack network list
|
||||
+--------------------------------------+------+--------------------------------------+
|
||||
| ID | Name | Subnets |
|
||||
+--------------------------------------+------+--------------------------------------+
|
||||
| ad88e059-e7fa-4cf7-8857-6731a2a3a554 | net1 | 8086db87-3a7a-4cad-88c9-7bab9bc69258 |
|
||||
| 9b96b14f-71b8-4918-90aa-c5d705606b1a | net2 | 6979b71a-0ae8-448c-aa87-65f68eedcaaa |
|
||||
+--------------------------------------+------+--------------------------------------+
|
||||
$ openstack server create --image tty --flavor 1 myserver4 \
|
||||
--nic net-id=9b96b14f-71b8-4918-90aa-c5d705606b1a
|
||||
...
|
||||
$ openstack server list
|
||||
+--------------------------------------+-----------+--------+-------------------+------------+
|
||||
| ID | Name | Status | Networks | Image Name |
|
||||
+--------------------------------------+-----------+--------+-------------------+------------+
|
||||
| c394fcd0-0baa-43ae-a793-201815c3e8ce | myserver1 | ACTIVE | net1=192.0.2.3 | cirros |
|
||||
| 2d604e05-9a6c-4ddb-9082-8a1fbdcc797d | myserver2 | ACTIVE | net1=192.0.2.4 | ubuntu |
|
||||
| c7c0481c-3db8-4d7a-a948-60ce8211d585 | myserver3 | ACTIVE | net1=192.0.2.5 | centos |
|
||||
| f62f4731-5591-46b1-9d74-f0c901de567f | myserver4 | ACTIVE | net2=198.51.100.2 | cirros1 |
|
||||
+--------------------------------------+-----------+--------+-------------------+------------+
|
||||
|
||||
#. Make sure both DHCP agents are hosting ``net2``:
|
||||
|
||||
Use the previous commands to assign the network to agents.
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack network agent list --network net2
|
||||
+--------------------------------------+-------+----------------+-------+
|
||||
| ID | Host | Admin State Up | Alive |
|
||||
+--------------------------------------+-------+----------------+-------+
|
||||
| 2444c54d-0d28-460c-ab0f-cd1e6b5d3c7b | HostA | UP | True |
|
||||
| 55569f4e-6f31-41a6-be9d-526efce1f7fe | HostB | UP | True |
|
||||
+--------------------------------------+-------+----------------+-------+
|
||||
|
||||
To test the HA of the DHCP agents:
|
||||
|
||||
#. Log in to the ``myserver4`` VM, and run ``udhcpc``, ``dhclient``, or
another DHCP client (see the example after these steps).
|
||||
|
||||
#. Stop the DHCP agent on HostA. Besides stopping the
|
||||
``neutron-dhcp-agent`` binary, you must stop the ``dnsmasq`` processes.
|
||||
|
||||
#. Run a DHCP client in the VM to see if it can get the wanted IP.
|
||||
|
||||
#. Stop the DHCP agent on HostB too.
|
||||
|
||||
#. Run ``udhcpc`` in the VM; it cannot get the wanted IP.
|
||||
|
||||
#. Start DHCP agent on HostB. The VM gets the wanted IP again.
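For example, on a CirrOS instance (an assumption; other guest images may ship
``dhclient`` instead of ``udhcpc``), you can request a new lease from inside
the VM with:

.. code-block:: console

   $ sudo udhcpc -i eth0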
|
||||
|
||||
Disabling and removing an agent
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
An administrator might want to disable an agent if a system hardware or
|
||||
software upgrade is planned. Some agents that support scheduling also
|
||||
support disabling and enabling agents, such as L3 and DHCP agents. After
|
||||
the agent is disabled, the scheduler does not schedule new resources to
|
||||
the agent.
|
||||
|
||||
After the agent is disabled, you can safely remove the agent. However,
resources on the agent are kept assigned even after it is disabled, so
ensure you remove the resources from the agent before you delete the agent.
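For example, to move ``net2`` off the DHCP agent on HostA before deleting that
agent, you can reuse the commands shown in the previous section (the agent IDs
match the earlier examples):

.. code-block:: console

   $ openstack network agent remove network --dhcp \
     2444c54d-0d28-460c-ab0f-cd1e6b5d3c7b net2
   $ openstack network agent add network --dhcp \
     55569f4e-6f31-41a6-be9d-526efce1f7fe net2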
|
||||
|
||||
Disable the DHCP agent on HostA before you stop it:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack network agent set 2444c54d-0d28-460c-ab0f-cd1e6b5d3c7b --disable
|
||||
$ openstack network agent list
|
||||
+--------------------------------------+--------------------+-------+-------------------+-------+-------+---------------------------+
|
||||
| ID | Agent Type | Host | Availability Zone | Alive | State | Binary |
|
||||
+--------------------------------------+--------------------+-------+-------------------+-------+-------+---------------------------+
|
||||
| 22467163-01ea-4231-ba45-3bd316f425e6 | Linux bridge agent | HostA | None | True | UP | neutron-linuxbridge-agent |
|
||||
| 2444c54d-0d28-460c-ab0f-cd1e6b5d3c7b | DHCP agent | HostA | None | True | DOWN | neutron-dhcp-agent |
|
||||
| 3066d20c-9f8f-440c-ae7c-a40ffb4256b6 | Linux bridge agent | HostB | nova | True | UP | neutron-linuxbridge-agent |
|
||||
| 55569f4e-6f31-41a6-be9d-526efce1f7fe | DHCP agent | HostB | nova | True | UP | neutron-dhcp-agent |
|
||||
+--------------------------------------+--------------------+-------+-------------------+-------+-------+---------------------------+
|
||||
|
||||
After you stop the DHCP agent on HostA, you can delete it with the following
command:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack network agent delete 2444c54d-0d28-460c-ab0f-cd1e6b5d3c7b
|
||||
$ openstack network agent list
|
||||
+--------------------------------------+--------------------+-------+-------------------+-------+-------+---------------------------+
|
||||
| ID | Agent Type | Host | Availability Zone | Alive | State | Binary |
|
||||
+--------------------------------------+--------------------+-------+-------------------+-------+-------+---------------------------+
|
||||
| 22467163-01ea-4231-ba45-3bd316f425e6 | Linux bridge agent | HostA | None | True | UP | neutron-linuxbridge-agent |
|
||||
| 3066d20c-9f8f-440c-ae7c-a40ffb4256b6 | Linux bridge agent | HostB | nova | True | UP | neutron-linuxbridge-agent |
|
||||
| 55569f4e-6f31-41a6-be9d-526efce1f7fe | DHCP agent | HostB | nova | True | UP | neutron-dhcp-agent |
|
||||
+--------------------------------------+--------------------+-------+-------------------+-------+-------+---------------------------+
|
||||
|
||||
After deletion, if you restart the DHCP agent, it appears on the agent
|
||||
list again.
|
||||
|
||||
.. _conf-dhcp-agents-per-network:
|
||||
|
||||
Enabling DHCP high availability by default
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
You can control the default number of DHCP agents assigned to a network
|
||||
by setting the following configuration option
|
||||
in the file ``/etc/neutron/neutron.conf``.
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
dhcp_agents_per_network = 3
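The Networking server reads this option at startup, so restart
``neutron-server`` after changing it. For example, on a systemd-based host
(the service name can differ between distributions):

.. code-block:: console

   # systemctl restart neutron-server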
|
841
doc/source/admin/config-dns-int.rst
Normal file
@ -0,0 +1,841 @@
|
||||
.. _config-dns-int:
|
||||
|
||||
===============
|
||||
DNS integration
|
||||
===============
|
||||
|
||||
This page serves as a guide for how to use the DNS integration functionality of
|
||||
the Networking service. The functionality described covers DNS from two points
|
||||
of view:
|
||||
|
||||
* The internal DNS functionality offered by the Networking service and its
|
||||
interaction with the Compute service.
|
||||
* Integration of the Compute service and the Networking service with an
|
||||
external DNSaaS (DNS-as-a-Service).
|
||||
|
||||
Users can control the behavior of the Networking service with regard to DNS
|
||||
using two attributes associated with ports, networks, and floating IPs. The
|
||||
following table shows the attributes available for each one of these resources:
|
||||
|
||||
.. list-table::
|
||||
:header-rows: 1
|
||||
:widths: 30 30 30
|
||||
|
||||
* - Resource
|
||||
- dns_name
|
||||
- dns_domain
|
||||
* - Ports
|
||||
- Yes
|
||||
- No
|
||||
* - Networks
|
||||
- No
|
||||
- Yes
|
||||
* - Floating IPs
|
||||
- Yes
|
||||
- Yes
|
||||
|
||||
.. _config-dns-int-dns-resolution:
|
||||
|
||||
The Networking service internal DNS resolution
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
The Networking service enables users to control the name assigned to ports by
|
||||
the internal DNS. To enable this functionality, do the following:
|
||||
|
||||
1. Edit the ``/etc/neutron/neutron.conf`` file and assign a value different
from ``openstacklocal`` (its default value) to the ``dns_domain`` parameter
in the ``[DEFAULT]`` section. As an example:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
dns_domain = example.org.
|
||||
|
||||
2. Add ``dns`` to ``extension_drivers`` in the ``[ml2]`` section of
|
||||
``/etc/neutron/plugins/ml2/ml2_conf.ini``. The following is an example:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[ml2]
|
||||
extension_drivers = port_security,dns
|
||||
|
||||
After restarting ``neutron-server``, users will be able to assign a
``dns_name`` attribute to their ports.
|
||||
|
||||
.. note::
|
||||
Enabling this functionality is a prerequisite for integrating the
Networking service with an external DNS service, which is
described in detail in :ref:`config-dns-int-ext-serv`.
|
||||
|
||||
The following illustrates the creation of a port with ``my-port``
|
||||
in its ``dns_name`` attribute.
|
||||
|
||||
.. note::
|
||||
The name assigned to the port by the Networking service internal DNS is now
|
||||
visible in the response in the ``dns_assignment`` attribute.
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ neutron port-create my-net --dns-name my-port
|
||||
Created a new port:
|
||||
+-----------------------+-------------------------------------------------------------------------------------+
|
||||
| Field | Value |
|
||||
+-----------------------+-------------------------------------------------------------------------------------+
|
||||
| admin_state_up | True |
|
||||
| allowed_address_pairs | |
|
||||
| binding:vnic_type | normal |
|
||||
| device_id | |
|
||||
| device_owner | |
|
||||
| dns_assignment | {"hostname": "my-port", "ip_address": "192.0.2.67", "fqdn": "my-port.example.org."} |
|
||||
| dns_name | my-port |
|
||||
| fixed_ips | {"subnet_id":"6141b474-56cd-430f-b731-71660bb79b79", "ip_address": "192.0.2.67"} |
|
||||
| id | fb3c10f4-017e-420c-9be1-8f8c557ae21f |
|
||||
| mac_address | fa:16:3e:aa:9b:e1 |
|
||||
| name | |
|
||||
| network_id | bf2802a0-99a0-4e8c-91e4-107d03f158ea |
|
||||
| port_security_enabled | True |
|
||||
| security_groups | 1f0ddd73-7e3c-48bd-a64c-7ded4fe0e635 |
|
||||
| status | DOWN |
|
||||
| tenant_id | d5660cb1e6934612a01b4fb2fb630725 |
|
||||
+-----------------------+-------------------------------------------------------------------------------------+
|
||||
|
||||
When this functionality is enabled, it is leveraged by the Compute service when
|
||||
creating instances. When allocating ports for an instance during boot, the
|
||||
Compute service populates the ``dns_name`` attributes of these ports with
|
||||
the ``hostname`` attribute of the instance, which is a DNS sanitized version of
|
||||
its display name. As a consequence, at the end of the boot process, the
|
||||
allocated ports will be known in the dnsmasq associated to their networks by
|
||||
their instance ``hostname``.
|
||||
|
||||
The following is an example of an instance creation, showing how its
|
||||
``hostname`` populates the ``dns_name`` attribute of the allocated port:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack server create --image cirros --flavor 42 \
|
||||
--nic net-id=37aaff3a-6047-45ac-bf4f-a825e56fd2b3 my_vm
|
||||
+--------------------------------------+----------------------------------------------------------------+
|
||||
| Field | Value |
|
||||
+--------------------------------------+----------------------------------------------------------------+
|
||||
| OS-DCF:diskConfig | MANUAL |
|
||||
| OS-EXT-AZ:availability_zone | |
|
||||
| OS-EXT-STS:power_state | 0 |
|
||||
| OS-EXT-STS:task_state | scheduling |
|
||||
| OS-EXT-STS:vm_state | building |
|
||||
| OS-SRV-USG:launched_at | - |
|
||||
| OS-SRV-USG:terminated_at | - |
|
||||
| accessIPv4 | |
|
||||
| accessIPv6 | |
|
||||
| adminPass | dB45Zvo8Jpfe |
|
||||
| config_drive | |
|
||||
| created | 2016-02-05T21:35:04Z |
|
||||
| flavor | m1.nano (42) |
|
||||
| hostId | |
|
||||
| id | 66c13cb4-3002-4ab3-8400-7efc2659c363 |
|
||||
| image | cirros-0.3.5-x86_64-uec(b9d981eb-d21c-4ce2-9dbc-dd38f3d9015f) |
|
||||
| key_name | - |
|
||||
| locked | False |
|
||||
| metadata | {} |
|
||||
| name | my_vm |
|
||||
| os-extended-volumes:volumes_attached | [] |
|
||||
| progress | 0 |
|
||||
| security_groups | default |
|
||||
| status | BUILD |
|
||||
| tenant_id | d5660cb1e6934612a01b4fb2fb630725 |
|
||||
| updated | 2016-02-05T21:35:04Z |
|
||||
| user_id | 8bb6e578cba24e7db9d3810633124525 |
|
||||
+--------------------------------------+----------------------------------------------------------------+
|
||||
|
||||
$ neutron port-list --device_id 66c13cb4-3002-4ab3-8400-7efc2659c363
|
||||
+--------------------------------------+------+-------------------+---------------------------------------------------------------------------------------+
|
||||
| id | name | mac_address | fixed_ips |
|
||||
+--------------------------------------+------+-------------------+---------------------------------------------------------------------------------------+
|
||||
| b3ecc464-1263-44a7-8c38-2d8a52751773 | | fa:16:3e:a8:ce:b8 | {"subnet_id": "277eca5d-9869-474b-960e-6da5951d09f7", "ip_address": "203.0.113.8"} |
|
||||
| | | | {"subnet_id": "eab47748-3f0a-4775-a09f-b0c24bb64bc4", "ip_address":"2001:db8:10::8"} |
|
||||
+--------------------------------------+------+-------------------+---------------------------------------------------------------------------------------+
|
||||
|
||||
$ neutron port-show b3ecc464-1263-44a7-8c38-2d8a52751773
|
||||
+-----------------------+---------------------------------------------------------------------------------------+
|
||||
| Field | Value |
|
||||
+-----------------------+---------------------------------------------------------------------------------------+
|
||||
| admin_state_up | True |
|
||||
| allowed_address_pairs | |
|
||||
| binding:vnic_type | normal |
|
||||
| device_id | 66c13cb4-3002-4ab3-8400-7efc2659c363 |
|
||||
| device_owner | compute:None |
|
||||
| dns_assignment | {"hostname": "my-vm", "ip_address": "203.0.113.8", "fqdn": "my-vm.example.org."} |
|
||||
| | {"hostname": "my-vm", "ip_address": "2001:db8:10::8", "fqdn": "my-vm.example.org."} |
|
||||
| dns_name | my-vm |
|
||||
| extra_dhcp_opts | |
|
||||
| fixed_ips | {"subnet_id": "277eca5d-9869-474b-960e-6da5951d09f7", "ip_address": "203.0.113.8"} |
|
||||
| | {"subnet_id": "eab47748-3f0a-4775-a09f-b0c24bb64bc4", "ip_address": "2001:db8:10::8"} |
|
||||
| id | b3ecc464-1263-44a7-8c38-2d8a52751773 |
|
||||
| mac_address | fa:16:3e:a8:ce:b8 |
|
||||
| name | |
|
||||
| network_id | 37aaff3a-6047-45ac-bf4f-a825e56fd2b3 |
|
||||
| port_security_enabled | True |
|
||||
| security_groups | 1f0ddd73-7e3c-48bd-a64c-7ded4fe0e635 |
|
||||
| status | ACTIVE |
|
||||
| tenant_id | d5660cb1e6934612a01b4fb2fb630725 |
|
||||
+-----------------------+---------------------------------------------------------------------------------------+
|
||||
|
||||
In the above example, notice that:

* The name given to the instance by the user, ``my_vm``, is sanitized by the
  Compute service and becomes ``my-vm`` as the port's ``dns_name``.
* The port's ``dns_assignment`` attribute shows that its FQDN is
  ``my-vm.example.org.`` in the Networking service internal DNS, which is the
  result of concatenating the port's ``dns_name`` with the value configured
  in the ``dns_domain`` parameter in ``neutron.conf``, as explained
  previously.
* The ``dns_assignment`` attribute also shows that the port's ``hostname`` in
  the Networking service internal DNS is ``my-vm``.
* Instead of having the Compute service create the port for the instance, the
  user might have created it and assigned a value to its ``dns_name``
  attribute. In this case, the value assigned to the ``dns_name`` attribute
  must be equal to the value that the Compute service will assign to the
  instance's ``hostname``, in this example ``my-vm``. Otherwise, the instance
  boot will fail (a condensed sketch of this alternative follows the list).
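
The following is a minimal sketch of that alternative workflow, reusing the
network from the example above; ``PORT_ID`` stands for the port ID returned by
the first command, and a complete walk-through appears in
:ref:`config-dns-use-case-1` below:

.. code-block:: console

   $ neutron port-create 37aaff3a-6047-45ac-bf4f-a825e56fd2b3 --dns_name my-vm
   $ openstack server create --image cirros --flavor 42 \
     --nic port-id=PORT_ID my_vm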
|
||||
|
||||
Integration with an external DNS service
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Users can also integrate the Networking and Compute services with an external
DNS service. To accomplish this, users must:
|
||||
|
||||
#. Enable the functionality described in
   :ref:`config-dns-int-dns-resolution`.
#. Configure an external DNS driver. The Networking service provides a
   reference driver implementation based on the OpenStack DNS service. It is
   expected that third-party vendors will provide other implementations in
   the future. For detailed configuration instructions, see
   :ref:`config-dns-int-ext-serv`.
|
||||
|
||||
Once ``neutron-server`` has been configured and restarted, users will have
functionality that covers three use cases, described in the following
sections. In each of the use cases described below:

* The examples assume the OpenStack DNS service as the external DNS service.
* A, AAAA, and PTR records will be created in the DNS service.
* Before executing any of the use cases, the user must create in the DNS
  service, under their project, a DNS zone where the A and AAAA records will
  be created. For the description of the use cases below, it is assumed the
  zone ``example.org.`` was created previously (a sample zone creation
  command is shown after this list).
* The PTR records will be created in zones owned by a project with admin
  privileges. See :ref:`config-dns-int-ext-serv` for more details.
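
For example, the ``example.org.`` zone used throughout these use cases could
be created beforehand with the legacy designate client, roughly as follows
(the email address is illustrative):

.. code-block:: console

   $ designate domain-create --name example.org. --email admin@example.org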
|
||||
|
||||
.. _config-dns-use-case-1:
|
||||
|
||||
Use case 1: Ports are published directly in the external DNS service
----------------------------------------------------------------------

In this case, the user is creating ports or booting instances on a network
that is accessible externally. The steps to publish the port in the external
DNS service are the following:
|
||||
|
||||
#. Assign a valid domain name to the network's ``dns_domain`` attribute. This
   name must end with a period (``.``).
#. Boot an instance specifying the externally accessible network.
   Alternatively, create a port on the externally accessible network,
   specifying a valid value for its ``dns_name`` attribute. If the port is
   going to be used for an instance boot, the value assigned to ``dns_name``
   must be equal to the ``hostname`` that the Compute service will assign to
   the instance. Otherwise, the boot will fail.
|
||||
|
||||
Once these steps are executed, the port's DNS data will be published in the
external DNS service. For example:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ neutron net-list
|
||||
+--------------------------------------+----------+----------------------------------------------------------+
|
||||
| id | name | subnets |
|
||||
+--------------------------------------+----------+----------------------------------------------------------+
|
||||
| 41fa3995-9e4a-4cd9-bb51-3e5424f2ff2a | public | a67cfdf7-9d5d-406f-8a19-3f38e4fc3e74 |
|
||||
| | | cbd8c6dc-ca81-457e-9c5d-f8ece7ef67f8 |
|
||||
| 37aaff3a-6047-45ac-bf4f-a825e56fd2b3 | external | 277eca5d-9869-474b-960e-6da5951d09f7 203.0.113.0/24 |
|
||||
| | | eab47748-3f0a-4775-a09f-b0c24bb64bc4 2001:db8:10::/64 |
|
||||
| bf2802a0-99a0-4e8c-91e4-107d03f158ea | my-net | 6141b474-56cd-430f-b731-71660bb79b79 192.0.2.64/26 |
|
||||
| 38c5e950-b450-4c30-83d4-ee181c28aad3 | private | 43414c53-62ae-49bc-aa6c-c9dd7705818a fda4:653e:71b0::/64 |
|
||||
| | | 5b9282a1-0be1-4ade-b478-7868ad2a16ff 192.0.2.0/26 |
|
||||
+--------------------------------------+----------+----------------------------------------------------------+
|
||||
|
||||
$ neutron net-update 37aaff3a-6047-45ac-bf4f-a825e56fd2b3 --dns_domain example.org.
|
||||
Updated network: 37aaff3a-6047-45ac-bf4f-a825e56fd2b3
|
||||
|
||||
$ neutron net-show 37aaff3a-6047-45ac-bf4f-a825e56fd2b3
|
||||
+---------------------------+--------------------------------------+
|
||||
| Field | Value |
|
||||
+---------------------------+--------------------------------------+
|
||||
| admin_state_up | True |
|
||||
| availability_zone_hints | |
|
||||
| availability_zones | nova |
|
||||
| dns_domain | example.org. |
|
||||
| id | 37aaff3a-6047-45ac-bf4f-a825e56fd2b3 |
|
||||
| mtu | 1450 |
|
||||
| name | external |
|
||||
| port_security_enabled | True |
|
||||
| provider:network_type | vlan |
|
||||
| provider:physical_network | |
|
||||
| provider:segmentation_id | 2016 |
|
||||
| router:external | False |
|
||||
| shared | True |
|
||||
| status | ACTIVE |
|
||||
| subnets | eab47748-3f0a-4775-a09f-b0c24bb64bc4 |
|
||||
| | 277eca5d-9869-474b-960e-6da5951d09f7 |
|
||||
| tenant_id | 04fc2f83966245dba907efb783f8eab9 |
|
||||
+---------------------------+--------------------------------------+
|
||||
|
||||
$ designate record-list example.org.
|
||||
+--------------------------------------+------+--------------+-----------------------------------------------------------------------+
|
||||
| id | type | name | data |
|
||||
+--------------------------------------+------+--------------+-----------------------------------------------------------------------+
|
||||
| 10a36008-6ecf-47c3-b321-05652a929b04 | SOA | example.org. | ns1.devstack.org. malavall.us.ibm.com. 1454729414 3600 600 86400 3600 |
|
||||
| 56ca0b88-e343-4c98-8faa-19746e169baf | NS | example.org. | ns1.devstack.org. |
|
||||
+--------------------------------------+------+--------------+-----------------------------------------------------------------------+
|
||||
|
||||
$ neutron port-create 37aaff3a-6047-45ac-bf4f-a825e56fd2b3 --dns_name my-vm
|
||||
Created a new port:
|
||||
+-----------------------+---------------------------------------------------------------------------------------+
|
||||
| Field | Value |
|
||||
+-----------------------+---------------------------------------------------------------------------------------+
|
||||
| admin_state_up | True |
|
||||
| allowed_address_pairs | |
|
||||
| binding:vnic_type | normal |
|
||||
| device_id | |
|
||||
| device_owner | |
|
||||
| dns_assignment | {"hostname": "my-vm", "ip_address": "203.0.113.9", "fqdn": "my-vm.example.org."} |
|
||||
| | {"hostname": "my-vm", "ip_address": "2001:db8:10::9", "fqdn": "my-vm.example.org."} |
|
||||
| dns_name | my-vm |
|
||||
| fixed_ips | {"subnet_id": "277eca5d-9869-474b-960e-6da5951d09f7", "ip_address": "203.0.113.9"} |
|
||||
| | {"subnet_id": "eab47748-3f0a-4775-a09f-b0c24bb64bc4", "ip_address": "2001:db8:10::9"} |
|
||||
| id | 04be331b-dc5e-410a-9103-9c8983aeb186 |
|
||||
| mac_address | fa:16:3e:0f:4b:e4 |
|
||||
| name | |
|
||||
| network_id | 37aaff3a-6047-45ac-bf4f-a825e56fd2b3 |
|
||||
| port_security_enabled | True |
|
||||
| security_groups | 1f0ddd73-7e3c-48bd-a64c-7ded4fe0e635 |
|
||||
| status | DOWN |
|
||||
| tenant_id | d5660cb1e6934612a01b4fb2fb630725 |
|
||||
+-----------------------+---------------------------------------------------------------------------------------+
|
||||
|
||||
$ designate record-list example.org.
|
||||
+--------------------------------------+------+--------------------+-----------------------------------------------------------------------+
|
||||
| id | type | name | data |
|
||||
+--------------------------------------+------+--------------------+-----------------------------------------------------------------------+
|
||||
| 10a36008-6ecf-47c3-b321-05652a929b04 | SOA | example.org. | ns1.devstack.org. malavall.us.ibm.com. 1455563035 3600 600 86400 3600 |
|
||||
| 56ca0b88-e343-4c98-8faa-19746e169baf | NS | example.org. | ns1.devstack.org. |
|
||||
| 3593591b-181f-4beb-9ab7-67fad7413b37 | A | my-vm.example.org. | 203.0.113.9 |
|
||||
| 5649c68f-7a88-48f5-9f87-ccb1f6ae67ca | AAAA | my-vm.example.org. | 2001:db8:10::9 |
|
||||
+--------------------------------------+------+--------------------+-----------------------------------------------------------------------+
|
||||
|
||||
$ openstack server create --image cirros --flavor 42 \
|
||||
--nic port-id=04be331b-dc5e-410a-9103-9c8983aeb186 my_vm
|
||||
+--------------------------------------+----------------------------------------------------------------+
|
||||
| Field | Value |
|
||||
+--------------------------------------+----------------------------------------------------------------+
|
||||
| OS-DCF:diskConfig | MANUAL |
|
||||
| OS-EXT-AZ:availability_zone | |
|
||||
| OS-EXT-STS:power_state | 0 |
|
||||
| OS-EXT-STS:task_state | scheduling |
|
||||
| OS-EXT-STS:vm_state | building |
|
||||
| OS-SRV-USG:launched_at | - |
|
||||
| OS-SRV-USG:terminated_at | - |
|
||||
| accessIPv4 | |
|
||||
| accessIPv6 | |
|
||||
| adminPass | TDc9EpBT3B9W |
|
||||
| config_drive | |
|
||||
| created | 2016-02-15T19:10:43Z |
|
||||
| flavor | m1.nano (42) |
|
||||
| hostId | |
|
||||
| id | 62c19691-d1c7-4d7b-a88e-9cc4d95d4f41 |
|
||||
| image | cirros-0.3.5-x86_64-uec (b9d981eb-d21c-4ce2-9dbc-dd38f3d9015f) |
|
||||
| key_name | - |
|
||||
| locked | False |
|
||||
| metadata | {} |
|
||||
| name | my_vm |
|
||||
| os-extended-volumes:volumes_attached | [] |
|
||||
| progress | 0 |
|
||||
| security_groups | default |
|
||||
| status | BUILD |
|
||||
| tenant_id | d5660cb1e6934612a01b4fb2fb630725 |
|
||||
| updated | 2016-02-15T19:10:43Z |
|
||||
| user_id | 8bb6e578cba24e7db9d3810633124525 |
|
||||
+--------------------------------------+----------------------------------------------------------------+
|
||||
|
||||
$ openstack server list
|
||||
+--------------------------------------+-------+--------+------------+-------------+--------------------------------------+------------+
|
||||
| ID | Name | Status | Task State | Power State | Networks | Image Name |
|
||||
+--------------------------------------+-------+--------+------------+-------------+--------------------------------------+------------+
|
||||
| 62c19691-d1c7-4d7b-a88e-9cc4d95d4f41 | my_vm | ACTIVE | - | Running | external=203.0.113.9, 2001:db8:10::9 | cirros |
|
||||
+--------------------------------------+-------+--------+------------+-------------+--------------------------------------+------------+
|
||||
|
||||
In this example, the port is created manually by the user and then used to
boot an instance. Notice that:

* The port's data was visible in the DNS service as soon as it was created.
* See :ref:`config-dns-performance-considerations` for an explanation of
  the potential performance impact associated with this use case.
|
||||
|
||||
The following are the PTR records created for this example. Note that for
IPv4, the value of ``ipv4_ptr_zone_prefix_size`` is 24 and, for IPv6, the
value of ``ipv6_ptr_zone_prefix_size`` is 116. For more details, see
:ref:`config-dns-int-ext-serv`:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ designate record-list 113.0.203.in-addr.arpa.
|
||||
+--------------------------------------+------+---------------------------+---------------------------------------------------------------------+
|
||||
| id | type | name | data |
|
||||
+--------------------------------------+------+---------------------------+---------------------------------------------------------------------+
|
||||
| ab7ada72-7e64-4bed-913e-04718a80fafc | NS | 113.0.203.in-addr.arpa. | ns1.devstack.org. |
|
||||
| 28346a94-790c-4ae1-9f7b-069d98d9efbd | SOA | 113.0.203.in-addr.arpa. | ns1.devstack.org. admin.example.org. 1455563035 3600 600 86400 3600 |
|
||||
| cfcaf537-844a-4c1b-9b5f-464ff07dca33 | PTR | 9.113.0.203.in-addr.arpa. | my-vm.example.org. |
|
||||
+--------------------------------------+------+---------------------------+---------------------------------------------------------------------+
|
||||
|
||||
$ designate record-list 0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.1.0.0.8.b.d.0.1.0.0.2.ip6.arpa.
|
||||
+--------------------------------------+------+---------------------------------------------------------------------------+---------------------------------------------------------------------+
|
||||
| id | type | name | data |
|
||||
+--------------------------------------+------+---------------------------------------------------------------------------+---------------------------------------------------------------------+
|
||||
| d8923354-13eb-4bd9-914a-0a2ae5f95989 | SOA | 0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.1.0.0.8.b.d.0.1.0.0.2.ip6.arpa. | ns1.devstack.org. admin.example.org. 1455563036 3600 600 86400 3600 |
|
||||
| 72e60acd-098d-41ea-9771-5b6546c9c06f | NS | 0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.1.0.0.8.b.d.0.1.0.0.2.ip6.arpa. | ns1.devstack.org. |
|
||||
| 877e0215-2ddf-4d01-a7da-47f1092dfd56 | PTR | 9.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.1.0.0.8.b.d.0.1.0.0.2.ip6.arpa. | my-vm.example.org. |
|
||||
+--------------------------------------+------+---------------------------------------------------------------------------+---------------------------------------------------------------------+
|
||||
|
||||
See :ref:`config-dns-int-ext-serv` for detailed instructions on how
to create the externally accessible network.

Use case 2: Floating IPs are published with associated port DNS attributes
----------------------------------------------------------------------------

In this use case, the address of a floating IP is published in the external
DNS service in conjunction with the ``dns_name`` of its associated port and
the ``dns_domain`` of the port's network. The steps to execute in this use
case are the following:
|
||||
|
||||
#. Assign a valid domain name to the network's ``dns_domain`` attribute. This
   name must end with a period (``.``).
#. Boot an instance or, alternatively, create a port specifying a valid value
   for its ``dns_name`` attribute. If the port is going to be used for an
   instance boot, the value assigned to ``dns_name`` must be equal to the
   ``hostname`` that the Compute service will assign to the instance.
   Otherwise, the boot will fail.
#. Create a floating IP and associate it with the port.

The following is an example of these steps:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ neutron net-update 38c5e950-b450-4c30-83d4-ee181c28aad3 --dns_domain example.org.
|
||||
Updated network: 38c5e950-b450-4c30-83d4-ee181c28aad3
|
||||
|
||||
$ neutron net-show 38c5e950-b450-4c30-83d4-ee181c28aad3
|
||||
+-------------------------+--------------------------------------+
|
||||
| Field | Value |
|
||||
+-------------------------+--------------------------------------+
|
||||
| admin_state_up | True |
|
||||
| availability_zone_hints | |
|
||||
| availability_zones | nova |
|
||||
| dns_domain | example.org. |
|
||||
| id | 38c5e950-b450-4c30-83d4-ee181c28aad3 |
|
||||
| mtu | 1450 |
|
||||
| name | private |
|
||||
| port_security_enabled | True |
|
||||
| router:external | False |
|
||||
| shared | False |
|
||||
| status | ACTIVE |
|
||||
| subnets | 43414c53-62ae-49bc-aa6c-c9dd7705818a |
|
||||
| | 5b9282a1-0be1-4ade-b478-7868ad2a16ff |
|
||||
| tenant_id | d5660cb1e6934612a01b4fb2fb630725 |
|
||||
+-------------------------+--------------------------------------+
|
||||
|
||||
$ openstack server create --image cirros --flavor 42 \
|
||||
--nic net-id=38c5e950-b450-4c30-83d4-ee181c28aad3 my_vm
|
||||
+--------------------------------------+----------------------------------------------------------------+
|
||||
| Field | Value |
|
||||
+--------------------------------------+----------------------------------------------------------------+
|
||||
| OS-DCF:diskConfig | MANUAL |
|
||||
| OS-EXT-AZ:availability_zone | |
|
||||
| OS-EXT-STS:power_state | 0 |
|
||||
| OS-EXT-STS:task_state | scheduling |
|
||||
| OS-EXT-STS:vm_state | building |
|
||||
| OS-SRV-USG:launched_at | - |
|
||||
| OS-SRV-USG:terminated_at | - |
|
||||
| accessIPv4 | |
|
||||
| accessIPv6 | |
|
||||
| adminPass | oTLQLR3Kezmt |
|
||||
| config_drive | |
|
||||
| created | 2016-02-15T19:27:34Z |
|
||||
| flavor | m1.nano (42) |
|
||||
| hostId | |
|
||||
| id | 43f328bb-b2d1-4cf1-a36f-3b2593397cb1 |
|
||||
| image | cirros-0.3.5-x86_64-uec (b9d981eb-d21c-4ce2-9dbc-dd38f3d9015f) |
|
||||
| key_name | - |
|
||||
| locked | False |
|
||||
| metadata | {} |
|
||||
| name | my_vm |
|
||||
| os-extended-volumes:volumes_attached | [] |
|
||||
| progress | 0 |
|
||||
| security_groups | default |
|
||||
| status | BUILD |
|
||||
| tenant_id | d5660cb1e6934612a01b4fb2fb630725 |
|
||||
| updated | 2016-02-15T19:27:34Z |
|
||||
| user_id | 8bb6e578cba24e7db9d3810633124525 |
|
||||
+--------------------------------------+----------------------------------------------------------------+
|
||||
|
||||
$ openstack server list
|
||||
+--------------------------------------+-------+--------+------------+-------------+----------------------------------------------------------+------------+
|
||||
| ID | Name | Status | Task State | Power State | Networks | Image Name |
|
||||
+--------------------------------------+-------+--------+------------+-------------+----------------------------------------------------------+------------+
|
||||
| 43f328bb-b2d1-4cf1-a36f-3b2593397cb1 | my_vm | ACTIVE | - | Running | private=fda4:653e:71b0:0:f816:3eff:fe16:b5f2, 192.0.2.15 | cirros |
|
||||
+--------------------------------------+-------+--------+------------+-------------+----------------------------------------------------------+------------+
|
||||
|
||||
$ neutron port-list --device_id 43f328bb-b2d1-4cf1-a36f-3b2593397cb1
|
||||
+--------------------------------------+------+-------------------+-------------------------------------------------------------------------------------------------------------+
|
||||
| id | name | mac_address | fixed_ips |
|
||||
+--------------------------------------+------+-------------------+-------------------------------------------------------------------------------------------------------------+
|
||||
| da0b1f75-c895-460f-9fc1-4d6ec84cf85f | | fa:16:3e:16:b5:f2 | {"subnet_id": "5b9282a1-0be1-4ade-b478-7868ad2a16ff", "ip_address": "192.0.2.15"} |
|
||||
| | | | {"subnet_id": "43414c53-62ae-49bc-aa6c-c9dd7705818a", "ip_address": "fda4:653e:71b0:0:f816:3eff:fe16:b5f2"} |
|
||||
+--------------------------------------+------+-------------------+-------------------------------------------------------------------------------------------------------------+
|
||||
|
||||
$ neutron port-show da0b1f75-c895-460f-9fc1-4d6ec84cf85f
|
||||
+-----------------------+-------------------------------------------------------------------------------------------------------------+
|
||||
| Field | Value |
|
||||
+-----------------------+-------------------------------------------------------------------------------------------------------------+
|
||||
| admin_state_up | True |
|
||||
| allowed_address_pairs | |
|
||||
| binding:vnic_type | normal |
|
||||
| device_id | 43f328bb-b2d1-4cf1-a36f-3b2593397cb1 |
|
||||
| device_owner | compute:None |
|
||||
| dns_assignment | {"hostname": "my-vm", "ip_address": "192.0.2.15", "fqdn": "my-vm.example.org."} |
|
||||
| | {"hostname": "my-vm", "ip_address": "fda4:653e:71b0:0:f816:3eff:fe16:b5f2", "fqdn": "my-vm.example.org."} |
|
||||
| dns_name | my-vm |
|
||||
| extra_dhcp_opts | |
|
||||
| fixed_ips | {"subnet_id": "5b9282a1-0be1-4ade-b478-7868ad2a16ff", "ip_address": "192.0.2.15"} |
|
||||
| | {"subnet_id": "43414c53-62ae-49bc-aa6c-c9dd7705818a", "ip_address": "fda4:653e:71b0:0:f816:3eff:fe16:b5f2"} |
|
||||
| id | da0b1f75-c895-460f-9fc1-4d6ec84cf85f |
|
||||
| mac_address | fa:16:3e:16:b5:f2 |
|
||||
| name | |
|
||||
| network_id | 38c5e950-b450-4c30-83d4-ee181c28aad3 |
|
||||
| port_security_enabled | True |
|
||||
| security_groups | 1f0ddd73-7e3c-48bd-a64c-7ded4fe0e635 |
|
||||
| status | ACTIVE |
|
||||
| tenant_id | d5660cb1e6934612a01b4fb2fb630725 |
|
||||
+-----------------------+-------------------------------------------------------------------------------------------------------------+
|
||||
|
||||
$ designate record-list example.org.
|
||||
+--------------------------------------+------+--------------+-----------------------------------------------------------------------+
|
||||
| id | type | name | data |
|
||||
+--------------------------------------+------+--------------+-----------------------------------------------------------------------+
|
||||
| 10a36008-6ecf-47c3-b321-05652a929b04 | SOA | example.org. | ns1.devstack.org. malavall.us.ibm.com. 1455563783 3600 600 86400 3600 |
|
||||
| 56ca0b88-e343-4c98-8faa-19746e169baf | NS | example.org. | ns1.devstack.org. |
|
||||
+--------------------------------------+------+--------------+-----------------------------------------------------------------------+
|
||||
|
||||
$ neutron floatingip-create 41fa3995-9e4a-4cd9-bb51-3e5424f2ff2a \
|
||||
--port_id da0b1f75-c895-460f-9fc1-4d6ec84cf85f
|
||||
Created a new floatingip:
|
||||
+---------------------+--------------------------------------+
|
||||
| Field | Value |
|
||||
+---------------------+--------------------------------------+
|
||||
| dns_domain | |
|
||||
| dns_name | |
|
||||
| fixed_ip_address | 192.0.2.15 |
|
||||
| floating_ip_address | 198.51.100.4 |
|
||||
| floating_network_id | 41fa3995-9e4a-4cd9-bb51-3e5424f2ff2a |
|
||||
| id | e78f6eb1-a35f-4a90-941d-87c888d5fcc7 |
|
||||
| port_id | da0b1f75-c895-460f-9fc1-4d6ec84cf85f |
|
||||
| router_id | 970ebe83-c4a3-4642-810e-43ab7b0c2b5f |
|
||||
| status | DOWN |
|
||||
| tenant_id | d5660cb1e6934612a01b4fb2fb630725 |
|
||||
+---------------------+--------------------------------------+
|
||||
|
||||
$ designate record-list example.org.
|
||||
+--------------------------------------+------+--------------------+-----------------------------------------------------------------------+
|
||||
| id | type | name | data |
|
||||
+--------------------------------------+------+--------------------+-----------------------------------------------------------------------+
|
||||
| 10a36008-6ecf-47c3-b321-05652a929b04 | SOA | example.org. | ns1.devstack.org. malavall.us.ibm.com. 1455564861 3600 600 86400 3600 |
|
||||
| 56ca0b88-e343-4c98-8faa-19746e169baf | NS | example.org. | ns1.devstack.org. |
|
||||
| 5ff53fd0-3746-48da-b9c9-77ed3004ec67 | A | my-vm.example.org. | 198.51.100.4 |
|
||||
+--------------------------------------+------+--------------------+-----------------------------------------------------------------------+
|
||||
|
||||
In this example, notice that the data is published in the DNS service when
the floating IP is associated with the port.

The following are the PTR records created for this example. Note that for
IPv4, the value of ``ipv4_ptr_zone_prefix_size`` is 24. For more details, see
:ref:`config-dns-int-ext-serv`:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ designate record-list 100.51.198.in-addr.arpa.
|
||||
+--------------------------------------+------+----------------------------+---------------------------------------------------------------------+
|
||||
| id | type | name | data |
|
||||
+--------------------------------------+------+----------------------------+---------------------------------------------------------------------+
|
||||
| 2dd0b894-25fa-4563-9d32-9f13bd67f329 | NS | 100.51.198.in-addr.arpa. | ns1.devstack.org. |
|
||||
| 47b920f1-5eff-4dfa-9616-7cb5b7cb7ca6 | SOA | 100.51.198.in-addr.arpa. | ns1.devstack.org. admin.example.org. 1455564862 3600 600 86400 3600 |
|
||||
| fb1edf42-abba-410c-8397-831f45fd0cd7 | PTR | 4.100.51.198.in-addr.arpa. | my-vm.example.org. |
|
||||
+--------------------------------------+------+----------------------------+---------------------------------------------------------------------+
|
||||
|
||||
|
||||
Use case 3: Floating IPs are published in the external DNS service
--------------------------------------------------------------------

In this use case, the user assigns ``dns_name`` and ``dns_domain`` attributes
to a floating IP when it is created. The floating IP data becomes visible in
the external DNS service as soon as it is created. The floating IP can be
associated with a port on creation or later on. The following example shows a
user booting an instance and then creating a floating IP associated with the
port allocated for the instance:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ neutron net-show 38c5e950-b450-4c30-83d4-ee181c28aad3
|
||||
+-------------------------+--------------------------------------+
|
||||
| Field | Value |
|
||||
+-------------------------+--------------------------------------+
|
||||
| admin_state_up | True |
|
||||
| availability_zone_hints | |
|
||||
| availability_zones | nova |
|
||||
| dns_domain | example.org. |
|
||||
| id | 38c5e950-b450-4c30-83d4-ee181c28aad3 |
|
||||
| mtu | 1450 |
|
||||
| name | private |
|
||||
| port_security_enabled | True |
|
||||
| router:external | False |
|
||||
| shared | False |
|
||||
| status | ACTIVE |
|
||||
| subnets | 43414c53-62ae-49bc-aa6c-c9dd7705818a |
|
||||
| | 5b9282a1-0be1-4ade-b478-7868ad2a16ff |
|
||||
| tenant_id | d5660cb1e6934612a01b4fb2fb630725 |
|
||||
+-------------------------+--------------------------------------+
|
||||
|
||||
$ openstack server create --image cirros --flavor 42 \
|
||||
--nic net-id=38c5e950-b450-4c30-83d4-ee181c28aad3 my_vm
|
||||
+--------------------------------------+----------------------------------------------------------------+
|
||||
| Field | Value |
|
||||
+--------------------------------------+----------------------------------------------------------------+
|
||||
| OS-DCF:diskConfig | MANUAL |
|
||||
| OS-EXT-AZ:availability_zone | |
|
||||
| OS-EXT-STS:power_state | 0 |
|
||||
| OS-EXT-STS:task_state | scheduling |
|
||||
| OS-EXT-STS:vm_state | building |
|
||||
| OS-SRV-USG:launched_at | - |
|
||||
| OS-SRV-USG:terminated_at | - |
|
||||
| accessIPv4 | |
|
||||
| accessIPv6 | |
|
||||
| adminPass | HLXGznYqXM4J |
|
||||
| config_drive | |
|
||||
| created | 2016-02-15T19:42:44Z |
|
||||
| flavor | m1.nano (42) |
|
||||
| hostId | |
|
||||
| id | 71fb4ac8-eed8-4644-8113-0641962bb125 |
|
||||
| image | cirros-0.3.5-x86_64-uec (b9d981eb-d21c-4ce2-9dbc-dd38f3d9015f) |
|
||||
| key_name | - |
|
||||
| locked | False |
|
||||
| metadata | {} |
|
||||
| name | my_vm |
|
||||
| os-extended-volumes:volumes_attached | [] |
|
||||
| progress | 0 |
|
||||
| security_groups | default |
|
||||
| status | BUILD |
|
||||
| tenant_id | d5660cb1e6934612a01b4fb2fb630725 |
|
||||
| updated | 2016-02-15T19:42:44Z |
|
||||
| user_id | 8bb6e578cba24e7db9d3810633124525 |
|
||||
+--------------------------------------+----------------------------------------------------------------+
|
||||
|
||||
$ openstack server list
|
||||
+--------------------------------------+-------+--------+------------+-------------+----------------------------------------------------------+------------+
|
||||
| ID | Name | Status | Task State | Power State | Networks | Image Name |
|
||||
+--------------------------------------+-------+--------+------------+-------------+----------------------------------------------------------+------------+
|
||||
| 71fb4ac8-eed8-4644-8113-0641962bb125 | my_vm | ACTIVE | - | Running | private=fda4:653e:71b0:0:f816:3eff:fe24:8614, 192.0.2.16 | cirros |
|
||||
+--------------------------------------+-------+--------+------------+-------------+----------------------------------------------------------+------------+
|
||||
|
||||
$ neutron port-list --device_id 71fb4ac8-eed8-4644-8113-0641962bb125
|
||||
+--------------------------------------+------+-------------------+-------------------------------------------------------------------------------------------------------------+
|
||||
| id | name | mac_address | fixed_ips |
|
||||
+--------------------------------------+------+-------------------+-------------------------------------------------------------------------------------------------------------+
|
||||
| 1e7033fb-8e9d-458b-89ed-8312cafcfdcb | | fa:16:3e:24:86:14 | {"subnet_id": "5b9282a1-0be1-4ade-b478-7868ad2a16ff", "ip_address": "192.0.2.16"} |
|
||||
| | | | {"subnet_id": "43414c53-62ae-49bc-aa6c-c9dd7705818a", "ip_address": "fda4:653e:71b0:0:f816:3eff:fe24:8614"} |
|
||||
+--------------------------------------+------+-------------------+-------------------------------------------------------------------------------------------------------------+
|
||||
|
||||
$ neutron port-show 1e7033fb-8e9d-458b-89ed-8312cafcfdcb
|
||||
+-----------------------+-------------------------------------------------------------------------------------------------------------+
|
||||
| Field | Value |
|
||||
+-----------------------+-------------------------------------------------------------------------------------------------------------+
|
||||
| admin_state_up | True |
|
||||
| allowed_address_pairs | |
|
||||
| binding:vnic_type | normal |
|
||||
| device_id | 71fb4ac8-eed8-4644-8113-0641962bb125 |
|
||||
| device_owner | compute:None |
|
||||
| dns_assignment | {"hostname": "my-vm", "ip_address": "192.0.2.16", "fqdn": "my-vm.example.org."} |
|
||||
| | {"hostname": "my-vm", "ip_address": "fda4:653e:71b0:0:f816:3eff:fe24:8614", "fqdn": "my-vm.example.org."} |
|
||||
| dns_name | my-vm |
|
||||
| extra_dhcp_opts | |
|
||||
| fixed_ips | {"subnet_id": "5b9282a1-0be1-4ade-b478-7868ad2a16ff", "ip_address": "192.0.2.16"} |
|
||||
| | {"subnet_id": "43414c53-62ae-49bc-aa6c-c9dd7705818a", "ip_address": "fda4:653e:71b0:0:f816:3eff:fe24:8614"} |
|
||||
| id | 1e7033fb-8e9d-458b-89ed-8312cafcfdcb |
|
||||
| mac_address | fa:16:3e:24:86:14 |
|
||||
| name | |
|
||||
| network_id | 38c5e950-b450-4c30-83d4-ee181c28aad3 |
|
||||
| port_security_enabled | True |
|
||||
| security_groups | 1f0ddd73-7e3c-48bd-a64c-7ded4fe0e635 |
|
||||
| status | ACTIVE |
|
||||
| tenant_id | d5660cb1e6934612a01b4fb2fb630725 |
|
||||
+-----------------------+-------------------------------------------------------------------------------------------------------------+
|
||||
|
||||
$ designate record-list example.org.
|
||||
+--------------------------------------+------+--------------+-----------------------------------------------------------------------+
|
||||
| id | type | name | data |
|
||||
+--------------------------------------+------+--------------+-----------------------------------------------------------------------+
|
||||
| 10a36008-6ecf-47c3-b321-05652a929b04 | SOA | example.org. | ns1.devstack.org. malavall.us.ibm.com. 1455565110 3600 600 86400 3600 |
|
||||
| 56ca0b88-e343-4c98-8faa-19746e169baf | NS | example.org. | ns1.devstack.org. |
|
||||
+--------------------------------------+------+--------------+-----------------------------------------------------------------------+
|
||||
|
||||
$ neutron floatingip-create 41fa3995-9e4a-4cd9-bb51-3e5424f2ff2a \
|
||||
--dns_domain example.org. --dns_name my-floatingip
|
||||
Created a new floatingip:
|
||||
+---------------------+--------------------------------------+
|
||||
| Field | Value |
|
||||
+---------------------+--------------------------------------+
|
||||
| dns_domain | example.org. |
|
||||
| dns_name | my-floatingip |
|
||||
| fixed_ip_address | |
|
||||
| floating_ip_address | 198.51.100.5 |
|
||||
| floating_network_id | 41fa3995-9e4a-4cd9-bb51-3e5424f2ff2a |
|
||||
| id | 9f23a9c6-eceb-42eb-9f45-beb58c473728 |
|
||||
| port_id | |
|
||||
| router_id | |
|
||||
| status | DOWN |
|
||||
| tenant_id | d5660cb1e6934612a01b4fb2fb630725 |
|
||||
+---------------------+--------------------------------------+
|
||||
|
||||
$ designate record-list example.org.
|
||||
+--------------------------------------+------+----------------------------+-----------------------------------------------------------------------+
|
||||
| id | type | name | data |
|
||||
+--------------------------------------+------+----------------------------+-----------------------------------------------------------------------+
|
||||
| 10a36008-6ecf-47c3-b321-05652a929b04 | SOA | example.org. | ns1.devstack.org. malavall.us.ibm.com. 1455566486 3600 600 86400 3600 |
|
||||
| 56ca0b88-e343-4c98-8faa-19746e169baf | NS | example.org. | ns1.devstack.org. |
|
||||
| 8884c56f-3ef5-446e-ae4d-8053cc8bc2b4 | A    | my-floatingip.example.org. | 198.51.100.5                                                           |
|
||||
+--------------------------------------+------+----------------------------+-----------------------------------------------------------------------+
|
||||
|
||||
Note that in this use case:

* The ``dns_name`` and ``dns_domain`` attributes of a floating IP must be
  specified together on creation. They cannot be assigned to the floating IP
  separately.
* The ``dns_name`` and ``dns_domain`` of a floating IP take precedence, for
  the purposes of being published in the external DNS service, over the
  ``dns_name`` of its associated port and the ``dns_domain`` of the port's
  network, whether they are specified or not. Only the ``dns_name`` and the
  ``dns_domain`` of the floating IP are published in the external DNS
  service.
|
||||
|
||||
The following are the PTR records created for this example. Note that for
IPv4, the value of ``ipv4_ptr_zone_prefix_size`` is 24. For more details, see
:ref:`config-dns-int-ext-serv`:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ designate record-list 100.51.198.in-addr.arpa.
|
||||
+--------------------------------------+------+----------------------------+---------------------------------------------------------------------+
|
||||
| id | type | name | data |
|
||||
+--------------------------------------+------+----------------------------+---------------------------------------------------------------------+
|
||||
| 2dd0b894-25fa-4563-9d32-9f13bd67f329 | NS | 100.51.198.in-addr.arpa. | ns1.devstack.org. |
|
||||
| 47b920f1-5eff-4dfa-9616-7cb5b7cb7ca6 | SOA | 100.51.198.in-addr.arpa. | ns1.devstack.org. admin.example.org. 1455566487 3600 600 86400 3600 |
|
||||
| 589a0171-e77a-4ab6-ba6e-23114f2b9366 | PTR | 5.100.51.198.in-addr.arpa. | my-floatingip.example.org. |
|
||||
+--------------------------------------+------+----------------------------+---------------------------------------------------------------------+
|
||||
|
||||
.. _config-dns-performance-considerations:
|
||||
|
||||
Performance considerations
--------------------------

Only for :ref:`config-dns-use-case-1`, if the port binding extension is
enabled in the Networking service, the Compute service will execute one
additional port update operation when allocating the port for the instance
during the boot process. This may have a noticeable adverse effect on the
performance of the boot process, which must be evaluated before adopting this
use case.
|
||||
|
||||
.. _config-dns-int-ext-serv:
|
||||
|
||||
Configuring OpenStack Networking for integration with an external DNS service
-------------------------------------------------------------------------------

The first step in configuring the integration with an external DNS service is
to enable the functionality described in
:ref:`config-dns-int-dns-resolution`. Once this is done, the user has to take
the following steps and restart ``neutron-server`` (a sample restart command
is shown after the configuration example below).
|
||||
|
||||
#. Edit the ``[DEFAULT]`` section of ``/etc/neutron/neutron.conf`` and specify
   the external DNS service driver to be used in the ``external_dns_driver``
   parameter. The valid options are defined in the
   ``neutron.services.external_dns_drivers`` namespace. The following example
   shows how to set up the driver for the OpenStack DNS service:

   .. code-block:: ini

      external_dns_driver = designate
|
||||
|
||||
#. If the OpenStack DNS service is the target external DNS service, the
   ``[designate]`` section of ``/etc/neutron/neutron.conf`` must define the
   following parameters:

   * ``url``: the OpenStack DNS service public endpoint URL.
   * ``allow_reverse_dns_lookup``: a boolean value specifying whether to
     enable the creation of reverse lookup (PTR) records.
   * ``admin_auth_url``: the Identity service admin authorization endpoint
     URL. This endpoint will be used by the Networking service to
     authenticate as an admin user to create and update reverse lookup (PTR)
     zones.
   * ``admin_username``: the admin user to be used by the Networking service
     to create and update reverse lookup (PTR) zones.
   * ``admin_password``: the password of the admin user to be used by the
     Networking service to create and update reverse lookup (PTR) zones.
   * ``admin_tenant_name``: the project of the admin user to be used by the
     Networking service to create and update reverse lookup (PTR) zones.
   * ``ipv4_ptr_zone_prefix_size``: the size in bits of the prefix for the
     IPv4 reverse lookup (PTR) zones.
   * ``ipv6_ptr_zone_prefix_size``: the size in bits of the prefix for the
     IPv6 reverse lookup (PTR) zones.
   * ``insecure``: disable SSL certificate validation. By default,
     certificates are validated.
   * ``cafile``: path to a valid Certificate Authority (CA) certificate.
   * ``auth_uri``: the unversioned public endpoint of the Identity service.
   * ``project_domain_id``: the domain ID of the admin user's project.
   * ``user_domain_id``: the domain ID of the admin user to be used by the
     Networking service.
   * ``project_name``: the project of the admin user to be used by the
     Networking service.
   * ``username``: the admin user to be used by the Networking service to
     create and update reverse lookup (PTR) zones.
   * ``password``: the password of the admin user to be used by the
     Networking service.
|
||||
|
||||
   The following is an example:

   .. code-block:: ini

      [designate]
      url = http://192.0.2.240:9001/v2
      auth_uri = http://192.0.2.240:5000
      admin_auth_url = http://192.0.2.240:35357
      admin_username = neutron
      admin_password = PASSWORD
      admin_tenant_name = service
      project_domain_id = default
      user_domain_id = default
      project_name = service
      username = neutron
      password = PASSWORD
      allow_reverse_dns_lookup = True
      ipv4_ptr_zone_prefix_size = 24
      ipv6_ptr_zone_prefix_size = 116
      cafile = /etc/ssl/certs/my_ca_cert
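
After the driver configuration is in place, restart ``neutron-server`` so the
changes take effect. The exact service name and command depend on the
distribution; on systemd-based hosts, for example, something like the
following may be used:

.. code-block:: console

   # systemctl restart neutron-server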
|
||||
|
||||
Configuration of the externally accessible network for use case 1
-------------------------------------------------------------------

In :ref:`config-dns-use-case-1`, the externally accessible network must
meet the following requirements:

* The network cannot have the ``router:external`` attribute set to ``True``.
* The network type can be FLAT, VLAN, GRE, VXLAN or GENEVE.
* For network types VLAN, GRE, VXLAN or GENEVE, the segmentation ID must be
  outside the ranges assigned to project networks.
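
The following is a minimal sketch of creating such a network with the
``neutron`` client; the network name, physical network name, and VLAN ID are
illustrative and must match the ML2 configuration of the deployment:

.. code-block:: console

   $ neutron net-create external --shared \
     --provider:network_type vlan \
     --provider:physical_network provider \
     --provider:segmentation_id 2016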
|
97
doc/source/admin/config-dns-res.rst
Normal file
@ -0,0 +1,97 @@
|
||||
.. _config-dns-res:
|
||||
|
||||
=============================
|
||||
Name resolution for instances
|
||||
=============================
|
||||
|
||||
The Networking service offers several methods to configure name
|
||||
resolution (DNS) for instances. Most deployments should implement
|
||||
case 1 or 2. Case 3 requires security considerations to prevent
|
||||
leaking internal DNS information to instances.
|
||||
|
||||
Case 1: Each virtual network uses unique DNS resolver(s)
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
In this case, the DHCP agent offers one or more unique DNS resolvers
|
||||
to instances via DHCP on each virtual network. You can configure a DNS
|
||||
resolver when creating or updating a subnet. To configure more than
|
||||
one DNS resolver, use a comma between each value.
|
||||
|
||||
* Configure a DNS resolver when creating a subnet.
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ neutron subnet-create --dns-nameserver DNS_RESOLVER
|
||||
|
||||
Replace ``DNS_RESOLVER`` with the IP address of a DNS resolver reachable
|
||||
from the virtual network. For example:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ neutron subnet-create --dns-nameserver 203.0.113.8,198.51.100.53
|
||||
|
||||
.. note::
|
||||
|
||||
This command requires other options outside the scope of this
|
||||
content.
|
||||
|
||||
* Configure a DNS resolver on an existing subnet.
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ neutron subnet-update --dns-nameserver DNS_RESOLVER SUBNET_ID_OR_NAME
|
||||
|
||||
Replace ``DNS_RESOLVER`` with the IP address of a DNS resolver reachable
|
||||
from the virtual network and ``SUBNET_ID_OR_NAME`` with the UUID or name
|
||||
of the subnet. For example, using the ``selfservice`` subnet:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ neutron subnet-update --dns-nameserver 203.0.113.8,198.51.100.53 selfservice
|
||||
|
||||
Case 2: All virtual networks use same DNS resolver(s)
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
In this case, the DHCP agent offers the same DNS resolver(s) to
|
||||
instances via DHCP on all virtual networks.
|
||||
|
||||
* In the ``dhcp_agent.ini`` file, configure one or more DNS resolvers. To
|
||||
configure more than one DNS resolver, use a comma between each value.
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[DEFAULT]
|
||||
dnsmasq_dns_servers = DNS_RESOLVER
|
||||
|
||||
Replace ``DNS_RESOLVER`` with the IP address of a DNS resolver reachable
|
||||
from all virtual networks. For example:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[DEFAULT]
|
||||
dnsmasq_dns_servers = 203.0.113.8, 198.51.100.53
|
||||
|
||||
.. note::
|
||||
|
||||
You must configure this option for all eligible DHCP agents and
|
||||
restart them to activate the values.
|
||||
|
||||
Case 3: All virtual networks use DNS resolver(s) on the host
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
In this case, the DHCP agent offers the DNS resolver(s) in the
|
||||
``resolv.conf`` file on the host running the DHCP agent via DHCP to
|
||||
instances on all virtual networks.
|
||||
|
||||
* In the ``dhcp_agent.ini`` file, enable advertisement of the DNS resolver(s)
|
||||
on the host.
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[DEFAULT]
|
||||
dnsmasq_local_resolv = True
|
||||
|
||||
.. note::
|
||||
|
||||
You must configure this option for all eligible DHCP agents and
|
||||
restart them to activate the values.
|
195
doc/source/admin/config-dvr-ha-snat.rst
Normal file
@ -0,0 +1,195 @@
|
||||
.. _config-dvr-snat-ha-ovs:
|
||||
|
||||
=====================================
|
||||
Distributed Virtual Routing with VRRP
|
||||
=====================================
|
||||
|
||||
:ref:`deploy-ovs-ha-dvr` supports augmentation
|
||||
using Virtual Router Redundancy Protocol (VRRP). Using this configuration,
|
||||
virtual routers support both the ``--distributed`` and ``--ha`` options.
|
||||
|
||||
Similar to legacy HA routers, DVR/SNAT HA routers provide a quick fail over of
|
||||
the SNAT service to a backup DVR/SNAT router on an l3-agent running on a
|
||||
different node.
|
||||
|
||||
SNAT high availability is implemented in a manner similar to the
|
||||
:ref:`deploy-lb-ha-vrrp` and :ref:`deploy-ovs-ha-vrrp` examples where
|
||||
``keepalived`` uses VRRP to provide quick failover of SNAT services.
|
||||
|
||||
During normal operation, the master router periodically transmits *heartbeat*
|
||||
packets over a hidden project network that connects all HA routers for a
|
||||
particular project.
|
||||
|
||||
If the DVR/SNAT backup router stops receiving these packets, it assumes failure
|
||||
of the master DVR/SNAT router and promotes itself to master router by
|
||||
configuring IP addresses on the interfaces in the ``snat`` namespace. In
|
||||
environments with more than one backup router, the rules of VRRP are followed
|
||||
to select a new master router.
|
||||
|
||||
.. warning::
|
||||
|
||||
There is a known bug with ``keepalived`` v1.2.15 and earlier which can
|
||||
cause packet loss when ``max_l3_agents_per_router`` is set to 3 or more.
|
||||
Therefore, we recommend that you upgrade to ``keepalived`` v1.2.16
|
||||
or greater when using this feature.
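
To check which ``keepalived`` version is installed on the network nodes
before enabling this feature, the standard version flag can be used (assuming
``keepalived`` is on the system path; the output format varies by build):

.. code-block:: console

   # keepalived --version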
|
||||
|
||||
.. note::
|
||||
|
||||
Experimental feature or incomplete documentation.
|
||||
|
||||
|
||||
Configuration example
|
||||
~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
The basic deployment model consists of one controller node, two or more network
|
||||
nodes, and multiple computes nodes.
|
||||
|
||||
Controller node configuration
|
||||
-----------------------------
|
||||
|
||||
#. Add the following to ``/etc/neutron/neutron.conf``:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[DEFAULT]
|
||||
core_plugin = ml2
|
||||
service_plugins = router
|
||||
allow_overlapping_ips = True
|
||||
router_distributed = True
|
||||
l3_ha = True
|
||||
l3_ha_net_cidr = 169.254.192.0/18
|
||||
max_l3_agents_per_router = 3
|
||||
|
||||
When the ``router_distributed = True`` flag is configured, routers created
|
||||
by all users are distributed. Without it, only privileged users can create
|
||||
distributed routers by using ``--distributed True``.
|
||||
|
||||
Similarly, when the ``l3_ha = True`` flag is configured, routers created
|
||||
by all users default to HA.
|
||||
|
||||
It follows that with these two flags set to ``True`` in the configuration
|
||||
file, routers created by all users will default to distributed HA routers
|
||||
(DVR HA).
|
||||
|
||||
The same can explicitly be accomplished by a user with administrative
|
||||
credentials setting the flags in the :command:`neutron router-create`
|
||||
command:
|
||||
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ neutron router-create name-of-router --distributed=True --ha=True
|
||||
|
||||
.. note::
|
||||
|
||||
   The ``max_l3_agents_per_router`` option determines the number of backup
   DVR/SNAT routers which will be instantiated.
|
||||
|
||||
#. Add the following to ``/etc/neutron/plugins/ml2/ml2_conf.ini``:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[ml2]
|
||||
type_drivers = flat,vxlan
|
||||
tenant_network_types = vxlan
|
||||
mechanism_drivers = openvswitch,l2population
|
||||
extension_drivers = port_security
|
||||
|
||||
[ml2_type_flat]
|
||||
flat_networks = external
|
||||
|
||||
[ml2_type_vxlan]
|
||||
vni_ranges = MIN_VXLAN_ID:MAX_VXLAN_ID
|
||||
|
||||
Replace ``MIN_VXLAN_ID`` and ``MAX_VXLAN_ID`` with VXLAN ID minimum and
|
||||
maximum values suitable for your environment.
|
||||
|
||||
.. note::
|
||||
|
||||
The first value in the ``tenant_network_types`` option becomes the
|
||||
default project network type when a regular user creates a network.
|
||||
|
||||
Network nodes
|
||||
-------------
|
||||
|
||||
#. Configure the Open vSwitch agent. Add the following to
|
||||
``/etc/neutron/plugins/ml2/ml2_conf.ini``:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[ovs]
|
||||
local_ip = TUNNEL_INTERFACE_IP_ADDRESS
|
||||
bridge_mappings = external:br-ex
|
||||
|
||||
[agent]
|
||||
enable_distributed_routing = True
|
||||
tunnel_types = vxlan
|
||||
l2_population = True
|
||||
|
||||
Replace ``TUNNEL_INTERFACE_IP_ADDRESS`` with the IP address of the interface
|
||||
that handles VXLAN project networks.
|
||||
|
||||
#. Configure the L3 agent. Add the following to ``/etc/neutron/l3_agent.ini``:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[DEFAULT]
|
||||
ha_vrrp_auth_password = password
|
||||
interface_driver = openvswitch
|
||||
external_network_bridge =
|
||||
agent_mode = dvr_snat
|
||||
|
||||
.. note::
|
||||
|
||||
The ``external_network_bridge`` option intentionally contains
|
||||
no value.
|
||||
|
||||
Compute nodes
|
||||
-------------
|
||||
|
||||
#. Configure the Open vSwitch agent. Add the following to
|
||||
``/etc/neutron/plugins/ml2/ml2_conf.ini``:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[ovs]
|
||||
local_ip = TUNNEL_INTERFACE_IP_ADDRESS
|
||||
bridge_mappings = external:br-ex
|
||||
|
||||
[agent]
|
||||
enable_distributed_routing = True
|
||||
tunnel_types = vxlan
|
||||
l2_population = True
|
||||
|
||||
[securitygroup]
|
||||
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
|
||||
|
||||
#. Configure the L3 agent. Add the following to ``/etc/neutron/l3_agent.ini``:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[DEFAULT]
|
||||
interface_driver = openvswitch
|
||||
external_network_bridge =
|
||||
agent_mode = dvr
|
||||
|
||||
Replace ``TUNNEL_INTERFACE_IP_ADDRESS`` with the IP address of the interface
|
||||
that handles VXLAN project networks.
|
||||
|
||||
Keepalived VRRP health check
|
||||
----------------------------
|
||||
|
||||
.. include:: shared/keepalived-vrrp-healthcheck.txt
|
||||
|
||||
Known limitations
|
||||
~~~~~~~~~~~~~~~~~
|
||||
|
||||
* Migrating a router from distributed only, HA only, or legacy to distributed
|
||||
HA is not supported at this time. The router must be created as distributed
|
||||
HA.
|
||||
The reverse direction is also not supported. You cannot reconfigure a
|
||||
distributed HA router to be only distributed, only HA, or legacy.
|
||||
|
||||
* There are certain scenarios where l2pop and distributed HA routers do not
|
||||
interact in an expected manner. These situations are the same that affect HA
|
||||
only routers and l2pop.
|
49
doc/source/admin/config-ipam.rst
Normal file
@ -0,0 +1,49 @@
|
||||
.. _config-ipam:
|
||||
|
||||
==================
|
||||
IPAM configuration
|
||||
==================
|
||||
|
||||
.. note::
|
||||
|
||||
Experimental feature or incomplete documentation.
|
||||
|
||||
Starting with the Liberty release, OpenStack Networking includes a pluggable
|
||||
interface for the IP Address Management (IPAM) function. This interface creates
|
||||
a driver framework for the allocation and de-allocation of subnets and IP
|
||||
addresses, enabling the integration of alternate IPAM implementations or
|
||||
third-party IP Address Management systems.
|
||||
|
||||
The basics
|
||||
~~~~~~~~~~
|
||||
|
||||
In Liberty and Mitaka, the IPAM implementation within OpenStack Networking
|
||||
provided a pluggable and non-pluggable flavor. As of Newton, the non-pluggable
|
||||
flavor is no longer available. Instead, it is completely replaced with a
|
||||
reference driver implementation of the pluggable framework. All data will
|
||||
be automatically migrated during the upgrade process, unless you have
|
||||
previously configured a pluggable IPAM driver. In that case, no migration
|
||||
is necessary.
|
||||
|
||||
To configure a driver other than the reference driver, specify it
in the ``neutron.conf`` file. Do this after the migration is complete.
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
ipam_driver = ipam-driver-name
|
||||
|
||||
There is no need to specify any value if you wish to use the reference
|
||||
driver, though specifying ``internal`` will explicitly choose the reference
|
||||
driver. The documentation for any alternate drivers will include the value to
|
||||
use when specifying that driver.
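
For instance, a deployment that prefers to state the choice explicitly rather
than rely on the default could carry something like the following in
``neutron.conf`` (a minimal sketch; ``internal`` is the reference driver
alias mentioned above):

.. code-block:: ini

   [DEFAULT]
   ipam_driver = internal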
|
||||
|
||||
Known limitations
|
||||
~~~~~~~~~~~~~~~~~
|
||||
|
||||
* The driver interface is designed to allow separate drivers for each
|
||||
subnet pool. However, the current implementation allows only a single
|
||||
IPAM driver system-wide.
|
||||
* Third-party drivers must provide their own migration mechanisms to convert
|
||||
existing OpenStack installations to their IPAM.
|
750
doc/source/admin/config-ipv6.rst
Normal file
@ -0,0 +1,750 @@
|
||||
.. _config-ipv6:
|
||||
|
||||
====
|
||||
IPv6
|
||||
====
|
||||
|
||||
This section describes the following items:
|
||||
|
||||
* How to enable dual-stack (IPv4 and IPv6 enabled) instances.
|
||||
* How those instances receive an IPv6 address.
|
||||
* How those instances communicate across a router to other subnets or
|
||||
the internet.
|
||||
* How those instances interact with other OpenStack services.
|
||||
|
||||
Enabling a dual-stack network in OpenStack Networking simply requires
creating a subnet with the ``ip_version`` field set to ``6`` and the IPv6
attributes (``ipv6_ra_mode`` and ``ipv6_address_mode``) set. The
``ipv6_ra_mode`` and ``ipv6_address_mode`` attributes are described in detail
in the next section. Finally, the subnet's ``cidr`` needs to be provided.
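
As a rough sketch of what this looks like with the ``neutron`` client,
assuming an existing network named ``selfservice`` and an illustrative prefix
of ``2001:db8:1::/64``, an IPv6 subnet using SLAAC could be created with:

.. code-block:: console

   $ neutron subnet-create --ip-version 6 --ipv6-ra-mode slaac \
     --ipv6-address-mode slaac --name selfservice-v6 \
     selfservice 2001:db8:1::/64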
|
||||
|
||||
This section does not include the following items:
|
||||
|
||||
* Single stack IPv6 project networking
|
||||
* OpenStack control communication between servers and services over an IPv6
|
||||
network.
|
||||
* Connection to the OpenStack APIs via an IPv6 transport network
|
||||
* IPv6 multicast
|
||||
* IPv6 support in conjunction with any out of tree routers, switches, services
|
||||
or agents whether in physical or virtual form factors.
|
||||
|
||||
Neutron subnets and the IPv6 API attributes
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
As of Juno, the OpenStack Networking service (neutron) provides two
|
||||
new attributes to the subnet object, which allows users of the API to
|
||||
configure IPv6 subnets.
|
||||
|
||||
There are two IPv6 attributes:
|
||||
|
||||
* ``ipv6_ra_mode``
|
||||
* ``ipv6_address_mode``
|
||||
|
||||
These attributes can be set to the following values:
|
||||
|
||||
* ``slaac``
|
||||
* ``dhcpv6-stateful``
|
||||
* ``dhcpv6-stateless``
|
||||
|
||||
The attributes can also be left unset.
|
||||
|
||||
|
||||
IPv6 addressing
|
||||
---------------
|
||||
|
||||
The ``ipv6_address_mode`` attribute is used to control how addressing is
|
||||
handled by OpenStack. There are a number of different ways that guest
|
||||
instances can obtain an IPv6 address, and this attribute exposes these
|
||||
choices to users of the Networking API.
|
||||
|
||||
|
||||
Router advertisements
|
||||
---------------------
|
||||
|
||||
The ``ipv6_ra_mode`` attribute is used to control router
|
||||
advertisements for a subnet.
|
||||
|
||||
The IPv6 protocol uses Internet Control Message Protocol for IPv6
(ICMPv6) packets as a way to distribute information about networking.
ICMPv6 packets with the type field set to 134 are called "Router
Advertisement" packets; they contain information about the router and
the route that guest instances can use to send network traffic.
|
||||
|
||||
The ``ipv6_ra_mode`` attribute is used to specify whether the Networking
service should generate Router Advertisement packets for a subnet.
|
||||
|
||||
ipv6_ra_mode and ipv6_address_mode combinations
|
||||
-----------------------------------------------
|
||||
|
||||
.. list-table::
|
||||
:header-rows: 1
|
||||
:widths: 10 10 10 10 60
|
||||
|
||||
* - ipv6 ra mode
|
||||
- ipv6 address mode
|
||||
- radvd A,M,O
|
||||
- External Router A,M,O
|
||||
- Description
|
||||
* - *N/S*
|
||||
- *N/S*
|
||||
- Off
|
||||
- Not Defined
|
||||
- Backwards compatibility with pre-Juno IPv6 behavior.
|
||||
* - *N/S*
|
||||
- slaac
|
||||
- Off
|
||||
- 1,0,0
|
||||
- Guest instance obtains IPv6 address from non-OpenStack router using SLAAC.
|
||||
* - *N/S*
|
||||
- dhcpv6-stateful
|
||||
- Off
|
||||
- 0,1,1
|
||||
- Not currently implemented in the reference implementation.
|
||||
* - *N/S*
|
||||
- dhcpv6-stateless
|
||||
- Off
|
||||
- 1,0,1
|
||||
- Not currently implemented in the reference implementation.
|
||||
* - slaac
|
||||
- *N/S*
|
||||
- 1,0,0
|
||||
- Off
|
||||
- Not currently implemented in the reference implementation.
|
||||
* - dhcpv6-stateful
|
||||
- *N/S*
|
||||
- 0,1,1
|
||||
- Off
|
||||
- Not currently implemented in the reference implementation.
|
||||
* - dhcpv6-stateless
|
||||
- *N/S*
|
||||
- 1,0,1
|
||||
- Off
|
||||
- Not currently implemented in the reference implementation.
|
||||
* - slaac
|
||||
- slaac
|
||||
- 1,0,0
|
||||
- Off
|
||||
- Guest instance obtains IPv6 address from OpenStack managed radvd using SLAAC.
|
||||
* - dhcpv6-stateful
|
||||
- dhcpv6-stateful
|
||||
- 0,1,1
|
||||
- Off
|
||||
- Guest instance obtains IPv6 address from dnsmasq using DHCPv6
|
||||
stateful and optional info from dnsmasq using DHCPv6.
|
||||
* - dhcpv6-stateless
|
||||
- dhcpv6-stateless
|
||||
- 1,0,1
|
||||
- Off
|
||||
- Guest instance obtains IPv6 address from OpenStack managed
|
||||
radvd using SLAAC and optional info from dnsmasq using
|
||||
DHCPv6.
|
||||
* - slaac
|
||||
- dhcpv6-stateful
|
||||
-
|
||||
-
|
||||
- *Invalid combination.*
|
||||
* - slaac
|
||||
- dhcpv6-stateless
|
||||
-
|
||||
-
|
||||
- *Invalid combination.*
|
||||
* - dhcpv6-stateful
|
||||
- slaac
|
||||
-
|
||||
-
|
||||
- *Invalid combination.*
|
||||
* - dhcpv6-stateful
|
||||
- dhcpv6-stateless
|
||||
-
|
||||
-
|
||||
- *Invalid combination.*
|
||||
* - dhcpv6-stateless
|
||||
- slaac
|
||||
-
|
||||
-
|
||||
- *Invalid combination.*
|
||||
* - dhcpv6-stateless
|
||||
- dhcpv6-stateful
|
||||
-
|
||||
-
|
||||
- *Invalid combination.*
|
||||
|
||||
Project network considerations
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
Dataplane
|
||||
---------
|
||||
|
||||
Both the Linux bridge and the Open vSwitch dataplane modules support
forwarding IPv6 packets amongst the guests and router ports. Similar to
IPv4, no special configuration or setup is required to enable the
dataplane to properly forward packets from the source to the destination
using IPv6. Note that these dataplanes will forward Link-local Address
(LLA) packets between hosts on the same network without any participation
or setup by OpenStack components, once the ports are all connected and
MAC addresses are learned.
|
||||
|
||||
Addresses for subnets
|
||||
---------------------
|
||||
|
||||
There are three methods currently implemented for a subnet to get its
|
||||
``cidr`` in OpenStack:
|
||||
|
||||
#. Direct assignment during subnet creation via command line or Horizon
|
||||
#. Referencing a subnet pool during subnet creation
|
||||
#. Using a Prefix Delegation (PD) client to request a prefix for a
|
||||
subnet from a PD server
|
||||
|
||||
In the future, additional techniques could be used to allocate subnets
|
||||
to projects, for example, use of an external IPAM module.
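
As a sketch of the second method above, assuming a subnet pool named
``shared-v6-pool`` and a network named ``selfservice`` already exist:

.. code-block:: console

   $ openstack subnet create --network selfservice --ip-version 6 \
     --subnet-pool shared-v6-pool selfservice-v6-from-pool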
|
||||
|
||||
Address modes for ports
|
||||
-----------------------
|
||||
|
||||
.. note::
|
||||
|
||||
An external DHCPv6 server could, in theory, override the full
address OpenStack assigns based on the EUI-64 address, but doing so
would not be wise as it would not be consistent throughout the system.
|
||||
|
||||
IPv6 supports three different addressing schemes for address configuration and
|
||||
for providing optional network information.
|
||||
|
||||
Stateless Address Auto Configuration (SLAAC)
|
||||
Address configuration using Router Advertisement (RA).
|
||||
|
||||
DHCPv6-stateless
|
||||
Address configuration using RA and optional information
|
||||
using DHCPv6.
|
||||
|
||||
DHCPv6-stateful
|
||||
Address configuration and optional information using DHCPv6.
|
||||
|
||||
OpenStack can be set up such that OpenStack Networking directly
provides RA, DHCP relay, and DHCPv6 address and optional information
for its networks, or this can be delegated to external routers and
services based on the drivers that are in use. Two neutron subnet
attributes, ``ipv6_ra_mode`` and ``ipv6_address_mode``, determine how
IPv6 addressing and network information is provided to project
instances:
|
||||
|
||||
* ``ipv6_ra_mode``: Determines who sends RA.
|
||||
* ``ipv6_address_mode``: Determines how instances obtain IPv6 address,
|
||||
default gateway, or optional information.
|
||||
|
||||
For the above two attributes to be effective, ``enable_dhcp`` of the
|
||||
subnet object must be set to True.
|
||||
|
||||
Using SLAAC for addressing
|
||||
--------------------------
|
||||
|
||||
When using SLAAC, the currently supported combinations for ``ipv6_ra_mode`` and
|
||||
``ipv6_address_mode`` are as follows.
|
||||
|
||||
.. list-table::
|
||||
:header-rows: 1
|
||||
:widths: 10 10 50
|
||||
|
||||
* - ipv6_ra_mode
|
||||
- ipv6_address_mode
|
||||
- Result
|
||||
* - Not specified.
|
||||
- SLAAC
|
||||
- Addresses are assigned using EUI-64, and an external router
|
||||
will be used for routing.
|
||||
* - SLAAC
|
||||
- SLAAC
|
||||
- Addresses are assigned using EUI-64, and OpenStack Networking
provides routing.
|
||||
|
||||
Setting ``ipv6_ra_mode`` to ``slaac`` will result in OpenStack Networking
routers being configured to send RA packets when they are created.
|
||||
This results in the following values set for the address configuration
|
||||
flags in the RA messages:
|
||||
|
||||
* Auto Configuration Flag = 1
|
||||
* Managed Configuration Flag = 0
|
||||
* Other Configuration Flag = 0
|
||||
|
||||
New or existing neutron networks that contain a SLAAC-enabled IPv6 subnet will
result in all neutron ports attached to the network receiving IPv6 addresses.
This is because when RA broadcast messages are sent out on a neutron
network, they are received by all IPv6-capable ports on the network,
and each port will then configure an IPv6 address based on the
information contained in the RA packet. In some cases, an IPv6 SLAAC
address will be added to a port in addition to other IPv4 and IPv6 addresses
that the port has already been assigned.
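
To see which addresses the ports on a given network have received, you can
list them; the network name below is a placeholder:

.. code-block:: console

   $ openstack port list --network selfservice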
|
||||
|
||||
DHCPv6
|
||||
------
|
||||
|
||||
For DHCPv6, the currently supported combinations are as
|
||||
follows:
|
||||
|
||||
.. list-table::
|
||||
:header-rows: 1
|
||||
:widths: 10 10 50
|
||||
|
||||
* - ipv6_ra_mode
|
||||
- ipv6_address_mode
|
||||
- Result
|
||||
* - DHCPv6-stateless
|
||||
- DHCPv6-stateless
|
||||
- Addresses are assigned through RAs (see SLAAC above) and optional
|
||||
information is delivered through DHCPv6.
|
||||
* - DHCPv6-stateful
|
||||
- DHCPv6-stateful
|
||||
- Addresses and optional information are assigned using DHCPv6.
|
||||
|
||||
Setting DHCPv6-stateless for ``ipv6_ra_mode`` configures radvd on the
neutron router to send RAs. The list below captures the
values set for the address configuration flags in the RA packet in
this scenario. Similarly, setting DHCPv6-stateless for
``ipv6_address_mode`` configures the neutron DHCP implementation to provide
the additional network information.
|
||||
|
||||
* Auto Configuration Flag = 1
|
||||
* Managed Configuration Flag = 0
|
||||
* Other Configuration Flag = 1
|
||||
|
||||
Setting DHCPv6-stateful for ``ipv6_ra_mode`` configures radvd on the
neutron router to send RAs. The list below captures the
values set for the address configuration flags in the RA packet in
this scenario. Similarly, setting DHCPv6-stateful for
``ipv6_address_mode`` configures the neutron DHCP implementation to provide
addresses and additional network information through DHCPv6.
|
||||
|
||||
* Auto Configuration Flag = 0
|
||||
* Managed Configuration Flag = 1
|
||||
* Other Configuration Flag = 1
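
A minimal sketch of creating a DHCPv6-stateful subnet, assuming a network
named ``selfservice`` and a placeholder CIDR:

.. code-block:: console

   $ openstack subnet create --network selfservice --ip-version 6 \
     --ipv6-ra-mode dhcpv6-stateful --ipv6-address-mode dhcpv6-stateful \
     --subnet-range 2001:db8:2::/64 selfservice-v6-stateful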
|
||||
|
||||
Router support
|
||||
~~~~~~~~~~~~~~
|
||||
|
||||
The behavior of the neutron router for IPv6 is different from IPv4 in
a few ways.
|
||||
|
||||
Internal router ports that act as default gateway ports for a network will
share a common port for all IPv6 subnets associated with the network. This
implies that there will be one IPv6 internal router interface with multiple
IPv6 addresses, one from each of the IPv6 subnets associated with the network,
and a separate IPv4 internal router interface for the IPv4 subnet. On the
other hand, external router ports are allowed to have a dual-stack
configuration with both an IPv4 and an IPv6 address assigned to them.
|
||||
|
||||
Neutron project networks that are assigned Global Unicast Address (GUA)
prefixes and addresses do not require NAT on the neutron router external
gateway port to access the outside world. As a consequence of the lack of
NAT, the external router port does not require a GUA to send and receive
traffic to and from external networks. This implies that a GUA IPv6 subnet
prefix is not necessarily needed for the neutron external network. By
default, an IPv6 LLA associated with the external gateway port can be used
for routing purposes. To handle this scenario, the implementation of the
router-gateway-set API in neutron has been modified so that an IPv6 subnet
is not required for the external network that is associated with the neutron
router. The LLA address of the upstream router can be learned in two ways.
|
||||
|
||||
#. In the absence of upstream RA support, the ``ipv6_gateway`` flag can be set
to the external router gateway LLA in the neutron L3 agent configuration
file. This also requires that no subnet is associated with that port.
|
||||
#. The upstream router can send an RA and the neutron router will
|
||||
automatically learn the next-hop LLA, provided again that no subnet is
|
||||
assigned and the ``ipv6_gateway`` flag is not set.
|
||||
|
||||
Effectively, the ``ipv6_gateway`` flag takes precedence over an RA that
is received from the upstream router. If it is desired to use a GUA
next hop, that is accomplished by allocating a subnet to the external
router port and assigning the upstream router's GUA address as the
gateway for the subnet.
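
A minimal sketch of the first approach, assuming the L3 agent configuration
file is ``/etc/neutron/l3_agent.ini`` and the upstream router's LLA is
``fe80::1``:

.. code-block:: ini

   [DEFAULT]
   # LLA of the upstream router used as the default gateway for IPv6
   ipv6_gateway = fe80::1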
|
||||
|
||||
.. note::
|
||||
|
||||
It should be possible for projects to communicate with each other
|
||||
on an isolated network (a network without a router port) using LLA
|
||||
with little to no participation on the part of OpenStack. The authors
|
||||
of this section have not proven that to be true for all scenarios.
|
||||
|
||||
.. note::
|
||||
|
||||
When using the neutron L3 agent in a configuration where it is
|
||||
auto-configuring an IPv6 address via SLAAC, and the agent is
|
||||
learning its default IPv6 route from the ICMPv6 Router Advertisement,
|
||||
it may be necessary to set the
|
||||
``net.ipv6.conf.<physical_interface>.accept_ra`` sysctl to the
|
||||
value ``2`` in order for routing to function correctly.
|
||||
For a more detailed description, please see the `bug <https://bugs.launchpad.net/neutron/+bug/1616282>`__.
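
For example, assuming the physical interface is ``eth1``:

.. code-block:: console

   # sysctl -w net.ipv6.conf.eth1.accept_ra=2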
|
||||
|
||||
|
||||
Neutron's Distributed Router feature and IPv6
|
||||
---------------------------------------------
|
||||
|
||||
IPv6 does work when the Distributed Virtual Router functionality is enabled,
|
||||
but all ingress/egress traffic is via the centralized router (hence, not
|
||||
distributed). More work is required to fully enable this functionality.
|
||||
|
||||
|
||||
Advanced services
|
||||
~~~~~~~~~~~~~~~~~
|
||||
|
||||
VPNaaS
|
||||
------
|
||||
|
||||
VPNaaS supports IPv6, but support in Kilo and prior releases has
some bugs that may limit how it can be used. More thorough and
complete testing and bug fixing is being done as part of the Liberty
release. IPv6-based VPN-as-a-Service is configured similarly to the IPv4
configuration. Either or both of the ``peer_address`` and
``peer_cidr`` can be specified as an IPv6 address. The choice of
addressing modes and router modes described above should not impact
support.
|
||||
|
||||
|
||||
LBaaS
|
||||
-----
|
||||
|
||||
TODO
|
||||
|
||||
FWaaS
|
||||
-----
|
||||
|
||||
FWaaS allows the creation of IPv6-based rules.
|
||||
|
||||
NAT & Floating IPs
|
||||
------------------
|
||||
|
||||
At the current time, OpenStack Networking does not provide any facility
to support any flavor of NAT with IPv6. Unlike IPv4, there is currently
no embedded support for floating IPs with IPv6. It is assumed
that the IPv6 addressing amongst the projects uses GUAs with no
overlap across the projects.
|
||||
|
||||
Security considerations
|
||||
~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. todo:: Initially this is probably just stating the security group rules
|
||||
relative to IPv6 that are applied. Need some help for these
|
||||
|
||||
Configuring interfaces of the guest
|
||||
-----------------------------------
|
||||
|
||||
OpenStack currently does not support the privacy extensions defined by
RFC 4941. The interface identifier and DUID used must be directly derived
from the MAC address as described in RFC 2373. The compute hosts must not
be set up to utilize the privacy extensions when generating their interface
identifiers.
|
||||
|
||||
There are no provisions for an IPv6-based metadata service similar to what is
provided for IPv4. In the case of dual-stacked guests, though, it is always
possible to use the IPv4 metadata service instead.
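
For example, a dual-stacked guest can still query the IPv4 metadata endpoint:

.. code-block:: console

   $ curl http://169.254.169.254/openstack/latest/meta_data.json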
|
||||
|
||||
Unlike IPv4, the MTU of a given network can be conveyed in the RA messages
sent by the router as well as in the DHCP messages.
|
||||
|
||||
OpenStack control & management network considerations
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
As of the Kilo release, considerable effort has gone into ensuring
the project network can handle dual-stack IPv6 and IPv4 transport
across the variety of configurations described above. The OpenStack
control network can be run in a dual-stack configuration and OpenStack
API endpoints can be accessed via an IPv6 network. At this time, the
Open vSwitch (OVS) tunnel types (STT, VXLAN, and GRE) support both IPv4
and IPv6 endpoints.
|
||||
|
||||
|
||||
Prefix delegation
|
||||
~~~~~~~~~~~~~~~~~
|
||||
|
||||
From the Liberty release onwards, OpenStack Networking supports IPv6 prefix
delegation. This section describes the configuration and workflow steps
necessary to use IPv6 prefix delegation to provide automatic allocation of
subnet CIDRs. This allows you, as the OpenStack administrator, to rely on a
DHCPv6 server that is external to the OpenStack Networking service to manage
your project network prefixes.
|
||||
|
||||
.. note::
|
||||
|
||||
Prefix delegation became available in the Liberty release; it is
not available in the Kilo release. HA and DVR routers
are not currently supported by this feature.
|
||||
|
||||
Configuring OpenStack Networking for prefix delegation
|
||||
------------------------------------------------------
|
||||
|
||||
To enable prefix delegation, edit the ``/etc/neutron/neutron.conf`` file.
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
ipv6_pd_enabled = True
|
||||
|
||||
.. note::
|
||||
|
||||
If you are not using the default dibbler-based driver for prefix
|
||||
delegation, then you also need to set the driver in
|
||||
``/etc/neutron/neutron.conf``:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
pd_dhcp_driver = <class path to driver>
|
||||
|
||||
Drivers other than the default one may require extra configuration;
please refer to :ref:`extra-driver-conf`.
|
||||
|
||||
This tells OpenStack Networking to use the prefix delegation mechanism for
subnet allocation when the user creates a subnet without providing a CIDR or
subnet pool ID.
|
||||
|
||||
Requirements
|
||||
------------
|
||||
|
||||
To use this feature, you need a prefix delegation capable DHCPv6 server that is
|
||||
reachable from your OpenStack Networking node(s). This could be software
|
||||
running on the OpenStack Networking node(s) or elsewhere, or a physical router.
|
||||
For the purposes of this guide we are using the open-source DHCPv6 server,
|
||||
Dibbler. Dibbler is available in many Linux package managers, or from source at
|
||||
`tomaszmrugalski/dibbler <https://github.com/tomaszmrugalski/dibbler>`_.
|
||||
|
||||
When using the reference implementation of the OpenStack Networking prefix
|
||||
delegation driver, Dibbler must also be installed on your OpenStack Networking
|
||||
node(s) to serve as a DHCPv6 client. Version 1.0.1 or higher is required.
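
For example, on Ubuntu or Debian-based hosts the client and server packages
can typically be installed as follows (package names may differ on other
distributions):

.. code-block:: console

   # apt-get install dibbler-server dibbler-client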
|
||||
|
||||
This guide assumes that you are running a Dibbler server on the network node
|
||||
where the external network bridge exists. If you already have a prefix
|
||||
delegation capable DHCPv6 server in place, then you can skip the following
|
||||
section.
|
||||
|
||||
Configuring the Dibbler server
|
||||
------------------------------
|
||||
|
||||
After installing Dibbler, edit the ``/etc/dibbler/server.conf`` file:
|
||||
|
||||
.. code-block:: none
|
||||
|
||||
script "/var/lib/dibbler/pd-server.sh"
|
||||
|
||||
iface "br-ex" {
|
||||
pd-class {
|
||||
pd-pool 2001:db8:2222::/48
|
||||
pd-length 64
|
||||
}
|
||||
}
|
||||
|
||||
The options used in the configuration file above are:
|
||||
|
||||
- ``script``
|
||||
Points to a script to be run when a prefix is delegated or
|
||||
released. This is only needed if you want instances on your
|
||||
subnets to have external network access. More on this below.
|
||||
- ``iface``
|
||||
The name of the network interface on which to listen for
|
||||
prefix delegation messages.
|
||||
- ``pd-pool``
|
||||
The larger prefix from which you want your delegated
prefixes to come. The example given is sufficient if you do
not need external network access; otherwise, a unique
globally routable prefix is necessary.
|
||||
- ``pd-length``
|
||||
The length of the delegated prefixes. This must be
64 to work with the current OpenStack Networking reference implementation.
|
||||
|
||||
To provide external network access to your instances, your Dibbler server also
|
||||
needs to create new routes for each delegated prefix. This is done using the
|
||||
script file named in the config file above. Edit the
|
||||
``/var/lib/dibbler/pd-server.sh`` file:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
if [ "$PREFIX1" != "" ]; then
|
||||
if [ "$1" == "add" ]; then
|
||||
sudo ip -6 route add ${PREFIX1}/64 via $REMOTE_ADDR dev $IFACE
|
||||
fi
|
||||
if [ "$1" == "delete" ]; then
|
||||
sudo ip -6 route del ${PREFIX1}/64 via $REMOTE_ADDR dev $IFACE
|
||||
fi
|
||||
fi
|
||||
|
||||
The variables used in the script file above are:
|
||||
|
||||
- ``$PREFIX1``
|
||||
The prefix being added/deleted by the Dibbler server.
|
||||
- ``$1``
|
||||
The operation being performed.
|
||||
- ``$REMOTE_ADDR``
|
||||
The IP address of the requesting Dibbler client.
|
||||
- ``$IFACE``
|
||||
The network interface upon which the request was received.
|
||||
|
||||
The above is all you need in this scenario, but more information on
|
||||
installing, configuring, and running Dibbler is available in the Dibbler user
|
||||
guide, at `Dibbler – a portable DHCPv6
|
||||
<http://klub.com.pl/dhcpv6/doc/dibbler-user.pdf>`_.
|
||||
|
||||
To start your Dibbler server, run:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
# dibbler-server run
|
||||
|
||||
Or to run in headless mode:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
# dibbler-server start
|
||||
|
||||
When using DevStack, it is important to start your server after the
|
||||
``stack.sh`` script has finished to ensure that the required network
|
||||
interfaces have been created.
|
||||
|
||||
User workflow
|
||||
-------------
|
||||
|
||||
First, create a network and IPv6 subnet:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack network create ipv6-pd
|
||||
+---------------------------+--------------------------------------+
|
||||
| Field | Value |
|
||||
+---------------------------+--------------------------------------+
|
||||
| admin_state_up | UP |
|
||||
| availability_zone_hints | |
|
||||
| availability_zones | |
|
||||
| created_at | 2017-01-25T19:26:01Z |
|
||||
| description | |
|
||||
| headers | |
|
||||
| id | 4b782725-6abe-4a2d-b061-763def1bb029 |
|
||||
| ipv4_address_scope | None |
|
||||
| ipv6_address_scope | None |
|
||||
| mtu | 1450 |
|
||||
| name | ipv6-pd |
|
||||
| port_security_enabled | True |
|
||||
| project_id | 61b7eba037fd41f29cfba757c010faff |
|
||||
| provider:network_type | vxlan |
|
||||
| provider:physical_network | None |
|
||||
| provider:segmentation_id | 46 |
|
||||
| revision_number | 3 |
|
||||
| router:external | Internal |
|
||||
| shared | False |
|
||||
| status | ACTIVE |
|
||||
| subnets | |
|
||||
| tags | [] |
|
||||
| updated_at | 2017-01-25T19:26:01Z |
|
||||
+---------------------------+--------------------------------------+
|
||||
|
||||
$ openstack subnet create --ip-version 6 --ipv6-ra-mode slaac \
|
||||
--ipv6-address-mode slaac --use-default-subnet-pool \
|
||||
--network ipv6-pd ipv6-pd-1
|
||||
+------------------------+--------------------------------------+
|
||||
| Field | Value |
|
||||
+------------------------+--------------------------------------+
|
||||
| allocation_pools | ::2-::ffff:ffff:ffff:ffff |
|
||||
| cidr | ::/64 |
|
||||
| created_at | 2017-01-25T19:31:53Z |
|
||||
| description | |
|
||||
| dns_nameservers | |
|
||||
| enable_dhcp | True |
|
||||
| gateway_ip | ::1 |
|
||||
| headers | |
|
||||
| host_routes | |
|
||||
| id | 1319510d-c92c-4532-bf5d-8bcf3da761a1 |
|
||||
| ip_version | 6 |
|
||||
| ipv6_address_mode | slaac |
|
||||
| ipv6_ra_mode | slaac |
|
||||
| name | ipv6-pd-1 |
|
||||
| network_id | 4b782725-6abe-4a2d-b061-763def1bb029 |
|
||||
| project_id | 61b7eba037fd41f29cfba757c010faff |
|
||||
| revision_number | 2 |
|
||||
| service_types | |
|
||||
| subnetpool_id | prefix_delegation |
|
||||
| updated_at | 2017-01-25T19:31:53Z |
|
||||
| use_default_subnetpool | True |
|
||||
+------------------------+--------------------------------------+
|
||||
|
||||
The subnet is initially created with a temporary CIDR before one can be
assigned by prefix delegation. Any number of subnets with this temporary CIDR
can exist without raising an overlap error. The ``subnetpool_id`` is
automatically set to ``prefix_delegation``.
|
||||
|
||||
To trigger the prefix delegation process, create a router interface between
|
||||
this subnet and a router with an active interface on the external network:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack router add subnet router1 ipv6-pd-1
|
||||
|
||||
The prefix delegation mechanism then sends a request via the external network
|
||||
to your prefix delegation server, which replies with the delegated prefix. The
|
||||
subnet is then updated with the new prefix, including issuing new IP addresses
|
||||
to all ports:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack subnet show ipv6-pd-1
|
||||
+-------------------+--------------------------------------+
|
||||
| Field | Value |
|
||||
+-------------------+--------------------------------------+
|
||||
| allocation_pools | 2001:db8:2222:6977::2-2001:db8:2222: |
|
||||
| | 6977:ffff:ffff:ffff:ffff |
|
||||
| cidr | 2001:db8:2222:6977::/64 |
|
||||
| created_at | 2017-01-25T19:31:53Z |
|
||||
| description | |
|
||||
| dns_nameservers | |
|
||||
| enable_dhcp | True |
|
||||
| gateway_ip | 2001:db8:2222:6977::1 |
|
||||
| host_routes | |
|
||||
| id | 1319510d-c92c-4532-bf5d-8bcf3da761a1 |
|
||||
| ip_version | 6 |
|
||||
| ipv6_address_mode | slaac |
|
||||
| ipv6_ra_mode | slaac |
|
||||
| name | ipv6-pd-1 |
|
||||
| network_id | 4b782725-6abe-4a2d-b061-763def1bb029 |
|
||||
| project_id | 61b7eba037fd41f29cfba757c010faff |
|
||||
| revision_number | 4 |
|
||||
| service_types | |
|
||||
| subnetpool_id | prefix_delegation |
|
||||
| updated_at | 2017-01-25T19:35:26Z |
|
||||
+-------------------+--------------------------------------+
|
||||
|
||||
|
||||
If the prefix delegation server is configured to delegate globally routable
|
||||
prefixes and setup routes, then any instance with a port on this subnet should
|
||||
now have external network access.
|
||||
|
||||
Deleting the router interface causes the subnet to be reverted to the temporary
|
||||
CIDR, and all ports have their IPs updated. Prefix leases are released and
|
||||
renewed automatically as necessary.
|
||||
|
||||
References
|
||||
----------
|
||||
|
||||
The following link provides a step-by-step tutorial on setting up IPv6
with OpenStack: `Tenant IPV6 deployment in OpenStack Kilo release
<http://www.debug-all.com/?p=52>`_.
|
||||
|
||||
.. _extra-driver-conf:
|
||||
|
||||
Extra configuration
|
||||
-------------------
|
||||
|
||||
Neutron dhcpv6_pd_agent
|
||||
^^^^^^^^^^^^^^^^^^^^^^^
|
||||
|
||||
To enable the driver for the dhcpv6_pd_agent, set ``pd_dhcp_driver`` to
``neutron_pd_agent`` in ``/etc/neutron/neutron.conf``:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
pd_dhcp_driver = neutron_pd_agent
|
||||
|
||||
To allow the neutron-pd-agent to communicate with prefix delegation servers,
|
||||
you must set which network interface to use for external communication. In
|
||||
DevStack the default for this is ``br-ex``:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
pd_interface = br-ex
|
||||
|
||||
Once DevStack has finished stacking, run the command below to start the neutron-pd-agent:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
neutron-pd-agent --config-file /etc/neutron/neutron.conf
|
503
doc/source/admin/config-lbaas.rst
Normal file
@ -0,0 +1,503 @@
|
||||
.. _config-lbaas:
|
||||
|
||||
==================================
|
||||
Load Balancer as a Service (LBaaS)
|
||||
==================================
|
||||
|
||||
The Networking service offers a load balancer feature called "LBaaS v2"
|
||||
through the ``neutron-lbaas`` service plug-in.
|
||||
|
||||
LBaaS v2 adds the concept of listeners to the LBaaS v1 load balancers.
|
||||
LBaaS v2 allows you to configure multiple listener ports on a single load
|
||||
balancer IP address.
|
||||
|
||||
There are two reference implementations of LBaaS v2. One is an
agent-based implementation with HAProxy. The agents handle the HAProxy
configuration and manage the HAProxy daemon. The other LBaaS v2
implementation, `Octavia
<https://docs.openstack.org/developer/octavia/>`_, has a separate API and
separate worker processes that build load balancers within virtual machines
on hypervisors that are managed by the Compute service. You do not need an
agent for Octavia.
|
||||
|
||||
.. note::
|
||||
|
||||
LBaaS v1 was removed in the Newton release. These links provide more
|
||||
details about how LBaaS v1 works and how to configure it:
|
||||
|
||||
* `Load-Balancer-as-a-Service (LBaaS) overview <https://docs.openstack.org/admin-guide/networking-introduction.html#load-balancer-as-a-service-lbaas-overview>`__
|
||||
* `Basic Load-Balancer-as-a-Service operations <https://docs.openstack.org/admin-guide/networking-adv-features.html#basic-load-balancer-as-a-service-operations>`__
|
||||
|
||||
.. warning::
|
||||
|
||||
Currently, no migration path exists between v1 and v2 load balancers. If you
|
||||
choose to switch from v1 to v2, you must recreate all load balancers, pools,
|
||||
and health monitors.
|
||||
|
||||
.. TODO(amotoki): Data migration from v1 to v2 is provided in Newton release,
but its usage is not documented enough. It should be added here.
|
||||
|
||||
LBaaS v2 Concepts
|
||||
~~~~~~~~~~~~~~~~~
|
||||
|
||||
LBaaS v2 has several new concepts to understand:
|
||||
|
||||
.. image:: figures/lbaasv2-diagram.png
|
||||
:alt: LBaaS v2 layout
|
||||
|
||||
Load balancer
|
||||
The load balancer occupies a neutron network port and has an IP address
|
||||
assigned from a subnet.
|
||||
|
||||
Listener
|
||||
Load balancers can listen for requests on multiple ports. Each one of those
|
||||
ports is specified by a listener.
|
||||
|
||||
Pool
|
||||
A pool holds a list of members that serve content through the load balancer.
|
||||
|
||||
Member
|
||||
Members are servers that serve traffic behind a load balancer. Each member
|
||||
is specified by the IP address and port that it uses to serve traffic.
|
||||
|
||||
Health monitor
|
||||
Members may go offline from time to time and health monitors divert traffic
|
||||
away from members that are not responding properly. Health monitors are
|
||||
associated with pools.
|
||||
|
||||
LBaaS v2 has multiple implementations via different service plug-ins. The two
|
||||
most common implementations use either an agent or the Octavia services. Both
|
||||
implementations use the `LBaaS v2 API <https://developer.openstack.org/api-ref/networking/v2/#lbaas-2-0-stable>`_.
|
||||
|
||||
Configurations
|
||||
~~~~~~~~~~~~~~
|
||||
|
||||
Configuring LBaaS v2 with an agent
|
||||
----------------------------------
|
||||
|
||||
#. Add the LBaaS v2 service plug-in to the ``service_plugins`` configuration
|
||||
directive in ``/etc/neutron/neutron.conf``. The plug-in list is
|
||||
comma-separated:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
service_plugins = [existing service plugins],neutron_lbaas.services.loadbalancer.plugin.LoadBalancerPluginv2
|
||||
|
||||
#. Add the LBaaS v2 service provider to the ``service_provider`` configuration
|
||||
directive within the ``[service_providers]`` section in
|
||||
``/etc/neutron/neutron_lbaas.conf``:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
service_provider = LOADBALANCERV2:Haproxy:neutron_lbaas.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default
|
||||
|
||||
If you have existing service providers for other networking service
|
||||
plug-ins, such as VPNaaS or FWaaS, add the ``service_provider`` line shown
|
||||
above in the ``[service_providers]`` section as a separate line. These
|
||||
configuration directives are repeatable and are not comma-separated.
|
||||
|
||||
#. Select the driver that manages virtual interfaces in
|
||||
``/etc/neutron/lbaas_agent.ini``:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[DEFAULT]
|
||||
device_driver = neutron_lbaas.drivers.haproxy.namespace_driver.HaproxyNSDriver
|
||||
interface_driver = INTERFACE_DRIVER
|
||||
[haproxy]
|
||||
user_group = haproxy
|
||||
|
||||
Replace ``INTERFACE_DRIVER`` with the interface driver that the layer-2
|
||||
agent in your environment uses. For example, ``openvswitch`` for Open
|
||||
vSwitch or ``linuxbridge`` for Linux bridge.
|
||||
|
||||
#. Run the ``neutron-lbaas`` database migration:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
neutron-db-manage --subproject neutron-lbaas upgrade head
|
||||
|
||||
#. If you have deployed LBaaS v1, **stop the LBaaS v1 agent now**. The v1 and
|
||||
v2 agents **cannot** run simultaneously.
|
||||
|
||||
#. Start the LBaaS v2 agent:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
neutron-lbaasv2-agent \
|
||||
--config-file /etc/neutron/neutron.conf \
|
||||
--config-file /etc/neutron/lbaas_agent.ini
|
||||
|
||||
#. Restart the Network service to activate the new configuration. You are now
|
||||
ready to create load balancers with the LBaaS v2 agent.
|
||||
|
||||
Configuring LBaaS v2 with Octavia
|
||||
---------------------------------
|
||||
|
||||
Octavia provides additional capabilities for load balancers, including using a
|
||||
compute driver to build instances that operate as load balancers.
|
||||
The `Hands on Lab - Install and Configure OpenStack Octavia
|
||||
<https://www.openstack.org/summit/tokyo-2015/videos/presentation/rsvp-required-hands-on-lab-install-and-configure-openstack-octavia>`_
|
||||
session at the OpenStack Summit in Tokyo provides an overview of Octavia.
|
||||
|
||||
The DevStack documentation offers a `simple method to deploy Octavia
|
||||
<https://docs.openstack.org/developer/devstack/guides/devstack-with-lbaas-v2.html>`_
|
||||
and test the service with redundant load balancer instances. If you already
|
||||
have Octavia installed and configured within your environment, you can
|
||||
configure the Network service to use Octavia:
|
||||
|
||||
#. Add the LBaaS v2 service plug-in to the ``service_plugins`` configuration
|
||||
directive in ``/etc/neutron/neutron.conf``. The plug-in list is
|
||||
comma-separated:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
service_plugins = [existing service plugins],neutron_lbaas.services.loadbalancer.plugin.LoadBalancerPluginv2
|
||||
|
||||
#. Add the Octavia service provider to the ``service_provider`` configuration
|
||||
directive within the ``[service_providers]`` section in
|
||||
``/etc/neutron/neutron_lbaas.conf``:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
service_provider = LOADBALANCERV2:Octavia:neutron_lbaas.drivers.octavia.driver.OctaviaDriver:default
|
||||
|
||||
Ensure that the LBaaS v1 and v2 service providers are removed from the
|
||||
``[service_providers]`` section. They are not used with Octavia. **Verify
|
||||
that all LBaaS agents are stopped.**
|
||||
|
||||
#. Restart the Network service to activate the new configuration. You are now
|
||||
ready to create and manage load balancers with Octavia.
|
||||
|
||||
Add LBaaS panels to Dashboard
|
||||
-----------------------------
|
||||
|
||||
The Dashboard panels for managing LBaaS v2 are available starting with the
|
||||
Mitaka release.
|
||||
|
||||
#. Clone the `neutron-lbaas-dashboard repository
|
||||
<https://git.openstack.org/cgit/openstack/neutron-lbaas-dashboard/>`__
|
||||
and check out the release
|
||||
branch that matches the installed version of Dashboard:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ git clone https://git.openstack.org/openstack/neutron-lbaas-dashboard
|
||||
$ cd neutron-lbaas-dashboard
|
||||
$ git checkout OPENSTACK_RELEASE
|
||||
|
||||
#. Install the Dashboard panel plug-in:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ python setup.py install
|
||||
|
||||
#. Copy the ``_1481_project_ng_loadbalancersv2_panel.py`` file from the
|
||||
``neutron-lbaas-dashboard/enabled`` directory into the Dashboard
|
||||
``openstack_dashboard/local/enabled`` directory.
|
||||
|
||||
This step ensures that Dashboard can find the plug-in when it enumerates
|
||||
all of its available panels.
|
||||
|
||||
#. Enable the plug-in in Dashboard by editing the ``local_settings.py`` file
|
||||
and setting ``enable_lb`` to ``True`` in the ``OPENSTACK_NEUTRON_NETWORK``
|
||||
dictionary.
|
||||
|
||||
#. If Dashboard is configured to compress static files for better performance
|
||||
(usually set through ``COMPRESS_OFFLINE`` in ``local_settings.py``),
|
||||
optimize the static files again:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ ./manage.py collectstatic
|
||||
$ ./manage.py compress
|
||||
|
||||
#. Restart Apache to activate the new panel:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ sudo service apache2 restart
|
||||
|
||||
To find the panel, click on :guilabel:`Project` in Dashboard, then click the
|
||||
:guilabel:`Network` drop-down menu and select :guilabel:`Load Balancers`.
|
||||
|
||||
LBaaS v2 operations
|
||||
~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
The same neutron commands are used for LBaaS v2 with an agent or with Octavia.
|
||||
|
||||
Building an LBaaS v2 load balancer
|
||||
----------------------------------
|
||||
|
||||
#. Start by creating a load balancer on a network. In this example, the
|
||||
``private`` network is an isolated network with two web server instances:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ neutron lbaas-loadbalancer-create --name test-lb private-subnet
|
||||
|
||||
#. You can view the load balancer status and IP address with the
|
||||
:command:`neutron lbaas-loadbalancer-show` command:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ neutron lbaas-loadbalancer-show test-lb
|
||||
+---------------------+------------------------------------------------+
|
||||
| Field | Value |
|
||||
+---------------------+------------------------------------------------+
|
||||
| admin_state_up | True |
|
||||
| description | |
|
||||
| id | 7780f9dd-e5dd-43a9-af81-0d2d1bd9c386 |
|
||||
| listeners | {"id": "23442d6a-4d82-40ee-8d08-243750dbc191"} |
|
||||
| | {"id": "7e0d084d-6d67-47e6-9f77-0115e6cf9ba8"} |
|
||||
| name | test-lb |
|
||||
| operating_status | ONLINE |
|
||||
| provider | octavia |
|
||||
| provisioning_status | ACTIVE |
|
||||
| tenant_id | fbfce4cb346c4f9097a977c54904cafd |
|
||||
| vip_address | 192.0.2.22 |
|
||||
| vip_port_id | 9f8f8a75-a731-4a34-b622-864907e1d556 |
|
||||
| vip_subnet_id | f1e7827d-1bfe-40b6-b8f0-2d9fd946f59b |
|
||||
+---------------------+------------------------------------------------+
|
||||
|
||||
#. Update the security group to allow traffic to reach the new load balancer.
|
||||
Create a new security group along with ingress rules to allow traffic into
|
||||
the new load balancer. The neutron port for the load balancer is shown as
|
||||
``vip_port_id`` above.
|
||||
|
||||
Create a security group and rules to allow TCP port 80, TCP port 443, and
|
||||
all ICMP traffic:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ neutron security-group-create lbaas
|
||||
$ neutron security-group-rule-create \
|
||||
--direction ingress \
|
||||
--protocol tcp \
|
||||
--port-range-min 80 \
|
||||
--port-range-max 80 \
|
||||
--remote-ip-prefix 0.0.0.0/0 \
|
||||
lbaas
|
||||
$ neutron security-group-rule-create \
|
||||
--direction ingress \
|
||||
--protocol tcp \
|
||||
--port-range-min 443 \
|
||||
--port-range-max 443 \
|
||||
--remote-ip-prefix 0.0.0.0/0 \
|
||||
lbaas
|
||||
$ neutron security-group-rule-create \
|
||||
--direction ingress \
|
||||
--protocol icmp \
|
||||
lbaas
|
||||
|
||||
Apply the security group to the load balancer's network port using
|
||||
``vip_port_id`` from the :command:`neutron lbaas-loadbalancer-show`
|
||||
command:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ neutron port-update \
|
||||
--security-group lbaas \
|
||||
9f8f8a75-a731-4a34-b622-864907e1d556
|
||||
|
||||
Adding an HTTP listener
|
||||
-----------------------
|
||||
|
||||
#. With the load balancer online, you can add a listener for plaintext
|
||||
HTTP traffic on port 80:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ neutron lbaas-listener-create \
|
||||
--name test-lb-http \
|
||||
--loadbalancer test-lb \
|
||||
--protocol HTTP \
|
||||
--protocol-port 80
|
||||
|
||||
This load balancer is active and ready to serve traffic on ``192.0.2.22``.
|
||||
|
||||
#. Verify that the load balancer is responding to pings before moving further:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ ping -c 4 192.0.2.22
|
||||
PING 192.0.2.22 (192.0.2.22) 56(84) bytes of data.
|
||||
64 bytes from 192.0.2.22: icmp_seq=1 ttl=62 time=0.410 ms
|
||||
64 bytes from 192.0.2.22: icmp_seq=2 ttl=62 time=0.407 ms
|
||||
64 bytes from 192.0.2.22: icmp_seq=3 ttl=62 time=0.396 ms
|
||||
64 bytes from 192.0.2.22: icmp_seq=4 ttl=62 time=0.397 ms
|
||||
|
||||
--- 192.0.2.22 ping statistics ---
|
||||
4 packets transmitted, 4 received, 0% packet loss, time 2997ms
|
||||
rtt min/avg/max/mdev = 0.396/0.402/0.410/0.020 ms
|
||||
|
||||
|
||||
#. You can begin building a pool and adding members to the pool to serve HTTP
|
||||
content on port 80. For this example, the web servers are ``192.0.2.16``
|
||||
and ``192.0.2.17``:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ neutron lbaas-pool-create \
|
||||
--name test-lb-pool-http \
|
||||
--lb-algorithm ROUND_ROBIN \
|
||||
--listener test-lb-http \
|
||||
--protocol HTTP
|
||||
$ neutron lbaas-member-create \
|
||||
--name test-lb-http-member-1 \
|
||||
--subnet private-subnet \
|
||||
--address 192.0.2.16 \
|
||||
--protocol-port 80 \
|
||||
test-lb-pool-http
|
||||
$ neutron lbaas-member-create \
|
||||
--name test-lb-http-member-2 \
|
||||
--subnet private-subnet \
|
||||
--address 192.0.2.17 \
|
||||
--protocol-port 80 \
|
||||
test-lb-pool-http
|
||||
|
||||
#. You can use ``curl`` to verify connectivity through the load balancers to
|
||||
your web servers:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ curl 192.0.2.22
|
||||
web2
|
||||
$ curl 192.0.2.22
|
||||
web1
|
||||
$ curl 192.0.2.22
|
||||
web2
|
||||
$ curl 192.0.2.22
|
||||
web1
|
||||
|
||||
In this example, the load balancer uses the round robin algorithm and the
|
||||
traffic alternates between the web servers on the backend.
|
||||
|
||||
#. You can add a health monitor so that unresponsive servers are removed
|
||||
from the pool:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ neutron lbaas-healthmonitor-create \
|
||||
--name test-lb-http-monitor \
|
||||
--delay 5 \
|
||||
--max-retries 2 \
|
||||
--timeout 10 \
|
||||
--type HTTP \
|
||||
--pool test-lb-pool-http
|
||||
|
||||
In this example, the health monitor removes the server from the pool if
it fails two health checks at five-second intervals. When the server
recovers and begins responding to health checks again, it is returned to
the pool.
|
||||
|
||||
Adding an HTTPS listener
|
||||
------------------------
|
||||
|
||||
You can add another listener on port 443 for HTTPS traffic. LBaaS v2 offers
|
||||
SSL/TLS termination at the load balancer, but this example takes a simpler
|
||||
approach and allows encrypted connections to terminate at each member server.
|
||||
|
||||
#. Start by creating a listener, attaching a pool, and then adding members:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ neutron lbaas-listener-create \
|
||||
--name test-lb-https \
|
||||
--loadbalancer test-lb \
|
||||
--protocol HTTPS \
|
||||
--protocol-port 443
|
||||
$ neutron lbaas-pool-create \
|
||||
--name test-lb-pool-https \
|
||||
--lb-algorithm LEAST_CONNECTIONS \
|
||||
--listener test-lb-https \
|
||||
--protocol HTTPS
|
||||
$ neutron lbaas-member-create \
|
||||
--name test-lb-https-member-1 \
|
||||
--subnet private-subnet \
|
||||
--address 192.0.2.16 \
|
||||
--protocol-port 443 \
|
||||
test-lb-pool-https
|
||||
$ neutron lbaas-member-create \
|
||||
--name test-lb-https-member-2 \
|
||||
--subnet private-subnet \
|
||||
--address 192.0.2.17 \
|
||||
--protocol-port 443 \
|
||||
test-lb-pool-https
|
||||
|
||||
#. You can also add a health monitor for the HTTPS pool:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ neutron lbaas-healthmonitor-create \
|
||||
--name test-lb-https-monitor \
|
||||
--delay 5 \
|
||||
--max-retries 2 \
|
||||
--timeout 10 \
|
||||
--type HTTPS \
|
||||
--pool test-lb-pool-https
|
||||
|
||||
The load balancer now handles traffic on ports 80 and 443.
|
||||
|
||||
Associating a floating IP address
|
||||
---------------------------------
|
||||
|
||||
Load balancers deployed on a public or provider network that is
accessible to external clients do not need a floating IP address assigned.
External clients can directly access the virtual IP address (VIP) of those
load balancers.
|
||||
|
||||
However, load balancers deployed onto private or isolated networks need a
|
||||
floating IP address assigned if they must be accessible to external clients. To
|
||||
complete this step, you must have a router between the private and public
|
||||
networks and an available floating IP address.
|
||||
|
||||
You can use the :command:`neutron lbaas-loadbalancer-show` command from the
|
||||
beginning of this section to locate the ``vip_port_id``. The ``vip_port_id``
|
||||
is the ID of the network port that is assigned to the load balancer. You can
|
||||
associate a free floating IP address to the load balancer using
|
||||
:command:`neutron floatingip-associate`:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ neutron floatingip-associate FLOATINGIP_ID LOAD_BALANCER_PORT_ID
|
||||
|
||||
Setting quotas for LBaaS v2
|
||||
---------------------------
|
||||
|
||||
Quotas are available for limiting the number of load balancers and load
|
||||
balancer pools. By default, both quotas are set to 10.
|
||||
|
||||
You can adjust quotas using the :command:`neutron quota-update` command:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ neutron quota-update --tenant-id TENANT_UUID --loadbalancer 25
|
||||
$ neutron quota-update --tenant-id TENANT_UUID --pool 50
|
||||
|
||||
A setting of ``-1`` disables the quota for a tenant.
|
||||
|
||||
Retrieving load balancer statistics
|
||||
-----------------------------------
|
||||
|
||||
The LBaaS v2 agent collects four types of statistics for each load balancer
|
||||
every six seconds. Users can query these statistics with the
|
||||
:command:`neutron lbaas-loadbalancer-stats` command:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ neutron lbaas-loadbalancer-stats test-lb
|
||||
+--------------------+----------+
|
||||
| Field | Value |
|
||||
+--------------------+----------+
|
||||
| active_connections | 0 |
|
||||
| bytes_in | 40264557 |
|
||||
| bytes_out | 71701666 |
|
||||
| total_connections | 384601 |
|
||||
+--------------------+----------+
|
||||
|
||||
The ``active_connections`` count is the total number of connections that were
|
||||
active at the time the agent polled the load balancer. The other three
|
||||
statistics are cumulative since the load balancer was last started. For
|
||||
example, if the load balancer restarts due to a system error or a configuration
|
||||
change, these statistics will be reset.
|
181
doc/source/admin/config-macvtap.rst
Normal file
@ -0,0 +1,181 @@
|
||||
.. _config-macvtap:
|
||||
|
||||
========================
|
||||
Macvtap mechanism driver
|
||||
========================
|
||||
|
||||
The Macvtap mechanism driver for the ML2 plug-in generally increases the
network performance of instances.
|
||||
|
||||
Consider the following attributes of this mechanism driver to determine
|
||||
practicality in your environment:
|
||||
|
||||
* Supports only instance ports. Ports for DHCP and layer-3 (routing)
|
||||
services must use another mechanism driver such as Linux bridge or
|
||||
Open vSwitch (OVS).
|
||||
|
||||
* Supports only untagged (flat) and tagged (VLAN) networks.
|
||||
|
||||
* Lacks support for security groups including basic (sanity) and
|
||||
anti-spoofing rules.
|
||||
|
||||
* Lacks support for layer-3 high-availability mechanisms such as
|
||||
Virtual Router Redundancy Protocol (VRRP) and Distributed Virtual
|
||||
Routing (DVR).
|
||||
|
||||
* Only compute resources can be attached via macvtap. Attaching other
resources, such as DHCP and routers, is not supported. Therefore, run
either OVS or Linux bridge in VLAN or flat mode on the controller node.
|
||||
|
||||
* Instance migration requires the same values for the
|
||||
``physical_interface_mapping`` configuration option on each compute node.
|
||||
For more information, see
|
||||
`<https://bugs.launchpad.net/neutron/+bug/1550400>`_.
|
||||
|
||||
Prerequisites
|
||||
~~~~~~~~~~~~~
|
||||
|
||||
You can add this mechanism driver to an existing environment using either
|
||||
the Linux bridge or OVS mechanism drivers with only provider networks or
|
||||
provider and self-service networks. You can change the configuration of
|
||||
existing compute nodes or add compute nodes with the Macvtap mechanism
|
||||
driver. The example configuration assumes addition of compute nodes with
|
||||
the Macvtap mechanism driver to the :ref:`deploy-lb-selfservice` or
|
||||
:ref:`deploy-ovs-selfservice` deployment examples.
|
||||
|
||||
Add one or more compute nodes with the following components:
|
||||
|
||||
* Three network interfaces: management, provider, and overlay.
|
||||
* OpenStack Networking Macvtap layer-2 agent and any dependencies.
|
||||
|
||||
.. note::
|
||||
|
||||
To support integration with the deployment examples, this content
|
||||
configures the Macvtap mechanism driver to use the overlay network
|
||||
for untagged (flat) or tagged (VLAN) networks in addition to overlay
|
||||
networks such as VXLAN. Your physical network infrastructure
|
||||
must support VLAN (802.1q) tagging on the overlay network.
|
||||
|
||||
Architecture
|
||||
~~~~~~~~~~~~
|
||||
|
||||
The Macvtap mechanism driver only applies to compute nodes. Otherwise,
|
||||
the environment resembles the prerequisite deployment example.
|
||||
|
||||
.. image:: figures/config-macvtap-compute1.png
|
||||
:alt: Macvtap mechanism driver - compute node components
|
||||
|
||||
.. image:: figures/config-macvtap-compute2.png
|
||||
:alt: Macvtap mechanism driver - compute node connectivity
|
||||
|
||||
Example configuration
|
||||
~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
Use the following example configuration as a template to add support for
|
||||
the Macvtap mechanism driver to an existing operational environment.
|
||||
|
||||
Controller node
|
||||
---------------
|
||||
|
||||
#. In the ``ml2_conf.ini`` file:
|
||||
|
||||
* Add ``macvtap`` to mechanism drivers.
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[ml2]
|
||||
mechanism_drivers = macvtap
|
||||
|
||||
* Configure network mappings.
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[ml2_type_flat]
|
||||
flat_networks = provider,macvtap
|
||||
|
||||
[ml2_type_vlan]
|
||||
network_vlan_ranges = provider,macvtap:VLAN_ID_START:VLAN_ID_END
|
||||
|
||||
.. note::
|
||||
|
||||
The ``macvtap`` physical network name is arbitrary. Only the self-service
deployment examples require VLAN ID ranges. Replace ``VLAN_ID_START`` and
``VLAN_ID_END`` with appropriate numerical values.
|
||||
|
||||
#. Restart the following services:
|
||||
|
||||
* Server
|
||||
|
||||
Network nodes
|
||||
-------------
|
||||
|
||||
No changes.
|
||||
|
||||
Compute nodes
|
||||
-------------
|
||||
|
||||
#. Install the Networking service Macvtap layer-2 agent.
|
||||
|
||||
#. In the ``neutron.conf`` file, configure common options:
|
||||
|
||||
.. include:: shared/deploy-config-neutron-common.txt
|
||||
|
||||
#. In the ``macvtap_agent.ini`` file, configure the layer-2 agent.
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[macvtap]
|
||||
physical_interface_mappings = macvtap:MACVTAP_INTERFACE
|
||||
|
||||
[securitygroup]
|
||||
firewall_driver = noop
|
||||
|
||||
Replace ``MACVTAP_INTERFACE`` with the name of the underlying
|
||||
interface that handles Macvtap mechanism driver interfaces.
|
||||
If using a prerequisite deployment example, replace
|
||||
``MACVTAP_INTERFACE`` with the name of the underlying interface
|
||||
that handles overlay networks. For example, ``eth1``.
|
||||
|
||||
#. Start the following services:
|
||||
|
||||
* Macvtap agent
|
||||
|
||||
Verify service operation
|
||||
------------------------
|
||||
|
||||
#. Source the administrative project credentials.
|
||||
#. Verify presence and operation of the agents:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack network agent list
|
||||
+--------------------------------------+--------------------+----------+-------------------+-------+-------+---------------------------+
|
||||
| ID | Agent Type | Host | Availability Zone | Alive | State | Binary |
|
||||
+--------------------------------------+--------------------+----------+-------------------+-------+-------+---------------------------+
|
||||
| 31e1bc1b-c872-4429-8fc3-2c8eba52634e | Metadata agent | compute1 | None | True | UP | neutron-metadata-agent |
|
||||
| 378f5550-feee-42aa-a1cb-e548b7c2601f | Open vSwitch agent | compute1 | None | True | UP | neutron-openvswitch-agent |
|
||||
| 7d2577d0-e640-42a3-b303-cb1eb077f2b6 | L3 agent | compute1 | nova | True | UP | neutron-l3-agent |
|
||||
| d5d7522c-ad14-4c63-ab45-f6420d6a81dd | Metering agent | compute1 | None | True | UP | neutron-metering-agent |
|
||||
| e838ef5c-75b1-4b12-84da-7bdbd62f1040 | DHCP agent | compute1 | nova | True | UP | neutron-dhcp-agent |
|
||||
+--------------------------------------+--------------------+----------+-------------------+-------+-------+---------------------------+
|
||||
|
||||
Create initial networks
|
||||
-----------------------
|
||||
|
||||
This mechanism driver simply changes the virtual network interface driver
|
||||
for instances. Thus, you can reference the ``Create initial networks``
|
||||
content for the prerequisite deployment example.
|
||||
|
||||
Verify network operation
|
||||
------------------------
|
||||
|
||||
This mechanism driver simply changes the virtual network interface driver
|
||||
for instances. Thus, you can reference the ``Verify network operation``
|
||||
content for the prerequisite deployment example.
|
||||
|
||||
Network traffic flow
|
||||
~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
This mechanism driver simply removes the Linux bridge handling security
|
||||
groups on the compute nodes. Thus, you can reference the network traffic
|
||||
flow scenarios for the prerequisite deployment example.
|
510
doc/source/admin/config-ml2.rst
Normal file
@ -0,0 +1,510 @@
|
||||
.. _config-plugin-ml2:
|
||||
|
||||
===========
|
||||
ML2 plug-in
|
||||
===========
|
||||
|
||||
Architecture
|
||||
~~~~~~~~~~~~
|
||||
|
||||
The Modular Layer 2 (ML2) neutron plug-in is a framework allowing OpenStack
|
||||
Networking to simultaneously use the variety of layer 2 networking
|
||||
technologies found in complex real-world data centers. The ML2 framework
|
||||
distinguishes between the two kinds of drivers that can be configured:
|
||||
|
||||
* Type drivers
|
||||
|
||||
Define how an OpenStack network is technically realized. Example: VXLAN
|
||||
|
||||
Each available network type is managed by an ML2 type driver. Type drivers
maintain any needed type-specific network state. They validate the
type-specific information for provider networks and are responsible for the
allocation of a free segment in project networks.
|
||||
|
||||
* Mechanism drivers
|
||||
|
||||
Define the mechanism to access an OpenStack network of a certain type.
|
||||
Example: Open vSwitch mechanism driver.
|
||||
|
||||
The mechanism driver is responsible for taking the information established by
|
||||
the type driver and ensuring that it is properly applied given the
|
||||
specific networking mechanisms that have been enabled.
|
||||
|
||||
Mechanism drivers can utilize L2 agents (via RPC) and/or interact directly
|
||||
with external devices or controllers.
|
||||
|
||||
Multiple mechanism and type drivers can be used simultaneously to access
|
||||
different ports of the same virtual network.
|
||||
|
||||
.. todo::
|
||||
Picture showing relationships
|
||||
|
||||
ML2 driver support matrix
|
||||
-------------------------
|
||||
|
||||
|
||||
.. list-table:: Mechanism drivers and L2 agents
|
||||
:header-rows: 1
|
||||
|
||||
* - type driver / mech driver
|
||||
- Flat
|
||||
- VLAN
|
||||
- VXLAN
|
||||
- GRE
|
||||
* - Open vSwitch
|
||||
- yes
|
||||
- yes
|
||||
- yes
|
||||
- yes
|
||||
* - Linux bridge
|
||||
- yes
|
||||
- yes
|
||||
- yes
|
||||
- no
|
||||
* - SRIOV
|
||||
- yes
|
||||
- yes
|
||||
- no
|
||||
- no
|
||||
* - MacVTap
|
||||
- yes
|
||||
- yes
|
||||
- no
|
||||
- no
|
||||
* - L2 population
|
||||
- no
|
||||
- no
|
||||
- yes
|
||||
- yes
|
||||
|
||||
.. note::
|
||||
|
||||
L2 population is a special mechanism driver that optimizes BUM (Broadcast,
|
||||
unknown destination address, multicast) traffic in the overlay networks
|
||||
VXLAN and GRE. It needs to be used in conjunction with either the
|
||||
Linux bridge or the Open vSwitch mechanism driver and cannot be used as
|
||||
standalone mechanism driver. For more information, see the
|
||||
*Mechanism drivers* section below.
|
||||
|
||||
Configuration
|
||||
~~~~~~~~~~~~~
|
||||
|
||||
Network type drivers
|
||||
--------------------
|
||||
|
||||
To enable type drivers in the ML2 plug-in, edit the
|
||||
``/etc/neutron/plugins/ml2/ml2_conf.ini`` file:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[ml2]
|
||||
type_drivers = flat,vlan,vxlan,gre
|
||||
|
||||
.. note::
|
||||
|
||||
For more details, see `Bug 1567792 <https://bugs.launchpad.net/openstack-manuals/+bug/1567792>`__.
|
||||
|
||||
For more details, see the
|
||||
`Networking configuration options <https://docs.openstack.org/ocata/config-reference/networking/networking_options_reference.html#modular-layer-2-ml2-plug-in-configuration-options>`__
|
||||
section of the Configuration Reference.
|
||||
|
||||
The following type drivers are available:
|
||||
|
||||
* Flat
|
||||
|
||||
* VLAN
|
||||
|
||||
* GRE
|
||||
|
||||
* VXLAN
|
||||
|
||||
Provider network types
|
||||
^^^^^^^^^^^^^^^^^^^^^^
|
||||
|
||||
Provider networks provide connectivity like project networks.
|
||||
But only administrative (privileged) users can manage those
|
||||
networks because they interface with the physical network infrastructure.
|
||||
For more information about provider networks, see
|
||||
:doc:`intro-os-networking` or the `OpenStack Administrator Guide
|
||||
<https://docs.openstack.org/admin-guide/networking-adv-features.html#provider-networks>`__.
|
||||
|
||||
* Flat
|
||||
|
||||
The administrator needs to configure a list of physical network names that
|
||||
can be used for provider networks (a configuration sketch for the flat and VLAN types follows this list).
|
||||
For more details, see the related section in the
|
||||
`Configuration Reference <https://docs.openstack.org/ocata/config-reference/networking/networking_options_reference.html#modular-layer-2-ml2-flat-type-configuration-options>`__.
|
||||
|
||||
* VLAN
|
||||
|
||||
The administrator needs to configure a list of physical network names that
|
||||
can be used for provider networks.
|
||||
For more details, see the related section in the
|
||||
`Configuration Reference <https://docs.openstack.org/ocata/config-reference/networking/networking_options_reference.html#modular-layer-2-ml2-vlan-type-configuration-options>`__.
|
||||
|
||||
* GRE
|
||||
|
||||
No additional configuration required.
|
||||
|
||||
* VXLAN
|
||||
|
||||
The administrator can configure the VXLAN multicast group that should be
|
||||
used.
|
||||
|
||||
.. note::
|
||||
|
||||
VXLAN multicast group configuration is not applicable for the Open
|
||||
vSwitch agent.
|
||||
|
||||
Currently, it is not used by the Linux bridge agent. The Linux bridge
|
||||
agent has its own agent specific configuration option. For more details,
|
||||
see the `Bug 1523614 <https://bugs.launchpad.net/neutron/+bug/1523614>`__.
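
As a hedged example for the flat and VLAN provider types above, the
corresponding sections in the ``ml2_conf.ini`` file might look as follows
(the physical network names ``provider1`` and ``provider2`` are placeholder
assumptions, not required names):

.. code-block:: ini

   [ml2_type_flat]
   flat_networks = provider1

   [ml2_type_vlan]
   network_vlan_ranges = provider1,provider2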
|
||||
|
||||
Project network types
|
||||
^^^^^^^^^^^^^^^^^^^^^
|
||||
|
||||
Project networks provide connectivity to instances for a particular
|
||||
project. Regular (non-privileged) users can manage project networks
|
||||
within the allocation that an administrator or operator defines for
|
||||
them. For more information about project and provider networks, see
|
||||
:doc:`intro-os-networking`
|
||||
or the `OpenStack Administrator Guide
|
||||
<https://docs.openstack.org/admin-guide/networking-adv-features.html#provider-networks>`__.
|
||||
|
||||
Project network configurations are made in the
|
||||
``/etc/neutron/plugins/ml2/ml2_conf.ini`` configuration file on the neutron
|
||||
server:
|
||||
|
||||
* VLAN
|
||||
|
||||
The administrator needs to configure the range of VLAN IDs that can be
|
||||
used for project network allocation.
|
||||
For more details, see the related section in the
|
||||
`Configuration Reference <https://docs.openstack.org/ocata/config-reference/networking/networking_options_reference.html#modular-layer-2-ml2-vlan-type-configuration-options>`__.
|
||||
|
||||
* GRE
|
||||
|
||||
The administrator needs to configure the range of tunnel IDs that can be
|
||||
used for project network allocation.
|
||||
For more details, see the related section in the
|
||||
`Configuration Reference <https://docs.openstack.org/ocata/config-reference/networking/networking_options_reference.html#modular-layer-2-ml2-gre-type-configuration-options>`__.
|
||||
|
||||
* VXLAN
|
||||
|
||||
The administrator needs to configure the range of VXLAN IDs that can be
|
||||
used for project network allocation (a combined configuration sketch follows this list).
|
||||
For more details, see the related section in the
|
||||
`Configuration Reference <https://docs.openstack.org/ocata/config-reference/networking/networking_options_reference.html#modular-layer-2-ml2-vxlan-type-configuration-options>`__.
|
||||
|
||||
.. note::
|
||||
Flat networks for project allocation are not supported. They can
|
||||
only exist as provider networks.
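
Putting the VLAN, GRE, and VXLAN project types above together, a minimal
sketch of the relevant ``ml2_conf.ini`` sections might look as follows
(the physical network name ``physnet1`` and all ID ranges are placeholder
assumptions that must be adapted to your environment):

.. code-block:: ini

   [ml2_type_vlan]
   network_vlan_ranges = physnet1:1000:2999

   [ml2_type_gre]
   tunnel_id_ranges = 1:1000

   [ml2_type_vxlan]
   vni_ranges = 1:1000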
|
||||
|
||||
Mechanism drivers
|
||||
-----------------
|
||||
|
||||
To enable mechanism drivers in the ML2 plug-in, edit the
|
||||
``/etc/neutron/plugins/ml2/ml2_conf.ini`` file on the neutron server:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[ml2]
|
||||
mechanism_drivers = ovs,l2pop
|
||||
|
||||
.. note::
|
||||
|
||||
For more details, see the `Bug 1567792 <https://bugs.launchpad.net/openstack-manuals/+bug/1567792>`__.
|
||||
|
||||
For more details, see the
|
||||
`Configuration Reference <https://docs.openstack.org/ocata/config-reference/networking/networking_options_reference.html#modular-layer-2-ml2-plug-in-configuration-options>`__.
|
||||
|
||||
* Linux bridge
|
||||
|
||||
No additional configuration is required for the mechanism driver. Additional
|
||||
agent configuration is required. For details, see the related *L2 agent*
|
||||
section below.
|
||||
|
||||
* Open vSwitch
|
||||
|
||||
No additional configuration is required for the mechanism driver. Additional
|
||||
agent configuration is required. For details, see the related *L2 agent*
|
||||
section below.
|
||||
|
||||
* SRIOV
|
||||
|
||||
The administrator needs to define the list of PCI hardware devices to be used
|
||||
by OpenStack. For more details, see the related section in the
|
||||
`Configuration Reference <https://docs.openstack.org/ocata/config-reference/networking/networking_options_reference.html#modular-layer-2-ml2-sr-iov-mechanism-configuration-options>`__.
|
||||
|
||||
* MacVTap
|
||||
|
||||
No additional configuration is required for the mechanism driver. Additional
|
||||
agent configuration is required. For details, see the related *L2 agent* section below.
|
||||
|
||||
* L2 population
|
||||
|
||||
The administrator can set some optional configuration options. For more
|
||||
details, see the related section in the
|
||||
`Configuration Reference <https://docs.openstack.org/ocata/config-reference/networking/networking_options_reference.html#modular-layer-2-ml2-l2-population-mechanism-configuration-options>`__.
|
||||
|
||||
* Specialized
|
||||
|
||||
* Open source
|
||||
|
||||
External open source mechanism drivers exist in addition to the neutron
|
||||
integrated reference implementations. Configuration of those drivers is not
|
||||
part of this document. For example:
|
||||
|
||||
* OpenDaylight
|
||||
* OpenContrail
|
||||
|
||||
* Proprietary (vendor)
|
||||
|
||||
External mechanism drivers from various vendors exist in addition to the
|
||||
neutron integrated reference implementations.
|
||||
|
||||
Configuration of those drivers is not part of this document.
|
||||
|
||||
|
||||
Agents
|
||||
------
|
||||
|
||||
L2 agent
|
||||
^^^^^^^^
|
||||
|
||||
An L2 agent serves layer 2 (Ethernet) network connectivity to OpenStack
|
||||
resources. It typically runs on each Network Node and on each Compute Node.
|
||||
|
||||
* Open vSwitch agent
|
||||
|
||||
The Open vSwitch agent configures the Open vSwitch to realize L2 networks for
|
||||
OpenStack resources.
|
||||
|
||||
Configuration for the Open vSwitch agent is typically done in the
|
||||
``openvswitch_agent.ini`` configuration file. Make sure that on agent start
|
||||
you pass this configuration file as an argument (a startup example follows this list).
|
||||
|
||||
For a detailed list of configuration options, see the related section in the
|
||||
`Configuration Reference <https://docs.openstack.org/ocata/config-reference/networking/networking_options_reference.html#open-vswitch-agent-configuration-options>`__.
|
||||
|
||||
* Linux bridge agent
|
||||
|
||||
The Linux bridge agent configures Linux bridges to realize L2 networks for
|
||||
OpenStack resources.
|
||||
|
||||
Configuration for the Linux bridge agent is typically done in the
|
||||
``linuxbridge_agent.ini`` configuration file. Make sure that on agent start
|
||||
you pass this configuration file as an argument.
|
||||
|
||||
For a detailed list of configuration options, see the related section in the
|
||||
`Configuration Reference <https://docs.openstack.org/ocata/config-reference/networking/networking_options_reference.html#linux-bridge-agent-configuration-options>`__.
|
||||
|
||||
* SRIOV Nic Switch agent
|
||||
|
||||
The SRIOV NIC switch agent configures PCI virtual functions to realize L2
|
||||
networks for OpenStack instances. Network attachments for other resources
|
||||
like routers, DHCP, and so on are not supported.
|
||||
|
||||
Configuration for the SRIOV nic switch agent is typically done in the
|
||||
``sriov_agent.ini`` configuration file. Make sure that on agent start
|
||||
you pass this configuration file as an argument.
|
||||
|
||||
For a detailed list of configuration options, see the related section in the
|
||||
`Configuration Reference <https://docs.openstack.org/ocata/config-reference/networking/networking_options_reference.html#sr-iov-agent-configuration-options>`__.
|
||||
|
||||
* MacVTap agent
|
||||
|
||||
The MacVTap agent uses kernel MacVTap devices for realizing L2
|
||||
networks for OpenStack instances. Network attachments for other resources
|
||||
like routers, DHCP, and so on are not supported.
|
||||
|
||||
Configuration for the MacVTap agent is typically done in the
|
||||
``macvtap_agent.ini`` configuration file. Make sure that on agent start
|
||||
you pass this configuration file as an argument.
|
||||
|
||||
For a detailed list of configuration options, see the related section in the
|
||||
`Configuration Reference <https://docs.openstack.org/ocata/config-reference/networking/networking_options_reference.html#modular-layer-2-ml2-macvtap-mechanism-configuration-options>`__.
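
As an illustration of passing an agent configuration file at startup, a
typical invocation for the Open vSwitch agent might look like the sketch
below. The file paths are common defaults and may differ per distribution;
normally the init system or packaging scripts pass these arguments for you:

.. code-block:: console

   # neutron-openvswitch-agent --config-file /etc/neutron/neutron.conf \
     --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini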
|
||||
|
||||
L3 agent
|
||||
^^^^^^^^
|
||||
|
||||
The L3 agent offers advanced layer 3 services, such as virtual routers and
|
||||
floating IPs. It requires an L2 agent running in parallel.
|
||||
|
||||
Configuration for the L3 agent is typically done in the
|
||||
``l3_agent.ini`` configuration file. Make sure that on agent start
|
||||
you pass this configuration file as an argument.
|
||||
|
||||
For a detailed list of configuration options, see the related section in the
|
||||
`Configuration Reference <https://docs.openstack.org/ocata/config-reference/networking/networking_options_reference.html#l3-agent>`__.
|
||||
|
||||
DHCP agent
|
||||
^^^^^^^^^^
|
||||
|
||||
The DHCP agent is responsible for DHCP (Dynamic Host Configuration
|
||||
Protocol) and RADVD (Router Advertisement Daemon) services.
|
||||
It requires a running L2 agent on the same node.
|
||||
|
||||
Configuration for the DHCP agent is typically done in the
|
||||
``dhcp_agent.ini`` configuration file. Make sure that on agent start
|
||||
you pass this configuration file as an argument.
|
||||
|
||||
For a detailed list of configuration options, see the related section in the
|
||||
`Configuration Reference <https://docs.openstack.org/ocata/config-reference/networking/networking_options_reference.html#dhcp-agent>`__.
|
||||
|
||||
Metadata agent
|
||||
^^^^^^^^^^^^^^
|
||||
|
||||
The Metadata agent allows instances to access cloud-init metadata and user
|
||||
data via the network. It requires a running L2 agent on the same node.
|
||||
|
||||
Configuration for the Metadata agent is typically done in the
|
||||
``metadata_agent.ini`` configuration file. Make sure that on agent start
|
||||
you pass this configuration file as an argument.
|
||||
|
||||
For a detailed list of configuration options, see the related section in the
|
||||
`Configuration Reference <https://docs.openstack.org/ocata/config-reference/networking/networking_options_reference.html#metadata-agent>`__.
|
||||
|
||||
L3 metering agent
|
||||
^^^^^^^^^^^^^^^^^
|
||||
|
||||
The L3 metering agent enables layer-3 traffic metering. It requires a running L3
|
||||
agent on the same node.
|
||||
|
||||
Configuration for the L3 metering agent is typically done in the
|
||||
``metering_agent.ini`` configuration file. Make sure that on agent start
|
||||
you pass this configuration file as an argument.
|
||||
|
||||
For a detailed list of configuration options, see the related section in the
|
||||
`Configuration Reference <https://docs.openstack.org/ocata/config-reference/networking/networking_options_reference.html#metering-agent>`__.
|
||||
|
||||
Security
|
||||
--------
|
||||
|
||||
L2 agents support some important security configurations.
|
||||
|
||||
* Security Groups
|
||||
|
||||
For more details, see the related section in the
|
||||
`Configuration Reference <https://docs.openstack.org/ocata/config-reference/networking/networking_options_reference.html#security-groups>`__.
|
||||
|
||||
* ARP Spoofing Prevention
|
||||
|
||||
Configured in the *L2 agent* configuration.
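
As a sketch for the *Security Groups* item above, an L2 agent configuration
file (for example ``openvswitch_agent.ini``) might enable security groups
with the hybrid iptables driver as follows; the driver choice is an assumption
and depends on your deployment:

.. code-block:: ini

   [securitygroup]
   enable_security_group = true
   firewall_driver = iptables_hybrid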
|
||||
|
||||
|
||||
Reference implementations
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
Overview
|
||||
--------
|
||||
|
||||
In this section, the combination of a mechanism driver and an L2 agent is
|
||||
called 'reference implementation'. The following table lists these
|
||||
implementations:
|
||||
|
||||
.. list-table:: Mechanism drivers and L2 agents
|
||||
:header-rows: 1
|
||||
|
||||
* - Mechanism Driver
|
||||
- L2 agent
|
||||
* - Open vSwitch
|
||||
- Open vSwitch agent
|
||||
* - Linux bridge
|
||||
- Linux bridge agent
|
||||
* - SRIOV
|
||||
- SRIOV nic switch agent
|
||||
* - MacVTap
|
||||
- MacVTap agent
|
||||
* - L2 population
|
||||
- Open vSwitch agent, Linux bridge agent
|
||||
|
||||
The following table shows which reference implementations support which
|
||||
non-L2 neutron agents:
|
||||
|
||||
.. list-table:: Reference implementations and other agents
|
||||
:header-rows: 1
|
||||
|
||||
* - Reference Implementation
|
||||
- L3 agent
|
||||
- DHCP agent
|
||||
- Metadata agent
|
||||
- L3 Metering agent
|
||||
* - Open vSwitch & Open vSwitch agent
|
||||
- yes
|
||||
- yes
|
||||
- yes
|
||||
- yes
|
||||
* - Linux bridge & Linux bridge agent
|
||||
- yes
|
||||
- yes
|
||||
- yes
|
||||
- yes
|
||||
* - SRIOV & SRIOV nic switch agent
|
||||
- no
|
||||
- no
|
||||
- no
|
||||
- no
|
||||
* - MacVTap & MacVTap agent
|
||||
- no
|
||||
- no
|
||||
- no
|
||||
- no
|
||||
|
||||
.. note::
|
||||
L2 population is not listed here, as it is not a standalone mechanism.
|
||||
Whether other agents are supported depends on the mechanism driver
|
||||
that is used in conjunction with it to bind a port.
|
||||
|
||||
For more information about L2 population, see the
|
||||
`OpenStack Manuals <http://docs.ocselected.org/openstack-manuals/kilo/networking-guide/content/ml2_l2pop_scenarios.html>`__.
|
||||
|
||||
|
||||
Buying guide
|
||||
------------
|
||||
|
||||
This guide characterizes the L2 reference implementations that currently exist.
|
||||
|
||||
* Open vSwitch mechanism and Open vSwitch agent
|
||||
|
||||
Can be used for instance network attachments as well as for attachments of
|
||||
other network resources like routers, DHCP, and so on.
|
||||
|
||||
* Linux bridge mechanism and Linux bridge agent
|
||||
|
||||
Can be used for instance network attachments as well as for attachments of
|
||||
other network resources like routers, DHCP, and so on.
|
||||
|
||||
* SRIOV mechanism driver and SRIOV NIC switch agent
|
||||
|
||||
Can only be used for instance network attachments (device_owner = compute).
|
||||
|
||||
It is deployed alongside another mechanism driver and L2 agent such as OVS or
|
||||
Linux bridge. It offers instances direct access to the network adapter
|
||||
through a PCI Virtual Function (VF). This gives an instance direct access to
|
||||
hardware capabilities and high performance networking.
|
||||
|
||||
The cloud consumer can decide via the neutron API ``VNIC_TYPE`` attribute whether
|
||||
an instance gets a normal OVS port or an SRIOV port (see the example after this list).
|
||||
|
||||
Due to the direct connection, some features are not available when using SRIOV,
|
||||
for example DVR, security groups, and migration.
|
||||
|
||||
For more information, see :ref:`config-sriov`.
|
||||
|
||||
* MacVTap mechanism driver and MacVTap agent
|
||||
|
||||
Can only be used for instance network attachments (device_owner = compute)
|
||||
and not for attachment of other resources like routers, DHCP, and so on.
|
||||
|
||||
It is positioned as an alternative to Open vSwitch or Linux bridge support on
|
||||
the compute node for internal deployments.
|
||||
|
||||
MacVTap offers a direct connection with very little overhead between
|
||||
instances and the network adapter. You can use the MacVTap agent on the
|
||||
compute node when you require a network connection that is performance
|
||||
critical. It does not require specific hardware (unlike SRIOV).
|
||||
|
||||
Due to the direct connection, some features are not available when using
|
||||
it on the compute node. For example, DVR, security groups, and ARP spoofing
|
||||
protection.
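
As a hedged example of requesting an SRIOV attachment, a consumer can create
a port with the ``direct`` VNIC type and boot an instance with it. The network
and port names are placeholders, and ``$PORT_ID`` stands for the ID returned
by the port creation command:

.. code-block:: console

   $ openstack port create --network net1 --vnic-type direct sriov-port
   $ openstack server create --nic port-id=$PORT_ID ... testserver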
|
134
doc/source/admin/config-mtu.rst
Normal file
@ -0,0 +1,134 @@
|
||||
.. _config-mtu:
|
||||
|
||||
==================
|
||||
MTU considerations
|
||||
==================
|
||||
|
||||
The Networking service uses the MTU of the underlying physical network to
|
||||
calculate the MTU for virtual network components including instance network
|
||||
interfaces. By default, it assumes a standard 1500-byte MTU for the
|
||||
underlying physical network.
|
||||
|
||||
The Networking service only references the underlying physical network MTU.
|
||||
Changing the underlying physical network device MTU requires configuration
|
||||
of physical network devices such as switches and routers.
|
||||
|
||||
Jumbo frames
|
||||
~~~~~~~~~~~~
|
||||
|
||||
The Networking service supports underlying physical networks using jumbo
|
||||
frames and also enables instances to use jumbo frames minus any overlay
|
||||
protocol overhead. For example, an underlying physical network with a
|
||||
9000-byte MTU yields an 8950-byte MTU for instances using a VXLAN network
|
||||
with IPv4 endpoints. Using IPv6 endpoints for overlay networks adds 20
|
||||
bytes of overhead for any protocol.
|
||||
|
||||
The Networking service supports the following underlying physical network
|
||||
architectures. Case 1 refers to the most common architecture. In general,
|
||||
architectures should avoid cases 2 and 3.
|
||||
|
||||
.. note::
|
||||
|
||||
You can trigger MTU recalculation for existing networks by changing the
|
||||
MTU configuration and restarting the ``neutron-server`` service.
|
||||
However, propagating MTU calculations to the data plane may require
|
||||
users to delete and recreate ports on the network.
|
||||
|
||||
When using the Open vSwitch or Linux bridge drivers, new MTU calculations
|
||||
will be propagated automatically after restarting the ``l3-agent`` service.
|
||||
|
||||
Case 1
|
||||
------
|
||||
|
||||
For typical underlying physical network architectures that implement a single
|
||||
MTU value, you can leverage jumbo frames using two options, one in the
|
||||
``neutron.conf`` file and the other in the ``ml2_conf.ini`` file. Most
|
||||
environments should use this configuration.
|
||||
|
||||
For example, referencing an underlying physical network with a 9000-byte MTU:
|
||||
|
||||
#. In the ``neutron.conf`` file:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[DEFAULT]
|
||||
global_physnet_mtu = 9000
|
||||
|
||||
#. In the ``ml2_conf.ini`` file:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[ml2]
|
||||
path_mtu = 9000
|
||||
|
||||
Case 2
|
||||
------
|
||||
|
||||
Some underlying physical network architectures contain multiple layer-2
|
||||
networks with different MTU values. You can configure each flat or VLAN
|
||||
provider network in the bridge or interface mapping options of the layer-2
|
||||
agent to reference a unique MTU value.
|
||||
|
||||
For example, referencing a 4000-byte MTU for ``provider2``, a 1500-byte
|
||||
MTU for ``provider3``, and a 9000-byte MTU for other networks using the
|
||||
Open vSwitch agent:
|
||||
|
||||
#. In the ``neutron.conf`` file:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[DEFAULT]
|
||||
global_physnet_mtu = 9000
|
||||
|
||||
#. In the ``openvswitch_agent.ini`` file:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[ovs]
|
||||
bridge_mappings = provider1:eth1,provider2:eth2,provider3:eth3
|
||||
|
||||
#. In the ``ml2_conf.ini`` file:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[ml2]
|
||||
physical_network_mtus = provider2:4000,provider3:1500
|
||||
path_mtu = 9000
|
||||
|
||||
Case 3
|
||||
------
|
||||
|
||||
Some underlying physical network architectures contain a unique layer-2 network
|
||||
for overlay networks using protocols such as VXLAN and GRE.
|
||||
|
||||
For example, referencing a 4000-byte MTU for overlay networks and a 9000-byte
|
||||
MTU for other networks:
|
||||
|
||||
#. In the ``neutron.conf`` file:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[DEFAULT]
|
||||
global_physnet_mtu = 9000
|
||||
|
||||
#. In the ``ml2_conf.ini`` file:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[ml2]
|
||||
path_mtu = 4000
|
||||
|
||||
.. note::
|
||||
|
||||
Other networks including provider networks and flat or VLAN
|
||||
self-service networks assume the value of the ``global_physnet_mtu``
|
||||
option.
|
||||
|
||||
Instance network interfaces (VIFs)
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
The DHCP agent provides an appropriate MTU value to instances using IPv4,
|
||||
while the L3 agent provides an appropriate MTU value to instances using
|
||||
IPv6. IPv6 uses RA via the L3 agent because the DHCP agent only supports
|
||||
IPv4. Instances using IPv4 and IPv6 should obtain the same MTU value
|
||||
regardless of method.
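
To verify the MTU that an instance actually received, you can check the
interface inside the guest. This is a sketch; the interface name ``ens3`` is
an assumption and depends on the guest operating system:

.. code-block:: console

   $ ip link show ens3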
|
155
doc/source/admin/config-ovs-dpdk.rst
Normal file
@ -0,0 +1,155 @@
|
||||
.. _config-ovs-dpdk:
|
||||
|
||||
===============================
|
||||
Open vSwitch with DPDK datapath
|
||||
===============================
|
||||
|
||||
This page serves as a guide for how to use the OVS with DPDK datapath
|
||||
functionality available in the Networking service as of the Mitaka release.
|
||||
|
||||
The basics
|
||||
~~~~~~~~~~
|
||||
|
||||
Open vSwitch (OVS) provides support for a Data Plane Development Kit (DPDK)
|
||||
datapath since OVS 2.2, and a DPDK-backed ``vhost-user`` virtual interface
|
||||
since OVS 2.4. The DPDK datapath provides lower latency and higher performance
|
||||
than the standard kernel OVS datapath, while DPDK-backed ``vhost-user``
|
||||
interfaces can connect guests to this datapath. For more information on DPDK,
|
||||
refer to the `DPDK <http://dpdk.org/>`__ website.
|
||||
|
||||
OVS with DPDK, or OVS-DPDK, can be used to provide high-performance networking
|
||||
between instances on OpenStack compute nodes.
|
||||
|
||||
Prerequisites
|
||||
-------------
|
||||
|
||||
Using DPDK in OVS requires the following minimum software versions:
|
||||
|
||||
* OVS 2.4
|
||||
* DPDK 2.0
|
||||
* QEMU 2.1.0
|
||||
* libvirt 1.2.13
|
||||
|
||||
Support of ``vhost-user`` multiqueue that enables use of multiqueue with
|
||||
``virtio-net`` and ``igb_uio`` is available if the following newer
|
||||
versions are used:
|
||||
|
||||
* OVS 2.5
|
||||
* DPDK 2.2
|
||||
* QEMU 2.5
|
||||
* libvirt 1.2.17
|
||||
|
||||
In both cases, install and configure Open vSwitch with DPDK support for each
|
||||
node. For more information, see the
|
||||
`OVS-DPDK <https://github.com/openvswitch/ovs/blob/master/Documentation/intro/install/dpdk.rst>`__
|
||||
installation guide (select an appropriate OVS version in the
|
||||
:guilabel:`Branch` drop-down menu).
|
||||
|
||||
See the `Neutron configuration reference for OVS-DPDK
|
||||
<https://docs.openstack.org/developer/neutron/devref/ovs_vhostuser.html>`__
|
||||
for configuration of the neutron OVS agent.
|
||||
|
||||
If you wish to configure multiqueue, see the
|
||||
`OVS configuration chapter on vhost-user
|
||||
<http://wiki.qemu.org/Documentation/vhost-user-ovs-dpdk#Enabling_multi-queue>`__
|
||||
in the QEMU documentation.
|
||||
|
||||
The technical background of multiqueue is explained in the corresponding
|
||||
`blueprint <https://specs.openstack.org/openstack/nova-specs/specs/liberty/implemented/libvirt-virtiomq.html>`__.
|
||||
|
||||
Additionally, OpenStack supports the ``vhost-user`` reconnect feature starting
|
||||
from the Ocata release, as part of the fix for
|
||||
`bug 1604924 <https://bugs.launchpad.net/neutron/+bug/1604924>`__.
|
||||
Starting from the Ocata release, this feature is used without any additional
|
||||
configuration when the following minimum software versions
|
||||
are used:
|
||||
|
||||
* OVS 2.6
|
||||
* DPDK 16.07
|
||||
* QEMU 2.7
|
||||
|
||||
Support for this feature is not yet present in the ML2 OVN and ODL
|
||||
mechanism drivers.
|
||||
|
||||
Using vhost-user interfaces
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
Once OVS and neutron are correctly configured with DPDK support,
|
||||
``vhost-user`` interfaces are completely transparent to the guest
|
||||
(except in case of multiqueue configuration described below).
|
||||
However, guests must request huge pages. This can be done through flavors.
|
||||
For example:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack flavor set m1.large --property hw:mem_page_size=large
|
||||
|
||||
For more information about the syntax for ``hw:mem_page_size``, refer to the
|
||||
`Flavors <https://docs.openstack.org/admin-guide/compute-flavors.html>`__ guide.
|
||||
|
||||
.. note::
|
||||
|
||||
``vhost-user`` requires file descriptor-backed shared memory. Currently, the
|
||||
only way to request this is by requesting large pages. This is why instances
|
||||
spawned on hosts with OVS-DPDK must request large pages. The aggregate
|
||||
flavor affinity filter can be used to associate flavors with large page
|
||||
support to hosts with OVS-DPDK support.
|
||||
|
||||
Create and add ``vhost-user`` network interfaces to instances in the same
|
||||
fashion as conventional interfaces. These interfaces can use the kernel
|
||||
``virtio-net`` driver or a DPDK-compatible driver in the guest
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack server create --nic net-id=$net_id ... testserver
|
||||
|
||||
Using vhost-user multiqueue
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
To use this feature, the following should be set in the flavor extra specs
|
||||
(flavor keys):
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack flavor set $m1.large --property hw:vif_multiqueue_enabled=true
|
||||
|
||||
This setting can be overridden by the image metadata property if the feature
|
||||
is enabled in the extra specs:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack image set --property hw_vif_multiqueue_enabled=true IMAGE_NAME
|
||||
|
||||
Support for ``virtio-net`` multiqueue needs to be present in the kernel of the
|
||||
guest VM and is available starting from Linux kernel 3.8.
|
||||
|
||||
Check the pre-set maximum number of combined channels in the channel
|
||||
configuration.
|
||||
If OVS and the flavor are configured successfully, the reported
|
||||
maximum should be greater than ``1``:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ ethtool -l INTERFACE_NAME
|
||||
|
||||
To increase the current number of combined channels, run the following command in the
|
||||
guest VM:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ ethtool -L INTERFACE_NAME combined QUEUES_NR
|
||||
|
||||
The number of queues should typically match the number of vCPUs
|
||||
defined for the instance. In newer kernel versions
|
||||
this is configured automatically.
|
||||
|
||||
Known limitations
|
||||
~~~~~~~~~~~~~~~~~
|
||||
|
||||
* This feature is only supported when using the libvirt compute driver, and the
|
||||
KVM/QEMU hypervisor.
|
||||
* Huge pages are required for each instance running on hosts with OVS-DPDK.
|
||||
If huge pages are not present in the guest, the interface will appear but
|
||||
will not function.
|
||||
* Expect performance degradation of services using tap devices: these devices
|
||||
do not support DPDK. Example services include DVR, FWaaS, and LBaaS.
|
55
doc/source/admin/config-ovsfwdriver.rst
Normal file
@ -0,0 +1,55 @@
|
||||
.. _config-ovsfwdriver:
|
||||
|
||||
===================================
|
||||
Native Open vSwitch firewall driver
|
||||
===================================
|
||||
|
||||
.. note::
|
||||
|
||||
Experimental feature or incomplete documentation.
|
||||
|
||||
Historically, Open vSwitch (OVS) could not interact directly with *iptables*
|
||||
to implement security groups. Thus, the OVS agent and Compute service use
|
||||
a Linux bridge between each instance (VM) and the OVS integration bridge
|
||||
``br-int`` to implement security groups. The Linux bridge device contains
|
||||
the *iptables* rules pertaining to the instance. In general, additional
|
||||
components between instances and physical network infrastructure cause
|
||||
scalability and performance problems. To alleviate such problems, the OVS
|
||||
agent includes an optional firewall driver that natively implements security
|
||||
groups as flows in OVS rather than the Linux bridge device and *iptables*.
|
||||
This increases scalability and performance.
|
||||
|
||||
Configuring heterogeneous firewall drivers
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
L2 agents can be configured to use differing firewall drivers. There is no
|
||||
requirement that they all be the same. If an agent lacks a firewall driver
|
||||
configuration, it will default to what is configured on its server. This also
|
||||
means there is no requirement that the server has any firewall driver
|
||||
configured at all, as long as the agents are configured correctly.
|
||||
|
||||
Prerequisites
|
||||
~~~~~~~~~~~~~
|
||||
|
||||
The native OVS firewall implementation requires kernel and user space support
|
||||
for *conntrack*, thus requiring minimum versions of the Linux kernel and
|
||||
Open vSwitch. All cases require Open vSwitch version 2.5 or newer.
|
||||
|
||||
* Kernel version 4.3 or newer includes *conntrack* support.
|
||||
* Kernel versions 3.3 or newer, but less than 4.3, do not include *conntrack*
|
||||
support and require building the OVS kernel modules.
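
As a quick sanity check of these prerequisites on a node, you can inspect the
running kernel and Open vSwitch versions. This is only a sketch; exact output
formats vary by distribution:

.. code-block:: console

   $ uname -r
   $ ovs-vswitchd --version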
|
||||
|
||||
Enable the native OVS firewall driver
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
* On nodes running the Open vSwitch agent, edit the
|
||||
``openvswitch_agent.ini`` file and enable the firewall driver.
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[securitygroup]
|
||||
firewall_driver = openvswitch
|
||||
|
||||
For more information, see the `Open vSwitch Firewall Driver
|
||||
<https://docs.openstack.org/developer/neutron/devref/openvswitch_firewall.html>`_
|
||||
and the `video <https://www.youtube.com/watch?v=SOHeZ3g9yxM>`_.
|
479
doc/source/admin/config-qos.rst
Normal file
@ -0,0 +1,479 @@
|
||||
.. _config-qos:
|
||||
|
||||
========================
|
||||
Quality of Service (QoS)
|
||||
========================
|
||||
|
||||
QoS is defined as the ability to guarantee certain network requirements
|
||||
like bandwidth, latency, jitter, and reliability in order to satisfy a
|
||||
Service Level Agreement (SLA) between an application provider and end
|
||||
users.
|
||||
|
||||
Network devices such as switches and routers can mark traffic so that it is
|
||||
handled with a higher priority to fulfill the QoS conditions agreed under
|
||||
the SLA. In other cases, certain network traffic such as Voice over IP (VoIP)
|
||||
and video streaming needs to be transmitted with minimal bandwidth
|
||||
constraints. On a system without network QoS management, all traffic will be
|
||||
transmitted in a "best-effort" manner making it impossible to guarantee service
|
||||
delivery to customers.
|
||||
|
||||
QoS is an advanced service plug-in. QoS is decoupled from the rest of the
|
||||
OpenStack Networking code on multiple levels and it is available through the
|
||||
ml2 extension driver.
|
||||
|
||||
Details about the DB models, API extension, and use cases are out of the scope
|
||||
of this guide but can be found in the
|
||||
`Neutron QoS specification <https://specs.openstack.org/openstack/neutron-specs/specs/liberty/qos-api-extension.html>`_.
|
||||
|
||||
|
||||
Supported QoS rule types
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
Any plug-in or ml2 mechanism driver can claim support for some QoS rule types
|
||||
by providing a plug-in/driver class property called
|
||||
``supported_qos_rule_types`` that returns a list of strings that correspond
|
||||
to `QoS rule types
|
||||
<https://git.openstack.org/cgit/openstack/neutron/tree/neutron/services/qos/qos_consts.py>`_.
|
||||
|
||||
The following table shows the Networking back ends, QoS supported rules, and
|
||||
traffic directions (from the VM point of view).
|
||||
|
||||
.. table:: **Networking back ends, supported rules, and traffic direction**
|
||||
|
||||
==================== ================ ================ ================
|
||||
Rule \ back end Open vSwitch SR-IOV Linux bridge
|
||||
==================== ================ ================ ================
|
||||
Bandwidth limit Egress Egress (1) Egress
|
||||
Minimum bandwidth - Egress -
|
||||
DSCP marking Egress - Egress
|
||||
==================== ================ ================ ================
|
||||
|
||||
.. note::
|
||||
|
||||
(1) Max burst parameter is skipped because it is not supported by the
|
||||
IP tool.
|
||||
|
||||
In the simplest case, the property can be represented by a simple Python
|
||||
list defined on the class.
|
||||
|
||||
For an ml2 plug-in, the list of supported QoS rule types and parameters is
|
||||
defined as a common subset of rules supported by all active mechanism drivers.
|
||||
A QoS rule is always attached to a QoS policy. When a rule is created or
|
||||
updated:
|
||||
|
||||
* The QoS plug-in will check if this rule and parameters are supported by any
|
||||
active mechanism driver if the QoS policy is not attached to any port or
|
||||
network.
|
||||
|
||||
* The QoS plug-in will check if this rule and parameters are supported by the
|
||||
mechanism drivers managing those ports if the QoS policy is attached to any
|
||||
port or network.
|
||||
|
||||
|
||||
Configuration
|
||||
~~~~~~~~~~~~~
|
||||
|
||||
To enable the service, follow the steps below:
|
||||
|
||||
On network nodes:
|
||||
|
||||
#. Add the QoS service to the ``service_plugins`` setting in
|
||||
``/etc/neutron/neutron.conf``. For example:
|
||||
|
||||
.. code-block:: none
|
||||
|
||||
service_plugins = \
|
||||
neutron.services.l3_router.l3_router_plugin.L3RouterPlugin,
|
||||
neutron.services.metering.metering_plugin.MeteringPlugin,
|
||||
neutron.services.qos.qos_plugin.QoSPlugin
|
||||
|
||||
#. Optionally, set the needed ``notification_drivers`` in the ``[qos]``
|
||||
section in ``/etc/neutron/neutron.conf`` (``message_queue`` is the
|
||||
default).
|
||||
|
||||
#. In ``/etc/neutron/plugins/ml2/ml2_conf.ini``, add ``qos`` to
|
||||
``extension_drivers`` in the ``[ml2]`` section. For example:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[ml2]
|
||||
extension_drivers = port_security, qos
|
||||
|
||||
#. If the Open vSwitch agent is being used, set ``extensions`` to
|
||||
``qos`` in the ``[agent]`` section of
|
||||
``/etc/neutron/plugins/ml2/openvswitch_agent.ini``. For example:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[agent]
|
||||
extensions = qos
|
||||
|
||||
On compute nodes:
|
||||
|
||||
#. In ``/etc/neutron/plugins/ml2/openvswitch_agent.ini``, add ``qos`` to the
|
||||
``extensions`` setting in the ``[agent]`` section. For example:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[agent]
|
||||
extensions = qos
|
||||
|
||||
.. note::
|
||||
|
||||
QoS currently works with ML2 only (SR-IOV, Open vSwitch, and Linux bridge
|
||||
are the drivers enabled for QoS in the Mitaka release).
|
||||
|
||||
Trusted projects policy.json configuration
|
||||
------------------------------------------
|
||||
|
||||
If projects are trusted to administer their own QoS policies in
|
||||
your cloud, neutron's file ``policy.json`` can be modified to allow this.
|
||||
|
||||
Modify ``/etc/neutron/policy.json`` policy entries as follows:
|
||||
|
||||
.. code-block:: none
|
||||
|
||||
"get_policy": "rule:regular_user",
|
||||
"create_policy": "rule:regular_user",
|
||||
"update_policy": "rule:regular_user",
|
||||
"delete_policy": "rule:regular_user",
|
||||
"get_rule_type": "rule:regular_user",
|
||||
|
||||
To enable bandwidth limit rule:
|
||||
|
||||
.. code-block:: none
|
||||
|
||||
"get_policy_bandwidth_limit_rule": "rule:regular_user",
|
||||
"create_policy_bandwidth_limit_rule": "rule:regular_user",
|
||||
"delete_policy_bandwidth_limit_rule": "rule:regular_user",
|
||||
"update_policy_bandwidth_limit_rule": "rule:regular_user",
|
||||
|
||||
To enable DSCP marking rule:
|
||||
|
||||
.. code-block:: none
|
||||
|
||||
"get_policy_dscp_marking_rule": "rule:regular_user",
|
||||
"create_dscp_marking_rule": "rule:regular_user",
|
||||
"delete_dscp_marking_rule": "rule:regular_user",
|
||||
"update_dscp_marking_rule": "rule:regular_user",
|
||||
|
||||
To enable minimum bandwidth rule:
|
||||
|
||||
.. code-block:: none
|
||||
|
||||
"get_policy_minimum_bandwidth_rule": "rule:regular_user",
|
||||
"create_policy_minimum_bandwidth_rule": "rule:regular_user",
|
||||
"delete_policy_minimum_bandwidth_rule": "rule:regular_user",
|
||||
"update_policy_minimum_bandwidth_rule": "rule:regular_user",
|
||||
|
||||
User workflow
|
||||
~~~~~~~~~~~~~
|
||||
|
||||
QoS policies are only created by admins with the default ``policy.json``.
|
||||
Therefore, you should have the cloud operator set them up on
|
||||
behalf of the cloud projects.
|
||||
|
||||
If projects are trusted to create their own policies, check the trusted
|
||||
projects ``policy.json`` configuration section.
|
||||
|
||||
First, create a QoS policy and its bandwidth limit rule:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack network qos policy create bw-limiter
|
||||
|
||||
Created a new policy:
|
||||
+-------------+--------------------------------------+
|
||||
| Field | Value |
|
||||
+-------------+--------------------------------------+
|
||||
| description | |
|
||||
| id | 5df855e9-a833-49a3-9c82-c0839a5f103f |
|
||||
| name | qos1 |
|
||||
| project_id | 4db7c1ed114a4a7fb0f077148155c500 |
|
||||
| rules | [] |
|
||||
| shared | False |
|
||||
+-------------+--------------------------------------+
|
||||
|
||||
$ openstack network qos rule create --type bandwidth-limit --max-kbps 3000 \
|
||||
--max-burst-kbits 300 --egress bw-limiter
|
||||
|
||||
Created a new bandwidth_limit_rule:
|
||||
+----------------+--------------------------------------+
|
||||
| Field | Value |
|
||||
+----------------+--------------------------------------+
|
||||
| direction | egress |
|
||||
| id | 92ceb52f-170f-49d0-9528-976e2fee2d6f |
|
||||
| max_burst_kbps | 300 |
|
||||
| max_kbps | 3000 |
|
||||
+----------------+--------------------------------------+
|
||||
|
||||
.. note::
|
||||
|
||||
The QoS implementation requires a burst value to ensure proper behavior of
|
||||
bandwidth limit rules in the Open vSwitch and Linux bridge agents. If you
|
||||
do not provide a value, it defaults to 80% of the bandwidth limit which
|
||||
works for typical TCP traffic.
|
||||
|
||||
Second, associate the created policy with an existing neutron port.
|
||||
To do this, the user extracts the ID of the port to be associated with
|
||||
the already created policy. In the next example, we will assign the
|
||||
``bw-limiter`` policy to the VM with IP address ``192.0.2.1``.
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack port list
|
||||
|
||||
+--------------------------------------+-----------------------------------+
|
||||
| ID | Fixed IP Addresses |
|
||||
+--------------------------------------+-----------------------------------+
|
||||
| 0271d1d9-1b16-4410-bd74-82cdf6dcb5b3 | { ... , "ip_address": "192.0.2.1"}|
|
||||
| 88101e57-76fa-4d12-b0e0-4fc7634b874a | { ... , "ip_address": "192.0.2.3"}|
|
||||
| e04aab6a-5c6c-4bd9-a600-33333551a668 | { ... , "ip_address": "192.0.2.2"}|
|
||||
+--------------------------------------+-----------------------------------+
|
||||
|
||||
$ openstack port set --qos-policy bw-limiter \
|
||||
88101e57-76fa-4d12-b0e0-4fc7634b874a
|
||||
Updated port: 88101e57-76fa-4d12-b0e0-4fc7634b874a
|
||||
|
||||
To detach a port from the QoS policy, simply update the
|
||||
port configuration again.
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack port unset --no-qos-policy 88101e57-76fa-4d12-b0e0-4fc7634b874a
|
||||
Updated port: 88101e57-76fa-4d12-b0e0-4fc7634b874a
|
||||
|
||||
|
||||
Ports can be created with a policy attached to them too.
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack port create --qos-policy bw-limiter --network private port1
|
||||
|
||||
Created a new port:
|
||||
+-----------------------+--------------------------------------------------+
|
||||
| Field | Value |
|
||||
+-----------------------+--------------------------------------------------+
|
||||
| admin_state_up | UP |
|
||||
| allowed_address_pairs | |
|
||||
| binding_host_id | |
|
||||
| binding_profile | |
|
||||
| binding_vif_details | |
|
||||
| binding_vif_type | unbound |
|
||||
| binding_vnic_type | normal |
|
||||
| created_at | 2017-05-15T08:43:00Z |
|
||||
| description | |
|
||||
| device_id | |
|
||||
| device_owner | |
|
||||
| dns_assignment | None |
|
||||
| dns_name | None |
|
||||
| extra_dhcp_opts | |
|
||||
| fixed_ips | ip_address='10.0.10.4', subnet_id='292f8c1e-...' |
|
||||
| id | f51562ee-da8d-42de-9578-f6f5cb248226 |
|
||||
| ip_address | None |
|
||||
| mac_address | fa:16:3e:d9:f2:ba |
|
||||
| name | port1 |
|
||||
| network_id | 55dc2f70-0f92-4002-b343-ca34277b0234 |
|
||||
| option_name | None |
|
||||
| option_value | None |
|
||||
| port_security_enabled | False |
|
||||
| project_id | 4db7c1ed114a4a7fb0f077148155c500 |
|
||||
| qos_policy_id | 5df855e9-a833-49a3-9c82-c0839a5f103f |
|
||||
| revision_number | 6 |
|
||||
| security_group_ids | 0531cc1a-19d1-4cc7-ada5-49f8b08245be |
|
||||
| status | DOWN |
|
||||
| subnet_id | None |
|
||||
| updated_at | 2017-05-15T08:43:00Z |
|
||||
+-----------------------+--------------------------------------------------+
|
||||
|
||||
|
||||
You can also attach a QoS policy to a network. This means that
|
||||
any compute port connected to the network will use the network policy by
|
||||
default, unless the port has a specific policy attached to it. Internal
|
||||
network-owned ports, such as DHCP and internal router ports, are excluded from network
|
||||
policy application.
|
||||
|
||||
In order to attach a QoS policy to a network, update an existing
|
||||
network, or initially create the network attached to the policy.
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack network set --qos-policy bw-limiter private
|
||||
Updated network: private
|
||||
|
||||
.. note::
|
||||
|
||||
Configuring the proper burst value is very important. If the burst value is
|
||||
set too low, bandwidth usage will be throttled even with a proper bandwidth
|
||||
limit setting. This issue is discussed in various documentation sources, for
|
||||
example in `Juniper's documentation
|
||||
<http://www.juniper.net/documentation/en_US/junos12.3/topics/concept/policer-mx-m120-m320-burstsize-determining.html>`_.
|
||||
The burst value for TCP traffic can be set to 80% of the desired bandwidth limit
|
||||
value. For example, if the bandwidth limit is set to 1000kbps, a sufficient
|
||||
burst value is 800kbit. If the configured burst value is too low, the
|
||||
achieved bandwidth limit will be lower than expected. If the configured burst
|
||||
value is too high, too few packets could be limited and the achieved bandwidth
|
||||
limit would be higher than expected.
|
||||
|
||||
Administrator enforcement
|
||||
-------------------------
|
||||
|
||||
Administrators are able to enforce policies on project ports or networks.
|
||||
As long as the policy is not shared, the project is not able to detach
|
||||
any policy attached to a network or port.
|
||||
|
||||
If the policy is shared, the project is able to attach or detach such
|
||||
policy from its own ports and networks.
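
For example, an administrator can mark an existing policy as shared so that
projects can attach or detach it themselves. This is a sketch reusing the
``bw-limiter`` policy created earlier:

.. code-block:: console

   $ openstack network qos policy set --share bw-limiter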
|
||||
|
||||
|
||||
Rule modification
|
||||
-----------------
|
||||
You can modify rules at runtime. Rule modifications will be propagated to any
|
||||
attached port.
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack network qos rule set --max-kbps 2000 --max-burst-kbps 200 \
|
||||
--ingress bw-limiter 92ceb52f-170f-49d0-9528-976e2fee2d6f
|
||||
Updated bandwidth_limit_rule: 92ceb52f-170f-49d0-9528-976e2fee2d6f
|
||||
|
||||
$ openstack network qos rule show \
|
||||
bw-limiter 92ceb52f-170f-49d0-9528-976e2fee2d6f
|
||||
|
||||
+----------------+--------------------------------------+
|
||||
| Field | Value |
|
||||
+----------------+--------------------------------------+
|
||||
| direction | ingress |
|
||||
| id | 92ceb52f-170f-49d0-9528-976e2fee2d6f |
|
||||
| max_burst_kbps | 200 |
|
||||
| max_kbps | 2000 |
|
||||
+----------------+--------------------------------------+
|
||||
|
||||
Just like with bandwidth limiting, create a policy for DSCP marking rule:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack network qos policy create dscp-marking
|
||||
|
||||
+-------------+--------------------------------------+
|
||||
| Field | Value |
|
||||
+-------------+--------------------------------------+
|
||||
| description | |
|
||||
| id | d1f90c76-fbe8-4d6f-bb87-a9aea997ed1e |
|
||||
| name | dscp-marking |
|
||||
| project_id | 4db7c1ed114a4a7fb0f077148155c500 |
|
||||
| rules | [] |
|
||||
| shared | False |
|
||||
+-------------+--------------------------------------+
|
||||
|
||||
You can create, update, list, delete, and show DSCP markings
|
||||
with the neutron client:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack network qos rule create --type dscp-marking --dscp-mark 26 \
|
||||
dscp-marking
|
||||
|
||||
Created a new dscp marking rule
|
||||
+----------------+--------------------------------------+
|
||||
| Field | Value |
|
||||
+----------------+--------------------------------------+
|
||||
| id | 115e4f70-8034-4176-8fe9-2c47f8878a7d |
|
||||
| dscp_mark | 26 |
|
||||
+----------------+--------------------------------------+
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack network qos rule set --dscp-mark 22 \
|
||||
dscp-marking 115e4f70-8034-4176-8fe9-2c47f8878a7d
|
||||
Updated dscp_rule: 115e4f70-8034-4176-8fe9-2c47f8878a7d
|
||||
|
||||
$ openstack network qos rule list dscp-marking
|
||||
|
||||
+--------------------------------------+----------------------------------+
|
||||
| ID | DSCP Mark |
|
||||
+--------------------------------------+----------------------------------+
|
||||
| 115e4f70-8034-4176-8fe9-2c47f8878a7d | 22 |
|
||||
+--------------------------------------+----------------------------------+
|
||||
|
||||
$ openstack network qos rule show \
|
||||
dscp-marking 115e4f70-8034-4176-8fe9-2c47f8878a7d
|
||||
|
||||
+----------------+--------------------------------------+
|
||||
| Field | Value |
|
||||
+----------------+--------------------------------------+
|
||||
| id | 115e4f70-8034-4176-8fe9-2c47f8878a7d |
|
||||
| dscp_mark | 22 |
|
||||
+----------------+--------------------------------------+
|
||||
|
||||
$ openstack network qos rule delete \
|
||||
dscp-marking 115e4f70-8034-4176-8fe9-2c47f8878a7d
|
||||
Deleted dscp_rule: 115e4f70-8034-4176-8fe9-2c47f8878a7d
|
||||
|
||||
You can also include minimum bandwidth rules in your policy:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack network qos policy create bandwidth-control
|
||||
+-------------+--------------------------------------+
|
||||
| Field | Value |
|
||||
+-------------+--------------------------------------+
|
||||
| description | |
|
||||
| id | 8491547e-add1-4c6c-a50e-42121237256c |
|
||||
| name | bandwidth-control |
|
||||
| project_id | 7cc5a84e415d48e69d2b06aa67b317d8 |
|
||||
| rules | [] |
|
||||
| shared | False |
|
||||
+-------------+--------------------------------------+
|
||||
|
||||
$ openstack network qos rule create \
|
||||
--type minimum-bandwidth --min-kbps 1000 --egress bandwidth-control
|
||||
+------------+--------------------------------------+
|
||||
| Field | Value |
|
||||
+------------+--------------------------------------+
|
||||
| direction | egress |
|
||||
| id | da858b32-44bc-43c9-b92b-cf6e2fa836ab |
|
||||
| min_kbps | 1000 |
|
||||
| name | None |
|
||||
| project_id | |
|
||||
+------------+--------------------------------------+
|
||||
|
||||
A policy with a minimum bandwidth ensures best efforts are made to provide
|
||||
no less than the specified bandwidth to each port on which the rule is
|
||||
applied. However, as this feature is not yet integrated with the Compute
|
||||
scheduler, minimum bandwidth cannot be guaranteed.
|
||||
|
||||
It is also possible to combine several rules in one policy:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack network qos rule create --type bandwidth-limit \
|
||||
--max-kbps 50000 --max-burst-kbits 50000 bandwidth-control
|
||||
+----------------+--------------------------------------+
|
||||
| Field | Value |
|
||||
+----------------+--------------------------------------+
|
||||
| id | 0db48906-a762-4d32-8694-3f65214c34a6 |
|
||||
| max_burst_kbps | 50000 |
|
||||
| max_kbps | 50000 |
|
||||
| name | None |
|
||||
| project_id | |
|
||||
+----------------+--------------------------------------+
|
||||
|
||||
$ openstack network qos policy show bandwidth-control
|
||||
+-------------+-------------------------------------------------------------------+
|
||||
| Field | Value |
|
||||
+-------------+-------------------------------------------------------------------+
|
||||
| description | |
|
||||
| id | 8491547e-add1-4c6c-a50e-42121237256c |
|
||||
| name | bandwidth-control |
|
||||
| project_id | 7cc5a84e415d48e69d2b06aa67b317d8 |
|
||||
| rules | [{u'max_kbps': 50000, u'type': u'bandwidth_limit', |
|
||||
| | u'id': u'0db48906-a762-4d32-8694-3f65214c34a6', |
|
||||
| | u'max_burst_kbps': 50000, |
|
||||
| | u'qos_policy_id': u'8491547e-add1-4c6c-a50e-42121237256c'}, |
|
||||
| | {u'direction': |
|
||||
| | u'egress', u'min_kbps': 1000, u'type': u'minimum_bandwidth', |
|
||||
| | u'id': u'da858b32-44bc-43c9-b92b-cf6e2fa836ab', |
|
||||
| | u'qos_policy_id': u'8491547e-add1-4c6c-a50e-42121237256c'}] |
|
||||
| shared | False |
|
||||
+-------------+-------------------------------------------------------------------+
|
458
doc/source/admin/config-rbac.rst
Normal file
@ -0,0 +1,458 @@
|
||||
.. _config-rbac:
|
||||
|
||||
================================
|
||||
Role-Based Access Control (RBAC)
|
||||
================================
|
||||
|
||||
The Role-Based Access Control (RBAC) policy framework enables both operators
|
||||
and users to grant access to resources for specific projects.
|
||||
|
||||
|
||||
Supported objects for sharing with specific projects
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
Currently, the access that can be granted using this feature
|
||||
is supported by:
|
||||
|
||||
* Regular port creation permissions on networks (since Liberty).
|
||||
* Binding QoS policies permissions to networks or ports (since Mitaka).
|
||||
* Attaching router gateways to networks (since Mitaka).
|
||||
|
||||
|
||||
Sharing an object with specific projects
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
Sharing an object with a specific project is accomplished by creating
|
||||
a policy entry that permits the target project the ``access_as_shared``
|
||||
action on that object.
|
||||
|
||||
|
||||
Sharing a network with specific projects
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
Create a network to share:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack network create secret_network
|
||||
+---------------------------+--------------------------------------+
|
||||
| Field | Value |
|
||||
+---------------------------+--------------------------------------+
|
||||
| admin_state_up | UP |
|
||||
| availability_zone_hints | |
|
||||
| availability_zones | |
|
||||
| created_at | 2017-01-25T20:16:40Z |
|
||||
| description | |
|
||||
| dns_domain | None |
|
||||
| id | f55961b9-3eb8-42eb-ac96-b97038b568de |
|
||||
| ipv4_address_scope | None |
|
||||
| ipv6_address_scope | None |
|
||||
| is_default | None |
|
||||
| mtu | 1450 |
|
||||
| name | secret_network |
|
||||
| port_security_enabled | True |
|
||||
| project_id | 61b7eba037fd41f29cfba757c010faff |
|
||||
| provider:network_type | vxlan |
|
||||
| provider:physical_network | None |
|
||||
| provider:segmentation_id | 9 |
|
||||
| qos_policy_id | None |
|
||||
| revision_number | 3 |
|
||||
| router:external | Internal |
|
||||
| segments | None |
|
||||
| shared | False |
|
||||
| status | ACTIVE |
|
||||
| subnets | |
|
||||
| updated_at | 2017-01-25T20:16:40Z |
|
||||
+---------------------------+--------------------------------------+
|
||||
|
||||
|
||||
Create the policy entry using the :command:`openstack network rbac create`
|
||||
command (in this example, the ID of the project we want to share with is
|
||||
``b87b2fc13e0248a4a031d38e06dc191d``):
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack network rbac create --target-project \
|
||||
b87b2fc13e0248a4a031d38e06dc191d --action access_as_shared \
|
||||
--type network f55961b9-3eb8-42eb-ac96-b97038b568de
|
||||
+-------------------+--------------------------------------+
|
||||
| Field | Value |
|
||||
+-------------------+--------------------------------------+
|
||||
| action | access_as_shared |
|
||||
| id | f93efdbf-f1e0-41d2-b093-8328959d469e |
|
||||
| name | None |
|
||||
| object_id | f55961b9-3eb8-42eb-ac96-b97038b568de |
|
||||
| object_type | network |
|
||||
| project_id | 61b7eba037fd41f29cfba757c010faff |
|
||||
| target_project_id | b87b2fc13e0248a4a031d38e06dc191d |
|
||||
+-------------------+--------------------------------------+
|
||||
|
||||
|
||||
The ``target-project`` parameter specifies the project that requires
|
||||
access to the network. The ``action`` parameter specifies what
|
||||
the project is allowed to do. The ``type`` parameter says
|
||||
that the target object is a network. The final parameter is the ID of
|
||||
the network we are granting access to.
|
||||
|
||||
Project ``b87b2fc13e0248a4a031d38e06dc191d`` will now be able to see
|
||||
the network when running :command:`openstack network list` and
|
||||
:command:`openstack network show` and will also be able to create ports
|
||||
on that network. No other users (other than admins and the owner)
|
||||
will be able to see the network.
|
||||
|
||||
To remove access for that project, delete the policy that allows
|
||||
it using the :command:`openstack network rbac delete` command:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack network rbac delete f93efdbf-f1e0-41d2-b093-8328959d469e
|
||||
|
||||
If that project has ports on the network, the server will prevent the
|
||||
policy from being deleted until the ports have been deleted:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack network rbac delete f93efdbf-f1e0-41d2-b093-8328959d469e
|
||||
RBAC policy on object f93efdbf-f1e0-41d2-b093-8328959d469e
|
||||
cannot be removed because other objects depend on it.
|
||||
|
||||
This process can be repeated any number of times to share a network
|
||||
with an arbitrary number of projects.
|
||||
|
||||
|
||||
Sharing a QoS policy with specific projects
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
Create a QoS policy to share:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack network qos policy create secret_policy
|
||||
+-------------+--------------------------------------+
|
||||
| Field | Value |
|
||||
+-------------+--------------------------------------+
|
||||
| description | |
|
||||
| id | 1f730d69-1c45-4ade-a8f2-89070ac4f046 |
|
||||
| name | secret_policy |
|
||||
| project_id | 61b7eba037fd41f29cfba757c010faff |
|
||||
| rules | [] |
|
||||
| shared | False |
|
||||
+-------------+--------------------------------------+
|
||||
|
||||
|
||||
Create the RBAC policy entry using the :command:`openstack network rbac create`
|
||||
command (in this example, the ID of the project we want to share with is
|
||||
``be98b82f8fdf46b696e9e01cebc33fd9``):
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack network rbac create --target-project \
|
||||
be98b82f8fdf46b696e9e01cebc33fd9 --action access_as_shared \
|
||||
--type qos_policy 1f730d69-1c45-4ade-a8f2-89070ac4f046
|
||||
+-------------------+--------------------------------------+
|
||||
| Field | Value |
|
||||
+-------------------+--------------------------------------+
|
||||
| action | access_as_shared |
|
||||
| id | 8828e38d-a0df-4c78-963b-e5f215d3d550 |
|
||||
| name | None |
|
||||
| object_id | 1f730d69-1c45-4ade-a8f2-89070ac4f046 |
|
||||
| object_type | qos_policy |
|
||||
| project_id | 61b7eba037fd41f29cfba757c010faff |
|
||||
| target_project_id | be98b82f8fdf46b696e9e01cebc33fd9 |
|
||||
+-------------------+--------------------------------------+
|
||||
|
||||
|
||||
The ``target-project`` parameter specifies the project that requires
|
||||
access to the QoS policy. The ``action`` parameter specifies what
|
||||
the project is allowed to do. The ``type`` parameter says
|
||||
that the target object is a QoS policy. The final parameter is the ID of
|
||||
the QoS policy we are granting access to.
|
||||
|
||||
Project ``be98b82f8fdf46b696e9e01cebc33fd9`` will now be able to see
|
||||
the QoS policy when running :command:`openstack network qos policy list` and
|
||||
:command:`openstack network qos policy show` and will also be able to bind
|
||||
it to its ports or networks. No other users (other than admins and the owner)
|
||||
will be able to see the QoS policy.
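
For example, the target project could then attach the shared policy to one of
its own resources. This is only a sketch; ``port1`` and ``net1`` stand in for a
port and a network that already exist in that project:

.. code-block:: console

   $ openstack port set --qos-policy secret_policy port1
   $ openstack network set --qos-policy secret_policy net1
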
|
||||
|
||||
To remove access for that project, delete the RBAC policy that allows
|
||||
it using the :command:`openstack network rbac delete` command:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack network rbac delete 8828e38d-a0df-4c78-963b-e5f215d3d550
|
||||
|
||||
If that project has ports or networks with the QoS policy applied to them,
|
||||
the server will not delete the RBAC policy until
|
||||
the QoS policy is no longer in use:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack network rbac delete 8828e38d-a0df-4c78-963b-e5f215d3d550
|
||||
RBAC policy on object 8828e38d-a0df-4c78-963b-e5f215d3d550
|
||||
cannot be removed because other objects depend on it.
|
||||
|
||||
This process can be repeated any number of times to share a QoS policy
|
||||
with an arbitrary number of projects.
|
||||
|
||||
|
||||
How the 'shared' flag relates to these entries
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
As introduced in other guide entries, neutron provides a means of
|
||||
making an object (``network``, ``qos-policy``) available to every project.
|
||||
This is accomplished using the ``shared`` flag on the supported object:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack network create global_network --share
|
||||
+---------------------------+--------------------------------------+
|
||||
| Field | Value |
|
||||
+---------------------------+--------------------------------------+
|
||||
| admin_state_up | UP |
|
||||
| availability_zone_hints | |
|
||||
| availability_zones | |
|
||||
| created_at | 2017-01-25T20:32:06Z |
|
||||
| description | |
|
||||
| dns_domain | None |
|
||||
| id | 84a7e627-573b-49da-af66-c9a65244f3ce |
|
||||
| ipv4_address_scope | None |
|
||||
| ipv6_address_scope | None |
|
||||
| is_default | None |
|
||||
| mtu | 1450 |
|
||||
| name | global_network |
|
||||
| port_security_enabled | True |
|
||||
| project_id | 61b7eba037fd41f29cfba757c010faff |
|
||||
| provider:network_type | vxlan |
|
||||
| provider:physical_network | None |
|
||||
| provider:segmentation_id | 7 |
|
||||
| qos_policy_id | None |
|
||||
| revision_number | 3 |
|
||||
| router:external | Internal |
|
||||
| segments | None |
|
||||
| shared | True |
|
||||
| status | ACTIVE |
|
||||
| subnets | |
|
||||
| updated_at | 2017-01-25T20:32:07Z |
|
||||
+---------------------------+--------------------------------------+
|
||||
|
||||
|
||||
This is the equivalent of creating a policy on the network that permits
|
||||
every project to perform the action ``access_as_shared`` on that network.
|
||||
Neutron treats them as the same thing, so the policy entry for that
|
||||
network should be visible using the :command:`openstack network rbac list`
|
||||
command:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack network rbac list
|
||||
+-------------------------------+-------------+--------------------------------+
|
||||
| ID | Object Type | Object ID |
|
||||
+-------------------------------+-------------+--------------------------------+
|
||||
| 58a5ee31-2ad6-467d- | qos_policy | 1f730d69-1c45-4ade- |
|
||||
| 8bb8-8c2ae3dd1382 | | a8f2-89070ac4f046 |
|
||||
| 27efbd79-f384-4d89-9dfc- | network | 84a7e627-573b-49da- |
|
||||
| 6c4a606ceec6 | | af66-c9a65244f3ce |
|
||||
+-------------------------------+-------------+--------------------------------+
|
||||
|
||||
|
||||
Use the :command:`openstack network rbac show` command to see the details:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack network rbac show 27efbd79-f384-4d89-9dfc-6c4a606ceec6
|
||||
+-------------------+--------------------------------------+
|
||||
| Field | Value |
|
||||
+-------------------+--------------------------------------+
|
||||
| action | access_as_shared |
|
||||
| id | 27efbd79-f384-4d89-9dfc-6c4a606ceec6 |
|
||||
| name | None |
|
||||
| object_id | 84a7e627-573b-49da-af66-c9a65244f3ce |
|
||||
| object_type | network |
|
||||
| project_id | 61b7eba037fd41f29cfba757c010faff |
|
||||
| target_project_id | * |
|
||||
+-------------------+--------------------------------------+
|
||||
|
||||
|
||||
The output shows that the entry allows the action ``access_as_shared``
|
||||
on object ``84a7e627-573b-49da-af66-c9a65244f3ce`` of type ``network``
|
||||
to target project ``*``, which is a wildcard that represents all projects.
|
||||
|
||||
Currently, the ``shared`` flag is just a mapping to the underlying
|
||||
RBAC policies for a network. Setting the flag to ``True`` on a network
|
||||
creates a wildcard RBAC entry. Setting it to ``False`` removes the
|
||||
wildcard entry.
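
For example, un-sharing the ``global_network`` created above should remove the
wildcard entry again (a sketch; the command produces no output):

.. code-block:: console

   $ openstack network set --no-share global_network
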
|
||||
|
||||
When you run :command:`openstack network list` or
|
||||
:command:`openstack network show`, the ``shared`` flag is calculated by the
|
||||
server based on the calling project and the RBAC entries for each network.
|
||||
For QoS policies, use :command:`openstack network qos policy list` or
:command:`openstack network qos policy show` instead.
|
||||
If there is a wildcard entry, the ``shared`` flag is always set to ``True``.
|
||||
If there are only entries that share with specific projects, only
|
||||
the projects the object is shared with will see the flag as ``True``
|
||||
and the rest will see the flag as ``False``.
|
||||
|
||||
|
||||
Allowing a network to be used as an external network
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
To make a network available as an external network for specific projects
|
||||
rather than all projects, use the ``access_as_external`` action.
|
||||
|
||||
#. Create a network that you want to be available as an external network:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack network create secret_external_network
|
||||
+---------------------------+--------------------------------------+
|
||||
| Field | Value |
|
||||
+---------------------------+--------------------------------------+
|
||||
| admin_state_up | UP |
|
||||
| availability_zone_hints | |
|
||||
| availability_zones | |
|
||||
| created_at | 2017-01-25T20:36:59Z |
|
||||
| description | |
|
||||
| dns_domain | None |
|
||||
| id | 802d4e9e-4649-43e6-9ee2-8d052a880cfb |
|
||||
| ipv4_address_scope | None |
|
||||
| ipv6_address_scope | None |
|
||||
| is_default | None |
|
||||
| mtu | 1450 |
|
||||
| name | secret_external_network |
|
||||
| port_security_enabled | True |
|
||||
| project_id | 61b7eba037fd41f29cfba757c010faff |
|
||||
| provider:network_type | vxlan |
|
||||
| provider:physical_network | None |
|
||||
| provider:segmentation_id | 21 |
|
||||
| qos_policy_id | None |
|
||||
| revision_number | 3 |
|
||||
| router:external | Internal |
|
||||
| segments | None |
|
||||
| shared | False |
|
||||
| status | ACTIVE |
|
||||
| subnets | |
|
||||
| updated_at | 2017-01-25T20:36:59Z |
|
||||
+---------------------------+--------------------------------------+
|
||||
|
||||
|
||||
#. Create a policy entry using the :command:`openstack network rbac create`
|
||||
command (in this example, the ID of the project we want to share with is
|
||||
``838030a7bf3c4d04b4b054c0f0b2b17c``):
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack network rbac create --target-project \
|
||||
838030a7bf3c4d04b4b054c0f0b2b17c --action access_as_external \
|
||||
--type network 802d4e9e-4649-43e6-9ee2-8d052a880cfb
|
||||
+-------------------+--------------------------------------+
|
||||
| Field | Value |
|
||||
+-------------------+--------------------------------------+
|
||||
| action | access_as_external |
|
||||
| id | afdd5b8d-b6f5-4a15-9817-5231434057be |
|
||||
| name | None |
|
||||
| object_id | 802d4e9e-4649-43e6-9ee2-8d052a880cfb |
|
||||
| object_type | network |
|
||||
| project_id | 61b7eba037fd41f29cfba757c010faff |
|
||||
| target_project_id | 838030a7bf3c4d04b4b054c0f0b2b17c |
|
||||
+-------------------+--------------------------------------+
|
||||
|
||||
|
||||
The ``target-project`` parameter specifies the project that requires
|
||||
access to the network. The ``action`` parameter specifies what
|
||||
the project is allowed to do. The ``type`` parameter indicates
|
||||
that the target object is a network. The final parameter is the ID of
|
||||
the network we are granting external access to.
|
||||
|
||||
Now project ``838030a7bf3c4d04b4b054c0f0b2b17c`` is able to see
|
||||
the network when running :command:`openstack network list`
|
||||
and :command:`openstack network show` and can attach router gateway
|
||||
ports to that network. No other users (other than admins
|
||||
and the owner) are able to see the network.
|
||||
|
||||
To remove access for that project, delete the policy that allows
|
||||
it using the :command:`openstack network rbac delete` command:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack network rbac delete afdd5b8d-b6f5-4a15-9817-5231434057be
|
||||
|
||||
If that project has router gateway ports attached to that network,
|
||||
the server prevents the policy from being deleted until the
|
||||
ports have been deleted:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack network rbac delete afdd5b8d-b6f5-4a15-9817-5231434057be
|
||||
RBAC policy on object afdd5b8d-b6f5-4a15-9817-5231434057be
|
||||
cannot be removed because other objects depend on it.
|
||||
|
||||
This process can be repeated any number of times to make a network
|
||||
available as external to an arbitrary number of projects.
|
||||
|
||||
If a network is marked as external during creation, a wildcard RBAC policy
granting every project access is implicitly created. This preserves the
behavior that existed before this feature was added.
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack network create global_external_network --external
|
||||
+---------------------------+--------------------------------------+
|
||||
| Field | Value |
|
||||
+---------------------------+--------------------------------------+
|
||||
| admin_state_up | UP |
|
||||
| availability_zone_hints | |
|
||||
| availability_zones | |
|
||||
| created_at | 2017-01-25T20:41:44Z |
|
||||
| description | |
|
||||
| dns_domain | None |
|
||||
| id | 72a257a2-a56e-4ac7-880f-94a4233abec6 |
|
||||
| ipv4_address_scope | None |
|
||||
| ipv6_address_scope | None |
|
||||
| is_default | None |
|
||||
| mtu | 1450 |
|
||||
| name | global_external_network |
|
||||
| port_security_enabled | True |
|
||||
| project_id | 61b7eba037fd41f29cfba757c010faff |
|
||||
| provider:network_type | vxlan |
|
||||
| provider:physical_network | None |
|
||||
| provider:segmentation_id | 69 |
|
||||
| qos_policy_id | None |
|
||||
| revision_number | 4 |
|
||||
| router:external | External |
|
||||
| segments | None |
|
||||
| shared | False |
|
||||
| status | ACTIVE |
|
||||
| subnets | |
|
||||
| updated_at | 2017-01-25T20:41:44Z |
|
||||
+---------------------------+--------------------------------------+
|
||||
|
||||
|
||||
In the output above the standard ``router:external`` attribute is
|
||||
``External`` as expected. Now a wildcard policy is visible in the
|
||||
RBAC policy listings:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack network rbac list --long -c ID -c Action
|
||||
+--------------------------------------+--------------------+
|
||||
| ID | Action |
|
||||
+--------------------------------------+--------------------+
|
||||
| b694e541-bdca-480d-94ec-eda59ab7d71a | access_as_external |
|
||||
+--------------------------------------+--------------------+
|
||||
|
||||
|
||||
You can modify or delete this policy with the same constraints
|
||||
as any other RBAC ``access_as_external`` policy.
|
||||
|
||||
|
||||
Preventing regular users from sharing objects with each other
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
The default ``policy.json`` file will not allow regular
|
||||
users to share objects with every other project using a wildcard;
|
||||
however, it will allow them to share objects with specific project
|
||||
IDs.
|
||||
|
||||
If an operator wants to prevent normal users from doing this, the
|
||||
``"create_rbac_policy":`` entry in ``policy.json`` can be adjusted
|
||||
from ``""`` to ``"rule:admin_only"``.
|
450
doc/source/admin/config-routed-networks.rst
Normal file
@ -0,0 +1,450 @@
|
||||
.. _config-routed-provider-networks:
|
||||
|
||||
========================
|
||||
Routed provider networks
|
||||
========================
|
||||
|
||||
.. note::
|
||||
|
||||
Use of this feature requires the OpenStack client
|
||||
version 3.3 or newer.
|
||||
|
||||
Before routed provider networks, the Networking service could not present a
|
||||
multi-segment layer-3 network as a single entity. Thus, each operator typically
|
||||
chose one of the following architectures:
|
||||
|
||||
* Single large layer-2 network
|
||||
* Multiple smaller layer-2 networks
|
||||
|
||||
Single large layer-2 networks become complex at scale and involve significant
|
||||
failure domains.
|
||||
|
||||
Multiple smaller layer-2 networks scale better and shrink failure domains, but
|
||||
leave network selection to the user. Without additional information, users
|
||||
cannot easily differentiate these networks.
|
||||
|
||||
A routed provider network enables a single provider network to represent
|
||||
multiple layer-2 networks (broadcast domains) or segments and enables the
|
||||
operator to present one network to users. However, the particular IP
|
||||
addresses available to an instance depend on the segment of the network
|
||||
available on the particular compute node.
|
||||
|
||||
Similar to conventional networking, layer-2 (switching) handles transit of
|
||||
traffic between ports on the same segment and layer-3 (routing) handles
|
||||
transit of traffic between segments.
|
||||
|
||||
Each segment requires at least one subnet that explicitly belongs to that
|
||||
segment. The association between a segment and a subnet distinguishes a
|
||||
routed provider network from other types of networks. The Networking service
|
||||
enforces that either zero or all subnets on a particular network associate
|
||||
with a segment. For example, attempting to create a subnet without a segment
|
||||
on a network containing subnets with segments generates an error.
|
||||
|
||||
The Networking service does not provide layer-3 services between segments.
|
||||
Instead, it relies on physical network infrastructure to route subnets.
|
||||
Thus, both the Networking service and physical network infrastructure must
|
||||
contain configuration for routed provider networks, similar to conventional
|
||||
provider networks. In the future, implementation of dynamic routing protocols
|
||||
may ease configuration of routed networks.
|
||||
|
||||
Prerequisites
|
||||
~~~~~~~~~~~~~
|
||||
|
||||
Routed provider networks require additional prerequisites over conventional
|
||||
provider networks. We recommend using the following procedure:
|
||||
|
||||
#. Begin with segments. The Networking service defines a segment using the
|
||||
following components:
|
||||
|
||||
* Unique physical network name
|
||||
* Segmentation type
|
||||
* Segmentation ID
|
||||
|
||||
For example, ``provider1``, ``VLAN``, and ``2016``. See the
|
||||
`API reference <https://developer.openstack.org/api-ref/networking/v2/#segments>`__
|
||||
for more information.
|
||||
|
||||
Within a network, use a unique physical network name for each segment, which
enables reuse of the same segmentation details between segments. For
|
||||
example, using the same VLAN ID across all segments of a particular
|
||||
provider network. Similar to conventional provider networks, the operator
|
||||
must provision the layer-2 physical network infrastructure accordingly.
|
||||
|
||||
#. Implement routing between segments.
|
||||
|
||||
The Networking service does not provision routing among segments. The
|
||||
operator must implement routing among segments of a provider network.
|
||||
Each subnet on a segment must contain the gateway address of the
|
||||
router interface on that particular subnet. For example:
|
||||
|
||||
=========== ======= ======================= =====================
|
||||
Segment Version Addresses Gateway
|
||||
=========== ======= ======================= =====================
|
||||
segment1 4 203.0.113.0/24 203.0.113.1
|
||||
segment1 6 fd00:203:0:113::/64 fd00:203:0:113::1
|
||||
segment2 4 198.51.100.0/24 198.51.100.1
|
||||
segment2 6 fd00:198:51:100::/64 fd00:198:51:100::1
|
||||
=========== ======= ======================= =====================
|
||||
|
||||
#. Map segments to compute nodes.
|
||||
|
||||
Routed provider networks imply that compute nodes reside on different
|
||||
segments. The operator must ensure that every compute host that is supposed
|
||||
to participate in a routed provider network has direct connectivity to one
|
||||
of its segments.
|
||||
|
||||
=========== ====== ================
|
||||
Host Rack Physical Network
|
||||
=========== ====== ================
|
||||
compute0001 rack 1 segment 1
|
||||
compute0002 rack 1 segment 1
|
||||
... ... ...
|
||||
compute0101 rack 2 segment 2
|
||||
compute0102 rack 2 segment 2
|
||||
... ... ...
|
||||
=========== ====== ================
|
||||
|
||||
#. Deploy DHCP agents.
|
||||
|
||||
Unlike conventional provider networks, a DHCP agent cannot support more
|
||||
than one segment within a network. The operator must deploy at least one
|
||||
DHCP agent per segment. Consider deploying DHCP agents on compute nodes
|
||||
containing the segments rather than on one or more network nodes to reduce
|
||||
node count.
|
||||
|
||||
=========== ====== ================
|
||||
Host Rack Physical Network
|
||||
=========== ====== ================
|
||||
network0001 rack 1 segment 1
|
||||
network0002 rack 2 segment 2
|
||||
... ... ...
|
||||
=========== ====== ================
|
||||
|
||||
#. Configure communication of the Networking service with the Compute
|
||||
scheduler.
|
||||
|
||||
An instance with an interface with an IPv4 address in a routed provider
|
||||
network must be placed by the Compute scheduler in a host that has access to
|
||||
a segment with available IPv4 addresses. To make this possible, the
|
||||
Networking service communicates to the Compute scheduler the inventory of
|
||||
IPv4 addresses associated with each segment of a routed provider network.
|
||||
The operator must configure the authentication credentials that the
|
||||
Networking service will use to communicate with the Compute scheduler's
|
||||
placement API. An example configuration is shown below.
|
||||
|
||||
.. note::
|
||||
|
||||
Coordination between the Networking service and the Compute scheduler is
|
||||
not necessary for IPv6 subnets as a consequence of their large address
|
||||
spaces.
|
||||
|
||||
.. note::
|
||||
|
||||
The coordination between the Networking service and the Compute scheduler
|
||||
requires the following minimum API micro-versions.
|
||||
|
||||
* Compute service API: 2.41
|
||||
* Placement API: 1.1
|
||||
|
||||
Example configuration
|
||||
~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
Controller node
|
||||
---------------
|
||||
|
||||
#. Enable the segments service plug-in by appending ``segments`` to the list
|
||||
of ``service_plugins`` in the ``neutron.conf`` file on all nodes running the
|
||||
``neutron-server`` service:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[DEFAULT]
|
||||
# ...
|
||||
service_plugins = ..., segments
|
||||
|
||||
#. Add a ``placement`` section to the ``neutron.conf`` file with authentication
|
||||
credentials for the Compute service placement API:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[placement]
|
||||
auth_uri = http://192.0.2.72/identity
|
||||
project_domain_name = Default
|
||||
project_name = service
|
||||
user_domain_name = Default
|
||||
password = apassword
|
||||
username = nova
|
||||
auth_url = http://192.0.2.72/identity_admin
|
||||
auth_type = password
|
||||
region_name = RegionOne
|
||||
|
||||
#. Restart the ``neutron-server`` service.
|
||||
|
||||
Network or compute nodes
|
||||
------------------------
|
||||
|
||||
* Configure the layer-2 agent on each node to map one or more segments to
|
||||
the appropriate physical network bridge or interface and restart the
|
||||
agent.
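
For example, with the Open vSwitch agent the mapping could look like the
following sketch; the bridge names ``br-provider1`` and ``br-provider2`` are
assumptions and must match your environment:

.. code-block:: ini

   [ovs]
   bridge_mappings = provider1:br-provider1,provider2:br-provider2
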
|
||||
|
||||
Create a routed provider network
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
The following steps create a routed provider network with two segments. Each
|
||||
segment contains one IPv4 subnet and one IPv6 subnet.
|
||||
|
||||
#. Source the administrative project credentials.
|
||||
#. Create a VLAN provider network which includes a default segment. In this
|
||||
example, the network uses the ``provider1`` physical network with VLAN ID
|
||||
2016.
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack network create --share --provider-physical-network provider1 \
|
||||
--provider-network-type vlan --provider-segment 2016 multisegment1
|
||||
+---------------------------+--------------------------------------+
|
||||
| Field | Value |
|
||||
+---------------------------+--------------------------------------+
|
||||
| admin_state_up | UP |
|
||||
| id | 6ab19caa-dda9-4b3d-abc4-5b8f435b98d9 |
|
||||
| ipv4_address_scope | None |
|
||||
| ipv6_address_scope | None |
|
||||
| l2_adjacency | True |
|
||||
| mtu | 1500 |
|
||||
| name | multisegment1 |
|
||||
| port_security_enabled | True |
|
||||
| provider:network_type | vlan |
|
||||
| provider:physical_network | provider1 |
|
||||
| provider:segmentation_id | 2016 |
|
||||
| router:external | Internal |
|
||||
| shared | True |
|
||||
| status | ACTIVE |
|
||||
| subnets | |
|
||||
| tags | [] |
|
||||
+---------------------------+--------------------------------------+
|
||||
|
||||
#. Rename the default segment to ``segment1``.
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack network segment list --network multisegment1
|
||||
+--------------------------------------+----------+--------------------------------------+--------------+---------+
|
||||
| ID | Name | Network | Network Type | Segment |
|
||||
+--------------------------------------+----------+--------------------------------------+--------------+---------+
|
||||
| 43e16869-ad31-48e4-87ce-acf756709e18 | None | 6ab19caa-dda9-4b3d-abc4-5b8f435b98d9 | vlan | 2016 |
|
||||
+--------------------------------------+----------+--------------------------------------+--------------+---------+
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack network segment set --name segment1 43e16869-ad31-48e4-87ce-acf756709e18
|
||||
|
||||
.. note::
|
||||
|
||||
This command provides no output.
|
||||
|
||||
#. Create a second segment on the provider network. In this example, the
|
||||
segment uses the ``provider2`` physical network with VLAN ID 2016.
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack network segment create --physical-network provider2 \
|
||||
--network-type vlan --segment 2016 --network multisegment1 segment2
|
||||
+------------------+--------------------------------------+
|
||||
| Field | Value |
|
||||
+------------------+--------------------------------------+
|
||||
| description | None |
|
||||
| headers | |
|
||||
| id | 053b7925-9a89-4489-9992-e164c8cc8763 |
|
||||
| name | segment2 |
|
||||
| network_id | 6ab19caa-dda9-4b3d-abc4-5b8f435b98d9 |
|
||||
| network_type | vlan |
|
||||
| physical_network | provider2 |
|
||||
| segmentation_id | 2016 |
|
||||
+------------------+--------------------------------------+
|
||||
|
||||
#. Verify that the network contains the ``segment1`` and ``segment2`` segments.
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack network segment list --network multisegment1
|
||||
+--------------------------------------+----------+--------------------------------------+--------------+---------+
|
||||
| ID | Name | Network | Network Type | Segment |
|
||||
+--------------------------------------+----------+--------------------------------------+--------------+---------+
|
||||
| 053b7925-9a89-4489-9992-e164c8cc8763 | segment2 | 6ab19caa-dda9-4b3d-abc4-5b8f435b98d9 | vlan | 2016 |
|
||||
| 43e16869-ad31-48e4-87ce-acf756709e18 | segment1 | 6ab19caa-dda9-4b3d-abc4-5b8f435b98d9 | vlan | 2016 |
|
||||
+--------------------------------------+----------+--------------------------------------+--------------+---------+
|
||||
|
||||
#. Create subnets on the ``segment1`` segment. In this example, the IPv4
|
||||
subnet uses 203.0.113.0/24 and the IPv6 subnet uses fd00:203:0:113::/64.
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack subnet create \
|
||||
--network multisegment1 --network-segment segment1 \
|
||||
--ip-version 4 --subnet-range 203.0.113.0/24 \
|
||||
multisegment1-segment1-v4
|
||||
+-------------------+--------------------------------------+
|
||||
| Field | Value |
|
||||
+-------------------+--------------------------------------+
|
||||
| allocation_pools | 203.0.113.2-203.0.113.254 |
|
||||
| cidr | 203.0.113.0/24 |
|
||||
| enable_dhcp | True |
|
||||
| gateway_ip | 203.0.113.1 |
|
||||
| id | c428797a-6f8e-4cb1-b394-c404318a2762 |
|
||||
| ip_version | 4 |
|
||||
| name | multisegment1-segment1-v4 |
|
||||
| network_id | 6ab19caa-dda9-4b3d-abc4-5b8f435b98d9 |
|
||||
| segment_id | 43e16869-ad31-48e4-87ce-acf756709e18 |
|
||||
+-------------------+--------------------------------------+
|
||||
|
||||
$ openstack subnet create \
|
||||
--network multisegment1 --network-segment segment1 \
|
||||
--ip-version 6 --subnet-range fd00:203:0:113::/64 \
|
||||
--ipv6-address-mode slaac multisegment1-segment1-v6
|
||||
+-------------------+------------------------------------------------------+
|
||||
| Field | Value |
|
||||
+-------------------+------------------------------------------------------+
|
||||
| allocation_pools | fd00:203:0:113::2-fd00:203:0:113:ffff:ffff:ffff:ffff |
|
||||
| cidr | fd00:203:0:113::/64 |
|
||||
| enable_dhcp | True |
|
||||
| gateway_ip | fd00:203:0:113::1 |
|
||||
| id | e41cb069-9902-4c01-9e1c-268c8252256a |
|
||||
| ip_version | 6 |
|
||||
| ipv6_address_mode | slaac |
|
||||
| ipv6_ra_mode | None |
|
||||
| name | multisegment1-segment1-v6 |
|
||||
| network_id | 6ab19caa-dda9-4b3d-abc4-5b8f435b98d9 |
|
||||
| segment_id | 43e16869-ad31-48e4-87ce-acf756709e18 |
|
||||
+-------------------+------------------------------------------------------+
|
||||
|
||||
.. note::
|
||||
|
||||
By default, IPv6 subnets on provider networks rely on physical network
|
||||
infrastructure for stateless address autoconfiguration (SLAAC) and
|
||||
router advertisement.
|
||||
|
||||
#. Create subnets on the ``segment2`` segment. In this example, the IPv4
|
||||
subnet uses 198.51.100.0/24 and the IPv6 subnet uses fd00:198:51:100::/64.
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack subnet create \
|
||||
--network multisegment1 --network-segment segment2 \
|
||||
--ip-version 4 --subnet-range 198.51.100.0/24 \
|
||||
multisegment1-segment2-v4
|
||||
+-------------------+--------------------------------------+
|
||||
| Field | Value |
|
||||
+-------------------+--------------------------------------+
|
||||
| allocation_pools | 198.51.100.2-198.51.100.254 |
|
||||
| cidr | 198.51.100.0/24 |
|
||||
| enable_dhcp | True |
|
||||
| gateway_ip | 198.51.100.1 |
|
||||
| id | 242755c2-f5fd-4e7d-bd7a-342ca95e50b2 |
|
||||
| ip_version | 4 |
|
||||
| name | multisegment1-segment2-v4 |
|
||||
| network_id | 6ab19caa-dda9-4b3d-abc4-5b8f435b98d9 |
|
||||
| segment_id | 053b7925-9a89-4489-9992-e164c8cc8763 |
|
||||
+-------------------+--------------------------------------+
|
||||
|
||||
$ openstack subnet create \
|
||||
--network multisegment1 --network-segment segment2 \
|
||||
--ip-version 6 --subnet-range fd00:198:51:100::/64 \
|
||||
--ipv6-address-mode slaac multisegment1-segment2-v6
|
||||
+-------------------+--------------------------------------------------------+
|
||||
| Field | Value |
|
||||
+-------------------+--------------------------------------------------------+
|
||||
| allocation_pools | fd00:198:51:100::2-fd00:198:51:100:ffff:ffff:ffff:ffff |
|
||||
| cidr | fd00:198:51:100::/64 |
|
||||
| enable_dhcp | True |
|
||||
| gateway_ip | fd00:198:51:100::1 |
|
||||
| id | b884c40e-9cfe-4d1b-a085-0a15488e9441 |
|
||||
| ip_version | 6 |
|
||||
| ipv6_address_mode | slaac |
|
||||
| ipv6_ra_mode | None |
|
||||
| name | multisegment1-segment2-v6 |
|
||||
| network_id | 6ab19caa-dda9-4b3d-abc4-5b8f435b98d9 |
|
||||
| segment_id | 053b7925-9a89-4489-9992-e164c8cc8763 |
|
||||
+-------------------+--------------------------------------------------------+
|
||||
|
||||
#. Verify that each IPv4 subnet associates with at least one DHCP agent.
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ neutron dhcp-agent-list-hosting-net multisegment1
|
||||
+--------------------------------------+-------------+----------------+-------+
|
||||
| id | host | admin_state_up | alive |
|
||||
+--------------------------------------+-------------+----------------+-------+
|
||||
| c904ed10-922c-4c1a-84fd-d928abaf8f55 | compute0001 | True | :-) |
|
||||
| e0b22cc0-d2a6-4f1c-b17c-27558e20b454 | compute0101 | True | :-) |
|
||||
+--------------------------------------+-------------+----------------+-------+
|
||||
|
||||
#. Verify that inventories were created for each segment IPv4 subnet in the
|
||||
Compute service placement API (for the sake of brevity, only one of the
|
||||
segments is shown in this example).
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ SEGMENT_ID=053b7925-9a89-4489-9992-e164c8cc8763
|
||||
$ curl -s -X GET \
|
||||
http://localhost/placement/resource_providers/$SEGMENT_ID/inventories \
|
||||
-H "Content-type: application/json" \
|
||||
-H "X-Auth-Token: $TOKEN" \
|
||||
-H "Openstack-Api-Version: placement 1.1"
|
||||
{
|
||||
"resource_provider_generation": 1,
|
||||
"inventories": {
|
||||
"allocation_ratio": 1,
|
||||
"total": 254,
|
||||
"reserved": 2,
|
||||
"step_size": 1,
|
||||
"min_unit": 1,
|
||||
"max_unit": 1
|
||||
}
|
||||
}
|
||||
|
||||
.. note::
|
||||
|
||||
As of the writing of this guide, there is no placement API CLI client,
|
||||
so the :command:`curl` command is used for this example.
|
||||
|
||||
#. Verify that host aggregates were created for each segment in the Compute
|
||||
service (for the sake of brevity, only one of the segments is shown in this
|
||||
example).
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack aggregate list
|
||||
+----+---------------------------------------------------------+-------------------+
|
||||
| Id | Name | Availability Zone |
|
||||
+----+---------------------------------------------------------+-------------------+
|
||||
| 10 | Neutron segment id 053b7925-9a89-4489-9992-e164c8cc8763 | None |
|
||||
+----+---------------------------------------------------------+-------------------+
|
||||
|
||||
#. Launch one or more instances. Each instance obtains IP addresses according
|
||||
to the segment it uses on the particular compute node.
|
||||
|
||||
.. note::
|
||||
|
||||
Creating a port and passing it to an instance yields a different
|
||||
behavior than conventional networks. The Networking service
|
||||
defers assignment of IP addresses to the port until the particular
|
||||
compute node becomes apparent. For example:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack port create --network multisegment1 port1
|
||||
+-----------------------+--------------------------------------+
|
||||
| Field | Value |
|
||||
+-----------------------+--------------------------------------+
|
||||
| admin_state_up | UP |
|
||||
| binding_vnic_type | normal |
|
||||
| id | 6181fb47-7a74-4add-9b6b-f9837c1c90c4 |
|
||||
| ip_allocation | deferred |
|
||||
| mac_address | fa:16:3e:34:de:9b |
|
||||
| name | port1 |
|
||||
| network_id | 6ab19caa-dda9-4b3d-abc4-5b8f435b98d9 |
|
||||
| port_security_enabled | True |
|
||||
| security_groups | e4fcef0d-e2c5-40c3-a385-9c33ac9289c5 |
|
||||
| status | DOWN |
|
||||
+-----------------------+--------------------------------------+
|
338
doc/source/admin/config-service-subnets.rst
Normal file
@ -0,0 +1,338 @@
|
||||
.. _config-service-subnets:
|
||||
|
||||
===============
|
||||
Service subnets
|
||||
===============
|
||||
|
||||
Service subnets enable operators to define valid port types for each
|
||||
subnet on a network without limiting networks to one subnet or manually
|
||||
creating ports with a specific subnet ID. Using this feature, operators
|
||||
can ensure that ports for instances and router interfaces, for example,
|
||||
always use different subnets.
|
||||
|
||||
Operation
|
||||
~~~~~~~~~
|
||||
|
||||
Define one or more service types for one or more subnets on a particular
|
||||
network. Each service type must correspond to a valid device owner within
|
||||
the port model in order for it to be used.
|
||||
|
||||
During IP allocation, the :ref:`IPAM <config-ipam>` driver returns an
|
||||
address from a subnet with a service type matching the port device
|
||||
owner. If no subnets match, or all matching subnets lack available IP
|
||||
addresses, the IPAM driver attempts to use a subnet without any service
|
||||
types to preserve compatibility. If all subnets on a network have a
|
||||
service type, the IPAM driver cannot preserve compatibility. However, this
|
||||
feature enables strict IP allocation from subnets with a matching device
|
||||
owner. If multiple subnets contain the same service type, or a subnet
|
||||
without a service type exists, the IPAM driver selects the first subnet
|
||||
with a matching service type. For example, a floating IP agent gateway port
|
||||
uses the following selection process:
|
||||
|
||||
* ``network:floatingip_agent_gateway``
|
||||
* ``None``
|
||||
|
||||
.. note::
|
||||
|
||||
Ports with the device owner ``network:dhcp`` are exempt from the above IPAM
|
||||
logic for subnets with ``dhcp_enabled`` set to ``True``. This preserves the
|
||||
existing automatic DHCP port creation behaviour for DHCP-enabled subnets.
|
||||
|
||||
Creating or updating a port with a specific subnet skips this selection
|
||||
process and explicitly uses the given subnet.
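
For example, a port created directly on a specific subnet bypasses the
service-type selection. This is a sketch that reuses the ``demo-net1`` network
and ``demo-subnet1`` subnet created in Example 1 below:

.. code-block:: console

   $ openstack port create --network demo-net1 \
     --fixed-ip subnet=demo-subnet1 port-on-demo-subnet1
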
|
||||
|
||||
Usage
|
||||
~~~~~
|
||||
|
||||
.. note::
|
||||
|
||||
Creating a subnet with a service type requires administrative
|
||||
privileges.
|
||||
|
||||
Example 1 - Proof-of-concept
|
||||
----------------------------
|
||||
|
||||
The following example is not typical of an actual deployment. It is shown
|
||||
to allow users to experiment with configuring service subnets.
|
||||
|
||||
#. Create a network.
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack network create demo-net1
|
||||
+---------------------------+--------------------------------------+
|
||||
| Field | Value |
|
||||
+---------------------------+--------------------------------------+
|
||||
| admin_state_up | UP |
|
||||
| availability_zone_hints | |
|
||||
| availability_zones | |
|
||||
| description | |
|
||||
| headers | |
|
||||
| id | b5b729d8-31cc-4d2c-8284-72b3291fec02 |
|
||||
| ipv4_address_scope | None |
|
||||
| ipv6_address_scope | None |
|
||||
| mtu | 1450 |
|
||||
| name | demo-net1 |
|
||||
| port_security_enabled | True |
|
||||
| project_id | a3db43cd0f224242a847ab84d091217d |
|
||||
| provider:network_type | vxlan |
|
||||
| provider:physical_network | None |
|
||||
| provider:segmentation_id | 110 |
|
||||
| router:external | Internal |
|
||||
| shared | False |
|
||||
| status | ACTIVE |
|
||||
| subnets | |
|
||||
| tags | [] |
|
||||
+---------------------------+--------------------------------------+
|
||||
|
||||
#. Create a subnet on the network with one or more service types. For
|
||||
example, the ``compute:nova`` service type enables instances to use
|
||||
this subnet.
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack subnet create demo-subnet1 --subnet-range 192.0.2.0/24 \
|
||||
--service-type 'compute:nova' --network demo-net1
|
||||
+-------------------+--------------------------------------+
|
||||
| Field | Value |
|
||||
+-------------------+--------------------------------------+
|
||||
| id | 6e38b23f-0b27-4e3c-8e69-fd23a3df1935 |
|
||||
| ip_version | 4 |
|
||||
| cidr | 192.0.2.0/24 |
|
||||
| name | demo-subnet1 |
|
||||
| network_id | b5b729d8-31cc-4d2c-8284-72b3291fec02 |
|
||||
| service_types | ['compute:nova'] |
|
||||
| tenant_id | a8b3054cc1214f18b1186b291525650f |
|
||||
+-------------------+--------------------------------------+
|
||||
|
||||
#. Optionally, create another subnet on the network with a different service
|
||||
type. For example, the ``compute:foo`` arbitrary service type.
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack subnet create demo-subnet2 --subnet-range 198.51.100.0/24 \
|
||||
--service-type 'compute:foo' --network demo-net1
|
||||
+-------------------+--------------------------------------+
|
||||
| Field | Value |
|
||||
+-------------------+--------------------------------------+
|
||||
| id | ea139dcd-17a3-4f0a-8cca-dff8b4e03f8a |
|
||||
| ip_version | 4 |
|
||||
| cidr | 198.51.100.0/24 |
|
||||
| name | demo-subnet2 |
|
||||
| network_id | b5b729d8-31cc-4d2c-8284-72b3291fec02 |
|
||||
| service_types | ['compute:foo'] |
|
||||
| tenant_id | a8b3054cc1214f18b1186b291525650f |
|
||||
+-------------------+--------------------------------------+
|
||||
|
||||
#. Launch an instance using the network. For example, using the ``cirros``
|
||||
image and ``m1.tiny`` flavor.
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack server create demo-instance1 --flavor m1.tiny \
|
||||
--image cirros --nic net-id=b5b729d8-31cc-4d2c-8284-72b3291fec02
|
||||
+--------------------------------------+-----------------------------------------------+
|
||||
| Field | Value |
|
||||
+--------------------------------------+-----------------------------------------------+
|
||||
| OS-DCF:diskConfig | MANUAL |
|
||||
| OS-EXT-AZ:availability_zone | |
|
||||
| OS-EXT-SRV-ATTR:host | None |
|
||||
| OS-EXT-SRV-ATTR:hypervisor_hostname | None |
|
||||
| OS-EXT-SRV-ATTR:instance_name | instance-00000009 |
|
||||
| OS-EXT-STS:power_state | 0 |
|
||||
| OS-EXT-STS:task_state | scheduling |
|
||||
| OS-EXT-STS:vm_state | building |
|
||||
| OS-SRV-USG:launched_at | None |
|
||||
| OS-SRV-USG:terminated_at | None |
|
||||
| accessIPv4 | |
|
||||
| accessIPv6 | |
|
||||
| addresses | |
|
||||
| adminPass | Fn85skabdxBL |
|
||||
| config_drive | |
|
||||
| created | 2016-09-19T15:07:42Z |
|
||||
| flavor | m1.tiny (1) |
|
||||
| hostId | |
|
||||
| id | 04222b73-1a6e-4c2a-9af4-ef3d17d521ff |
|
||||
| image | cirros (4aaec87d-c655-4856-8618-b2dada3a2b11) |
|
||||
| key_name | None |
|
||||
| name | demo-instance1 |
|
||||
| os-extended-volumes:volumes_attached | [] |
|
||||
| progress | 0 |
|
||||
| project_id | d44c19e056674381b86430575184b167 |
|
||||
| properties | |
|
||||
| security_groups | [{u'name': u'default'}] |
|
||||
| status | BUILD |
|
||||
| updated | 2016-09-19T15:07:42Z |
|
||||
| user_id | 331afbeb322d4c559a181e19051ae362 |
|
||||
+--------------------------------------+-----------------------------------------------+
|
||||
|
||||
#. Check the instance status. The ``Networks`` field contains an IP address
|
||||
from the subnet having the ``compute:nova`` service type.
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack server list
|
||||
+--------------------------------------+-----------------+---------+---------------------+
|
||||
| ID | Name | Status | Networks |
|
||||
+--------------------------------------+-----------------+---------+---------------------+
|
||||
| 20181f46-5cd2-4af8-9af0-f4cf5c983008 | demo-instance1 | ACTIVE | demo-net1=192.0.2.3 |
|
||||
+--------------------------------------+-----------------+---------+---------------------+
|
||||
|
||||
Example 2 - DVR configuration
|
||||
-----------------------------
|
||||
|
||||
The following example outlines how you can configure service subnets in
|
||||
a DVR-enabled deployment, with the goal of minimizing public IP
|
||||
address consumption. This example uses three subnets on the same external
|
||||
network:
|
||||
|
||||
* 192.0.2.0/24 for instance floating IP addresses
|
||||
* 198.51.100.0/24 for floating IP agent gateway IPs configured on compute nodes
|
||||
* 203.0.113.0/25 for all other IP allocations on the external network
|
||||
|
||||
This example again uses the private network, ``demo-net1``
(b5b729d8-31cc-4d2c-8284-72b3291fec02), which was created in
|
||||
`Example 1 - Proof-of-concept`_.
|
||||
|
||||
.. note::
|
||||
|
||||
The output of the commands is not always shown since it
|
||||
is very similar to the above.
|
||||
|
||||
#. Create an external network:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack network create --external demo-ext-net
|
||||
|
||||
#. Create a subnet on the external network for the instance floating IP
|
||||
addresses. This uses the ``network:floatingip`` service type.
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack subnet create demo-floating-ip-subnet \
|
||||
--subnet-range 192.0.2.0/24 --no-dhcp \
|
||||
--service-type 'network:floatingip' --network demo-ext-net
|
||||
|
||||
#. Create a subnet on the external network for the floating IP agent
|
||||
gateway IP addresses, which are configured by DVR on compute nodes.
|
||||
This will use the ``network:floatingip_agent_gateway`` service type.
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack subnet create demo-floating-ip-agent-gateway-subnet \
|
||||
--subnet-range 198.51.100.0/24 --no-dhcp \
|
||||
--service-type 'network:floatingip_agent_gateway' \
|
||||
--network demo-ext-net
|
||||
|
||||
#. Create a subnet on the external network for all other IP addresses
|
||||
allocated on the external network. This will not use any service
|
||||
type. It acts as a fallback for allocations that do not match
|
||||
either of the above two service subnets.
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack subnet create demo-other-subnet \
|
||||
--subnet-range 203.0.113.0/25 --no-dhcp \
|
||||
--network demo-ext-net
|
||||
|
||||
#. Create a router:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack router create demo-router
|
||||
|
||||
#. Add an interface to the router on demo-subnet1:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack router add subnet demo-router demo-subnet1
|
||||
|
||||
#. Set the external gateway for the router, which will create an
|
||||
interface and allocate an IP address on demo-ext-net:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ neutron router-gateway-set demo-router demo-ext-net
|
||||
|
||||
#. Launch an instance on a private network and retrieve the neutron
|
||||
port ID that was allocated. As above, use the ``cirros``
|
||||
image and ``m1.tiny`` flavor:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack server create demo-instance1 --flavor m1.tiny \
|
||||
--image cirros --nic net-id=b5b729d8-31cc-4d2c-8284-72b3291fec02
|
||||
$ openstack port list --server demo-instance1
|
||||
+--------------------------------------+------+-------------------+--------------------------------------------------+--------+
|
||||
| ID | Name | MAC Address | Fixed IP Addresses | Status |
|
||||
+--------------------------------------+------+-------------------+--------------------------------------------------+--------+
|
||||
| a752bb24-9bf2-4d37-b9d6-07da69c86f19 | | fa:16:3e:99:54:32 | ip_address='203.0.113.130', | ACTIVE |
|
||||
| | | | subnet_id='6e38b23f-0b27-4e3c-8e69-fd23a3df1935' | |
|
||||
+--------------------------------------+------+-------------------+--------------------------------------------------+--------+
|
||||
|
||||
#. Associate a floating IP with the instance port and verify it was
|
||||
allocated an IP address from the correct subnet:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack floating ip create --port \
|
||||
a752bb24-9bf2-4d37-b9d6-07da69c86f19 demo-ext-net
|
||||
+---------------------+--------------------------------------+
|
||||
| Field | Value |
|
||||
+---------------------+--------------------------------------+
|
||||
| fixed_ip_address | 203.0.113.130 |
|
||||
| floating_ip_address | 192.0.2.12 |
|
||||
| floating_network_id | 02d236d5-dad9-4082-bb6b-5245f9f84d13 |
|
||||
| id | f15cae7f-5e05-4b19-bd25-4bb71edcf3de |
|
||||
| port_id | a752bb24-9bf2-4d37-b9d6-07da69c86f19 |
|
||||
| project_id | d44c19e056674381b86430575184b167 |
|
||||
| router_id | 5a8ca19f-3703-4f81-bc29-db6bc2f528d6 |
|
||||
| status | ACTIVE |
|
||||
+---------------------+--------------------------------------+
|
||||
|
||||
#. As the `admin` user, verify the neutron routers are allocated IP
|
||||
addresses from their correct subnets. Use ``openstack port list``
|
||||
to find ports associated with the routers.
|
||||
|
||||
First, the router gateway external port:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ neutron port-show f148ffeb-3c26-4067-bc5f-5c3dfddae2f5
|
||||
+-----------------------+--------------------------------------------------------------------------+
|
||||
| Field | Value |
|
||||
+-----------------------+--------------------------------------------------------------------------+
|
||||
| admin_state_up | UP |
|
||||
| device_id | 5a8ca19f-3703-4f81-bc29-db6bc2f528d6 |
|
||||
| device_owner | network:router_gateway |
|
||||
| extra_dhcp_opts | |
|
||||
| fixed_ips | ip_address='203.0.113.11', |
|
||||
| | subnet_id='67c251d9-2b7a-4200-99f6-e13785b0334d' |
|
||||
| id | f148ffeb-3c26-4067-bc5f-5c3dfddae2f5 |
|
||||
| mac_address | fa:16:3e:2c:0f:69 |
|
||||
| network_id | 02d236d5-dad9-4082-bb6b-5245f9f84d13 |
|
||||
| project_id | |
|
||||
| status | ACTIVE |
|
||||
+-----------------------+--------------------------------------------------------------------------+
|
||||
|
||||
Second, the router floating IP agent gateway external port:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ neutron port-show a2d1e756-8ae1-4f96-9aa1-e7ea16a6a68a
|
||||
+-----------------------+--------------------------------------------------------------------------+
|
||||
| Field | Value |
|
||||
+-----------------------+--------------------------------------------------------------------------+
|
||||
| admin_state_up | UP |
|
||||
| device_id | 3d0c98eb-bca3-45cc-8aa4-90ae3deb0844 |
|
||||
| device_owner | network:floatingip_agent_gateway |
|
||||
| extra_dhcp_opts | |
|
||||
| fixed_ips | ip_address='198.51.100.10', |
|
||||
| | subnet_id='67c251d9-2b7a-4200-99f6-e13785b0334d' |
|
||||
| id | a2d1e756-8ae1-4f96-9aa1-e7ea16a6a68a |
|
||||
| mac_address | fa:16:3e:f4:5d:fa |
|
||||
| network_id | 02d236d5-dad9-4082-bb6b-5245f9f84d13 |
|
||||
| project_id | |
|
||||
| status | ACTIVE |
|
||||
+-----------------------+--------------------------------------------------------------------------+
|
46
doc/source/admin/config-services-agent.rst
Normal file
@ -0,0 +1,46 @@
|
||||
.. _config-services-agent:
|
||||
|
||||
===================
|
||||
Services and agents
|
||||
===================
|
||||
|
||||
A typical neutron setup consists of multiple services and agents running on one
or more nodes (though some setups may not need any agents).
|
||||
Each of these services provides part of the networking or API functionality.
|
||||
Among those of special interest are:
|
||||
|
||||
#. The neutron-server that provides API endpoints and serves as a single point
|
||||
of access to the database. It usually runs on the controller nodes.
|
||||
#. Layer2 agent that can utilize Open vSwitch, Linux Bridge or other
|
||||
vendor-specific technology to provide network segmentation and isolation
|
||||
for project networks.
|
||||
The L2 agent should run on every node where it is deemed
|
||||
responsible for wiring and securing virtual interfaces (usually both
|
||||
compute and network nodes).
|
||||
#. Layer3 agent that runs on the network node and provides east-west and
|
||||
north-south routing plus some advanced services such as FWaaS or VPNaaS.
|
||||
|
||||
Configuration options
|
||||
~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
The neutron configuration options are segregated between
|
||||
neutron-server and agents. Both services and agents may load the main
|
||||
``neutron.conf`` since this file should contain the oslo.messaging
|
||||
configuration for internal neutron RPCs and may contain host specific
|
||||
configuration, such as file paths. The ``neutron.conf`` contains the
|
||||
database, keystone, nova credentials, and endpoints strictly for
|
||||
neutron-server to use.
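
A minimal sketch of such a shared ``neutron.conf`` might look like the
following; the credentials and host names are placeholders:

.. code-block:: ini

   [DEFAULT]
   # Message queue used by neutron-server and all agents for internal RPC.
   transport_url = rabbit://openstack:RABBIT_PASS@controller

   [database]
   # Database credentials, read by neutron-server only.
   connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron
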
|
||||
|
||||
In addition, neutron-server may load a plugin-specific configuration file, yet
|
||||
the agents should not. As the plugin configuration consists primarily of
site-wide options and the plugin provides the persistence layer for neutron, agents
|
||||
should be instructed to act upon these values through RPC.
|
||||
|
||||
Each individual agent may have its own configuration file. This file should be
|
||||
loaded after the main ``neutron.conf`` file, so the agent configuration takes
|
||||
precedence. The agent-specific configuration may contain configurations which
|
||||
vary between hosts in a neutron deployment such as the
|
||||
``external_network_bridge`` for an L3 agent. If any agent requires access to
|
||||
additional external services beyond the neutron RPC, those endpoints should be
|
||||
defined in the agent-specific configuration file (for example, nova metadata
|
||||
for metadata agent).
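
A typical agent invocation therefore passes both files, for example (a sketch;
the paths are common defaults and may differ in your deployment):

.. code-block:: console

   # neutron-l3-agent --config-file /etc/neutron/neutron.conf \
     --config-file /etc/neutron/l3_agent.ini
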
|
331
doc/source/admin/config-sfc.rst
Normal file
@ -0,0 +1,331 @@
|
||||
.. _adv-config-sfc:
|
||||
|
||||
=========================
|
||||
Service function chaining
|
||||
=========================
|
||||
|
||||
Service function chain (SFC) essentially refers to the
|
||||
software-defined networking (SDN) version of
|
||||
policy-based routing (PBR). In many cases, SFC involves security,
|
||||
although it can include a variety of other features.
|
||||
|
||||
Fundamentally, SFC routes packets through one or more service functions
|
||||
instead of conventional routing that routes packets using destination IP
|
||||
address. Service functions essentially emulate a series of physical network
|
||||
devices with cables linking them together.
|
||||
|
||||
A basic example of SFC involves routing packets from one location to another
|
||||
through a firewall that lacks a "next hop" IP address from a conventional
|
||||
routing perspective. A more complex example involves an ordered series of
|
||||
service functions, each implemented using multiple instances (VMs). Packets
|
||||
must flow through one instance and a hashing algorithm distributes flows
|
||||
across multiple instances at each hop.
|
||||
|
||||
Architecture
|
||||
~~~~~~~~~~~~
|
||||
|
||||
All OpenStack Networking services and OpenStack Compute instances connect to
|
||||
a virtual network via ports, making it possible to create a traffic steering
|
||||
model for service chaining using only ports. Including these ports in a
|
||||
port chain enables steering of traffic through one or more instances
|
||||
providing service functions.
|
||||
|
||||
A port chain, or service function path, consists of the following:
|
||||
|
||||
* A set of ports that define the sequence of service functions.
|
||||
* A set of flow classifiers that specify the classified traffic flows
|
||||
entering the chain.
|
||||
|
||||
If a service function involves a pair of ports, the first port acts as the
|
||||
ingress port of the service function and the second port acts as the egress
|
||||
port. If both ports use the same value, they function as a single virtual
|
||||
bidirectional port.
|
||||
|
||||
A port chain is a unidirectional service chain. The first port acts as the
|
||||
head of the service function chain and the second port acts as the tail of the
|
||||
service function chain. A bidirectional service function chain consists of
|
||||
two unidirectional port chains.
|
||||
|
||||
A flow classifier can only belong to one port chain to prevent ambiguity as
|
||||
to which chain should handle packets in the flow. A check prevents such
|
||||
ambiguity. However, you can associate multiple flow classifiers with a port
|
||||
chain because multiple flows can request the same service function path.
|
||||
|
||||
Currently, SFC lacks support for multi-project service functions.
|
||||
|
||||
The port chain plug-in supports backing service providers including the OVS
|
||||
driver and a variety of SDN controller drivers. The common driver API enables
|
||||
different drivers to provide different implementations for the service chain
|
||||
path rendering.
|
||||
|
||||
.. image:: figures/port-chain-architecture-diagram.png
|
||||
:alt: Port chain architecture
|
||||
|
||||
.. image:: figures/port-chain-diagram.png
|
||||
:alt: Port chain model
|
||||
|
||||
See the `developer documentation
|
||||
<https://docs.openstack.org/developer/networking-sfc/>`_ for more information.
|
||||
|
||||
Resources
|
||||
~~~~~~~~~
|
||||
|
||||
Port chain
|
||||
----------
|
||||
|
||||
* ``id`` - Port chain ID
|
||||
* ``tenant_id`` - Project ID
|
||||
* ``name`` - Readable name
|
||||
* ``description`` - Readable description
|
||||
* ``port_pair_groups`` - List of port pair group IDs
|
||||
* ``flow_classifiers`` - List of flow classifier IDs
|
||||
* ``chain_parameters`` - Dictionary of chain parameters
|
||||
|
||||
A port chain consists of a sequence of port pair groups. Each port pair group
|
||||
is a hop in the port chain. A group of port pairs represents service functions
|
||||
providing equivalent functionality. For example, a group of firewall service
|
||||
functions.
|
||||
|
||||
A flow classifier identifies a flow. A port chain can contain multiple flow
|
||||
classifiers. Omitting the flow classifier effectively prevents steering of
|
||||
traffic through the port chain.
|
||||
|
||||
The ``chain_parameters`` attribute contains one or more parameters for the
|
||||
port chain. Currently, it only supports a correlation parameter that
|
||||
defaults to ``mpls`` for consistency with Open vSwitch (OVS)
|
||||
capabilities. Future values for the correlation parameter may include
|
||||
the network service header (NSH).
|
||||
|
||||
Port pair group
|
||||
---------------
|
||||
|
||||
* ``id`` - Port pair group ID
|
||||
* ``tenant_id`` - Project ID
|
||||
* ``name`` - Readable name
|
||||
* ``description`` - Readable description
|
||||
* ``port_pairs`` - List of service function port pairs
|
||||
|
||||
A port pair group may contain one or more port pairs. Multiple port
|
||||
pairs enable load balancing/distribution over a set of functionally
|
||||
equivalent service functions.
|
||||
|
||||
Port pair
|
||||
---------
|
||||
|
||||
* ``id`` - Port pair ID
|
||||
* ``tenant_id`` - Project ID
|
||||
* ``name`` - Readable name
|
||||
* ``description`` - Readable description
|
||||
* ``ingress`` - Ingress port
|
||||
* ``egress`` - Egress port
|
||||
* ``service_function_parameters`` - Dictionary of service function parameters
|
||||
|
||||
A port pair represents a service function instance that includes an ingress and
|
||||
egress port. A service function containing a bidirectional port uses the same
|
||||
ingress and egress port.
|
||||
|
||||
The ``service_function_parameters`` attribute includes one or more parameters
|
||||
for the service function. Currently, it only supports a correlation parameter
|
||||
that determines association of a packet with a chain. This parameter defaults
|
||||
to ``none`` for legacy service functions that lack support for correlation,
such as NSH. If set to ``none``, the data plane implementation must provide
|
||||
service function proxy functionality.
|
||||
|
||||
Flow classifier
|
||||
---------------
|
||||
|
||||
* ``id`` - Flow classifier ID
|
||||
* ``tenant_id`` - Project ID
|
||||
* ``name`` - Readable name
|
||||
* ``description`` - Readable description
|
||||
* ``ethertype`` - Ethertype (IPv4/IPv6)
|
||||
* ``protocol`` - IP protocol
|
||||
* ``source_port_range_min`` - Minimum source protocol port
|
||||
* ``source_port_range_max`` - Maximum source protocol port
|
||||
* ``destination_port_range_min`` - Minimum destination protocol port
|
||||
* ``destination_port_range_max`` - Maximum destination protocol port
|
||||
* ``source_ip_prefix`` - Source IP address or prefix
|
||||
* ``destination_ip_prefix`` - Destination IP address or prefix
|
||||
* ``logical_source_port`` - Source port
|
||||
* ``logical_destination_port`` - Destination port
|
||||
* ``l7_parameters`` - Dictionary of L7 parameters
|
||||
|
||||
A combination of the source attributes defines the source of the flow. A
|
||||
combination of the destination attributes defines the destination of the flow.
|
||||
The ``l7_parameters`` attribute is a placeholder that may be used to support
|
||||
flow classification using layer 7 fields, such as a URL. If unspecified, the
|
||||
``logical_source_port`` and ``logical_destination_port`` attributes default to
|
||||
``none``, the ``ethertype`` attribute defaults to ``IPv4``, and all other
|
||||
attributes default to a wildcard value.
|
||||
|
||||
Operations
|
||||
~~~~~~~~~~
|
||||
|
||||
Create a port chain
|
||||
-------------------
|
||||
|
||||
The following example uses the ``neutron`` command-line interface (CLI) to
|
||||
create a port chain consisting of three service function instances to handle
|
||||
HTTP (TCP) traffic flows from 192.0.2.11:1000 to 198.51.100.11:80.
|
||||
|
||||
* Instance 1
|
||||
|
||||
* Name: vm1
|
||||
* Function: Firewall
|
||||
* Port pair: [p1, p2]
|
||||
|
||||
* Instance 2
|
||||
|
||||
* Name: vm2
|
||||
* Function: Firewall
|
||||
* Port pair: [p3, p4]
|
||||
|
||||
* Instance 3
|
||||
|
||||
* Name: vm3
|
||||
* Function: Intrusion detection system (IDS)
|
||||
* Port pair: [p5, p6]
|
||||
|
||||
.. note::
|
||||
|
||||
The example network ``net1`` must exist before creating ports on it.
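If it does not exist yet, a minimal sketch for creating it follows; the
subnet name and address range are illustrative assumptions.

.. code-block:: console

   $ openstack network create net1
   $ openstack subnet create --network net1 \
     --subnet-range 203.0.113.0/24 net1-subnet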
|
||||
|
||||
#. Source the credentials of the project that owns the ``net1`` network.
|
||||
|
||||
#. Create ports on network ``net1`` and record the UUID values.
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack port create p1 --network net1
|
||||
$ openstack port create p2 --network net1
|
||||
$ openstack port create p3 --network net1
|
||||
$ openstack port create p4 --network net1
|
||||
$ openstack port create p5 --network net1
|
||||
$ openstack port create p6 --network net1
|
||||
|
||||
#. Launch service function instance ``vm1`` using ports ``p1`` and ``p2``,
|
||||
``vm2`` using ports ``p3`` and ``p4``, and ``vm3`` using ports ``p5``
|
||||
and ``p6``.
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack server create --nic port-id=P1_ID --nic port-id=P2_ID vm1
|
||||
$ openstack server create --nic port-id=P3_ID --nic port-id=P4_ID vm2
|
||||
$ openstack server create --nic port-id=P5_ID --nic port-id=P6_ID vm3
|
||||
|
||||
Replace ``P1_ID``, ``P2_ID``, ``P3_ID``, ``P4_ID``, ``P5_ID``, and
|
||||
``P6_ID`` with the UUIDs of the respective ports.
|
||||
|
||||
.. note::
|
||||
|
||||
This command requires additional options to successfully launch an
|
||||
instance. See the
|
||||
`CLI reference <https://docs.openstack.org/cli-reference/openstack.html>`_
|
||||
for more information.
|
||||
|
||||
Alternatively, you can launch each instance with one network interface and
|
||||
attach additional ports later.
|
||||
|
||||
#. Create flow classifier ``FC1`` that matches the appropriate packet headers.
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ neutron flow-classifier-create \
|
||||
--description "HTTP traffic from 192.0.2.11 to 198.51.100.11" \
|
||||
--ethertype IPv4 \
|
||||
--source-ip-prefix 192.0.2.11/32 \
|
||||
--destination-ip-prefix 198.51.100.11/32 \
|
||||
--protocol tcp \
|
||||
--source-port 1000:1000 \
|
||||
--destination-port 80:80 FC1
|
||||
|
||||
#. Create port pair ``PP1`` with ports ``p1`` and ``p2``, ``PP2`` with ports
|
||||
``p3`` and ``p4``, and ``PP3`` with ports ``p5`` and ``p6``.
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ neutron port-pair-create \
|
||||
--description "Firewall SF instance 1" \
|
||||
--ingress p1 \
|
||||
--egress p2 PP1
|
||||
|
||||
$ neutron port-pair-create \
|
||||
--description "Firewall SF instance 2" \
|
||||
--ingress p3 \
|
||||
--egress p4 PP2
|
||||
|
||||
$ neutron port-pair-create \
|
||||
--description "IDS SF instance" \
|
||||
--ingress p5 \
|
||||
--egress p6 PP3
|
||||
|
||||
#. Create port pair group ``PPG1`` with port pair ``PP1`` and ``PP2`` and
|
||||
``PPG2`` with port pair ``PP3``.
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ neutron port-pair-group-create \
|
||||
--port-pair PP1 --port-pair PP2 PPG1
|
||||
$ neutron port-pair-group-create \
|
||||
--port-pair PP3 PPG2
|
||||
|
||||
.. note::
|
||||
|
||||
You can repeat the ``--port-pair`` option for multiple port pairs of
|
||||
functionally equivalent service functions.
|
||||
|
||||
#. Create port chain ``PC1`` with port pair groups ``PPG1`` and ``PPG2`` and
|
||||
flow classifier ``FC1``.
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ neutron port-chain-create \
|
||||
--port-pair-group PPG1 --port-pair-group PPG2 \
|
||||
--flow-classifier FC1 PC1
|
||||
|
||||
.. note::
|
||||
|
||||
You can repeat the ``--port-pair-group`` option to specify additional
|
||||
port pair groups in the port chain. A port chain must contain at least
|
||||
one port pair group.
|
||||
|
||||
You can repeat the ``--flow-classifier`` option to specify multiple
|
||||
flow classifiers for a port chain. Each flow classifier identifies
|
||||
a flow.
|
||||
|
||||
Update a port chain or port pair group
|
||||
--------------------------------------
|
||||
|
||||
* Use the :command:`neutron port-chain-update` command to dynamically add or
|
||||
remove port pair groups or flow classifiers on a port chain.
|
||||
|
||||
* For example, add port pair group ``PPG3`` to port chain ``PC1``:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ neutron port-chain-update \
|
||||
--port-pair-group PPG1 --port-pair-group PPG2 --port-pair-group PPG3 \
|
||||
--flow-classifier FC1 PC1
|
||||
|
||||
* For example, add flow classifier ``FC2`` to port chain ``PC1``:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ neutron port-chain-update \
|
||||
--port-pair-group PPG1 --port-pair-group PPG2 \
|
||||
--flow-classifier FC1 --flow-classifier FC2 PC1
|
||||
|
||||
SFC steers traffic matching the additional flow classifier to the
|
||||
port pair groups in the port chain.
|
||||
|
||||
* Use the :command:`neutron port-pair-group-update` command to perform dynamic
|
||||
scale-out or scale-in operations by adding or removing port pairs on a port
|
||||
pair group.
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ neutron port-pair-group-update \
|
||||
--port-pair PP1 --port-pair PP2 --port-pair PP4 PPG1
|
||||
|
||||
SFC performs load balancing/distribution over the additional service
functions in the port pair group. The same update commands can also remove
items by reissuing them with only the resources that should remain; see the
sketch after this list.
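The update options replace the existing associations, as the add examples
above suggest. As a sketch inferred from those examples, scaling ``PPG1``
back in and removing flow classifier ``FC2`` from ``PC1`` means reissuing the
commands with only the resources that should remain:

.. code-block:: console

   $ neutron port-pair-group-update \
     --port-pair PP1 --port-pair PP2 PPG1
   $ neutron port-chain-update \
     --port-pair-group PPG1 --port-pair-group PPG2 \
     --flow-classifier FC1 PC1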
|
473
doc/source/admin/config-sriov.rst
Normal file
@ -0,0 +1,473 @@
|
||||
.. _config-sriov:
|
||||
|
||||
======
|
||||
SR-IOV
|
||||
======
|
||||
|
||||
This page describes how to enable the SR-IOV functionality available in
OpenStack (using OpenStack Networking), which was first introduced in the
OpenStack Juno release. It serves as a guide for configuring OpenStack
Networking and OpenStack Compute to create SR-IOV ports.
|
||||
|
||||
The basics
|
||||
~~~~~~~~~~
|
||||
|
||||
PCI-SIG Single Root I/O Virtualization and Sharing (SR-IOV) functionality is
|
||||
available in OpenStack since the Juno release. The SR-IOV specification
|
||||
defines a standardized mechanism to virtualize PCIe devices. This mechanism
|
||||
can virtualize a single PCIe Ethernet controller to appear as multiple PCIe
|
||||
devices. Each device can be directly assigned to an instance, bypassing the
|
||||
hypervisor and virtual switch layer. As a result, users are able to achieve
low latency and near line-rate throughput.
|
||||
|
||||
The following terms are used throughout this document:
|
||||
|
||||
.. list-table::
|
||||
:header-rows: 1
|
||||
:widths: 10 90
|
||||
|
||||
* - Term
|
||||
- Definition
|
||||
* - PF
|
||||
- Physical Function. The physical Ethernet controller that supports
|
||||
SR-IOV.
|
||||
* - VF
|
||||
- Virtual Function. The virtual PCIe device created from a physical
|
||||
Ethernet controller.
|
||||
|
||||
SR-IOV agent
|
||||
------------
|
||||
|
||||
The SR-IOV agent allows you to set the admin state of ports, configure port
|
||||
security (enable and disable spoof checking), and configure QoS rate limiting
|
||||
and minimum bandwidth. You must include the SR-IOV agent on each compute node
|
||||
using SR-IOV ports.
|
||||
|
||||
.. note::
|
||||
|
||||
The SR-IOV agent was optional before Mitaka, and was not enabled by default
|
||||
before Liberty.
|
||||
|
||||
.. note::
|
||||
|
||||
The ability to control port security and QoS rate limit settings was added
|
||||
in Liberty.
|
||||
|
||||
Supported Ethernet controllers
|
||||
------------------------------
|
||||
|
||||
The following manufacturers are known to work:
|
||||
|
||||
- Intel
|
||||
- Mellanox
|
||||
- QLogic
|
||||
|
||||
For information on **Mellanox SR-IOV Ethernet ConnectX-3/ConnectX-3 Pro cards**, see
|
||||
`Mellanox: How To Configure SR-IOV VFs
|
||||
<https://community.mellanox.com/docs/DOC-1484>`_.
|
||||
|
||||
For information on **QLogic SR-IOV Ethernet cards**, see
|
||||
`User's Guide OpenStack Deployment with SR-IOV Configuration
|
||||
<http://www.qlogic.com/solutions/Documents/UsersGuide_OpenStack_SR-IOV.pdf>`_.
|
||||
|
||||
Using SR-IOV interfaces
|
||||
~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
In order to enable SR-IOV, the following steps are required:
|
||||
|
||||
#. Create Virtual Functions (Compute)
|
||||
#. Whitelist PCI devices in nova-compute (Compute)
|
||||
#. Configure neutron-server (Controller)
|
||||
#. Configure nova-scheduler (Controller)
|
||||
#. Enable neutron sriov-agent (Compute)
|
||||
|
||||
We recommend using VLAN provider networks for segregation. This way you can
|
||||
combine instances without SR-IOV ports and instances with SR-IOV ports on a
|
||||
single network.
|
||||
|
||||
.. note::
|
||||
|
||||
Throughout this guide, ``eth3`` is used as the PF and ``physnet2`` is used
|
||||
as the provider network configured as a VLAN range. These values may vary in
different environments.
|
||||
|
||||
Create Virtual Functions (Compute)
|
||||
----------------------------------
|
||||
|
||||
Create the VFs for the network interface that will be used for SR-IOV. We use
|
||||
``eth3`` as PF, which is also used as the interface for the VLAN provider
|
||||
network and has access to the private networks of all machines.
|
||||
|
||||
.. note::
|
||||
|
||||
The steps detail how to create VFs using Mellanox ConnectX-4 (and newer) and
Intel SR-IOV Ethernet cards on an Intel system. Steps may differ for different
|
||||
hardware configurations.
|
||||
|
||||
#. Ensure SR-IOV and VT-d are enabled in BIOS.
|
||||
|
||||
#. Enable IOMMU in Linux by adding ``intel_iommu=on`` to the kernel parameters,
|
||||
for example, using GRUB.
|
||||
|
||||
#. On each compute node, create the VFs via the PCI SYS interface:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
# echo '8' > /sys/class/net/eth3/device/sriov_numvfs
|
||||
|
||||
.. note::
|
||||
|
||||
On some PCI devices, changing the number of VFs may fail with the error
``Device or resource busy``. In this case, you must first set
``sriov_numvfs`` to ``0``, then set it to the new value.
|
||||
|
||||
.. note::
|
||||
|
||||
A network interface can be used both for PCI passthrough, using the PF, and
for SR-IOV, using the VFs. If the PF is used, the VF count stored in the
``sriov_numvfs`` file is lost. If the PF is attached again to the operating
system, the number of VFs assigned to this interface will be zero. To keep
the number of VFs assigned to this interface, update the interface
configuration file to add an ``ifup`` script command.
|
||||
|
||||
In Ubuntu, modifying the ``/etc/network/interfaces`` file:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
auto eth3
|
||||
iface eth3 inet dhcp
|
||||
pre-up echo '4' > /sys/class/net/eth3/device/sriov_numvfs
|
||||
|
||||
|
||||
In Red Hat, modifying the ``/sbin/ifup-local`` file:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
#!/bin/sh
|
||||
if [[ "$1" == "eth3" ]]
|
||||
then
|
||||
echo '4' > /sys/class/net/eth3/device/sriov_numvfs
|
||||
fi
|
||||
|
||||
|
||||
.. warning::
|
||||
|
||||
Alternatively, you can create VFs by passing the ``max_vfs`` parameter to
the kernel module of your network interface. However, the ``max_vfs``
|
||||
parameter has been deprecated, so the PCI SYS interface is the preferred
|
||||
method.
|
||||
|
||||
You can determine the maximum number of VFs a PF can support:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
# cat /sys/class/net/eth3/device/sriov_totalvfs
|
||||
63
|
||||
|
||||
#. Verify that the VFs have been created and are in ``up`` state:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
# lspci | grep Ethernet
|
||||
82:00.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
|
||||
82:00.1 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
|
||||
82:10.0 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)
|
||||
82:10.2 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)
|
||||
82:10.4 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)
|
||||
82:10.6 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)
|
||||
82:11.0 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)
|
||||
82:11.2 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)
|
||||
82:11.4 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)
|
||||
82:11.6 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
# ip link show eth3
|
||||
8: eth3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT qlen 1000
|
||||
link/ether a0:36:9f:8f:3f:b8 brd ff:ff:ff:ff:ff:ff
|
||||
vf 0 MAC 00:00:00:00:00:00, spoof checking on, link-state auto
|
||||
vf 1 MAC 00:00:00:00:00:00, spoof checking on, link-state auto
|
||||
vf 2 MAC 00:00:00:00:00:00, spoof checking on, link-state auto
|
||||
vf 3 MAC 00:00:00:00:00:00, spoof checking on, link-state auto
|
||||
vf 4 MAC 00:00:00:00:00:00, spoof checking on, link-state auto
|
||||
vf 5 MAC 00:00:00:00:00:00, spoof checking on, link-state auto
|
||||
vf 6 MAC 00:00:00:00:00:00, spoof checking on, link-state auto
|
||||
vf 7 MAC 00:00:00:00:00:00, spoof checking on, link-state auto
|
||||
|
||||
If the interfaces are down, set them to ``up`` before launching a guest;
otherwise, the instance will fail to spawn:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
# ip link set eth3 up
|
||||
|
||||
#. Persist created VFs on reboot:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
# echo "echo '7' > /sys/class/net/eth3/device/sriov_numvfs" >> /etc/rc.local
|
||||
|
||||
.. note::
|
||||
|
||||
The suggested way of making PCI SYS settings persistent is through
|
||||
the ``sysfsutils`` tool. However, this is not available by default on
|
||||
many major distributions.
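With ``sysfsutils`` installed, a persistent setting might look like the
following sketch in ``/etc/sysfs.conf``, where the attribute path is relative
to ``/sys``:

.. code-block:: none

   class/net/eth3/device/sriov_numvfs = 7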
|
||||
|
||||
Whitelist PCI devices in nova-compute (Compute)
-----------------------------------------------
|
||||
|
||||
#. Configure which PCI devices the ``nova-compute`` service may use. Edit
|
||||
the ``nova.conf`` file:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[DEFAULT]
|
||||
pci_passthrough_whitelist = { "devname": "eth3", "physical_network": "physnet2"}
|
||||
|
||||
This tells the Compute service that all VFs belonging to ``eth3`` are
|
||||
allowed to be passed through to instances and belong to the provider network
|
||||
``physnet2``.
|
||||
|
||||
Alternatively, the ``pci_passthrough_whitelist`` parameter also supports
|
||||
whitelisting by:
|
||||
|
||||
- PCI address: The address uses the same syntax as in ``lspci``, and an
asterisk (``*``) can be used to match anything.
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
pci_passthrough_whitelist = { "address": "[[[[<domain>]:]<bus>]:][<slot>][.[<function>]]", "physical_network": "physnet2" }
|
||||
|
||||
For example, to match any domain, bus 0a, slot 00, and all functions:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
pci_passthrough_whitelist = { "address": "*:0a:00.*", "physical_network": "physnet2" }
|
||||
|
||||
- PCI ``vendor_id`` and ``product_id`` as displayed by the Linux utility
|
||||
``lspci``.
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
pci_passthrough_whitelist = { "vendor_id": "<id>", "product_id": "<id>", "physical_network": "physnet2" }
|
||||
|
||||
If the device defined by the PCI address or ``devname`` corresponds to an
SR-IOV PF, all VFs under the PF will match the entry. Multiple
``pci_passthrough_whitelist`` entries per host are supported; for example, a
``devname`` entry and an address-based entry can be combined, as shown in the
sketch after this list.
|
||||
|
||||
#. Restart the ``nova-compute`` service for the changes to take effect.
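For example, a single host can combine a ``devname`` entry with an
address-based entry, reusing the values shown above (a sketch):

.. code-block:: ini

   [DEFAULT]
   pci_passthrough_whitelist = { "devname": "eth3", "physical_network": "physnet2" }
   pci_passthrough_whitelist = { "address": "*:0a:00.*", "physical_network": "physnet2" }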
|
||||
|
||||
.. _configure_sriov_neutron_server:
|
||||
|
||||
Configure neutron-server (Controller)
|
||||
-------------------------------------
|
||||
|
||||
#. Add ``sriovnicswitch`` as mechanism driver. Edit the ``ml2_conf.ini`` file
|
||||
on each controller:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
mechanism_drivers = openvswitch,sriovnicswitch
|
||||
|
||||
#. Add the ``ml2_conf_sriov.ini`` file as parameter to the ``neutron-server``
|
||||
service. Edit the appropriate initialization script to configure the
|
||||
``neutron-server`` service to load the SR-IOV configuration file:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
--config-file /etc/neutron/neutron.conf
|
||||
--config-file /etc/neutron/plugin.ini
|
||||
--config-file /etc/neutron/plugins/ml2/ml2_conf_sriov.ini
|
||||
|
||||
#. Restart the ``neutron-server`` service.
|
||||
|
||||
Configure nova-scheduler (Controller)
|
||||
-------------------------------------
|
||||
|
||||
#. On every controller node running the ``nova-scheduler`` service, add
``PciPassthroughFilter`` to the ``scheduler_default_filters`` option so that
it is enabled by default. Also ensure that the ``scheduler_available_filters``
parameter in the ``[DEFAULT]`` section of ``nova.conf`` is set to
``nova.scheduler.filters.all_filters`` to enable all filters provided by
the Compute service.
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[DEFAULT]
|
||||
scheduler_default_filters = RetryFilter, AvailabilityZoneFilter, RamFilter, ComputeFilter, ComputeCapabilitiesFilter, ImagePropertiesFilter, ServerGroupAntiAffinityFilter, ServerGroupAffinityFilter, PciPassthroughFilter
|
||||
scheduler_available_filters = nova.scheduler.filters.all_filters
|
||||
|
||||
#. Restart the ``nova-scheduler`` service.
|
||||
|
||||
Enable neutron sriov-agent (Compute)
|
||||
-------------------------------------
|
||||
|
||||
#. Install the SR-IOV agent.
|
||||
|
||||
#. Edit the ``sriov_agent.ini`` file on each compute node. For example:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[securitygroup]
|
||||
firewall_driver = neutron.agent.firewall.NoopFirewallDriver
|
||||
|
||||
[sriov_nic]
|
||||
physical_device_mappings = physnet2:eth3
|
||||
exclude_devices =
|
||||
|
||||
.. note::
|
||||
|
||||
The ``physical_device_mappings`` parameter is not limited to a one-to-one
mapping between physical networks and NICs. This enables you to map the
|
||||
same physical network to more than one NIC. For example, if ``physnet2``
|
||||
is connected to ``eth3`` and ``eth4``, then
|
||||
``physnet2:eth3,physnet2:eth4`` is a valid option.
|
||||
|
||||
The ``exclude_devices`` parameter is empty; therefore, all the VFs
associated with ``eth3`` may be configured by the agent. To exclude specific
|
||||
VFs, add them to the ``exclude_devices`` parameter as follows:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
exclude_devices = eth1:0000:07:00.2;0000:07:00.3,eth2:0000:05:00.1;0000:05:00.2
|
||||
|
||||
#. Ensure the neutron sriov-agent runs successfully:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
# neutron-sriov-nic-agent \
|
||||
--config-file /etc/neutron/neutron.conf \
|
||||
--config-file /etc/neutron/plugins/ml2/sriov_agent.ini
|
||||
|
||||
#. Enable the neutron sriov-agent service.
|
||||
|
||||
If installing from source, you must configure a daemon file for the init
system manually; a minimal sketch follows.
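A minimal systemd unit sketch for a source installation; the binary path and
the service user are assumptions and should be adjusted to your installation:

.. code-block:: ini

   [Unit]
   Description=OpenStack Neutron SR-IOV NIC agent
   After=network.target

   [Service]
   # Assumed path for a source install; adjust as needed.
   ExecStart=/usr/local/bin/neutron-sriov-nic-agent \
     --config-file /etc/neutron/neutron.conf \
     --config-file /etc/neutron/plugins/ml2/sriov_agent.ini
   User=neutron

   [Install]
   WantedBy=multi-user.target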
|
||||
|
||||
(Optional) FDB L2 agent extension
|
||||
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
|
||||
Forwarding DataBase (FDB) population is an L2 agent extension to the OVS or
Linux bridge agent. Its objective is to update the FDB table for existing
instances that use normal ports. This enables communication between SR-IOV
instances and normal instances. The use cases of the FDB population extension
are:
|
||||
|
||||
* Direct port and normal port instances reside on the same compute node.
|
||||
|
||||
* A direct port instance that uses a floating IP address and the network
node are located on the same host.
|
||||
|
||||
For additional information describing the problem, refer to:
|
||||
`Virtual switching technologies and Linux bridge.
|
||||
<http://events.linuxfoundation.org/sites/events/files/slides/LinuxConJapan2014_makita_0.pdf>`_
|
||||
|
||||
#. Edit the ``ovs_agent.ini`` or ``linuxbridge_agent.ini`` file on each compute
|
||||
node. For example:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[agent]
|
||||
extensions = fdb
|
||||
|
||||
#. Add the FDB section and the ``shared_physical_device_mappings`` parameter.
|
||||
This parameter maps each physical port to its physical network name. Each
|
||||
physical network can be mapped to several ports:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[FDB]
|
||||
shared_physical_device_mappings = physnet1:p1p1, physnet1:p1p2
|
||||
|
||||
Launching instances with SR-IOV ports
|
||||
-------------------------------------
|
||||
|
||||
Once configuration is complete, you can launch instances with SR-IOV ports.
|
||||
|
||||
#. Get the ``id`` of the network where you want the SR-IOV port to be created:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ net_id=`neutron net-show net04 | grep "\ id\ " | awk '{ print $4 }'`
|
||||
|
||||
#. Create the SR-IOV port. ``vnic_type=direct`` is used here, but other options
|
||||
include ``normal``, ``direct-physical``, and ``macvtap``:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ port_id=`neutron port-create $net_id --name sriov_port --binding:vnic_type direct | grep "\ id\ " | awk '{ print $4 }'`
|
||||
|
||||
#. Create the instance. Specify the SR-IOV port created in step two for the
|
||||
NIC:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack server create --flavor m1.large --image ubuntu_14.04 --nic port-id=$port_id test-sriov
|
||||
|
||||
.. note::
|
||||
|
||||
There are two ways to attach VFs to an instance. You can create an SR-IOV
|
||||
port or use the ``pci_alias`` in the Compute service. For more
|
||||
information about using ``pci_alias``, refer to `nova-api configuration
|
||||
<https://docs.openstack.org/admin-guide/compute-pci-passthrough.html#configure-nova-api-controller>`__.
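As a rough sketch of the ``pci_alias`` approach, the alias is defined in
``nova.conf`` and requested through a flavor property. The option name and
the vendor/product IDs below are illustrative (they match the 82599 VFs shown
earlier) and may vary by release; see the guide linked above for
authoritative settings.

.. code-block:: ini

   [DEFAULT]
   pci_alias = { "vendor_id": "8086", "product_id": "10ed", "name": "a1" }

.. code-block:: console

   $ openstack flavor set m1.large --property "pci_passthrough:alias"="a1:1"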
|
||||
|
||||
SR-IOV with InfiniBand
|
||||
~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
Support for SR-IOV with InfiniBand allows a virtual PCI device (VF) to be
directly mapped to the guest, enabling higher performance and advanced
features such as RDMA (remote direct memory access). To use this feature,
you must:
|
||||
|
||||
#. Use InfiniBand enabled network adapters.
|
||||
|
||||
#. Run InfiniBand subnet managers to enable InfiniBand fabric.
|
||||
|
||||
All InfiniBand networks must have a subnet manager running for the network
|
||||
to function. This is true even for a simple network of two machines with no
switch, where the cards are plugged in back-to-back; a subnet manager is
required for the link on the cards to come up.
|
||||
It is possible to have more than one subnet manager. In this case, one
|
||||
of them will act as the master, and any other will act as a slave that
|
||||
will take over when the master subnet manager fails.
|
||||
|
||||
#. Install the ``ebrctl`` utility on the compute nodes.
|
||||
|
||||
Check that ``ebrctl`` is listed somewhere in ``/etc/nova/rootwrap.d/*``:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ grep 'ebrctl' /etc/nova/rootwrap.d/*
|
||||
|
||||
If ``ebrctl`` does not appear in any of the rootwrap files, add the following
to the ``/etc/nova/rootwrap.d/compute.filters`` file in the ``[Filters]``
section:
|
||||
|
||||
.. code-block:: none
|
||||
|
||||
[Filters]
|
||||
ebrctl: CommandFilter, ebrctl, root
|
||||
|
||||
Known limitations
|
||||
~~~~~~~~~~~~~~~~~
|
||||
|
||||
* When using Quality of Service (QoS), ``max_burst_kbps`` (burst over
|
||||
``max_kbps``) is not supported. In addition, ``max_kbps`` is rounded to
|
||||
Mbps.
|
||||
* Security groups are not supported when using SR-IOV; thus, the firewall
driver must be disabled. This can be done in the ``neutron.conf`` file.
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[securitygroup]
|
||||
firewall_driver = neutron.agent.firewall.NoopFirewallDriver
|
||||
|
||||
* SR-IOV is not integrated into the OpenStack Dashboard (horizon). Users must
|
||||
use the CLI or API to configure SR-IOV interfaces.
|
||||
* Live migration is not supported for instances with SR-IOV ports.
|
||||
|
||||
.. note::
|
||||
|
||||
SR-IOV features may require a specific NIC driver version, depending on the vendor.
|
||||
Intel NICs, for example, require ixgbe version 4.4.6 or greater, and ixgbevf version
|
||||
3.2.2 or greater.
|
261
doc/source/admin/config-subnet-pools.rst
Normal file
@ -0,0 +1,261 @@
|
||||
.. _config-subnet-pools:
|
||||
|
||||
============
|
||||
Subnet pools
|
||||
============
|
||||
|
||||
Subnet pools have been available since the Kilo release. They are a simple
feature with the potential to improve your workflow considerably, and they
provide a building block from which other new features will be built into
OpenStack Networking.
|
||||
|
||||
To see if your cloud has this feature available, you can check that it is
|
||||
listed in the supported aliases. You can do this with the OpenStack client.
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack extension list | grep subnet_allocation
|
||||
| Subnet Allocation | subnet_allocation | Enables allocation of subnets
|
||||
from a subnet pool |
|
||||
|
||||
Why you need them
|
||||
~~~~~~~~~~~~~~~~~
|
||||
|
||||
Before Kilo, Networking had no automation around the addresses used to create a
|
||||
subnet. To create one, you had to come up with the addresses on your own
|
||||
without any help from the system. There are valid use cases for this, but if
you are interested in the following capabilities, then subnet pools might be
for you.
|
||||
|
||||
First, would it not be nice if you could turn your pool of addresses over to
|
||||
Neutron to take care of? When you need to create a subnet, you just ask for
|
||||
addresses to be allocated from the pool. You do not have to worry about what
|
||||
you have already used and what addresses are in your pool. Subnet pools can do
|
||||
this.
|
||||
|
||||
Second, subnet pools can manage addresses across projects. The addresses are
|
||||
guaranteed not to overlap. If the addresses come from an externally routable
|
||||
pool then you know that all of the projects have addresses which are *routable*
|
||||
and unique. This can be useful in the following scenarios.
|
||||
|
||||
#. IPv6, since OpenStack Networking has no IPv6 floating IPs.
|
||||
#. Routing directly to a project network from an external network.
|
||||
|
||||
How they work
|
||||
~~~~~~~~~~~~~
|
||||
|
||||
A subnet pool manages a pool of addresses from which subnets can be allocated.
|
||||
It ensures that there is no overlap between any two subnets allocated from the
|
||||
same pool.
|
||||
|
||||
As a regular project in an OpenStack cloud, you can create a subnet pool of
|
||||
your own and use it to manage your own pool of addresses. This does not require
|
||||
any admin privileges. Your pool will not be visible to any other project.
|
||||
|
||||
If you are an admin, you can create a pool which can be accessed by any regular
|
||||
project. Because such a pool is a shared resource, there is a quota mechanism
to arbitrate access.
|
||||
|
||||
Quotas
|
||||
~~~~~~
|
||||
|
||||
Subnet pools have a quota system which is a little different from other
quotas in Neutron. Other quotas in Neutron count discrete instances of an
object against a quota. Each time you create something like a router, a
network, or a port, it uses one from your total quota.
|
||||
|
||||
With subnets, the resource is the IP address space. Some subnets take
|
||||
more of it than others. For example, 203.0.113.0/24 uses 256 addresses
|
||||
in one subnet but 198.51.100.224/28 uses only 16. If address space is
|
||||
limited, the quota system can encourage efficient use of the space.
|
||||
|
||||
With IPv4, the ``default_quota`` can be set to the absolute number of
addresses any given project is allowed to consume from the pool. For
|
||||
example, with a quota of 128, I might get 203.0.113.128/26,
|
||||
203.0.113.224/28, and still have room to allocate 48 more addresses in
|
||||
the future.
|
||||
|
||||
With IPv6 it is a little different because it is not practical to count
individual addresses. To avoid ridiculously large numbers, the quota is
expressed as the number of /64 subnets which can be allocated. For
example, with a ``default_quota`` of 3, I might get 2001:db8:c18e:c05a::/64,
|
||||
2001:db8:221c:8ef3::/64, and still have room to allocate one more prefix
|
||||
in the future.
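For example, an administrator could cap each project at 128 IPv4 addresses
from a shared pool. This is a sketch assuming the ``--default-quota`` option
of the ``openstack subnet pool`` commands; replace ``SUBNETPOOL`` with the
name or ID of the pool.

.. code-block:: console

   $ openstack subnet pool set --default-quota 128 SUBNETPOOL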
|
||||
|
||||
Default subnet pools
|
||||
~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
Beginning with Mitaka, a subnet pool can be marked as the default. This
|
||||
is handled with a new extension.
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack extension list | grep default-subnetpools
|
||||
| Default Subnetpools | default-subnetpools | Provides ability to mark
|
||||
and use a subnetpool as the default |
|
||||
|
||||
|
||||
An administrator can mark a pool as default. Only one pool from each
|
||||
address family can be marked default.
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack subnet pool set --default 74348864-f8bf-4fc0-ab03-81229d189467
|
||||
|
||||
If there is a default, it can be requested by passing
|
||||
``--use-default-subnetpool`` instead of
|
||||
``--subnet-pool SUBNETPOOL``.
|
||||
|
||||
Demo
|
||||
----
|
||||
|
||||
If you have access to a neutron based on OpenStack Kilo or later, you can
play with this feature now. Give it a try. All of the following commands work
equally well with IPv6 addresses.
|
||||
|
||||
First, as admin, create a shared subnet pool:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack subnet pool create --share --pool-prefix 203.0.113.0/24 \
|
||||
--default-prefix-length 26 demo-subnetpool4
|
||||
+-------------------+--------------------------------+
|
||||
| Field | Value |
|
||||
+-------------------+--------------------------------+
|
||||
| address_scope_id | None |
|
||||
| created_at | 2016-12-14T07:21:26Z |
|
||||
| default_prefixlen | 26 |
|
||||
| default_quota | None |
|
||||
| description | |
|
||||
| headers | |
|
||||
| id | d3aefb76-2527-43d4-bc21-0ec253 |
|
||||
| | 908545 |
|
||||
| ip_version | 4 |
|
||||
| is_default | False |
|
||||
| max_prefixlen | 32 |
|
||||
| min_prefixlen | 8 |
|
||||
| name | demo-subnetpool4 |
|
||||
| prefixes | 203.0.113.0/24 |
|
||||
| project_id | cfd1889ac7d64ad891d4f20aef9f8d |
|
||||
| | 7c |
|
||||
| revision_number | 1 |
|
||||
| shared | True |
|
||||
| updated_at | 2016-12-14T07:21:26Z |
|
||||
+-------------------+--------------------------------+
|
||||
|
||||
The ``default_prefix_length`` defines the subnet size you will get
|
||||
if you do not specify ``--prefix-length`` when creating a subnet.
|
||||
|
||||
Do essentially the same thing for IPv6, as shown below. There are now two
subnet pools, and regular projects can see them (the output of the listing
is trimmed a bit for display).
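A minimal sketch, using the same options as the IPv4 example and the prefix
that appears in the listing below:

.. code-block:: console

   $ openstack subnet pool create --share --pool-prefix 2001:db8:a583::/48 \
     --default-prefix-length 64 demo-subnetpool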
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack subnet pool list
|
||||
+------------------+------------------+--------------------+
|
||||
| ID | Name | Prefixes |
|
||||
+------------------+------------------+--------------------+
|
||||
| 2b7cc19f-0114-4e | demo-subnetpool | 2001:db8:a583::/48 |
|
||||
| f4-ad86-c1bb91fc | | |
|
||||
| d1f9 | | |
|
||||
| d3aefb76-2527-43 | demo-subnetpool4 | 203.0.113.0/24 |
|
||||
| d4-bc21-0ec25390 | | |
|
||||
| 8545 | | |
|
||||
+------------------+------------------+--------------------+
|
||||
|
||||
Now, use them. It is easy to create a subnet from a pool:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack subnet create --ip-version 4 --subnet-pool \
|
||||
demo-subnetpool4 --network demo-network1 demo-subnet1
|
||||
+-------------------+--------------------------------------+
|
||||
| Field | Value |
|
||||
+-------------------+--------------------------------------+
|
||||
| allocation_pools | 203.0.113.194-203.0.113.254 |
|
||||
| cidr | 203.0.113.192/26 |
|
||||
| created_at | 2016-12-14T07:33:13Z |
|
||||
| description | |
|
||||
| dns_nameservers | |
|
||||
| enable_dhcp | True |
|
||||
| gateway_ip | 203.0.113.193 |
|
||||
| headers | |
|
||||
| host_routes | |
|
||||
| id | 8d4fbae3-076c-4c08-b2dd-2d6175115a5e |
|
||||
| ip_version | 4 |
|
||||
| ipv6_address_mode | None |
|
||||
| ipv6_ra_mode | None |
|
||||
| name | demo-subnet1 |
|
||||
| network_id | 6b377f77-ce00-4ff6-8676-82343817470d |
|
||||
| project_id | cfd1889ac7d64ad891d4f20aef9f8d7c |
|
||||
| revision_number | 2 |
|
||||
| service_types | |
|
||||
| subnetpool_id | d3aefb76-2527-43d4-bc21-0ec253908545 |
|
||||
| updated_at | 2016-12-14T07:33:13Z |
|
||||
+-------------------+--------------------------------------+
|
||||
|
||||
|
||||
You can request a specific subnet from the pool. You need to specify a subnet
|
||||
that falls within the pool's prefixes. If the subnet is not already allocated,
|
||||
the request succeeds. You can leave off the IP version because it is deduced
|
||||
from the subnet pool.
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack subnet create --subnet-pool demo-subnetpool4 \
|
||||
--network demo-network1 --subnet-range 203.0.113.128/26 subnet2
|
||||
+-------------------+--------------------------------------+
|
||||
| Field | Value |
|
||||
+-------------------+--------------------------------------+
|
||||
| allocation_pools | 203.0.113.130-203.0.113.190 |
|
||||
| cidr | 203.0.113.128/26 |
|
||||
| created_at | 2016-12-14T07:27:40Z |
|
||||
| description | |
|
||||
| dns_nameservers | |
|
||||
| enable_dhcp | True |
|
||||
| gateway_ip | 203.0.113.129 |
|
||||
| headers | |
|
||||
| host_routes | |
|
||||
| id | d32814e3-cf46-4371-80dd-498a80badfba |
|
||||
| ip_version | 4 |
|
||||
| ipv6_address_mode | None |
|
||||
| ipv6_ra_mode | None |
|
||||
| name | subnet2 |
|
||||
| network_id | 6b377f77-ce00-4ff6-8676-82343817470d |
|
||||
| project_id | cfd1889ac7d64ad891d4f20aef9f8d7c |
|
||||
| revision_number | 2 |
|
||||
| service_types | |
|
||||
| subnetpool_id | d3aefb76-2527-43d4-bc21-0ec253908545 |
|
||||
| updated_at | 2016-12-14T07:27:40Z |
|
||||
+-------------------+--------------------------------------+
|
||||
|
||||
|
||||
If the pool becomes exhausted, load some more prefixes:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack subnet pool set --pool-prefix \
|
||||
198.51.100.0/24 demo-subnetpool4
|
||||
$ openstack subnet pool show demo-subnetpool4
|
||||
+-------------------+--------------------------------------+
|
||||
| Field | Value |
|
||||
+-------------------+--------------------------------------+
|
||||
| address_scope_id | None |
|
||||
| created_at | 2016-12-14T07:21:26Z |
|
||||
| default_prefixlen | 26 |
|
||||
| default_quota | None |
|
||||
| description | |
|
||||
| id | d3aefb76-2527-43d4-bc21-0ec253908545 |
|
||||
| ip_version | 4 |
|
||||
| is_default | False |
|
||||
| max_prefixlen | 32 |
|
||||
| min_prefixlen | 8 |
|
||||
| name | demo-subnetpool4 |
|
||||
| prefixes | 198.51.100.0/24, 203.0.113.0/24 |
|
||||
| project_id | cfd1889ac7d64ad891d4f20aef9f8d7c |
|
||||
| revision_number | 2 |
|
||||
| shared | True |
|
||||
| updated_at | 2016-12-14T07:30:32Z |
|
||||
+-------------------+--------------------------------------+
|
||||
|
304
doc/source/admin/config-trunking.rst
Normal file
@ -0,0 +1,304 @@
|
||||
.. _config-trunking:
|
||||
|
||||
========
|
||||
Trunking
|
||||
========
|
||||
|
||||
The network trunk service allows multiple networks to be connected to an
|
||||
instance using a single virtual NIC (vNIC). Multiple networks can be presented
|
||||
to an instance by connecting it to a single port.
|
||||
|
||||
Operation
|
||||
~~~~~~~~~
|
||||
|
||||
Network trunking consists of a service plug-in and a set of drivers that
|
||||
manage trunks on different layer-2 mechanism drivers. Users can create a
|
||||
port, associate it with a trunk, and launch an instance on that port. Users
|
||||
can dynamically attach and detach additional networks without disrupting
|
||||
operation of the instance.
|
||||
|
||||
Every trunk has a parent port and can have any number of subports.
|
||||
The parent port is the port that the trunk is associated with. Users
|
||||
create instances and specify the parent port of the trunk when launching
|
||||
instances attached to a trunk.
|
||||
|
||||
The network presented by the subport is the network of the associated
|
||||
port. When creating a subport, a ``segmentation-id`` may be required by
|
||||
the driver. ``segmentation-id`` defines the segmentation ID on which the
|
||||
subport network is presented to the instance. ``segmentation-type`` may be
|
||||
required by certain drivers like OVS, although at this time only ``vlan`` is
|
||||
supported as a ``segmentation-type``.
|
||||
|
||||
.. note::
|
||||
|
||||
The ``segmentation-type`` and ``segmentation-id`` parameters are optional
|
||||
in the Networking API. However, all drivers as of the Newton release
|
||||
require both to be provided when adding a subport to a trunk. Future
|
||||
drivers may be implemented without this requirement.
|
||||
|
||||
The ``segmentation-type`` and ``segmentation-id`` specified by the user on the
|
||||
subports is intentionally decoupled from the ``segmentation-type`` and ID of
|
||||
the networks. For example, it is possible to configure the Networking service
|
||||
with ``tenant_network_types = vxlan`` and still create subports with
|
||||
``segmentation_type = vlan``. The Networking service performs remapping as
|
||||
necessary.
|
||||
|
||||
Example configuration
|
||||
~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
The ML2 plug-in supports trunking with the following mechanism drivers:
|
||||
|
||||
* Open vSwitch (OVS)
|
||||
* Linux bridge
|
||||
* Open Virtual Network (OVN)
|
||||
|
||||
When using a ``segmentation-type`` of ``vlan``, the OVS and Linux bridge
|
||||
drivers present the network of the parent port as the untagged VLAN and all
|
||||
subports as tagged VLANs.
|
||||
|
||||
Controller node
|
||||
---------------
|
||||
|
||||
* In the ``neutron.conf`` file, enable the trunk service plug-in:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[DEFAULT]
|
||||
service_plugins = trunk
|
||||
|
||||
Verify service operation
|
||||
------------------------
|
||||
|
||||
#. Source the administrative project credentials and list the enabled
|
||||
extensions.
|
||||
#. Use the command :command:`openstack extension list --network` to verify
|
||||
that the ``Trunk Extension`` and ``Trunk port details`` extensions are
|
||||
enabled.
|
||||
|
||||
Workflow
|
||||
--------
|
||||
|
||||
At a high level, the basic steps to launching an instance on a trunk are
|
||||
the following:
|
||||
|
||||
#. Create networks and subnets for the trunk and subports
|
||||
#. Create the trunk
|
||||
#. Add subports to the trunk
|
||||
#. Launch an instance on the trunk
|
||||
|
||||
Create networks and subnets for the trunk and subports
|
||||
------------------------------------------------------
|
||||
|
||||
Create the appropriate networks for the trunk and subports that will be added
|
||||
to the trunk. Create subnets on these networks to ensure the desired layer-3
|
||||
connectivity over the trunk.
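For example, the following sketch creates the ``project-net-A`` and
``trunked-net`` networks used in this guide. The subnet names and address
ranges are illustrative, chosen to match the addresses shown in the output
below.

.. code-block:: console

   $ openstack network create project-net-A
   $ openstack subnet create --network project-net-A \
     --subnet-range 192.0.2.0/24 project-subnet-A
   $ openstack network create trunked-net
   $ openstack subnet create --network trunked-net \
     --subnet-range 198.51.100.0/24 trunked-subnet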
|
||||
|
||||
Create the trunk
|
||||
----------------
|
||||
|
||||
* Create a parent port for the trunk.
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack port create --network project-net-A trunk-parent
|
||||
+-------------------+-------------------------------------------------------------------------+
|
||||
| Field | Value |
|
||||
+-------------------+-------------------------------------------------------------------------+
|
||||
| admin_state_up | UP |
|
||||
| binding_vif_type | unbound |
|
||||
| binding_vnic_type | normal |
|
||||
| fixed_ips | ip_address='192.0.2.7',subnet_id='8b957198-d3cf-4953-8449-ad4e4dd712cc' |
|
||||
| id | 73fb9d54-43a7-4bb1-a8dc-569e0e0a0a38 |
|
||||
| mac_address | fa:16:3e:dd:c4:d1 |
|
||||
| name | trunk-parent |
|
||||
| network_id | 1b47d3e7-cda5-48e4-b0c8-d20bd7e35f55 |
|
||||
+-------------------+-------------------------------------------------------------------------+
|
||||
|
||||
* Create the trunk using ``--parent-port`` to reference the port from
|
||||
the previous step:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack network trunk create --parent-port trunk-parent trunk1
|
||||
+-----------------+--------------------------------------+
|
||||
| Field | Value |
|
||||
+-----------------+--------------------------------------+
|
||||
| admin_state_up | UP |
|
||||
| id | fdf02fcb-1844-45f1-9d9b-e4c2f522c164 |
|
||||
| name | trunk1 |
|
||||
| port_id | 73fb9d54-43a7-4bb1-a8dc-569e0e0a0a38 |
|
||||
| sub_ports | |
|
||||
+-----------------+--------------------------------------+
|
||||
|
||||
Add subports to the trunk
|
||||
-------------------------
|
||||
|
||||
Subports can be added to a trunk in two ways: creating the trunk with subports
|
||||
or adding subports to an existing trunk.
|
||||
|
||||
* Create trunk with subports:
|
||||
|
||||
This method entails creating the trunk with subports specified at trunk
|
||||
creation.
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack port create --network project-net-A trunk-parent
|
||||
+-------------------+-------------------------------------------------------------------------+
|
||||
| Field | Value |
|
||||
+-------------------+-------------------------------------------------------------------------+
|
||||
| admin_state_up | UP |
|
||||
| binding_vif_type | unbound |
|
||||
| binding_vnic_type | normal |
|
||||
| fixed_ips | ip_address='192.0.2.7',subnet_id='8b957198-d3cf-4953-8449-ad4e4dd712cc' |
|
||||
| id | 73fb9d54-43a7-4bb1-a8dc-569e0e0a0a38 |
|
||||
| mac_address | fa:16:3e:dd:c4:d1 |
|
||||
| name | trunk-parent |
|
||||
| network_id | 1b47d3e7-cda5-48e4-b0c8-d20bd7e35f55 |
|
||||
+-------------------+-------------------------------------------------------------------------+
|
||||
|
||||
$ openstack port create --network trunked-net subport1
|
||||
+-------------------+----------------------------------------------------------------------------+
|
||||
| Field | Value |
|
||||
+-------------------+----------------------------------------------------------------------------+
|
||||
| admin_state_up | UP |
|
||||
| binding_vif_type | unbound |
|
||||
| binding_vnic_type | normal |
|
||||
| fixed_ips | ip_address='198.51.100.8',subnet_id='2a860e2c-922b-437b-a149-b269a8c9b120' |
|
||||
| id | 91f9dde8-80a4-4506-b5da-c287feb8f5d8 |
|
||||
| mac_address | fa:16:3e:ba:f0:4d |
|
||||
| name | subport1 |
|
||||
| network_id | aef78ec5-16e3-4445-b82d-b2b98c6a86d9 |
|
||||
+-------------------+----------------------------------------------------------------------------+
|
||||
|
||||
$ openstack network trunk create \
|
||||
--parent-port trunk-parent \
|
||||
--subport port=subport1,segmentation-type=vlan,segmentation-id=100 \
|
||||
trunk1
|
||||
+----------------+-------------------------------------------------------------------------------------------------+
|
||||
| Field | Value |
|
||||
+----------------+-------------------------------------------------------------------------------------------------+
|
||||
| admin_state_up | UP |
|
||||
| id | 61d8e620-fe3a-4d8f-b9e6-e1b0dea6d9e3 |
|
||||
| name | trunk1 |
|
||||
| port_id | 73fb9d54-43a7-4bb1-a8dc-569e0e0a0a38 |
|
||||
| sub_ports | port_id='73fb9d54-43a7-4bb1-a8dc-569e0e0a0a38', segmentation_id='100', segmentation_type='vlan' |
|
||||
+----------------+-------------------------------------------------------------------------------------------------+
|
||||
|
||||
* Add subports to an existing trunk:
|
||||
|
||||
This method entails creating a trunk, then adding subports to the trunk
|
||||
after it has already been created.
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack network trunk set --subport \
|
||||
port=subport1,segmentation-type=vlan,segmentation-id=100 \
|
||||
trunk1
|
||||
|
||||
.. note::
|
||||
|
||||
The command provides no output.
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack network trunk show trunk1
|
||||
+----------------+-------------------------------------------------------------------------------------------------+
|
||||
| Field | Value |
|
||||
+----------------+-------------------------------------------------------------------------------------------------+
|
||||
| admin_state_up | UP |
|
||||
| id | 61d8e620-fe3a-4d8f-b9e6-e1b0dea6d9e3 |
|
||||
| name | trunk1 |
|
||||
| port_id | 73fb9d54-43a7-4bb1-a8dc-569e0e0a0a38 |
|
||||
| sub_ports | port_id='73fb9d54-43a7-4bb1-a8dc-569e0e0a0a38', segmentation_id='100', segmentation_type='vlan' |
|
||||
+----------------+-------------------------------------------------------------------------------------------------+
|
||||
|
||||
Launch an instance on the trunk
|
||||
-------------------------------
|
||||
|
||||
* Show trunk details to get the ``port_id`` of the trunk.
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack network trunk show trunk1
|
||||
+----------------+--------------------------------------+
|
||||
| Field | Value |
|
||||
+----------------+--------------------------------------+
|
||||
| admin_state_up | UP |
|
||||
| id | 61d8e620-fe3a-4d8f-b9e6-e1b0dea6d9e3 |
|
||||
| name | trunk |
|
||||
| port_id | 73fb9d54-43a7-4bb1-a8dc-569e0e0a0a38 |
|
||||
| sub_ports | |
|
||||
+----------------+--------------------------------------+
|
||||
|
||||
* Launch the instance by specifying ``port-id`` using the value of ``port_id``
from the trunk details, as in the sketch below. Launching an instance on a
subport is not supported.
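A minimal sketch; the flavor and image names are placeholders, and additional
options such as a key pair may be required in your environment:

.. code-block:: console

   $ openstack server create --flavor m1.small --image IMAGE \
     --nic port-id=73fb9d54-43a7-4bb1-a8dc-569e0e0a0a38 trunk-instance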
|
||||
|
||||
Using trunks and subports inside an instance
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
When configuring instances to use a subport, ensure that the interface on the
|
||||
instance is set to use the MAC address assigned to the port by the Networking
|
||||
service. Instances are not made aware of changes made to the trunk after they
|
||||
are active. For example, when a subport with a ``segmentation-type`` of
|
||||
``vlan`` is added to a trunk, any operations specific to the instance operating
|
||||
system that allow the instance to send and receive traffic on the new VLAN must
|
||||
be handled outside of the Networking service.
|
||||
|
||||
When creating subports, the MAC address of the trunk parent port can be set
|
||||
on the subport. This will allow VLAN subinterfaces inside an instance launched
|
||||
on a trunk to be configured without explicitly setting a MAC address. Although
|
||||
unique MAC addresses can be used for subports, this can present issues with
|
||||
ARP spoof protections and the native OVS firewall driver. If the native OVS
firewall driver is to be used, we recommend that the MAC address of the
parent port be reused on all subports, as in the sketch below.
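For example, the subport can be created with the parent port's MAC address
shown earlier, and the VLAN subinterface can then be configured inside the
guest. This is a sketch: the guest interface name ``ens3`` is an assumption,
and the VLAN ID matches the ``segmentation-id`` of ``100`` used above.

.. code-block:: console

   $ openstack port create --network trunked-net \
     --mac-address fa:16:3e:dd:c4:d1 subport1

Inside the instance:

.. code-block:: console

   # ip link add link ens3 name ens3.100 type vlan id 100
   # ip link set dev ens3.100 up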
|
||||
|
||||
Trunk states
|
||||
~~~~~~~~~~~~
|
||||
|
||||
* ``ACTIVE``
|
||||
|
||||
The trunk is ``ACTIVE`` when both the logical and physical resources have
|
||||
been created. This means that all operations within the Networking and
|
||||
Compute services have completed and the trunk is ready for use.
|
||||
|
||||
* ``DOWN``
|
||||
|
||||
A trunk is ``DOWN`` when it is first created without an instance launched on
|
||||
it, or when the instance associated with the trunk has been deleted.
|
||||
|
||||
* ``DEGRADED``
|
||||
|
||||
A trunk can be in a ``DEGRADED`` state when a temporary failure during
|
||||
the provisioning process is encountered. This includes situations where a
|
||||
subport add or remove operation fails. When in a degraded state, the trunk
|
||||
is still usable and some subports may be usable as well. Operations that
|
||||
cause the trunk to go into a ``DEGRADED`` state can be retried to fix
|
||||
temporary failures and move the trunk into an ``ACTIVE`` state.
|
||||
|
||||
* ``ERROR``
|
||||
|
||||
A trunk is in ``ERROR`` state if the request leads to a conflict or an
|
||||
error that cannot be fixed by retrying the request. The ``ERROR`` status
|
||||
can be encountered if the network is not compatible with the trunk
|
||||
configuration or the binding process leads to a persistent failure. When
|
||||
a trunk is in ``ERROR`` state, it must be brought to a sane state
|
||||
(``ACTIVE``), or else requests to add subports will be rejected.
|
||||
|
||||
* ``BUILD``
|
||||
|
||||
A trunk is in ``BUILD`` state while the resources associated with the
|
||||
trunk are in the process of being provisioned. Once the trunk and all of
|
||||
the subports have been provisioned successfully, the trunk transitions
|
||||
to ``ACTIVE``. If there was a partial failure, the trunk transitions
|
||||
to ``DEGRADED``.
|
||||
|
||||
When ``admin_state`` is set to ``DOWN``, the user is blocked from performing
|
||||
operations on the trunk. ``admin_state`` is set by the user and should not be
|
||||
used to monitor the health of the trunk.
|
||||
|
||||
Limitations and issues
|
||||
~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
* See `bugs <https://bugs.launchpad.net/neutron/+bugs?field.tag=trunk>`__ for
|
||||
more information.
|
39
doc/source/admin/config.rst
Normal file
@ -0,0 +1,39 @@
|
||||
.. _config:
|
||||
|
||||
=============
|
||||
Configuration
|
||||
=============
|
||||
|
||||
.. toctree::
|
||||
:maxdepth: 2
|
||||
|
||||
config-services-agent
|
||||
config-ml2
|
||||
config-address-scopes
|
||||
config-auto-allocation
|
||||
config-az
|
||||
config-bgp-dynamic-routing
|
||||
config-dhcp-ha
|
||||
config-dns-int
|
||||
config-dns-res
|
||||
config-dvr-ha-snat
|
||||
config-ipam
|
||||
config-ipv6
|
||||
config-lbaas
|
||||
config-macvtap
|
||||
config-mtu
|
||||
config-ovs-dpdk
|
||||
config-ovsfwdriver
|
||||
config-qos
|
||||
config-rbac
|
||||
config-routed-networks
|
||||
config-sfc
|
||||
config-sriov
|
||||
config-subnet-pools
|
||||
config-service-subnets
|
||||
config-trunking
|
||||
|
||||
.. note::
|
||||
|
||||
For general configuration, see the `Configuration Reference
|
||||
<https://docs.openstack.org/ocata/config-reference/>`_.
|
178
doc/source/admin/deploy-lb-ha-vrrp.rst
Normal file
@ -0,0 +1,178 @@
|
||||
.. _deploy-lb-ha-vrrp:
|
||||
|
||||
==========================================
|
||||
Linux bridge: High availability using VRRP
|
||||
==========================================
|
||||
|
||||
.. include:: shared/deploy-ha-vrrp.txt
|
||||
|
||||
.. warning::
|
||||
|
||||
This high-availability mechanism is not compatible with the layer-2
|
||||
population mechanism. You must disable layer-2 population in the
|
||||
``linuxbridge_agent.ini`` file and restart the Linux bridge agent
|
||||
on all existing network and compute nodes prior to deploying the example
|
||||
configuration.
|
||||
|
||||
Prerequisites
|
||||
~~~~~~~~~~~~~
|
||||
|
||||
Add one network node with the following components:
|
||||
|
||||
* Three network interfaces: management, provider, and overlay.
|
||||
* OpenStack Networking layer-2 agent, layer-3 agent, and any
|
||||
dependencies.
|
||||
|
||||
.. note::
|
||||
|
||||
You can keep the DHCP and metadata agents on each compute node or
|
||||
move them to the network nodes.
|
||||
|
||||
Architecture
|
||||
~~~~~~~~~~~~
|
||||
|
||||
.. image:: figures/deploy-lb-ha-vrrp-overview.png
|
||||
:alt: High-availability using Linux bridge with VRRP - overview
|
||||
|
||||
The following figure shows components and connectivity for one self-service
|
||||
network and one untagged (flat) network. The master router resides on network
|
||||
node 1. In this particular case, the instance resides on the same compute
|
||||
node as the DHCP agent for the network. If the DHCP agent resides on another
|
||||
compute node, the latter only contains a DHCP namespace and Linux bridge
|
||||
with a port on the overlay physical network interface.
|
||||
|
||||
.. image:: figures/deploy-lb-ha-vrrp-compconn1.png
|
||||
:alt: High-availability using Linux bridge with VRRP - components and connectivity - one network
|
||||
|
||||
Example configuration
|
||||
~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
Use the following example configuration as a template to add support for
|
||||
high-availability using VRRP to an existing operational environment that
|
||||
supports self-service networks.
|
||||
|
||||
Controller node
|
||||
---------------
|
||||
|
||||
#. In the ``neutron.conf`` file:
|
||||
|
||||
* Enable VRRP.
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[DEFAULT]
|
||||
l3_ha = True
|
||||
|
||||
#. Restart the following services:
|
||||
|
||||
* Server
|
||||
|
||||
Network node 1
|
||||
--------------
|
||||
|
||||
No changes.
|
||||
|
||||
Network node 2
|
||||
--------------
|
||||
|
||||
#. Install the Networking service Linux bridge layer-2 agent and layer-3
|
||||
agent.
|
||||
|
||||
#. In the ``neutron.conf`` file, configure common options:
|
||||
|
||||
.. include:: shared/deploy-config-neutron-common.txt
|
||||
|
||||
#. In the ``linuxbridge_agent.ini`` file, configure the layer-2 agent.
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[linux_bridge]
|
||||
physical_interface_mappings = provider:PROVIDER_INTERFACE
|
||||
|
||||
[vxlan]
|
||||
enable_vxlan = True
|
||||
local_ip = OVERLAY_INTERFACE_IP_ADDRESS
|
||||
|
||||
[securitygroup]
|
||||
firewall_driver = iptables
|
||||
|
||||
Replace ``PROVIDER_INTERFACE`` with the name of the underlying interface
|
||||
that handles provider networks. For example, ``eth1``.
|
||||
|
||||
Replace ``OVERLAY_INTERFACE_IP_ADDRESS`` with the IP address of the
|
||||
interface that handles VXLAN overlays for self-service networks.
|
||||
|
||||
#. In the ``l3_agent.ini`` file, configure the layer-3 agent.
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[DEFAULT]
|
||||
interface_driver = linuxbridge
|
||||
external_network_bridge =
|
||||
|
||||
.. note::
|
||||
|
||||
The ``external_network_bridge`` option intentionally contains
|
||||
no value.
|
||||
|
||||
#. Start the following services:
|
||||
|
||||
* Linux bridge agent
|
||||
* Layer-3 agent
|
||||
|
||||
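For example, on a distribution that uses systemd with the common service unit
names (an assumption; adjust to your packaging), starting these services might
look like the following:

.. code-block:: console

   # systemctl start neutron-linuxbridge-agent neutron-l3-agent
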
Compute nodes
|
||||
-------------
|
||||
|
||||
No changes.
|
||||
|
||||
Verify service operation
|
||||
------------------------
|
||||
|
||||
#. Source the administrative project credentials.
|
||||
#. Verify presence and operation of the agents.
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack network agent list
|
||||
+--------------------------------------+--------------------+----------+-------------------+-------+-------+---------------------------+
|
||||
| ID | Agent Type | Host | Availability Zone | Alive | State | Binary |
|
||||
+--------------------------------------+--------------------+----------+-------------------+-------+-------+---------------------------+
|
||||
| 09de6af6-c5f1-4548-8b09-18801f068c57 | Linux bridge agent | compute2 | | True | UP | neutron-linuxbridge-agent |
|
||||
| 188945d1-9e70-4803-a276-df924e0788a4 | Linux bridge agent | compute1 | | True | UP | neutron-linuxbridge-agent |
|
||||
| e76c440d-d5f6-4316-a674-d689630b629e | DHCP agent | compute1 | nova | True | UP | neutron-dhcp-agent |
|
||||
| e67367de-6657-11e6-86a4-931cd04404bb | DHCP agent | compute2 | nova | True | UP | neutron-dhcp-agent |
|
||||
| e8174cae-6657-11e6-89f0-534ac6d0cb5c | Metadata agent | compute1 | | True | UP | neutron-metadata-agent |
|
||||
| ece49ec6-6657-11e6-bafb-c7560f19197d | Metadata agent | compute2 | | True | UP | neutron-metadata-agent |
|
||||
| 598f6357-4331-4da5-a420-0f5be000bec9 | L3 agent | network1 | nova | True | UP | neutron-l3-agent |
|
||||
| f4734e0f-bcd5-4922-a19d-e31d56b0a7ae | Linux bridge agent | network1 | | True | UP | neutron-linuxbridge-agent |
|
||||
| 670e5805-340b-4182-9825-fa8319c99f23 | Linux bridge agent | network2 | | True | UP | neutron-linuxbridge-agent |
|
||||
| 96224e89-7c15-42e9-89c4-8caac7abdd54 | L3 agent | network2 | nova | True | UP | neutron-l3-agent |
|
||||
+--------------------------------------+--------------------+----------+-------------------+-------+-------+---------------------------+
|
||||
|
||||
Create initial networks
|
||||
-----------------------
|
||||
|
||||
.. include:: shared/deploy-ha-vrrp-initialnetworks.txt
|
||||
|
||||
Verify network operation
|
||||
------------------------
|
||||
|
||||
.. include:: shared/deploy-ha-vrrp-verifynetworkoperation.txt
|
||||
|
||||
Verify failover operation
|
||||
-------------------------
|
||||
|
||||
.. include:: shared/deploy-ha-vrrp-verifyfailoveroperation.txt
|
||||
|
||||
Keepalived VRRP health check
|
||||
----------------------------
|
||||
|
||||
.. include:: shared/keepalived-vrrp-healthcheck.txt
|
||||
|
||||
Network traffic flow
|
||||
~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
This high-availability mechanism simply augments :ref:`deploy-lb-selfservice`
|
||||
with failover of layer-3 services to another router if the master router
|
||||
fails. Thus, you can reference :ref:`Self-service network traffic flow
|
||||
<deploy-lb-selfservice-networktrafficflow>` for normal operation.
|
365
doc/source/admin/deploy-lb-provider.rst
Normal file
@ -0,0 +1,365 @@
|
||||
.. _deploy-lb-provider:
|
||||
|
||||
===============================
|
||||
Linux bridge: Provider networks
|
||||
===============================
|
||||
|
||||
The provider networks architecture example provides layer-2 connectivity
|
||||
between instances and the physical network infrastructure using VLAN
|
||||
(802.1q) tagging. It supports one untagged (flat) network and up to
|
||||
4095 tagged (VLAN) networks. The actual quantity of VLAN networks depends
|
||||
on the physical network infrastructure. For more information on provider
|
||||
networks, see :ref:`intro-os-networking-provider`.
|
||||
|
||||
Prerequisites
|
||||
~~~~~~~~~~~~~
|
||||
|
||||
One controller node with the following components:
|
||||
|
||||
* Two network interfaces: management and provider.
|
||||
* OpenStack Networking server service and ML2 plug-in.
|
||||
|
||||
Two compute nodes with the following components:
|
||||
|
||||
* Two network interfaces: management and provider.
|
||||
* OpenStack Networking Linux bridge layer-2 agent, DHCP agent, metadata agent,
|
||||
and any dependencies.
|
||||
|
||||
.. note::
|
||||
|
||||
Larger deployments typically deploy the DHCP and metadata agents on a
|
||||
subset of compute nodes to increase performance and redundancy. However,
|
||||
too many agents can overwhelm the message bus. Also, to further simplify
|
||||
any deployment, you can omit the metadata agent and use a configuration
|
||||
drive to provide metadata to instances.
|
||||
|
||||
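As a sketch of the configuration-drive alternative (an assumption about your
Compute configuration; the option lives in ``nova.conf`` on the compute
nodes), forcing instances to use a configuration drive for metadata might
look like:

.. code-block:: ini

   [DEFAULT]
   force_config_drive = true
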
Architecture
|
||||
~~~~~~~~~~~~
|
||||
|
||||
.. image:: figures/deploy-lb-provider-overview.png
|
||||
:alt: Provider networks using Linux bridge - overview
|
||||
|
||||
The following figure shows components and connectivity for one untagged
|
||||
(flat) network. In this particular case, the instance resides on the
|
||||
same compute node as the DHCP agent for the network. If the DHCP agent
|
||||
resides on another compute node, the latter only contains a DHCP namespace
|
||||
and Linux bridge with a port on the provider physical network interface.
|
||||
|
||||
.. image:: figures/deploy-lb-provider-compconn1.png
|
||||
:alt: Provider networks using Linux bridge - components and connectivity - one network
|
||||
|
||||
The following figure describes virtual connectivity among components for
|
||||
two tagged (VLAN) networks. Essentially, each network uses a separate
|
||||
bridge that contains a port on the VLAN sub-interface on the provider
|
||||
physical network interface. Similar to the single untagged network case,
|
||||
the DHCP agent may reside on a different compute node.
|
||||
|
||||
.. image:: figures/deploy-lb-provider-compconn2.png
|
||||
:alt: Provider networks using Linux bridge - components and connectivity - multiple networks
|
||||
|
||||
.. note::
|
||||
|
||||
These figures omit the controller node because it does not handle instance
|
||||
network traffic.
|
||||
|
||||
Example configuration
|
||||
~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
Use the following example configuration as a template to deploy provider
|
||||
networks in your environment.
|
||||
|
||||
Controller node
|
||||
---------------
|
||||
|
||||
#. Install the Networking service components that provide the
|
||||
``neutron-server`` service and ML2 plug-in.
|
||||
|
||||
#. In the ``neutron.conf`` file:
|
||||
|
||||
* Configure common options:
|
||||
|
||||
.. include:: shared/deploy-config-neutron-common.txt
|
||||
|
||||
* Disable service plug-ins because provider networks do not require
|
||||
any. However, this breaks portions of the dashboard that manage
|
||||
the Networking service. See the
|
||||
`Ocata Install Tutorials and Guides <https://docs.openstack.org/project-install-guide/ocata>`__
|
||||
for more information.
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[DEFAULT]
|
||||
service_plugins =
|
||||
|
||||
* Enable two DHCP agents per network so both compute nodes can
|
||||
provide DHCP service for provider networks.
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[DEFAULT]
|
||||
dhcp_agents_per_network = 2
|
||||
|
||||
* If necessary, :ref:`configure MTU <config-mtu>`.
|
||||
|
||||
#. In the ``ml2_conf.ini`` file:
|
||||
|
||||
* Configure drivers and network types:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[ml2]
|
||||
type_drivers = flat,vlan
|
||||
tenant_network_types =
|
||||
mechanism_drivers = linuxbridge
|
||||
extension_drivers = port_security
|
||||
|
||||
* Configure network mappings:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[ml2_type_flat]
|
||||
flat_networks = provider
|
||||
|
||||
[ml2_type_vlan]
|
||||
network_vlan_ranges = provider
|
||||
|
||||
.. note::
|
||||
|
||||
The ``tenant_network_types`` option contains no value because the
|
||||
architecture does not support self-service networks.
|
||||
|
||||
.. note::
|
||||
|
||||
The ``provider`` value in the ``network_vlan_ranges`` option lacks VLAN
|
||||
ID ranges to support use of arbitrary VLAN IDs.
|
||||
|
||||
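If you prefer to restrict projects to a specific range of VLAN IDs instead,
the option also accepts an explicit range. The values below are purely
illustrative:

.. code-block:: ini

   [ml2_type_vlan]
   network_vlan_ranges = provider:1001:1100
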
#. Populate the database.
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
|
||||
--config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
|
||||
|
||||
#. Start the following services:
|
||||
|
||||
* Server
|
||||
|
||||
Compute nodes
|
||||
-------------
|
||||
|
||||
#. Install the Networking service Linux bridge layer-2 agent.
|
||||
|
||||
#. In the ``neutron.conf`` file, configure common options:
|
||||
|
||||
.. include:: shared/deploy-config-neutron-common.txt
|
||||
|
||||
#. In the ``linuxbridge_agent.ini`` file, configure the Linux bridge agent:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[linux_bridge]
|
||||
physical_interface_mappings = provider:PROVIDER_INTERFACE
|
||||
|
||||
[vxlan]
|
||||
enable_vxlan = False
|
||||
|
||||
[securitygroup]
|
||||
firewall_driver = iptables
|
||||
|
||||
Replace ``PROVIDER_INTERFACE`` with the name of the underlying interface
|
||||
that handles provider networks. For example, ``eth1``.
|
||||
|
||||
#. In the ``dhcp_agent.ini`` file, configure the DHCP agent:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[DEFAULT]
|
||||
interface_driver = linuxbridge
|
||||
enable_isolated_metadata = True
|
||||
force_metadata = True
|
||||
|
||||
.. note::
|
||||
|
||||
The ``force_metadata`` option forces the DHCP agent to provide
|
||||
a host route to the metadata service on ``169.254.169.254``
|
||||
regardless of whether the subnet contains an interface on a
|
||||
router, thus maintaining similar and predictable metadata behavior
|
||||
among subnets.
|
||||
|
||||
#. In the ``metadata_agent.ini`` file, configure the metadata agent:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[DEFAULT]
|
||||
nova_metadata_ip = controller
|
||||
metadata_proxy_shared_secret = METADATA_SECRET
|
||||
|
||||
The value of ``METADATA_SECRET`` must match the value of the same option
|
||||
in the ``[neutron]`` section of the ``nova.conf`` file.
|
||||
|
||||
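For reference, the corresponding settings in the ``[neutron]`` section of the
``nova.conf`` file typically resemble the following sketch;
``METADATA_SECRET`` is the same placeholder as above:

.. code-block:: ini

   [neutron]
   service_metadata_proxy = true
   metadata_proxy_shared_secret = METADATA_SECRET
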
#. Start the following services:
|
||||
|
||||
* Linux bridge agent
|
||||
* DHCP agent
|
||||
* Metadata agent
|
||||
|
||||
Verify service operation
|
||||
------------------------
|
||||
|
||||
#. Source the administrative project credentials.
|
||||
#. Verify presence and operation of the agents:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack network agent list
|
||||
+--------------------------------------+--------------------+----------+-------------------+-------+-------+---------------------------+
|
||||
| ID | Agent Type | Host | Availability Zone | Alive | State | Binary |
|
||||
+--------------------------------------+--------------------+----------+-------------------+-------+-------+---------------------------+
|
||||
| 09de6af6-c5f1-4548-8b09-18801f068c57 | Linux bridge agent | compute2 | | True | UP | neutron-linuxbridge-agent |
|
||||
| 188945d1-9e70-4803-a276-df924e0788a4 | Linux bridge agent | compute1 | | True | UP | neutron-linuxbridge-agent |
|
||||
| e76c440d-d5f6-4316-a674-d689630b629e | DHCP agent | compute1 | nova | True | UP | neutron-dhcp-agent |
|
||||
| e67367de-6657-11e6-86a4-931cd04404bb | DHCP agent | compute2 | nova | True | UP | neutron-dhcp-agent |
|
||||
| e8174cae-6657-11e6-89f0-534ac6d0cb5c | Metadata agent | compute1 | | True | UP | neutron-metadata-agent |
|
||||
| ece49ec6-6657-11e6-bafb-c7560f19197d | Metadata agent | compute2 | | True | UP | neutron-metadata-agent |
|
||||
+--------------------------------------+--------------------+----------+-------------------+-------+-------+---------------------------+
|
||||
|
||||
Create initial networks
|
||||
-----------------------
|
||||
|
||||
.. include:: shared/deploy-provider-initialnetworks.txt
|
||||
|
||||
Verify network operation
|
||||
------------------------
|
||||
|
||||
.. include:: shared/deploy-provider-verifynetworkoperation.txt
|
||||
|
||||
Network traffic flow
|
||||
~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. include:: shared/deploy-provider-networktrafficflow.txt
|
||||
|
||||
North-south scenario: Instance with a fixed IP address
|
||||
------------------------------------------------------
|
||||
|
||||
* The instance resides on compute node 1 and uses provider network 1.
|
||||
* The instance sends a packet to a host on the Internet.
|
||||
|
||||
The following steps involve compute node 1.
|
||||
|
||||
#. The instance interface (1) forwards the packet to the provider
|
||||
bridge instance port (2) via ``veth`` pair.
|
||||
#. Security group rules (3) on the provider bridge handle firewalling
|
||||
and connection tracking for the packet.
|
||||
#. The VLAN sub-interface port (4) on the provider bridge forwards
|
||||
the packet to the physical network interface (5).
|
||||
#. The physical network interface (5) adds VLAN tag 101 to the packet and
|
||||
forwards it to the physical network infrastructure switch (6).
|
||||
|
||||
The following steps involve the physical network infrastructure:
|
||||
|
||||
#. The switch removes VLAN tag 101 from the packet and forwards it to the
|
||||
router (7).
|
||||
#. The router routes the packet from the provider network (8) to the
|
||||
external network (9) and forwards the packet to the switch (10).
|
||||
#. The switch forwards the packet to the external network (11).
|
||||
#. The external network (12) receives the packet.
|
||||
|
||||
.. image:: figures/deploy-lb-provider-flowns1.png
|
||||
:alt: Provider networks using Linux bridge - network traffic flow - north/south
|
||||
|
||||
.. note::
|
||||
|
||||
Return traffic follows similar steps in reverse.
|
||||
|
||||
East-west scenario 1: Instances on the same network
|
||||
---------------------------------------------------
|
||||
|
||||
Instances on the same network communicate directly between compute nodes
|
||||
containing those instances.
|
||||
|
||||
* Instance 1 resides on compute node 1 and uses provider network 1.
|
||||
* Instance 2 resides on compute node 2 and uses provider network 1.
|
||||
* Instance 1 sends a packet to instance 2.
|
||||
|
||||
The following steps involve compute node 1:
|
||||
|
||||
#. The instance 1 interface (1) forwards the packet to the provider
|
||||
bridge instance port (2) via ``veth`` pair.
|
||||
#. Security group rules (3) on the provider bridge handle firewalling
|
||||
and connection tracking for the packet.
|
||||
#. The VLAN sub-interface port (4) on the provider bridge forwards
|
||||
the packet to the physical network interface (5).
|
||||
#. The physical network interface (5) adds VLAN tag 101 to the packet and
|
||||
forwards it to the physical network infrastructure switch (6).
|
||||
|
||||
The following steps involve the physical network infrastructure:
|
||||
|
||||
#. The switch forwards the packet from compute node 1 to compute node 2 (7).
|
||||
|
||||
The following steps involve compute node 2:
|
||||
|
||||
#. The physical network interface (8) removes VLAN tag 101 from the packet
|
||||
and forwards it to the VLAN sub-interface port (9) on the provider bridge.
|
||||
#. Security group rules (10) on the provider bridge handle firewalling
|
||||
and connection tracking for the packet.
|
||||
#. The provider bridge instance port (11) forwards the packet to
|
||||
the instance 2 interface (12) via ``veth`` pair.
|
||||
|
||||
.. image:: figures/deploy-lb-provider-flowew1.png
|
||||
:alt: Provider networks using Linux bridge - network traffic flow - east/west scenario 1
|
||||
|
||||
.. note::
|
||||
|
||||
Return traffic follows similar steps in reverse.
|
||||
|
||||
East-west scenario 2: Instances on different networks
|
||||
-----------------------------------------------------
|
||||
|
||||
Instances communicate via a router on the physical network infrastructure.
|
||||
|
||||
* Instance 1 resides on compute node 1 and uses provider network 1.
|
||||
* Instance 2 resides on compute node 1 and uses provider network 2.
|
||||
* Instance 1 sends a packet to instance 2.
|
||||
|
||||
.. note::
|
||||
|
||||
Both instances reside on the same compute node to illustrate how VLAN
|
||||
tagging enables multiple logical layer-2 networks to use the same
|
||||
physical layer-2 network.
|
||||
|
||||
The following steps involve the compute node:
|
||||
|
||||
#. The instance 1 interface (1) forwards the packet to the provider
|
||||
bridge instance port (2) via ``veth`` pair.
|
||||
#. Security group rules (3) on the provider bridge handle firewalling
|
||||
and connection tracking for the packet.
|
||||
#. The VLAN sub-interface port (4) on the provider bridge forwards
|
||||
the packet to the physical network interface (5).
|
||||
#. The physical network interface (5) adds VLAN tag 101 to the packet and
|
||||
forwards it to the physical network infrastructure switch (6).
|
||||
|
||||
The following steps involve the physical network infrastructure:
|
||||
|
||||
#. The switch removes VLAN tag 101 from the packet and forwards it to the
|
||||
router (7).
|
||||
#. The router routes the packet from provider network 1 (8) to provider
|
||||
network 2 (9).
|
||||
#. The router forwards the packet to the switch (10).
|
||||
#. The switch adds VLAN tag 102 to the packet and forwards it to compute
|
||||
node 1 (11).
|
||||
|
||||
The following steps involve the compute node:
|
||||
|
||||
#. The physical network interface (12) removes VLAN tag 102 from the packet
|
||||
and forwards it to the VLAN sub-interface port (13) on the provider bridge.
|
||||
#. Security group rules (14) on the provider bridge handle firewalling
|
||||
and connection tracking for the packet.
|
||||
#. The provider bridge instance port (15) forwards the packet to
|
||||
the instance 2 interface (16) via ``veth`` pair.
|
||||
|
||||
.. image:: figures/deploy-lb-provider-flowew2.png
|
||||
:alt: Provider networks using Linux bridge - network traffic flow - east/west scenario 2
|
||||
|
||||
.. note::
|
||||
|
||||
Return traffic follows similar steps in reverse.
|
422
doc/source/admin/deploy-lb-selfservice.rst
Normal file
@ -0,0 +1,422 @@
|
||||
.. _deploy-lb-selfservice:
|
||||
|
||||
===================================
|
||||
Linux bridge: Self-service networks
|
||||
===================================
|
||||
|
||||
This architecture example augments :ref:`deploy-lb-provider` to support
|
||||
a nearly limitless quantity of entirely virtual networks. Although the
|
||||
Networking service supports VLAN self-service networks, this example
|
||||
focuses on VXLAN self-service networks. For more information on
|
||||
self-service networks, see :ref:`intro-os-networking-selfservice`.
|
||||
|
||||
.. note::
|
||||
|
||||
The Linux bridge agent lacks support for other overlay protocols such
|
||||
as GRE and Geneve.
|
||||
|
||||
Prerequisites
|
||||
~~~~~~~~~~~~~
|
||||
|
||||
Add one network node with the following components:
|
||||
|
||||
* Three network interfaces: management, provider, and overlay.
|
||||
* OpenStack Networking Linux bridge layer-2 agent, layer-3 agent, and any
|
||||
dependencies.
|
||||
|
||||
Modify the compute nodes with the following components:
|
||||
|
||||
* Add one network interface: overlay.
|
||||
|
||||
.. note::
|
||||
|
||||
You can keep the DHCP and metadata agents on each compute node or
|
||||
move them to the network node.
|
||||
|
||||
Architecture
|
||||
~~~~~~~~~~~~
|
||||
|
||||
.. image:: figures/deploy-lb-selfservice-overview.png
|
||||
:alt: Self-service networks using Linux bridge - overview
|
||||
|
||||
The following figure shows components and connectivity for one self-service
|
||||
network and one untagged (flat) provider network. In this particular case, the
|
||||
instance resides on the same compute node as the DHCP agent for the network.
|
||||
If the DHCP agent resides on another compute node, the latter only contains
|
||||
a DHCP namespace and Linux bridge with a port on the overlay physical network
|
||||
interface.
|
||||
|
||||
.. image:: figures/deploy-lb-selfservice-compconn1.png
|
||||
:alt: Self-service networks using Linux bridge - components and connectivity - one network
|
||||
|
||||
Example configuration
|
||||
~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
Use the following example configuration as a template to add support for
|
||||
self-service networks to an existing operational environment that supports
|
||||
provider networks.
|
||||
|
||||
Controller node
|
||||
---------------
|
||||
|
||||
#. In the ``neutron.conf`` file:
|
||||
|
||||
* Enable routing and allow overlapping IP address ranges.
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[DEFAULT]
|
||||
service_plugins = router
|
||||
allow_overlapping_ips = True
|
||||
|
||||
#. In the ``ml2_conf.ini`` file:
|
||||
|
||||
* Add ``vxlan`` to type drivers and project network types.
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[ml2]
|
||||
type_drivers = flat,vlan,vxlan
|
||||
tenant_network_types = vxlan
|
||||
|
||||
* Enable the layer-2 population mechanism driver.
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[ml2]
|
||||
mechanism_drivers = linuxbridge,l2population
|
||||
|
||||
* Configure the VXLAN network ID (VNI) range.
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[ml2_type_vxlan]
|
||||
vni_ranges = VNI_START:VNI_END
|
||||
|
||||
Replace ``VNI_START`` and ``VNI_END`` with appropriate numerical
|
||||
values.
|
||||
|
||||
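For example, a deployment might allocate VNIs 1 through 1000 to project
networks. The range below is purely illustrative:

.. code-block:: ini

   [ml2_type_vxlan]
   vni_ranges = 1:1000
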
#. Restart the following services:
|
||||
|
||||
* Server
|
||||
|
||||
Network node
|
||||
------------
|
||||
|
||||
#. Install the Networking service layer-3 agent.
|
||||
|
||||
#. In the ``neutron.conf`` file, configure common options:
|
||||
|
||||
.. include:: shared/deploy-config-neutron-common.txt
|
||||
|
||||
#. In the ``linuxbridge_agent.ini`` file, configure the layer-2 agent.
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[linux_bridge]
|
||||
physical_interface_mappings = provider:PROVIDER_INTERFACE
|
||||
|
||||
[vxlan]
|
||||
enable_vxlan = True
|
||||
l2_population = True
|
||||
local_ip = OVERLAY_INTERFACE_IP_ADDRESS
|
||||
|
||||
[securitygroup]
|
||||
firewall_driver = iptables
|
||||
|
||||
Replace ``PROVIDER_INTERFACE`` with the name of the underlying interface
|
||||
that handles provider networks. For example, ``eth1``.
|
||||
|
||||
Replace ``OVERLAY_INTERFACE_IP_ADDRESS`` with the IP address of the
|
||||
interface that handles VXLAN overlays for self-service networks.
|
||||
|
||||
#. In the ``l3_agent.ini`` file, configure the layer-3 agent.
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[DEFAULT]
|
||||
interface_driver = linuxbridge
|
||||
external_network_bridge =
|
||||
|
||||
.. note::
|
||||
|
||||
The ``external_network_bridge`` option intentionally contains
|
||||
no value.
|
||||
|
||||
#. Start the following services:
|
||||
|
||||
* Linux bridge agent
|
||||
* Layer-3 agent
|
||||
|
||||
Compute nodes
|
||||
-------------
|
||||
|
||||
#. In the ``linuxbridge_agent.ini`` file, enable VXLAN support including
|
||||
layer-2 population.
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[vxlan]
|
||||
enable_vxlan = True
|
||||
l2_population = True
|
||||
local_ip = OVERLAY_INTERFACE_IP_ADDRESS
|
||||
|
||||
Replace ``OVERLAY_INTERFACE_IP_ADDRESS`` with the IP address of the
|
||||
interface that handles VXLAN overlays for self-service networks.
|
||||
|
||||
#. Restart the following services:
|
||||
|
||||
* Linux bridge agent
|
||||
|
||||
Verify service operation
|
||||
------------------------
|
||||
|
||||
#. Source the administrative project credentials.
|
||||
#. Verify presence and operation of the agents.
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack network agent list
|
||||
+--------------------------------------+--------------------+----------+-------------------+-------+-------+---------------------------+
|
||||
| ID | Agent Type | Host | Availability Zone | Alive | State | Binary |
|
||||
+--------------------------------------+--------------------+----------+-------------------+-------+-------+---------------------------+
|
||||
| 09de6af6-c5f1-4548-8b09-18801f068c57 | Linux bridge agent | compute2 | | True | UP | neutron-linuxbridge-agent |
|
||||
| 188945d1-9e70-4803-a276-df924e0788a4 | Linux bridge agent | compute1 | | True | UP | neutron-linuxbridge-agent |
|
||||
| e76c440d-d5f6-4316-a674-d689630b629e | DHCP agent | compute1 | nova | True | UP | neutron-dhcp-agent |
|
||||
| e67367de-6657-11e6-86a4-931cd04404bb | DHCP agent | compute2 | nova | True | UP | neutron-dhcp-agent |
|
||||
| e8174cae-6657-11e6-89f0-534ac6d0cb5c | Metadata agent | compute1 | | True | UP | neutron-metadata-agent |
|
||||
| ece49ec6-6657-11e6-bafb-c7560f19197d | Metadata agent | compute2 | | True | UP | neutron-metadata-agent |
|
||||
| 598f6357-4331-4da5-a420-0f5be000bec9 | L3 agent | network1 | nova | True | UP | neutron-l3-agent |
|
||||
| f4734e0f-bcd5-4922-a19d-e31d56b0a7ae | Linux bridge agent | network1 | | True | UP | neutron-linuxbridge-agent |
|
||||
+--------------------------------------+--------------------+----------+-------------------+-------+-------+---------------------------+
|
||||
|
||||
Create initial networks
|
||||
-----------------------
|
||||
|
||||
.. include:: shared/deploy-selfservice-initialnetworks.txt
|
||||
|
||||
Verify network operation
|
||||
------------------------
|
||||
|
||||
.. include:: shared/deploy-selfservice-verifynetworkoperation.txt
|
||||
|
||||
.. _deploy-lb-selfservice-networktrafficflow:
|
||||
|
||||
Network traffic flow
|
||||
~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. include:: shared/deploy-selfservice-networktrafficflow.txt
|
||||
|
||||
North-south scenario 1: Instance with a fixed IP address
|
||||
--------------------------------------------------------
|
||||
|
||||
For instances with a fixed IPv4 address, the network node performs SNAT
|
||||
on north-south traffic passing from self-service to external networks
|
||||
such as the Internet. For instances with a fixed IPv6 address, the network
|
||||
node performs conventional routing of traffic between self-service and
|
||||
external networks.
|
||||
|
||||
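If you want to confirm the SNAT behavior described above, you can inspect the
NAT rules inside the router namespace on the network node. This is an
optional check; replace ``ROUTER_ID`` with the ID of your router:

.. code-block:: console

   # ip netns exec qrouter-ROUTER_ID iptables -t nat -S | grep SNAT
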
* The instance resides on compute node 1 and uses self-service network 1.
|
||||
* The instance sends a packet to a host on the Internet.
|
||||
|
||||
The following steps involve compute node 1:
|
||||
|
||||
#. The instance interface (1) forwards the packet to the self-service
|
||||
bridge instance port (2) via ``veth`` pair.
|
||||
#. Security group rules (3) on the self-service bridge handle
|
||||
firewalling and connection tracking for the packet.
|
||||
#. The self-service bridge forwards the packet to the VXLAN interface (4)
|
||||
which wraps the packet using VNI 101.
|
||||
#. The underlying physical interface (5) for the VXLAN interface forwards
|
||||
the packet to the network node via the overlay network (6).
|
||||
|
||||
The following steps involve the network node:
|
||||
|
||||
#. The underlying physical interface (7) for the VXLAN interface forwards
|
||||
the packet to the VXLAN interface (8) which unwraps the packet.
|
||||
#. The self-service bridge router port (9) forwards the packet to the
|
||||
self-service network interface (10) in the router namespace.
|
||||
|
||||
* For IPv4, the router performs SNAT on the packet which changes the
|
||||
source IP address to the router IP address on the provider network
|
||||
and sends it to the gateway IP address on the provider network via
|
||||
the gateway interface on the provider network (11).
|
||||
* For IPv6, the router sends the packet to the next-hop IP address,
|
||||
typically the gateway IP address on the provider network, via the
|
||||
provider gateway interface (11).
|
||||
|
||||
#. The router forwards the packet to the provider bridge router
|
||||
port (12).
|
||||
#. The VLAN sub-interface port (13) on the provider bridge forwards
|
||||
the packet to the provider physical network interface (14).
|
||||
#. The provider physical network interface (14) adds VLAN tag 101 to the packet
|
||||
and forwards it to the Internet via physical network infrastructure (15).
|
||||
|
||||
.. note::
|
||||
|
||||
Return traffic follows similar steps in reverse. However, without a
|
||||
floating IPv4 address, hosts on the provider or external networks cannot
|
||||
originate connections to instances on the self-service network.
|
||||
|
||||
.. image:: figures/deploy-lb-selfservice-flowns1.png
|
||||
:alt: Self-service networks using Linux bridge - network traffic flow - north/south scenario 1
|
||||
|
||||
North-south scenario 2: Instance with a floating IPv4 address
|
||||
-------------------------------------------------------------
|
||||
|
||||
For instances with a floating IPv4 address, the network node performs SNAT
|
||||
on north-south traffic passing from the instance to external networks
|
||||
such as the Internet and DNAT on north-south traffic passing from external
|
||||
networks to the instance. Floating IP addresses and NAT do not apply to IPv6.
|
||||
Thus, the network node routes IPv6 traffic in this scenario.
|
||||
|
||||
* The instance resides on compute node 1 and uses self-service network 1.
|
||||
* A host on the Internet sends a packet to the instance.
|
||||
|
||||
The following steps involve the network node:
|
||||
|
||||
#. The physical network infrastructure (1) forwards the packet to the
|
||||
provider physical network interface (2).
|
||||
#. The provider physical network interface removes VLAN tag 101 and forwards
|
||||
the packet to the VLAN sub-interface on the provider bridge.
|
||||
#. The provider bridge forwards the packet to the self-service
|
||||
router gateway port on the provider network (5).
|
||||
|
||||
* For IPv4, the router performs DNAT on the packet which changes the
|
||||
destination IP address to the instance IP address on the self-service
|
||||
network and sends it to the gateway IP address on the self-service
|
||||
network via the self-service interface (6).
|
||||
* For IPv6, the router sends the packet to the next-hop IP address,
|
||||
typically the gateway IP address on the self-service network, via
|
||||
the self-service interface (6).
|
||||
|
||||
#. The router forwards the packet to the self-service bridge router
|
||||
port (7).
|
||||
#. The self-service bridge forwards the packet to the VXLAN interface (8)
|
||||
which wraps the packet using VNI 101.
|
||||
#. The underlying physical interface (9) for the VXLAN interface forwards
|
||||
the packet to the compute node via the overlay network (10).
|
||||
|
||||
The following steps involve the compute node:
|
||||
|
||||
#. The underlying physical interface (11) for the VXLAN interface forwards
|
||||
the packet to the VXLAN interface (12) which unwraps the packet.
|
||||
#. Security group rules (13) on the self-service bridge handle firewalling
|
||||
and connection tracking for the packet.
|
||||
#. The self-service bridge instance port (14) forwards the packet to
|
||||
the instance interface (15) via ``veth`` pair.
|
||||
|
||||
.. note::
|
||||
|
||||
Egress instance traffic flows similar to north-south scenario 1, except SNAT
|
||||
changes the source IP address of the packet to the floating IPv4 address
|
||||
rather than the router IP address on the provider network.
|
||||
|
||||
.. image:: figures/deploy-lb-selfservice-flowns2.png
|
||||
:alt: Self-service networks using Linux bridge - network traffic flow - north/south scenario 2
|
||||
|
||||
East-west scenario 1: Instances on the same network
|
||||
---------------------------------------------------
|
||||
|
||||
Instances with a fixed IPv4/IPv6 or floating IPv4 address on the same network
|
||||
communicate directly between compute nodes containing those instances.
|
||||
|
||||
By default, the VXLAN protocol lacks knowledge of target location
|
||||
and uses multicast to discover it. After discovery, it stores the
|
||||
location in the local forwarding database. In large deployments,
|
||||
the discovery process can generate a significant amount of network traffic
|
||||
that all nodes must process. To eliminate the latter and generally
|
||||
increase efficiency, the Networking service includes the layer-2
|
||||
population mechanism driver that automatically populates the
|
||||
forwarding database for VXLAN interfaces. The example configuration
|
||||
enables this driver. For more information, see :ref:`config-plugin-ml2`.
|
||||
|
||||
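On a compute node, you can observe the entries that the layer-2 population
driver installs using the ``bridge`` utility. The device name below assumes
the Linux bridge agent's default ``vxlan-VNI`` naming; adjust it to match
your VNI:

.. code-block:: console

   # bridge fdb show dev vxlan-101
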
* Instance 1 resides on compute node 1 and uses self-service network 1.
|
||||
* Instance 2 resides on compute node 2 and uses self-service network 1.
|
||||
* Instance 1 sends a packet to instance 2.
|
||||
|
||||
The following steps involve compute node 1:
|
||||
|
||||
#. The instance 1 interface (1) forwards the packet to the
|
||||
self-service bridge instance port (2) via ``veth`` pair.
|
||||
#. Security group rules (3) on the self-service bridge handle firewalling
|
||||
and connection tracking for the packet.
|
||||
#. The self-service bridge forwards the packet to the VXLAN interface (4)
|
||||
which wraps the packet using VNI 101.
|
||||
#. The underlying physical interface (5) for the VXLAN interface forwards
|
||||
the packet to compute node 2 via the overlay network (6).
|
||||
|
||||
The following steps involve compute node 2:
|
||||
|
||||
#. The underlying physical interface (7) for the VXLAN interface forwards
|
||||
the packet to the VXLAN interface (8) which unwraps the packet.
|
||||
#. Security group rules (9) on the self-service bridge handle firewalling
|
||||
and connection tracking for the packet.
|
||||
#. The self-service bridge instance port (10) forwards the packet to
|
||||
the instance 2 interface (11) via ``veth`` pair.
|
||||
|
||||
.. note::
|
||||
|
||||
Return traffic follows similar steps in reverse.
|
||||
|
||||
.. image:: figures/deploy-lb-selfservice-flowew1.png
|
||||
:alt: Self-service networks using Linux bridge - network traffic flow - east/west scenario 1
|
||||
|
||||
East-west scenario 2: Instances on different networks
|
||||
-----------------------------------------------------
|
||||
|
||||
Instances using a fixed IPv4/IPv6 address or floating IPv4 address communicate
|
||||
via a router on the network node. The self-service networks must reside on the
|
||||
same router.
|
||||
|
||||
* Instance 1 resides on compute node 1 and uses self-service network 1.
|
||||
* Instance 2 resides on compute node 1 and uses self-service network 2.
|
||||
* Instance 1 sends a packet to instance 2.
|
||||
|
||||
.. note::
|
||||
|
||||
Both instances reside on the same compute node to illustrate how VXLAN
|
||||
enables multiple overlays to use the same layer-3 network.
|
||||
|
||||
The following steps involve the compute node:
|
||||
|
||||
#. The instance 1 interface (1) forwards the packet to the self-service
|
||||
bridge instance port (2) via ``veth`` pair.
|
||||
#. Security group rules (3) on the self-service bridge handle
|
||||
firewalling and connection tracking for the packet.
|
||||
#. The self-service bridge forwards the packet to the VXLAN interface (4)
|
||||
which wraps the packet using VNI 101.
|
||||
#. The underlying physical interface (5) for the VXLAN interface forwards
|
||||
the packet to the network node via the overlay network (6).
|
||||
|
||||
The following steps involve the network node:
|
||||
|
||||
#. The underlying physical interface (7) for the VXLAN interface forwards
|
||||
the packet to the VXLAN interface (8) which unwraps the packet.
|
||||
#. The self-service bridge router port (9) forwards the packet to the
|
||||
self-service network 1 interface (10) in the router namespace.
|
||||
#. The router sends the packet to the next-hop IP address, typically the
|
||||
gateway IP address on self-service network 2, via the self-service
|
||||
network 2 interface (11).
|
||||
#. The router forwards the packet to the self-service network 2 bridge router
|
||||
port (12).
|
||||
#. The self-service network 2 bridge forwards the packet to the VXLAN
|
||||
interface (13) which wraps the packet using VNI 102.
|
||||
#. The physical network interface (14) for the VXLAN interface sends the
|
||||
packet to the compute node via the overlay network (15).
|
||||
|
||||
The following steps involve the compute node:
|
||||
|
||||
#. The underlying physical interface (16) for the VXLAN interface sends
|
||||
the packet to the VXLAN interface (17) which unwraps the packet.
|
||||
#. Security group rules (18) on the self-service bridge handle firewalling
|
||||
and connection tracking for the packet.
|
||||
#. The self-service bridge instance port (19) forwards the packet to
|
||||
the instance 2 interface (20) via ``veth`` pair.
|
||||
|
||||
.. note::
|
||||
|
||||
Return traffic follows similar steps in reverse.
|
||||
|
||||
.. image:: figures/deploy-lb-selfservice-flowew2.png
|
||||
:alt: Self-service networks using Linux bridge - network traffic flow - east/west scenario 2
|
17
doc/source/admin/deploy-lb.rst
Normal file
@ -0,0 +1,17 @@
|
||||
.. _deploy-lb:
|
||||
|
||||
=============================
|
||||
Linux bridge mechanism driver
|
||||
=============================
|
||||
|
||||
The Linux bridge mechanism driver uses only Linux bridges and ``veth`` pairs
|
||||
as interconnection devices. A layer-2 agent manages Linux bridges on each
|
||||
compute node and any other node that provides layer-3 (routing), DHCP,
|
||||
metadata, or other network services.
|
||||
|
||||
.. toctree::
|
||||
:maxdepth: 2
|
||||
|
||||
deploy-lb-provider
|
||||
deploy-lb-selfservice
|
||||
deploy-lb-ha-vrrp
|
506
doc/source/admin/deploy-ovs-ha-dvr.rst
Normal file
@ -0,0 +1,506 @@
|
||||
.. _deploy-ovs-ha-dvr:
|
||||
|
||||
=========================================
|
||||
Open vSwitch: High availability using DVR
|
||||
=========================================
|
||||
|
||||
This architecture example augments the self-service deployment example
|
||||
with the Distributed Virtual Router (DVR) high-availability mechanism that
|
||||
provides connectivity between self-service and provider networks on compute
|
||||
nodes rather than network nodes for specific scenarios. For instances with a
|
||||
floating IPv4 address, routing between self-service and provider networks
|
||||
resides completely on the compute nodes to eliminate single points of
|
||||
failure and performance issues with network nodes. Routing also resides
|
||||
completely on the compute nodes for instances with a fixed or floating IPv4
|
||||
address using self-service networks on the same distributed virtual router.
|
||||
However, instances with a fixed IP address still rely on the network node for
|
||||
routing and SNAT services between self-service and provider networks.
|
||||
|
||||
Consider the following attributes of this high-availability mechanism to
|
||||
determine practicality in your environment:
|
||||
|
||||
* Only provides connectivity to an instance via the compute node on which
|
||||
the instance resides if the instance resides on a self-service network
|
||||
with a floating IPv4 address. Instances on self-service networks with
|
||||
only an IPv6 address or both IPv4 and IPv6 addresses rely on the network
|
||||
node for IPv6 connectivity.
|
||||
|
||||
* The instance of a router on each compute node consumes an IPv4 address
|
||||
on the provider network on which it contains a gateway.
|
||||
|
||||
Prerequisites
|
||||
~~~~~~~~~~~~~
|
||||
|
||||
Modify the compute nodes with the following components:
|
||||
|
||||
* Install the OpenStack Networking layer-3 agent.
|
||||
|
||||
.. note::
|
||||
|
||||
Consider adding at least one additional network node to provide
|
||||
high-availability for instances with a fixed IP address. See
|
||||
:ref:`config-dvr-snat-ha-ovs` for more information.
|
||||
|
||||
Architecture
|
||||
~~~~~~~~~~~~
|
||||
|
||||
.. image:: figures/deploy-ovs-ha-dvr-overview.png
|
||||
:alt: High-availability using Open vSwitch with DVR - overview
|
||||
|
||||
The following figure shows components and connectivity for one self-service
|
||||
network and one untagged (flat) network. In this particular case, the
|
||||
instance resides on the same compute node as the DHCP agent for the network.
|
||||
If the DHCP agent resides on another compute node, the latter only contains
|
||||
a DHCP namespace with a port on the OVS integration bridge.
|
||||
|
||||
.. image:: figures/deploy-ovs-ha-dvr-compconn1.png
|
||||
:alt: High-availability using Open vSwitch with DVR - components and connectivity - one network
|
||||
|
||||
Example configuration
|
||||
~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
Use the following example configuration as a template to add support for
|
||||
high-availability using DVR to an existing operational environment that
|
||||
supports self-service networks.
|
||||
|
||||
Controller node
|
||||
---------------
|
||||
|
||||
#. In the ``neutron.conf`` file:
|
||||
|
||||
* Enable distributed routing by default for all routers.
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[DEFAULT]
|
||||
router_distributed = True
|
||||
|
||||
#. Restart the following services:
|
||||
|
||||
* Server
|
||||
|
||||
Network node
|
||||
------------
|
||||
|
||||
#. In the ``openvswitch_agent.ini`` file, enable distributed routing.
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[DEFAULT]
|
||||
enable_distributed_routing = True
|
||||
|
||||
#. In the ``l3_agent.ini`` file, configure the layer-3 agent to provide
|
||||
SNAT services.
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[DEFAULT]
|
||||
agent_mode = dvr_snat
|
||||
|
||||
.. note::
|
||||
|
||||
The ``external_network_bridge`` option intentionally contains
|
||||
no value.
|
||||
|
||||
#. Restart the following services:
|
||||
|
||||
* Open vSwitch agent
|
||||
* Layer-3 agent
|
||||
|
||||
Compute nodes
|
||||
-------------
|
||||
|
||||
#. Install the Networking service layer-3 agent.
|
||||
|
||||
#. In the ``openvswitch_agent.ini`` file, enable distributed routing.
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[DEFAULT]
|
||||
enable_distributed_routing = True
|
||||
|
||||
#. In the ``l3_agent.ini`` file, configure the layer-3 agent.
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[DEFAULT]
|
||||
interface_driver = openvswitch
|
||||
external_network_bridge =
|
||||
agent_mode = dvr
|
||||
|
||||
.. note::
|
||||
|
||||
The ``external_network_bridge`` option intentionally contains
|
||||
no value.
|
||||
|
||||
#. Restart the following services:
|
||||
|
||||
* Open vSwitch agent
|
||||
* Layer-3 agent
|
||||
|
||||
Verify service operation
|
||||
------------------------
|
||||
|
||||
#. Source the administrative project credentials.
|
||||
#. Verify presence and operation of the agents.
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack network agent list
|
||||
+--------------------------------------+--------------------+----------+-------------------+-------+-------+---------------------------+
|
||||
| ID | Agent Type | Host | Availability Zone | Alive | State | Binary |
|
||||
+--------------------------------------+--------------------+----------+-------------------+-------+-------+---------------------------+
|
||||
| 05d980f2-a4fc-4815-91e7-a7f7e118c0db | L3 agent | compute1 | nova | True | UP | neutron-l3-agent |
|
||||
| 1236bbcb-e0ba-48a9-80fc-81202ca4fa51 | Metadata agent | compute2 | | True | UP | neutron-metadata-agent |
|
||||
| 2a2e9a90-51b8-4163-a7d6-3e199ba2374b | L3 agent | compute2 | nova | True | UP | neutron-l3-agent |
|
||||
| 457d6898-b373-4bb3-b41f-59345dcfb5c5 | Open vSwitch agent | compute2 | | True | UP | neutron-openvswitch-agent |
|
||||
| 513caa68-0391-4e53-a530-082e2c23e819 | Linux bridge agent | compute1 | | True | UP | neutron-linuxbridge-agent |
|
||||
| 71f15e84-bc47-4c2a-b9fb-317840b2d753 | DHCP agent | compute2 | nova | True | UP | neutron-dhcp-agent |
|
||||
| 8805b962-de95-4e40-bdc2-7a0add7521e8 | L3 agent | network1 | nova | True | UP | neutron-l3-agent |
|
||||
| a33cac5a-0266-48f6-9cac-4cef4f8b0358 | Open vSwitch agent | network1 | | True | UP | neutron-openvswitch-agent |
|
||||
| a6c69690-e7f7-4e56-9831-1282753e5007 | Metadata agent | compute1 | | True | UP | neutron-metadata-agent |
|
||||
| af11f22f-a9f4-404f-9fd8-cd7ad55c0f68 | DHCP agent | compute1 | nova | True | UP | neutron-dhcp-agent |
|
||||
| bcfc977b-ec0e-4ba9-be62-9489b4b0e6f1 | Open vSwitch agent | compute1 | | True | UP | neutron-openvswitch-agent |
|
||||
+--------------------------------------+--------------------+----------+-------------------+-------+-------+---------------------------+
|
||||
|
||||
Create initial networks
|
||||
-----------------------
|
||||
|
||||
Similar to the self-service deployment example, this configuration supports
|
||||
multiple VXLAN self-service networks. After enabling high-availability, all
|
||||
additional routers use distributed routing. The following procedure creates
|
||||
an additional self-service network and router. The Networking service also
|
||||
supports adding distributed routing to existing routers.
|
||||
|
||||
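As a sketch of converting an existing router such as ``router1`` to
distributed routing (the router must be administratively down while you
change the mode, and changing the ``distributed`` flag typically requires
administrative credentials):

.. code-block:: console

   $ openstack router set router1 --disable
   $ openstack router set router1 --distributed
   $ openstack router set router1 --enable
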
#. Source regular (non-administrative) project credentials.
|
||||
#. Create a self-service network.
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack network create selfservice2
|
||||
+-------------------------+--------------+
|
||||
| Field | Value |
|
||||
+-------------------------+--------------+
|
||||
| admin_state_up | UP |
|
||||
| mtu | 1450 |
|
||||
| name | selfservice2 |
|
||||
| port_security_enabled | True |
|
||||
| router:external | Internal |
|
||||
| shared | False |
|
||||
| status | ACTIVE |
|
||||
+-------------------------+--------------+
|
||||
|
||||
#. Create an IPv4 subnet on the self-service network.
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack subnet create --subnet-range 192.0.2.0/24 \
|
||||
--network selfservice2 --dns-nameserver 8.8.4.4 selfservice2-v4
|
||||
+-------------------+---------------------------+
|
||||
| Field | Value |
|
||||
+-------------------+---------------------------+
|
||||
| allocation_pools | 192.0.2.2-192.0.2.254 |
|
||||
| cidr | 192.0.2.0/24 |
|
||||
| dns_nameservers | 8.8.4.4 |
|
||||
| enable_dhcp | True |
|
||||
| gateway_ip | 192.0.2.1 |
|
||||
| ip_version | 4 |
|
||||
| name | selfservice2-v4 |
|
||||
+-------------------+---------------------------+
|
||||
|
||||
#. Create an IPv6 subnet on the self-service network.
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack subnet create --subnet-range fd00:192:0:2::/64 --ip-version 6 \
|
||||
--ipv6-ra-mode slaac --ipv6-address-mode slaac --network selfservice2 \
|
||||
--dns-nameserver 2001:4860:4860::8844 selfservice2-v6
|
||||
+-------------------+------------------------------------------------------+
|
||||
| Field | Value |
|
||||
+-------------------+------------------------------------------------------+
|
||||
| allocation_pools | fd00:192:0:2::2-fd00:192:0:2:ffff:ffff:ffff:ffff |
|
||||
| cidr | fd00:192:0:2::/64 |
|
||||
| dns_nameservers | 2001:4860:4860::8844 |
|
||||
| enable_dhcp | True |
|
||||
| gateway_ip | fd00:192:0:2::1 |
|
||||
| ip_version | 6 |
|
||||
| ipv6_address_mode | slaac |
|
||||
| ipv6_ra_mode | slaac |
|
||||
| name | selfservice2-v6 |
|
||||
+-------------------+------------------------------------------------------+
|
||||
|
||||
#. Create a router.
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack router create router2
|
||||
+-----------------------+---------+
|
||||
| Field | Value |
|
||||
+-----------------------+---------+
|
||||
| admin_state_up | UP |
|
||||
| name | router2 |
|
||||
| status | ACTIVE |
|
||||
+-----------------------+---------+
|
||||
|
||||
#. Add the IPv4 and IPv6 subnets as interfaces on the router.
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack router add subnet router2 selfservice2-v4
|
||||
$ openstack router add subnet router2 selfservice2-v6
|
||||
|
||||
.. note::
|
||||
|
||||
These commands provide no output.
|
||||
|
||||
#. Add the provider network as a gateway on the router.
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack router set router2 --external-gateway provider1
|
||||
|
||||
Verify network operation
|
||||
------------------------
|
||||
|
||||
#. Source the administrative project credentials.
|
||||
#. Verify distributed routing on the router.
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack router show router2
|
||||
+-------------------------+---------+
|
||||
| Field | Value |
|
||||
+-------------------------+---------+
|
||||
| admin_state_up | UP |
|
||||
| distributed | True |
|
||||
| ha | False |
|
||||
| name | router2 |
|
||||
| status | ACTIVE |
|
||||
+-------------------------+---------+
|
||||
|
||||
#. On each compute node, verify creation of a ``qrouter`` namespace with
|
||||
the same ID.
|
||||
|
||||
Compute node 1:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
# ip netns
|
||||
qrouter-78d2f628-137c-4f26-a257-25fc20f203c1
|
||||
|
||||
Compute node 2:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
# ip netns
|
||||
qrouter-78d2f628-137c-4f26-a257-25fc20f203c1
|
||||
|
||||
#. On the network node, verify creation of the ``snat`` and ``qrouter``
|
||||
namespaces with the same ID.
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
# ip netns
|
||||
snat-78d2f628-137c-4f26-a257-25fc20f203c1
|
||||
qrouter-78d2f628-137c-4f26-a257-25fc20f203c1
|
||||
|
||||
.. note::
|
||||
|
||||
The namespace for router 1 from :ref:`deploy-ovs-selfservice` should
|
||||
also appear on network node 1 because it was created prior to enabling
|
||||
distributed routing.
|
||||
|
||||
#. Launch an instance with an interface on the additional self-service network.
|
||||
For example, a CirrOS image using flavor ID 1.
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack server create --flavor 1 --image cirros --nic net-id=NETWORK_ID selfservice-instance2
|
||||
|
||||
Replace ``NETWORK_ID`` with the ID of the additional self-service
|
||||
network.
|
||||
|
||||
#. Determine the IPv4 and IPv6 addresses of the instance.
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack server list
|
||||
+--------------------------------------+-----------------------+--------+---------------------------------------------------------------------------+
|
||||
| ID | Name | Status | Networks |
|
||||
+--------------------------------------+-----------------------+--------+---------------------------------------------------------------------------+
|
||||
| bde64b00-77ae-41b9-b19a-cd8e378d9f8b | selfservice-instance2 | ACTIVE | selfservice2=fd00:192:0:2:f816:3eff:fe71:e93e, 192.0.2.4 |
|
||||
+--------------------------------------+-----------------------+--------+---------------------------------------------------------------------------+
|
||||
|
||||
#. Create a floating IPv4 address on the provider network.
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack floating ip create provider1
|
||||
+-------------+--------------------------------------+
|
||||
| Field | Value |
|
||||
+-------------+--------------------------------------+
|
||||
| fixed_ip | None |
|
||||
| id | 0174056a-fa56-4403-b1ea-b5151a31191f |
|
||||
| instance_id | None |
|
||||
| ip | 203.0.113.17 |
|
||||
| pool | provider1 |
|
||||
+-------------+--------------------------------------+
|
||||
|
||||
#. Associate the floating IPv4 address with the instance.
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack server add floating ip selfservice-instance2 203.0.113.17
|
||||
|
||||
.. note::
|
||||
|
||||
This command provides no output.
|
||||
|
||||
#. On the compute node containing the instance, verify creation of the
|
||||
``fip`` namespace with the same ID as the provider network.
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
# ip netns
|
||||
fip-4bfa3075-b4b2-4f7d-b88e-df1113942d43
|
||||
|
||||
Network traffic flow
|
||||
~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. include:: shared/deploy-selfservice-networktrafficflow.txt
|
||||
|
||||
This section only contains flow scenarios that benefit from distributed
|
||||
virtual routing or that differ from conventional operation. For other
|
||||
flow scenarios, see :ref:`deploy-ovs-selfservice-networktrafficflow`.
|
||||
|
||||
North-south scenario 1: Instance with a fixed IP address
|
||||
--------------------------------------------------------
|
||||
|
||||
Similar to :ref:`deploy-ovs-selfservice-networktrafficflow-ns1`, except
|
||||
the router namespace on the network node becomes the SNAT namespace. The
|
||||
network node still contains the router namespace, but it serves no purpose
|
||||
in this case.
|
||||
|
||||
.. image:: figures/deploy-ovs-ha-dvr-flowns1.png
|
||||
:alt: High-availability using Open vSwitch with DVR - network traffic flow - north/south scenario 1
|
||||
|
||||
North-south scenario 2: Instance with a floating IPv4 address
|
||||
-------------------------------------------------------------
|
||||
|
||||
For instances with a floating IPv4 address using a self-service network
|
||||
on a distributed router, the compute node containing the instance performs
|
||||
SNAT on north-south traffic passing from the instance to external networks
|
||||
such as the Internet and DNAT on north-south traffic passing from external
|
||||
networks to the instance. Floating IP addresses and NAT do not apply to
|
||||
IPv6. Thus, the network node routes IPv6 traffic in this scenario.
|
||||
|
||||
* Instance 1 resides on compute node 1 and uses self-service network 1.
|
||||
* A host on the Internet sends a packet to the instance.
|
||||
|
||||
The following steps involve the compute node:
|
||||
|
||||
#. The physical network infrastructure (1) forwards the packet to the
|
||||
provider physical network interface (2).
|
||||
#. The provider physical network interface forwards the packet to the
|
||||
OVS provider bridge provider network port (3).
|
||||
#. The OVS provider bridge swaps actual VLAN tag 101 with the internal
|
||||
VLAN tag.
|
||||
#. The OVS provider bridge ``phy-br-provider`` port (4) forwards the
|
||||
packet to the OVS integration bridge ``int-br-provider`` port (5).
|
||||
#. The OVS integration bridge port for the provider network (6) removes
|
||||
the internal VLAN tag and forwards the packet to the provider network
|
||||
interface (7) in the floating IP namespace. This interface responds
|
||||
to any ARP requests for the instance floating IPv4 address.
|
||||
#. The floating IP namespace routes the packet (8) to the distributed
|
||||
router namespace (9) using a pair of IP addresses on the DVR internal
|
||||
network. This namespace contains the instance floating IPv4 address.
|
||||
#. The router performs DNAT on the packet which changes the destination
|
||||
IP address to the instance IP address on the self-service network via
|
||||
the self-service network interface (10).
|
||||
#. The router forwards the packet to the OVS integration bridge port for
|
||||
the self-service network (11).
|
||||
#. The OVS integration bridge adds an internal VLAN tag to the packet.
|
||||
#. The OVS integration bridge removes the internal VLAN tag from the packet.
|
||||
#. The OVS integration bridge security group port (12) forwards the packet
|
||||
to the security group bridge OVS port (13) via ``veth`` pair.
|
||||
#. Security group rules (14) on the security group bridge handle firewalling
|
||||
and connection tracking for the packet.
|
||||
#. The security group bridge instance port (15) forwards the packet to the
|
||||
instance interface (16) via ``veth`` pair.
|
||||
|
||||
.. image:: figures/deploy-ovs-ha-dvr-flowns2.png
|
||||
:alt: High-availability using Open vSwitch with DVR - network traffic flow - north/south scenario 2
|
||||
|
||||
.. note::
|
||||
|
||||
Egress traffic follows similar steps in reverse, except SNAT changes
|
||||
the source IPv4 address of the packet to the floating IPv4 address.
|
||||
|
||||
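
You can confirm this distribution of functions by listing the network
namespaces on the compute node hosting the instance. The following output is
only an illustrative sketch; the ``fip-`` and ``qrouter-`` namespace names
embed the external network and router UUIDs of your environment, so the
identifiers shown here are placeholders.

.. code-block:: console

   # ip netns
   fip-4bfa3075-b4b2-4f7d-b88e-df1113942d43
   qrouter-78d2f628-137c-4f26-a257-25fc20f203c1

The ``fip`` namespace holds the floating IP address and the ``qrouter``
namespace holds the distributed router interfaces described above.
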
East-west scenario 1: Instances on different networks on the same router
|
||||
------------------------------------------------------------------------
|
||||
|
||||
Instances with a fixed IPv4/IPv6 address or floating IPv4 address on the
|
||||
same compute node communicate via a router on the compute node. Instances
|
||||
on different compute nodes communicate via an instance of the router on
|
||||
each compute node.
|
||||
|
||||
.. note::
|
||||
|
||||
This scenario places the instances on different compute nodes to
|
||||
show the most complex situation.
|
||||
|
||||
The following steps involve compute node 1:
|
||||
|
||||
#. The instance interface (1) forwards the packet to the security group
|
||||
bridge instance port (2) via ``veth`` pair.
|
||||
#. Security group rules (3) on the security group bridge handle firewalling
|
||||
and connection tracking for the packet.
|
||||
#. The security group bridge OVS port (4) forwards the packet to the OVS
|
||||
integration bridge security group port (5) via ``veth`` pair.
|
||||
#. The OVS integration bridge adds an internal VLAN tag to the packet.
|
||||
#. The OVS integration bridge port for self-service network 1 (6) removes the
|
||||
internal VLAN tag and forwards the packet to the self-service network 1
|
||||
interface in the distributed router namespace (7).
|
||||
#. The distributed router namespace routes the packet to self-service network
|
||||
2.
|
||||
#. The self-service network 2 interface in the distributed router namespace
|
||||
(8) forwards the packet to the OVS integration bridge port for
|
||||
self-service network 2 (9).
|
||||
#. The OVS integration bridge adds an internal VLAN tag to the packet.
|
||||
#. The OVS integration bridge exchanges the internal VLAN tag for an
|
||||
internal tunnel ID.
|
||||
#. The OVS integration bridge ``patch-tun`` port (10) forwards the packet
|
||||
to the OVS tunnel bridge ``patch-int`` port (11).
|
||||
#. The OVS tunnel bridge (12) wraps the packet using VNI 102.
|
||||
#. The underlying physical interface (13) for overlay networks forwards
|
||||
the packet to compute node 2 via the overlay network (14).
|
||||
|
||||
The following steps involve compute node 2:
|
||||
|
||||
#. The underlying physical interface (15) for overlay networks forwards
|
||||
the packet to the OVS tunnel bridge (16).
|
||||
#. The OVS tunnel bridge unwraps the packet and adds an internal tunnel ID
|
||||
to it.
|
||||
#. The OVS tunnel bridge exchanges the internal tunnel ID for an internal
|
||||
VLAN tag.
|
||||
#. The OVS tunnel bridge ``patch-int`` patch port (17) forwards the packet
|
||||
to the OVS integration bridge ``patch-tun`` patch port (18).
|
||||
#. The OVS integration bridge removes the internal VLAN tag from the packet.
|
||||
#. The OVS integration bridge security group port (19) forwards the packet
|
||||
to the security group bridge OVS port (20) via ``veth`` pair.
|
||||
#. Security group rules (21) on the security group bridge handle firewalling
|
||||
and connection tracking for the packet.
|
||||
#. The security group bridge instance port (22) forwards the packet to the
|
||||
instance 2 interface (23) via ``veth`` pair.
|
||||
|
||||
.. note::
|
||||
|
||||
Routing between self-service networks occurs on the compute node containing
|
||||
the instance sending the packet. In this scenario, routing occurs on
|
||||
compute node 1 for packets from instance 1 to instance 2 and on compute
|
||||
node 2 for packets from instance 2 to instance 1.
|
||||
|
||||
.. image:: figures/deploy-ovs-ha-dvr-flowew1.png
|
||||
:alt: High-availability using Open vSwitch with DVR - network traffic flow - east/west scenario 1
|
179
doc/source/admin/deploy-ovs-ha-vrrp.rst
Normal file
@ -0,0 +1,179 @@
|
||||
.. _deploy-ovs-ha-vrrp:
|
||||
|
||||
==========================================
|
||||
Open vSwitch: High availability using VRRP
|
||||
==========================================
|
||||
|
||||
.. include:: shared/deploy-ha-vrrp.txt
|
||||
|
||||
Prerequisites
|
||||
~~~~~~~~~~~~~
|
||||
|
||||
Add one network node with the following components:
|
||||
|
||||
* Three network interfaces: management, provider, and overlay.
|
||||
* OpenStack Networking layer-2 agent, layer-3 agent, and any
|
||||
dependencies.
|
||||
|
||||
.. note::
|
||||
|
||||
You can keep the DHCP and metadata agents on each compute node or
|
||||
move them to the network nodes.
|
||||
|
||||
Architecture
|
||||
~~~~~~~~~~~~
|
||||
|
||||
.. image:: figures/deploy-ovs-ha-vrrp-overview.png
|
||||
:alt: High-availability using VRRP with Open vSwitch - overview
|
||||
|
||||
The following figure shows components and connectivity for one self-service
|
||||
network and one untagged (flat) network. The master router resides on network
|
||||
node 1. In this particular case, the instance resides on the same compute
|
||||
node as the DHCP agent for the network. If the DHCP agent resides on another
|
||||
compute node, the latter only contains a DHCP namespace with a port on
the OVS integration bridge.
|
||||
|
||||
.. image:: figures/deploy-ovs-ha-vrrp-compconn1.png
|
||||
:alt: High-availability using VRRP with Open vSwitch - components and connectivity - one network
|
||||
|
||||
Example configuration
|
||||
~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
Use the following example configuration as a template to add support for
|
||||
high-availability using VRRP to an existing operational environment that
|
||||
supports self-service networks.
|
||||
|
||||
Controller node
|
||||
---------------
|
||||
|
||||
#. In the ``neutron.conf`` file:
|
||||
|
||||
* Enable VRRP.
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[DEFAULT]
|
||||
l3_ha = True
|
||||
|
||||
#. Restart the following services:
|
||||
|
||||
* Server
|
||||
|
||||
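
In addition to enabling ``l3_ha``, you can optionally tune how HA routers are
scheduled across L3 agents. The values below are only a sketch of commonly
adjusted options; verify the defaults for your release before changing them.

.. code-block:: ini

   [DEFAULT]
   # Maximum number of L3 agents that host each HA router.
   max_l3_agents_per_router = 3
   # Address range used internally for VRRP (keepalived) traffic.
   l3_ha_net_cidr = 169.254.192.0/18
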
Network node 1
|
||||
--------------
|
||||
|
||||
No changes.
|
||||
|
||||
Network node 2
|
||||
--------------
|
||||
|
||||
#. Install the Networking service OVS layer-2 agent and layer-3 agent.
|
||||
|
||||
#. Install OVS.
|
||||
|
||||
#. In the ``neutron.conf`` file, configure common options:
|
||||
|
||||
.. include:: shared/deploy-config-neutron-common.txt
|
||||
|
||||
#. Start the following services:
|
||||
|
||||
* OVS
|
||||
|
||||
#. Create the OVS provider bridge ``br-provider``:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ ovs-vsctl add-br br-provider
|
||||
|
||||
#. In the ``openvswitch_agent.ini`` file, configure the layer-2 agent.
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[ovs]
|
||||
bridge_mappings = provider:br-provider
|
||||
local_ip = OVERLAY_INTERFACE_IP_ADDRESS
|
||||
|
||||
[agent]
|
||||
tunnel_types = vxlan
|
||||
l2_population = True
|
||||
|
||||
[securitygroup]
|
||||
firewall_driver = iptables_hybrid
|
||||
|
||||
Replace ``OVERLAY_INTERFACE_IP_ADDRESS`` with the IP address of the
|
||||
interface that handles VXLAN overlays for self-service networks.
|
||||
|
||||
#. In the ``l3_agent.ini`` file, configure the layer-3 agent.
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[DEFAULT]
|
||||
interface_driver = openvswitch
|
||||
external_network_bridge =
|
||||
|
||||
.. note::
|
||||
|
||||
The ``external_network_bridge`` option intentionally contains
|
||||
no value.
|
||||
|
||||
#. Start the following services:
|
||||
|
||||
* Open vSwitch agent
|
||||
* Layer-3 agent
|
||||
|
||||
Compute nodes
|
||||
-------------
|
||||
|
||||
No changes.
|
||||
|
||||
Verify service operation
|
||||
------------------------
|
||||
|
||||
#. Source the administrative project credentials.
|
||||
#. Verify presence and operation of the agents.
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack network agent list
|
||||
+--------------------------------------+--------------------+----------+-------------------+-------+-------+---------------------------+
|
||||
| ID | Agent Type | Host | Availability Zone | Alive | State | Binary |
|
||||
+--------------------------------------+--------------------+----------+-------------------+-------+-------+---------------------------+
|
||||
| 1236bbcb-e0ba-48a9-80fc-81202ca4fa51 | Metadata agent | compute2 | | True | UP | neutron-metadata-agent |
|
||||
| 457d6898-b373-4bb3-b41f-59345dcfb5c5 | Open vSwitch agent | compute2 | | True | UP | neutron-openvswitch-agent |
|
||||
| 71f15e84-bc47-4c2a-b9fb-317840b2d753 | DHCP agent | compute2 | nova | True | UP | neutron-dhcp-agent |
|
||||
| 8805b962-de95-4e40-bdc2-7a0add7521e8 | L3 agent | network1 | nova | True | UP | neutron-l3-agent |
|
||||
| a33cac5a-0266-48f6-9cac-4cef4f8b0358 | Open vSwitch agent | network1 | | True | UP | neutron-openvswitch-agent |
|
||||
| a6c69690-e7f7-4e56-9831-1282753e5007 | Metadata agent | compute1 | | True | UP | neutron-metadata-agent |
|
||||
| af11f22f-a9f4-404f-9fd8-cd7ad55c0f68 | DHCP agent | compute1 | nova | True | UP | neutron-dhcp-agent |
|
||||
| bcfc977b-ec0e-4ba9-be62-9489b4b0e6f1 | Open vSwitch agent | compute1 | | True | UP | neutron-openvswitch-agent |
|
||||
| 7f00d759-f2c9-494a-9fbf-fd9118104d03 | Open vSwitch agent | network2 | | True | UP | neutron-openvswitch-agent |
|
||||
| b28d8818-9e32-4888-930b-29addbdd2ef9 | L3 agent | network2 | nova | True | UP | neutron-l3-agent |
|
||||
+--------------------------------------+--------------------+----------+-------------------+-------+-------+---------------------------+
|
||||
|
||||
Create initial networks
|
||||
-----------------------
|
||||
|
||||
.. include:: shared/deploy-ha-vrrp-initialnetworks.txt
|
||||
|
||||
Verify network operation
|
||||
------------------------
|
||||
|
||||
.. include:: shared/deploy-ha-vrrp-verifynetworkoperation.txt
|
||||
|
||||
Verify failover operation
|
||||
-------------------------
|
||||
|
||||
.. include:: shared/deploy-ha-vrrp-verifyfailoveroperation.txt
|
||||
|
||||
Keepalived VRRP health check
|
||||
----------------------------
|
||||
|
||||
.. include:: shared/keepalived-vrrp-healthcheck.txt
|
||||
|
||||
Network traffic flow
|
||||
~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
This high-availability mechanism simply augments :ref:`deploy-ovs-selfservice`
|
||||
with failover of layer-3 services to another router if the master router
|
||||
fails. Thus, you can reference :ref:`Self-service network traffic flow
|
||||
<deploy-ovs-selfservice-networktrafficflow>` for normal operation.
|
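
To determine which network node currently hosts the master instance of a
router, you can list the L3 agents hosting it. The following example is a
sketch using the legacy ``neutron`` client; ``router1`` is a placeholder name
and the output is trimmed to this guide's example hosts.

.. code-block:: console

   $ neutron l3-agent-list-hosting-router router1
   +--------------------------------------+----------+----------------+-------+----------+
   | id                                   | host     | admin_state_up | alive | ha_state |
   +--------------------------------------+----------+----------------+-------+----------+
   | 8805b962-de95-4e40-bdc2-7a0add7521e8 | network1 | True           | :-)   | active   |
   | b28d8818-9e32-4888-930b-29addbdd2ef9 | network2 | True           | :-)   | standby  |
   +--------------------------------------+----------+----------------+-------+----------+
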
428
doc/source/admin/deploy-ovs-provider.rst
Normal file
@ -0,0 +1,428 @@
|
||||
.. _deploy-ovs-provider:
|
||||
|
||||
===============================
|
||||
Open vSwitch: Provider networks
|
||||
===============================
|
||||
|
||||
This architecture example provides layer-2 connectivity between instances
|
||||
and the physical network infrastructure using VLAN (802.1q) tagging. It
|
||||
supports one untagged (flat) network and up to 4095 tagged (VLAN) networks.
|
||||
The actual quantity of VLAN networks depends on the physical network
|
||||
infrastructure. For more information on provider networks, see
|
||||
:ref:`intro-os-networking-provider`.
|
||||
|
||||
.. warning::
|
||||
|
||||
Linux distributions often package older releases of Open vSwitch that can
|
||||
introduce issues during operation with the Networking service. We recommend
|
||||
using at least the latest long-term stable (LTS) release of Open vSwitch
|
||||
for the best experience and support from Open vSwitch. See
|
||||
`<http://www.openvswitch.org>`__ for available releases and the
|
||||
`installation instructions
|
||||
<https://github.com/openvswitch/ovs/blob/master/INSTALL.md>`__ for building
newer releases.
|
||||
|
||||
Prerequisites
|
||||
~~~~~~~~~~~~~
|
||||
|
||||
One controller node with the following components:
|
||||
|
||||
* Two network interfaces: management and provider.
|
||||
* OpenStack Networking server service and ML2 plug-in.
|
||||
|
||||
Two compute nodes with the following components:
|
||||
|
||||
* Two network interfaces: management and provider.
|
||||
* OpenStack Networking Open vSwitch (OVS) layer-2 agent, DHCP agent, metadata
|
||||
agent, and any dependencies including OVS.
|
||||
|
||||
.. note::
|
||||
|
||||
Larger deployments typically deploy the DHCP and metadata agents on a
|
||||
subset of compute nodes to increase performance and redundancy. However,
|
||||
too many agents can overwhelm the message bus. Also, to further simplify
|
||||
any deployment, you can omit the metadata agent and use a configuration
|
||||
drive to provide metadata to instances.
|
||||
|
||||
Architecture
|
||||
~~~~~~~~~~~~
|
||||
|
||||
.. image:: figures/deploy-ovs-provider-overview.png
|
||||
:alt: Provider networks using OVS - overview
|
||||
|
||||
The following figure shows components and connectivity for one untagged
|
||||
(flat) network. In this particular case, the instance resides on the
|
||||
same compute node as the DHCP agent for the network. If the DHCP agent
|
||||
resides on another compute node, the latter only contains a DHCP namespace
|
||||
with a port on the OVS integration bridge.
|
||||
|
||||
.. image:: figures/deploy-ovs-provider-compconn1.png
|
||||
:alt: Provider networks using OVS - components and connectivity - one network
|
||||
|
||||
The following figure describes virtual connectivity among components for
|
||||
two tagged (VLAN) networks. Essentially, all networks use a single OVS
|
||||
integration bridge with different internal VLAN tags. The internal VLAN
|
||||
tags almost always differ from the network VLAN assignment in the Networking
|
||||
service. Similar to the untagged network case, the DHCP agent may reside on
|
||||
a different compute node.
|
||||
|
||||
.. image:: figures/deploy-ovs-provider-compconn2.png
|
||||
:alt: Provider networks using OVS - components and connectivity - multiple networks
|
||||
|
||||
.. note::
|
||||
|
||||
These figures omit the controller node because it does not handle instance
|
||||
network traffic.
|
||||
|
||||
Example configuration
|
||||
~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
Use the following example configuration as a template to deploy provider
|
||||
networks in your environment.
|
||||
|
||||
Controller node
|
||||
---------------
|
||||
|
||||
#. Install the Networking service components that provide the
|
||||
``neutron-server`` service and ML2 plug-in.
|
||||
|
||||
#. In the ``neutron.conf`` file:
|
||||
|
||||
* Configure common options:
|
||||
|
||||
.. include:: shared/deploy-config-neutron-common.txt
|
||||
|
||||
* Disable service plug-ins because provider networks do not require
|
||||
any. However, this breaks portions of the dashboard that manage
|
||||
the Networking service. See the
|
||||
`Ocata Install Tutorials and Guides
|
||||
<https://docs.openstack.org/project-install-guide/ocata>`__ for more
|
||||
information.
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[DEFAULT]
|
||||
service_plugins =
|
||||
|
||||
* Enable two DHCP agents per network so both compute nodes can
|
||||
provide DHCP service for provider networks.
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[DEFAULT]
|
||||
dhcp_agents_per_network = 2
|
||||
|
||||
* If necessary, :ref:`configure MTU <config-mtu>`.
|
||||
|
||||
#. In the ``ml2_conf.ini`` file:
|
||||
|
||||
* Configure drivers and network types:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[ml2]
|
||||
type_drivers = flat,vlan
|
||||
tenant_network_types =
|
||||
mechanism_drivers = openvswitch
|
||||
extension_drivers = port_security
|
||||
|
||||
* Configure network mappings:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[ml2_type_flat]
|
||||
flat_networks = provider
|
||||
|
||||
[ml2_type_vlan]
|
||||
network_vlan_ranges = provider
|
||||
|
||||
.. note::
|
||||
|
||||
The ``tenant_network_types`` option contains no value because the
|
||||
architecture does not support self-service networks.
|
||||
|
||||
.. note::
|
||||
|
||||
The ``provider`` value in the ``network_vlan_ranges`` option lacks VLAN
|
||||
ID ranges to support use of arbitrary VLAN IDs.
|
||||
|
||||
#. Populate the database.
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
|
||||
--config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
|
||||
|
||||
#. Start the following services:
|
||||
|
||||
* Server
|
||||
|
||||
Compute nodes
|
||||
-------------
|
||||
|
||||
#. Install the Networking service OVS layer-2 agent, DHCP agent, and
|
||||
metadata agent.
|
||||
|
||||
#. Install OVS.
|
||||
|
||||
#. In the ``neutron.conf`` file, configure common options:
|
||||
|
||||
.. include:: shared/deploy-config-neutron-common.txt
|
||||
|
||||
#. In the ``openvswitch_agent.ini`` file, configure the OVS agent:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[ovs]
|
||||
bridge_mappings = provider:br-provider
|
||||
|
||||
[securitygroup]
|
||||
firewall_driver = iptables_hybrid
|
||||
|
||||
#. In the ``dhcp_agent.ini`` file, configure the DHCP agent:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[DEFAULT]
|
||||
interface_driver = openvswitch
|
||||
enable_isolated_metadata = True
|
||||
force_metadata = True
|
||||
|
||||
.. note::
|
||||
|
||||
The ``force_metadata`` option forces the DHCP agent to provide
|
||||
a host route to the metadata service on ``169.254.169.254``
|
||||
regardless of whether the subnet contains an interface on a
|
||||
router, thus maintaining similar and predictable metadata behavior
|
||||
among subnets.
|
||||
|
||||
#. In the ``metadata_agent.ini`` file, configure the metadata agent:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[DEFAULT]
|
||||
nova_metadata_ip = controller
|
||||
metadata_proxy_shared_secret = METADATA_SECRET
|
||||
|
||||
The value of ``METADATA_SECRET`` must match the value of the same option
|
||||
in the ``[neutron]`` section of the ``nova.conf`` file.
|
||||
|
||||
#. Start the following services:
|
||||
|
||||
* OVS
|
||||
|
||||
#. Create the OVS provider bridge ``br-provider``:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ ovs-vsctl add-br br-provider
|
||||
|
||||
#. Add the provider network interface as a port on the OVS provider
|
||||
bridge ``br-provider``:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ ovs-vsctl add-port br-provider PROVIDER_INTERFACE
|
||||
|
||||
Replace ``PROVIDER_INTERFACE`` with the name of the underlying interface
|
||||
that handles provider networks. For example, ``eth1``.
|
||||
|
||||
#. Start the following services:
|
||||
|
||||
* OVS agent
|
||||
* DHCP agent
|
||||
* Metadata agent
|
||||
|
||||
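
The ``nova.conf`` counterpart of the ``metadata_proxy_shared_secret`` option
configured above might look like the following. This is only a sketch for
reference; see the Compute service documentation for the authoritative
settings.

.. code-block:: ini

   [neutron]
   # Proxy instance metadata requests through the Networking service.
   service_metadata_proxy = true
   metadata_proxy_shared_secret = METADATA_SECRET
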
Verify service operation
|
||||
------------------------
|
||||
|
||||
#. Source the administrative project credentials.
|
||||
#. Verify presence and operation of the agents:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack network agent list
|
||||
+--------------------------------------+--------------------+----------+-------------------+-------+-------+---------------------------+
|
||||
| ID | Agent Type | Host | Availability Zone | Alive | State | Binary |
|
||||
+--------------------------------------+--------------------+----------+-------------------+-------+-------+---------------------------+
|
||||
| 1236bbcb-e0ba-48a9-80fc-81202ca4fa51 | Metadata agent | compute2 | | True | UP | neutron-metadata-agent |
|
||||
| 457d6898-b373-4bb3-b41f-59345dcfb5c5 | Open vSwitch agent | compute2 | | True | UP | neutron-openvswitch-agent |
|
||||
| 71f15e84-bc47-4c2a-b9fb-317840b2d753 | DHCP agent | compute2 | nova | True | UP | neutron-dhcp-agent |
|
||||
| a6c69690-e7f7-4e56-9831-1282753e5007 | Metadata agent | compute1 | | True | UP | neutron-metadata-agent |
|
||||
| af11f22f-a9f4-404f-9fd8-cd7ad55c0f68 | DHCP agent | compute1 | nova | True | UP | neutron-dhcp-agent |
|
||||
| bcfc977b-ec0e-4ba9-be62-9489b4b0e6f1 | Open vSwitch agent | compute1 | | True | UP | neutron-openvswitch-agent |
|
||||
+--------------------------------------+--------------------+----------+-------------------+-------+-------+---------------------------+
|
||||
|
||||
Create initial networks
|
||||
-----------------------
|
||||
|
||||
.. include:: shared/deploy-provider-initialnetworks.txt
|
||||
|
||||
Verify network operation
|
||||
------------------------
|
||||
|
||||
.. include:: shared/deploy-provider-verifynetworkoperation.txt
|
||||
|
||||
Network traffic flow
|
||||
~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. include:: shared/deploy-provider-networktrafficflow.txt
|
||||
|
||||
North-south
|
||||
-----------
|
||||
|
||||
* The instance resides on compute node 1 and uses provider network 1.
|
||||
* The instance sends a packet to a host on the Internet.
|
||||
|
||||
The following steps involve compute node 1:
|
||||
|
||||
#. The instance interface (1) forwards the packet to the security group
|
||||
bridge instance port (2) via ``veth`` pair.
|
||||
#. Security group rules (3) on the security group bridge handle firewalling
|
||||
and connection tracking for the packet.
|
||||
#. The security group bridge OVS port (4) forwards the packet to the OVS
|
||||
integration bridge security group port (5) via ``veth`` pair.
|
||||
#. The OVS integration bridge adds an internal VLAN tag to the packet.
|
||||
#. The OVS integration bridge ``int-br-provider`` patch port (6) forwards
|
||||
the packet to the OVS provider bridge ``phy-br-provider`` patch port (7).
|
||||
#. The OVS provider bridge swaps the internal VLAN tag with actual VLAN tag
|
||||
101.
|
||||
#. The OVS provider bridge provider network port (8) forwards the packet to
|
||||
the physical network interface (9).
|
||||
#. The physical network interface forwards the packet to the physical
|
||||
network infrastructure switch (10).
|
||||
|
||||
The following steps involve the physical network infrastructure:
|
||||
|
||||
#. The switch removes VLAN tag 101 from the packet and forwards it to the
|
||||
router (11).
|
||||
#. The router routes the packet from the provider network (12) to the
|
||||
external network (13) and forwards the packet to the switch (14).
|
||||
#. The switch forwards the packet to the external network (15).
|
||||
#. The external network (16) receives the packet.
|
||||
|
||||
.. image:: figures/deploy-ovs-provider-flowns1.png
|
||||
:alt: Provider networks using Open vSwitch - network traffic flow - north/south
|
||||
|
||||
.. note::
|
||||
|
||||
Return traffic follows similar steps in reverse.
|
||||
|
||||
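
If you want to observe the VLAN tag swap described in the steps above, you
can dump the flows that the OVS agent installs on the provider bridge. The
output below is a trimmed, illustrative sketch: the ``...`` marks omitted
counters and cookies, and the internal VLAN ID (1) and port names vary by
environment and Open vSwitch version.

.. code-block:: console

   # ovs-ofctl dump-flows br-provider
   ... priority=4,in_port="phy-br-provider",dl_vlan=1 actions=mod_vlan_vid:101,NORMAL
   ... priority=2,in_port="phy-br-provider" actions=drop
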
East-west scenario 1: Instances on the same network
|
||||
---------------------------------------------------
|
||||
|
||||
Instances on the same network communicate directly between compute nodes
|
||||
containing those instances.
|
||||
|
||||
* Instance 1 resides on compute node 1 and uses provider network 1.
|
||||
* Instance 2 resides on compute node 2 and uses provider network 1.
|
||||
* Instance 1 sends a packet to instance 2.
|
||||
|
||||
The following steps involve compute node 1:
|
||||
|
||||
#. The instance 1 interface (1) forwards the packet to the security group
|
||||
bridge instance port (2) via ``veth`` pair.
|
||||
#. Security group rules (3) on the security group bridge handle firewalling
|
||||
and connection tracking for the packet.
|
||||
#. The security group bridge OVS port (4) forwards the packet to the OVS
|
||||
integration bridge security group port (5) via ``veth`` pair.
|
||||
#. The OVS integration bridge adds an internal VLAN tag to the packet.
|
||||
#. The OVS integration bridge ``int-br-provider`` patch port (6) forwards
|
||||
the packet to the OVS provider bridge ``phy-br-provider`` patch port (7).
|
||||
#. The OVS provider bridge swaps the internal VLAN tag with actual VLAN tag
|
||||
101.
|
||||
#. The OVS provider bridge provider network port (8) forwards the packet to
|
||||
the physical network interface (9).
|
||||
#. The physical network interface forwards the packet to the physical
|
||||
network infrastructure switch (10).
|
||||
|
||||
The following steps involve the physical network infrastructure:
|
||||
|
||||
#. The switch forwards the packet from compute node 1 to compute node 2 (11).
|
||||
|
||||
The following steps involve compute node 2:
|
||||
|
||||
#. The physical network interface (12) forwards the packet to the OVS
|
||||
provider bridge provider network port (13).
|
||||
#. The OVS provider bridge ``phy-br-provider`` patch port (14) forwards the
|
||||
packet to the OVS integration bridge ``int-br-provider`` patch port (15).
|
||||
#. The OVS integration bridge swaps the actual VLAN tag 101 with the internal
|
||||
VLAN tag.
|
||||
#. The OVS integration bridge security group port (16) forwards the packet
|
||||
to the security group bridge OVS port (17).
|
||||
#. Security group rules (18) on the security group bridge handle firewalling
|
||||
and connection tracking for the packet.
|
||||
#. The security group bridge instance port (19) forwards the packet to the
|
||||
instance 2 interface (20) via ``veth`` pair.
|
||||
|
||||
.. image:: figures/deploy-ovs-provider-flowew1.png
|
||||
:alt: Provider networks using Open vSwitch - network traffic flow - east/west scenario 1
|
||||
|
||||
.. note::
|
||||
|
||||
Return traffic follows similar steps in reverse.
|
||||
|
||||
East-west scenario 2: Instances on different networks
|
||||
-----------------------------------------------------
|
||||
|
||||
Instances communicate via a router on the physical network infrastructure.
|
||||
|
||||
* Instance 1 resides on compute node 1 and uses provider network 1.
|
||||
* Instance 2 resides on compute node 1 and uses provider network 2.
|
||||
* Instance 1 sends a packet to instance 2.
|
||||
|
||||
.. note::
|
||||
|
||||
Both instances reside on the same compute node to illustrate how VLAN
|
||||
tagging enables multiple logical layer-2 networks to use the same
|
||||
physical layer-2 network.
|
||||
|
||||
The following steps involve the compute node:
|
||||
|
||||
#. The instance 1 interface (1) forwards the packet to the security group
|
||||
bridge instance port (2) via ``veth`` pair.
|
||||
#. Security group rules (3) on the security group bridge handle firewalling
|
||||
and connection tracking for the packet.
|
||||
#. The security group bridge OVS port (4) forwards the packet to the OVS
|
||||
integration bridge security group port (5) via ``veth`` pair.
|
||||
#. The OVS integration bridge adds an internal VLAN tag to the packet.
|
||||
#. The OVS integration bridge ``int-br-provider`` patch port (6) forwards
|
||||
the packet to the OVS provider bridge ``phy-br-provider`` patch port (7).
|
||||
#. The OVS provider bridge swaps the internal VLAN tag with actual VLAN tag
|
||||
101.
|
||||
#. The OVS provider bridge provider network port (8) forwards the packet to
|
||||
the physical network interface (9).
|
||||
#. The physical network interface forwards the packet to the physical
|
||||
network infrastructure switch (10).
|
||||
|
||||
The following steps involve the physical network infrastructure:
|
||||
|
||||
#. The switch removes VLAN tag 101 from the packet and forwards it to the
|
||||
router (11).
|
||||
#. The router routes the packet from provider network 1 (12) to provider
|
||||
network 2 (13).
|
||||
#. The router forwards the packet to the switch (14).
|
||||
#. The switch adds VLAN tag 102 to the packet and forwards it to compute
|
||||
node 1 (15).
|
||||
|
||||
The following steps involve the compute node:
|
||||
|
||||
#. The physical network interface (16) forwards the packet to the OVS
|
||||
provider bridge provider network port (17).
|
||||
#. The OVS provider bridge ``phy-br-provider`` patch port (18) forwards the
|
||||
packet to the OVS integration bridge ``int-br-provider`` patch port (19).
|
||||
#. The OVS integration bridge swaps the actual VLAN tag 102 with the internal
|
||||
VLAN tag.
|
||||
#. The OVS integration bridge security group port (20) removes the internal
|
||||
VLAN tag and forwards the packet to the security group bridge OVS port
|
||||
(21).
|
||||
#. Security group rules (22) on the security group bridge handle firewalling
|
||||
and connection tracking for the packet.
|
||||
#. The security group bridge instance port (23) forwards the packet to the
|
||||
instance 2 interface (24) via ``veth`` pair.
|
||||
|
||||
.. image:: figures/deploy-ovs-provider-flowew2.png
|
||||
:alt: Provider networks using Open vSwitch - network traffic flow - east/west scenario 2
|
||||
|
||||
.. note::
|
||||
|
||||
Return traffic follows similar steps in reverse.
|
508
doc/source/admin/deploy-ovs-selfservice.rst
Normal file
@ -0,0 +1,508 @@
|
||||
.. _deploy-ovs-selfservice:
|
||||
|
||||
===================================
|
||||
Open vSwitch: Self-service networks
|
||||
===================================
|
||||
|
||||
This architecture example augments :ref:`deploy-ovs-provider` to support
|
||||
a nearly limitless quantity of entirely virtual networks. Although the
|
||||
Networking service supports VLAN self-service networks, this example
|
||||
focuses on VXLAN self-service networks. For more information on
|
||||
self-service networks, see :ref:`intro-os-networking-selfservice`.
|
||||
|
||||
Prerequisites
|
||||
~~~~~~~~~~~~~
|
||||
|
||||
Add one network node with the following components:
|
||||
|
||||
* Three network interfaces: management, provider, and overlay.
|
||||
* OpenStack Networking Open vSwitch (OVS) layer-2 agent, layer-3 agent, and
|
||||
any dependencies including OVS.
|
||||
|
||||
Modify the compute nodes with the following components:
|
||||
|
||||
* Add one network interface: overlay.
|
||||
|
||||
.. note::
|
||||
|
||||
You can keep the DHCP and metadata agents on each compute node or
|
||||
move them to the network node.
|
||||
|
||||
Architecture
|
||||
~~~~~~~~~~~~
|
||||
|
||||
.. image:: figures/deploy-ovs-selfservice-overview.png
|
||||
:alt: Self-service networks using OVS - overview
|
||||
|
||||
The following figure shows components and connectivity for one self-service
|
||||
network and one untagged (flat) provider network. In this particular case, the
|
||||
instance resides on the same compute node as the DHCP agent for the network.
|
||||
If the DHCP agent resides on another compute node, the latter only contains
|
||||
a DHCP namespace with a port on the OVS integration bridge.
|
||||
|
||||
.. image:: figures/deploy-ovs-selfservice-compconn1.png
|
||||
:alt: Self-service networks using OVS - components and connectivity - one network
|
||||
|
||||
Example configuration
|
||||
~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
Use the following example configuration as a template to add support for
|
||||
self-service networks to an existing operational environment that supports
|
||||
provider networks.
|
||||
|
||||
Controller node
|
||||
---------------
|
||||
|
||||
#. In the ``neutron.conf`` file:
|
||||
|
||||
* Enable routing and allow overlapping IP address ranges.
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[DEFAULT]
|
||||
service_plugins = router
|
||||
allow_overlapping_ips = True
|
||||
|
||||
#. In the ``ml2_conf.ini`` file:
|
||||
|
||||
* Add ``vxlan`` to type drivers and project network types.
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[ml2]
|
||||
type_drivers = flat,vlan,vxlan
|
||||
tenant_network_types = vxlan
|
||||
|
||||
* Enable the layer-2 population mechanism driver.
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[ml2]
|
||||
mechanism_drivers = openvswitch,l2population
|
||||
|
||||
* Configure the VXLAN network ID (VNI) range.
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[ml2_type_vxlan]
|
||||
vni_ranges = VNI_START:VNI_END
|
||||
|
||||
Replace ``VNI_START`` and ``VNI_END`` with appropriate numerical
|
||||
values.
|
||||
|
||||
#. Restart the following services:
|
||||
|
||||
* Neutron Server
|
||||
* Open vSwitch agent
|
||||
|
||||
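
As a concrete illustration of the VNI range option above, a small deployment
might reserve a single contiguous block. The values below are an example
only; any range within the VXLAN specification works.

.. code-block:: ini

   [ml2_type_vxlan]
   vni_ranges = 1:1000
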
Network node
|
||||
------------
|
||||
|
||||
#. Install the Networking service OVS layer-2 agent and layer-3 agent.
|
||||
|
||||
#. Install OVS.
|
||||
|
||||
#. In the ``neutron.conf`` file, configure common options:
|
||||
|
||||
.. include:: shared/deploy-config-neutron-common.txt
|
||||
|
||||
#. Start the following services:
|
||||
|
||||
* OVS
|
||||
|
||||
#. Create the OVS provider bridge ``br-provider``:
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ ovs-vsctl add-br br-provider
|
||||
|
||||
#. In the ``openvswitch_agent.ini`` file, configure the layer-2 agent.
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[ovs]
|
||||
bridge_mappings = provider:br-provider
|
||||
local_ip = OVERLAY_INTERFACE_IP_ADDRESS
|
||||
|
||||
[agent]
|
||||
tunnel_types = vxlan
|
||||
l2_population = True
|
||||
|
||||
[securitygroup]
|
||||
firewall_driver = iptables_hybrid
|
||||
|
||||
Replace ``OVERLAY_INTERFACE_IP_ADDRESS`` with the IP address of the
|
||||
interface that handles VXLAN overlays for self-service networks.
|
||||
|
||||
#. In the ``l3_agent.ini`` file, configure the layer-3 agent.
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[DEFAULT]
|
||||
interface_driver = openvswitch
|
||||
external_network_bridge =
|
||||
|
||||
.. note::
|
||||
|
||||
The ``external_network_bridge`` option intentionally contains
|
||||
no value.
|
||||
|
||||
#. Start the following services:
|
||||
|
||||
* Open vSwitch agent
|
||||
* Layer-3 agent
|
||||
|
||||
Compute nodes
|
||||
-------------
|
||||
|
||||
#. In the ``openvswitch_agent.ini`` file, enable VXLAN support including
|
||||
layer-2 population.
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[ovs]
|
||||
local_ip = OVERLAY_INTERFACE_IP_ADDRESS
|
||||
|
||||
[agent]
|
||||
tunnel_types = vxlan
|
||||
l2_population = True
|
||||
|
||||
Replace ``OVERLAY_INTERFACE_IP_ADDRESS`` with the IP address of the
|
||||
interface that handles VXLAN overlays for self-service networks.
|
||||
|
||||
#. Restart the following services:
|
||||
|
||||
* Open vSwitch agent
|
||||
|
||||
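
After the agents on two or more nodes restart with these settings, the OVS
agent creates VXLAN tunnel ports on the tunnel bridge automatically. As an
illustrative check, inspect ``br-tun``; the port name encodes the remote
endpoint, and all addresses below are placeholders for your overlay network.

.. code-block:: console

   # ovs-vsctl show
   ...
       Bridge br-tun
           Port "vxlan-0a000114"
               Interface "vxlan-0a000114"
                   type: vxlan
                   options: {df_default="true", in_key=flow, local_ip="10.0.1.31", out_key=flow, remote_ip="10.0.1.20"}
   ...
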
Verify service operation
|
||||
------------------------
|
||||
|
||||
#. Source the administrative project credentials.
|
||||
#. Verify presence and operation of the agents.
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
$ openstack network agent list
|
||||
+--------------------------------------+--------------------+----------+-------------------+-------+-------+---------------------------+
|
||||
| ID | Agent Type | Host | Availability Zone | Alive | State | Binary |
|
||||
+--------------------------------------+--------------------+----------+-------------------+-------+-------+---------------------------+
|
||||
| 1236bbcb-e0ba-48a9-80fc-81202ca4fa51 | Metadata agent | compute2 | | True | UP | neutron-metadata-agent |
|
||||
| 457d6898-b373-4bb3-b41f-59345dcfb5c5 | Open vSwitch agent | compute2 | | True | UP | neutron-openvswitch-agent |
|
||||
| 71f15e84-bc47-4c2a-b9fb-317840b2d753 | DHCP agent | compute2 | nova | True | UP | neutron-dhcp-agent |
|
||||
| 8805b962-de95-4e40-bdc2-7a0add7521e8 | L3 agent | network1 | nova | True | UP | neutron-l3-agent |
|
||||
| a33cac5a-0266-48f6-9cac-4cef4f8b0358 | Open vSwitch agent | network1 | | True | UP | neutron-openvswitch-agent |
|
||||
| a6c69690-e7f7-4e56-9831-1282753e5007 | Metadata agent | compute1 | | True | UP | neutron-metadata-agent |
|
||||
| af11f22f-a9f4-404f-9fd8-cd7ad55c0f68 | DHCP agent | compute1 | nova | True | UP | neutron-dhcp-agent |
|
||||
| bcfc977b-ec0e-4ba9-be62-9489b4b0e6f1 | Open vSwitch agent | compute1 | | True | UP | neutron-openvswitch-agent |
|
||||
+--------------------------------------+--------------------+----------+-------------------+-------+-------+---------------------------+
|
||||
|
||||
Create initial networks
|
||||
-----------------------
|
||||
|
||||
.. include:: shared/deploy-selfservice-initialnetworks.txt
|
||||
|
||||
Verify network operation
|
||||
------------------------
|
||||
|
||||
.. include:: shared/deploy-selfservice-verifynetworkoperation.txt
|
||||
|
||||
.. _deploy-ovs-selfservice-networktrafficflow:
|
||||
|
||||
Network traffic flow
|
||||
~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. include:: shared/deploy-selfservice-networktrafficflow.txt
|
||||
|
||||
.. _deploy-ovs-selfservice-networktrafficflow-ns1:
|
||||
|
||||
North-south scenario 1: Instance with a fixed IP address
|
||||
--------------------------------------------------------
|
||||
|
||||
For instances with a fixed IPv4 address, the network node performs SNAT
|
||||
on north-south traffic passing from self-service to external networks
|
||||
such as the Internet. For instances with a fixed IPv6 address, the network
|
||||
node performs conventional routing of traffic between self-service and
|
||||
external networks.
|
||||
|
||||
* The instance resides on compute node 1 and uses self-service network 1.
|
||||
* The instance sends a packet to a host on the Internet.
|
||||
|
||||
The following steps involve compute node 1:
|
||||
|
||||
#. The instance interface (1) forwards the packet to the security group
|
||||
bridge instance port (2) via ``veth`` pair.
|
||||
#. Security group rules (3) on the security group bridge handle firewalling
|
||||
and connection tracking for the packet.
|
||||
#. The security group bridge OVS port (4) forwards the packet to the OVS
|
||||
integration bridge security group port (5) via ``veth`` pair.
|
||||
#. The OVS integration bridge adds an internal VLAN tag to the packet.
|
||||
#. The OVS integration bridge exchanges the internal VLAN tag for an internal
|
||||
tunnel ID.
|
||||
#. The OVS integration bridge patch port (6) forwards the packet to the
|
||||
OVS tunnel bridge patch port (7).
|
||||
#. The OVS tunnel bridge (8) wraps the packet using VNI 101.
|
||||
#. The underlying physical interface (9) for overlay networks forwards
|
||||
the packet to the network node via the overlay network (10).
|
||||
|
||||
The following steps involve the network node:
|
||||
|
||||
#. The underlying physical interface (11) for overlay networks forwards
|
||||
the packet to the OVS tunnel bridge (12).
|
||||
#. The OVS tunnel bridge unwraps the packet and adds an internal tunnel ID
|
||||
to it.
|
||||
#. The OVS tunnel bridge exchanges the internal tunnel ID for an internal
|
||||
VLAN tag.
|
||||
#. The OVS tunnel bridge patch port (13) forwards the packet to the OVS
|
||||
integration bridge patch port (14).
|
||||
#. The OVS integration bridge port for the self-service network (15)
|
||||
removes the internal VLAN tag and forwards the packet to the self-service
|
||||
network interface (16) in the router namespace.
|
||||
|
||||
* For IPv4, the router performs SNAT on the packet which changes the
|
||||
source IP address to the router IP address on the provider network
|
||||
and sends it to the gateway IP address on the provider network via
|
||||
the gateway interface on the provider network (17).
|
||||
* For IPv6, the router sends the packet to the next-hop IP address,
|
||||
typically the gateway IP address on the provider network, via the
|
||||
provider gateway interface (17).
|
||||
|
||||
#. The router forwards the packet to the OVS integration bridge port for
|
||||
the provider network (18).
|
||||
#. The OVS integration bridge adds the internal VLAN tag to the packet.
|
||||
#. The OVS integration bridge ``int-br-provider`` patch port (19) forwards
|
||||
the packet to the OVS provider bridge ``phy-br-provider`` patch port (20).
|
||||
#. The OVS provider bridge swaps the internal VLAN tag with actual VLAN tag
|
||||
101.
|
||||
#. The OVS provider bridge provider network port (21) forwards the packet to
|
||||
the physical network interface (22).
|
||||
#. The physical network interface forwards the packet to the Internet via
|
||||
physical network infrastructure (23).
|
||||
|
||||
.. note::
|
||||
|
||||
Return traffic follows similar steps in reverse. However, without a
|
||||
floating IPv4 address, hosts on the provider or external networks cannot
|
||||
originate connections to instances on the self-service network.
|
||||
|
||||
.. image:: figures/deploy-ovs-selfservice-flowns1.png
|
||||
:alt: Self-service networks using Open vSwitch - network traffic flow - north/south scenario 1
|
||||
|
||||
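
You can see the SNAT translation described above from the router namespace on
the network node. The following is a sketch only; the namespace UUID,
interface name, and address are placeholders for your router and provider
network.

.. code-block:: console

   # ip netns exec qrouter-78d2f628-137c-4f26-a257-25fc20f203c1 \
     iptables -t nat -S neutron-l3-agent-snat
   -N neutron-l3-agent-snat
   -A neutron-l3-agent-snat -o qg-d3fffb3b-d4 -j SNAT --to-source 203.0.113.11
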
North-south scenario 2: Instance with a floating IPv4 address
|
||||
-------------------------------------------------------------
|
||||
|
||||
For instances with a floating IPv4 address, the network node performs SNAT
|
||||
on north-south traffic passing from the instance to external networks
|
||||
such as the Internet and DNAT on north-south traffic passing from external
|
||||
networks to the instance. Floating IP addresses and NAT do not apply to IPv6.
|
||||
Thus, the network node routes IPv6 traffic in this scenario.
|
||||
|
||||
* The instance resides on compute node 1 and uses self-service network 1.
|
||||
* A host on the Internet sends a packet to the instance.
|
||||
|
||||
The following steps involve the network node:
|
||||
|
||||
#. The physical network infrastructure (1) forwards the packet to the
|
||||
provider physical network interface (2).
|
||||
#. The provider physical network interface forwards the packet to the
|
||||
OVS provider bridge provider network port (3).
|
||||
#. The OVS provider bridge swaps actual VLAN tag 101 with the internal
|
||||
VLAN tag.
|
||||
#. The OVS provider bridge ``phy-br-provider`` port (4) forwards the
|
||||
packet to the OVS integration bridge ``int-br-provider`` port (5).
|
||||
#. The OVS integration bridge port for the provider network (6) removes
|
||||
the internal VLAN tag and forwards the packet to the provider network
|
||||
interface (6) in the router namespace.
|
||||
|
||||
* For IPv4, the router performs DNAT on the packet which changes the
|
||||
destination IP address to the instance IP address on the self-service
|
||||
network and sends it to the gateway IP address on the self-service
|
||||
network via the self-service interface (7).
|
||||
* For IPv6, the router sends the packet to the next-hop IP address,
|
||||
typically the gateway IP address on the self-service network, via
|
||||
the self-service interface (8).
|
||||
|
||||
#. The router forwards the packet to the OVS integration bridge port for
|
||||
the self-service network (9).
|
||||
#. The OVS integration bridge adds an internal VLAN tag to the packet.
|
||||
#. The OVS integration bridge exchanges the internal VLAN tag for an internal
|
||||
tunnel ID.
|
||||
#. The OVS integration bridge ``patch-tun`` patch port (10) forwards the
|
||||
packet to the OVS tunnel bridge ``patch-int`` patch port (11).
|
||||
#. The OVS tunnel bridge (12) wraps the packet using VNI 101.
|
||||
#. The underlying physical interface (13) for overlay networks forwards
|
||||
the packet to the compute node via the overlay network (14).
|
||||
|
||||
The following steps involve the compute node:
|
||||
|
||||
#. The underlying physical interface (15) for overlay networks forwards
|
||||
the packet to the OVS tunnel bridge (16).
|
||||
#. The OVS tunnel bridge unwraps the packet and adds an internal tunnel ID
|
||||
to it.
|
||||
#. The OVS tunnel bridge exchanges the internal tunnel ID for an internal
|
||||
VLAN tag.
|
||||
#. The OVS tunnel bridge ``patch-int`` patch port (17) forwards the packet
|
||||
to the OVS integration bridge ``patch-tun`` patch port (18).
|
||||
#. The OVS integration bridge removes the internal VLAN tag from the packet.
|
||||
#. The OVS integration bridge security group port (19) forwards the packet
|
||||
to the security group bridge OVS port (20) via ``veth`` pair.
|
||||
#. Security group rules (21) on the security group bridge handle firewalling
|
||||
and connection tracking for the packet.
|
||||
#. The security group bridge instance port (22) forwards the packet to the
|
||||
instance interface (23) via ``veth`` pair.
|
||||
|
||||
.. image:: figures/deploy-ovs-selfservice-flowns2.png
|
||||
:alt: Self-service networks using Open vSwitch - network traffic flow - north/south scenario 2
|
||||
|
||||
.. note::
|
||||
|
||||
Egress instance traffic flows similar to north-south scenario 1, except SNAT
|
||||
changes the source IP address of the packet to the floating IPv4 address
|
||||
rather than the router IP address on the provider network.
|
||||
|
||||
East-west scenario 1: Instances on the same network
|
||||
---------------------------------------------------
|
||||
|
||||
Instances with a fixed IPv4/IPv6 address or floating IPv4 address on the
|
||||
same network communicate directly between compute nodes containing those
|
||||
instances.
|
||||
|
||||
By default, the VXLAN protocol lacks knowledge of target location
|
||||
and uses multicast to discover it. After discovery, it stores the
|
||||
location in the local forwarding database. In large deployments,
|
||||
the discovery process can generate a significant amount of network traffic
|
||||
that all nodes must process. To eliminate the latter and generally
|
||||
increase efficiency, the Networking service includes the layer-2
|
||||
population mechanism driver that automatically populates the
|
||||
forwarding database for VXLAN interfaces. The example configuration
|
||||
enables this driver. For more information, see :ref:`config-plugin-ml2`.
|
||||
|
||||
* Instance 1 resides on compute node 1 and uses self-service network 1.
|
||||
* Instance 2 resides on compute node 2 and uses self-service network 1.
|
||||
* Instance 1 sends a packet to instance 2.
|
||||
|
||||
The following steps involve compute node 1:
|
||||
|
||||
#. The instance 1 interface (1) forwards the packet to the security group
|
||||
bridge instance port (2) via ``veth`` pair.
|
||||
#. Security group rules (3) on the security group bridge handle firewalling
|
||||
and connection tracking for the packet.
|
||||
#. The security group bridge OVS port (4) forwards the packet to the OVS
|
||||
integration bridge security group port (5) via ``veth`` pair.
|
||||
#. The OVS integration bridge adds an internal VLAN tag to the packet.
|
||||
#. The OVS integration bridge exchanges the internal VLAN tag for an internal
|
||||
tunnel ID.
|
||||
#. The OVS integration bridge patch port (6) forwards the packet to the
|
||||
OVS tunnel bridge patch port (7).
|
||||
#. The OVS tunnel bridge (8) wraps the packet using VNI 101.
|
||||
#. The underlying physical interface (9) for overlay networks forwards
|
||||
the packet to compute node 2 via the overlay network (10).
|
||||
|
||||
The following steps involve compute node 2:
|
||||
|
||||
#. The underlying physical interface (11) for overlay networks forwards
|
||||
the packet to the OVS tunnel bridge (12).
|
||||
#. The OVS tunnel bridge unwraps the packet and adds an internal tunnel ID
|
||||
to it.
|
||||
#. The OVS tunnel bridge exchanges the internal tunnel ID for an internal
|
||||
VLAN tag.
|
||||
#. The OVS tunnel bridge ``patch-int`` patch port (13) forwards the packet
|
||||
to the OVS integration bridge ``patch-tun`` patch port (14).
|
||||
#. The OVS integration bridge removes the internal VLAN tag from the packet.
|
||||
#. The OVS integration bridge security group port (15) forwards the packet
|
||||
to the security group bridge OVS port (16) via ``veth`` pair.
|
||||
#. Security group rules (17) on the security group bridge handle firewalling
|
||||
and connection tracking for the packet.
|
||||
#. The security group bridge instance port (18) forwards the packet to the
|
||||
instance 2 interface (19) via ``veth`` pair.
|
||||
|
||||
.. image:: figures/deploy-ovs-selfservice-flowew1.png
|
||||
:alt: Self-service networks using Open vSwitch - network traffic flow - east/west scenario 1
|
||||
|
||||
.. note::
|
||||
|
||||
Return traffic follows similar steps in reverse.
|
||||
|
||||
East-west scenario 2: Instances on different networks
|
||||
-----------------------------------------------------
|
||||
|
||||
Instances using a fixed IPv4/IPv6 address or floating IPv4 address communicate
|
||||
via a router on the network node. The self-service networks must reside on the
|
||||
same router.
|
||||
|
||||
* Instance 1 resides on compute node 1 and uses self-service network 1.
|
||||
* Instance 2 resides on compute node 1 and uses self-service network 2.
|
||||
* Instance 1 sends a packet to instance 2.
|
||||
|
||||
.. note::
|
||||
|
||||
Both instances reside on the same compute node to illustrate how VXLAN
|
||||
enables multiple overlays to use the same layer-3 network.
|
||||
|
||||
The following steps involve the compute node:
|
||||
|
||||
#. The instance interface (1) forwards the packet to the security group
|
||||
bridge instance port (2) via ``veth`` pair.
|
||||
#. Security group rules (3) on the security group bridge handle firewalling
|
||||
and connection tracking for the packet.
|
||||
#. The security group bridge OVS port (4) forwards the packet to the OVS
|
||||
integration bridge security group port (5) via ``veth`` pair.
|
||||
#. The OVS integration bridge adds an internal VLAN tag to the packet.
|
||||
#. The OVS integration bridge exchanges the internal VLAN tag for an internal
|
||||
tunnel ID.
|
||||
#. The OVS integration bridge ``patch-tun`` patch port (6) forwards the
|
||||
packet to the OVS tunnel bridge ``patch-int`` patch port (7).
|
||||
#. The OVS tunnel bridge (8) wraps the packet using VNI 101.
|
||||
#. The underlying physical interface (9) for overlay networks forwards
|
||||
the packet to the network node via the overlay network (10).
|
||||
|
||||
The following steps involve the network node:
|
||||
|
||||
#. The underlying physical interface (11) for overlay networks forwards
|
||||
the packet to the OVS tunnel bridge (12).
|
||||
#. The OVS tunnel bridge unwraps the packet and adds an internal tunnel ID
|
||||
to it.
|
||||
#. The OVS tunnel bridge exchanges the internal tunnel ID for an internal
|
||||
VLAN tag.
|
||||
#. The OVS tunnel bridge ``patch-int`` patch port (13) forwards the packet to
|
||||
the OVS integration bridge ``patch-tun`` patch port (14).
|
||||
#. The OVS integration bridge port for self-service network 1 (15)
|
||||
removes the internal VLAN tag and forwards the packet to the self-service
|
||||
network 1 interface (16) in the router namespace.
|
||||
#. The router sends the packet to the next-hop IP address, typically the
|
||||
gateway IP address on self-service network 2, via the self-service
|
||||
network 2 interface (17).
|
||||
#. The router forwards the packet to the OVS integration bridge port for
|
||||
self-service network 2 (18).
|
||||
#. The OVS integration bridge adds the internal VLAN tag to the packet.
|
||||
#. The OVS integration bridge exchanges the internal VLAN tag for an internal
|
||||
tunnel ID.
|
||||
#. The OVS integration bridge ``patch-tun`` patch port (19) forwards the
|
||||
packet to the OVS tunnel bridge ``patch-int`` patch port (20).
|
||||
#. The OVS tunnel bridge (21) wraps the packet using VNI 102.
|
||||
#. The underlying physical interface (22) for overlay networks forwards
|
||||
the packet to the compute node via the overlay network (23).
|
||||
|
||||
The following steps involve the compute node:
|
||||
|
||||
#. The underlying physical interface (24) for overlay networks forwards
|
||||
the packet to the OVS tunnel bridge (25).
|
||||
#. The OVS tunnel bridge unwraps the packet and adds an internal tunnel
|
||||
ID to it.
|
||||
#. The OVS tunnel bridge exchanges the internal tunnel ID for an internal
|
||||
VLAN tag.
|
||||
#. The OVS tunnel bridge ``patch-int`` patch port (26) forwards the packet
|
||||
to the OVS integration bridge ``patch-tun`` patch port (27).
|
||||
#. The OVS integration bridge removes the internal VLAN tag from the packet.
|
||||
#. The OVS integration bridge security group port (28) forwards the packet
|
||||
to the security group bridge OVS port (29) via ``veth`` pair.
|
||||
#. Security group rules (30) on the security group bridge handle firewalling
|
||||
and connection tracking for the packet.
|
||||
#. The security group bridge instance port (31) forwards the packet to the
|
||||
instance interface (32) via ``veth`` pair.
|
||||
|
||||
.. note::
|
||||
|
||||
Return traffic follows similar steps in reverse.
|
||||
|
||||
.. image:: figures/deploy-ovs-selfservice-flowew2.png
|
||||
:alt: Self-service networks using Open vSwitch - network traffic flow - east/west scenario 2
|
21
doc/source/admin/deploy-ovs.rst
Normal file
@ -0,0 +1,21 @@
|
||||
.. _deploy-ovs:
|
||||
|
||||
=============================
|
||||
Open vSwitch mechanism driver
|
||||
=============================
|
||||
|
||||
The Open vSwitch (OVS) mechanism driver uses a combination of OVS and Linux
|
||||
bridges as interconnection devices. However, optionally enabling the OVS
|
||||
native implementation of security groups removes the dependency on Linux
|
||||
bridges.
|
||||
|
||||
We recommend using Open vSwitch version 2.4 or higher. Optional features
|
||||
may require a higher minimum version.
|
||||
|
||||
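
The deployment examples in this chapter use the hybrid iptables firewall
driver, which relies on the Linux bridges mentioned above. If you prefer the
OVS native implementation of security groups, the change is a single agent
option; this is a sketch only, and the native driver generally requires a
newer Open vSwitch with conntrack support.

.. code-block:: ini

   [securitygroup]
   firewall_driver = openvswitch
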
.. toctree::
|
||||
:maxdepth: 2
|
||||
|
||||
deploy-ovs-provider
|
||||
deploy-ovs-selfservice
|
||||
deploy-ovs-ha-vrrp
|
||||
deploy-ovs-ha-dvr
|
140
doc/source/admin/deploy.rst
Normal file
@ -0,0 +1,140 @@
|
||||
.. _deploy:
|
||||
|
||||
===================
|
||||
Deployment examples
|
||||
===================
|
||||
|
||||
The following deployment examples provide building blocks of increasing
|
||||
architectural complexity using the Networking service reference architecture
|
||||
which implements the Modular Layer 2 (ML2) plug-in and either the Open
|
||||
vSwitch (OVS) or Linux bridge mechanism drivers. Both mechanism drivers support
|
||||
the same basic features such as provider networks, self-service networks,
|
||||
and routers. However, more complex features often require a particular
|
||||
mechanism driver. Thus, you should consider the requirements (or goals) of
|
||||
your cloud before choosing a mechanism driver.
|
||||
|
||||
After choosing a :ref:`mechanism driver <deploy-mechanism-drivers>`, the
|
||||
deployment examples generally include the following building blocks:
|
||||
|
||||
#. Provider (public/external) networks using IPv4 and IPv6
|
||||
|
||||
#. Self-service (project/private/internal) networks including routers using
|
||||
IPv4 and IPv6
|
||||
|
||||
#. High-availability features
|
||||
|
||||
#. Other features such as BGP dynamic routing
|
||||
|
||||
Prerequisites
|
||||
~~~~~~~~~~~~~
|
||||
|
||||
Prerequisites, typically hardware requirements, generally increase with each
|
||||
building block. Each building block depends on proper deployment and operation
|
||||
of prior building blocks. For example, the first building block (provider
|
||||
networks) only requires one controller and two compute nodes, the second
|
||||
building block (self-service networks) adds a network node, and the
|
||||
high-availability building blocks typically add a second network node for a
|
||||
total of five nodes. Each building block could also require additional
|
||||
infrastructure or changes to existing infrastructure such as networks.
|
||||
|
||||
For basic configuration of prerequisites, see the
|
||||
`Ocata Install Tutorials and Guides <https://docs.openstack.org/project-install-guide/ocata>`__.
|
||||
|
||||
.. note::
|
||||
|
||||
Example commands using the ``openstack`` client assume version 3.2.0 or
|
||||
higher.
|
||||
|
||||
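
To check which client version you have installed, the client reports it
directly; the output shown is illustrative.

.. code-block:: console

   $ openstack --version
   openstack 3.2.0
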
Nodes
|
||||
-----
|
||||
|
||||
The deployment examples refer to one or more of the following nodes:
|
||||
|
||||
* Controller: Contains control plane components of OpenStack services
|
||||
and their dependencies.
|
||||
|
||||
* Two network interfaces: management and provider.
|
||||
* Operational SQL server with databases necessary for each OpenStack
|
||||
service.
|
||||
* Operational message queue service.
|
||||
* Operational OpenStack Identity (keystone) service.
|
||||
* Operational OpenStack Image Service (glance).
|
||||
* Operational management components of the OpenStack Compute (nova) service
|
||||
with appropriate configuration to use the Networking service.
|
||||
* OpenStack Networking (neutron) server service and ML2 plug-in.
|
||||
|
||||
* Network: Contains the OpenStack Networking service layer-3 (routing)
|
||||
component. High availability options may include additional components.
|
||||
|
||||
* Three network interfaces: management, overlay, and provider.
|
||||
* OpenStack Networking layer-2 (switching) agent, layer-3 agent, and any
|
||||
dependencies.
|
||||
|
||||
* Compute: Contains the hypervisor component of the OpenStack Compute service
|
||||
and the OpenStack Networking layer-2, DHCP, and metadata components.
|
||||
High-availability options may include additional components.
|
||||
|
||||
* Two network interfaces: management and provider.
|
||||
* Operational hypervisor components of the OpenStack Compute (nova) service
|
||||
with appropriate configuration to use the Networking service.
|
||||
* OpenStack Networking layer-2 agent, DHCP agent, metadata agent, and any
|
||||
dependencies.
|
||||
|
||||
Each building block defines the quantity and types of nodes including the
|
||||
components on each node.
|
||||
|
||||
.. note::
|
||||
|
||||
You can virtualize these nodes for demonstration, training, or
|
||||
proof-of-concept purposes. However, you must use physical hosts for
|
||||
evaluation of performance or scaling.
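
After deploying a building block, one simple way to confirm that the expected
components on each node registered with the Networking service is to list the
agents. This is a hedged example; the exact set of agents depends on the
building block and mechanism driver you choose:

.. code-block:: console

   $ openstack network agent list

Each layer-2, layer-3, DHCP, and metadata agent should appear once per node
that runs it and report an alive state.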

Networks and network interfaces
-------------------------------

The deployment examples refer to one or more of the following networks
and network interfaces:

* Management: Handles API requests from clients and control plane traffic for
  OpenStack services including their dependencies.
* Overlay: Handles self-service networks using an overlay protocol such as
  VXLAN or GRE.
* Provider: Connects virtual and physical networks at layer-2. Typically
  uses physical network infrastructure for switching/routing traffic to
  external networks such as the Internet.

.. note::

   For best performance, 10+ Gbps physical network infrastructure should
   support jumbo frames.
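
If the physical network supports jumbo frames, one way to take advantage of
them is to raise the MTU that the Networking service applies to networks. The
following is a minimal sketch, assuming a 9000-byte physical MTU; the option
lives in ``neutron.conf`` on the controller node:

.. code-block:: ini

   [DEFAULT]
   # Assumes the underlying physical network supports a 9000-byte MTU end to end.
   global_physnet_mtu = 9000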

For illustration purposes, the configuration examples typically reference
the following IP address ranges:

* Provider network 1:

  * IPv4: 203.0.113.0/24
  * IPv6: fd00:203:0:113::/64

* Provider network 2:

  * IPv4: 192.0.2.0/24
  * IPv6: fd00:192:0:2::/64

* Self-service networks:

  * IPv4: 198.51.100.0/24 in /24 segments
  * IPv6: fd00:198:51::/48 in /64 segments

You may change them to work with your particular network infrastructure.
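
For example, the first provider network range could be created with commands
similar to the following. This is only a sketch: the network name
``provider1``, the physical network label ``provider``, and the flat network
type are assumptions here, and the individual deployment examples provide the
exact commands for each scenario:

.. code-block:: console

   $ openstack network create --share --provider-physical-network provider \
     --provider-network-type flat provider1
   $ openstack subnet create --network provider1 \
     --subnet-range 203.0.113.0/24 provider1-v4
   $ openstack subnet create --network provider1 --ip-version 6 \
     --subnet-range fd00:203:0:113::/64 provider1-v6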

.. _deploy-mechanism-drivers:

Mechanism drivers
~~~~~~~~~~~~~~~~~

.. toctree::
   :maxdepth: 1

   deploy-lb
   deploy-ovs
BIN
doc/source/admin/figures/NetworkTypes.png
Normal file
After Width: | Height: | Size: 71 KiB |
20392
doc/source/admin/figures/NetworkTypes.svg
Normal file
After Width: | Height: | Size: 594 KiB |
BIN
doc/source/admin/figures/bgp-dynamic-routing-example1.graffle
Normal file
BIN
doc/source/admin/figures/bgp-dynamic-routing-example1.png
Normal file
After Width: | Height: | Size: 110 KiB |
After Width: | Height: | Size: 26 KiB |
BIN
doc/source/admin/figures/bgp-dynamic-routing-example2.graffle
Normal file
BIN
doc/source/admin/figures/bgp-dynamic-routing-example2.png
Normal file
After Width: | Height: | Size: 130 KiB |
After Width: | Height: | Size: 31 KiB |
BIN
doc/source/admin/figures/bgp-dynamic-routing-overview.graffle
Normal file
BIN
doc/source/admin/figures/bgp-dynamic-routing-overview.png
Normal file
After Width: | Height: | Size: 71 KiB |
After Width: | Height: | Size: 23 KiB |
BIN
doc/source/admin/figures/config-macvtap-compute1.png
Normal file
After Width: | Height: | Size: 28 KiB |
BIN
doc/source/admin/figures/config-macvtap-compute2.png
Normal file
After Width: | Height: | Size: 26 KiB |
BIN
doc/source/admin/figures/demo_multiple_dhcp_agents.png
Normal file
After Width: | Height: | Size: 51 KiB |
BIN
doc/source/admin/figures/deploy-lb-ha-vrrp-compconn1.graffle
Normal file
BIN
doc/source/admin/figures/deploy-lb-ha-vrrp-compconn1.png
Normal file
After Width: | Height: | Size: 212 KiB |
3
doc/source/admin/figures/deploy-lb-ha-vrrp-compconn1.svg
Normal file
After Width: | Height: | Size: 55 KiB |
BIN
doc/source/admin/figures/deploy-lb-ha-vrrp-overview.graffle
Normal file
BIN
doc/source/admin/figures/deploy-lb-ha-vrrp-overview.png
Normal file
After Width: | Height: | Size: 176 KiB |
3
doc/source/admin/figures/deploy-lb-ha-vrrp-overview.svg
Normal file
After Width: | Height: | Size: 43 KiB |
BIN
doc/source/admin/figures/deploy-lb-provider-compconn1.graffle
Normal file
BIN
doc/source/admin/figures/deploy-lb-provider-compconn1.png
Normal file
After Width: | Height: | Size: 84 KiB |
After Width: | Height: | Size: 23 KiB |
BIN
doc/source/admin/figures/deploy-lb-provider-compconn2.graffle
Normal file
BIN
doc/source/admin/figures/deploy-lb-provider-compconn2.png
Normal file
After Width: | Height: | Size: 118 KiB |
After Width: | Height: | Size: 37 KiB |
BIN
doc/source/admin/figures/deploy-lb-provider-flowew1.graffle
Normal file
BIN
doc/source/admin/figures/deploy-lb-provider-flowew1.png
Normal file
After Width: | Height: | Size: 96 KiB |
3
doc/source/admin/figures/deploy-lb-provider-flowew1.svg
Normal file
After Width: | Height: | Size: 29 KiB |
BIN
doc/source/admin/figures/deploy-lb-provider-flowew2.graffle
Normal file
BIN
doc/source/admin/figures/deploy-lb-provider-flowew2.png
Normal file
After Width: | Height: | Size: 102 KiB |
3
doc/source/admin/figures/deploy-lb-provider-flowew2.svg
Normal file
After Width: | Height: | Size: 33 KiB |
BIN
doc/source/admin/figures/deploy-lb-provider-flowns1.graffle
Normal file
BIN
doc/source/admin/figures/deploy-lb-provider-flowns1.png
Normal file
After Width: | Height: | Size: 72 KiB |
3
doc/source/admin/figures/deploy-lb-provider-flowns1.svg
Normal file
After Width: | Height: | Size: 26 KiB |
BIN
doc/source/admin/figures/deploy-lb-provider-overview.graffle
Normal file
BIN
doc/source/admin/figures/deploy-lb-provider-overview.png
Normal file
After Width: | Height: | Size: 115 KiB |
3
doc/source/admin/figures/deploy-lb-provider-overview.svg
Normal file
After Width: | Height: | Size: 31 KiB |
BIN
doc/source/admin/figures/deploy-lb-selfservice-compconn1.graffle
Normal file
BIN
doc/source/admin/figures/deploy-lb-selfservice-compconn1.png
Normal file
After Width: | Height: | Size: 140 KiB |
After Width: | Height: | Size: 39 KiB |
BIN
doc/source/admin/figures/deploy-lb-selfservice-flowew1.graffle
Normal file
BIN
doc/source/admin/figures/deploy-lb-selfservice-flowew1.png
Normal file
After Width: | Height: | Size: 74 KiB |
After Width: | Height: | Size: 24 KiB |
BIN
doc/source/admin/figures/deploy-lb-selfservice-flowew2.graffle
Normal file
BIN
doc/source/admin/figures/deploy-lb-selfservice-flowew2.png
Normal file
After Width: | Height: | Size: 116 KiB |
After Width: | Height: | Size: 36 KiB |
BIN
doc/source/admin/figures/deploy-lb-selfservice-flowns1.graffle
Normal file
BIN
doc/source/admin/figures/deploy-lb-selfservice-flowns1.png
Normal file
After Width: | Height: | Size: 103 KiB |
After Width: | Height: | Size: 32 KiB |
BIN
doc/source/admin/figures/deploy-lb-selfservice-flowns2.graffle
Normal file
BIN
doc/source/admin/figures/deploy-lb-selfservice-flowns2.png
Normal file
After Width: | Height: | Size: 103 KiB |
After Width: | Height: | Size: 32 KiB |
BIN
doc/source/admin/figures/deploy-lb-selfservice-overview.graffle
Normal file
BIN
doc/source/admin/figures/deploy-lb-selfservice-overview.png
Normal file
After Width: | Height: | Size: 179 KiB |
After Width: | Height: | Size: 43 KiB |
BIN
doc/source/admin/figures/deploy-ovs-ha-dvr-compconn1.graffle
Normal file
BIN
doc/source/admin/figures/deploy-ovs-ha-dvr-compconn1.png
Normal file
After Width: | Height: | Size: 255 KiB |
3
doc/source/admin/figures/deploy-ovs-ha-dvr-compconn1.svg
Normal file
After Width: | Height: | Size: 79 KiB |
BIN
doc/source/admin/figures/deploy-ovs-ha-dvr-flowew1.graffle
Normal file
BIN
doc/source/admin/figures/deploy-ovs-ha-dvr-flowew1.png
Normal file
After Width: | Height: | Size: 166 KiB |
3
doc/source/admin/figures/deploy-ovs-ha-dvr-flowew1.svg
Normal file
After Width: | Height: | Size: 51 KiB |
BIN
doc/source/admin/figures/deploy-ovs-ha-dvr-flowns1.graffle
Normal file
BIN
doc/source/admin/figures/deploy-ovs-ha-dvr-flowns1.png
Normal file
After Width: | Height: | Size: 146 KiB |