diff --git a/doc/source/admin/config-trunking.rst b/doc/source/admin/config-trunking.rst
index 5991bcaba98..5dfc1fb11ba 100644
--- a/doc/source/admin/config-trunking.rst
+++ b/doc/source/admin/config-trunking.rst
@@ -231,6 +231,16 @@ or adding subports to an existing trunk.
| tags | [] |
+----------------+-------------------------------------------------------------------------------------------------+
+* When using the OVN driver, additional logical switch port information
+ is available using the following commands:
+
+ .. code-block:: console
+
+ $ ovn-nbctl lsp-get-parent 61d8e620-fe3a-4d8f-b9e6-e1b0dea6d9e3
+ 73fb9d54-43a7-4bb1-a8dc-569e0e0a0a38
+
+ $ ovn-nbctl lsp-get-tag 61d8e620-fe3a-4d8f-b9e6-e1b0dea6d9e3
+
Launch an instance on the trunk
-------------------------------
diff --git a/doc/source/admin/index.rst b/doc/source/admin/index.rst
index 3df52894fc8..d5facfe0ded 100644
--- a/doc/source/admin/index.rst
+++ b/doc/source/admin/index.rst
@@ -19,4 +19,5 @@ manage OpenStack Networking (neutron).
ops
migration
misc
+ ovn/index
archives/index
diff --git a/doc/source/admin/ovn/dpdk.rst b/doc/source/admin/ovn/dpdk.rst
new file mode 100644
index 00000000000..2fd2e2dfd37
--- /dev/null
+++ b/doc/source/admin/ovn/dpdk.rst
@@ -0,0 +1,29 @@
+.. _ovn_dpdk:
+
+===================
+DPDK Support in OVN
+===================
+
+Configuration Settings
+----------------------
+
+To enable DPDK support, the following configuration option must be set in
+the ``[ovn]`` section of the Neutron ML2 plugin configuration file:
+
+**vhost_sock_dir**
+  The directory in which the vswitch daemon on each compute node creates
+  the virtio sockets. Follow the instructions in INSTALL.DPDK.md in the
+  Open vSwitch source tree to configure DPDK support in the vswitch
+  daemons.
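+
+For example, if the vswitch daemons create the virtio sockets under
+``/usr/local/var/run/openvswitch`` (a hypothetical path; use the directory
+your DPDK-enabled vswitchd is actually configured with), the ML2 plugin
+configuration would contain:
+
+.. code-block:: ini
+
+    [ovn]
+    vhost_sock_dir = /usr/local/var/run/openvswitch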
+
+Configuration Settings in compute hosts
+---------------------------------------
+
+On compute nodes configured for OVS DPDK, set the ``datapath_type`` to
+``netdev`` on the integration bridge (managed by OVN) and on any other
+bridge connected to the integration bridge via patch ports. The following
+command sets the ``datapath_type``:
+
+.. code-block:: console
+
+ $ sudo ovs-vsctl set Bridge br-int datapath_type=netdev
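+
+The setting can then be verified with:
+
+.. code-block:: console
+
+    $ sudo ovs-vsctl get Bridge br-int datapath_type
+    netdev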
diff --git a/doc/source/admin/ovn/features.rst b/doc/source/admin/ovn/features.rst
new file mode 100644
index 00000000000..492609646c5
--- /dev/null
+++ b/doc/source/admin/ovn/features.rst
@@ -0,0 +1,102 @@
+.. _features:
+
+Features
+========
+
+Open Virtual Network (OVN) offers the following virtual network
+services:
+
+* Layer-2 (switching)
+
+ Native implementation. Replaces the conventional Open vSwitch (OVS)
+ agent.
+
+* Layer-3 (routing)
+
+ Native implementation that supports distributed routing. Replaces the
+  conventional Neutron L3 agent. This includes transparent L3HA :doc:`routing`
+  support, based on the BFD monitoring integrated in core OVN.
+
+* DHCP
+
+ Native distributed implementation. Replaces the conventional Neutron DHCP
+ agent. Note that the native implementation does not yet support DNS
+ features.
+
+* DPDK
+
+  OVN and the ovn mechanism driver may be used with OVS running either the
+  Linux kernel datapath or the DPDK datapath.
+
+* Trunk driver
+
+  Uses OVN's parent port and port tagging functionality to support the trunk
+  service plugin. To use this feature, enable the ``trunk`` service plugin in
+  the Neutron configuration files.
+
+* VLAN tenant networks
+
+  The ovn driver supports VLAN tenant networks when used with OVN
+  version 2.11 or higher.
+
+* DNS
+
+  Native implementation. Since version 2.8, OVN includes a built-in
+  DNS implementation.
+
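+As noted above for the trunk driver, the ``trunk`` service plugin must be
+enabled. A minimal ``neutron.conf`` fragment (merge with any existing
+``service_plugins`` value in your deployment) would look like:
+
+.. code-block:: ini
+
+    [DEFAULT]
+    service_plugins = trunk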
+
+The following Neutron API extensions are supported with OVN:
+
++----------------------------------+---------------------------+
+| Extension Name | Extension Alias |
++==================================+===========================+
+| Allowed Address Pairs | allowed-address-pairs |
++----------------------------------+---------------------------+
+| Auto Allocated Topology Services | auto-allocated-topology |
++----------------------------------+---------------------------+
+| Availability Zone | availability_zone |
++----------------------------------+---------------------------+
+| Default Subnetpools | default-subnetpools |
++----------------------------------+---------------------------+
+| Multi Provider Network | multi-provider |
++----------------------------------+---------------------------+
+| Network IP Availability | network-ip-availability |
++----------------------------------+---------------------------+
+| Neutron external network | external-net |
++----------------------------------+---------------------------+
+| Neutron Extra DHCP opts | extra_dhcp_opt |
++----------------------------------+---------------------------+
+| Neutron Extra Route | extraroute |
++----------------------------------+---------------------------+
+| Neutron L3 external gateway | ext-gw-mode |
++----------------------------------+---------------------------+
+| Neutron L3 Router | router |
++----------------------------------+---------------------------+
+| Network MTU | net-mtu |
++----------------------------------+---------------------------+
+| Port Binding | binding |
++----------------------------------+---------------------------+
+| Port Security | port-security |
++----------------------------------+---------------------------+
+| Provider Network | provider |
++----------------------------------+---------------------------+
+| Quality of Service | qos |
++----------------------------------+---------------------------+
+| Quota management support | quotas |
++----------------------------------+---------------------------+
+| RBAC Policies | rbac-policies |
++----------------------------------+---------------------------+
+| Resource revision numbers | standard-attr-revisions |
++----------------------------------+---------------------------+
+| security-group | security-group |
++----------------------------------+---------------------------+
+| standard-attr-description | standard-attr-description |
++----------------------------------+---------------------------+
+| Subnet Allocation | subnet_allocation |
++----------------------------------+---------------------------+
+| Tag support | standard-attr-tag |
++----------------------------------+---------------------------+
+| Time Stamp Fields | standard-attr-timestamp |
++----------------------------------+---------------------------+
+| Domain Name System (DNS) | dns_integration |
++----------------------------------+---------------------------+
diff --git a/doc/source/admin/ovn/figures/ovn-east-west-2.png b/doc/source/admin/ovn/figures/ovn-east-west-2.png
new file mode 100644
index 00000000000..2e780a738b6
Binary files /dev/null and b/doc/source/admin/ovn/figures/ovn-east-west-2.png differ
diff --git a/doc/source/admin/ovn/figures/ovn-east-west-2.svg b/doc/source/admin/ovn/figures/ovn-east-west-2.svg
new file mode 100644
index 00000000000..be636c959c4
--- /dev/null
+++ b/doc/source/admin/ovn/figures/ovn-east-west-2.svg
@@ -0,0 +1,2849 @@
+
+
+
+
diff --git a/doc/source/admin/ovn/figures/ovn-east-west-3.png b/doc/source/admin/ovn/figures/ovn-east-west-3.png
new file mode 100644
index 00000000000..d63a61e8aed
Binary files /dev/null and b/doc/source/admin/ovn/figures/ovn-east-west-3.png differ
diff --git a/doc/source/admin/ovn/figures/ovn-east-west-3.svg b/doc/source/admin/ovn/figures/ovn-east-west-3.svg
new file mode 100644
index 00000000000..a5691ccc78f
--- /dev/null
+++ b/doc/source/admin/ovn/figures/ovn-east-west-3.svg
@@ -0,0 +1,2850 @@
+
+
+
+
diff --git a/doc/source/admin/ovn/figures/ovn-east-west.png b/doc/source/admin/ovn/figures/ovn-east-west.png
new file mode 100644
index 00000000000..fdba86c4774
Binary files /dev/null and b/doc/source/admin/ovn/figures/ovn-east-west.png differ
diff --git a/doc/source/admin/ovn/figures/ovn-east-west.svg b/doc/source/admin/ovn/figures/ovn-east-west.svg
new file mode 100644
index 00000000000..bdc47d8984e
--- /dev/null
+++ b/doc/source/admin/ovn/figures/ovn-east-west.svg
@@ -0,0 +1,2779 @@
+
+
+
+
diff --git a/doc/source/admin/ovn/figures/ovn-l3ha-bfd-3gw.png b/doc/source/admin/ovn/figures/ovn-l3ha-bfd-3gw.png
new file mode 100644
index 00000000000..983e4d0c358
Binary files /dev/null and b/doc/source/admin/ovn/figures/ovn-l3ha-bfd-3gw.png differ
diff --git a/doc/source/admin/ovn/figures/ovn-l3ha-bfd-3gw.svg b/doc/source/admin/ovn/figures/ovn-l3ha-bfd-3gw.svg
new file mode 100644
index 00000000000..204170ef3b8
--- /dev/null
+++ b/doc/source/admin/ovn/figures/ovn-l3ha-bfd-3gw.svg
@@ -0,0 +1,3836 @@
+
+
+
+
diff --git a/doc/source/admin/ovn/figures/ovn-l3ha-bfd-failover.png b/doc/source/admin/ovn/figures/ovn-l3ha-bfd-failover.png
new file mode 100644
index 00000000000..e6c74b2a8b1
Binary files /dev/null and b/doc/source/admin/ovn/figures/ovn-l3ha-bfd-failover.png differ
diff --git a/doc/source/admin/ovn/figures/ovn-l3ha-bfd-failover.svg b/doc/source/admin/ovn/figures/ovn-l3ha-bfd-failover.svg
new file mode 100644
index 00000000000..c2979b1aa21
--- /dev/null
+++ b/doc/source/admin/ovn/figures/ovn-l3ha-bfd-failover.svg
@@ -0,0 +1,2599 @@
+
+
+
+
diff --git a/doc/source/admin/ovn/figures/ovn-l3ha-bfd.png b/doc/source/admin/ovn/figures/ovn-l3ha-bfd.png
new file mode 100644
index 00000000000..aec9662d968
Binary files /dev/null and b/doc/source/admin/ovn/figures/ovn-l3ha-bfd.png differ
diff --git a/doc/source/admin/ovn/figures/ovn-l3ha-bfd.svg b/doc/source/admin/ovn/figures/ovn-l3ha-bfd.svg
new file mode 100644
index 00000000000..4f4ce54f016
--- /dev/null
+++ b/doc/source/admin/ovn/figures/ovn-l3ha-bfd.svg
@@ -0,0 +1,2516 @@
+
+
+
+
diff --git a/doc/source/admin/ovn/figures/ovn-north-south-distributed-fip.png b/doc/source/admin/ovn/figures/ovn-north-south-distributed-fip.png
new file mode 100644
index 00000000000..9c7ae4b726c
Binary files /dev/null and b/doc/source/admin/ovn/figures/ovn-north-south-distributed-fip.png differ
diff --git a/doc/source/admin/ovn/figures/ovn-north-south-distributed-fip.svg b/doc/source/admin/ovn/figures/ovn-north-south-distributed-fip.svg
new file mode 100644
index 00000000000..369de952503
--- /dev/null
+++ b/doc/source/admin/ovn/figures/ovn-north-south-distributed-fip.svg
@@ -0,0 +1,3090 @@
+
+
+
+
diff --git a/doc/source/admin/ovn/figures/ovn-north-south.png b/doc/source/admin/ovn/figures/ovn-north-south.png
new file mode 100644
index 00000000000..c33f1aff3e6
Binary files /dev/null and b/doc/source/admin/ovn/figures/ovn-north-south.png differ
diff --git a/doc/source/admin/ovn/figures/ovn-north-south.svg b/doc/source/admin/ovn/figures/ovn-north-south.svg
new file mode 100644
index 00000000000..7d0d42655b0
--- /dev/null
+++ b/doc/source/admin/ovn/figures/ovn-north-south.svg
@@ -0,0 +1,2991 @@
+
+
+
+
diff --git a/doc/source/admin/ovn/index.rst b/doc/source/admin/ovn/index.rst
new file mode 100644
index 00000000000..adcfe64278a
--- /dev/null
+++ b/doc/source/admin/ovn/index.rst
@@ -0,0 +1,14 @@
+===============================
+OVN Driver Administration Guide
+===============================
+
+.. toctree::
+ :maxdepth: 1
+
+ ovn
+ features
+ routing
+ tutorial
+ refarch/refarch
+ dpdk
+ troubleshooting
diff --git a/doc/source/admin/ovn/ovn.rst b/doc/source/admin/ovn/ovn.rst
new file mode 100644
index 00000000000..4083f882fee
--- /dev/null
+++ b/doc/source/admin/ovn/ovn.rst
@@ -0,0 +1,72 @@
+.. _ovn_ovn:
+
+===============
+OVN information
+===============
+
+The original OVN project announcement can be found here:
+
+* https://networkheresy.com/2015/01/13/ovn-bringing-native-virtual-networking-to-ovs/
+
+The OVN architecture is described here:
+
+* http://www.openvswitch.org/support/dist-docs/ovn-architecture.7.html
+
+Here are two tutorials that help with learning different aspects of OVN:
+
+* http://blog.spinhirne.com/p/blog-series.html#introToOVN
+* http://docs.openvswitch.org/en/stable/tutorials/ovn-sandbox/
+
+There is also an in-depth tutorial on using OVN with OpenStack:
+
+* http://docs.openvswitch.org/en/stable/tutorials/ovn-openstack/
+
+OVN DB schemas and other man pages:
+
+* http://www.openvswitch.org/support/dist-docs/ovn-nb.5.html
+* http://www.openvswitch.org/support/dist-docs/ovn-sb.5.html
+* http://www.openvswitch.org/support/dist-docs/ovn-nbctl.8.html
+* http://www.openvswitch.org/support/dist-docs/ovn-sbctl.8.html
+* http://www.openvswitch.org/support/dist-docs/ovn-northd.8.html
+* http://www.openvswitch.org/support/dist-docs/ovn-controller.8.html
+* http://www.openvswitch.org/support/dist-docs/ovn-controller-vtep.8.html
+
+Alternatively, a full list of OVS and OVN man pages can be found here:
+
+* http://docs.openvswitch.org/en/latest/ref/
+
+The Open vSwitch web page includes a list of presentations, some of which are
+about OVN:
+
+* http://openvswitch.org/support/
+
+Here are some direct links to past OVN presentations:
+
+* `OVN talk at OpenStack Summit in Boston, Spring 2017
+ `_
+* `OVN talk at OpenStack Summit in Barcelona, Fall 2016
+ `_
+* `OVN talk at OpenStack Summit in Austin, Spring 2016
+ `_
+* OVN Project Update at the OpenStack Summit in Tokyo, Fall 2015 -
+ `Slides `__ -
+ `Video `__
+* OVN at OpenStack Summit in Vancouver, Spring 2015 -
+ `Slides `__ -
+ `Video `__
+* `OVS Conference 2015 `_
+
+These blog resources may also help with testing and understanding OVN:
+
+* http://networkop.co.uk/blog/2016/11/27/ovn-part1/
+* http://networkop.co.uk/blog/2016/12/10/ovn-part2/
+* https://blog.russellbryant.net/2016/12/19/comparing-openstack-neutron-ml2ovs-and-ovn-control-plane/
+* https://blog.russellbryant.net/2016/11/11/ovn-logical-flows-and-ovn-trace/
+* https://blog.russellbryant.net/2016/09/29/ovs-2-6-and-the-first-release-of-ovn/
+* http://galsagie.github.io/2015/11/23/ovn-l3-deepdive/
+* http://blog.russellbryant.net/2015/10/22/openstack-security-groups-using-ovn-acls/
+* http://galsagie.github.io/sdn/openstack/ovs/2015/05/30/ovn-deep-dive/
+* http://blog.russellbryant.net/2015/05/14/an-ez-bake-ovn-for-openstack/
+* http://galsagie.github.io/sdn/openstack/ovs/2015/04/26/ovn-containers/
+* http://blog.russellbryant.net/2015/04/21/ovn-and-openstack-status-2015-04-21/
+* http://blog.russellbryant.net/2015/04/08/ovn-and-openstack-integration-development-update/
diff --git a/doc/source/admin/ovn/refarch/figures/ovn-architecture1.png b/doc/source/admin/ovn/refarch/figures/ovn-architecture1.png
new file mode 100644
index 00000000000..23721801be0
Binary files /dev/null and b/doc/source/admin/ovn/refarch/figures/ovn-architecture1.png differ
diff --git a/doc/source/admin/ovn/refarch/figures/ovn-architecture1.svg b/doc/source/admin/ovn/refarch/figures/ovn-architecture1.svg
new file mode 100644
index 00000000000..6ff72685b78
--- /dev/null
+++ b/doc/source/admin/ovn/refarch/figures/ovn-architecture1.svg
@@ -0,0 +1,1568 @@
+
+
+
+
diff --git a/doc/source/admin/ovn/refarch/figures/ovn-compute1.png b/doc/source/admin/ovn/refarch/figures/ovn-compute1.png
new file mode 100644
index 00000000000..3b2f3d84d29
Binary files /dev/null and b/doc/source/admin/ovn/refarch/figures/ovn-compute1.png differ
diff --git a/doc/source/admin/ovn/refarch/figures/ovn-compute1.svg b/doc/source/admin/ovn/refarch/figures/ovn-compute1.svg
new file mode 100644
index 00000000000..141a5f6f725
--- /dev/null
+++ b/doc/source/admin/ovn/refarch/figures/ovn-compute1.svg
@@ -0,0 +1,982 @@
+
+
+
+
diff --git a/doc/source/admin/ovn/refarch/figures/ovn-hw.png b/doc/source/admin/ovn/refarch/figures/ovn-hw.png
new file mode 100644
index 00000000000..1a5368c2a80
Binary files /dev/null and b/doc/source/admin/ovn/refarch/figures/ovn-hw.png differ
diff --git a/doc/source/admin/ovn/refarch/figures/ovn-hw.svg b/doc/source/admin/ovn/refarch/figures/ovn-hw.svg
new file mode 100644
index 00000000000..62442291035
--- /dev/null
+++ b/doc/source/admin/ovn/refarch/figures/ovn-hw.svg
@@ -0,0 +1,1170 @@
+
+
+
+
diff --git a/doc/source/admin/ovn/refarch/figures/ovn-services.png b/doc/source/admin/ovn/refarch/figures/ovn-services.png
new file mode 100644
index 00000000000..bf32f7cebed
Binary files /dev/null and b/doc/source/admin/ovn/refarch/figures/ovn-services.png differ
diff --git a/doc/source/admin/ovn/refarch/figures/ovn-services.svg b/doc/source/admin/ovn/refarch/figures/ovn-services.svg
new file mode 100644
index 00000000000..f4db5f2f11f
--- /dev/null
+++ b/doc/source/admin/ovn/refarch/figures/ovn-services.svg
@@ -0,0 +1,860 @@
+
+
+
+
diff --git a/doc/source/admin/ovn/refarch/launch-instance-provider-network.rst b/doc/source/admin/ovn/refarch/launch-instance-provider-network.rst
new file mode 100644
index 00000000000..5b6cbfbc838
--- /dev/null
+++ b/doc/source/admin/ovn/refarch/launch-instance-provider-network.rst
@@ -0,0 +1,774 @@
+.. _refarch-launch-instance-provider-network:
+
+Launch an instance on a provider network
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+#. On the controller node, source the credentials for a regular
+ (non-privileged) project. The following example uses the ``demo``
+ project.
+
+#. On the controller node, launch an instance using the UUID of the
+ provider network.
+
+ .. code-block:: console
+
+ $ openstack server create --flavor m1.tiny --image cirros \
+ --nic net-id=0243277b-4aa8-46d8-9e10-5c9ad5e01521 \
+ --security-group default --key-name mykey provider-instance
+ +--------------------------------------+-----------------------------------------------+
+ | Property | Value |
+ +--------------------------------------+-----------------------------------------------+
+ | OS-DCF:diskConfig | MANUAL |
+ | OS-EXT-AZ:availability_zone | nova |
+ | OS-EXT-STS:power_state | 0 |
+ | OS-EXT-STS:task_state | scheduling |
+ | OS-EXT-STS:vm_state | building |
+ | OS-SRV-USG:launched_at | - |
+ | OS-SRV-USG:terminated_at | - |
+ | accessIPv4 | |
+ | accessIPv6 | |
+ | adminPass | hdF4LMQqC5PB |
+ | config_drive | |
+ | created | 2015-09-17T21:58:18Z |
+ | flavor | m1.tiny (1) |
+ | hostId | |
+ | id | 181c52ba-aebc-4c32-a97d-2e8e82e4eaaf |
+ | image | cirros (38047887-61a7-41ea-9b49-27987d5e8bb9) |
+ | key_name | mykey |
+ | metadata | {} |
+ | name | provider-instance |
+ | os-extended-volumes:volumes_attached | [] |
+ | progress | 0 |
+ | security_groups | default |
+ | status | BUILD |
+ | tenant_id | f5b2ccaa75ac413591f12fcaa096aa5c |
+ | updated | 2015-09-17T21:58:18Z |
+ | user_id | 684286a9079845359882afc3aa5011fb |
+ +--------------------------------------+-----------------------------------------------+
+
+OVN operations
+^^^^^^^^^^^^^^
+
+The OVN mechanism driver and OVN perform the following operations when
+launching an instance.
+
+#. The OVN mechanism driver creates a logical port for the instance.
+
+ .. code-block:: console
+
+ _uuid : cc891503-1259-47a1-9349-1c0293876664
+ addresses : ["fa:16:3e:1c:ca:6a 203.0.113.103"]
+ enabled : true
+ external_ids : {"neutron:port_name"=""}
+ name : "cafd4862-c69c-46e4-b3d2-6141ce06b205"
+ options : {}
+ parent_name : []
+ port_security : ["fa:16:3e:1c:ca:6a 203.0.113.103"]
+ tag : []
+ type : ""
+ up : true
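+
+   The record above can be retrieved directly from the OVN northbound
+   database, assuming ``ovn-nbctl`` can reach it:
+
+   .. code-block:: console
+
+      $ ovn-nbctl list Logical_Switch_Port cafd4862-c69c-46e4-b3d2-6141ce06b205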
+
+#. The OVN mechanism driver updates the appropriate Address Set
+ entry with the address of this instance:
+
+ .. code-block:: console
+
+ _uuid : d0becdea-e1ed-48c4-9afc-e278cdef4629
+ addresses : ["203.0.113.103"]
+ external_ids : {"neutron:security_group_name"=default}
+ name : "as_ip4_90a78a43_b549_4bee_8822_21fcccab58dc"
+
+#. The OVN mechanism driver creates ACL entries for this port and
+ any other ports in the project.
+
+ .. code-block:: console
+
+ _uuid : f8d27bfc-4d74-4e73-8fac-c84585443efd
+ action : drop
+ direction : from-lport
+ external_ids : {"neutron:lport"="cafd4862-c69c-46e4-b3d2-6141ce06b205"}
+ log : false
+ match : "inport == \"cafd4862-c69c-46e4-b3d2-6141ce06b205\" && ip"
+ priority : 1001
+
+ _uuid : a61d0068-b1aa-4900-9882-e0671d1fc131
+ action : allow
+ direction : to-lport
+ external_ids : {"neutron:lport"="cafd4862-c69c-46e4-b3d2-6141ce06b205"}
+ log : false
+ match : "outport == \"cafd4862-c69c-46e4-b3d2-6141ce06b205\" && ip4 && ip4.src == 203.0.113.0/24 && udp && udp.src == 67 && udp.dst == 68"
+ priority : 1002
+
+ _uuid : a5a787b8-7040-4b63-a20a-551bd73eb3d1
+ action : allow-related
+ direction : from-lport
+ external_ids : {"neutron:lport"="cafd4862-c69c-46e4-b3d2-6141ce06b205"}
+ log : false
+ match : "inport == \"cafd4862-c69c-46e4-b3d2-6141ce06b205\" && ip6"
+ priority : 1002
+
+ _uuid : 7b3f63b8-e69a-476c-ad3d-37de043232b2
+ action : allow-related
+ direction : to-lport
+ external_ids : {"neutron:lport"="cafd4862-c69c-46e4-b3d2-6141ce06b205"}
+ log : false
+    match           : "outport == \"cafd4862-c69c-46e4-b3d2-6141ce06b205\" && ip4 && ip4.src == $as_ip4_90a78a43_b549_4bee_8822_21fcccab58dc"
+ priority : 1002
+
+ _uuid : 36dbb1b1-cd30-4454-a0bf-923646eb7c3f
+ action : allow
+ direction : from-lport
+ external_ids : {"neutron:lport"="cafd4862-c69c-46e4-b3d2-6141ce06b205"}
+ log : false
+ match : "inport == \"cafd4862-c69c-46e4-b3d2-6141ce06b205\" && ip4 && (ip4.dst == 255.255.255.255 || ip4.dst == 203.0.113.0/24) && udp && udp.src == 68 && udp.dst == 67"
+ priority : 1002
+
+ _uuid : 05a92f66-be48-461e-a7f1-b07bfbd3e667
+ action : allow-related
+ direction : from-lport
+ external_ids : {"neutron:lport"="cafd4862-c69c-46e4-b3d2-6141ce06b205"}
+ log : false
+ match : "inport == \"cafd4862-c69c-46e4-b3d2-6141ce06b205\" && ip4"
+ priority : 1002
+
+ _uuid : 37f18377-d6c3-4c44-9e4d-2170710e50ff
+ action : drop
+ direction : to-lport
+ external_ids : {"neutron:lport"="cafd4862-c69c-46e4-b3d2-6141ce06b205"}
+ log : false
+ match : "outport == \"cafd4862-c69c-46e4-b3d2-6141ce06b205\" && ip"
+ priority : 1001
+
+ _uuid : 6d4db3cf-c1f1-4006-ad66-ae582a6acd21
+ action : allow-related
+ direction : to-lport
+ external_ids : {"neutron:lport"="cafd4862-c69c-46e4-b3d2-6141ce06b205"}
+ log : false
+    match           : "outport == \"cafd4862-c69c-46e4-b3d2-6141ce06b205\" && ip6 && ip6.src == $as_ip6_90a78a43_b549_4bee_8822_21fcccab58dc"
+ priority : 1002
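+
+   The full set of ACLs applied to the logical switch can be listed with,
+   for example:
+
+   .. code-block:: console
+
+      $ ovn-nbctl acl-list neutron-670efade-7cd0-4d87-8a04-27f366eb8941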
+
+#. The OVN mechanism driver updates the logical switch information with
+ the UUIDs of these objects.
+
+ .. code-block:: console
+
+ _uuid : 924500c4-8580-4d5f-a7ad-8769f6e58ff5
+ acls : [05a92f66-be48-461e-a7f1-b07bfbd3e667,
+ 36dbb1b1-cd30-4454-a0bf-923646eb7c3f,
+ 37f18377-d6c3-4c44-9e4d-2170710e50ff,
+ 7b3f63b8-e69a-476c-ad3d-37de043232b2,
+ a5a787b8-7040-4b63-a20a-551bd73eb3d1,
+ a61d0068-b1aa-4900-9882-e0671d1fc131,
+ f8d27bfc-4d74-4e73-8fac-c84585443efd]
+ external_ids : {"neutron:network_name"=provider}
+ name : "neutron-670efade-7cd0-4d87-8a04-27f366eb8941"
+ ports : [38cf8b52-47c4-4e93-be8d-06bf71f6a7c9,
+ 5e144ab9-3e08-4910-b936-869bbbf254c8,
+ a576b812-9c3e-4cfb-9752-5d8500b3adf9,
+ cc891503-1259-47a1-9349-1c0293876664]
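+
+   A summary view of the logical switch and its ports is also available
+   with:
+
+   .. code-block:: console
+
+      $ ovn-nbctl show neutron-670efade-7cd0-4d87-8a04-27f366eb8941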
+
+#. The OVN northbound service creates port bindings for the logical
+ ports and adds them to the appropriate multicast group.
+
+ * Port bindings
+
+ .. code-block:: console
+
+ _uuid : e73e3fcd-316a-4418-bbd5-a8a42032b1c3
+ chassis : fc5ab9e7-bc28-40e8-ad52-2949358cc088
+ datapath : bd0ab2b3-4cf4-4289-9529-ef430f6a89e6
+ logical_port : "cafd4862-c69c-46e4-b3d2-6141ce06b205"
+ mac : ["fa:16:3e:1c:ca:6a 203.0.113.103"]
+ options : {}
+ parent_port : []
+ tag : []
+ tunnel_key : 4
+ type : ""
+
+ * Multicast groups
+
+ .. code-block:: console
+
+ _uuid : 39b32ccd-fa49-4046-9527-13318842461e
+ datapath : bd0ab2b3-4cf4-4289-9529-ef430f6a89e6
+ name : _MC_flood
+ ports : [030024f4-61c3-4807-859b-07727447c427,
+ 904c3108-234d-41c0-b93c-116b7e352a75,
+ cc5bcd19-bcae-4e29-8cee-3ec8a8a75d46,
+ e73e3fcd-316a-4418-bbd5-a8a42032b1c3]
+ tunnel_key : 65535
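+
+   Both kinds of records live in the OVN southbound database and can be
+   examined with:
+
+   .. code-block:: console
+
+      $ ovn-sbctl list Port_Binding
+      $ ovn-sbctl list Multicast_Group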
+
+#. The OVN northbound service translates the Address Set change into
+ the new Address Set in the OVN southbound database.
+
+ .. code-block:: console
+
+ _uuid : 2addbee3-7084-4fff-8f7b-15b1efebdaff
+ addresses : ["203.0.113.103"]
+ name : "as_ip4_90a78a43_b549_4bee_8822_21fcccab58dc"
+
+#. The OVN northbound service translates the ACL and logical port objects
+ into logical flows in the OVN southbound database.
+
+ .. code-block:: console
+
+ Datapath: bd0ab2b3-4cf4-4289-9529-ef430f6a89e6 Pipeline: ingress
+ table= 0( ls_in_port_sec_l2), priority= 50,
+ match=(inport == "cafd4862-c69c-46e4-b3d2-6141ce06b205" &&
+ eth.src == {fa:16:3e:1c:ca:6a}),
+ action=(next;)
+ table= 1( ls_in_port_sec_ip), priority= 90,
+ match=(inport == "cafd4862-c69c-46e4-b3d2-6141ce06b205" &&
+ eth.src == fa:16:3e:1c:ca:6a && ip4.src == {203.0.113.103}),
+ action=(next;)
+ table= 1( ls_in_port_sec_ip), priority= 90,
+ match=(inport == "cafd4862-c69c-46e4-b3d2-6141ce06b205" &&
+ eth.src == fa:16:3e:1c:ca:6a && ip4.src == 0.0.0.0 &&
+ ip4.dst == 255.255.255.255 && udp.src == 68 && udp.dst == 67),
+ action=(next;)
+ table= 1( ls_in_port_sec_ip), priority= 80,
+ match=(inport == "cafd4862-c69c-46e4-b3d2-6141ce06b205" &&
+ eth.src == fa:16:3e:1c:ca:6a && ip),
+ action=(drop;)
+ table= 2( ls_in_port_sec_nd), priority= 90,
+ match=(inport == "cafd4862-c69c-46e4-b3d2-6141ce06b205" &&
+ eth.src == fa:16:3e:1c:ca:6a &&
+ arp.sha == fa:16:3e:1c:ca:6a && (arp.spa == 203.0.113.103 )),
+ action=(next;)
+ table= 2( ls_in_port_sec_nd), priority= 80,
+ match=(inport == "cafd4862-c69c-46e4-b3d2-6141ce06b205" &&
+ (arp || nd)),
+ action=(drop;)
+ table= 3( ls_in_pre_acl), priority= 110,
+ match=(nd),
+ action=(next;)
+ table= 3( ls_in_pre_acl), priority= 100,
+ match=(ip),
+ action=(reg0[0] = 1; next;)
+ table= 6( ls_in_acl), priority=65535,
+ match=(ct.inv),
+ action=(drop;)
+ table= 6( ls_in_acl), priority=65535,
+ match=(nd),
+ action=(next;)
+ table= 6( ls_in_acl), priority=65535,
+ match=(ct.est && !ct.rel && !ct.new && !ct.inv),
+ action=(next;)
+ table= 6( ls_in_acl), priority=65535,
+ match=(!ct.est && ct.rel && !ct.new && !ct.inv),
+ action=(next;)
+ table= 6( ls_in_acl), priority= 2002,
+ match=(ct.new && (inport == "cafd4862-c69c-46e4-b3d2-6141ce06b205"
+ && ip6)),
+ action=(reg0[1] = 1; next;)
+ table= 6( ls_in_acl), priority= 2002,
+ match=(inport == "cafd4862-c69c-46e4-b3d2-6141ce06b205" && ip4 &&
+ (ip4.dst == 255.255.255.255 || ip4.dst == 203.0.113.0/24) &&
+ udp && udp.src == 68 && udp.dst == 67),
+ action=(reg0[1] = 1; next;)
+ table= 6( ls_in_acl), priority= 2002,
+ match=(ct.new && (inport == "cafd4862-c69c-46e4-b3d2-6141ce06b205" &&
+ ip4)),
+ action=(reg0[1] = 1; next;)
+ table= 6( ls_in_acl), priority= 2001,
+ match=(inport == "cafd4862-c69c-46e4-b3d2-6141ce06b205" && ip),
+ action=(drop;)
+ table= 6( ls_in_acl), priority= 1,
+ match=(ip),
+ action=(reg0[1] = 1; next;)
+ table= 9( ls_in_arp_rsp), priority= 50,
+ match=(arp.tpa == 203.0.113.103 && arp.op == 1),
+ action=(eth.dst = eth.src; eth.src = fa:16:3e:1c:ca:6a;
+ arp.op = 2; /* ARP reply */ arp.tha = arp.sha;
+ arp.sha = fa:16:3e:1c:ca:6a; arp.tpa = arp.spa;
+ arp.spa = 203.0.113.103; outport = inport;
+ inport = ""; /* Allow sending out inport. */ output;)
+ table=10( ls_in_l2_lkup), priority= 50,
+ match=(eth.dst == fa:16:3e:1c:ca:6a),
+ action=(outport = "cafd4862-c69c-46e4-b3d2-6141ce06b205"; output;)
+ Datapath: bd0ab2b3-4cf4-4289-9529-ef430f6a89e6 Pipeline: egress
+ table= 1( ls_out_pre_acl), priority= 110,
+ match=(nd),
+ action=(next;)
+ table= 1( ls_out_pre_acl), priority= 100,
+ match=(ip),
+ action=(reg0[0] = 1; next;)
+ table= 4( ls_out_acl), priority=65535,
+ match=(!ct.est && ct.rel && !ct.new && !ct.inv),
+ action=(next;)
+ table= 4( ls_out_acl), priority=65535,
+ match=(ct.est && !ct.rel && !ct.new && !ct.inv),
+ action=(next;)
+ table= 4( ls_out_acl), priority=65535,
+ match=(ct.inv),
+ action=(drop;)
+ table= 4( ls_out_acl), priority=65535,
+ match=(nd),
+ action=(next;)
+ table= 4( ls_out_acl), priority= 2002,
+ match=(ct.new &&
+ (outport == "cafd4862-c69c-46e4-b3d2-6141ce06b205" && ip6 &&
+ ip6.src == $as_ip6_90a78a43_b549_4bee_8822_21fcccab58dc)),
+ action=(reg0[1] = 1; next;)
+ table= 4( ls_out_acl), priority= 2002,
+ match=(ct.new &&
+ (outport == "cafd4862-c69c-46e4-b3d2-6141ce06b205" && ip4 &&
+ ip4.src == $as_ip4_90a78a43_b549_4bee_8822_21fcccab58dc)),
+ action=(reg0[1] = 1; next;)
+ table= 4( ls_out_acl), priority= 2002,
+ match=(outport == "cafd4862-c69c-46e4-b3d2-6141ce06b205" && ip4 &&
+ ip4.src == 203.0.113.0/24 && udp && udp.src == 67 &&
+ udp.dst == 68),
+ action=(reg0[1] = 1; next;)
+ table= 4( ls_out_acl), priority= 2001,
+ match=(outport == "cafd4862-c69c-46e4-b3d2-6141ce06b205" && ip),
+ action=(drop;)
+ table= 4( ls_out_acl), priority= 1,
+ match=(ip),
+ action=(reg0[1] = 1; next;)
+ table= 6( ls_out_port_sec_ip), priority= 90,
+ match=(outport == "cafd4862-c69c-46e4-b3d2-6141ce06b205" &&
+ eth.dst == fa:16:3e:1c:ca:6a &&
+ ip4.dst == {255.255.255.255, 224.0.0.0/4, 203.0.113.103}),
+ action=(next;)
+ table= 6( ls_out_port_sec_ip), priority= 80,
+ match=(outport == "cafd4862-c69c-46e4-b3d2-6141ce06b205" &&
+ eth.dst == fa:16:3e:1c:ca:6a && ip),
+ action=(drop;)
+ table= 7( ls_out_port_sec_l2), priority= 50,
+ match=(outport == "cafd4862-c69c-46e4-b3d2-6141ce06b205" &&
+ eth.dst == {fa:16:3e:1c:ca:6a}),
+ action=(output;)
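+
+   The logical flows above can be dumped for inspection with:
+
+   .. code-block:: console
+
+      $ ovn-sbctl lflow-list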
+
+#. The OVN controller service on each compute node translates these objects
+ into flows on the integration bridge ``br-int``. Exact flows depend on
+ whether the compute node containing the instance also contains a DHCP agent
+ on the subnet.
+
+ * On the compute node containing the instance, the Compute service creates
+ a port that connects the instance to the integration bridge and OVN
+ creates the following flows:
+
+ .. code-block:: console
+
+ # ovs-ofctl show br-int
+ OFPT_FEATURES_REPLY (xid=0x2): dpid:000022024a1dc045
+ n_tables:254, n_buffers:256
+ capabilities: FLOW_STATS TABLE_STATS PORT_STATS QUEUE_STATS ARP_MATCH_IP
+ actions: output enqueue set_vlan_vid set_vlan_pcp strip_vlan mod_dl_src mod_dl_dst mod_nw_src mod_nw_dst mod_nw_tos mod_tp_src mod_tp_dst
+ 9(tapcafd4862-c6): addr:fe:16:3e:1c:ca:6a
+ config: 0
+ state: 0
+ current: 10MB-FD COPPER
+ speed: 10 Mbps now, 0 Mbps max
+
+ .. code-block:: console
+
+ cookie=0x0, duration=184.992s, table=0, n_packets=175, n_bytes=15270,
+ idle_age=15, priority=100,in_port=9
+ actions=load:0x3->NXM_NX_REG5[],load:0x4->OXM_OF_METADATA[],
+ load:0x4->NXM_NX_REG6[],resubmit(,16)
+ cookie=0x0, duration=191.687s, table=16, n_packets=175, n_bytes=15270,
+ idle_age=15, priority=50,reg6=0x4,metadata=0x4,
+ dl_src=fa:16:3e:1c:ca:6a
+ actions=resubmit(,17)
+ cookie=0x0, duration=191.687s, table=17, n_packets=2, n_bytes=684,
+ idle_age=112, priority=90,udp,reg6=0x4,metadata=0x4,
+ dl_src=fa:16:3e:1c:ca:6a,nw_src=0.0.0.0,
+ nw_dst=255.255.255.255,tp_src=68,tp_dst=67
+ actions=resubmit(,18)
+ cookie=0x0, duration=191.687s, table=17, n_packets=146, n_bytes=12780,
+ idle_age=20, priority=90,ip,reg6=0x4,metadata=0x4,
+ dl_src=fa:16:3e:1c:ca:6a,nw_src=203.0.113.103
+ actions=resubmit(,18)
+ cookie=0x0, duration=191.687s, table=17, n_packets=17, n_bytes=1386,
+ idle_age=92, priority=80,ipv6,reg6=0x4,metadata=0x4,
+ dl_src=fa:16:3e:1c:ca:6a
+ actions=drop
+ cookie=0x0, duration=191.687s, table=17, n_packets=0, n_bytes=0,
+ idle_age=191, priority=80,ip,reg6=0x4,metadata=0x4,
+ dl_src=fa:16:3e:1c:ca:6a
+ actions=drop
+ cookie=0x0, duration=191.687s, table=18, n_packets=10, n_bytes=420,
+ idle_age=15, priority=90,arp,reg6=0x4,metadata=0x4,
+ dl_src=fa:16:3e:1c:ca:6a,arp_spa=203.0.113.103,
+ arp_sha=fa:16:3e:1c:ca:6a
+ actions=resubmit(,19)
+ cookie=0x0, duration=191.687s, table=18, n_packets=0, n_bytes=0,
+ idle_age=191, priority=80,icmp6,reg6=0x4,metadata=0x4,
+ icmp_type=136,icmp_code=0
+ actions=drop
+ cookie=0x0, duration=191.687s, table=18, n_packets=0, n_bytes=0,
+ idle_age=191, priority=80,icmp6,reg6=0x4,metadata=0x4,
+ icmp_type=135,icmp_code=0
+ actions=drop
+ cookie=0x0, duration=191.687s, table=18, n_packets=0, n_bytes=0,
+ idle_age=191, priority=80,arp,reg6=0x4,metadata=0x4
+ actions=drop
+ cookie=0x0, duration=75.033s, table=19, n_packets=0, n_bytes=0,
+ idle_age=75, priority=110,icmp6,metadata=0x4,icmp_type=135,
+ icmp_code=0
+ actions=resubmit(,20)
+ cookie=0x0, duration=75.032s, table=19, n_packets=0, n_bytes=0,
+ idle_age=75, priority=110,icmp6,metadata=0x4,icmp_type=136,
+ icmp_code=0
+ actions=resubmit(,20)
+ cookie=0x0, duration=75.032s, table=19, n_packets=34, n_bytes=5170,
+ idle_age=49, priority=100,ip,metadata=0x4
+ actions=load:0x1->NXM_NX_REG0[0],resubmit(,20)
+ cookie=0x0, duration=75.032s, table=19, n_packets=0, n_bytes=0,
+ idle_age=75, priority=100,ipv6,metadata=0x4
+ actions=load:0x1->NXM_NX_REG0[0],resubmit(,20)
+ cookie=0x0, duration=75.032s, table=22, n_packets=0, n_bytes=0,
+ idle_age=75, priority=65535,icmp6,metadata=0x4,icmp_type=136,
+ icmp_code=0
+ actions=resubmit(,23)
+ cookie=0x0, duration=75.032s, table=22, n_packets=0, n_bytes=0,
+ idle_age=75, priority=65535,icmp6,metadata=0x4,icmp_type=135,
+ icmp_code=0
+ actions=resubmit(,23)
+ cookie=0x0, duration=75.032s, table=22, n_packets=13, n_bytes=1118,
+ idle_age=49, priority=65535,ct_state=-new+est-rel-inv+trk,
+ metadata=0x4
+ actions=resubmit(,23)
+ cookie=0x0, duration=75.032s, table=22, n_packets=0, n_bytes=0,
+ idle_age=75, priority=65535,ct_state=-new-est+rel-inv+trk,
+ metadata=0x4
+ actions=resubmit(,23)
+ cookie=0x0, duration=75.032s, table=22, n_packets=0, n_bytes=0,
+ idle_age=75, priority=65535,ct_state=+inv+trk,metadata=0x4
+ actions=drop
+ cookie=0x0, duration=75.033s, table=22, n_packets=0, n_bytes=0,
+ idle_age=75, priority=2002,ct_state=+new+trk,ipv6,reg6=0x4,
+ metadata=0x4
+ actions=load:0x1->NXM_NX_REG0[1],resubmit(,23)
+ cookie=0x0, duration=75.032s, table=22, n_packets=15, n_bytes=1816,
+ idle_age=49, priority=2002,ct_state=+new+trk,ip,reg6=0x4,
+ metadata=0x4
+ actions=load:0x1->NXM_NX_REG0[1],resubmit(,23)
+ cookie=0x0, duration=75.032s, table=22, n_packets=0, n_bytes=0,
+ idle_age=75, priority=2002,udp,reg6=0x4,metadata=0x4,
+ nw_dst=203.0.113.0/24,tp_src=68,tp_dst=67
+ actions=load:0x1->NXM_NX_REG0[1],resubmit(,23)
+ cookie=0x0, duration=75.032s, table=22, n_packets=0, n_bytes=0,
+ idle_age=75, priority=2002,udp,reg6=0x4,metadata=0x4,
+ nw_dst=255.255.255.255,tp_src=68,tp_dst=67
+ actions=load:0x1->NXM_NX_REG0[1],resubmit(,23)
+ cookie=0x0, duration=75.033s, table=22, n_packets=0, n_bytes=0,
+ idle_age=75, priority=2001,ip,reg6=0x4,metadata=0x4
+ actions=drop
+ cookie=0x0, duration=75.032s, table=22, n_packets=0, n_bytes=0,
+ idle_age=75, priority=2001,ipv6,reg6=0x4,metadata=0x4
+ actions=drop
+ cookie=0x0, duration=75.032s, table=22, n_packets=6, n_bytes=2236,
+ idle_age=54, priority=1,ip,metadata=0x4
+ actions=load:0x1->NXM_NX_REG0[1],resubmit(,23)
+ cookie=0x0, duration=75.032s, table=22, n_packets=0, n_bytes=0,
+ idle_age=75, priority=1,ipv6,metadata=0x4
+ actions=load:0x1->NXM_NX_REG0[1],resubmit(,23)
+ cookie=0x0, duration=67.064s, table=25, n_packets=0, n_bytes=0,
+ idle_age=67, priority=50,arp,metadata=0x4,arp_tpa=203.0.113.103,
+ arp_op=1
+ actions=move:NXM_OF_ETH_SRC[]->NXM_OF_ETH_DST[],
+ mod_dl_src:fa:16:3e:1c:ca:6a,load:0x2->NXM_OF_ARP_OP[],
+ move:NXM_NX_ARP_SHA[]->NXM_NX_ARP_THA[],
+ load:0xfa163e1cca6a->NXM_NX_ARP_SHA[],
+ move:NXM_OF_ARP_SPA[]->NXM_OF_ARP_TPA[],
+ load:0xcb007167->NXM_OF_ARP_SPA[],
+ move:NXM_NX_REG6[]->NXM_NX_REG7[],load:0->NXM_NX_REG6[],
+ load:0->NXM_OF_IN_PORT[],resubmit(,32)
+ cookie=0x0, duration=75.033s, table=26, n_packets=19, n_bytes=2776,
+ idle_age=44, priority=50,metadata=0x4,dl_dst=fa:16:3e:1c:ca:6a
+ actions=load:0x4->NXM_NX_REG7[],resubmit(,32)
+ cookie=0x0, duration=221031.310s, table=33, n_packets=72, n_bytes=6292,
+ idle_age=20, hard_age=65534, priority=100,reg7=0x3,metadata=0x4
+ actions=load:0x1->NXM_NX_REG7[],resubmit(,33)
+ cookie=0x0, duration=184.992s, table=34, n_packets=2, n_bytes=684,
+ idle_age=112, priority=100,reg6=0x4,reg7=0x4,metadata=0x4
+ actions=drop
+ cookie=0x0, duration=75.034s, table=49, n_packets=0, n_bytes=0,
+ idle_age=75, priority=110,icmp6,metadata=0x4,icmp_type=135,
+ icmp_code=0
+ actions=resubmit(,50)
+ cookie=0x0, duration=75.033s, table=49, n_packets=0, n_bytes=0,
+ idle_age=75, priority=110,icmp6,metadata=0x4,icmp_type=136,
+ icmp_code=0
+ actions=resubmit(,50)
+ cookie=0x0, duration=75.033s, table=49, n_packets=38, n_bytes=6566,
+ idle_age=49, priority=100,ip,metadata=0x4
+ actions=load:0x1->NXM_NX_REG0[0],resubmit(,50)
+ cookie=0x0, duration=75.033s, table=49, n_packets=0, n_bytes=0,
+ idle_age=75, priority=100,ipv6,metadata=0x4
+ actions=load:0x1->NXM_NX_REG0[0],resubmit(,50)
+ cookie=0x0, duration=75.033s, table=52, n_packets=0, n_bytes=0,
+ idle_age=75, priority=65535,ct_state=-new-est+rel-inv+trk,
+ metadata=0x4
+ actions=resubmit(,53)
+ cookie=0x0, duration=75.033s, table=52, n_packets=13, n_bytes=1118,
+ idle_age=49, priority=65535,ct_state=-new+est-rel-inv+trk,
+ metadata=0x4
+ actions=resubmit(,53)
+ cookie=0x0, duration=75.033s, table=52, n_packets=0, n_bytes=0,
+ idle_age=75, priority=65535,icmp6,metadata=0x4,icmp_type=135,
+ icmp_code=0
+ actions=resubmit(,53)
+ cookie=0x0, duration=75.033s, table=52, n_packets=0, n_bytes=0,
+ idle_age=75, priority=65535,icmp6,metadata=0x4,icmp_type=136,
+ icmp_code=0
+ actions=resubmit(,53)
+ cookie=0x0, duration=75.033s, table=52, n_packets=0, n_bytes=0,
+ idle_age=75, priority=65535,ct_state=+inv+trk,metadata=0x4
+ actions=drop
+ cookie=0x0, duration=75.034s, table=52, n_packets=4, n_bytes=1538,
+ idle_age=54, priority=2002,udp,reg7=0x4,metadata=0x4,
+ nw_src=203.0.113.0/24,tp_src=67,tp_dst=68
+ actions=load:0x1->NXM_NX_REG0[1],resubmit(,53)
+ cookie=0x0, duration=75.033s, table=52, n_packets=0, n_bytes=0,
+ idle_age=75, priority=2002,ct_state=+new+trk,ip,reg7=0x4,
+ metadata=0x4,nw_src=203.0.113.103
+ actions=load:0x1->NXM_NX_REG0[1],resubmit(,53)
+ cookie=0x0, duration=2.041s, table=52, n_packets=0, n_bytes=0,
+ idle_age=2, priority=2002,ct_state=+new+trk,ipv6,reg7=0x4,
+ metadata=0x4,ipv6_src=::2/::2
+ actions=load:0x1->NXM_NX_REG0[1],resubmit(,53)
+ cookie=0x0, duration=75.033s, table=52, n_packets=2, n_bytes=698,
+ idle_age=54, priority=2001,ip,reg7=0x4,metadata=0x4
+ actions=drop
+ cookie=0x0, duration=75.033s, table=52, n_packets=0, n_bytes=0,
+ idle_age=75, priority=2001,ipv6,reg7=0x4,metadata=0x4
+ actions=drop
+ cookie=0x0, duration=75.034s, table=52, n_packets=0, n_bytes=0,
+ idle_age=75, priority=1,ipv6,metadata=0x4
+ actions=load:0x1->NXM_NX_REG0[1],resubmit(,53)
+ cookie=0x0, duration=75.033s, table=52, n_packets=19, n_bytes=3212,
+ idle_age=49, priority=1,ip,metadata=0x4
+ actions=load:0x1->NXM_NX_REG0[1],resubmit(,53)
+ cookie=0x0, duration=75.034s, table=54, n_packets=17, n_bytes=2656,
+ idle_age=49, priority=90,ip,reg7=0x4,metadata=0x4,
+ dl_dst=fa:16:3e:1c:ca:6a,nw_dst=203.0.113.103
+ actions=resubmit(,55)
+ cookie=0x0, duration=75.033s, table=54, n_packets=0, n_bytes=0,
+ idle_age=75, priority=90,ip,reg7=0x4,metadata=0x4,
+ dl_dst=fa:16:3e:1c:ca:6a,nw_dst=255.255.255.255
+ actions=resubmit(,55)
+ cookie=0x0, duration=75.033s, table=54, n_packets=0, n_bytes=0,
+ idle_age=75, priority=90,ip,reg7=0x4,metadata=0x4,
+ dl_dst=fa:16:3e:1c:ca:6a,nw_dst=224.0.0.0/4
+ actions=resubmit(,55)
+ cookie=0x0, duration=75.033s, table=54, n_packets=0, n_bytes=0,
+ idle_age=75, priority=80,ip,reg7=0x4,metadata=0x4,
+ dl_dst=fa:16:3e:1c:ca:6a
+ actions=drop
+ cookie=0x0, duration=75.033s, table=54, n_packets=0, n_bytes=0,
+ idle_age=75, priority=80,ipv6,reg7=0x4,metadata=0x4,
+ dl_dst=fa:16:3e:1c:ca:6a
+ actions=drop
+ cookie=0x0, duration=75.033s, table=55, n_packets=21, n_bytes=2860,
+ idle_age=44, priority=50,reg7=0x4,metadata=0x4,
+ dl_dst=fa:16:3e:1c:ca:6a
+ actions=resubmit(,64)
+ cookie=0x0, duration=184.992s, table=64, n_packets=166, n_bytes=15088,
+ idle_age=15, priority=100,reg7=0x4,metadata=0x4
+ actions=output:9
+
+ * For each compute node that only contains a DHCP agent on the subnet, OVN
+ creates the following flows:
+
+ .. code-block:: console
+
+ cookie=0x0, duration=189.649s, table=16, n_packets=0, n_bytes=0,
+ idle_age=189, priority=50,reg6=0x4,metadata=0x4,
+ dl_src=fa:16:3e:1c:ca:6a
+ actions=resubmit(,17)
+ cookie=0x0, duration=189.650s, table=17, n_packets=0, n_bytes=0,
+ idle_age=189, priority=90,udp,reg6=0x4,metadata=0x4,
+ dl_src=fa:16:3e:1c:ca:6a,nw_src=0.0.0.0,
+ nw_dst=255.255.255.255,tp_src=68,tp_dst=67
+ actions=resubmit(,18)
+ cookie=0x0, duration=189.649s, table=17, n_packets=0, n_bytes=0,
+ idle_age=189, priority=90,ip,reg6=0x4,metadata=0x4,
+ dl_src=fa:16:3e:1c:ca:6a,nw_src=203.0.113.103
+ actions=resubmit(,18)
+ cookie=0x0, duration=189.650s, table=17, n_packets=0, n_bytes=0,
+ idle_age=189, priority=80,ipv6,reg6=0x4,metadata=0x4,
+ dl_src=fa:16:3e:1c:ca:6a
+ actions=drop
+ cookie=0x0, duration=189.650s, table=17, n_packets=0, n_bytes=0,
+ idle_age=189, priority=80,ip,reg6=0x4,metadata=0x4,
+ dl_src=fa:16:3e:1c:ca:6a
+ actions=drop
+ cookie=0x0, duration=189.650s, table=18, n_packets=0, n_bytes=0,
+ idle_age=189, priority=90,arp,reg6=0x4,metadata=0x4,
+ dl_src=fa:16:3e:1c:ca:6a,arp_spa=203.0.113.103,
+ arp_sha=fa:16:3e:1c:ca:6a
+ actions=resubmit(,19)
+ cookie=0x0, duration=189.650s, table=18, n_packets=0, n_bytes=0,
+ idle_age=189, priority=80,icmp6,reg6=0x4,metadata=0x4,
+ icmp_type=136,icmp_code=0
+ actions=drop
+ cookie=0x0, duration=189.650s, table=18, n_packets=0, n_bytes=0,
+ idle_age=189, priority=80,icmp6,reg6=0x4,metadata=0x4,
+ icmp_type=135,icmp_code=0
+ actions=drop
+ cookie=0x0, duration=189.649s, table=18, n_packets=0, n_bytes=0,
+ idle_age=189, priority=80,arp,reg6=0x4,metadata=0x4
+ actions=drop
+ cookie=0x0, duration=79.452s, table=19, n_packets=0, n_bytes=0,
+ idle_age=79, priority=110,icmp6,metadata=0x4,icmp_type=135,
+ icmp_code=0
+ actions=resubmit(,20)
+ cookie=0x0, duration=79.450s, table=19, n_packets=0, n_bytes=0,
+ idle_age=79, priority=110,icmp6,metadata=0x4,icmp_type=136,
+ icmp_code=0
+ actions=resubmit(,20)
+ cookie=0x0, duration=79.452s, table=19, n_packets=0, n_bytes=0,
+ idle_age=79, priority=100,ipv6,metadata=0x4
+ actions=load:0x1->NXM_NX_REG0[0],resubmit(,20)
+ cookie=0x0, duration=79.450s, table=19, n_packets=18, n_bytes=3164,
+ idle_age=57, priority=100,ip,metadata=0x4
+ actions=load:0x1->NXM_NX_REG0[0],resubmit(,20)
+ cookie=0x0, duration=79.450s, table=22, n_packets=6, n_bytes=510,
+ idle_age=57, priority=65535,ct_state=-new+est-rel-inv+trk,
+ metadata=0x4
+ actions=resubmit(,23)
+ cookie=0x0, duration=79.450s, table=22, n_packets=0, n_bytes=0,
+ idle_age=79, priority=65535,ct_state=-new-est+rel-inv+trk,
+ metadata=0x4
+ actions=resubmit(,23)
+ cookie=0x0, duration=79.450s, table=22, n_packets=0, n_bytes=0,
+ idle_age=79, priority=65535,icmp6,metadata=0x4,icmp_type=136,
+ icmp_code=0
+ actions=resubmit(,23)
+ cookie=0x0, duration=79.450s, table=22, n_packets=0, n_bytes=0,
+ idle_age=79, priority=65535,icmp6,metadata=0x4,icmp_type=135,
+ icmp_code=0
+ actions=resubmit(,23)
+ cookie=0x0, duration=79.450s, table=22, n_packets=0, n_bytes=0,
+ idle_age=79, priority=65535,ct_state=+inv+trk,metadata=0x4
+ actions=drop
+ cookie=0x0, duration=79.453s, table=22, n_packets=0, n_bytes=0,
+ idle_age=79, priority=2002,ct_state=+new+trk,ipv6,reg6=0x4,
+ metadata=0x4
+ actions=load:0x1->NXM_NX_REG0[1],resubmit(,23)
+ cookie=0x0, duration=79.450s, table=22, n_packets=0, n_bytes=0,
+ idle_age=79, priority=2002,ct_state=+new+trk,ip,reg6=0x4,
+ metadata=0x4
+ actions=load:0x1->NXM_NX_REG0[1],resubmit(,23)
+ cookie=0x0, duration=79.450s, table=22, n_packets=0, n_bytes=0,
+ idle_age=79, priority=2002,udp,reg6=0x4,metadata=0x4,
+ nw_dst=203.0.113.0/24,tp_src=68,tp_dst=67
+ actions=load:0x1->NXM_NX_REG0[1],resubmit(,23)
+ cookie=0x0, duration=79.450s, table=22, n_packets=0, n_bytes=0,
+ idle_age=79, priority=2002,udp,reg6=0x4,metadata=0x4,
+ nw_dst=255.255.255.255,tp_src=68,tp_dst=67
+ actions=load:0x1->NXM_NX_REG0[1],resubmit(,23)
+ cookie=0x0, duration=79.452s, table=22, n_packets=0, n_bytes=0,
+ idle_age=79, priority=2001,ip,reg6=0x4,metadata=0x4
+ actions=drop
+ cookie=0x0, duration=79.450s, table=22, n_packets=0, n_bytes=0,
+ idle_age=79, priority=2001,ipv6,reg6=0x4,metadata=0x4
+ actions=drop
+ cookie=0x0, duration=79.450s, table=22, n_packets=0, n_bytes=0,
+ idle_age=79, priority=1,ipv6,metadata=0x4
+ actions=load:0x1->NXM_NX_REG0[1],resubmit(,23)
+ cookie=0x0, duration=79.450s, table=22, n_packets=12, n_bytes=2654,
+ idle_age=57, priority=1,ip,metadata=0x4
+ actions=load:0x1->NXM_NX_REG0[1],resubmit(,23)
+ cookie=0x0, duration=71.483s, table=25, n_packets=0, n_bytes=0,
+ idle_age=71, priority=50,arp,metadata=0x4,arp_tpa=203.0.113.103,
+ arp_op=1
+ actions=move:NXM_OF_ETH_SRC[]->NXM_OF_ETH_DST[],
+ mod_dl_src:fa:16:3e:1c:ca:6a,load:0x2->NXM_OF_ARP_OP[],
+ move:NXM_NX_ARP_SHA[]->NXM_NX_ARP_THA[],
+ load:0xfa163e1cca6a->NXM_NX_ARP_SHA[],
+ move:NXM_OF_ARP_SPA[]->NXM_OF_ARP_TPA[],
+ load:0xcb007167->NXM_OF_ARP_SPA[],
+ move:NXM_NX_REG6[]->NXM_NX_REG7[],load:0->NXM_NX_REG6[],
+ load:0->NXM_OF_IN_PORT[],resubmit(,32)
+ cookie=0x0, duration=79.450s, table=26, n_packets=8, n_bytes=1258,
+ idle_age=57, priority=50,metadata=0x4,dl_dst=fa:16:3e:1c:ca:6a
+ actions=load:0x4->NXM_NX_REG7[],resubmit(,32)
+ cookie=0x0, duration=182.952s, table=33, n_packets=74, n_bytes=7040,
+ idle_age=18, priority=100,reg7=0x4,metadata=0x4
+ actions=load:0x1->NXM_NX_REG7[],resubmit(,33)
+ cookie=0x0, duration=79.451s, table=49, n_packets=0, n_bytes=0,
+ idle_age=79, priority=110,icmp6,metadata=0x4,icmp_type=135,
+ icmp_code=0
+ actions=resubmit(,50)
+ cookie=0x0, duration=79.450s, table=49, n_packets=0, n_bytes=0,
+ idle_age=79, priority=110,icmp6,metadata=0x4,icmp_type=136,
+ icmp_code=0
+ actions=resubmit(,50)
+ cookie=0x0, duration=79.450s, table=49, n_packets=18, n_bytes=3164,
+ idle_age=57, priority=100,ip,metadata=0x4
+ actions=load:0x1->NXM_NX_REG0[0],resubmit(,50)
+ cookie=0x0, duration=79.450s, table=49, n_packets=0, n_bytes=0,
+ idle_age=79, priority=100,ipv6,metadata=0x4
+ actions=load:0x1->NXM_NX_REG0[0],resubmit(,50)
+ cookie=0x0, duration=79.450s, table=52, n_packets=0, n_bytes=0,
+ idle_age=79, priority=65535,ct_state=-new-est+rel-inv+trk,
+ metadata=0x4
+ actions=resubmit(,53)
+ cookie=0x0, duration=79.450s, table=52, n_packets=6, n_bytes=510,
+ idle_age=57, priority=65535,ct_state=-new+est-rel-inv+trk,
+ metadata=0x4
+ actions=resubmit(,53)
+ cookie=0x0, duration=79.450s, table=52, n_packets=0, n_bytes=0,
+ idle_age=79, priority=65535,icmp6,metadata=0x4,icmp_type=135,
+ icmp_code=0
+ actions=resubmit(,53)
+ cookie=0x0, duration=79.450s, table=52, n_packets=0, n_bytes=0,
+ idle_age=79, priority=65535,icmp6,metadata=0x4,icmp_type=136,
+ icmp_code=0
+ actions=resubmit(,53)
+ cookie=0x0, duration=79.450s, table=52, n_packets=0, n_bytes=0,
+ idle_age=79, priority=65535,ct_state=+inv+trk,metadata=0x4
+ actions=drop
+ cookie=0x0, duration=79.452s, table=52, n_packets=0, n_bytes=0,
+ idle_age=79, priority=2002,udp,reg7=0x4,metadata=0x4,
+ nw_src=203.0.113.0/24,tp_src=67,tp_dst=68
+ actions=load:0x1->NXM_NX_REG0[1],resubmit(,53)
+ cookie=0x0, duration=79.450s, table=52, n_packets=0, n_bytes=0,
+ idle_age=79, priority=2002,ct_state=+new+trk,ip,reg7=0x4,
+ metadata=0x4,nw_src=203.0.113.103
+ actions=load:0x1->NXM_NX_REG0[1],resubmit(,53)
+ cookie=0x0, duration=71.483s, table=52, n_packets=0, n_bytes=0,
+ idle_age=71, priority=2002,ct_state=+new+trk,ipv6,reg7=0x4,
+ metadata=0x4
+ actions=load:0x1->NXM_NX_REG0[1],resubmit(,53)
+ cookie=0x0, duration=79.450s, table=52, n_packets=0, n_bytes=0,
+ idle_age=79, priority=2001,ipv6,reg7=0x4,metadata=0x4
+ actions=drop
+ cookie=0x0, duration=79.450s, table=52, n_packets=0, n_bytes=0,
+ idle_age=79, priority=2001,ip,reg7=0x4,metadata=0x4
+ actions=drop
+ cookie=0x0, duration=79.453s, table=52, n_packets=0, n_bytes=0,
+ idle_age=79, priority=1,ipv6,metadata=0x4
+ actions=load:0x1->NXM_NX_REG0[1],resubmit(,53)
+ cookie=0x0, duration=79.450s, table=52, n_packets=12, n_bytes=2654,
+ idle_age=57, priority=1,ip,metadata=0x4
+ actions=load:0x1->NXM_NX_REG0[1],resubmit(,53)
+ cookie=0x0, duration=79.452s, table=54, n_packets=0, n_bytes=0,
+ idle_age=79, priority=90,ip,reg7=0x4,metadata=0x4,
+ dl_dst=fa:16:3e:1c:ca:6a,nw_dst=255.255.255.255
+ actions=resubmit(,55)
+ cookie=0x0, duration=79.452s, table=54, n_packets=0, n_bytes=0,
+ idle_age=79, priority=90,ip,reg7=0x4,metadata=0x4,
+ dl_dst=fa:16:3e:1c:ca:6a,nw_dst=203.0.113.103
+ actions=resubmit(,55)
+ cookie=0x0, duration=79.452s, table=54, n_packets=0, n_bytes=0,
+ idle_age=79, priority=90,ip,reg7=0x4,metadata=0x4,
+ dl_dst=fa:16:3e:1c:ca:6a,nw_dst=224.0.0.0/4
+ actions=resubmit(,55)
+ cookie=0x0, duration=79.450s, table=54, n_packets=0, n_bytes=0,
+ idle_age=79, priority=80,ip,reg7=0x4,metadata=0x4,
+ dl_dst=fa:16:3e:1c:ca:6a
+ actions=drop
+ cookie=0x0, duration=79.450s, table=54, n_packets=0, n_bytes=0,
+ idle_age=79, priority=80,ipv6,reg7=0x4,metadata=0x4,
+ dl_dst=fa:16:3e:1c:ca:6a
+ actions=drop
+ cookie=0x0, duration=79.450s, table=55, n_packets=0, n_bytes=0,
+ idle_age=79, priority=50,reg7=0x4,metadata=0x4,
+ dl_dst=fa:16:3e:1c:ca:6a
+ actions=resubmit(,64)
diff --git a/doc/source/admin/ovn/refarch/launch-instance-selfservice-network.rst b/doc/source/admin/ovn/refarch/launch-instance-selfservice-network.rst
new file mode 100644
index 00000000000..673fed125fa
--- /dev/null
+++ b/doc/source/admin/ovn/refarch/launch-instance-selfservice-network.rst
@@ -0,0 +1,757 @@
+.. _refarch-launch-instance-selfservice-network:
+
+Launch an instance on a self-service network
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+To launch an instance on a self-service network, follow the same steps as
+:ref:`launching an instance on the provider network
+<refarch-launch-instance-provider-network>`, but use the UUID of the
+self-service network.
+
+OVN operations
+^^^^^^^^^^^^^^
+
+The OVN mechanism driver and OVN perform the following operations when
+launching an instance.
+
+#. The OVN mechanism driver creates a logical port for the instance.
+
+ .. code-block:: console
+
+ _uuid : c754d1d2-a7fb-4dd0-b14c-c076962b06b9
+ addresses : ["fa:16:3e:15:7d:13 192.168.1.5"]
+ enabled : true
+ external_ids : {"neutron:port_name"=""}
+ name : "eaf36f62-5629-4ec4-b8b9-5e562c40e7ae"
+ options : {}
+ parent_name : []
+ port_security : ["fa:16:3e:15:7d:13 192.168.1.5"]
+ tag : []
+ type : ""
+ up : true
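+
+ The new logical port can also be queried directly; a sketch, assuming
+ the ``ovn-nbctl`` client is available on the controller node (the
+ record name is the Neutron port UUID shown above):
+
+ .. code-block:: console
+
+ $ ovn-nbctl list Logical_Switch_Port eaf36f62-5629-4ec4-b8b9-5e562c40e7ae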
+
+#. The OVN mechanism driver updates the appropriate Address Set object(s)
+ with the address of the new instance:
+
+ .. code-block:: console
+
+ _uuid : d0becdea-e1ed-48c4-9afc-e278cdef4629
+ addresses : ["192.168.1.5", "203.0.113.103"]
+ external_ids : {"neutron:security_group_name"=default}
+ name : "as_ip4_90a78a43_b549_4bee_8822_21fcccab58dc"
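+
+ To verify the membership change, the address set can be queried by
+ name; a sketch, assuming ``ovn-nbctl`` is available on the controller
+ node:
+
+ .. code-block:: console
+
+ $ ovn-nbctl list Address_Set as_ip4_90a78a43_b549_4bee_8822_21fcccab58dc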
+
+#. The OVN mechanism driver creates ACL entries for this port and
+ any other ports in the project.
+
+ .. code-block:: console
+
+ _uuid : 00ecbe8f-c82a-4e18-b688-af2a1941cff7
+ action : allow
+ direction : from-lport
+ external_ids : {"neutron:lport"="eaf36f62-5629-4ec4-b8b9-5e562c40e7ae"}
+ log : false
+ match : "inport == \"eaf36f62-5629-4ec4-b8b9-5e562c40e7ae\" && ip4 && (ip4.dst == 255.255.255.255 || ip4.dst == 192.168.1.0/24) && udp && udp.src == 68 && udp.dst == 67"
+ priority : 1002
+
+ _uuid : 2bf5b7ed-008e-4676-bba5-71fe58897886
+ action : allow-related
+ direction : from-lport
+ external_ids : {"neutron:lport"="eaf36f62-5629-4ec4-b8b9-5e562c40e7ae"}
+ log : false
+ match : "inport == \"eaf36f62-5629-4ec4-b8b9-5e562c40e7ae\" && ip4"
+ priority : 1002
+
+ _uuid : 330b4e27-074f-446a-849b-9ab0018b65c5
+ action : allow
+ direction : to-lport
+ external_ids : {"neutron:lport"="eaf36f62-5629-4ec4-b8b9-5e562c40e7ae"}
+ log : false
+ match : "outport == \"eaf36f62-5629-4ec4-b8b9-5e562c40e7ae\" && ip4 && ip4.src == 192.168.1.0/24 && udp && udp.src == 67 && udp.dst == 68"
+ priority : 1002
+
+ _uuid : 683f52f2-4be6-4bd7-a195-6c782daa7840
+ action : allow-related
+ direction : from-lport
+ external_ids : {"neutron:lport"="eaf36f62-5629-4ec4-b8b9-5e562c40e7ae"}
+ log : false
+ match : "inport == \"eaf36f62-5629-4ec4-b8b9-5e562c40e7ae\" && ip6"
+ priority : 1002
+
+ _uuid : 8160f0b4-b344-43d5-bbd4-ca63a71aa4fc
+ action : drop
+ direction : to-lport
+ external_ids : {"neutron:lport"="eaf36f62-5629-4ec4-b8b9-5e562c40e7ae"}
+ log : false
+ match : "outport == \"eaf36f62-5629-4ec4-b8b9-5e562c40e7ae\" && ip"
+ priority : 1001
+
+ _uuid : 97c6b8ca-14ea-4812-8571-95d640a88f4f
+ action : allow-related
+ direction : to-lport
+ external_ids : {"neutron:lport"="eaf36f62-5629-4ec4-b8b9-5e562c40e7ae"}
+ log : false
+ match : "outport == \"eaf36f62-5629-4ec4-b8b9-5e562c40e7ae\" && ip6"
+ priority : 1002
+
+ _uuid : 9cfd8eb5-5daa-422e-8fe8-bd22fd7fa826
+ action : allow-related
+ direction : to-lport
+ external_ids : {"neutron:lport"="eaf36f62-5629-4ec4-b8b9-5e562c40e7ae"}
+ log : false
+ match : "outport == \"eaf36f62-5629-4ec4-b8b9-5e562c40e7ae\" && ip4 && ip4.src == 0.0.0.0/0 && icmp4"
+ priority : 1002
+
+ _uuid : f72c2431-7a64-4cea-b84a-118bdc761be2
+ action : drop
+ direction : from-lport
+ external_ids : {"neutron:lport"="eaf36f62-5629-4ec4-b8b9-5e562c40e7ae"}
+ log : false
+ match : "inport == \"eaf36f62-5629-4ec4-b8b9-5e562c40e7ae\" && ip"
+ priority : 1001
+
+ _uuid : f94133fa-ed27-4d5e-a806-0d528e539cb3
+ action : allow-related
+ direction : to-lport
+ external_ids : {"neutron:lport"="eaf36f62-5629-4ec4-b8b9-5e562c40e7ae"}
+ log : false
+ match : "outport == \"eaf36f62-5629-4ec4-b8b9-5e562c40e7ae\" && ip4 && ip4.src == $as_ip4_90a78a43_b549_4bee_8822_21fcccab58dc"
+ priority : 1002
+
+ _uuid : 7f7a92ff-b7e9-49b0-8be0-0dc388035df3
+ action : allow-related
+ direction : to-lport
+ external_ids : {"neutron:lport"="eaf36f62-5629-4ec4-b8b9-5e562c40e7ae"}
+ log : false
+ match : "outport == \"eaf36f62-5629-4ec4-b8b9-5e562c40e7ae\" && ip6 && ip6.src == $as_ip6_90a78a43_b549_4bee_8822_21fcccab58dc"
+ priority : 1002
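+
+ The full set of ACLs on the logical switch can be listed in one step;
+ a sketch, assuming ``ovn-nbctl`` is available and using the switch
+ name ``neutron-6cc81cae-8c5f-4c09-aaf2-35d0aa95c084``:
+
+ .. code-block:: console
+
+ $ ovn-nbctl acl-list neutron-6cc81cae-8c5f-4c09-aaf2-35d0aa95c084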
+
+#. The OVN mechanism driver updates the logical switch information with
+ the UUIDs of these objects.
+
+ .. code-block:: console
+
+ _uuid : 15e2c80b-1461-4003-9869-80416cd97de5
+ acls : [00ecbe8f-c82a-4e18-b688-af2a1941cff7,
+ 2bf5b7ed-008e-4676-bba5-71fe58897886,
+ 330b4e27-074f-446a-849b-9ab0018b65c5,
+ 683f52f2-4be6-4bd7-a195-6c782daa7840,
+ 7f7a92ff-b7e9-49b0-8be0-0dc388035df3,
+ 8160f0b4-b344-43d5-bbd4-ca63a71aa4fc,
+ 97c6b8ca-14ea-4812-8571-95d640a88f4f,
+ 9cfd8eb5-5daa-422e-8fe8-bd22fd7fa826,
+ f72c2431-7a64-4cea-b84a-118bdc761be2,
+ f94133fa-ed27-4d5e-a806-0d528e539cb3]
+ external_ids : {"neutron:network_name"="selfservice"}
+ name : "neutron-6cc81cae-8c5f-4c09-aaf2-35d0aa95c084"
+ ports : [2df457a5-f71c-4a2f-b9ab-d9e488653872,
+ 67c2737c-b380-492b-883b-438048b48e56,
+ c754d1d2-a7fb-4dd0-b14c-c076962b06b9]
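+
+ The same record can be retrieved by switch name; a sketch, assuming
+ ``ovn-nbctl`` is available on the controller node:
+
+ .. code-block:: console
+
+ $ ovn-nbctl list Logical_Switch neutron-6cc81cae-8c5f-4c09-aaf2-35d0aa95c084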
+
+#. With address sets, the OVN mechanism driver no longer needs to create
+ separate ACLs for the other instances in the project; the address set
+ membership handles this automatically.
+
+#. The OVN northbound service translates the updated Address Set object(s)
+ into corresponding Address Set objects in the OVN southbound database:
+
+ .. code-block:: console
+
+ _uuid : 2addbee3-7084-4fff-8f7b-15b1efebdaff
+ addresses : ["192.168.1.5", "203.0.113.103"]
+ name : "as_ip4_90a78a43_b549_4bee_8822_21fcccab58dc"
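+
+ The southbound copy can be checked with ``ovn-sbctl``; a sketch,
+ assuming the client can reach the southbound database:
+
+ .. code-block:: console
+
+ $ ovn-sbctl list Address_Set as_ip4_90a78a43_b549_4bee_8822_21fcccab58dc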
+
+#. The OVN northbound service adds a Port Binding for the new Logical
+ Switch Port object:
+
+ .. code-block:: console
+
+ _uuid : 7a558e7b-ed7a-424f-a0cf-ab67d2d832d7
+ chassis : b67d6da9-0222-4ab1-a852-ab2607610bf8
+ datapath : 3f6e16b5-a03a-48e5-9b60-7b7a0396c425
+ logical_port : "e9cb7857-4cb1-4e91-aae5-165a7ab5b387"
+ mac : ["fa:16:3e:b6:91:70 192.168.1.5"]
+ options : {}
+ parent_port : []
+ tag : []
+ tunnel_key : 3
+ type : ""
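+
+ The binding can be confirmed from any node with access to the
+ southbound database; a sketch using the logical port name shown above:
+
+ .. code-block:: console
+
+ $ ovn-sbctl list Port_Binding e9cb7857-4cb1-4e91-aae5-165a7ab5b387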
+
+#. The OVN northbound service updates the flooding multicast group
+ for the logical datapath with the new port binding:
+
+ .. code-block:: console
+
+ _uuid : c08d0102-c414-4a47-98d9-dd3fa9f9901c
+ datapath : 0b214af6-8910-489c-926a-fd0ed16a8251
+ name : _MC_flood
+ ports : [3e463ca0-951c-46fd-b6cf-05392fa3aa1f,
+ 794a6f03-7941-41ed-b1c6-0e00c1e18da0,
+ fa7b294d-2a62-45ae-8de3-a41c002de6de]
+ tunnel_key : 65535
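+
+ Multicast groups can be inspected with ``ovn-sbctl``; a sketch that
+ lists every group, including the updated ``_MC_flood`` entry:
+
+ .. code-block:: console
+
+ $ ovn-sbctl list Multicast_Group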
+
+#. The OVN northbound service adds Logical Flows based on the updated
+ Address Set, ACL, and Logical_Switch_Port objects:
+
+ .. code-block:: console
+
+ Datapath: 3f6e16b5-a03a-48e5-9b60-7b7a0396c425 Pipeline: ingress
+ table= 0( ls_in_port_sec_l2), priority= 50,
+ match=(inport == "e9cb7857-4cb1-4e91-aae5-165a7ab5b387" &&
+ eth.src == {fa:16:3e:b6:a3:54}),
+ action=(next;)
+ table= 1( ls_in_port_sec_ip), priority= 90,
+ match=(inport == "e9cb7857-4cb1-4e91-aae5-165a7ab5b387" &&
+ eth.src == fa:16:3e:b6:a3:54 && ip4.src == 0.0.0.0 &&
+ ip4.dst == 255.255.255.255 && udp.src == 68 && udp.dst == 67),
+ action=(next;)
+ table= 1( ls_in_port_sec_ip), priority= 90,
+ match=(inport == "e9cb7857-4cb1-4e91-aae5-165a7ab5b387" &&
+ eth.src == fa:16:3e:b6:a3:54 && ip4.src == {192.168.1.5}),
+ action=(next;)
+ table= 1( ls_in_port_sec_ip), priority= 80,
+ match=(inport == "e9cb7857-4cb1-4e91-aae5-165a7ab5b387" &&
+ eth.src == fa:16:3e:b6:a3:54 && ip),
+ action=(drop;)
+ table= 2( ls_in_port_sec_nd), priority= 90,
+ match=(inport == "e9cb7857-4cb1-4e91-aae5-165a7ab5b387" &&
+ eth.src == fa:16:3e:b6:a3:54 && arp.sha == fa:16:3e:b6:a3:54 &&
+ (arp.spa == 192.168.1.5 )),
+ action=(next;)
+ table= 2( ls_in_port_sec_nd), priority= 80,
+ match=(inport == "e9cb7857-4cb1-4e91-aae5-165a7ab5b387" &&
+ (arp || nd)),
+ action=(drop;)
+ table= 3( ls_in_pre_acl), priority= 110, match=(nd),
+ action=(next;)
+ table= 3( ls_in_pre_acl), priority= 100, match=(ip),
+ action=(reg0[0] = 1; next;)
+ table= 6( ls_in_acl), priority=65535,
+ match=(!ct.est && ct.rel && !ct.new && !ct.inv),
+ action=(next;)
+ table= 6( ls_in_acl), priority=65535,
+ match=(ct.est && !ct.rel && !ct.new && !ct.inv),
+ action=(next;)
+ table= 6( ls_in_acl), priority=65535, match=(ct.inv),
+ action=(drop;)
+ table= 6( ls_in_acl), priority=65535, match=(nd),
+ action=(next;)
+ table= 6( ls_in_acl), priority= 2002,
+ match=(ct.new && (inport == "e9cb7857-4cb1-4e91-aae5-165a7ab5b387" &&
+ ip6)),
+ action=(reg0[1] = 1; next;)
+ table= 6( ls_in_acl), priority= 2002,
+ match=(inport == "e9cb7857-4cb1-4e91-aae5-165a7ab5b387" && ip4 &&
+ (ip4.dst == 255.255.255.255 || ip4.dst == 192.168.1.0/24) &&
+ udp && udp.src == 68 && udp.dst == 67),
+ action=(reg0[1] = 1; next;)
+ table= 6( ls_in_acl), priority= 2002,
+ match=(ct.new && (inport == "e9cb7857-4cb1-4e91-aae5-165a7ab5b387" &&
+ ip4)),
+ action=(reg0[1] = 1; next;)
+ table= 6( ls_in_acl), priority= 2001,
+ match=(inport == "e9cb7857-4cb1-4e91-aae5-165a7ab5b387" && ip),
+ action=(drop;)
+ table= 6( ls_in_acl), priority= 1, match=(ip),
+ action=(reg0[1] = 1; next;)
+ table= 9( ls_in_arp_nd_rsp), priority= 50,
+ match=(arp.tpa == 192.168.1.5 && arp.op == 1),
+ action=(eth.dst = eth.src; eth.src = fa:16:3e:b6:a3:54; arp.op = 2; /* ARP reply */ arp.tha = arp.sha; arp.sha = fa:16:3e:b6:a3:54; arp.tpa = arp.spa; arp.spa = 192.168.1.5; outport = inport; inport = ""; /* Allow sending out inport. */ output;)
+ table=10( ls_in_l2_lkup), priority= 50,
+ match=(eth.dst == fa:16:3e:b6:a3:54),
+ action=(outport = "e9cb7857-4cb1-4e91-aae5-165a7ab5b387"; output;)
+ Datapath: 3f6e16b5-a03a-48e5-9b60-7b7a0396c425 Pipeline: egress
+ table= 1( ls_out_pre_acl), priority= 110, match=(nd),
+ action=(next;)
+ table= 1( ls_out_pre_acl), priority= 100, match=(ip),
+ action=(reg0[0] = 1; next;)
+ table= 4( ls_out_acl), priority=65535, match=(nd),
+ action=(next;)
+ table= 4( ls_out_acl), priority=65535,
+ match=(!ct.est && ct.rel && !ct.new && !ct.inv),
+ action=(next;)
+ table= 4( ls_out_acl), priority=65535,
+ match=(ct.est && !ct.rel && !ct.new && !ct.inv),
+ action=(next;)
+ table= 4( ls_out_acl), priority=65535, match=(ct.inv),
+ action=(drop;)
+ table= 4( ls_out_acl), priority= 2002,
+ match=(ct.new &&
+ (outport == "e9cb7857-4cb1-4e91-aae5-165a7ab5b387" && ip6 &&
+ ip6.src == $as_ip6_90a78a43_b549_4bee_8822_21fcccab58dc)),
+ action=(reg0[1] = 1; next;)
+ table= 4( ls_out_acl), priority= 2002,
+ match=(ct.new &&
+ (outport == "e9cb7857-4cb1-4e91-aae5-165a7ab5b387" && ip4 &&
+ ip4.src == $as_ip4_90a78a43_b549_4bee_8822_21fcccab58dc)),
+ action=(reg0[1] = 1; next;)
+ table= 4( ls_out_acl), priority= 2002,
+ match=(outport == "e9cb7857-4cb1-4e91-aae5-165a7ab5b387" && ip4 &&
+ ip4.src == 192.168.1.0/24 && udp && udp.src == 67 && udp.dst == 68),
+ action=(reg0[1] = 1; next;)
+ table= 4( ls_out_acl), priority= 2001,
+ match=(outport == "e9cb7857-4cb1-4e91-aae5-165a7ab5b387" && ip),
+ action=(drop;)
+ table= 4( ls_out_acl), priority= 1, match=(ip),
+ action=(reg0[1] = 1; next;)
+ table= 6( ls_out_port_sec_ip), priority= 90,
+ match=(outport == "e9cb7857-4cb1-4e91-aae5-165a7ab5b387" &&
+ eth.dst == fa:16:3e:b6:a3:54 &&
+ ip4.dst == {255.255.255.255, 224.0.0.0/4, 192.168.1.5}),
+ action=(next;)
+ table= 6( ls_out_port_sec_ip), priority= 80,
+ match=(outport == "e9cb7857-4cb1-4e91-aae5-165a7ab5b387" &&
+ eth.dst == fa:16:3e:b6:a3:54 && ip),
+ action=(drop;)
+ table= 7( ls_out_port_sec_l2), priority= 50,
+ match=(outport == "e9cb7857-4cb1-4e91-aae5-165a7ab5b387" &&
+ eth.dst == {fa:16:3e:b6:a3:54}),
+ action=(output;)
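+
+ Output in this format can be reproduced with ``ovn-sbctl``; a sketch,
+ restricted to the datapath shown above:
+
+ .. code-block:: console
+
+ $ ovn-sbctl lflow-list 3f6e16b5-a03a-48e5-9b60-7b7a0396c425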
+
+#. The OVN controller service on each compute node translates these objects
+ into flows on the integration bridge ``br-int``. Exact flows depend on
+ whether the compute node containing the instance also contains a DHCP agent
+ on the subnet.
+
+ * On the compute node containing the instance, the Compute service creates
+ a port that connects the instance to the integration bridge, and OVN
+ creates the following flows:
+
+ .. code-block:: console
+
+ # ovs-ofctl show br-int
+ OFPT_FEATURES_REPLY (xid=0x2): dpid:000022024a1dc045
+ n_tables:254, n_buffers:256
+ capabilities: FLOW_STATS TABLE_STATS PORT_STATS QUEUE_STATS ARP_MATCH_IP
+ actions: output enqueue set_vlan_vid set_vlan_pcp strip_vlan mod_dl_src mod_dl_dst mod_nw_src mod_nw_dst mod_nw_tos mod_tp_src mod_tp_dst
+ 12(tapeaf36f62-56): addr:fe:16:3e:15:7d:13
+ config: 0
+ state: 0
+ current: 10MB-FD COPPER
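+
+ A flow dump in the format shown below can be produced on the compute
+ node; a sketch:
+
+ .. code-block:: console
+
+ # ovs-ofctl dump-flows br-int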
+
+ .. code-block:: console
+
+ cookie=0x0, duration=179.460s, table=0, n_packets=122, n_bytes=10556,
+ idle_age=1, priority=100,in_port=12
+ actions=load:0x4->NXM_NX_REG5[],load:0x5->OXM_OF_METADATA[],
+ load:0x3->NXM_NX_REG6[],resubmit(,16)
+ cookie=0x0, duration=187.408s, table=16, n_packets=122, n_bytes=10556,
+ idle_age=1, priority=50,reg6=0x3,metadata=0x5,
+ dl_src=fa:16:3e:15:7d:13
+ actions=resubmit(,17)
+ cookie=0x0, duration=187.408s, table=17, n_packets=2, n_bytes=684,
+ idle_age=84, priority=90,udp,reg6=0x3,metadata=0x5,
+ dl_src=fa:16:3e:15:7d:13,nw_src=0.0.0.0,nw_dst=255.255.255.255,
+ tp_src=68,tp_dst=67
+ actions=resubmit(,18)
+ cookie=0x0, duration=187.408s, table=17, n_packets=98, n_bytes=8276,
+ idle_age=1, priority=90,ip,reg6=0x3,metadata=0x5,
+ dl_src=fa:16:3e:15:7d:13,nw_src=192.168.1.5
+ actions=resubmit(,18)
+ cookie=0x0, duration=187.408s, table=17, n_packets=17, n_bytes=1386,
+ idle_age=55, priority=80,ipv6,reg6=0x3,metadata=0x5,
+ dl_src=fa:16:3e:15:7d:13
+ actions=drop
+ cookie=0x0, duration=187.408s, table=17, n_packets=0, n_bytes=0,
+ idle_age=187, priority=80,ip,reg6=0x3,metadata=0x5,
+ dl_src=fa:16:3e:15:7d:13
+ actions=drop
+ cookie=0x0, duration=187.408s, table=18, n_packets=5, n_bytes=210,
+ idle_age=10, priority=90,arp,reg6=0x3,metadata=0x5,
+ dl_src=fa:16:3e:15:7d:13,arp_spa=192.168.1.5,
+ arp_sha=fa:16:3e:15:7d:13
+ actions=resubmit(,19)
+ cookie=0x0, duration=187.408s, table=18, n_packets=0, n_bytes=0,
+ idle_age=187, priority=80,icmp6,reg6=0x3,metadata=0x5,
+ icmp_type=135,icmp_code=0
+ actions=drop
+ cookie=0x0, duration=187.408s, table=18, n_packets=0, n_bytes=0,
+ idle_age=187, priority=80,icmp6,reg6=0x3,metadata=0x5,
+ icmp_type=136,icmp_code=0
+ actions=drop
+ cookie=0x0, duration=187.408s, table=18, n_packets=0, n_bytes=0,
+ idle_age=187, priority=80,arp,reg6=0x3,metadata=0x5
+ actions=drop
+ cookie=0x0, duration=47.068s, table=19, n_packets=0, n_bytes=0,
+ idle_age=47, priority=110,icmp6,metadata=0x5,icmp_type=135,
+ icmp_code=0
+ actions=resubmit(,20)
+ cookie=0x0, duration=47.068s, table=19, n_packets=0, n_bytes=0,
+ idle_age=47, priority=110,icmp6,metadata=0x5,icmp_type=136,
+ icmp_code=0
+ actions=resubmit(,20)
+ cookie=0x0, duration=47.068s, table=19, n_packets=33, n_bytes=4081,
+ idle_age=0, priority=100,ip,metadata=0x5
+ actions=load:0x1->NXM_NX_REG0[0],resubmit(,20)
+ cookie=0x0, duration=47.068s, table=19, n_packets=0, n_bytes=0,
+ idle_age=47, priority=100,ipv6,metadata=0x5
+ actions=load:0x1->NXM_NX_REG0[0],resubmit(,20)
+ cookie=0x0, duration=47.068s, table=22, n_packets=15, n_bytes=1392,
+ idle_age=0, priority=65535,ct_state=-new+est-rel-inv+trk,
+ metadata=0x5
+ actions=resubmit(,23)
+ cookie=0x0, duration=47.068s, table=22, n_packets=0, n_bytes=0,
+ idle_age=47, priority=65535,ct_state=-new-est+rel-inv+trk,
+ metadata=0x5
+ actions=resubmit(,23)
+ cookie=0x0, duration=47.068s, table=22, n_packets=0, n_bytes=0,
+ idle_age=47, priority=65535,icmp6,metadata=0x5,icmp_type=135,
+ icmp_code=0
+ actions=resubmit(,23)
+ cookie=0x0, duration=47.068s, table=22, n_packets=0, n_bytes=0,
+ idle_age=47, priority=65535,icmp6,metadata=0x5,icmp_type=136,
+ icmp_code=0
+ actions=resubmit(,23)
+ cookie=0x0, duration=47.068s, table=22, n_packets=0, n_bytes=0,
+ idle_age=47, priority=65535,ct_state=+inv+trk,metadata=0x5
+ actions=drop
+ cookie=0x0, duration=47.068s, table=22, n_packets=0, n_bytes=0,
+ idle_age=47, priority=2002,ct_state=+new+trk,ipv6,reg6=0x3,
+ metadata=0x5
+ actions=load:0x1->NXM_NX_REG0[1],resubmit(,23)
+ cookie=0x0, duration=47.068s, table=22, n_packets=16, n_bytes=1922,
+ idle_age=2, priority=2002,ct_state=+new+trk,ip,reg6=0x3,
+ metadata=0x5
+ actions=load:0x1->NXM_NX_REG0[1],resubmit(,23)
+ cookie=0x0, duration=47.068s, table=22, n_packets=0, n_bytes=0,
+ idle_age=47, priority=2002,udp,reg6=0x3,metadata=0x5,
+ nw_dst=255.255.255.255,tp_src=68,tp_dst=67
+ actions=load:0x1->NXM_NX_REG0[1],resubmit(,23)
+ cookie=0x0, duration=47.068s, table=22, n_packets=0, n_bytes=0,
+ idle_age=47, priority=2002,udp,reg6=0x3,metadata=0x5,
+ nw_dst=192.168.1.0/24,tp_src=68,tp_dst=67
+ actions=load:0x1->NXM_NX_REG0[1],resubmit(,23)
+ cookie=0x0, duration=47.069s, table=22, n_packets=0, n_bytes=0,
+ idle_age=47, priority=2001,ipv6,reg6=0x3,metadata=0x5
+ actions=drop
+ cookie=0x0, duration=47.068s, table=22, n_packets=0, n_bytes=0,
+ idle_age=47, priority=2001,ip,reg6=0x3,metadata=0x5
+ actions=drop
+ cookie=0x0, duration=47.068s, table=22, n_packets=2, n_bytes=767,
+ idle_age=27, priority=1,ip,metadata=0x5
+ actions=load:0x1->NXM_NX_REG0[1],resubmit(,23)
+ cookie=0x0, duration=47.068s, table=22, n_packets=0, n_bytes=0,
+ idle_age=47, priority=1,ipv6,metadata=0x5
+ actions=load:0x1->NXM_NX_REG0[1],resubmit(,23)
+ cookie=0x0, duration=179.457s, table=25, n_packets=2, n_bytes=84,
+ idle_age=33, priority=50,arp,metadata=0x5,arp_tpa=192.168.1.5,
+ arp_op=1
+ actions=move:NXM_OF_ETH_SRC[]->NXM_OF_ETH_DST[],
+ mod_dl_src:fa:16:3e:15:7d:13,load:0x2->NXM_OF_ARP_OP[],
+ move:NXM_NX_ARP_SHA[]->NXM_NX_ARP_THA[],
+ load:0xfa163e157d13->NXM_NX_ARP_SHA[],
+ move:NXM_OF_ARP_SPA[]->NXM_OF_ARP_TPA[],
+ load:0xc0a80105->NXM_OF_ARP_SPA[],
+ move:NXM_NX_REG6[]->NXM_NX_REG7[],
+ load:0->NXM_NX_REG6[],load:0->NXM_OF_IN_PORT[],resubmit(,32)
+ cookie=0x0, duration=187.408s, table=26, n_packets=50, n_bytes=4806,
+ idle_age=1, priority=50,metadata=0x5,dl_dst=fa:16:3e:15:7d:13
+ actions=load:0x3->NXM_NX_REG7[],resubmit(,32)
+ cookie=0x0, duration=469.575s, table=33, n_packets=74, n_bytes=7040,
+ idle_age=305, priority=100,reg7=0x4,metadata=0x4
+ actions=load:0x1->NXM_NX_REG7[],resubmit(,33)
+ cookie=0x0, duration=179.460s, table=34, n_packets=2, n_bytes=684,
+ idle_age=84, priority=100,reg6=0x3,reg7=0x3,metadata=0x5
+ actions=drop
+ cookie=0x0, duration=47.069s, table=49, n_packets=0, n_bytes=0,
+ idle_age=47, priority=110,icmp6,metadata=0x5,icmp_type=135,
+ icmp_code=0
+ actions=resubmit(,50)
+ cookie=0x0, duration=47.068s, table=49, n_packets=0, n_bytes=0,
+ idle_age=47, priority=110,icmp6,metadata=0x5,icmp_type=136,
+ icmp_code=0
+ actions=resubmit(,50)
+ cookie=0x0, duration=47.068s, table=49, n_packets=34, n_bytes=4455,
+ idle_age=0, priority=100,ip,metadata=0x5
+ actions=load:0x1->NXM_NX_REG0[0],resubmit(,50)
+ cookie=0x0, duration=47.068s, table=49, n_packets=0, n_bytes=0,
+ idle_age=47, priority=100,ipv6,metadata=0x5
+ actions=load:0x1->NXM_NX_REG0[0],resubmit(,50)
+ cookie=0x0, duration=47.069s, table=52, n_packets=0, n_bytes=0,
+ idle_age=47, priority=65535,ct_state=+inv+trk,metadata=0x5
+ actions=drop
+ cookie=0x0, duration=47.069s, table=52, n_packets=0, n_bytes=0,
+ idle_age=47, priority=65535,icmp6,metadata=0x5,icmp_type=136,
+ icmp_code=0
+ actions=resubmit(,53)
+ cookie=0x0, duration=47.068s, table=52, n_packets=0, n_bytes=0,
+ idle_age=47, priority=65535,icmp6,metadata=0x5,icmp_type=135,
+ icmp_code=0
+ actions=resubmit(,53)
+ cookie=0x0, duration=47.068s, table=52, n_packets=22, n_bytes=2000,
+ idle_age=0, priority=65535,ct_state=-new+est-rel-inv+trk,
+ metadata=0x5
+ actions=resubmit(,53)
+ cookie=0x0, duration=47.068s, table=52, n_packets=0, n_bytes=0,
+ idle_age=47, priority=65535,ct_state=-new-est+rel-inv+trk,
+ metadata=0x5
+ actions=resubmit(,53)
+ cookie=0x0, duration=47.068s, table=52, n_packets=0, n_bytes=0,
+ idle_age=47, priority=2002,ct_state=+new+trk,ip,reg7=0x3,
+ metadata=0x5,nw_src=192.168.1.5
+ actions=load:0x1->NXM_NX_REG0[1],resubmit(,53)
+ cookie=0x0, duration=47.068s, table=52, n_packets=0, n_bytes=0,
+ idle_age=47, priority=2002,ct_state=+new+trk,ip,reg7=0x3,
+ metadata=0x5,nw_src=203.0.113.103
+ actions=load:0x1->NXM_NX_REG0[1],resubmit(,53)
+ cookie=0x0, duration=47.068s, table=52, n_packets=3, n_bytes=1141,
+ idle_age=27, priority=2002,udp,reg7=0x3,metadata=0x5,
+ nw_src=192.168.1.0/24,tp_src=67,tp_dst=68
+ actions=load:0x1->NXM_NX_REG0[1],resubmit(,53)
+ cookie=0x0, duration=39.497s, table=52, n_packets=0, n_bytes=0,
+ idle_age=39, priority=2002,ct_state=+new+trk,ipv6,reg7=0x3,
+ metadata=0x5
+ actions=load:0x1->NXM_NX_REG0[1],resubmit(,53)
+ cookie=0x0, duration=47.068s, table=52, n_packets=0, n_bytes=0,
+ idle_age=47, priority=2001,ip,reg7=0x3,metadata=0x5
+ actions=drop
+ cookie=0x0, duration=47.068s, table=52, n_packets=0, n_bytes=0,
+ idle_age=47, priority=2001,ipv6,reg7=0x3,metadata=0x5
+ actions=drop
+ cookie=0x0, duration=47.068s, table=52, n_packets=9, n_bytes=1314,
+ idle_age=2, priority=1,ip,metadata=0x5
+ actions=load:0x1->NXM_NX_REG0[1],resubmit(,53)
+ cookie=0x0, duration=47.068s, table=52, n_packets=0, n_bytes=0,
+ idle_age=47, priority=1,ipv6,metadata=0x5
+ actions=load:0x1->NXM_NX_REG0[1],resubmit(,53)
+ cookie=0x0, duration=47.068s, table=54, n_packets=23, n_bytes=2945,
+ idle_age=0, priority=90,ip,reg7=0x3,metadata=0x5,
+ dl_dst=fa:16:3e:15:7d:13,nw_dst=192.168.1.11
+ actions=resubmit(,55)
+ cookie=0x0, duration=47.068s, table=54, n_packets=0, n_bytes=0,
+ idle_age=47, priority=90,ip,reg7=0x3,metadata=0x5,
+ dl_dst=fa:16:3e:15:7d:13,nw_dst=255.255.255.255
+ actions=resubmit(,55)
+ cookie=0x0, duration=47.068s, table=54, n_packets=0, n_bytes=0,
+ idle_age=47, priority=90,ip,reg7=0x3,metadata=0x5,
+ dl_dst=fa:16:3e:15:7d:13,nw_dst=224.0.0.0/4
+ actions=resubmit(,55)
+ cookie=0x0, duration=47.068s, table=54, n_packets=0, n_bytes=0,
+ idle_age=47, priority=80,ip,reg7=0x3,metadata=0x5,
+ dl_dst=fa:16:3e:15:7d:13
+ actions=drop
+ cookie=0x0, duration=47.068s, table=54, n_packets=0, n_bytes=0,
+ idle_age=47, priority=80,ipv6,reg7=0x3,metadata=0x5,
+ dl_dst=fa:16:3e:15:7d:13
+ actions=drop
+ cookie=0x0, duration=47.068s, table=55, n_packets=25, n_bytes=3029,
+ idle_age=0, priority=50,reg7=0x3,metadata=0x7,
+ dl_dst=fa:16:3e:15:7d:13
+ actions=resubmit(,64)
+ cookie=0x0, duration=179.460s, table=64, n_packets=116, n_bytes=10623,
+ idle_age=1, priority=100,reg7=0x3,metadata=0x5
+ actions=output:12
+
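+     The ARP responder flow in table 25 above loads the port MAC address
+     ``fa:16:3e:15:7d:13`` and the IPv4 address ``192.168.1.5`` as the hex
+     constants ``0xfa163e157d13`` and ``0xc0a80105``. The IPv4 constant is
+     simply the four address octets packed into one 32-bit value, which can
+     be reproduced with shell arithmetic:
+
+     .. code-block:: console
+
+        $ printf '0x%x\n' $(( (192 << 24) | (168 << 16) | (1 << 8) | 5 ))
+        0xc0a80105
+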
+ * For each compute node that only contains a DHCP agent on the subnet,
+ OVN creates the following flows:
+
+ .. code-block:: console
+
+ cookie=0x0, duration=192.587s, table=16, n_packets=0, n_bytes=0,
+ idle_age=192, priority=50,reg6=0x3,metadata=0x5,
+ dl_src=fa:16:3e:15:7d:13
+ actions=resubmit(,17)
+ cookie=0x0, duration=192.587s, table=17, n_packets=0, n_bytes=0,
+ idle_age=192, priority=90,ip,reg6=0x3,metadata=0x5,
+ dl_src=fa:16:3e:15:7d:13,nw_src=192.168.1.5
+ actions=resubmit(,18)
+ cookie=0x0, duration=192.587s, table=17, n_packets=0, n_bytes=0,
+ idle_age=192, priority=90,udp,reg6=0x3,metadata=0x5,
+ dl_src=fa:16:3e:15:7d:13,nw_src=0.0.0.0,
+ nw_dst=255.255.255.255,tp_src=68,tp_dst=67
+ actions=resubmit(,18)
+ cookie=0x0, duration=192.587s, table=17, n_packets=0, n_bytes=0,
+ idle_age=192, priority=80,ipv6,reg6=0x3,metadata=0x5,
+ dl_src=fa:16:3e:15:7d:13
+ actions=drop
+ cookie=0x0, duration=192.587s, table=17, n_packets=0, n_bytes=0,
+ idle_age=192, priority=80,ip,reg6=0x3,metadata=0x5,
+ dl_src=fa:16:3e:15:7d:13
+ actions=drop
+ cookie=0x0, duration=192.587s, table=18, n_packets=0, n_bytes=0,
+ idle_age=192, priority=90,arp,reg6=0x3,metadata=0x5,
+ dl_src=fa:16:3e:15:7d:13,arp_spa=192.168.1.5,
+ arp_sha=fa:16:3e:15:7d:13
+ actions=resubmit(,19)
+ cookie=0x0, duration=192.587s, table=18, n_packets=0, n_bytes=0,
+ idle_age=192, priority=80,arp,reg6=0x3,metadata=0x5
+ actions=drop
+ cookie=0x0, duration=192.587s, table=18, n_packets=0, n_bytes=0,
+ idle_age=192, priority=80,icmp6,reg6=0x3,metadata=0x5,
+ icmp_type=135,icmp_code=0
+ actions=drop
+ cookie=0x0, duration=192.587s, table=18, n_packets=0, n_bytes=0,
+ idle_age=192, priority=80,icmp6,reg6=0x3,metadata=0x5,
+ icmp_type=136,icmp_code=0
+ actions=drop
+ cookie=0x0, duration=47.068s, table=19, n_packets=0, n_bytes=0,
+ idle_age=47, priority=110,icmp6,metadata=0x5,icmp_type=135,
+ icmp_code=0
+ actions=resubmit(,20)
+ cookie=0x0, duration=47.068s, table=19, n_packets=0, n_bytes=0,
+ idle_age=47, priority=110,icmp6,metadata=0x5,icmp_type=136,
+ icmp_code=0
+ actions=resubmit(,20)
+ cookie=0x0, duration=47.068s, table=19, n_packets=33, n_bytes=4081,
+ idle_age=0, priority=100,ip,metadata=0x5
+ actions=load:0x1->NXM_NX_REG0[0],resubmit(,20)
+ cookie=0x0, duration=47.068s, table=19, n_packets=0, n_bytes=0,
+ idle_age=47, priority=100,ipv6,metadata=0x5
+ actions=load:0x1->NXM_NX_REG0[0],resubmit(,20)
+ cookie=0x0, duration=47.068s, table=22, n_packets=15, n_bytes=1392,
+ idle_age=0, priority=65535,ct_state=-new+est-rel-inv+trk,
+ metadata=0x5
+ actions=resubmit(,23)
+ cookie=0x0, duration=47.068s, table=22, n_packets=0, n_bytes=0,
+ idle_age=47, priority=65535,ct_state=-new-est+rel-inv+trk,
+ metadata=0x5
+ actions=resubmit(,23)
+ cookie=0x0, duration=47.068s, table=22, n_packets=0, n_bytes=0,
+ idle_age=47, priority=65535,icmp6,metadata=0x5,icmp_type=135,
+ icmp_code=0
+ actions=resubmit(,23)
+ cookie=0x0, duration=47.068s, table=22, n_packets=0, n_bytes=0,
+ idle_age=47, priority=65535,icmp6,metadata=0x5,icmp_type=136,
+ icmp_code=0
+ actions=resubmit(,23)
+ cookie=0x0, duration=47.068s, table=22, n_packets=0, n_bytes=0,
+ idle_age=47, priority=65535,ct_state=+inv+trk,metadata=0x5
+ actions=drop
+ cookie=0x0, duration=47.068s, table=22, n_packets=0, n_bytes=0,
+ idle_age=47, priority=2002,ct_state=+new+trk,ipv6,reg6=0x3,
+ metadata=0x5
+ actions=load:0x1->NXM_NX_REG0[1],resubmit(,23)
+ cookie=0x0, duration=47.068s, table=22, n_packets=16, n_bytes=1922,
+ idle_age=2, priority=2002,ct_state=+new+trk,ip,reg6=0x3,
+ metadata=0x5
+ actions=load:0x1->NXM_NX_REG0[1],resubmit(,23)
+ cookie=0x0, duration=47.068s, table=22, n_packets=0, n_bytes=0,
+ idle_age=47, priority=2002,udp,reg6=0x3,metadata=0x5,
+ nw_dst=255.255.255.255,tp_src=68,tp_dst=67
+ actions=load:0x1->NXM_NX_REG0[1],resubmit(,23)
+ cookie=0x0, duration=47.068s, table=22, n_packets=0, n_bytes=0,
+ idle_age=47, priority=2002,udp,reg6=0x3,metadata=0x5,
+ nw_dst=192.168.1.0/24,tp_src=68,tp_dst=67
+ actions=load:0x1->NXM_NX_REG0[1],resubmit(,23)
+ cookie=0x0, duration=47.069s, table=22, n_packets=0, n_bytes=0,
+ idle_age=47, priority=2001,ipv6,reg6=0x3,metadata=0x5
+ actions=drop
+ cookie=0x0, duration=47.068s, table=22, n_packets=0, n_bytes=0,
+ idle_age=47, priority=2001,ip,reg6=0x3,metadata=0x5
+ actions=drop
+ cookie=0x0, duration=47.068s, table=22, n_packets=2, n_bytes=767,
+ idle_age=27, priority=1,ip,metadata=0x5
+ actions=load:0x1->NXM_NX_REG0[1],resubmit(,23)
+ cookie=0x0, duration=47.068s, table=22, n_packets=0, n_bytes=0,
+ idle_age=47, priority=1,ipv6,metadata=0x5
+ actions=load:0x1->NXM_NX_REG0[1],resubmit(,23)
+ cookie=0x0, duration=179.457s, table=25, n_packets=2, n_bytes=84,
+ idle_age=33, priority=50,arp,metadata=0x5,arp_tpa=192.168.1.5,
+ arp_op=1
+ actions=move:NXM_OF_ETH_SRC[]->NXM_OF_ETH_DST[],
+ mod_dl_src:fa:16:3e:15:7d:13,load:0x2->NXM_OF_ARP_OP[],
+ move:NXM_NX_ARP_SHA[]->NXM_NX_ARP_THA[],
+ load:0xfa163e157d13->NXM_NX_ARP_SHA[],
+ move:NXM_OF_ARP_SPA[]->NXM_OF_ARP_TPA[],
+ load:0xc0a80105->NXM_OF_ARP_SPA[],
+ move:NXM_NX_REG6[]->NXM_NX_REG7[],
+ load:0->NXM_NX_REG6[],load:0->NXM_OF_IN_PORT[],resubmit(,32)
+ cookie=0x0, duration=192.587s, table=26, n_packets=61, n_bytes=5607,
+ idle_age=6, priority=50,metadata=0x5,dl_dst=fa:16:3e:15:7d:13
+ actions=load:0x3->NXM_NX_REG7[],resubmit(,32)
+ cookie=0x0, duration=184.640s, table=32, n_packets=61, n_bytes=5607,
+ idle_age=6, priority=100,reg7=0x3,metadata=0x5
+ actions=load:0x5->NXM_NX_TUN_ID[0..23],
+ set_field:0x3/0xffffffff->tun_metadata0,
+ move:NXM_NX_REG6[0..14]->NXM_NX_TUN_METADATA0[16..30],output:4
+ cookie=0x0, duration=47.069s, table=49, n_packets=0, n_bytes=0,
+ idle_age=47, priority=110,icmp6,metadata=0x5,icmp_type=135,
+ icmp_code=0
+ actions=resubmit(,50)
+ cookie=0x0, duration=47.068s, table=49, n_packets=0, n_bytes=0,
+ idle_age=47, priority=110,icmp6,metadata=0x5,icmp_type=136,
+ icmp_code=0
+ actions=resubmit(,50)
+ cookie=0x0, duration=47.068s, table=49, n_packets=34, n_bytes=4455,
+ idle_age=0, priority=100,ip,metadata=0x5
+ actions=load:0x1->NXM_NX_REG0[0],resubmit(,50)
+ cookie=0x0, duration=47.068s, table=49, n_packets=0, n_bytes=0,
+ idle_age=47, priority=100,ipv6,metadata=0x5
+ actions=load:0x1->NXM_NX_REG0[0],resubmit(,50)
+ cookie=0x0, duration=192.587s, table=52, n_packets=0, n_bytes=0,
+ idle_age=192, priority=65535,ct_state=+inv+trk,
+ metadata=0x5
+ actions=drop
+ cookie=0x0, duration=192.587s, table=52, n_packets=0, n_bytes=0,
+ idle_age=192, priority=65535,ct_state=-new-est+rel-inv+trk,
+ metadata=0x5
+ actions=resubmit(,50)
+ cookie=0x0, duration=192.587s, table=52, n_packets=27, n_bytes=2316,
+ idle_age=6, priority=65535,ct_state=-new+est-rel-inv+trk,
+ metadata=0x5
+ actions=resubmit(,50)
+ cookie=0x0, duration=192.587s, table=52, n_packets=0, n_bytes=0,
+ idle_age=192, priority=2002,ct_state=+new+trk,icmp,reg7=0x3,
+ metadata=0x5
+ actions=load:0x1->NXM_NX_REG0[1],resubmit(,50)
+ cookie=0x0, duration=192.587s, table=52, n_packets=0, n_bytes=0,
+ idle_age=192, priority=2002,ct_state=+new+trk,ipv6,reg7=0x3,
+ metadata=0x5
+ actions=load:0x1->NXM_NX_REG0[1],resubmit(,50)
+ cookie=0x0, duration=192.587s, table=52, n_packets=0, n_bytes=0,
+ idle_age=192, priority=2002,udp,reg7=0x3,metadata=0x5,
+ nw_src=192.168.1.0/24,tp_src=67,tp_dst=68
+ actions=load:0x1->NXM_NX_REG0[1],resubmit(,50)
+ cookie=0x0, duration=192.587s, table=52, n_packets=0, n_bytes=0,
+ idle_age=192, priority=2002,ct_state=+new+trk,ip,reg7=0x3,
+ metadata=0x5,nw_src=203.0.113.103
+ actions=load:0x1->NXM_NX_REG0[1],resubmit(,50)
+ cookie=0x0, duration=192.587s, table=52, n_packets=0, n_bytes=0,
+ idle_age=192, priority=2001,ip,reg7=0x3,metadata=0x5
+ actions=drop
+ cookie=0x0, duration=192.587s, table=52, n_packets=0, n_bytes=0,
+ idle_age=192, priority=2001,ipv6,reg7=0x3,metadata=0x5
+ actions=drop
+ cookie=0x0, duration=192.587s, table=52, n_packets=25, n_bytes=2604,
+ idle_age=6, priority=1,ip,metadata=0x5
+ actions=load:0x1->NXM_NX_REG0[1],resubmit(,53)
+ cookie=0x0, duration=192.587s, table=52, n_packets=0, n_bytes=0,
+ idle_age=192, priority=1,ipv6,metadata=0x5
+ actions=load:0x1->NXM_NX_REG0[1],resubmit(,53)
+ cookie=0x0, duration=192.587s, table=54, n_packets=0, n_bytes=0,
+ idle_age=192, priority=90,ip,reg7=0x3,metadata=0x5,
+ dl_dst=fa:16:3e:15:7d:13,nw_dst=224.0.0.0/4
+ actions=resubmit(,55)
+ cookie=0x0, duration=192.587s, table=54, n_packets=0, n_bytes=0,
+ idle_age=192, priority=90,ip,reg7=0x3,metadata=0x5,
+ dl_dst=fa:16:3e:15:7d:13,nw_dst=255.255.255.255
+ actions=resubmit(,55)
+ cookie=0x0, duration=192.587s, table=54, n_packets=0, n_bytes=0,
+ idle_age=192, priority=90,ip,reg7=0x3,metadata=0x5,
+ dl_dst=fa:16:3e:15:7d:13,nw_dst=192.168.1.5
+ actions=resubmit(,55)
+ cookie=0x0, duration=192.587s, table=54, n_packets=0, n_bytes=0,
+ idle_age=192, priority=80,ipv6,reg7=0x3,metadata=0x5,
+ dl_dst=fa:16:3e:15:7d:13
+ actions=drop
+ cookie=0x0, duration=192.587s, table=54, n_packets=0, n_bytes=0,
+ idle_age=192, priority=80,ip,reg7=0x3,metadata=0x5,
+ dl_dst=fa:16:3e:15:7d:13
+ actions=drop
+ cookie=0x0, duration=192.587s, table=55, n_packets=0, n_bytes=0,
+ idle_age=192, priority=50,reg7=0x3,metadata=0x5,
+ dl_dst=fa:16:3e:15:7d:13
+ actions=resubmit(,64)
+
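+     In the table 32 flow above, the chassis tunnels the packet to the node
+     hosting the instance. The tunnel ID carries the datapath tunnel key
+     (``0x5``), while ``tun_metadata0`` carries the logical output port key
+     (``0x3``) in bits 0-15 and the logical input port key in bits 16-30.
+     For an input port key of ``0x3``, the combined option value would be:
+
+     .. code-block:: console
+
+        $ printf '0x%x\n' $(( (0x3 << 16) | 0x3 ))
+        0x30003
+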
+ * For each compute node that contains neither the instance nor a DHCP
+ agent on the subnet, OVN creates the following flows:
+
+ .. code-block:: console
+
+ cookie=0x0, duration=189.763s, table=52, n_packets=0, n_bytes=0,
+ idle_age=189, priority=2002,ct_state=+new+trk,ipv6,reg7=0x4,
+ metadata=0x4
+ actions=load:0x1->NXM_NX_REG0[1],resubmit(,53)
+ cookie=0x0, duration=189.763s, table=52, n_packets=0, n_bytes=0,
+ idle_age=189, priority=2002,ct_state=+new+trk,ip,reg7=0x4,
+ metadata=0x4,nw_src=192.168.1.5
+ actions=load:0x1->NXM_NX_REG0[1],resubmit(,53)
diff --git a/doc/source/admin/ovn/refarch/provider-networks.rst b/doc/source/admin/ovn/refarch/provider-networks.rst
new file mode 100644
index 00000000000..d8054708919
--- /dev/null
+++ b/doc/source/admin/ovn/refarch/provider-networks.rst
@@ -0,0 +1,656 @@
+.. _refarch-provider-networks:
+
+Provider networks
+-----------------
+
+A provider (external) network bridges instances to physical network
+infrastructure that provides layer-3 services. In most cases, provider networks
+implement layer-2 segmentation using VLAN IDs. A provider network maps to a
+provider bridge on each compute node that supports launching instances on the
+provider network. You can create more than one provider bridge, each one
+requiring a unique name and underlying physical network interface to prevent
+switching loops. Provider networks and bridges can use arbitrary names,
+but each mapping must reference valid provider network and bridge names.
+Each provider bridge can contain one ``flat`` (untagged) network and up to
+the maximum number of ``vlan`` (tagged) networks that the physical network
+infrastructure supports, typically around 4000.
+
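+The ``ovn-bridge-mappings`` value accepts a comma-separated list of
+``NETWORK:BRIDGE`` pairs, so one chassis can serve several provider
+networks. For example, a deployment with two physical networks could use
+the following mapping; the ``physnet1`` and ``physnet2`` network names and
+the bridge names here are illustrative:
+
+.. code-block:: console
+
+   # ovs-vsctl set Open_vSwitch . \
+     external-ids:ovn-bridge-mappings=physnet1:br-eth1,physnet2:br-eth2
+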
+Creating a provider network involves several commands at the host, OVS,
+and Networking service levels that yield a series of operations at the
+OVN level to create the virtual network components. The following example
+creates a ``flat`` provider network ``provider`` using the provider bridge
+``br-provider`` and binds a subnet to it.
+
+Create a provider network
+~~~~~~~~~~~~~~~~~~~~~~~~~
+
+#. On each compute node, create the provider bridge, map the provider
+ network to it, and add the underlying physical or logical (typically
+ a bond) network interface to it.
+
+ .. code-block:: console
+
+ # ovs-vsctl --may-exist add-br br-provider -- set bridge br-provider \
+ protocols=OpenFlow13
+ # ovs-vsctl set Open_vSwitch . external-ids:ovn-bridge-mappings=provider:br-provider
+ # ovs-vsctl --may-exist add-port br-provider INTERFACE_NAME
+
+ Replace ``INTERFACE_NAME`` with the name of the underlying network
+ interface.
+
+ .. note::
+
+ These commands provide no output if successful.
+
+#. On the controller node, source the administrative project credentials.
+
+#. On the controller node, enable this chassis to host gateway routers for
+   external connectivity by setting ``ovn-cms-options`` to
+   ``enable-chassis-as-gw``.
+
+ .. code-block:: console
+
+ # ovs-vsctl set Open_vSwitch . external-ids:ovn-cms-options="enable-chassis-as-gw"
+
+ .. note::
+
+      This command provides no output if successful.
+
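+   The setting can be verified by reading the key back:
+
+   .. code-block:: console
+
+      # ovs-vsctl get Open_vSwitch . external-ids:ovn-cms-options
+      "enable-chassis-as-gw"
+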
+#. On the controller node, create the provider network in the Networking
+ service. In this case, instances and routers in other projects can use
+ the network.
+
+ .. code-block:: console
+
+ $ openstack network create --external --share \
+ --provider-physical-network provider --provider-network-type flat \
+ provider
+ +---------------------------+--------------------------------------+
+ | Field | Value |
+ +---------------------------+--------------------------------------+
+ | admin_state_up | UP |
+ | availability_zone_hints | |
+ | availability_zones | nova |
+ | created_at | 2016-06-15 15:50:37+00:00 |
+ | description | |
+ | id | 0243277b-4aa8-46d8-9e10-5c9ad5e01521 |
+ | ipv4_address_scope | None |
+ | ipv6_address_scope | None |
+ | is_default | False |
+ | mtu | 1500 |
+ | name | provider |
+ | project_id | b1ebf33664df402693f729090cfab861 |
+ | provider:network_type | flat |
+ | provider:physical_network | provider |
+ | provider:segmentation_id | None |
+ | qos_policy_id | None |
+ | router:external | External |
+ | shared | True |
+ | status | ACTIVE |
+ | subnets | 32a61337-c5a3-448a-a1e7-c11d6f062c21 |
+ | tags | [] |
+ | updated_at | 2016-06-15 15:50:37+00:00 |
+ +---------------------------+--------------------------------------+
+
+ .. note::
+
+ The value of ``--provider-physical-network`` must refer to the
+ provider network name in the mapping.
+
+OVN operations
+^^^^^^^^^^^^^^
+
+.. todo: I don't like going this deep with headers, so a future patch
+ will probably break this content into multiple files.
+
+The OVN mechanism driver and OVN perform the following operations during
+creation of a provider network.
+
+#. The mechanism driver translates the network into a logical switch
+ in the OVN northbound database.
+
+ .. code-block:: console
+
+ _uuid : 98edf19f-2dbc-4182-af9b-79cafa4794b6
+ acls : []
+ external_ids : {"neutron:network_name"=provider}
+ load_balancer : []
+ name : "neutron-e4abf6df-f8cf-49fd-85d4-3ea399f4d645"
+ ports : [92ee7c2f-cd22-4cac-a9d9-68a374dc7b17]
+
+ .. note::
+
+ The ``neutron:network_name`` field in ``external_ids`` contains
+ the network name and ``name`` contains the network UUID.
+
+#. In addition, because the provider network is handled by a separate
+ bridge, the following logical port is created in the OVN northbound
+ database.
+
+ .. code-block:: console
+
+ _uuid : 92ee7c2f-cd22-4cac-a9d9-68a374dc7b17
+ addresses : [unknown]
+ enabled : []
+ external_ids : {}
+ name : "provnet-e4abf6df-f8cf-49fd-85d4-3ea399f4d645"
+ options : {network_name=provider}
+ parent_name : []
+ port_security : []
+ tag : []
+ type : localnet
+ up : false
+
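+   The port type can also be confirmed with ``ovn-nbctl``, using the port
+   name from the output above:
+
+   .. code-block:: console
+
+      $ ovn-nbctl lsp-get-type provnet-e4abf6df-f8cf-49fd-85d4-3ea399f4d645
+      localnet
+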
+#. The OVN northbound service translates these objects into datapath bindings,
+ port bindings, and the appropriate multicast groups in the OVN southbound
+ database.
+
+ * Datapath bindings
+
+ .. code-block:: console
+
+ _uuid : f1f0981f-a206-4fac-b3a1-dc2030c9909f
+ external_ids : {logical-switch="98edf19f-2dbc-4182-af9b-79cafa4794b6"}
+ tunnel_key : 109
+
+ * Port bindings
+
+ .. code-block:: console
+
+ _uuid : 8427506e-46b5-41e5-a71b-a94a6859e773
+ chassis : []
+ datapath : f1f0981f-a206-4fac-b3a1-dc2030c9909f
+ logical_port : "provnet-e4abf6df-f8cf-49fd-85d4-3ea399f4d645"
+ mac : [unknown]
+ options : {network_name=provider}
+ parent_port : []
+ tag : []
+ tunnel_key : 1
+ type : localnet
+
+ * Logical flows
+
+ .. code-block:: console
+
+ Datapath: f1f0981f-a206-4fac-b3a1-dc2030c9909f Pipeline: ingress
+ table= 0( ls_in_port_sec_l2), priority= 100, match=(eth.src[40]),
+ action=(drop;)
+ table= 0( ls_in_port_sec_l2), priority= 100, match=(vlan.present),
+ action=(drop;)
+ table= 0( ls_in_port_sec_l2), priority= 50,
+ match=(inport == "provnet-e4abf6df-f8cf-49fd-85d4-3ea399f4d645"),
+ action=(next;)
+ table= 1( ls_in_port_sec_ip), priority= 0, match=(1),
+ action=(next;)
+ table= 2( ls_in_port_sec_nd), priority= 0, match=(1),
+ action=(next;)
+ table= 3( ls_in_pre_acl), priority= 0, match=(1),
+ action=(next;)
+ table= 4( ls_in_pre_lb), priority= 0, match=(1),
+ action=(next;)
+ table= 5( ls_in_pre_stateful), priority= 100, match=(reg0[0] == 1),
+ action=(ct_next;)
+ table= 5( ls_in_pre_stateful), priority= 0, match=(1),
+ action=(next;)
+ table= 6( ls_in_acl), priority= 0, match=(1),
+ action=(next;)
+ table= 7( ls_in_lb), priority= 0, match=(1),
+ action=(next;)
+ table= 8( ls_in_stateful), priority= 100, match=(reg0[1] == 1),
+ action=(ct_commit; next;)
+ table= 8( ls_in_stateful), priority= 100, match=(reg0[2] == 1),
+ action=(ct_lb;)
+ table= 8( ls_in_stateful), priority= 0, match=(1),
+ action=(next;)
+ table= 9( ls_in_arp_rsp), priority= 100,
+ match=(inport == "provnet-e4abf6df-f8cf-49fd-85d4-3ea399f4d645"),
+ action=(next;)
+ table= 9( ls_in_arp_rsp), priority= 0, match=(1),
+ action=(next;)
+ table=10( ls_in_l2_lkup), priority= 100, match=(eth.mcast),
+ action=(outport = "_MC_flood"; output;)
+ table=10( ls_in_l2_lkup), priority= 0, match=(1),
+ action=(outport = "_MC_unknown"; output;)
+ Datapath: f1f0981f-a206-4fac-b3a1-dc2030c9909f Pipeline: egress
+ table= 0( ls_out_pre_lb), priority= 0, match=(1),
+ action=(next;)
+ table= 1( ls_out_pre_acl), priority= 0, match=(1),
+ action=(next;)
+ table= 2(ls_out_pre_stateful), priority= 100, match=(reg0[0] == 1),
+ action=(ct_next;)
+ table= 2(ls_out_pre_stateful), priority= 0, match=(1),
+ action=(next;)
+ table= 3( ls_out_lb), priority= 0, match=(1),
+ action=(next;)
+ table= 4( ls_out_acl), priority= 0, match=(1),
+ action=(next;)
+ table= 5( ls_out_stateful), priority= 100, match=(reg0[1] == 1),
+ action=(ct_commit; next;)
+ table= 5( ls_out_stateful), priority= 100, match=(reg0[2] == 1),
+ action=(ct_lb;)
+ table= 5( ls_out_stateful), priority= 0, match=(1),
+ action=(next;)
+ table= 6( ls_out_port_sec_ip), priority= 0, match=(1),
+ action=(next;)
+ table= 7( ls_out_port_sec_l2), priority= 100, match=(eth.mcast),
+ action=(output;)
+ table= 7( ls_out_port_sec_l2), priority= 50,
+ match=(outport == "provnet-e4abf6df-f8cf-49fd-85d4-3ea399f4d645"),
+ action=(output;)
+
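+     Listings in this format come from ``ovn-sbctl lflow-list``, which
+     optionally takes a datapath to limit the output:
+
+     .. code-block:: console
+
+        $ ovn-sbctl lflow-list f1f0981f-a206-4fac-b3a1-dc2030c9909f
+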
+ * Multicast groups
+
+ .. code-block:: console
+
+ _uuid : 0102f08d-c658-4d0a-a18a-ec8adcaddf4f
+ datapath : f1f0981f-a206-4fac-b3a1-dc2030c9909f
+ name : _MC_unknown
+ ports : [8427506e-46b5-41e5-a71b-a94a6859e773]
+ tunnel_key : 65534
+
+ _uuid : fbc38e51-ac71-4c57-a405-e6066e4c101e
+ datapath : f1f0981f-a206-4fac-b3a1-dc2030c9909f
+ name : _MC_flood
+ ports : [8427506e-46b5-41e5-a71b-a94a6859e773]
+ tunnel_key : 65535
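+
+     The same records can be read directly from the southbound database:
+
+     .. code-block:: console
+
+        $ ovn-sbctl list Multicast_Group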
+
+Create a subnet on the provider network
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The provider network requires at least one subnet that contains the IP
+address allocation available for instances, the default gateway IP
+address, and metadata such as name resolution.
+
+#. On the controller node, create a subnet bound to the provider network
+ ``provider``.
+
+ .. code-block:: console
+
+ $ openstack subnet create --network provider --subnet-range \
+ 203.0.113.0/24 --allocation-pool start=203.0.113.101,end=203.0.113.250 \
+ --dns-nameserver 8.8.8.8,8.8.4.4 --gateway 203.0.113.1 provider-v4
+ +-------------------+--------------------------------------+
+ | Field | Value |
+ +-------------------+--------------------------------------+
+ | allocation_pools | 203.0.113.101-203.0.113.250 |
+ | cidr | 203.0.113.0/24 |
+ | created_at | 2016-06-15 15:50:45+00:00 |
+ | description | |
+ | dns_nameservers | 8.8.8.8, 8.8.4.4 |
+ | enable_dhcp | True |
+ | gateway_ip | 203.0.113.1 |
+ | host_routes | |
+ | id | 32a61337-c5a3-448a-a1e7-c11d6f062c21 |
+ | ip_version | 4 |
+ | ipv6_address_mode | None |
+ | ipv6_ra_mode | None |
+ | name | provider-v4 |
+ | network_id | 0243277b-4aa8-46d8-9e10-5c9ad5e01521 |
+ | project_id | b1ebf33664df402693f729090cfab861 |
+ | subnetpool_id | None |
+ | updated_at | 2016-06-15 15:50:45+00:00 |
+ +-------------------+--------------------------------------+
+
+If using DHCP to manage instance IP addresses, adding a subnet causes a series
+of operations in the Networking service and OVN.
+
+* The Networking service schedules the network on an appropriate number of
+  DHCP agents. The example environment contains three DHCP agents.
+
+* Each DHCP agent spawns a network namespace with a ``dnsmasq`` process using
+ an IP address from the subnet allocation.
+
+* The OVN mechanism driver creates a logical switch port object in the OVN
+ northbound database for each ``dnsmasq`` process.
+
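+On a compute node hosting one of the DHCP agents, the ``dnsmasq`` network
+namespace is visible from the host; namespace names follow the
+``qdhcp-NETWORK_ID`` convention:
+
+.. code-block:: console
+
+   # ip netns | grep qdhcp
+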
+OVN operations
+^^^^^^^^^^^^^^
+
+The OVN mechanism driver and OVN perform the following operations
+during creation of a subnet on the provider network.
+
+#. If the subnet uses DHCP for IP address management, create a logical port
+   for each DHCP agent serving the subnet and bind it to the logical
+   switch. In this example, the subnet contains two DHCP agents.
+
+ .. code-block:: console
+
+ _uuid : 5e144ab9-3e08-4910-b936-869bbbf254c8
+ addresses : ["fa:16:3e:57:f9:ca 203.0.113.101"]
+ enabled : true
+ external_ids : {"neutron:port_name"=""}
+ name : "6ab052c2-7b75-4463-b34f-fd3426f61787"
+ options : {}
+ parent_name : []
+ port_security : []
+ tag : []
+ type : ""
+ up : true
+
+ _uuid : 38cf8b52-47c4-4e93-be8d-06bf71f6a7c9
+ addresses : ["fa:16:3e:e0:eb:6d 203.0.113.102"]
+ enabled : true
+ external_ids : {"neutron:port_name"=""}
+ name : "94aee636-2394-48bc-b407-8224ab6bb1ab"
+ options : {}
+ parent_name : []
+ port_security : []
+ tag : []
+ type : ""
+ up : true
+
+ _uuid : 924500c4-8580-4d5f-a7ad-8769f6e58ff5
+ acls : []
+ external_ids : {"neutron:network_name"=provider}
+ load_balancer : []
+ name : "neutron-670efade-7cd0-4d87-8a04-27f366eb8941"
+ ports : [38cf8b52-47c4-4e93-be8d-06bf71f6a7c9,
+ 5e144ab9-3e08-4910-b936-869bbbf254c8,
+ a576b812-9c3e-4cfb-9752-5d8500b3adf9]
+
+#. The OVN northbound service creates port bindings for these logical
+ ports and adds them to the appropriate multicast group.
+
+ * Port bindings
+
+ .. code-block:: console
+
+ _uuid : 030024f4-61c3-4807-859b-07727447c427
+ chassis : fc5ab9e7-bc28-40e8-ad52-2949358cc088
+ datapath : bd0ab2b3-4cf4-4289-9529-ef430f6a89e6
+ logical_port : "6ab052c2-7b75-4463-b34f-fd3426f61787"
+ mac : ["fa:16:3e:57:f9:ca 203.0.113.101"]
+ options : {}
+ parent_port : []
+ tag : []
+ tunnel_key : 2
+ type : ""
+
+ _uuid : cc5bcd19-bcae-4e29-8cee-3ec8a8a75d46
+ chassis : 6a9d0619-8818-41e6-abef-2f3d9a597c03
+ datapath : bd0ab2b3-4cf4-4289-9529-ef430f6a89e6
+ logical_port : "94aee636-2394-48bc-b407-8224ab6bb1ab"
+ mac : ["fa:16:3e:e0:eb:6d 203.0.113.102"]
+ options : {}
+ parent_port : []
+ tag : []
+ tunnel_key : 3
+ type : ""
+
+ * Multicast groups
+
+ .. code-block:: console
+
+ _uuid : 39b32ccd-fa49-4046-9527-13318842461e
+ datapath : bd0ab2b3-4cf4-4289-9529-ef430f6a89e6
+ name : _MC_flood
+ ports : [030024f4-61c3-4807-859b-07727447c427,
+ 904c3108-234d-41c0-b93c-116b7e352a75,
+ cc5bcd19-bcae-4e29-8cee-3ec8a8a75d46]
+ tunnel_key : 65535
+
+#. The OVN northbound service translates the logical ports into
+ additional logical flows in the OVN southbound database.
+
+ .. code-block:: console
+
+ Datapath: bd0ab2b3-4cf4-4289-9529-ef430f6a89e6 Pipeline: ingress
+ table= 0( ls_in_port_sec_l2), priority= 50,
+ match=(inport == "94aee636-2394-48bc-b407-8224ab6bb1ab"),
+ action=(next;)
+ table= 0( ls_in_port_sec_l2), priority= 50,
+ match=(inport == "6ab052c2-7b75-4463-b34f-fd3426f61787"),
+ action=(next;)
+ table= 9( ls_in_arp_rsp), priority= 50,
+ match=(arp.tpa == 203.0.113.101 && arp.op == 1),
+ action=(eth.dst = eth.src; eth.src = fa:16:3e:57:f9:ca;
+ arp.op = 2; /* ARP reply */ arp.tha = arp.sha;
+ arp.sha = fa:16:3e:57:f9:ca; arp.tpa = arp.spa;
+ arp.spa = 203.0.113.101; outport = inport; inport = "";
+ /* Allow sending out inport. */ output;)
+ table= 9( ls_in_arp_rsp), priority= 50,
+ match=(arp.tpa == 203.0.113.102 && arp.op == 1),
+ action=(eth.dst = eth.src; eth.src = fa:16:3e:e0:eb:6d;
+ arp.op = 2; /* ARP reply */ arp.tha = arp.sha;
+ arp.sha = fa:16:3e:e0:eb:6d; arp.tpa = arp.spa;
+ arp.spa = 203.0.113.102; outport = inport;
+ inport = ""; /* Allow sending out inport. */ output;)
+ table=10( ls_in_l2_lkup), priority= 50,
+ match=(eth.dst == fa:16:3e:57:f9:ca),
+ action=(outport = "6ab052c2-7b75-4463-b34f-fd3426f61787"; output;)
+ table=10( ls_in_l2_lkup), priority= 50,
+ match=(eth.dst == fa:16:3e:e0:eb:6d),
+ action=(outport = "94aee636-2394-48bc-b407-8224ab6bb1ab"; output;)
+ Datapath: bd0ab2b3-4cf4-4289-9529-ef430f6a89e6 Pipeline: egress
+ table= 7( ls_out_port_sec_l2), priority= 50,
+ match=(outport == "6ab052c2-7b75-4463-b34f-fd3426f61787"),
+ action=(output;)
+ table= 7( ls_out_port_sec_l2), priority= 50,
+ match=(outport == "94aee636-2394-48bc-b407-8224ab6bb1ab"),
+ action=(output;)
+
+#. For each compute node without a DHCP agent on the subnet:
+
+ * The OVN controller service translates the logical flows into flows on the
+ integration bridge ``br-int``.
+
+ .. code-block:: console
+
+ cookie=0x0, duration=22.303s, table=32, n_packets=0, n_bytes=0,
+ idle_age=22, priority=100,reg7=0xffff,metadata=0x4
+ actions=load:0x4->NXM_NX_TUN_ID[0..23],
+ set_field:0xffff/0xffffffff->tun_metadata0,
+ move:NXM_NX_REG6[0..14]->NXM_NX_TUN_METADATA0[16..30],
+ output:5,output:4,resubmit(,33)
+
+#. For each compute node with a DHCP agent on a subnet:
+
+ * Creation of a DHCP network namespace adds two virtual switch ports.
+ The first port connects the DHCP agent with its ``dnsmasq`` process to
+ the integration bridge, and the second port patches the integration
+ bridge to the provider bridge ``br-provider``.
+
+ .. code-block:: console
+
+ # ovs-ofctl show br-int
+ OFPT_FEATURES_REPLY (xid=0x2): dpid:000022024a1dc045
+ n_tables:254, n_buffers:256
+ capabilities: FLOW_STATS TABLE_STATS PORT_STATS QUEUE_STATS ARP_MATCH_IP
+ actions: output enqueue set_vlan_vid set_vlan_pcp strip_vlan mod_dl_src mod_dl_dst mod_nw_src mod_nw_dst mod_nw_tos mod_tp_src mod_tp_dst
+ 7(tap6ab052c2-7b): addr:00:00:00:00:10:7f
+ config: PORT_DOWN
+ state: LINK_DOWN
+ speed: 0 Mbps now, 0 Mbps max
+ 8(patch-br-int-to): addr:6a:8c:30:3f:d7:dd
+ config: 0
+ state: 0
+ speed: 0 Mbps now, 0 Mbps max
+
+ # ovs-ofctl -O OpenFlow13 show br-provider
+ OFPT_FEATURES_REPLY (OF1.3) (xid=0x2): dpid:0000080027137c4a
+ n_tables:254, n_buffers:256
+ capabilities: FLOW_STATS TABLE_STATS PORT_STATS GROUP_STATS QUEUE_STATS
+ OFPST_PORT_DESC reply (OF1.3) (xid=0x3):
+ 1(patch-provnet-0): addr:fa:42:c5:3f:d7:6f
+ config: 0
+ state: 0
+ speed: 0 Mbps now, 0 Mbps max
+
+ * The OVN controller service translates these logical flows into flows on
+ the integration bridge.
+
+ .. code-block:: console
+
+ cookie=0x0, duration=17.731s, table=0, n_packets=3, n_bytes=258,
+ idle_age=16, priority=100,in_port=7
+ actions=load:0x2->NXM_NX_REG5[],load:0x4->OXM_OF_METADATA[],
+ load:0x2->NXM_NX_REG6[],resubmit(,16)
+ cookie=0x0, duration=17.730s, table=0, n_packets=15, n_bytes=954,
+ idle_age=2, priority=100,in_port=8,vlan_tci=0x0000/0x1000
+ actions=load:0x1->NXM_NX_REG5[],load:0x4->OXM_OF_METADATA[],
+ load:0x1->NXM_NX_REG6[],resubmit(,16)
+ cookie=0x0, duration=17.730s, table=0, n_packets=0, n_bytes=0,
+ idle_age=17, priority=100,in_port=8,dl_vlan=0
+ actions=strip_vlan,load:0x1->NXM_NX_REG5[],
+ load:0x4->OXM_OF_METADATA[],load:0x1->NXM_NX_REG6[],
+ resubmit(,16)
+ cookie=0x0, duration=17.732s, table=16, n_packets=0, n_bytes=0,
+ idle_age=17, priority=100,metadata=0x4,
+ dl_src=01:00:00:00:00:00/01:00:00:00:00:00
+ actions=drop
+ cookie=0x0, duration=17.732s, table=16, n_packets=0, n_bytes=0,
+ idle_age=17, priority=100,metadata=0x4,vlan_tci=0x1000/0x1000
+ actions=drop
+ cookie=0x0, duration=17.732s, table=16, n_packets=3, n_bytes=258,
+ idle_age=16, priority=50,reg6=0x2,metadata=0x4 actions=resubmit(,17)
+ cookie=0x0, duration=17.732s, table=16, n_packets=0, n_bytes=0,
+ idle_age=17, priority=50,reg6=0x3,metadata=0x4 actions=resubmit(,17)
+ cookie=0x0, duration=17.732s, table=16, n_packets=15, n_bytes=954,
+ idle_age=2, priority=50,reg6=0x1,metadata=0x4 actions=resubmit(,17)
+ cookie=0x0, duration=21.714s, table=17, n_packets=18, n_bytes=1212,
+ idle_age=6, priority=0,metadata=0x4 actions=resubmit(,18)
+ cookie=0x0, duration=21.714s, table=18, n_packets=18, n_bytes=1212,
+ idle_age=6, priority=0,metadata=0x4 actions=resubmit(,19)
+ cookie=0x0, duration=21.714s, table=19, n_packets=18, n_bytes=1212,
+ idle_age=6, priority=0,metadata=0x4 actions=resubmit(,20)
+ cookie=0x0, duration=21.714s, table=20, n_packets=18, n_bytes=1212,
+ idle_age=6, priority=0,metadata=0x4 actions=resubmit(,21)
+ cookie=0x0, duration=21.714s, table=21, n_packets=0, n_bytes=0,
+ idle_age=21, priority=100,ip,reg0=0x1/0x1,metadata=0x4
+ actions=ct(table=22,zone=NXM_NX_REG5[0..15])
+ cookie=0x0, duration=21.714s, table=21, n_packets=0, n_bytes=0,
+ idle_age=21, priority=100,ipv6,reg0=0x1/0x1,metadata=0x4
+ actions=ct(table=22,zone=NXM_NX_REG5[0..15])
+ cookie=0x0, duration=21.714s, table=21, n_packets=18, n_bytes=1212,
+ idle_age=6, priority=0,metadata=0x4 actions=resubmit(,22)
+ cookie=0x0, duration=21.714s, table=22, n_packets=18, n_bytes=1212,
+ idle_age=6, priority=0,metadata=0x4 actions=resubmit(,23)
+ cookie=0x0, duration=21.714s, table=23, n_packets=18, n_bytes=1212,
+ idle_age=6, priority=0,metadata=0x4 actions=resubmit(,24)
+ cookie=0x0, duration=21.714s, table=24, n_packets=0, n_bytes=0,
+ idle_age=21, priority=100,ipv6,reg0=0x4/0x4,metadata=0x4
+ actions=ct(table=25,zone=NXM_NX_REG5[0..15],nat)
+ cookie=0x0, duration=21.714s, table=24, n_packets=0, n_bytes=0,
+ idle_age=21, priority=100,ip,reg0=0x4/0x4,metadata=0x4
+ actions=ct(table=25,zone=NXM_NX_REG5[0..15],nat)
+ cookie=0x0, duration=21.714s, table=24, n_packets=0, n_bytes=0,
+ idle_age=21, priority=100,ip,reg0=0x2/0x2,metadata=0x4
+ actions=ct(commit,zone=NXM_NX_REG5[0..15]),resubmit(,25)
+ cookie=0x0, duration=21.714s, table=24, n_packets=0, n_bytes=0,
+ idle_age=21, priority=100,ipv6,reg0=0x2/0x2,metadata=0x4
+ actions=ct(commit,zone=NXM_NX_REG5[0..15]),resubmit(,25)
+ cookie=0x0, duration=21.714s, table=24, n_packets=18, n_bytes=1212,
+ idle_age=6, priority=0,metadata=0x4 actions=resubmit(,25)
+ cookie=0x0, duration=21.714s, table=25, n_packets=15, n_bytes=954,
+ idle_age=6, priority=100,reg6=0x1,metadata=0x4 actions=resubmit(,26)
+ cookie=0x0, duration=21.714s, table=25, n_packets=0, n_bytes=0,
+ idle_age=21, priority=50,arp,metadata=0x4,
+ arp_tpa=203.0.113.101,arp_op=1
+ actions=move:NXM_OF_ETH_SRC[]->NXM_OF_ETH_DST[],
+ mod_dl_src:fa:16:3e:f9:5d:f3,load:0x2->NXM_OF_ARP_OP[],
+ move:NXM_NX_ARP_SHA[]->NXM_NX_ARP_THA[],
+ load:0xfa163ef95df3->NXM_NX_ARP_SHA[],
+ move:NXM_OF_ARP_SPA[]->NXM_OF_ARP_TPA[],
+ load:0xc0a81264->NXM_OF_ARP_SPA[],
+ move:NXM_NX_REG6[]->NXM_NX_REG7[],
+ load:0->NXM_NX_REG6[],load:0->NXM_OF_IN_PORT[],resubmit(,32)
+ cookie=0x0, duration=21.714s, table=25, n_packets=0, n_bytes=0,
+ idle_age=21, priority=50,arp,metadata=0x4,
+ arp_tpa=203.0.113.102,arp_op=1
+ actions=move:NXM_OF_ETH_SRC[]->NXM_OF_ETH_DST[],
+ mod_dl_src:fa:16:3e:f0:a5:9f,
+ load:0x2->NXM_OF_ARP_OP[],
+ move:NXM_NX_ARP_SHA[]->NXM_NX_ARP_THA[],
+ load:0xfa163ef0a59f->NXM_NX_ARP_SHA[],
+ move:NXM_OF_ARP_SPA[]->NXM_OF_ARP_TPA[],
+ load:0xc0a81265->NXM_OF_ARP_SPA[],
+ move:NXM_NX_REG6[]->NXM_NX_REG7[],
+ load:0->NXM_NX_REG6[],load:0->NXM_OF_IN_PORT[],resubmit(,32)
+ cookie=0x0, duration=21.714s, table=25, n_packets=3, n_bytes=258,
+ idle_age=20, priority=0,metadata=0x4 actions=resubmit(,26)
+ cookie=0x0, duration=21.714s, table=26, n_packets=18, n_bytes=1212,
+ idle_age=6, priority=100,metadata=0x4,
+ dl_dst=01:00:00:00:00:00/01:00:00:00:00:00
+ actions=load:0xffff->NXM_NX_REG7[],resubmit(,32)
+ cookie=0x0, duration=21.714s, table=26, n_packets=0, n_bytes=0,
+ idle_age=21, priority=50,metadata=0x4,dl_dst=fa:16:3e:f0:a5:9f
+ actions=load:0x3->NXM_NX_REG7[],resubmit(,32)
+ cookie=0x0, duration=21.714s, table=26, n_packets=0, n_bytes=0,
+ idle_age=21, priority=50,metadata=0x4,dl_dst=fa:16:3e:f9:5d:f3
+ actions=load:0x2->NXM_NX_REG7[],resubmit(,32)
+ cookie=0x0, duration=21.714s, table=26, n_packets=0, n_bytes=0,
+ idle_age=21, priority=0,metadata=0x4
+ actions=load:0xfffe->NXM_NX_REG7[],resubmit(,32)
+ cookie=0x0, duration=17.731s, table=33, n_packets=0, n_bytes=0,
+ idle_age=17, priority=100,reg7=0x2,metadata=0x4
+ actions=load:0x2->NXM_NX_REG5[],resubmit(,34)
+ cookie=0x0, duration=118.126s, table=33, n_packets=0, n_bytes=0,
+ idle_age=118, hard_age=17, priority=100,reg7=0xfffe,metadata=0x4
+ actions=load:0x1->NXM_NX_REG5[],load:0x1->NXM_NX_REG7[],
+ resubmit(,34),load:0xfffe->NXM_NX_REG7[]
+ cookie=0x0, duration=118.126s, table=33, n_packets=18, n_bytes=1212,
+ idle_age=2, hard_age=17, priority=100,reg7=0xffff,metadata=0x4
+ actions=load:0x2->NXM_NX_REG5[],load:0x2->NXM_NX_REG7[],
+ resubmit(,34),load:0x1->NXM_NX_REG5[],load:0x1->NXM_NX_REG7[],
+ resubmit(,34),load:0xffff->NXM_NX_REG7[]
+ cookie=0x0, duration=17.730s, table=33, n_packets=0, n_bytes=0,
+ idle_age=17, priority=100,reg7=0x1,metadata=0x4
+ actions=load:0x1->NXM_NX_REG5[],resubmit(,34)
+ cookie=0x0, duration=17.697s, table=33, n_packets=0, n_bytes=0,
+ idle_age=17, priority=100,reg7=0x3,metadata=0x4
+ actions=load:0x1->NXM_NX_REG7[],resubmit(,33)
+ cookie=0x0, duration=17.731s, table=34, n_packets=3, n_bytes=258,
+ idle_age=16, priority=100,reg6=0x2,reg7=0x2,metadata=0x4
+ actions=drop
+ cookie=0x0, duration=17.730s, table=34, n_packets=15, n_bytes=954,
+ idle_age=2, priority=100,reg6=0x1,reg7=0x1,metadata=0x4
+ actions=drop
+ cookie=0x0, duration=21.714s, table=48, n_packets=18, n_bytes=1212,
+ idle_age=6, priority=0,metadata=0x4 actions=resubmit(,49)
+ cookie=0x0, duration=21.714s, table=49, n_packets=18, n_bytes=1212,
+ idle_age=6, priority=0,metadata=0x4 actions=resubmit(,50)
+ cookie=0x0, duration=21.714s, table=50, n_packets=0, n_bytes=0,
+ idle_age=21, priority=100,ip,reg0=0x1/0x1,metadata=0x4
+ actions=ct(table=51,zone=NXM_NX_REG5[0..15])
+ cookie=0x0, duration=21.714s, table=50, n_packets=0, n_bytes=0,
+ idle_age=21, priority=100,ipv6,reg0=0x1/0x1,metadata=0x4
+ actions=ct(table=51,zone=NXM_NX_REG5[0..15])
+ cookie=0x0, duration=21.714s, table=50, n_packets=18, n_bytes=1212,
+ idle_age=6, priority=0,metadata=0x4 actions=resubmit(,51)
+ cookie=0x0, duration=21.714s, table=51, n_packets=18, n_bytes=1212,
+ idle_age=6, priority=0,metadata=0x4 actions=resubmit(,52)
+ cookie=0x0, duration=21.714s, table=52, n_packets=18, n_bytes=1212,
+ idle_age=6, priority=0,metadata=0x4 actions=resubmit(,53)
+ cookie=0x0, duration=21.714s, table=53, n_packets=0, n_bytes=0,
+ idle_age=21, priority=100,ip,reg0=0x4/0x4,metadata=0x4
+ actions=ct(table=54,zone=NXM_NX_REG5[0..15],nat)
+ cookie=0x0, duration=21.714s, table=53, n_packets=0, n_bytes=0,
+ idle_age=21, priority=100,ipv6,reg0=0x4/0x4,metadata=0x4
+ actions=ct(table=54,zone=NXM_NX_REG5[0..15],nat)
+ cookie=0x0, duration=21.714s, table=53, n_packets=0, n_bytes=0,
+ idle_age=21, priority=100,ipv6,reg0=0x2/0x2,metadata=0x4
+ actions=ct(commit,zone=NXM_NX_REG5[0..15]),resubmit(,54)
+ cookie=0x0, duration=21.714s, table=53, n_packets=0, n_bytes=0,
+ idle_age=21, priority=100,ip,reg0=0x2/0x2,metadata=0x4
+ actions=ct(commit,zone=NXM_NX_REG5[0..15]),resubmit(,54)
+ cookie=0x0, duration=21.714s, table=53, n_packets=18, n_bytes=1212,
+ idle_age=6, priority=0,metadata=0x4 actions=resubmit(,54)
+ cookie=0x0, duration=21.714s, table=54, n_packets=18, n_bytes=1212,
+ idle_age=6, priority=0,metadata=0x4 actions=resubmit(,55)
+ cookie=0x0, duration=21.714s, table=55, n_packets=18, n_bytes=1212,
+ idle_age=6, priority=100,metadata=0x4,
+ dl_dst=01:00:00:00:00:00/01:00:00:00:00:00
+ actions=resubmit(,64)
+ cookie=0x0, duration=21.714s, table=55, n_packets=0, n_bytes=0,
+ idle_age=21, priority=50,reg7=0x3,metadata=0x4
+ actions=resubmit(,64)
+ cookie=0x0, duration=21.714s, table=55, n_packets=0, n_bytes=0,
+ idle_age=21, priority=50,reg7=0x2,metadata=0x4
+ actions=resubmit(,64)
+ cookie=0x0, duration=21.714s, table=55, n_packets=0, n_bytes=0,
+ idle_age=21, priority=50,reg7=0x1,metadata=0x4
+ actions=resubmit(,64)
+ cookie=0x0, duration=21.712s, table=64, n_packets=15, n_bytes=954,
+ idle_age=6, priority=100,reg7=0x3,metadata=0x4 actions=output:7
+ cookie=0x0, duration=21.711s, table=64, n_packets=3, n_bytes=258,
+ idle_age=20, priority=100,reg7=0x1,metadata=0x4 actions=output:8
+
diff --git a/doc/source/admin/ovn/refarch/refarch.rst b/doc/source/admin/ovn/refarch/refarch.rst
new file mode 100644
index 00000000000..8eb52d2a962
--- /dev/null
+++ b/doc/source/admin/ovn/refarch/refarch.rst
@@ -0,0 +1,311 @@
+.. _refarch-refarch:
+
+======================
+Reference architecture
+======================
+
+The reference architecture defines the minimum environment necessary
+to deploy OpenStack with Open Virtual Network (OVN) integration for
+the Networking service in production with sufficient expectations
+of scale and performance. For evaluation purposes, you can deploy this
+environment using the :doc:`Installation Guide ` or
+`Vagrant `_.
+Any scaling or performance evaluations should use bare metal instead of
+virtual machines.
+
+Layout
+------
+
+The reference architecture includes a minimum of four nodes.
+
+The controller node contains the following components that provide enough
+functionality to launch basic instances:
+
+* One network interface for management
+* Identity service
+* Image service
+* Networking management with ML2 mechanism driver for OVN (control plane)
+* Compute management (control plane)
+
+The database node contains the following components:
+
+* One network interface for management
+* OVN northbound service (``ovn-northd``)
+* Open vSwitch (OVS) database service (``ovsdb-server``) for the OVN
+ northbound database (``ovnnb.db``)
+* Open vSwitch (OVS) database service (``ovsdb-server``) for the OVN
+ southbound database (``ovnsb.db``)
+
+.. note::
+
+ For functional evaluation only, you can combine the controller and
+ database nodes.
+
+The two compute nodes contain the following components:
+
+* Two or three network interfaces for management, overlay networks, and
+ optionally provider networks
+* Compute management (hypervisor)
+* Hypervisor (KVM)
+* OVN controller service (``ovn-controller``)
+* OVS data plane service (``ovs-vswitchd``)
+* OVS database service (``ovsdb-server``) with OVS local configuration
+ (``conf.db``) database
+* OVN metadata agent (``ovn-metadata-agent``)
+
+
+The gateway nodes contain the following components:
+
+* Three network interfaces for management, overlay networks and provider
+ networks.
+* OVN controller service (``ovn-controller``)
+* OVS data plane service (``ovs-vswitchd``)
+* OVS database service (``ovsdb-server``) with OVS local configuration
+ (``conf.db``) database
+
+.. note::
+
+ Each OVN metadata agent provides the metadata service locally on its
+ compute node in a lightweight way. Each network accessed by the instances
+ on that compute node has a corresponding ovn-metadata-$net_uuid
+ namespace, and inside it an haproxy instance funnels the requests to the
+ ovn-metadata-agent over a UNIX socket.
+
+ Such a namespace can be very helpful for debugging, because it provides
+ access to the local instances on the compute node. If you log in as root
+ on the compute node, you can execute:
+
+ .. code-block:: console
+
+    # ip netns exec ovn-metadata-$net_uuid ssh user@my.instance.ip.address
+
+Hardware layout
+~~~~~~~~~~~~~~~
+
+.. image:: figures/ovn-hw.png
+ :alt: Hardware layout
+ :align: center
+
+Service layout
+~~~~~~~~~~~~~~
+
+.. image:: figures/ovn-services.png
+ :alt: Service layout
+ :align: center
+
+Networking service with OVN integration
+---------------------------------------
+
+The reference architecture deploys the Networking service with OVN
+integration as described in the following scenarios:
+
+.. image:: figures/ovn-architecture1.png
+ :alt: Architecture for Networking service with OVN integration
+ :align: center
+
+
+With the OVN driver, all east/west (E/W) traffic that traverses a virtual
+router is completely distributed, going from compute node to compute node
+without passing through the gateway nodes.
+
+North/south (N/S) traffic that needs SNAT (without floating IPs) always
+passes through the centralized gateway nodes, although as soon as you
+have more than one gateway node, the OVN driver makes use of
+the HA capabilities of OVN.
+
+Centralized Floating IPs
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+In this architecture, all the N/S router traffic (SNAT and floating
+IPs) goes through the gateway nodes.
+
+The compute nodes don't need connectivity to the external network,
+although it can be provided if some instances require direct
+connectivity to that network.
+
+For external connectivity, gateway nodes must set ``ovn-cms-options``
+to ``enable-chassis-as-gw`` in the ``external_ids`` column of the
+``Open_vSwitch`` table, for example:
+
+.. code-block:: console
+
+ $ ovs-vsctl set open . external-ids:ovn-cms-options="enable-chassis-as-gw"
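+
+You can confirm the current value with:
+
+.. code-block:: console
+
+ $ ovs-vsctl get open . external-ids:ovn-cms-options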
+
+Distributed Floating IPs (DVR)
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+In this architecture, the floating IP N/S traffic flows directly
+from/to the compute nodes through the specific provider network
+bridge. In this case, compute nodes need connectivity to the external
+network.
+
+Each compute node contains the following network components:
+
+.. image:: figures/ovn-compute1.png
+ :alt: Compute node network components
+ :align: center
+
+.. note::
+
+ The Networking service creates a unique network namespace for each
+ virtual network that enables the metadata service.
+
+Several external connections can optionally be created via provider
+bridges. These can be used for direct VM connectivity to specific
+networks or for the use of distributed floating IPs.
+
+.. _refarch_database-access:
+
+Accessing OVN database content
+------------------------------
+
+OVN stores configuration data in a collection of OVS database tables.
+The following commands show the contents of the most common database
+tables in the northbound and southbound databases. The example database
+output in this section uses these commands with various output filters.
+
+.. code-block:: console
+
+ $ ovn-nbctl list Logical_Switch
+ $ ovn-nbctl list Logical_Switch_Port
+ $ ovn-nbctl list ACL
+ $ ovn-nbctl list Address_Set
+ $ ovn-nbctl list Logical_Router
+ $ ovn-nbctl list Logical_Router_Port
+ $ ovn-nbctl list Gateway_Chassis
+
+ $ ovn-sbctl list Chassis
+ $ ovn-sbctl list Encap
+ $ ovn-sbctl list Address_Set
+ $ ovn-sbctl lflow-list
+ $ ovn-sbctl list Multicast_Group
+ $ ovn-sbctl list Datapath_Binding
+ $ ovn-sbctl list Port_Binding
+ $ ovn-sbctl list MAC_Binding
+ $ ovn-sbctl list Gateway_Chassis
+
+.. note::
+
+ By default, you must run these commands from the node containing
+ the OVN databases.
+
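+To run them from a different host, point the client at the database node
+explicitly. The address below is a placeholder for your database node;
+6641 and 6642 are the conventional northbound and southbound ports:
+
+.. code-block:: console
+
+ $ ovn-nbctl --db=tcp:192.0.2.10:6641 list Logical_Switch
+ $ ovn-sbctl --db=tcp:192.0.2.10:6642 list Chassis
+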
+.. _refarch-adding-compute-node:
+
+Adding a compute node
+---------------------
+
+When you add a compute node to the environment, the OVN controller
+service on it connects to the OVN southbound database and registers
+the node as a chassis.
+
+.. code-block:: console
+
+ _uuid : 9be8639d-1d0b-4e3d-9070-03a655073871
+ encaps : [2fcefdf4-a5e7-43ed-b7b2-62039cc7e32e]
+ external_ids : {ovn-bridge-mappings=""}
+ hostname : "compute1"
+ name : "410ee302-850b-4277-8610-fa675d620cb7"
+ vtep_logical_switches: []
+
+The ``encaps`` field value refers to tunnel endpoint information
+for the compute node.
+
+.. code-block:: console
+
+ _uuid : 2fcefdf4-a5e7-43ed-b7b2-62039cc7e32e
+ ip : "10.0.0.32"
+ options : {}
+ type : geneve
+
+Security Groups/Rules
+---------------------
+
+Each security group maps to two Address_Sets in the OVN NB and SB
+databases, one for IPv4 and another for IPv6. These hold the IP
+addresses of the ports that belong to the security group, so that rules
+with ``remote_group_id`` can be applied efficiently.
+
+.. todo: add block with openstack security group rule example
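+
+As an illustrative example (values are arbitrary), a rule allowing SSH
+into the ``default`` security group can be created with:
+
+.. code-block:: console
+
+ $ openstack security group rule create --protocol tcp --dst-port 22 default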
+
+OVN operations
+~~~~~~~~~~~~~~
+
+#. Creating a security group causes the OVN mechanism driver to create
+ two new entries in the ``Address_Set`` table of the northbound DB:
+
+ .. code-block:: console
+
+ _uuid : 9a9d01bd-4afc-4d12-853a-cd21b547911d
+ addresses : []
+ external_ids : {"neutron:security_group_name"=default}
+ name : "as_ip4_90a78a43_b549_4bee_8822_21fcccab58dc"
+
+ _uuid : 27a91327-636e-4125-99f0-6f2937a3b6d8
+ addresses : []
+ external_ids : {"neutron:security_group_name"=default}
+ name : "as_ip6_90a78a43_b549_4bee_8822_21fcccab58dc"
+
+ In the above entries, the address set names include the protocol (IPv4
+ or IPv6, written as ip4 or ip6) and the UUID of the OpenStack security
+ group, with dashes translated to underscores.
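+
+ The same transformation can be reproduced in a shell for reference
+ (illustrative only, not how the driver computes it internally):
+
+ .. code-block:: console
+
+    $ uuid=90a78a43-b549-4bee-8822-21fcccab58dc
+    $ echo "as_ip4_$(echo $uuid | tr '-' '_')"
+    as_ip4_90a78a43_b549_4bee_8822_21fcccab58dc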
+
+#. In turn, these new entries are translated by the ``ovn-northd`` daemon
+ into entries in the southbound DB:
+
+ .. code-block:: console
+
+ _uuid : 886d7b3a-e460-470f-8af2-7c7d88ce45d2
+ addresses : []
+ name : "as_ip4_90a78a43_b549_4bee_8822_21fcccab58dc"
+
+ _uuid : 355ddcba-941d-4f1c-b823-dc811cec59ca
+ addresses : []
+ name : "as_ip6_90a78a43_b549_4bee_8822_21fcccab58dc"
+
+Networks
+--------
+
+.. toctree::
+ :maxdepth: 1
+
+ provider-networks
+ selfservice-networks
+
+Routers
+-------
+
+.. toctree::
+ :maxdepth: 1
+
+ routers
+
+.. todo: Explain L3HA modes available starting at OVS 2.8
+
+Instances
+---------
+
+Launching an instance causes the same series of operations regardless
+of the network. The following example uses the ``provider`` provider
+network, ``cirros`` image, ``m1.tiny`` flavor, ``default`` security
+group, and ``mykey`` key.
+
+.. toctree::
+ :maxdepth: 1
+
+ launch-instance-provider-network
+ launch-instance-selfservice-network
+
+.. todo: Add north-south when OVN gains support for it.
+
+ Traffic flows
+ -------------
+
+ East-west for instances on the same provider network
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+ East-west for instances on different provider networks
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+ East-west for instances on the same self-service network
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+ East-west for instances on different self-service networks
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
diff --git a/doc/source/admin/ovn/refarch/routers.rst b/doc/source/admin/ovn/refarch/routers.rst
new file mode 100644
index 00000000000..abc7ca1853a
--- /dev/null
+++ b/doc/source/admin/ovn/refarch/routers.rst
@@ -0,0 +1,855 @@
+.. _refarch-routers:
+
+Routers
+-------
+
+Routers pass traffic between layer-3 networks.
+
+Create a router
+~~~~~~~~~~~~~~~
+
+#. On the controller node, source the credentials for a regular
+ (non-privileged) project. The following example uses the ``demo``
+ project.
+
+#. On the controller node, create a router in the Networking service.
+
+ .. code-block:: console
+
+ $ openstack router create router
+ +-----------------------+--------------------------------------+
+ | Field | Value |
+ +-----------------------+--------------------------------------+
+ | admin_state_up | UP |
+ | description | |
+ | external_gateway_info | null |
+ | headers | |
+ | id | 24addfcd-5506-405d-a59f-003644c3d16a |
+ | name | router |
+ | project_id | b1ebf33664df402693f729090cfab861 |
+ | routes | |
+ | status | ACTIVE |
+ +-----------------------+--------------------------------------+
+
+OVN operations
+^^^^^^^^^^^^^^
+
+The OVN mechanism driver and OVN perform the following operations when
+creating a router.
+
+#. The OVN mechanism driver translates the router into a logical
+ router object in the OVN northbound database.
+
+ .. code-block:: console
+
+ _uuid : 1c2e340d-dac9-496b-9e86-1065f9dab752
+ default_gw : []
+ enabled : []
+ external_ids : {"neutron:router_name"="router"}
+ name : "neutron-a24fd760-1a99-4eec-9f02-24bb284ff708"
+ ports : []
+ static_routes : []
+
+#. The OVN northbound service translates this object into logical flows
+ and datapath bindings in the OVN southbound database.
+
+ * Datapath bindings
+
+ .. code-block:: console
+
+ _uuid : 4a7485c6-a1ef-46a5-b57c-5ddb6ac15aaa
+ external_ids : {logical-router="1c2e340d-dac9-496b-9e86-1065f9dab752"}
+ tunnel_key : 3
+
+ * Logical flows
+
+ .. code-block:: console
+
+ Datapath: 4a7485c6-a1ef-46a5-b57c-5ddb6ac15aaa Pipeline: ingress
+ table= 0( lr_in_admission), priority= 100,
+ match=(vlan.present || eth.src[40]),
+ action=(drop;)
+ table= 1( lr_in_ip_input), priority= 100,
+ match=(ip4.mcast || ip4.src == 255.255.255.255 ||
+ ip4.src == 127.0.0.0/8 || ip4.dst == 127.0.0.0/8 ||
+ ip4.src == 0.0.0.0/8 || ip4.dst == 0.0.0.0/8),
+ action=(drop;)
+ table= 1( lr_in_ip_input), priority= 50, match=(ip4.mcast),
+ action=(drop;)
+ table= 1( lr_in_ip_input), priority= 50, match=(eth.bcast),
+ action=(drop;)
+ table= 1( lr_in_ip_input), priority= 30,
+ match=(ip4 && ip.ttl == {0, 1}), action=(drop;)
+ table= 1( lr_in_ip_input), priority= 0, match=(1),
+ action=(next;)
+ table= 2( lr_in_unsnat), priority= 0, match=(1),
+ action=(next;)
+ table= 3( lr_in_dnat), priority= 0, match=(1),
+ action=(next;)
+ table= 5( lr_in_arp_resolve), priority= 0, match=(1),
+ action=(get_arp(outport, reg0); next;)
+ table= 6( lr_in_arp_request), priority= 100,
+ match=(eth.dst == 00:00:00:00:00:00),
+ action=(arp { eth.dst = ff:ff:ff:ff:ff:ff; arp.spa = reg1;
+ arp.op = 1; output; };)
+ table= 6( lr_in_arp_request), priority= 0, match=(1),
+ action=(output;)
+ Datapath: 4a7485c6-a1ef-46a5-b57c-5ddb6ac15aaa Pipeline: egress
+ table= 0( lr_out_snat), priority= 0, match=(1),
+ action=(next;)
+
+#. The OVN controller service on each compute node translates these objects
+ into flows on the integration bridge ``br-int``.
+
+ .. code-block:: console
+
+ # ovs-ofctl dump-flows br-int
+ cookie=0x0, duration=6.402s, table=16, n_packets=0, n_bytes=0,
+ idle_age=6, priority=100,metadata=0x5,vlan_tci=0x1000/0x1000
+ actions=drop
+ cookie=0x0, duration=6.402s, table=16, n_packets=0, n_bytes=0,
+ idle_age=6, priority=100,metadata=0x5,
+ dl_src=01:00:00:00:00:00/01:00:00:00:00:00
+ actions=drop
+ cookie=0x0, duration=6.402s, table=17, n_packets=0, n_bytes=0,
+ idle_age=6, priority=100,ip,metadata=0x5,nw_dst=127.0.0.0/8
+ actions=drop
+ cookie=0x0, duration=6.402s, table=17, n_packets=0, n_bytes=0,
+ idle_age=6, priority=100,ip,metadata=0x5,nw_dst=0.0.0.0/8
+ actions=drop
+ cookie=0x0, duration=6.402s, table=17, n_packets=0, n_bytes=0,
+ idle_age=6, priority=100,ip,metadata=0x5,nw_dst=224.0.0.0/4
+ actions=drop
+ cookie=0x0, duration=6.402s, table=17, n_packets=0, n_bytes=0,
+ idle_age=6, priority=50,ip,metadata=0x5,nw_dst=224.0.0.0/4
+ actions=drop
+ cookie=0x0, duration=6.402s, table=17, n_packets=0, n_bytes=0,
+ idle_age=6, priority=100,ip,metadata=0x5,nw_src=255.255.255.255
+ actions=drop
+ cookie=0x0, duration=6.402s, table=17, n_packets=0, n_bytes=0,
+ idle_age=6, priority=100,ip,metadata=0x5,nw_src=127.0.0.0/8
+ actions=drop
+ cookie=0x0, duration=6.402s, table=17, n_packets=0, n_bytes=0,
+ idle_age=6, priority=100,ip,metadata=0x5,nw_src=0.0.0.0/8
+ actions=drop
+ cookie=0x0, duration=6.402s, table=17, n_packets=0, n_bytes=0,
+ idle_age=6, priority=90,arp,metadata=0x5,arp_op=2
+ actions=push:NXM_NX_REG0[],push:NXM_OF_ETH_SRC[],
+ push:NXM_NX_ARP_SHA[],push:NXM_OF_ARP_SPA[],
+ pop:NXM_NX_REG0[],pop:NXM_OF_ETH_SRC[],
+ controller(userdata=00.00.00.01.00.00.00.00),
+ pop:NXM_OF_ETH_SRC[],pop:NXM_NX_REG0[]
+ cookie=0x0, duration=6.402s, table=17, n_packets=0, n_bytes=0,
+ idle_age=6, priority=50,metadata=0x5,dl_dst=ff:ff:ff:ff:ff:ff
+ actions=drop
+ cookie=0x0, duration=6.402s, table=17, n_packets=0, n_bytes=0,
+ idle_age=6, priority=30,ip,metadata=0x5,nw_ttl=0
+ actions=drop
+ cookie=0x0, duration=6.402s, table=17, n_packets=0, n_bytes=0,
+ idle_age=6, priority=30,ip,metadata=0x5,nw_ttl=1
+ actions=drop
+ cookie=0x0, duration=6.402s, table=17, n_packets=0, n_bytes=0,
+ idle_age=6, priority=0,metadata=0x5
+ actions=resubmit(,18)
+ cookie=0x0, duration=6.402s, table=18, n_packets=0, n_bytes=0,
+ idle_age=6, priority=0,metadata=0x5
+ actions=resubmit(,19)
+ cookie=0x0, duration=6.402s, table=19, n_packets=0, n_bytes=0,
+ idle_age=6, priority=0,metadata=0x5
+ actions=resubmit(,20)
+ cookie=0x0, duration=6.402s, table=22, n_packets=0, n_bytes=0,
+ idle_age=6, priority=0,metadata=0x5
+ actions=resubmit(,32)
+ cookie=0x0, duration=6.402s, table=48, n_packets=0, n_bytes=0,
+ idle_age=6, priority=0,metadata=0x5
+ actions=resubmit(,49)
+
+Attach a self-service network to the router
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Self-service networks, particularly subnets, must interface with a
+router to enable connectivity with other self-service and provider
+networks.
+
+#. On the controller node, add the self-service network subnet
+ ``selfservice-v4`` to the router ``router``.
+
+ .. code-block:: console
+
+ $ openstack router add subnet router selfservice-v4
+
+ .. note::
+
+ This command provides no output.
+
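+You can confirm the attachment by listing the router's ports:
+
+.. code-block:: console
+
+ $ openstack port list --router router
+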
+OVN operations
+^^^^^^^^^^^^^^
+
+The OVN mechanism driver and OVN perform the following operations when
+adding a subnet as an interface on a router.
+
+#. The OVN mechanism driver translates the operation into logical
+ objects and devices in the OVN northbound database and performs a
+ series of operations on them.
+
+ * Create a logical port.
+
+ .. code-block:: console
+
+ _uuid : 4c9e70b1-fff0-4d0d-af8e-42d3896eb76f
+ addresses : ["fa:16:3e:0c:55:62 192.168.1.1"]
+ enabled : true
+ external_ids : {"neutron:port_name"=""}
+ name : "5b72d278-5b16-44a6-9aa0-9e513a429506"
+ options : {router-port="lrp-5b72d278-5b16-44a6-9aa0-9e513a429506"}
+ parent_name : []
+ port_security : []
+ tag : []
+ type : router
+ up : false
+
+ * Add the logical port to the logical switch.
+
+ .. code-block:: console
+
+ _uuid : 0ab40684-7cf8-4d6c-ae8b-9d9143762d37
+ acls : []
+ external_ids : {"neutron:network_name"="selfservice"}
+ name : "neutron-d5aadceb-d8d6-41c8-9252-c5e0fe6c26a5"
+ ports : [1ed7c28b-dc69-42b8-bed6-46477bb8b539,
+ 4c9e70b1-fff0-4d0d-af8e-42d3896eb76f,
+ ae10a5e0-db25-4108-b06a-d2d5c127d9c4]
+
+ * Create a logical router port object.
+
+ .. code-block:: console
+
+ _uuid : f60ccb93-7b3d-4713-922c-37104b7055dc
+ enabled : []
+ external_ids : {}
+ mac : "fa:16:3e:0c:55:62"
+ name : "lrp-5b72d278-5b16-44a6-9aa0-9e513a429506"
+ network : "192.168.1.1/24"
+ peer : []
+
+ * Add the logical router port to the logical router object.
+
+ .. code-block:: console
+
+ _uuid : 1c2e340d-dac9-496b-9e86-1065f9dab752
+ default_gw : []
+ enabled : []
+ external_ids : {"neutron:router_name"="router"}
+ name : "neutron-a24fd760-1a99-4eec-9f02-24bb284ff708"
+ ports : [f60ccb93-7b3d-4713-922c-37104b7055dc]
+ static_routes : []
+
+#. The OVN northbound service translates these objects into logical flows,
+ datapath bindings, and the appropriate multicast groups in the OVN
+ southbound database.
+
+ * Logical flows in the logical router datapath
+
+ .. code-block:: console
+
+ Datapath: 4a7485c6-a1ef-46a5-b57c-5ddb6ac15aaa Pipeline: ingress
+ table= 0( lr_in_admission), priority= 50,
+ match=((eth.mcast || eth.dst == fa:16:3e:0c:55:62) &&
+ inport == "lrp-5b72d278-5b16-44a6-9aa0-9e513a429506"),
+ action=(next;)
+ table= 1( lr_in_ip_input), priority= 100,
+ match=(ip4.src == {192.168.1.1, 192.168.1.255}), action=(drop;)
+ table= 1( lr_in_ip_input), priority= 90,
+ match=(ip4.dst == 192.168.1.1 && icmp4.type == 8 &&
+ icmp4.code == 0),
+ action=(ip4.dst = ip4.src; ip4.src = 192.168.1.1; ip.ttl = 255;
+ icmp4.type = 0;
+ inport = ""; /* Allow sending out inport. */ next; )
+ table= 1( lr_in_ip_input), priority= 90,
+ match=(inport == "lrp-5b72d278-5b16-44a6-9aa0-9e513a429506" &&
+ arp.tpa == 192.168.1.1 && arp.op == 1),
+ action=(eth.dst = eth.src; eth.src = fa:16:3e:0c:55:62;
+ arp.op = 2; /* ARP reply */ arp.tha = arp.sha;
+ arp.sha = fa:16:3e:0c:55:62; arp.tpa = arp.spa;
+ arp.spa = 192.168.1.1;
+ outport = "lrp-5b72d278-5b16-44a6-9aa0-9e513a429506";
+ inport = ""; /* Allow sending out inport. */ output;)
+ table= 1( lr_in_ip_input), priority= 60,
+ match=(ip4.dst == 192.168.1.1), action=(drop;)
+ table= 4( lr_in_ip_routing), priority= 24,
+ match=(ip4.dst == 192.168.1.0/255.255.255.0),
+ action=(ip.ttl--; reg0 = ip4.dst; reg1 = 192.168.1.1;
+ eth.src = fa:16:3e:0c:55:62;
+ outport = "lrp-5b72d278-5b16-44a6-9aa0-9e513a429506";
+ next;)
+ Datapath: 4a7485c6-a1ef-46a5-b57c-5ddb6ac15aaa Pipeline: egress
+ table= 1( lr_out_delivery), priority= 100,
+ match=(outport == "lrp-5b72d278-5b16-44a6-9aa0-9e513a429506"),
+ action=(output;)
+
+ * Logical flows in the logical switch datapath
+
+ .. code-block:: console
+
+ Datapath: 611d35e8-b1e1-442c-bc07-7c6192ad6216 Pipeline: ingress
+ table= 0( ls_in_port_sec_l2), priority= 50,
+ match=(inport == "5b72d278-5b16-44a6-9aa0-9e513a429506"),
+ action=(next;)
+ table= 3( ls_in_pre_acl), priority= 110,
+ match=(ip && inport == "5b72d278-5b16-44a6-9aa0-9e513a429506"),
+ action=(next;)
+ table= 9( ls_in_arp_rsp), priority= 50,
+ match=(arp.tpa == 192.168.1.1 && arp.op == 1),
+ action=(eth.dst = eth.src; eth.src = fa:16:3e:0c:55:62;
+ arp.op = 2; /* ARP reply */ arp.tha = arp.sha;
+ arp.sha = fa:16:3e:0c:55:62; arp.tpa = arp.spa;
+ arp.spa = 192.168.1.1; outport = inport;
+ inport = ""; /* Allow sending out inport. */ output;)
+ table=10( ls_in_l2_lkup), priority= 50,
+ match=(eth.dst == fa:16:3e:fa:76:8f),
+ action=(outport = "f112b99a-8ccc-4c52-8733-7593fa0966ea"; output;)
+ Datapath: 611d35e8-b1e1-442c-bc07-7c6192ad6216 Pipeline: egress
+ table= 1( ls_out_pre_acl), priority= 110,
+ match=(ip && outport == "f112b99a-8ccc-4c52-8733-7593fa0966ea"),
+ action=(next;)
+ table= 7( ls_out_port_sec_l2), priority= 50,
+ match=(outport == "f112b99a-8ccc-4c52-8733-7593fa0966ea"),
+ action=(output;)
+
+ * Port bindings
+
+ .. code-block:: console
+
+ _uuid : 0f86395b-a0d8-40fd-b22c-4c9e238a7880
+ chassis : []
+ datapath : 4a7485c6-a1ef-46a5-b57c-5ddb6ac15aaa
+ logical_port : "lrp-5b72d278-5b16-44a6-9aa0-9e513a429506"
+ mac : []
+ options : {peer="5b72d278-5b16-44a6-9aa0-9e513a429506"}
+ parent_port : []
+ tag : []
+ tunnel_key : 1
+ type : patch
+
+ _uuid : 8d95ab8c-c2ea-4231-9729-7ecbfc2cd676
+ chassis : []
+ datapath : 4aef86e4-e54a-4c83-bb27-d65c670d4b51
+ logical_port : "5b72d278-5b16-44a6-9aa0-9e513a429506"
+ mac : ["fa:16:3e:0c:55:62 192.168.1.1"]
+ options : {peer="lrp-5b72d278-5b16-44a6-9aa0-9e513a429506"}
+ parent_port : []
+ tag : []
+ tunnel_key : 3
+ type : patch
+
+ * Multicast groups
+
+ .. code-block:: console
+
+ _uuid : 4a6191aa-d8ac-4e93-8306-b0d8fbbe4e35
+ datapath : 4aef86e4-e54a-4c83-bb27-d65c670d4b51
+ name : _MC_flood
+ ports : [8d95ab8c-c2ea-4231-9729-7ecbfc2cd676,
+ be71fac3-9f04-41c9-9951-f3f7f1fa1ec5,
+ da5c1269-90b7-4df2-8d76-d4575754b02d]
+ tunnel_key : 65535
+
+ In addition, if the self-service network contains ports with IP addresses
+ (typically instances or DHCP servers), OVN creates a logical flow for
+ each port, similar to the following example.
+
+ .. code-block:: console
+
+ Datapath: 4a7485c6-a1ef-46a5-b57c-5ddb6ac15aaa Pipeline: ingress
+ table= 5( lr_in_arp_resolve), priority= 100,
+ match=(outport == "lrp-f112b99a-8ccc-4c52-8733-7593fa0966ea" &&
+ reg0 == 192.168.1.11),
+ action=(eth.dst = fa:16:3e:b6:91:70; next;)
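+
+ The southbound contents shown above can be inspected directly with the
+ ``ovn-sbctl`` utility on a node hosting the OVN southbound database.
+ For example, assuming the default database socket locations, the
+ following commands list the logical flows, port bindings, and
+ multicast groups:
+
+ .. code-block:: console
+
+ $ ovn-sbctl lflow-list
+ $ ovn-sbctl list Port_Binding
+ $ ovn-sbctl list Multicast_Group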
+
+#. On each compute node, the OVN controller service creates patch ports,
+ similar to the following example.
+
+ .. code-block:: console
+
+ 7(patch-f112b99a-): addr:4e:01:91:2a:73:66
+ config: 0
+ state: 0
+ speed: 0 Mbps now, 0 Mbps max
+ 8(patch-lrp-f112b): addr:be:9d:7b:31:bb:87
+ config: 0
+ state: 0
+ speed: 0 Mbps now, 0 Mbps max
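+
+ A listing such as this comes from the integration bridge on the
+ compute node. Assuming the default integration bridge name
+ ``br-int``, it can typically be reproduced with:
+
+ .. code-block:: console
+
+ $ ovs-ofctl show br-int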
+
+#. On all compute nodes, the OVN controller service creates the
+ following additional flows:
+
+ .. code-block:: console
+
+ cookie=0x0, duration=6.667s, table=0, n_packets=0, n_bytes=0,
+ idle_age=6, priority=100,in_port=8
+ actions=load:0x9->OXM_OF_METADATA[],load:0x1->NXM_NX_REG6[],
+ resubmit(,16)
+ cookie=0x0, duration=6.667s, table=0, n_packets=0, n_bytes=0,
+ idle_age=6, priority=100,in_port=7
+ actions=load:0x7->OXM_OF_METADATA[],load:0x4->NXM_NX_REG6[],
+ resubmit(,16)
+ cookie=0x0, duration=6.674s, table=16, n_packets=0, n_bytes=0,
+ idle_age=6, priority=50,reg6=0x4,metadata=0x7
+ actions=resubmit(,17)
+ cookie=0x0, duration=6.674s, table=16, n_packets=0, n_bytes=0,
+ idle_age=6, priority=50,reg6=0x1,metadata=0x9,
+ dl_dst=fa:16:3e:fa:76:8f
+ actions=resubmit(,17)
+ cookie=0x0, duration=6.674s, table=16, n_packets=0, n_bytes=0,
+ idle_age=6, priority=50,reg6=0x1,metadata=0x9,
+ dl_dst=01:00:00:00:00:00/01:00:00:00:00:00
+ actions=resubmit(,17)
+ cookie=0x0, duration=6.674s, table=17, n_packets=0, n_bytes=0,
+ idle_age=6, priority=100,ip,metadata=0x9,nw_src=192.168.1.1
+ actions=drop
+ cookie=0x0, duration=6.673s, table=17, n_packets=0, n_bytes=0,
+ idle_age=6, priority=100,ip,metadata=0x9,nw_src=192.168.1.255
+ actions=drop
+ cookie=0x0, duration=6.673s, table=17, n_packets=0, n_bytes=0,
+ idle_age=6, priority=90,arp,reg6=0x1,metadata=0x9,
+ arp_tpa=192.168.1.1,arp_op=1
+ actions=move:NXM_OF_ETH_SRC[]->NXM_OF_ETH_DST[],
+ mod_dl_src:fa:16:3e:fa:76:8f,load:0x2->NXM_OF_ARP_OP[],
+ move:NXM_NX_ARP_SHA[]->NXM_NX_ARP_THA[],
+ load:0xfa163efa768f->NXM_NX_ARP_SHA[],
+ move:NXM_OF_ARP_SPA[]->NXM_OF_ARP_TPA[],
+ load:0xc0a80101->NXM_OF_ARP_SPA[],load:0x1->NXM_NX_REG7[],
+ load:0->NXM_NX_REG6[],load:0->NXM_OF_IN_PORT[],resubmit(,32)
+ cookie=0x0, duration=6.673s, table=17, n_packets=0, n_bytes=0,
+ idle_age=6, priority=90,icmp,metadata=0x9,nw_dst=192.168.1.1,
+ icmp_type=8,icmp_code=0
+ actions=move:NXM_OF_IP_SRC[]->NXM_OF_IP_DST[],mod_nw_src:192.168.1.1,
+ load:0xff->NXM_NX_IP_TTL[],load:0->NXM_OF_ICMP_TYPE[],
+ load:0->NXM_NX_REG6[],load:0->NXM_OF_IN_PORT[],resubmit(,18)
+ cookie=0x0, duration=6.674s, table=17, n_packets=0, n_bytes=0,
+ idle_age=6, priority=60,ip,metadata=0x9,nw_dst=192.168.1.1
+ actions=drop
+ cookie=0x0, duration=6.674s, table=20, n_packets=0, n_bytes=0,
+ idle_age=6, priority=24,ip,metadata=0x9,nw_dst=192.168.1.0/24
+ actions=dec_ttl(),move:NXM_OF_IP_DST[]->NXM_NX_REG0[],
+ load:0xc0a80101->NXM_NX_REG1[],mod_dl_src:fa:16:3e:fa:76:8f,
+ load:0x1->NXM_NX_REG7[],resubmit(,21)
+ cookie=0x0, duration=6.674s, table=21, n_packets=0, n_bytes=0,
+ idle_age=6, priority=100,reg0=0xc0a80103,reg7=0x1,metadata=0x9
+ actions=mod_dl_dst:fa:16:3e:d5:00:02,resubmit(,22)
+ cookie=0x0, duration=6.674s, table=21, n_packets=0, n_bytes=0,
+ idle_age=6, priority=100,reg0=0xc0a80102,reg7=0x1,metadata=0x9
+ actions=mod_dl_dst:fa:16:3e:82:8b:0e,resubmit(,22)
+ cookie=0x0, duration=6.673s, table=21, n_packets=0, n_bytes=0,
+ idle_age=6, priority=100,reg0=0xc0a8010b,reg7=0x1,metadata=0x9
+ actions=mod_dl_dst:fa:16:3e:b6:91:70,resubmit(,22)
+ cookie=0x0, duration=6.673s, table=25, n_packets=0, n_bytes=0,
+ idle_age=6, priority=50,arp,metadata=0x7,arp_tpa=192.168.1.1,
+ arp_op=1
+ actions=move:NXM_OF_ETH_SRC[]->NXM_OF_ETH_DST[],
+ mod_dl_src:fa:16:3e:fa:76:8f,load:0x2->NXM_OF_ARP_OP[],
+ move:NXM_NX_ARP_SHA[]->NXM_NX_ARP_THA[],
+ load:0xfa163efa768f->NXM_NX_ARP_SHA[],
+ move:NXM_OF_ARP_SPA[]->NXM_OF_ARP_TPA[],
+ load:0xc0a80101->NXM_OF_ARP_SPA[],
+ move:NXM_NX_REG6[]->NXM_NX_REG7[],load:0->NXM_NX_REG6[],
+ load:0->NXM_OF_IN_PORT[],resubmit(,32)
+ cookie=0x0, duration=6.674s, table=26, n_packets=0, n_bytes=0,
+ idle_age=6, priority=50,metadata=0x7,dl_dst=fa:16:3e:fa:76:8f
+ actions=load:0x4->NXM_NX_REG7[],resubmit(,32)
+ cookie=0x0, duration=6.667s, table=33, n_packets=0, n_bytes=0,
+ idle_age=6, priority=100,reg7=0x4,metadata=0x7
+ actions=resubmit(,34)
+ cookie=0x0, duration=6.667s, table=33, n_packets=0, n_bytes=0,
+ idle_age=6, priority=100,reg7=0x1,metadata=0x9
+ actions=resubmit(,34)
+ cookie=0x0, duration=6.667s, table=34, n_packets=0, n_bytes=0,
+ idle_age=6, priority=100,reg6=0x4,reg7=0x4,metadata=0x7
+ actions=drop
+ cookie=0x0, duration=6.667s, table=34, n_packets=0, n_bytes=0,
+ idle_age=6, priority=100,reg6=0x1,reg7=0x1,metadata=0x9
+ actions=drop
+ cookie=0x0, duration=6.674s, table=49, n_packets=0, n_bytes=0,
+ idle_age=6, priority=110,ipv6,reg7=0x4,metadata=0x7
+ actions=resubmit(,50)
+ cookie=0x0, duration=6.673s, table=49, n_packets=0, n_bytes=0,
+ idle_age=6, priority=110,ip,reg7=0x4,metadata=0x7
+ actions=resubmit(,50)
+ cookie=0x0, duration=6.673s, table=49, n_packets=0, n_bytes=0,
+ idle_age=6, priority=100,reg7=0x1,metadata=0x9
+ actions=resubmit(,64)
+ cookie=0x0, duration=6.673s, table=55, n_packets=0, n_bytes=0,
+ idle_age=6, priority=50,reg7=0x4,metadata=0x7
+ actions=resubmit(,64)
+ cookie=0x0, duration=6.667s, table=64, n_packets=0, n_bytes=0,
+ idle_age=6, priority=100,reg7=0x4,metadata=0x7
+ actions=output:7
+ cookie=0x0, duration=6.667s, table=64, n_packets=0, n_bytes=0,
+ idle_age=6, priority=100,reg7=0x1,metadata=0x9
+ actions=output:8
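+
+ Flows such as these can be viewed on a compute node with the
+ following command, assuming the default integration bridge name
+ ``br-int``:
+
+ .. code-block:: console
+
+ $ ovs-ofctl dump-flows br-int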
+
+#. On compute nodes not containing a port on the network, the OVN controller
+ also creates additional flows.
+
+ .. code-block:: console
+
+ cookie=0x0, duration=6.673s, table=16, n_packets=0, n_bytes=0,
+ idle_age=6, priority=100,metadata=0x7,
+ dl_src=01:00:00:00:00:00/01:00:00:00:00:00
+ actions=drop
+ cookie=0x0, duration=6.674s, table=16, n_packets=0, n_bytes=0,
+ idle_age=6, priority=100,metadata=0x7,vlan_tci=0x1000/0x1000
+ actions=drop
+ cookie=0x0, duration=6.674s, table=16, n_packets=0, n_bytes=0,
+ idle_age=6, priority=50,reg6=0x3,metadata=0x7,
+ dl_src=fa:16:3e:b6:91:70
+ actions=resubmit(,17)
+ cookie=0x0, duration=6.674s, table=16, n_packets=0, n_bytes=0,
+ idle_age=6, priority=50,reg6=0x2,metadata=0x7
+ actions=resubmit(,17)
+ cookie=0x0, duration=6.674s, table=16, n_packets=0, n_bytes=0,
+ idle_age=6, priority=50,reg6=0x1,metadata=0x7
+ actions=resubmit(,17)
+ cookie=0x0, duration=6.674s, table=17, n_packets=0, n_bytes=0,
+ idle_age=6, priority=90,ip,reg6=0x3,metadata=0x7,
+ dl_src=fa:16:3e:b6:91:70,nw_src=192.168.1.11
+ actions=resubmit(,18)
+ cookie=0x0, duration=6.674s, table=17, n_packets=0, n_bytes=0,
+ idle_age=6, priority=90,udp,reg6=0x3,metadata=0x7,
+ dl_src=fa:16:3e:b6:91:70,nw_src=0.0.0.0,
+ nw_dst=255.255.255.255,tp_src=68,tp_dst=67
+ actions=resubmit(,18)
+ cookie=0x0, duration=6.674s, table=17, n_packets=0, n_bytes=0,
+ idle_age=6, priority=80,ip,reg6=0x3,metadata=0x7,
+ dl_src=fa:16:3e:b6:91:70
+ actions=drop
+ cookie=0x0, duration=6.673s, table=17, n_packets=0, n_bytes=0,
+ idle_age=6, priority=80,ipv6,reg6=0x3,metadata=0x7,
+ dl_src=fa:16:3e:b6:91:70
+ actions=drop
+ cookie=0x0, duration=6.670s, table=17, n_packets=0, n_bytes=0,
+ idle_age=6, priority=0,metadata=0x7
+ actions=resubmit(,18)
+ cookie=0x0, duration=6.674s, table=18, n_packets=0, n_bytes=0,
+ idle_age=6, priority=90,arp,reg6=0x3,metadata=0x7,
+ dl_src=fa:16:3e:b6:91:70,arp_spa=192.168.1.11,
+ arp_sha=fa:16:3e:b6:91:70
+ actions=resubmit(,19)
+ cookie=0x0, duration=6.673s, table=18, n_packets=0, n_bytes=0,
+ idle_age=6, priority=80,icmp6,reg6=0x3,metadata=0x7,icmp_type=135,
+ icmp_code=0
+ actions=drop
+ cookie=0x0, duration=6.673s, table=18, n_packets=0, n_bytes=0,
+ idle_age=6, priority=80,icmp6,reg6=0x3,metadata=0x7,icmp_type=136,
+ icmp_code=0
+ actions=drop
+ cookie=0x0, duration=6.673s, table=18, n_packets=0, n_bytes=0,
+ idle_age=6, priority=80,arp,reg6=0x3,metadata=0x7
+ actions=drop
+ cookie=0x0, duration=6.673s, table=18, n_packets=0, n_bytes=0,
+ idle_age=6, priority=0,metadata=0x7
+ actions=resubmit(,19)
+ cookie=0x0, duration=6.673s, table=19, n_packets=0, n_bytes=0,
+ idle_age=6, priority=110,icmp6,metadata=0x7,icmp_type=136,icmp_code=0
+ actions=resubmit(,20)
+ cookie=0x0, duration=6.673s, table=19, n_packets=0, n_bytes=0,
+ idle_age=6, priority=110,icmp6,metadata=0x7,icmp_type=135,icmp_code=0
+ actions=resubmit(,20)
+ cookie=0x0, duration=6.674s, table=19, n_packets=0, n_bytes=0,
+ idle_age=6, priority=100,ip,metadata=0x7
+ actions=load:0x1->NXM_NX_REG0[0],resubmit(,20)
+ cookie=0x0, duration=6.670s, table=19, n_packets=0, n_bytes=0,
+ idle_age=6, priority=100,ipv6,metadata=0x7
+ actions=load:0x1->NXM_NX_REG0[0],resubmit(,20)
+ cookie=0x0, duration=6.674s, table=19, n_packets=0, n_bytes=0,
+ idle_age=6, priority=0,metadata=0x7
+ actions=resubmit(,20)
+ cookie=0x0, duration=6.673s, table=20, n_packets=0, n_bytes=0,
+ idle_age=6, priority=0,metadata=0x7
+ actions=resubmit(,21)
+ cookie=0x0, duration=6.674s, table=21, n_packets=0, n_bytes=0,
+ idle_age=6, priority=100,ipv6,reg0=0x1/0x1,metadata=0x7
+ actions=ct(table=22,zone=NXM_NX_REG5[0..15])
+ cookie=0x0, duration=6.670s, table=21, n_packets=0, n_bytes=0,
+ idle_age=6, priority=100,ip,reg0=0x1/0x1,metadata=0x7
+ actions=ct(table=22,zone=NXM_NX_REG5[0..15])
+ cookie=0x0, duration=6.674s, table=21, n_packets=0, n_bytes=0,
+ idle_age=6, priority=0,metadata=0x7
+ actions=resubmit(,22)
+ cookie=0x0, duration=6.674s, table=22, n_packets=0, n_bytes=0,
+ idle_age=6, priority=65535,ct_state=-new+est-rel-inv+trk,metadata=0x7
+ actions=resubmit(,23)
+ cookie=0x0, duration=6.673s, table=22, n_packets=0, n_bytes=0,
+ idle_age=6, priority=65535,ct_state=-new-est+rel-inv+trk,metadata=0x7
+ actions=resubmit(,23)
+ cookie=0x0, duration=6.673s, table=22, n_packets=0, n_bytes=0,
+ idle_age=6, priority=65535,ct_state=+inv+trk,metadata=0x7
+ actions=drop
+ cookie=0x0, duration=6.673s, table=22, n_packets=0, n_bytes=0,
+ idle_age=6, priority=65535,icmp6,metadata=0x7,icmp_type=135,
+ icmp_code=0
+ actions=resubmit(,23)
+ cookie=0x0, duration=6.673s, table=22, n_packets=0, n_bytes=0,
+ idle_age=6, priority=65535,icmp6,metadata=0x7,icmp_type=136,
+ icmp_code=0
+ actions=resubmit(,23)
+ cookie=0x0, duration=6.674s, table=22, n_packets=0, n_bytes=0,
+ idle_age=6, priority=2002,udp,reg6=0x3,metadata=0x7,
+ nw_dst=255.255.255.255,tp_src=68,tp_dst=67
+ actions=load:0x1->NXM_NX_REG0[1],resubmit(,23)
+ cookie=0x0, duration=6.674s, table=22, n_packets=0, n_bytes=0,
+ idle_age=6, priority=2002,udp,reg6=0x3,metadata=0x7,
+ nw_dst=192.168.1.0/24,tp_src=68,tp_dst=67
+ actions=load:0x1->NXM_NX_REG0[1],resubmit(,23)
+ cookie=0x0, duration=6.673s, table=22, n_packets=0, n_bytes=0,
+ idle_age=6, priority=2002,ct_state=+new+trk,ipv6,reg6=0x3,metadata=0x7
+ actions=load:0x1->NXM_NX_REG0[1],resubmit(,23)
+ cookie=0x0, duration=6.673s, table=22, n_packets=0, n_bytes=0,
+ idle_age=6, priority=2002,ct_state=+new+trk,ip,reg6=0x3,metadata=0x7
+ actions=load:0x1->NXM_NX_REG0[1],resubmit(,23)
+ cookie=0x0, duration=6.674s, table=22, n_packets=0, n_bytes=0,
+ idle_age=6, priority=2001,ip,reg6=0x3,metadata=0x7
+ actions=drop
+ cookie=0x0, duration=6.673s, table=22, n_packets=0, n_bytes=0,
+ idle_age=6, priority=2001,ipv6,reg6=0x3,metadata=0x7
+ actions=drop
+ cookie=0x0, duration=6.674s, table=22, n_packets=0, n_bytes=0,
+ idle_age=6, priority=1,ipv6,metadata=0x7
+ actions=load:0x1->NXM_NX_REG0[1],resubmit(,23)
+ cookie=0x0, duration=6.673s, table=22, n_packets=0, n_bytes=0,
+ idle_age=6, priority=1,ip,metadata=0x7
+ actions=load:0x1->NXM_NX_REG0[1],resubmit(,23)
+ cookie=0x0, duration=6.673s, table=22, n_packets=0, n_bytes=0,
+ idle_age=6, priority=0,metadata=0x7
+ actions=resubmit(,23)
+ cookie=0x0, duration=6.673s, table=23, n_packets=0, n_bytes=0,
+ idle_age=6, priority=0,metadata=0x7
+ actions=resubmit(,24)
+ cookie=0x0, duration=6.674s, table=24, n_packets=0, n_bytes=0,
+ idle_age=6, priority=100,ipv6,reg0=0x2/0x2,metadata=0x7
+ actions=ct(commit,zone=NXM_NX_REG5[0..15]),resubmit(,25)
+ cookie=0x0, duration=6.674s, table=24, n_packets=0, n_bytes=0,
+ idle_age=6, priority=100,ip,reg0=0x2/0x2,metadata=0x7
+ actions=ct(commit,zone=NXM_NX_REG5[0..15]),resubmit(,25)
+ cookie=0x0, duration=6.673s, table=24, n_packets=0, n_bytes=0,
+ idle_age=6, priority=100,ipv6,reg0=0x4/0x4,metadata=0x7
+ actions=ct(table=25,zone=NXM_NX_REG5[0..15],nat)
+ cookie=0x0, duration=6.670s, table=24, n_packets=0, n_bytes=0,
+ idle_age=6, priority=100,ip,reg0=0x4/0x4,metadata=0x7
+ actions=ct(table=25,zone=NXM_NX_REG5[0..15],nat)
+ cookie=0x0, duration=6.674s, table=24, n_packets=0, n_bytes=0,
+ idle_age=6, priority=0,metadata=0x7
+ actions=resubmit(,25)
+ cookie=0x0, duration=6.673s, table=25, n_packets=0, n_bytes=0,
+ idle_age=6, priority=50,arp,metadata=0x7,arp_tpa=192.168.1.11,
+ arp_op=1
+ actions=move:NXM_OF_ETH_SRC[]->NXM_OF_ETH_DST[],
+ mod_dl_src:fa:16:3e:b6:91:70,load:0x2->NXM_OF_ARP_OP[],
+ move:NXM_NX_ARP_SHA[]->NXM_NX_ARP_THA[],
+ load:0xfa163eb69170->NXM_NX_ARP_SHA[],
+ move:NXM_OF_ARP_SPA[]->NXM_OF_ARP_TPA[],
+ load:0xc0a8010b->NXM_OF_ARP_SPA[],
+ move:NXM_NX_REG6[]->NXM_NX_REG7[],load:0->NXM_NX_REG6[],
+ load:0->NXM_OF_IN_PORT[],resubmit(,32)
+ cookie=0x0, duration=6.670s, table=25, n_packets=0, n_bytes=0,
+ idle_age=6, priority=50,arp,metadata=0x7,arp_tpa=192.168.1.3,arp_op=1
+ actions=move:NXM_OF_ETH_SRC[]->NXM_OF_ETH_DST[],
+ mod_dl_src:fa:16:3e:d5:00:02,load:0x2->NXM_OF_ARP_OP[],
+ move:NXM_NX_ARP_SHA[]->NXM_NX_ARP_THA[],
+ load:0xfa163ed50002->NXM_NX_ARP_SHA[],
+ move:NXM_OF_ARP_SPA[]->NXM_OF_ARP_TPA[],
+ load:0xc0a80103->NXM_OF_ARP_SPA[],
+ move:NXM_NX_REG6[]->NXM_NX_REG7[],load:0->NXM_NX_REG6[],
+ load:0->NXM_OF_IN_PORT[],resubmit(,32)
+ cookie=0x0, duration=6.670s, table=25, n_packets=0, n_bytes=0,
+ idle_age=6, priority=50,arp,metadata=0x7,arp_tpa=192.168.1.2,
+ arp_op=1
+ actions=move:NXM_OF_ETH_SRC[]->NXM_OF_ETH_DST[],
+ mod_dl_src:fa:16:3e:82:8b:0e,load:0x2->NXM_OF_ARP_OP[],
+ move:NXM_NX_ARP_SHA[]->NXM_NX_ARP_THA[],
+ load:0xfa163e828b0e->NXM_NX_ARP_SHA[],
+ move:NXM_OF_ARP_SPA[]->NXM_OF_ARP_TPA[],
+ load:0xc0a80102->NXM_OF_ARP_SPA[],
+ move:NXM_NX_REG6[]->NXM_NX_REG7[],load:0->NXM_NX_REG6[],
+ load:0->NXM_OF_IN_PORT[],resubmit(,32)
+ cookie=0x0, duration=6.674s, table=25, n_packets=0, n_bytes=0,
+ idle_age=6, priority=0,metadata=0x7
+ actions=resubmit(,26)
+ cookie=0x0, duration=6.674s, table=26, n_packets=0, n_bytes=0,
+ idle_age=6, priority=100,metadata=0x7,
+ dl_dst=01:00:00:00:00:00/01:00:00:00:00:00
+ actions=load:0xffff->NXM_NX_REG7[],resubmit(,32)
+ cookie=0x0, duration=6.674s, table=26, n_packets=0, n_bytes=0,
+ idle_age=6, priority=50,metadata=0x7,dl_dst=fa:16:3e:d5:00:02
+ actions=load:0x2->NXM_NX_REG7[],resubmit(,32)
+ cookie=0x0, duration=6.673s, table=26, n_packets=0, n_bytes=0,
+ idle_age=6, priority=50,metadata=0x7,dl_dst=fa:16:3e:b6:91:70
+ actions=load:0x3->NXM_NX_REG7[],resubmit(,32)
+ cookie=0x0, duration=6.670s, table=26, n_packets=0, n_bytes=0,
+ idle_age=6, priority=50,metadata=0x7,dl_dst=fa:16:3e:82:8b:0e
+ actions=load:0x1->NXM_NX_REG7[],resubmit(,32)
+ cookie=0x0, duration=6.674s, table=32, n_packets=0, n_bytes=0,
+ idle_age=6, priority=100,reg7=0x3,metadata=0x7
+ actions=load:0x7->NXM_NX_TUN_ID[0..23],
+ set_field:0x3/0xffffffff->tun_metadata0,
+ move:NXM_NX_REG6[0..14]->NXM_NX_TUN_METADATA0[16..30],output:3
+ cookie=0x0, duration=6.673s, table=32, n_packets=0, n_bytes=0,
+ idle_age=6, priority=100,reg7=0x2,metadata=0x7
+ actions=load:0x7->NXM_NX_TUN_ID[0..23],
+ set_field:0x2/0xffffffff->tun_metadata0,
+ move:NXM_NX_REG6[0..14]->NXM_NX_TUN_METADATA0[16..30],output:3
+ cookie=0x0, duration=6.670s, table=32, n_packets=0, n_bytes=0,
+ idle_age=6, priority=100,reg7=0x1,metadata=0x7
+ actions=load:0x7->NXM_NX_TUN_ID[0..23],
+ set_field:0x1/0xffffffff->tun_metadata0,
+ move:NXM_NX_REG6[0..14]->NXM_NX_TUN_METADATA0[16..30],output:5
+ cookie=0x0, duration=6.674s, table=48, n_packets=0, n_bytes=0,
+ idle_age=6, priority=0,metadata=0x7
+ actions=resubmit(,49)
+ cookie=0x0, duration=6.674s, table=49, n_packets=0, n_bytes=0,
+ idle_age=6, priority=110,icmp6,metadata=0x7,icmp_type=135,icmp_code=0
+ actions=resubmit(,50)
+ cookie=0x0, duration=6.673s, table=49, n_packets=0, n_bytes=0,
+ idle_age=6, priority=110,icmp6,metadata=0x7,icmp_type=136,icmp_code=0
+ actions=resubmit(,50)
+ cookie=0x0, duration=6.674s, table=49, n_packets=0, n_bytes=0,
+ idle_age=6, priority=100,ipv6,metadata=0x7
+ actions=load:0x1->NXM_NX_REG0[0],resubmit(,50)
+ cookie=0x0, duration=6.673s, table=49, n_packets=0, n_bytes=0,
+ idle_age=6, priority=100,ip,metadata=0x7
+ actions=load:0x1->NXM_NX_REG0[0],resubmit(,50)
+ cookie=0x0, duration=6.674s, table=49, n_packets=0, n_bytes=0,
+ idle_age=6, priority=0,metadata=0x7
+ actions=resubmit(,50)
+ cookie=0x0, duration=6.674s, table=50, n_packets=0, n_bytes=0,
+ idle_age=6, priority=100,ip,reg0=0x1/0x1,metadata=0x7
+ actions=ct(table=51,zone=NXM_NX_REG5[0..15])
+ cookie=0x0, duration=6.673s, table=50, n_packets=0, n_bytes=0,
+ idle_age=6, priority=100,ipv6,reg0=0x1/0x1,metadata=0x7
+ actions=ct(table=51,zone=NXM_NX_REG5[0..15])
+ cookie=0x0, duration=6.673s, table=50, n_packets=0, n_bytes=0,
+ idle_age=6, priority=0,metadata=0x7
+ actions=resubmit(,51)
+ cookie=0x0, duration=6.670s, table=51, n_packets=0, n_bytes=0,
+ idle_age=6, priority=0,metadata=0x7
+ actions=resubmit(,52)
+ cookie=0x0, duration=6.674s, table=52, n_packets=0, n_bytes=0,
+ idle_age=6, priority=65535,ct_state=+inv+trk,metadata=0x7
+ actions=drop
+ cookie=0x0, duration=6.674s, table=52, n_packets=0, n_bytes=0,
+ idle_age=6, priority=65535,ct_state=-new+est-rel-inv+trk,metadata=0x7
+ actions=resubmit(,53)
+ cookie=0x0, duration=6.673s, table=52, n_packets=0, n_bytes=0,
+ idle_age=6, priority=65535,ct_state=-new-est+rel-inv+trk,metadata=0x7
+ actions=resubmit(,53)
+ cookie=0x0, duration=6.673s, table=52, n_packets=0, n_bytes=0,
+ idle_age=6, priority=65535,icmp6,metadata=0x7,icmp_type=136,
+ icmp_code=0
+ actions=resubmit(,53)
+ cookie=0x0, duration=6.673s, table=52, n_packets=0, n_bytes=0,
+ idle_age=6, priority=65535,icmp6,metadata=0x7,icmp_type=135,
+ icmp_code=0
+ actions=resubmit(,53)
+ cookie=0x0, duration=6.674s, table=52, n_packets=0, n_bytes=0,
+ idle_age=6, priority=2002,ct_state=+new+trk,ip,reg7=0x3,metadata=0x7,
+ nw_src=192.168.1.11
+ actions=load:0x1->NXM_NX_REG0[1],resubmit(,53)
+ cookie=0x0, duration=6.670s, table=52, n_packets=0, n_bytes=0,
+ idle_age=6, priority=2002,ct_state=+new+trk,ip,reg7=0x3,metadata=0x7,
+ nw_src=192.168.1.11
+ actions=load:0x1->NXM_NX_REG0[1],resubmit(,53)
+ cookie=0x0, duration=6.670s, table=52, n_packets=0, n_bytes=0,
+ idle_age=6, priority=2002,udp,reg7=0x3,metadata=0x7,
+ nw_src=192.168.1.0/24,tp_src=67,tp_dst=68
+ actions=load:0x1->NXM_NX_REG0[1],resubmit(,53)
+ cookie=0x0, duration=6.670s, table=52, n_packets=0, n_bytes=0,
+ idle_age=6, priority=2002,ct_state=+new+trk,ipv6,reg7=0x3,
+ metadata=0x7
+ actions=load:0x1->NXM_NX_REG0[1],resubmit(,53)
+ cookie=0x0, duration=6.673s, table=52, n_packets=0, n_bytes=0,
+ idle_age=6, priority=2001,ip,reg7=0x3,metadata=0x7
+ actions=drop
+ cookie=0x0, duration=6.673s, table=52, n_packets=0, n_bytes=0,
+ idle_age=6, priority=2001,ipv6,reg7=0x3,metadata=0x7
+ actions=drop
+ cookie=0x0, duration=6.674s, table=52, n_packets=0, n_bytes=0,
+ idle_age=6, priority=1,ip,metadata=0x7
+ actions=load:0x1->NXM_NX_REG0[1],resubmit(,53)
+ cookie=0x0, duration=6.674s, table=52, n_packets=0, n_bytes=0,
+ idle_age=6, priority=1,ipv6,metadata=0x7
+ actions=load:0x1->NXM_NX_REG0[1],resubmit(,53)
+ cookie=0x0, duration=6.674s, table=52, n_packets=0, n_bytes=0,
+ idle_age=6, priority=0,metadata=0x7
+ actions=resubmit(,53)
+ cookie=0x0, duration=6.674s, table=53, n_packets=0, n_bytes=0,
+ idle_age=6, priority=100,ipv6,reg0=0x4/0x4,metadata=0x7
+ actions=ct(table=54,zone=NXM_NX_REG5[0..15],nat)
+ cookie=0x0, duration=6.674s, table=53, n_packets=0, n_bytes=0,
+ idle_age=6, priority=100,ip,reg0=0x4/0x4,metadata=0x7
+ actions=ct(table=54,zone=NXM_NX_REG5[0..15],nat)
+ cookie=0x0, duration=6.673s, table=53, n_packets=0, n_bytes=0,
+ idle_age=6, priority=100,ipv6,reg0=0x2/0x2,metadata=0x7
+ actions=ct(commit,zone=NXM_NX_REG5[0..15]),resubmit(,54)
+ cookie=0x0, duration=6.673s, table=53, n_packets=0, n_bytes=0,
+ idle_age=6, priority=100,ip,reg0=0x2/0x2,metadata=0x7
+ actions=ct(commit,zone=NXM_NX_REG5[0..15]),resubmit(,54)
+ cookie=0x0, duration=6.674s, table=53, n_packets=0, n_bytes=0,
+ idle_age=6, priority=0,metadata=0x7
+ actions=resubmit(,54)
+ cookie=0x0, duration=6.674s, table=54, n_packets=0, n_bytes=0,
+ idle_age=6, priority=90,ip,reg7=0x3,metadata=0x7,
+ dl_dst=fa:16:3e:b6:91:70,nw_dst=255.255.255.255
+ actions=resubmit(,55)
+ cookie=0x0, duration=6.673s, table=54, n_packets=0, n_bytes=0,
+ idle_age=6, priority=90,ip,reg7=0x3,metadata=0x7,
+ dl_dst=fa:16:3e:b6:91:70,nw_dst=192.168.1.11
+ actions=resubmit(,55)
+ cookie=0x0, duration=6.673s, table=54, n_packets=0, n_bytes=0,
+ idle_age=6, priority=90,ip,reg7=0x3,metadata=0x7,
+ dl_dst=fa:16:3e:b6:91:70,nw_dst=224.0.0.0/4
+ actions=resubmit(,55)
+ cookie=0x0, duration=6.670s, table=54, n_packets=0, n_bytes=0,
+ idle_age=6, priority=80,ip,reg7=0x3,metadata=0x7,
+ dl_dst=fa:16:3e:b6:91:70
+ actions=drop
+ cookie=0x0, duration=6.670s, table=54, n_packets=0, n_bytes=0,
+ idle_age=6, priority=80,ipv6,reg7=0x3,metadata=0x7,
+ dl_dst=fa:16:3e:b6:91:70
+ actions=drop
+ cookie=0x0, duration=6.674s, table=54, n_packets=0, n_bytes=0,
+ idle_age=6, priority=0,metadata=0x7
+ actions=resubmit(,55)
+ cookie=0x0, duration=6.673s, table=55, n_packets=0, n_bytes=0,
+ idle_age=6, priority=100,metadata=0x7,
+ dl_dst=01:00:00:00:00:00/01:00:00:00:00:00
+ actions=resubmit(,64)
+ cookie=0x0, duration=6.674s, table=55, n_packets=0, n_bytes=0,
+ idle_age=6, priority=50,reg7=0x3,metadata=0x7,
+ dl_dst=fa:16:3e:b6:91:70
+ actions=resubmit(,64)
+ cookie=0x0, duration=6.673s, table=55, n_packets=0, n_bytes=0,
+ idle_age=6, priority=50,reg7=0x1,metadata=0x7
+ actions=resubmit(,64)
+ cookie=0x0, duration=6.670s, table=55, n_packets=0, n_bytes=0,
+ idle_age=6, priority=50,reg7=0x2,metadata=0x7
+ actions=resubmit(,64)
+
+#. On compute nodes containing a port on the network, the OVN controller
+ also creates an additional flow.
+
+ .. code-block:: console
+
+ cookie=0x0, duration=13.358s, table=52, n_packets=0, n_bytes=0,
+ idle_age=13, priority=2002,ct_state=+new+trk,ipv6,reg7=0x3,
+ metadata=0x7,ipv6_src=::
+ actions=load:0x1->NXM_NX_REG0[1],resubmit(,53)
+
+.. todo: Future commit
+
+ Attach the router to a second self-service network
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+.. todo: Add after NAT patches merge.
+
+ Attach the router to an external network
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
diff --git a/doc/source/admin/ovn/refarch/selfservice-networks.rst b/doc/source/admin/ovn/refarch/selfservice-networks.rst
new file mode 100644
index 00000000000..decbb694f08
--- /dev/null
+++ b/doc/source/admin/ovn/refarch/selfservice-networks.rst
@@ -0,0 +1,517 @@
+.. _refarch-selfservice-networks:
+
+Self-service networks
+---------------------
+
+A self-service (project) network includes only virtual components, thus
+enabling projects to manage them without additional configuration of the
+underlying physical network. The OVN mechanism driver supports Geneve
+and VLAN network types with a preference toward Geneve. Projects can
+choose to isolate self-service networks, connect two or more together
+via routers, or connect them to provider networks via routers with
+appropriate capabilities. Similar to provider networks, self-service
+networks can use arbitrary names.
+
+.. note::
+
+ Similar to provider networks, self-service VLAN networks map to a
+ unique bridge on each compute node that supports launching instances
+ on those networks. Self-service VLAN networks also require several
+ commands at the host and OVS levels. The following example assumes
+ use of Geneve self-service networks.
+
+Create a self-service network
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Creating a self-service network involves several commands at the
+Networking service level that yield a series of operations at the OVN
+level to create the virtual network components. The following example
+creates a Geneve self-service network and binds a subnet to it. The
+subnet uses DHCP to distribute IP addresses to instances.
+
+#. On the controller node, source the credentials for a regular
+ (non-privileged) project. The following example uses the ``demo``
+ project.
+
+#. On the controller node, create a self-service network in the Networking
+ service.
+
+ .. code-block:: console
+
+ $ openstack network create selfservice
+ +-------------------------+--------------------------------------+
+ | Field | Value |
+ +-------------------------+--------------------------------------+
+ | admin_state_up | UP |
+ | availability_zone_hints | |
+ | availability_zones | |
+ | created_at | 2016-06-09T15:42:41 |
+ | description | |
+ | id | f49791f7-e653-4b43-99b1-0f5557c313e4 |
+ | ipv4_address_scope | None |
+ | ipv6_address_scope | None |
+ | mtu | 1442 |
+ | name | selfservice |
+ | port_security_enabled | True |
+ | project_id | 1ef26f483b9d44e8ac0c97388d6cb609 |
+ | router_external | Internal |
+ | shared | False |
+ | status | ACTIVE |
+ | subnets | |
+ | tags | [] |
+ | updated_at | 2016-06-09T15:42:41 |
+ +-------------------------+--------------------------------------+
+
+OVN operations
+^^^^^^^^^^^^^^
+
+The OVN mechanism driver and OVN perform the following operations
+during creation of a self-service network.
+
+#. The mechanism driver translates the network into a logical switch in
+ the OVN northbound database.
+
+ .. code-block:: console
+
+ uuid : 0ab40684-7cf8-4d6c-ae8b-9d9143762d37
+ acls : []
+ external_ids : {"neutron:network_name"="selfservice"}
+ name : "neutron-d5aadceb-d8d6-41c8-9252-c5e0fe6c26a5"
+ ports : []
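+
+ The logical switch can be viewed with the ``ovn-nbctl`` utility on a
+ node hosting the OVN northbound database, for example:
+
+ .. code-block:: console
+
+ $ ovn-nbctl list Logical_Switch
+ $ ovn-nbctl show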
+
+#. The OVN northbound service translates this object into new datapath
+ bindings and logical flows in the OVN southbound database.
+
+ * Datapath bindings
+
+ .. code-block:: console
+
+ _uuid : 0b214af6-8910-489c-926a-fd0ed16a8251
+ external_ids : {logical-switch="15e2c80b-1461-4003-9869-80416cd97de5"}
+ tunnel_key : 5
+
+ * Logical flows
+
+ .. code-block:: console
+
+ Datapath: 0b214af6-8910-489c-926a-fd0ed16a8251 Pipeline: ingress
+ table= 0( ls_in_port_sec_l2), priority= 100, match=(eth.src[40]),
+ action=(drop;)
+ table= 0( ls_in_port_sec_l2), priority= 100, match=(vlan.present),
+ action=(drop;)
+ table= 1( ls_in_port_sec_ip), priority= 0, match=(1),
+ action=(next;)
+ table= 2( ls_in_port_sec_nd), priority= 0, match=(1),
+ action=(next;)
+ table= 3( ls_in_pre_acl), priority= 0, match=(1),
+ action=(next;)
+ table= 4( ls_in_pre_lb), priority= 0, match=(1),
+ action=(next;)
+ table= 5( ls_in_pre_stateful), priority= 100, match=(reg0[0] == 1),
+ action=(ct_next;)
+ table= 5( ls_in_pre_stateful), priority= 0, match=(1),
+ action=(next;)
+ table= 6( ls_in_acl), priority= 0, match=(1),
+ action=(next;)
+ table= 7( ls_in_lb), priority= 0, match=(1),
+ action=(next;)
+ table= 8( ls_in_stateful), priority= 100, match=(reg0[2] == 1),
+ action=(ct_lb;)
+ table= 8( ls_in_stateful), priority= 100, match=(reg0[1] == 1),
+ action=(ct_commit; next;)
+ table= 8( ls_in_stateful), priority= 0, match=(1),
+ action=(next;)
+ table= 9( ls_in_arp_rsp), priority= 0, match=(1),
+ action=(next;)
+ table=10( ls_in_l2_lkup), priority= 100, match=(eth.mcast),
+ action=(outport = "_MC_flood"; output;)
+ Datapath: 0b214af6-8910-489c-926a-fd0ed16a8251 Pipeline: egress
+ table= 0( ls_out_pre_lb), priority= 0, match=(1),
+ action=(next;)
+ table= 1( ls_out_pre_acl), priority= 0, match=(1),
+ action=(next;)
+ table= 2(ls_out_pre_stateful), priority= 100, match=(reg0[0] == 1),
+ action=(ct_next;)
+ table= 2(ls_out_pre_stateful), priority= 0, match=(1),
+ action=(next;)
+ table= 3( ls_out_lb), priority= 0, match=(1),
+ action=(next;)
+ table= 4( ls_out_acl), priority= 0, match=(1),
+ action=(next;)
+ table= 5( ls_out_stateful), priority= 100, match=(reg0[1] == 1),
+ action=(ct_commit; next;)
+ table= 5( ls_out_stateful), priority= 100, match=(reg0[2] == 1),
+ action=(ct_lb;)
+ table= 5( ls_out_stateful), priority= 0, match=(1),
+ action=(next;)
+ table= 6( ls_out_port_sec_ip), priority= 0, match=(1),
+ action=(next;)
+ table= 7( ls_out_port_sec_l2), priority= 100, match=(eth.mcast),
+ action=(output;)
+
+ .. note::
+
+ These actions do not create flows on any nodes.
+
+Create a subnet on the self-service network
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+A self-service network requires at least one subnet. In most cases,
+the environment provides suitable values for instance IP address
+allocation, the default gateway IP address, and metadata such as DNS
+name resolution.
+
+#. On the controller node, create a subnet bound to the self-service network
+ ``selfservice``.
+
+ .. code-block:: console
+
+ $ openstack subnet create --network selfservice --subnet-range 192.168.1.0/24 selfservice-v4
+ +-------------------+--------------------------------------+
+ | Field | Value |
+ +-------------------+--------------------------------------+
+ | allocation_pools | 192.168.1.2-192.168.1.254 |
+ | cidr | 192.168.1.0/24 |
+ | created_at | 2016-06-16 00:19:08+00:00 |
+ | description | |
+ | dns_nameservers | |
+ | enable_dhcp | True |
+ | gateway_ip | 192.168.1.1 |
+ | headers | |
+ | host_routes | |
+ | id | 8f027f25-0112-45b9-a1b9-2f8097c57219 |
+ | ip_version | 4 |
+ | ipv6_address_mode | None |
+ | ipv6_ra_mode | None |
+ | name | selfservice-v4 |
+ | network_id | 8ed4e43b-63ef-41ed-808b-b59f1120aec0 |
+ | project_id | b1ebf33664df402693f729090cfab861 |
+ | subnetpool_id | None |
+ | updated_at | 2016-06-16 00:19:08+00:00 |
+ +-------------------+--------------------------------------+
+
+
+OVN operations
+^^^^^^^^^^^^^^
+
+.. todo: Update this part with the new agentless DHCP details
+
+The OVN mechanism driver and OVN perform the following operations
+during creation of a subnet on a self-service network.
+
+#. If the subnet uses DHCP for IP address management, create a logical port
+ for each DHCP agent serving the subnet and bind it to the logical
+ switch. In this example, the subnet contains two DHCP agents.
+
+ .. code-block:: console
+
+ _uuid : 1ed7c28b-dc69-42b8-bed6-46477bb8b539
+ addresses : ["fa:16:3e:94:db:5e 192.168.1.2"]
+ enabled : true
+ external_ids : {"neutron:port_name"=""}
+ name : "0cfbbdca-ff58-4cf8-a7d3-77daaebe3056"
+ options : {}
+ parent_name : []
+ port_security : []
+ tag : []
+ type : ""
+ up : true
+
+ _uuid : ae10a5e0-db25-4108-b06a-d2d5c127d9c4
+ addresses : ["fa:16:3e:90:bd:f1 192.168.1.3"]
+ enabled : true
+ external_ids : {"neutron:port_name"=""}
+ name : "74930ace-d939-4bca-b577-fccba24c3fca"
+ options : {}
+ parent_name : []
+ port_security : []
+ tag : []
+ type : ""
+ up : true
+
+ _uuid : 0ab40684-7cf8-4d6c-ae8b-9d9143762d37
+ acls : []
+ external_ids : {"neutron:network_name"="selfservice"}
+ name : "neutron-d5aadceb-d8d6-41c8-9252-c5e0fe6c26a5"
+ ports : [1ed7c28b-dc69-42b8-bed6-46477bb8b539,
+ ae10a5e0-db25-4108-b06a-d2d5c127d9c4]
+
+#. The OVN northbound service creates port bindings for these logical
+ ports and adds them to the appropriate multicast group.
+
+ * Port bindings
+
+ .. code-block:: console
+
+ _uuid : 3e463ca0-951c-46fd-b6cf-05392fa3aa1f
+ chassis : 6a9d0619-8818-41e6-abef-2f3d9a597c03
+ datapath : 0b214af6-8910-489c-926a-fd0ed16a8251
+ logical_port : "a203b410-97c1-4e4a-b0c3-558a10841c16"
+ mac : ["fa:16:3e:a1:dc:58 192.168.1.3"]
+ options : {}
+ parent_port : []
+ tag : []
+ tunnel_key : 2
+ type : ""
+
+ _uuid : fa7b294d-2a62-45ae-8de3-a41c002de6de
+ chassis : d63e8ae8-caf3-4a6b-9840-5c3a57febcac
+ datapath : 0b214af6-8910-489c-926a-fd0ed16a8251
+ logical_port : "39b23721-46f4-4747-af54-7e12f22b3397"
+ mac : ["fa:16:3e:1a:b4:23 192.168.1.2"]
+ options : {}
+ parent_port : []
+ tag : []
+ tunnel_key : 1
+ type : ""
+
+ * Multicast groups
+
+ .. code-block:: console
+
+ _uuid : c08d0102-c414-4a47-98d9-dd3fa9f9901c
+ datapath : 0b214af6-8910-489c-926a-fd0ed16a8251
+ name : _MC_flood
+ ports : [3e463ca0-951c-46fd-b6cf-05392fa3aa1f,
+ fa7b294d-2a62-45ae-8de3-a41c002de6de]
+ tunnel_key : 65535
+
+#. The OVN northbound service translates the logical ports into logical flows
+ in the OVN southbound database.
+
+ .. code-block:: console
+
+ Datapath: 0b214af6-8910-489c-926a-fd0ed16a8251 Pipeline: ingress
+ table= 0( ls_in_port_sec_l2), priority= 50,
+ match=(inport == "39b23721-46f4-4747-af54-7e12f22b3397"),
+ action=(next;)
+ table= 0( ls_in_port_sec_l2), priority= 50,
+ match=(inport == "a203b410-97c1-4e4a-b0c3-558a10841c16"),
+ action=(next;)
+ table= 9( ls_in_arp_rsp), priority= 50,
+ match=(arp.tpa == 192.168.1.2 && arp.op == 1),
+ action=(eth.dst = eth.src; eth.src = fa:16:3e:1a:b4:23;
+ arp.op = 2; /* ARP reply */ arp.tha = arp.sha;
+ arp.sha = fa:16:3e:1a:b4:23; arp.tpa = arp.spa;
+ arp.spa = 192.168.1.2; outport = inport;
+ inport = ""; /* Allow sending out inport. */ output;)
+ table= 9( ls_in_arp_rsp), priority= 50,
+ match=(arp.tpa == 192.168.1.3 && arp.op == 1),
+ action=(eth.dst = eth.src; eth.src = fa:16:3e:a1:dc:58;
+ arp.op = 2; /* ARP reply */ arp.tha = arp.sha;
+ arp.sha = fa:16:3e:a1:dc:58; arp.tpa = arp.spa;
+ arp.spa = 192.168.1.3; outport = inport;
+ inport = ""; /* Allow sending out inport. */ output;)
+ table=10( ls_in_l2_lkup), priority= 50,
+ match=(eth.dst == fa:16:3e:a1:dc:58),
+ action=(outport = "a203b410-97c1-4e4a-b0c3-558a10841c16"; output;)
+ table=10( ls_in_l2_lkup), priority= 50,
+ match=(eth.dst == fa:16:3e:1a:b4:23),
+ action=(outport = "39b23721-46f4-4747-af54-7e12f22b3397"; output;)
+ Datapath: 0b214af6-8910-489c-926a-fd0ed16a8251 Pipeline: egress
+ table= 7( ls_out_port_sec_l2), priority= 50,
+ match=(outport == "39b23721-46f4-4747-af54-7e12f22b3397"),
+ action=(output;)
+ table= 7( ls_out_port_sec_l2), priority= 50,
+ match=(outport == "a203b410-97c1-4e4a-b0c3-558a10841c16"),
+ action=(output;)
+
+#. For each compute node without a DHCP agent on the subnet:
+
+ * The OVN controller service translates these objects into flows on the
+ integration bridge ``br-int``.
+
+ .. code-block:: console
+
+ # ovs-ofctl dump-flows br-int
+ cookie=0x0, duration=9.054s, table=32, n_packets=0, n_bytes=0,
+ idle_age=9, priority=100,reg7=0xffff,metadata=0x5
+ actions=load:0x5->NXM_NX_TUN_ID[0..23],
+ set_field:0xffff/0xffffffff->tun_metadata0,
+ move:NXM_NX_REG6[0..14]->NXM_NX_TUN_METADATA0[16..30],
+ output:4,output:3
+
+#. For each compute node with a DHCP agent on the subnet:
+
+ * Creation of a DHCP network namespace adds a virtual switch port that
+ connects the DHCP agent with the ``dnsmasq`` process to the integration
+ bridge.
+
+ .. code-block:: console
+
+ # ovs-ofctl show br-int
+ OFPT_FEATURES_REPLY (xid=0x2): dpid:000022024a1dc045
+ n_tables:254, n_buffers:256
+ capabilities: FLOW_STATS TABLE_STATS PORT_STATS QUEUE_STATS ARP_MATCH_IP
+ actions: output enqueue set_vlan_vid set_vlan_pcp strip_vlan mod_dl_src mod_dl_dst mod_nw_src mod_nw_dst mod_nw_tos mod_tp_src mod_tp_dst
+ 9(tap39b23721-46): addr:00:00:00:00:b0:5d
+ config: PORT_DOWN
+ state: LINK_DOWN
+ speed: 0 Mbps now, 0 Mbps max
+
+ * The OVN controller service translates these objects into flows on the
+ integration bridge.
+
+ .. code-block:: console
+
+ cookie=0x0, duration=21.074s, table=0, n_packets=8, n_bytes=648,
+ idle_age=11, priority=100,in_port=9
+ actions=load:0x2->NXM_NX_REG5[],load:0x5->OXM_OF_METADATA[],
+ load:0x1->NXM_NX_REG6[],resubmit(,16)
+ cookie=0x0, duration=21.076s, table=16, n_packets=0, n_bytes=0,
+ idle_age=21, priority=100,metadata=0x5,
+ dl_src=01:00:00:00:00:00/01:00:00:00:00:00
+ actions=drop
+ cookie=0x0, duration=21.075s, table=16, n_packets=0, n_bytes=0,
+ idle_age=21, priority=100,metadata=0x5,vlan_tci=0x1000/0x1000
+ actions=drop
+ cookie=0x0, duration=21.076s, table=16, n_packets=0, n_bytes=0,
+ idle_age=21, priority=50,reg6=0x2,metadata=0x5
+ actions=resubmit(,17)
+ cookie=0x0, duration=21.075s, table=16, n_packets=8, n_bytes=648,
+ idle_age=11, priority=50,reg6=0x1,metadata=0x5
+ actions=resubmit(,17)
+ cookie=0x0, duration=21.075s, table=17, n_packets=8, n_bytes=648,
+ idle_age=11, priority=0,metadata=0x5
+ actions=resubmit(,18)
+ cookie=0x0, duration=21.076s, table=18, n_packets=8, n_bytes=648,
+ idle_age=11, priority=0,metadata=0x5
+ actions=resubmit(,19)
+ cookie=0x0, duration=21.076s, table=19, n_packets=8, n_bytes=648,
+ idle_age=11, priority=0,metadata=0x5
+ actions=resubmit(,20)
+ cookie=0x0, duration=21.075s, table=20, n_packets=8, n_bytes=648,
+ idle_age=11, priority=0,metadata=0x5
+ actions=resubmit(,21)
+ cookie=0x0, duration=5.398s, table=21, n_packets=0, n_bytes=0,
+ idle_age=5, priority=100,ipv6,reg0=0x1/0x1,metadata=0x5
+ actions=ct(table=22,zone=NXM_NX_REG5[0..15])
+ cookie=0x0, duration=5.398s, table=21, n_packets=0, n_bytes=0,
+ idle_age=5, priority=100,ip,reg0=0x1/0x1,metadata=0x5
+ actions=ct(table=22,zone=NXM_NX_REG5[0..15])
+ cookie=0x0, duration=5.398s, table=22, n_packets=6, n_bytes=508,
+ idle_age=2, priority=0,metadata=0x5
+ actions=resubmit(,23)
+ cookie=0x0, duration=5.398s, table=23, n_packets=6, n_bytes=508,
+ idle_age=2, priority=0,metadata=0x5
+ actions=resubmit(,24)
+ cookie=0x0, duration=5.398s, table=24, n_packets=0, n_bytes=0,
+ idle_age=5, priority=100,ipv6,reg0=0x4/0x4,metadata=0x5
+ actions=ct(table=25,zone=NXM_NX_REG5[0..15],nat)
+ cookie=0x0, duration=5.398s, table=24, n_packets=0, n_bytes=0,
+ idle_age=5, priority=100,ip,reg0=0x4/0x4,metadata=0x5
+ actions=ct(table=25,zone=NXM_NX_REG5[0..15],nat)
+ cookie=0x0, duration=5.398s, table=24, n_packets=0, n_bytes=0,
+ idle_age=5, priority=100,ipv6,reg0=0x2/0x2,metadata=0x5
+ actions=ct(commit,zone=NXM_NX_REG5[0..15]),resubmit(,25)
+ cookie=0x0, duration=5.398s, table=24, n_packets=0, n_bytes=0,
+ idle_age=5, priority=100,ip,reg0=0x2/0x2,metadata=0x5
+ actions=ct(commit,zone=NXM_NX_REG5[0..15]),resubmit(,25)
+ cookie=0x0, duration=5.399s, table=24, n_packets=6, n_bytes=508,
+ idle_age=2, priority=0,metadata=0x5 actions=resubmit(,25)
+ cookie=0x0, duration=5.398s, table=25, n_packets=0, n_bytes=0,
+ idle_age=5, priority=50,arp,metadata=0x5,
+ arp_tpa=192.168.1.2,arp_op=1
+ actions=move:NXM_OF_ETH_SRC[]->NXM_OF_ETH_DST[],
+ mod_dl_src:fa:16:3e:82:8b:0e,load:0x2->NXM_OF_ARP_OP[],
+ move:NXM_NX_ARP_SHA[]->NXM_NX_ARP_THA[],
+ load:0xfa163e828b0e->NXM_NX_ARP_SHA[],
+ move:NXM_OF_ARP_SPA[]->NXM_OF_ARP_TPA[],
+ load:0xc0a80102->NXM_OF_ARP_SPA[],
+ move:NXM_NX_REG6[]->NXM_NX_REG7[],load:0->NXM_NX_REG6[],
+ load:0->NXM_OF_IN_PORT[],resubmit(,32)
+ cookie=0x0, duration=5.378s, table=25, n_packets=0, n_bytes=0,
+ idle_age=5, priority=50,arp,metadata=0x5,arp_tpa=192.168.1.3,
+ arp_op=1
+ actions=move:NXM_OF_ETH_SRC[]->NXM_OF_ETH_DST[],
+ mod_dl_src:fa:16:3e:d5:00:02,load:0x2->NXM_OF_ARP_OP[],
+ move:NXM_NX_ARP_SHA[]->NXM_NX_ARP_THA[],
+ load:0xfa163ed50002->NXM_NX_ARP_SHA[],
+ move:NXM_OF_ARP_SPA[]->NXM_OF_ARP_TPA[],
+ load:0xc0a80103->NXM_OF_ARP_SPA[],
+ move:NXM_NX_REG6[]->NXM_NX_REG7[],load:0->NXM_NX_REG6[],
+ load:0->NXM_OF_IN_PORT[],resubmit(,32)
+ cookie=0x0, duration=5.399s, table=25, n_packets=6, n_bytes=508,
+ idle_age=2, priority=0,metadata=0x5
+ actions=resubmit(,26)
+ cookie=0x0, duration=5.399s, table=26, n_packets=6, n_bytes=508,
+ idle_age=2, priority=100,metadata=0x5,
+ dl_dst=01:00:00:00:00:00/01:00:00:00:00:00
+ actions=load:0xffff->NXM_NX_REG7[],resubmit(,32)
+ cookie=0x0, duration=5.398s, table=26, n_packets=0, n_bytes=0,
+ idle_age=5, priority=50,metadata=0x5,dl_dst=fa:16:3e:d5:00:02
+ actions=load:0x2->NXM_NX_REG7[],resubmit(,32)
+ cookie=0x0, duration=5.398s, table=26, n_packets=0, n_bytes=0,
+ idle_age=5, priority=50,metadata=0x5,dl_dst=fa:16:3e:82:8b:0e
+ actions=load:0x1->NXM_NX_REG7[],resubmit(,32)
+ cookie=0x0, duration=21.038s, table=32, n_packets=0, n_bytes=0,
+ idle_age=21, priority=100,reg7=0x2,metadata=0x5
+ actions=load:0x5->NXM_NX_TUN_ID[0..23],
+ set_field:0x2/0xffffffff->tun_metadata0,
+ move:NXM_NX_REG6[0..14]->NXM_NX_TUN_METADATA0[16..30],output:4
+ cookie=0x0, duration=21.038s, table=32, n_packets=8, n_bytes=648,
+ idle_age=11, priority=100,reg7=0xffff,metadata=0x5
+ actions=load:0x5->NXM_NX_TUN_ID[0..23],
+ set_field:0xffff/0xffffffff->tun_metadata0,
+ move:NXM_NX_REG6[0..14]->NXM_NX_TUN_METADATA0[16..30],
+ output:4,resubmit(,33)
+ cookie=0x0, duration=5.397s, table=33, n_packets=12, n_bytes=1016,
+ idle_age=2, priority=100,reg7=0xffff,metadata=0x5
+ actions=load:0x1->NXM_NX_REG7[],resubmit(,34),
+ load:0xffff->NXM_NX_REG7[]
+ cookie=0x0, duration=5.397s, table=33, n_packets=0, n_bytes=0,
+ idle_age=5, priority=100,reg7=0x1,metadata=0x5
+ actions=resubmit(,34)
+ cookie=0x0, duration=21.074s, table=34, n_packets=8, n_bytes=648,
+ idle_age=11, priority=100,reg6=0x1,reg7=0x1,metadata=0x5
+ actions=drop
+ cookie=0x0, duration=21.076s, table=48, n_packets=8, n_bytes=648,
+ idle_age=11, priority=0,metadata=0x5 actions=resubmit(,49)
+ cookie=0x0, duration=21.075s, table=49, n_packets=8, n_bytes=648,
+ idle_age=11, priority=0,metadata=0x5 actions=resubmit(,50)
+ cookie=0x0, duration=5.398s, table=50, n_packets=0, n_bytes=0,
+ idle_age=5, priority=100,ipv6,reg0=0x1/0x1,metadata=0x5
+ actions=ct(table=51,zone=NXM_NX_REG5[0..15])
+ cookie=0x0, duration=5.398s, table=50, n_packets=0, n_bytes=0,
+ idle_age=5, priority=100,ip,reg0=0x1/0x1,metadata=0x5
+ actions=ct(table=51,zone=NXM_NX_REG5[0..15])
+ cookie=0x0, duration=5.398s, table=50, n_packets=6, n_bytes=508,
+ idle_age=3, priority=0,metadata=0x5
+ actions=resubmit(,51)
+ cookie=0x0, duration=5.398s, table=51, n_packets=6, n_bytes=508,
+ idle_age=3, priority=0,metadata=0x5
+ actions=resubmit(,52)
+ cookie=0x0, duration=5.398s, table=52, n_packets=6, n_bytes=508,
+ idle_age=3, priority=0,metadata=0x5
+ actions=resubmit(,53)
+ cookie=0x0, duration=5.399s, table=53, n_packets=0, n_bytes=0,
+ idle_age=5, priority=100,ipv6,reg0=0x4/0x4,metadata=0x5
+ actions=ct(table=54,zone=NXM_NX_REG5[0..15],nat)
+ cookie=0x0, duration=5.398s, table=53, n_packets=0, n_bytes=0,
+ idle_age=5, priority=100,ip,reg0=0x4/0x4,metadata=0x5
+ actions=ct(table=54,zone=NXM_NX_REG5[0..15],nat)
+ cookie=0x0, duration=5.398s, table=53, n_packets=0, n_bytes=0,
+ idle_age=5, priority=100,ip,reg0=0x2/0x2,metadata=0x5
+ actions=ct(commit,zone=NXM_NX_REG5[0..15]),resubmit(,54)
+ cookie=0x0, duration=5.398s, table=53, n_packets=0, n_bytes=0,
+ idle_age=5, priority=100,ipv6,reg0=0x2/0x2,metadata=0x5
+ actions=ct(commit,zone=NXM_NX_REG5[0..15]),resubmit(,54)
+ cookie=0x0, duration=5.398s, table=53, n_packets=6, n_bytes=508,
+ idle_age=3, priority=0,metadata=0x5
+ actions=resubmit(,54)
+ cookie=0x0, duration=5.398s, table=54, n_packets=6, n_bytes=508,
+ idle_age=3, priority=0,metadata=0x5
+ actions=resubmit(,55)
+ cookie=0x0, duration=5.398s, table=55, n_packets=6, n_bytes=508,
+ idle_age=3, priority=100,metadata=0x5,
+ dl_dst=01:00:00:00:00:00/01:00:00:00:00:00
+ actions=resubmit(,64)
+ cookie=0x0, duration=5.398s, table=55, n_packets=0, n_bytes=0,
+ idle_age=5, priority=50,reg7=0x1,metadata=0x5
+ actions=resubmit(,64)
+ cookie=0x0, duration=5.398s, table=55, n_packets=0, n_bytes=0,
+ idle_age=5, priority=50,reg7=0x2,metadata=0x5
+ actions=resubmit(,64)
+ cookie=0x0, duration=5.397s, table=64, n_packets=6, n_bytes=508,
+ idle_age=3, priority=100,reg7=0x1,metadata=0x5
+ actions=output:9
diff --git a/doc/source/admin/ovn/routing.rst b/doc/source/admin/ovn/routing.rst
new file mode 100644
index 00000000000..c081ff02766
--- /dev/null
+++ b/doc/source/admin/ovn/routing.rst
@@ -0,0 +1,182 @@
+.. _ovn_routing:
+
+=======
+Routing
+=======
+
+North/South
+-----------
+
+The different configurations are detailed in the
+:doc:`/admin/ovn/refarch/refarch`.
+
+Non-distributed FIP
+~~~~~~~~~~~~~~~~~~~
+
+North/South traffic flows through each router's active chassis, both for SNAT
+traffic and for FIPs.
+
+.. image:: figures/ovn-north-south.png
+ :alt: L3 North South non-distributed FIP
+ :align: center
+
+
+Distributed Floating IP
+~~~~~~~~~~~~~~~~~~~~~~~
+
+In the following diagram we can see how VMs with no floating IP (VM1, VM6)
+still communicate through the gateway nodes using SNAT on the edge routers
+R1 and R2.
+
+VM3, VM4, and VM5 each have an assigned floating IP, so their traffic flows
+directly through the local provider bridge/interface to the external network.
+
+.. image:: figures/ovn-north-south-distributed-fip.png
+ :alt: L3 North South distributed FIP
+ :align: center
+
+
+L3HA support
+~~~~~~~~~~~~
+
+The OVN driver implements L3 high availability transparently; you
+don't need to enable any config flags. As soon as more than one chassis
+is capable of acting as an L3 gateway for the external network attached
+to a router, the router gateway port is scheduled on multiple chassis,
+making use of the ``gateway_chassis`` column in OVN's
+``Logical_Router_Port`` table.
+
+In order to have external connectivity, either:
+
+* some gateway nodes have ``ovn-cms-options`` with the value
+ ``enable-chassis-as-gw`` in the Open_vSwitch table's external_ids column, or
+
+* if no gateway node has the external_ids column set with that
+ value, then all nodes are eligible to host gateway chassis.
+
+Example of how to enable a chassis to host gateways:
+ .. code-block:: console
+
+ $ ovs-vsctl set open . external-ids:ovn-cms-options="enable-chassis-as-gw"
+
+At a low level, this functionality is implemented mostly by OpenFlow rules
+with bundle active_passive outputs. The ARP responder and router
+enablement/disablement are handled by ovn-controller. Gratuitous ARPs for FIPs
+and router external addresses are periodically sent by ovn-controller itself.
+
+BFD monitoring
+^^^^^^^^^^^^^^
+
+OVN monitors the availability of the chassis via the BFD protocol, which is
+encapsulated on top of the Geneve tunnels established from chassis to chassis.
+
+.. image:: figures/ovn-l3ha-bfd.png
+ :alt: L3HA BFD monitoring
+ :align: center
+
+
+Each chassis that is marked as a gateway chassis will monitor all the other
+gateway chassis in the deployment as well as compute node chassis, to let the
+gateways enable/disable routing of packets and ARP responses / announcements.
+
+Each compute node chassis will monitor each gateway chassis via BFD to
+automatically steer external traffic (snat/dnat) through the active chassis
+for a given router.
+
+.. image:: figures/ovn-l3ha-bfd-3gw.png
+ :alt: L3HA BFD monitoring (3 gateway nodes)
+ :align: center
+
+The gateway nodes monitor each other in a star topology. Compute nodes don't
+monitor each other because that's not necessary.
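The monitoring rules above can be sketched as a small calculation. This is an illustration of the session count implied by the text, not OVN code; it assumes one BFD session per monitored pair of tunnel endpoints.

```python
def bfd_sessions(gateways: int, computes: int) -> int:
    """Estimate the number of BFD sessions in an OVN deployment.

    Gateway chassis monitor every other gateway chassis (full mesh)
    and every compute chassis; compute chassis monitor every gateway
    chassis but not each other.
    """
    gateway_mesh = gateways * (gateways - 1) // 2  # gateway <-> gateway
    gateway_compute = gateways * computes          # gateway <-> compute
    return gateway_mesh + gateway_compute

# 3 gateway nodes and 2 compute nodes, as in the diagram above:
print(bfd_sessions(3, 2))  # 3 + 6 = 9 sessions
```

Note how the total grows with the product of gateways and computes, while compute-to-compute traffic adds nothing.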
+
+
+Failover (detected by BFD)
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Consider the following example:
+
+.. image:: figures/ovn-l3ha-bfd-failover.png
+ :alt: L3HA BFD monitoring failover
+ :align: center
+
+BFD monitoring of the gateway nodes from the compute nodes will detect that
+the tunnel endpoint to gateway node 1 is down, so traffic that
+needs to reach the external network through the router will be directed
+to the lower-priority chassis for R1. R2 stays the same because gateway node
+2 was already the highest-priority chassis for R2.
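The selection logic described here can be sketched as follows. This is a simplified illustration: the priority map mirrors OVN's ``Gateway_Chassis`` priorities for a router, and ``alive`` stands for the set of chassis whose tunnel endpoints BFD still reports as up.

```python
def active_chassis(gateway_chassis, alive):
    """Pick the chassis that should handle a router's external traffic.

    ``gateway_chassis`` maps chassis name -> priority (higher wins);
    ``alive`` is the set of chassis still reachable via BFD.
    """
    candidates = [c for c in gateway_chassis if c in alive]
    if not candidates:
        return None
    return max(candidates, key=lambda c: gateway_chassis[c])

# R1 prefers gateway-1; once BFD declares it down, gateway-2 takes over:
r1 = {"gateway-1": 2, "gateway-2": 1}
print(active_chassis(r1, {"gateway-1", "gateway-2"}))  # gateway-1
print(active_chassis(r1, {"gateway-2"}))               # gateway-2
```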
+
+Gateway node 2 will detect that the tunnel endpoint to gateway node 1 is down,
+so it will become responsible for the external leg of R1, and its
+ovn-controller will populate flows for the external ARP responder, traffic
+forwarding (N/S) and periodic gratuitous ARPs.
+
+Gateway node 2 will also bind the external port of the router (represented
+as a chassis-redirect port in the Southbound database).
+
+
+If gateway node 1 is still alive, failure of interface 2 will be detected
+because it stops seeing the other nodes.
+
+There is currently no mechanism to detect external network failure, so as a
+good practice to detect network failure we recommend that all interfaces are
+handled over a single bonded interface with VLANs.
+
+Supported failure modes are:
+ - gateway chassis becomes disconnected from the network (tunneling interface)
+ - ovs-vswitchd is stopped (it's responsible for BFD signaling)
+ - ovn-controller is stopped, as ovn-controller will remove itself as a
+ registered chassis.
+
+.. note::
+ As with the VRRP or CARP protocols, this detection mechanism only works
+ for link failures, not for routing failures.
+
+
+Failback
+~~~~~~~~
+
+L3HA behavior in OVN is preemptive (at least for the time being): routers
+are balanced back to their original chassis, which avoids any of the
+gateway nodes becoming a bottleneck.
+
+.. image:: figures/ovn-l3ha-bfd.png
+ :alt: L3HA BFD monitoring (Fail back)
+ :align: center
+
+
+East/West
+---------
+
+East/West traffic with the OVN driver is completely distributed, which means
+that routing happens internally on the compute nodes without the need
+to go through the gateway nodes.
+
+
+Traffic going through a virtual router, different subnets
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Traffic going through a virtual router from one virtual network/subnet
+to another flows directly from compute node to compute node, encapsulated as
+usual, while all the routing operations, such as decrementing the TTL or
+rewriting MAC addresses, are handled in OpenFlow at the source host of the
+packet.
+
+.. image:: figures/ovn-east-west-3.png
+ :alt: East/West traffic across subnets
+ :align: center
+
+
+Traffic across the same subnet
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Traffic across the same subnet flows as described in the following diagram.
+Although this kind of communication doesn't make use of routing at all (just
+encapsulation), it's been included for completeness.
+
+.. image:: figures/ovn-east-west-2.png
+ :alt: East/West traffic same subnet
+ :align: center
+
+Traffic goes directly from instance to instance through br-int when both
+instances live on the same host (VM1 and VM2), or via
+encapsulation when they live on different hosts (VM3 and VM4).
diff --git a/doc/source/admin/ovn/troubleshooting.rst b/doc/source/admin/ovn/troubleshooting.rst
new file mode 100644
index 00000000000..646a05648ce
--- /dev/null
+++ b/doc/source/admin/ovn/troubleshooting.rst
@@ -0,0 +1,45 @@
+.. _ovn_troubleshooting:
+
+===============
+Troubleshooting
+===============
+
+The following sections describe common problems that you might
+encounter during or after the installation of the OVN ML2 driver with
+Devstack, along with possible solutions.
+
+VM launch failures
+------------------
+
+Disable AppArmor
+~~~~~~~~~~~~~~~~
+
+On Ubuntu you might encounter libvirt permission errors when trying
+to create OVS ports after launching a VM (visible in the nova-compute log).
+Disabling AppArmor might help with this problem; check out
+https://help.ubuntu.com/community/AppArmor for instructions on how to
+disable it.
+
+Multi-Node setup not working
+-----------------------------
+
+Geneve kernel module not supported
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+By default OVN creates tunnels between compute nodes using the Geneve protocol.
+Older kernels (< 3.18) don't support the Geneve module, and hence tunneling
+can't work. You can check this with the command ``lsmod | grep openvswitch``
+(``geneve`` should show up in the result list).
+
+For more information about which upstream kernel version is required for
+support of each tunnel type, see the answer to "Why do tunnels not work when
+using a kernel module other than the one packaged with Open vSwitch?" in the
+`OVS FAQ `__.
+
+MTU configuration
+~~~~~~~~~~~~~~~~~
+
+This problem is not unique to OVN but is amplified due to the possibly larger
+size of the Geneve header compared to other common tunneling protocols
+(VXLAN). If you are using VMs as compute nodes, make sure that you either
+lower the MTU size on the virtual interface or enable fragmentation on it.
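As a rough illustration of the overhead involved, the following sketch estimates a safe guest MTU. The byte counts are assumptions for a Geneve tunnel over IPv4 carrying a single OVN option TLV; check your deployment's actual encapsulation before relying on these numbers.

```python
# Assumed per-packet encapsulation overhead (illustrative, not exact):
OUTER_IPV4_UDP = 20 + 8   # outer IPv4 + UDP headers
GENEVE_BASE = 8           # fixed Geneve header
GENEVE_OPTIONS = 8        # example OVN metadata option TLV (assumed)

def guest_mtu(physical_mtu: int) -> int:
    """Largest MTU a guest interface can use without fragmentation."""
    return physical_mtu - (OUTER_IPV4_UDP + GENEVE_BASE + GENEVE_OPTIONS)

print(guest_mtu(1500))  # 1456 with the assumed overhead
```

The same arithmetic explains why jumbo frames (for example a 9000-byte physical MTU) sidestep the problem entirely.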
diff --git a/doc/source/admin/ovn/tutorial.rst b/doc/source/admin/ovn/tutorial.rst
new file mode 100644
index 00000000000..428886b91b0
--- /dev/null
+++ b/doc/source/admin/ovn/tutorial.rst
@@ -0,0 +1,10 @@
+.. _ovn_tutorial:
+
+==========================
+OpenStack and OVN Tutorial
+==========================
+
+The OVN project documentation includes an in-depth tutorial on using OVN with
+OpenStack.
+
+`OpenStack and OVN Tutorial `_
diff --git a/doc/source/conf.py b/doc/source/conf.py
index b2447ad6527..85e17297e5e 100644
--- a/doc/source/conf.py
+++ b/doc/source/conf.py
@@ -269,6 +269,7 @@ _config_generator_config_files = [
'ml2_conf.ini',
'neutron.conf',
'openvswitch_agent.ini',
+ 'ovn.ini',
'sriov_agent.ini',
]
diff --git a/doc/source/configuration/config-samples.rst b/doc/source/configuration/config-samples.rst
index 80a181294c3..7d45f91f559 100644
--- a/doc/source/configuration/config-samples.rst
+++ b/doc/source/configuration/config-samples.rst
@@ -15,6 +15,7 @@ Sample Configuration Files
samples/macvtap-agent.rst
samples/openvswitch-agent.rst
samples/sriov-agent.rst
+ samples/ovn.rst
.. toctree::
:maxdepth: 1
diff --git a/doc/source/configuration/samples/ovn.rst b/doc/source/configuration/samples/ovn.rst
new file mode 100644
index 00000000000..f7160c98fb1
--- /dev/null
+++ b/doc/source/configuration/samples/ovn.rst
@@ -0,0 +1,10 @@
+.. _samples_ovn:
+
+==============
+Sample ovn.ini
+==============
+
+This sample configuration can also be viewed in `the raw format
+<../../_static/config-samples/ovn.conf.sample>`_.
+
+.. literalinclude:: ../../_static/config-samples/ovn.conf.sample
diff --git a/doc/source/contributor/index.rst b/doc/source/contributor/index.rst
index 9f2236fb9f5..faa71a7c8d6 100644
--- a/doc/source/contributor/index.rst
+++ b/doc/source/contributor/index.rst
@@ -61,6 +61,7 @@ the developer guide includes information about Neutron testing infrastructure.
effective_neutron
development_environment
+ ovn_vagrant/index
contribute
neutron_api
client_command_extensions
diff --git a/doc/source/contributor/internals/index.rst b/doc/source/contributor/internals/index.rst
index 81ae9da72b8..212e156d5d1 100644
--- a/doc/source/contributor/internals/index.rst
+++ b/doc/source/contributor/internals/index.rst
@@ -68,3 +68,4 @@ Neutron Internals
sriov_nic_agent
tag
upgrade
+ ovn/index
diff --git a/doc/source/contributor/internals/ovn/acl_optimizations.rst b/doc/source/contributor/internals/ovn/acl_optimizations.rst
new file mode 100644
index 00000000000..b70ec64b6b4
--- /dev/null
+++ b/doc/source/contributor/internals/ovn/acl_optimizations.rst
@@ -0,0 +1,186 @@
+.. _acl_optimizations:
+
+========================================
+ACL Handling optimizations in ovn driver
+========================================
+
+This document presents the current problem with ACLs, the design changes
+proposed to core OVN, and the necessary modifications to the
+ovn driver to improve ACL usage.
+
+Problem description
+===================
+
+There are basically two problems being addressed in this spec:
+
+1. While in Neutron, a ``Security Group Rule`` is tied to a
+``Security Group``, in OVN ``ACLs`` are created per port. Therefore,
+we'll typically have *many* more ACLs than Security Group Rules, resulting
+in a performance hit as the number of ports grows.
+
+2. An ACL in OVN is applied to a ``Logical Switch``. As a result,
+``ovn driver`` has to figure out which Logical Switches to apply the
+generated ACLs to for each Security Group Rule.
+
+Let's highlight both problems with an example:
+
+- Neutron Networks: NA, NB, NC
+- Neutron Security Group: SG1
+- Number of Neutron Security Group Rules in SG1: 10
+- Neutron Ports in NA: 100
+- Neutron Ports in NB: 100
+- Neutron Ports in NC: 100
+- All ports belong to SG1
+
+When we implement the above scenario in OVN, this is what we'll get:
+
+- OVN Logical Switches: NA, NB, NC
+- Number of ACL rows in Northbound DB ACL table: 3000 (10 rules * 100 ports *
+ 3 networks)
+- Number of elements in acl column on each Logical_Switch row: 1000 (10 rules
+ * 100 ports).
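The arithmetic behind these figures can be sketched directly; the Port Group count anticipates the optimization proposed later in this document:

```python
networks = 3
ports_per_network = 100
sg_rules = 10

# Per-port ACL model: one ACL row for every (rule, port) pair, on
# every network where the SG has ports.
acl_rows = sg_rules * ports_per_network * networks       # NB ACL table
per_switch = sg_rules * ports_per_network                # acl column per LS

# With Port Groups, each SG rule becomes a single ACL on the group.
acl_rows_with_port_groups = sg_rules

print(acl_rows, per_switch, acl_rows_with_port_groups)  # 3000 1000 10
```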
+
+And this is how, for example, the ACL match fields for the default Neutron
+Security Group would look::
+
+ outport == "<port1_uuid>" && ip4 && ip4.src == $as_ip4_<sg_uuid>
+ outport == "<port2_uuid>" && ip4 && ip4.src == $as_ip4_<sg_uuid>
+ outport == "<port3_uuid>" && ip4 && ip4.src == $as_ip4_<sg_uuid>
+ ...
+ outport == "<port300_uuid>" && ip4 && ip4.src == $as_ip4_<sg_uuid>
+
+As you can see, all of them look the same except for the outport field, which
+is clearly redundant and makes the NB database grow a lot at scale.
+Also, ``ovn driver`` had to figure out for each rule in SG1 which Logical
+Switches it had to apply the ACLs on (NA, NB and NC). This can be really
+costly when the number of networks and ports grows.
+
+
+Proposed optimization
+=====================
+
+In the OpenStack context we'll be facing this scenario most of the time,
+where the majority of the ACLs will look the same except for the
+outport/inport fields in the match column. It would make sense to be able to
+substitute all those ACLs with a single one which references all the ports
+affected by that SG rule::
+
+ outport == @port_group1 && ip4 && ip4.src == $port_group1_ip4
+
+
+Implementation Details
+======================
+
+Core OVN
+--------
+
+There's a series of patches in Core OVN that will enable us to achieve this
+optimization:
+
+https://github.com/openvswitch/ovs/commit/3d2848bafa93a2b483a4504c5de801454671dccf
+https://github.com/openvswitch/ovs/commit/1beb60afd25a64f1779903b22b37ed3d9956d47c
+https://github.com/openvswitch/ovs/commit/689829d53612a573f810271a01561f7b0948c8c8
+
+
+In summary, these patches are:
+
+- Adding a new entity called Port_Group which will hold a list of weak
+ references to the Logical Switch ports that belong to it.
+- Automatically creating/updating two Address Sets (``<name>_ip4`` and
+ ``<name>_ip6``) in the Southbound database every time a new port is added
+ to the group.
+- Support adding a list of ACLs to a Port Group. As the SG rules may
+ span different Logical Switches, we used to insert the ACLs in
+ every Logical Switch where a SG had ports. Figuring this
+ out is expensive, and this new feature is a huge gain in terms of
+ performance when creating/deleting ports.
+
+
+ovn driver
+----------
+
+In the OpenStack integration driver, the following changes are required to
+accomplish this optimization:
+
+- When a Neutron Security Group is created, create the equivalent Port Group
+ in OVN (``pg-<security_group_id>``), instead of creating a pair of Address
+ Sets for IPv4 and IPv6. This Port Group will reference the Neutron SG id in
+ its ``external_ids`` column.
+
+- When a Neutron Port is created, the equivalent Logical Port in OVN will be
+ added to the Port Groups associated with the Neutron Security Groups this
+ port belongs to.
+
+- When a Neutron Port is deleted, we'll delete the associated Logical Port in
+ OVN. Since the schema includes a weak reference to the port, when the LSP
+ gets deleted, it will also be automatically deleted from any Port Group
+ entry where it was previously present.
+
+- Instead of handling SG rules per port, we now need to handle them per SG
+ referencing the associated Port Group in the outport/inport fields. This
+ will be the biggest gain in terms of processing since we don't need to
+ iterate through all the ports anymore. For example:
+
+.. code-block:: python
+
+ -def acl_direction(r, port):
+ +def acl_direction(r):
+ if r['direction'] == 'ingress':
+ portdir = 'outport'
+ else:
+ portdir = 'inport'
+ - return '%s == "%s"' % (portdir, port['id'])
+ + return '%s == "@%s"' % (portdir, utils.ovn_name(r['security_group_id']))
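The change above can be exercised as a self-contained sketch. Here ``ovn_name`` is a simplified stand-in for the driver's ``utils.ovn_name`` helper, assumed to prefix the UUID with ``neutron-`` (matching the logical switch names visible in the northbound database dumps elsewhere in this guide):

```python
def ovn_name(uuid: str) -> str:
    # Stand-in for the driver helper that derives an OVN name from a
    # Neutron UUID; the "neutron-" prefix is an assumption here.
    return "neutron-" + uuid

def acl_direction(r):
    """Build the port-group half of an ACL match for a SG rule."""
    portdir = "outport" if r["direction"] == "ingress" else "inport"
    return '%s == "@%s"' % (portdir, ovn_name(r["security_group_id"]))

rule = {"direction": "ingress", "security_group_id": "sg1"}
print(acl_direction(rule))  # outport == "@neutron-sg1"
```

Note the ``@`` prefix: in OVN match syntax it references a Port Group rather than a single port.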
+
+- Every time a SG rule is created, instead of figuring out the ports affected
+ by its SG and inserting an ACL row which will be referenced by different
+ Logical Switches, we will just reference it from the associated Port Group.
+
+- For Neutron remote security groups, we just need to reference the
+ automatically created Address_Set for that Port Group.
+
+As a bonus, we are tackling the race conditions that could currently happen
+in Address_Sets when we're deleting and creating a port at the same
+time. This is thanks to the fact that the Address_Sets in the SB database are
+generated automatically by ovn-northd from the Port_Group contents, and the
+Port Group references actual Logical Switch Ports. More info at:
+https://bugs.launchpad.net/networking-ovn/+bug/1611852
+
+
+Backwards compatibility considerations
+--------------------------------------
+
+- If the schema doesn't include the ``Port_Group`` table, keep the old
+ behavior (Address Sets) for backwards compatibility.
+
+- If the schema supports Port Groups, then a migration task will be performed
+ from an OvnWorker. This way we'll ensure that it'll happen only once across
+ the cloud thanks to the OVSDB lock. This will be done right at the beginning
+ of
+ the ovn_db_sync process to make sure that when neutron-server starts,
+ everything is in place to work with Port Groups. This migration process will
+ perform the following steps:
+
+ * Create the default drop Port Group and add all ports with port
+ security enabled to it.
+ * Create a Port Group for every existing Neutron Security Group and
+ add all its Security Group Rules as ACLs to that Port Group.
+ * Delete all existing Address Sets in the Northbound database which
+ correspond to a Neutron Security Group.
+ * Delete all the ACLs in every Logical Switch (Neutron network).
+
+We should eventually remove the backwards compatibility code and migration
+path. At that point we should require OVS >= 2.10 for the neutron ovn driver.
+
+Special cases
+-------------
+
+Ports with no security groups
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+When a port doesn't belong to any Security Group and port security is enabled,
+we, by default, drop all the traffic to/from that port. In order to implement
+this through Port Groups, we'll create a special Port Group with a fixed name
+(``neutron_pg_drop``) which holds the ACLs to drop all the traffic.
+
+This PG will be created automatically when we first need it, avoiding the need
+to create it beforehand or during deployment.
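The two drop-all ACLs attached to this Port Group can be sketched as follows. The field names follow the OVN Northbound ``ACL`` schema and the ``@group`` match syntax; the helper name and the exact priority value are assumptions for illustration, not the driver's actual code:

```python
def default_drop_acls(pg_name='neutron_pg_drop'):
    """Sketch of the drop-all ACLs attached to the default Port Group.

    One ACL per direction; the priority shown here is illustrative.
    """
    return [
        {'direction': 'from-lport',
         'priority': 1001,
         'match': 'inport == @%s && ip' % pg_name,
         'action': 'drop'},
        {'direction': 'to-lport',
         'priority': 1001,
         'match': 'outport == @%s && ip' % pg_name,
         'action': 'drop'},
    ]
```

Because the ACLs reference the Port Group by name (``@neutron_pg_drop``), adding a port to the group is enough to apply the drop policy to it; no per-port ACLs are needed.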
+
diff --git a/doc/source/contributor/internals/ovn/data_model.rst b/doc/source/contributor/internals/ovn/data_model.rst
new file mode 100644
index 00000000000..e87bd28e22b
--- /dev/null
+++ b/doc/source/contributor/internals/ovn/data_model.rst
@@ -0,0 +1,263 @@
+.. _data_model:
+
+===========================================
+Mapping between Neutron and OVN data models
+===========================================
+
+The primary job of the Neutron OVN ML2 driver is to translate requests for
+resources into OVN's data model. Resources are created in OVN by updating the
+appropriate tables in the OVN northbound database (an ovsdb database). This
+document looks at the mappings between the data that exists in Neutron and what
+the resulting entries in the OVN northbound DB would look like.
+
+
+Network
+-------
+
+::
+
+ Neutron Network:
+ id
+ name
+ subnets
+ admin_state_up
+ status
+ tenant_id
+
+Once a network is created, we should create an entry in the Logical Switch
+table.
+
+::
+
+ OVN northbound DB Logical Switch:
+ external_ids: {
+ 'neutron:network_name': network.name
+ }
+
+
+Subnet
+------
+
+::
+
+ Neutron Subnet:
+ id
+ name
+ ip_version
+ network_id
+ cidr
+ gateway_ip
+ allocation_pools
+ dns_nameservers
+        host_routes
+ tenant_id
+ enable_dhcp
+ ipv6_ra_mode
+ ipv6_address_mode
+
+Once a subnet is created, we should create an entry in the DHCP Options table
+with the DHCPv4 or DHCPv6 options.
+
+::
+
+ OVN northbound DB DHCP_Options:
+ cidr
+ options
+ external_ids: {
+ 'subnet_id': subnet.id
+ }
+
+Port
+----
+
+::
+
+ Neutron Port:
+ id
+ name
+ network_id
+ admin_state_up
+ mac_address
+ fixed_ips
+ device_id
+ device_owner
+ tenant_id
+ status
+
+When a port is created, we should create an entry in the Logical Switch Ports
+table in the OVN northbound DB.
+
+::
+
+ OVN Northbound DB Logical Switch Port:
+ switch: reference to OVN Logical Switch
+ router_port: (empty)
+ name: port.id
+ up: (read-only)
+ macs: [port.mac_address]
+ port_security:
+ external_ids: {'neutron:port_name': port.name}
+
+
+If the port has extra DHCP options defined, we should create an entry
+in the DHCP Options table in the OVN northbound DB.
+
+::
+
+ OVN northbound DB DHCP_Options:
+ cidr
+ options
+ external_ids: {
+ 'subnet_id': subnet.id,
+ 'port_id': port.id
+ }
+
+Router
+------
+
+::
+
+ Neutron Router:
+ id
+ name
+ admin_state_up
+ status
+ tenant_id
+ external_gw_info:
+ network_id
+ external_fixed_ips: list of dicts
+ ip_address
+ subnet_id
+
+::
+
+ OVN Northbound DB Logical Router:
+ ip:
+ default_gw:
+ external_ids:
+
+
+Router Port
+-----------
+
+::
+
+ OVN Northbound DB Logical Router Port:
+ router: (reference to Logical Router)
+ network: (reference to network this port is connected to)
+ mac:
+ external_ids:
+
+
+Security Groups
+---------------
+
+::
+
+ Neutron Port:
+ id
+ security_group: id
+ network_id
+
+ Neutron Security Group
+ id
+ name
+ tenant_id
+ security_group_rules
+
+ Neutron Security Group Rule
+ id
+ tenant_id
+ security_group_id
+ direction
+ remote_group_id
+ ethertype
+ protocol
+ port_range_min
+ port_range_max
+ remote_ip_prefix
+
+::
+
+ OVN Northbound DB ACL Rule:
+ lswitch: (reference to Logical Switch - port.network_id)
+ priority: (0..65535)
+ match: boolean expressions according to security rule
+ Translation map (sg_rule ==> match expression)
+ -----------------------------------------------
+ sg_rule.direction="Ingress" => "inport=port.id"
+ sg_rule.direction="Egress" => "outport=port.id"
+ sg_rule.ethertype => "eth.type"
+ sg_rule.protocol => "ip.proto"
+ sg_rule.port_range_min/port_range_max =>
+ "port_range_min <= tcp.src <= port_range_max"
+ "port_range_min <= udp.src <= port_range_max"
+
+ sg_rule.remote_ip_prefix => "ip4.src/mask, ip4.dst/mask, ipv6.src/mask, ipv6.dst/mask"
+
+ (all match options for ACL can be found here:
+ http://openvswitch.org/support/dist-docs/ovn-nb.5.html)
+ action: "allow-related"
+ log: true/false
+ external_ids: {'neutron:port_id': port.id}
+ {'neutron:security_rule_id': security_rule.id}
+
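As a rough illustration of the translation map above, a helper could build the match expression from a security group rule. The function name and the rule dictionary layout below are illustrative, not the actual driver code:

```python
def sg_rule_to_acl_match(rule, port_id):
    """Sketch: build an OVN ACL match string from a Neutron SG rule dict."""
    clauses = []
    if rule['direction'] == 'ingress':
        clauses.append('inport == "%s"' % port_id)
    else:
        clauses.append('outport == "%s"' % port_id)
    if rule.get('protocol') is not None:
        clauses.append('ip.proto == %s' % rule['protocol'])
    if rule.get('port_range_min') and rule.get('port_range_max'):
        # The map above targets tcp.src/udp.src; tcp is shown here.
        clauses.append('%d <= tcp.src <= %d' % (rule['port_range_min'],
                                                rule['port_range_max']))
    if rule.get('remote_ip_prefix'):
        clauses.append('ip4.src == %s' % rule['remote_ip_prefix'])
    return ' && '.join(clauses)
```
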
+Security groups map three Neutron objects to one OVN-NB object, which
+enables us to do the mapping in various ways, depending on OVN capabilities.
+
+The current implementation uses the first option in this list for
+simplicity, but all options are kept here for future reference.
+
+1) For every (port, security_rule) pair, define an ACL entry::
+
+ Leads to many ACL entries.
+ acl.match = sg_rule converted
+ example: ((inport==port.id) && (ip.proto == "tcp") &&
+ (1024 <= tcp.src <= 4095) && (ip.src==192.168.0.1/16))
+
+ external_ids: {'neutron:port_id': port.id}
+ {'neutron:security_rule_id': security_rule.id}
+
+2) For every (port, security_group) pair, define an ACL entry::
+
+ Reduce the number of ACL entries.
+ Means we have to manage the match field in case specific rule changes
+ example: (((inport==port.id) && (ip.proto == "tcp") &&
+ (1024 <= tcp.src <= 4095) && (ip.src==192.168.0.1/16)) ||
+ ((outport==port.id) && (ip.proto == "udp") && (1024 <= tcp.src <= 4095)) ||
+ ((inport==port.id) && (ip.proto == 6) ) ||
+ ((inport==port.id) && (eth.type == 0x86dd)))
+
+ (This example is a security group with four security rules)
+
+ external_ids: {'neutron:port_id': port.id}
+ {'neutron:security_group_id': security_group.id}
+
+3) For every security group, define an ACL entry::
+
+ Reduce even more the number of ACL entries.
+ Manage complexity increase
+ example: (((inport==port.id) && (ip.proto == "tcp") && (1024 <= tcp.src <= 4095)
+ && (ip.src==192.168.0.1/16)) ||
+ ((outport==port.id) && (ip.proto == "udp") && (1024 <= tcp.src <= 4095)) ||
+ ((inport==port.id) && (ip.proto == 6) ) ||
+ ((inport==port.id) && (eth.type == 0x86dd))) ||
+
+ (((inport==port2.id) && (ip.proto == "tcp") && (1024 <= tcp.src <= 4095)
+ && (ip.src==192.168.0.1/16)) ||
+ ((outport==port2.id) && (ip.proto == "udp") && (1024 <= tcp.src <= 4095)) ||
+ ((inport==port2.id) && (ip.proto == 6) ) ||
+ ((inport==port2.id) && (eth.type == 0x86dd)))
+
+ external_ids: {'neutron:security_group': security_group.id}
+
+
+Which option to pick depends on OVN match field length capabilities, and the
+trade-off between the better performance of fewer ACL entries and the
+complexity of managing them.
+
+If the default behaviour is not "drop" for unmatched entries, a rule with
+the lowest priority must be added to drop all traffic ("match==1").
+
+Spoofing protection rules are added by OVN internally, and Neutron needs
+to ignore these automatically added rules.
diff --git a/doc/source/contributor/internals/ovn/database_consistency.rst b/doc/source/contributor/internals/ovn/database_consistency.rst
new file mode 100644
index 00000000000..be0b789cfee
--- /dev/null
+++ b/doc/source/contributor/internals/ovn/database_consistency.rst
@@ -0,0 +1,442 @@
+.. _database_consistency:
+
+================================
+Neutron/OVN Database consistency
+================================
+
+This document presents the problem and proposes a solution for the data
+consistency issue between the Neutron and OVN databases. Although the
+focus of this document is OVN, this problem is common enough to be present
+in other ML2 drivers (e.g. OpenDaylight, BigSwitch, etc.). Some of these
+drivers already have a mechanism in place for dealing with it.
+
+Problem description
+===================
+
+In a common Neutron deployment model, there can be multiple Neutron
+API workers processing requests. For each request, the worker will update
+the Neutron database and then invoke the ML2 driver to translate the
+information to that specific SDN data model.
+
+There are at least two situations that could lead to some inconsistency
+between the Neutron and the SDN databases, for example:
+
+.. _problem_1:
+
+Problem 1: Neutron API workers race condition
+---------------------------------------------
+
+.. code-block:: python
+
+ In Neutron:
+ with neutron_db_transaction:
+ update_neutron_db()
+ ml2_driver.update_port_precommit()
+ ml2_driver.update_port_postcommit()
+
+ In the ML2 driver:
+ def update_port_postcommit:
+ port = neutron_db.get_port()
+ update_port_in_ovn(port)
+
+Imagine the case where a port is being updated twice and each request
+is being handled by a different API worker. The method responsible for
+updating the resource in the OVN (``update_port_postcommit``) is not
+atomic and invoked outside of the Neutron database transaction. This could
+lead to a problem where the order in which the updates are committed to
+the Neutron database is different from the order in which they are
+committed to the OVN database, resulting in an inconsistency.
+
+This problem has been reported at `bug #1605089
+<https://bugs.launchpad.net/bugs/1605089>`_.
+
+.. _problem_2:
+
+Problem 2: Backend failures
+---------------------------
+
+Another situation is when the changes are already committed in Neutron
+but an exception is raised upon trying to update the OVN database (e.g.
+lost connectivity to the ``ovsdb-server``). We currently don't have a
+good way of handling this problem. It would be possible to try to
+immediately roll back the changes in the Neutron database and raise an
+exception, but that rollback itself is an operation that could also fail.
+
+Also, rollbacks are not very straightforward when it comes to updates
+or deletes. In a case where a VM is being torn down and OVN fails to
+delete a port, re-creating that port in Neutron doesn't necessarily fix
+the problem. The decommissioning of a VM involves many other things; in
+fact, we could make things even worse by leaving some dirty data around.
+I believe this is a problem that would be better dealt with by other methods.
+
+Proposed change
+===============
+
+In order to fix the problems presented in the `Problem description`_
+section, this document proposes a solution based on Neutron's
+``revision_number`` attribute. In summary, for every resource in Neutron
+there's an attribute called ``revision_number`` which gets incremented
+on each update made on that resource. For example::
+
+ $ openstack port create --network nettest porttest
+ ...
+ | revision_number | 2 |
+ ...
+
+ $ openstack port set porttest --mac-address 11:22:33:44:55:66
+
+ $ mysql -e "use neutron; select standard_attr_id from ports where id=\"91c08021-ded3-4c5a-8d57-5b5c389f8e39\";"
+ +------------------+
+ | standard_attr_id |
+ +------------------+
+ | 1427 |
+ +------------------+
+
+ $ mysql -e "use neutron; SELECT revision_number FROM standardattributes WHERE id=1427;"
+ +-----------------+
+ | revision_number |
+ +-----------------+
+ | 3 |
+ +-----------------+
+
+
+This document proposes a solution that will use the ``revision_number``
+attribute for three things:
+
+#. Perform a compare-and-swap operation based on the resource version
+#. Guarantee the order of the updates (:ref:`Problem 1 <problem_1>`)
+#. Detect when resources in Neutron and OVN are out-of-sync
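The compare-and-swap idea can be sketched in plain Python. The ``external_ids`` key name mirrors what the driver stores, but the helper itself is illustrative; the real check runs as an OVSDB command inside the transaction, as described below:

```python
def maybe_apply_update(stored_external_ids, update_revision, apply_fn):
    """Sketch: apply an update only when its revision_number is not older
    than what is already recorded for the OVN resource."""
    current = int(stored_external_ids.get('neutron:revision_number', -1))
    if update_revision < current:
        # Stale update: the real command aborts the OVSDB transaction here.
        return False
    apply_fn()
    stored_external_ids['neutron:revision_number'] = str(update_revision)
    return True
```
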
+
+But before any of the points above can be done, we need to change the
+ovn driver code to:
+
+
+#1 - Store the revision_number corresponding to a change in OVNDB
+-----------------------------------------------------------------
+
+To be able to compare the version of the resource in Neutron against
+the version in OVN, we first need to know which version the OVN resource
+is at.
+
+Fortunately, each table in the OVNDB contains a special column called
+``external_ids`` which external systems (like Neutron)
+can use to store information about their own resources that correspond
+to the entries in OVNDB.
+
+So, every time a resource is created or updated in OVNDB by
+ovn driver, the Neutron ``revision_number`` corresponding to that change
+will be stored in the ``external_ids`` column of that resource. That
+will allow ovn driver to look at both databases and detect whether
+the version in OVN is up-to-date with Neutron or not.
+
+
+#2 - Ensure correctness when updating OVN
+-----------------------------------------
+
+As stated in :ref:`Problem 1 <problem_1>`, simultaneous updates to a single
+resource will race and, with the current code, the order in which these
+updates are applied is not guaranteed to be the correct order. That
+means that, if two or more updates arrive, we can't prevent an older
+update from being applied after a newer one.
+
+This document proposes creating a special ``OVSDB command`` that runs
+as part of the same transaction that is updating a resource in OVNDB to
+prevent changes with a lower ``revision_number`` from being applied when
+the resource in OVN is already at a higher ``revision_number``.
+
+This new OVSDB command needs to basically do two things:
+
+1. Add a verify operation to the ``external_ids`` column in OVNDB so
+that if another client modifies that column mid-operation the transaction
+will be restarted.
+
+A better explanation of what "verify" does is described at the doc string
+of the `Transaction class`_ in the OVS code itself, I quote:
+
+ Because OVSDB handles multiple clients, it can happen that between
+ the time that OVSDB client A reads a column and writes a new value,
+ OVSDB client B has written that column. Client A's write should not
+ ordinarily overwrite client B's, especially if the column in question
+ is a "map" column that contains several more or less independent data
+ items. If client A adds a "verify" operation before it writes the
+ column, then the transaction fails in case client B modifies it first.
+ Client A will then see the new value of the column and compose a new
+ transaction based on the new contents written by client B.
+
+2. Compare the ``revision_number`` from the update against what is
+presently stored in OVNDB. If the version in OVNDB is already higher
+than the version in the update, abort the transaction.
+
+So basically this new command is responsible for guarding the OVN resource
+by not allowing old changes to be applied on top of new ones. Here's a
+scenario where two concurrent updates come in the wrong order and how
+the solution above will deal with it:
+
+Neutron worker 1 (NW-1): Updates a port with address A (revision_number: 2)
+
+Neutron worker 2 (NW-2): Updates a port with address B (revision_number: 3)
+
+TXN 1: NW-2 transaction is committed first and the OVN resource now has RN 3
+
+TXN 2: NW-1 transaction detects the change in the external_ids column and
+is restarted
+
+TXN 2: the new command in NW-1's transaction now sees that the OVN
+resource is at RN 3, which is higher than the update's version (RN 2),
+and aborts the transaction.
+
+A bit more is needed for the above to work with the current ovn driver
+code; basically, we need to tidy up the code to do two more things.
+
+1. Consolidate changes to a resource in a single transaction.
+
+This is important regardless of this spec, having all changes to a
+resource done in a single transaction minimizes the risk of having
+half-applied changes written to the database in case of an eventual
+problem. This should be done already, but it's important to have it here
+in case we find more examples like that as we code.
+
+2. When doing partial updates, use the OVNDB as the source of comparison
+to create the deltas.
+
+Being able to do a partial update in a resource is important for
+performance reasons; it's a way to minimize the number of changes that
+will be performed in the database.
+
+Right now, some of the update() methods in the ovn driver create the
+deltas using the *current* and *original* parameters that are passed to
+it. The *current* parameter is, as the name says, the current version
+of the object present in the Neutron DB. The *original* parameter is
+the previous version (current - 1) of that object.
+
+The problem with creating the deltas by comparing these two objects is
+that only the data in the Neutron DB is used. We need to stop using
+the *original* object and instead create the delta based on the
+*current* version in the Neutron DB against the data stored in the
+OVNDB, to be able to detect the real differences between the two
+databases.
+
+So in summary, to guarantee the correctness of the updates this document
+proposes to:
+
+#. Create a new OVSDB command that is responsible for comparing revision
+   numbers and aborting the transaction when needed.
+#. Consolidate changes to a resource in a single transaction (should be
+ done already)
+#. When doing partial updates, create the deltas based on the current
+   version in the Neutron DB compared with the data stored in the OVNDB.
+
+
+#3 - Detect and fix out-of-sync resources
+-----------------------------------------
+
+When things are working as expected the above changes should ensure
+that Neutron DB and OVNDB are in sync, but what happens when things go
+bad? As per :ref:`Problem 2 <problem_2>`, things like temporarily losing
+connectivity with the OVNDB could cause changes to fail to be committed
+and the databases getting out-of-sync. We need to be able to detect the
+resources that were affected by these failures and fix them.
+
+We already have the means to do it. Similar to what the
+`ovn_db_sync.py`_ script does, we could fetch all the data from both
+databases and compare each resource. But, depending on the size of the
+deployment, this can be really slow and costly.
+
+This document proposes an optimization for this problem to make it
+efficient enough so that we can run it periodically (as a periodic task)
+and not manually as a script anymore.
+
+First, we need to create an additional table in the Neutron database
+that would serve as a cache for the revision numbers in **OVNDB**.
+
+The new table schema could look like this:
+
+================ ======== =================================================
+Column name Type Description
+================ ======== =================================================
+standard_attr_id Integer Primary key. The reference ID from the
+ standardattributes table in Neutron for
+ that resource. ONDELETE SET NULL.
+resource_uuid String The UUID of the resource
+resource_type String The type of the resource (e.g, Port, Router, ...)
+revision_number Integer The version of the object present in OVN
+acquired_at DateTime The time that the entry was created. For
+ troubleshooting purposes
+updated_at DateTime The time that the entry was updated. For
+ troubleshooting purposes
+================ ======== =================================================
+
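A minimal sqlite sketch of the proposed table is shown below. The table name ``ovn_revision_numbers`` is assumed; and because ``ON DELETE SET NULL`` requires a nullable column, the sketch keys on ``resource_uuid`` rather than on ``standard_attr_id``:

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('PRAGMA foreign_keys = ON')  # sqlite needs this per connection
conn.execute('''CREATE TABLE standardattributes (
                    id INTEGER PRIMARY KEY,
                    revision_number INTEGER)''')
conn.execute('''CREATE TABLE ovn_revision_numbers (
                    standard_attr_id INTEGER
                        REFERENCES standardattributes(id) ON DELETE SET NULL,
                    resource_uuid TEXT PRIMARY KEY,
                    resource_type TEXT NOT NULL,
                    revision_number INTEGER,
                    acquired_at TIMESTAMP,
                    updated_at TIMESTAMP)''')
```

Deleting a row from ``standardattributes`` then sets ``standard_attr_id`` to NULL in the cache table, which is exactly what the delete-detection described below relies on.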
+For the different actions (create, update and delete), this table will be
+used as follows:
+
+
+1. Create:
+
+In the create_*_precommit() method, we will create an entry in the new
+table within the same Neutron transaction. The revision_number column
+for the new entry will have a placeholder value until the resource is
+successfully created in OVNDB.
+
+In case we fail to create the resource in OVN (but succeed in Neutron)
+we still have the entry logged in the new table and this problem can
+be detected by fetching all resources where the revision_number column
+value is equal to the placeholder value.
+
+The pseudo-code will look something like this:
+
+.. code-block:: python
+
+ def create_port_precommit(ctx, port):
+ create_initial_revision(port['id'], revision_number=-1,
+ session=ctx.session)
+
+ def create_port_postcommit(ctx, port):
+ create_port_in_ovn(port)
+ bump_revision(port['id'], revision_number=port['revision_number'])
+
+
+2. Update:
+
+For updates it's simpler: we need to bump the revision number for
+that resource **after** the OVN transaction is committed in the
+update_*_postcommit() method. That way, if an update fails to be applied
+to OVN, the inconsistencies can be detected by a JOIN between the new
+table and the ``standardattributes`` table where the revision_number
+columns do not match.
+
+The pseudo-code will look something like this:
+
+.. code-block:: python
+
+ def update_port_postcommit(ctx, port):
+ update_port_in_ovn(port)
+ bump_revision(port['id'], revision_number=port['revision_number'])
+
+
+3. Delete:
+
+The ``standard_attr_id`` column in the new table is a foreign key
+constraint with ``ONDELETE=SET NULL`` set. That means that, upon
+Neutron deleting a resource, the ``standard_attr_id`` column in the new
+table will be set to *NULL*.
+
+If deleting a resource succeeds in Neutron but fails in OVN, the
+inconsistency can be detected by looking at all resources that have a
+``standard_attr_id`` equal to NULL.
+
+The pseudo-code will look something like this:
+
+.. code-block:: python
+
+ def delete_port_postcommit(ctx, port):
+ delete_port_in_ovn(port)
+ delete_revision(port['id'])
+
+
+With the above optimization it's possible to create a periodic task that
+can run quite frequently to detect and fix the inconsistencies caused
+by random backend failures.
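The three inconsistency checks such a periodic task could run are easy to sketch against a simplified stand-in schema (sqlite here, with the column set trimmed to what the checks need; ``-1`` is the create placeholder from the pseudo-code above):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.executescript('''
    CREATE TABLE standardattributes (id INTEGER PRIMARY KEY,
                                     revision_number INTEGER);
    CREATE TABLE ovn_revision_numbers (standard_attr_id INTEGER,
                                       resource_uuid TEXT,
                                       revision_number INTEGER);
    -- create failed in OVN: still at the placeholder revision -1
    INSERT INTO standardattributes VALUES (1, 2);
    INSERT INTO ovn_revision_numbers VALUES (1, 'port-a', -1);
    -- update failed in OVN: cached revision lags behind Neutron's
    INSERT INTO standardattributes VALUES (2, 7);
    INSERT INTO ovn_revision_numbers VALUES (2, 'port-b', 5);
    -- delete failed in OVN: parent row gone, FK was set to NULL
    INSERT INTO ovn_revision_numbers VALUES (NULL, 'port-c', 3);
''')

failed_creates = conn.execute(
    "SELECT resource_uuid FROM ovn_revision_numbers "
    "WHERE revision_number = -1").fetchall()
failed_updates = conn.execute(
    "SELECT o.resource_uuid FROM ovn_revision_numbers o "
    "JOIN standardattributes s ON o.standard_attr_id = s.id "
    "WHERE o.revision_number <> s.revision_number "
    "AND o.revision_number <> -1").fetchall()
failed_deletes = conn.execute(
    "SELECT resource_uuid FROM ovn_revision_numbers "
    "WHERE standard_attr_id IS NULL").fetchall()
```

Each query is cheap (indexed lookups and one JOIN), which is what makes running the checks frequently feasible compared to a full dump-and-compare sync.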
+
+.. note::
+ There's no lock linking both database updates in the postcommit()
+ methods. So, it's true that the method bumping the revision_number
+ column in the new table in Neutron DB could still race but, that
+ should be fine because this table acts like a cache and the real
+ revision_number has been written in OVNDB.
+
+ The mechanism that will detect and fix the out-of-sync resources should
+ detect this inconsistency as well and, based on the revision_number
+ in OVNDB, decide whether to sync the resource or only bump the
+ revision_number in the cache table (in case the resource is already
+ at the right version).
+
+
+References
+==========
+
+* There's a chain of patches with a proof of concept for this approach,
+ they start at: https://review.openstack.org/#/c/517049/
+
+Alternatives
+============
+
+Journaling
+----------
+
+An alternative solution to this problem is *journaling*. The basic
+idea is to create another table in the Neutron database and log every
+operation (create, update and delete) instead of passing it directly to
+the SDN controller.
+
+A separate thread (or multiple instances of it) is then responsible
+for reading this table and applying the operations to the SDN backend.
+
+This approach has been used and validated
+by drivers such as `networking-odl
+`_.
+
+An attempt to implement this approach
+in *ovn driver* can be found `here
+`_.
+
+Some things to keep in mind about this approach:
+
+* The code can get quite complex as this approach is not only about
+  applying the changes to the SDN backend asynchronously. The dependencies
+  between resources, as well as between their operations, also need to be
+  computed. For example, before attempting to create a router port, the
+  router that this port belongs to needs to be created. Or, before
+  attempting to delete a network, all the dependent resources on it
+  (subnets, ports, etc.) need to be processed first.
+
+* The number of journal threads running can cause problems. In my tests
+ I had three controllers, each one with 24 CPU cores (Intel Xeon E5-2620
+ with hyperthreading enabled) and 64GB RAM. Running 1 journal thread
+ per Neutron API worker has caused ``ovsdb-server`` to misbehave
+  when under heavy pressure [1]_. Running multiple journal threads
+  seems to be causing other types of problems in other drivers as well.
+
+* When under heavy pressure [1]_, I noticed that the journal
+ threads could come to a halt (or really slowed down) while the
+ API workers were handling a lot of requests. This resulted in some
+  operations taking more than a minute to be processed. This behaviour
+  can be seen in a screenshot captured during the tests.
+
+.. TODO find a better place to host that image
+
+* Given that the 1 journal thread per Neutron API worker approach
+ is problematic, determining the right number of journal threads is
+  also difficult. In my tests, I've noticed that 3 journal threads
+  per controller worked better, but that number was purely based on
+  trial and error. In production this number should probably be
+  calculated based on the environment; perhaps something like TripleO
+  (or any upper layer) would be in a better position to make that
+  decision.
+
+* At least temporarily, the data in the Neutron database is duplicated
+ between the normal tables and the journal one.
+
+* Some operations, like creating a new resource via Neutron's API,
+  will return HTTP 201 (Created),
+ which indicates that the resource has been created and is ready to
+ be used, but as these resources are created asynchronously one could
+ argue that the HTTP codes are now misleading. As a note, the resource
+ will be created at the Neutron database by the time the HTTP request
+ returns but it may not be present in the SDN backend yet.
+
+Given all considerations, this approach is still valid and the fact
+that it's already been used by other ML2 drivers makes it more open for
+collaboration and code sharing.
+
+.. _`Transaction class`: https://github.com/openvswitch/ovs/blob/3728b3b0316b44d1f9181be115b63ea85ff5883c/python/ovs/db/idl.py#L1014-L1055
+
+.. _`ovn_db_sync.py`: https://github.com/openstack/networking-ovn/blob/a9af75cd3ce6cd6685b6435b325c97cacc83ce0e/networking_ovn/ovn_db_sync.py
+
+.. rubric:: Footnotes
+
+.. [1] I ran the tests using Browbeat, which basically orchestrates
+       OpenStack Rally and monitors the machine's resource usage.
diff --git a/doc/source/contributor/internals/ovn/distributed_ovsdb_events.rst b/doc/source/contributor/internals/ovn/distributed_ovsdb_events.rst
new file mode 100644
index 00000000000..6552ae0733e
--- /dev/null
+++ b/doc/source/contributor/internals/ovn/distributed_ovsdb_events.rst
@@ -0,0 +1,142 @@
+.. _distributed_ovsdb_events:
+
+================================
+Distributed OVSDB events handler
+================================
+
+This document presents the problem and proposes a solution for handling
+OVSDB events in a distributed fashion in ovn driver.
+
+Problem description
+===================
+
+In ovn driver, the OVSDB Monitor class is responsible for listening
+to the OVSDB events and performing certain actions on them. We use it
+extensively for various tasks including critical ones such as monitoring
+for port binding events (in order to notify Neutron/Nova that a port
+has been bound to a certain chassis). Currently, this class uses a
+distributed OVSDB lock to ensure that only one instance handles those
+events at a time.
+
+The problem with this approach is that it creates a bottleneck because
+even if we have multiple Neutron Workers running at the moment, only one
+is actively handling those events. And, this problem is highlighted even
+more when working with technologies such as containers which rely on
+creating multiple ports at a time and waiting for them to be bound.
+
+Proposed change
+===============
+
+In order to fix this problem, this document proposes using a `Consistent
+Hash Ring`_ to split the load of handling events across multiple Neutron
+Workers.
+
+A new table called ``ovn_hash_ring`` will be created in the Neutron
+Database where the Neutron Workers capable of handling OVSDB events will
+be registered. The table will use the following schema:
+
+================ ======== =================================================
+Column name Type Description
+================ ======== =================================================
+node_uuid String Primary key. The unique identification of a
+ Neutron Worker.
+hostname String The hostname of the machine this Node is running
+ on.
+created_at DateTime The time that the entry was created. For
+ troubleshooting purposes.
+updated_at DateTime The time that the entry was updated. Used as a
+ heartbeat to indicate that the Node is still
+ alive.
+================ ======== =================================================
+
+This table will be used to form the `Consistent Hash Ring`_. Fortunately,
+we have an implementation already in the `tooz`_ library of OpenStack. It
+was contributed by the `Ironic`_ team, which also uses this data
+structure in order to spread the API request load across multiple
+Ironic Conductors.
+
+Here's how a `Consistent Hash Ring`_ from `tooz`_ works::
+
+ from tooz import hashring
+
+ hring = hashring.HashRing({'worker1', 'worker2', 'worker3'})
+
+ # Returns set(['worker3'])
+ hring[b'event-id-1']
+
+ # Returns set(['worker1'])
+ hring[b'event-id-2']
+
+
+How OVSDB Monitor will use the Ring
+-----------------------------------
+
+Every instance of the OVSDB Monitor class will be listening to a series
+of events from the OVSDB database and each of them will have a unique
+ID registered in the database which will be part of the `Consistent
+Hash Ring`_.
+
+When an event arrives, each OVSDB Monitor instance will hash that
+event UUID and the ring will return one instance ID, which will then
+be compared with its own ID and if it matches that instance will then
+process the event.
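The dispatch decision can be sketched with a tiny stand-in for ``tooz.hashring.HashRing`` (the real code uses tooz; the class below is a simplified illustration): every monitor instance hashes the event ID and handles the event only when the ring maps it to its own node.

```python
import hashlib
from bisect import bisect

class SimpleHashRing:
    """Tiny stand-in for tooz.hashring.HashRing (illustrative only)."""

    def __init__(self, nodes, replicas=32):
        # Place several replicas of every node on the ring for even spread.
        self._ring = sorted(
            (self._hash(b'%s-%d' % (node.encode(), i)), node)
            for node in nodes for i in range(replicas))

    @staticmethod
    def _hash(key):
        return int(hashlib.md5(key).hexdigest(), 16)

    def __getitem__(self, event_id):
        # Walk clockwise from the event's hash to the first node replica.
        keys = [h for h, _ in self._ring]
        idx = bisect(keys, self._hash(event_id)) % len(self._ring)
        return {self._ring[idx][1]}

ring = SimpleHashRing({'worker1', 'worker2', 'worker3'})

def should_handle(my_node_id, event_uuid):
    # Every OVSDB Monitor instance runs this same computation;
    # exactly one of them gets a match and processes the event.
    return my_node_id in ring[event_uuid]
```

Because the computation is deterministic and every instance sees the same ring membership, no coordination is needed per event; this is what removes the single-handler bottleneck of the distributed lock.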
+
+Verifying status of OVSDB Monitor instance
+------------------------------------------
+
+A new maintenance task will be created in ovn driver which will
+update the ``updated_at`` column of the ``ovn_hash_ring`` table for
+the entries matching its hostname, indicating that all Neutron Workers
+running on that hostname are alive.
+
+Note that only a single maintenance instance runs on each machine so
+the writes to the Neutron database are optimized.
+
+When forming the ring, the code should check for entries where the
+value of ``updated_at`` column is newer than a given timeout. Entries
+that haven't been updated in a certain time won't be part of the ring.
+If the ring already exists it will be re-balanced.
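Filtering the live nodes by heartbeat can be sketched as follows; the row layout and the timeout value are illustrative:

```python
from datetime import datetime, timedelta

def active_nodes(rows, timeout=timedelta(seconds=60), now=None):
    """Sketch: keep only nodes whose heartbeat (updated_at) is recent
    enough for them to be part of the hash ring."""
    now = now or datetime.utcnow()
    return [node_uuid for node_uuid, updated_at in rows
            if now - updated_at <= timeout]
```

The surviving node UUIDs would then be fed to the hash ring constructor, which re-balances the event assignment automatically.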
+
+Clean up and minimizing downtime window
+---------------------------------------
+
+Apart from heartbeating, we need to make sure that we remove the Nodes
+from the ring when the service is stopped or killed.
+
+By stopping the ``neutron-server`` service, all Nodes sharing the same
+hostname as the machine where the service is running will be removed
+from the ``ovn_hash_ring`` table. This is done by handling the SIGTERM
+event. Upon this event arriving, ovn driver should invoke the clean
+up method and then let the process halt.
+
+Unfortunately, nothing can be done in case of a SIGKILL; this will leave
+the nodes in the database and they will be part of the ring until the
+timeout is reached or the service is restarted. This can introduce a
+window of time which can result in some events being lost. The current
+implementation shares the same problem: if the instance holding the
+current OVSDB lock is killed abruptly, events will be lost until the lock
+is moved on to the next instance which is alive. One could argue that
+the current implementation aggravates the problem, because all events
+would be lost, whereas with the distributed mechanism only **some** events
+would be lost. As far as distributed systems go, that's a normal scenario
+and things are soon corrected.
+
+Ideas for future improvements
+-----------------------------
+
+This section contains some ideas that can be added on top of this work
+to further improve it:
+
+* Listen to changes to the Chassis table in the OVSDB and force a ring
+ re-balance when a Chassis is added or removed from it.
+
+* Cache the ring for a short while to minimize the database reads when
+ the service is under heavy load.
+
+* To further minimize or avoid event losses, it would be possible to cache the
+ last X events to be reprocessed in case a node times out and the
+ ring re-balances.
+
+.. _`Consistent Hash Ring`: https://en.wikipedia.org/wiki/Consistent_hashing
+.. _`tooz`: https://github.com/openstack/tooz
+.. _`Ironic`: https://github.com/openstack/ironic
diff --git a/doc/source/contributor/internals/ovn/index.rst b/doc/source/contributor/internals/ovn/index.rst
new file mode 100644
index 00000000000..066723fdf68
--- /dev/null
+++ b/doc/source/contributor/internals/ovn/index.rst
@@ -0,0 +1,18 @@
+..
+
+================
+OVN Design Notes
+================
+
+.. toctree::
+ :maxdepth: 1
+
+ data_model
+ native_dhcp
+ ovn_worker
+ metadata_api
+ database_consistency
+ acl_optimizations
+ loadbalancer
+ distributed_ovsdb_events
+ l3_ha_rescheduling
diff --git a/doc/source/contributor/internals/ovn/l3_ha_rescheduling.rst b/doc/source/contributor/internals/ovn/l3_ha_rescheduling.rst
new file mode 100644
index 00000000000..ae8f0e97ddf
--- /dev/null
+++ b/doc/source/contributor/internals/ovn/l3_ha_rescheduling.rst
@@ -0,0 +1,166 @@
+.. _l3_ha_rescheduling:
+
+===================================
+L3 HA Scheduling of Gateway Chassis
+===================================
+
+Problem Description
+-------------------
+
+Currently, if a single network node is active in the system, the gateway
+chassis for the routers will be scheduled on that node. However, when a new
+node is added to the system, neither rescheduling nor rebalancing occur
+automatically. As a result, routers created on the first node are not in
+HA mode.
+
+Side-effects of this behavior include:
+
+* Skewed load on different network nodes due to the lack of router
+  rescheduling.
+
+* If the active node, where the gateway chassis for a router is scheduled,
+  goes down, then due to the lack of HA the North-South traffic from that
+  router will be disrupted.
+
+Overview of Proposed Approach
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Gateway scheduling has been proposed in `[2]`_. However, rebalancing or
+rescheduling was not part of that solution. This specification clarifies
+the distinction between rescheduling and rebalancing:
+rescheduling happens automatically on every event triggered by the
+addition or deletion of a chassis, while
+rebalancing is only triggered by manual operator action.
+
+Rescheduling of Gateway Chassis
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+In order to provide proper rescheduling of the gateway ports during the
+addition or deletion of a chassis, the following approach can be considered:
+
+* Identify the number of chassis on which each router has been scheduled.
+
+  - Consider a router for rescheduling if its number of chassis is lower
+    than *MAX_GW_CHASSIS* (defined in `[0]`_).
+
+* Find the list of chassis where the router is scheduled and reschedule it
+  up to *MAX_GW_CHASSIS* gateways using the list of available candidates.
+  Do not modify the master chassis association, so as not to interrupt
+  network flows.
+
+Rescheduling is an event-triggered operation which occurs whenever a
+chassis is added or removed. When it happens, ``schedule_unhosted_gateways()``
+`[1]`_ will be called to host the unhosted gateways. Routers without gateway
+ports are excluded from this operation because they are not connected to
+provider networks. More information about this can be found in the
+``gateway_chassis`` table definition in the OVN
+NorthBound DB `[5]`_.
+
+Chassis which have the ``enable-chassis-as-gw`` flag set in their OVN
+Southbound database table are the ones eligible for hosting the routers.
+Rescheduling of a router depends on the currently set priorities. Each
+chassis is given a specific priority for the router's gateway and the
+priority increases with increasing value (i.e. 1 < 2 < 3 ...). The highest
+prioritized chassis hosts the gateway port. The other chassis are selected
+as slaves.
+
+There are two approaches for rescheduling supported by the ovn driver right
+now:
+
+* Least loaded - select the least-loaded chassis first,
+* Random - select a chassis randomly.
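
The two strategies can be illustrated with a minimal sketch. The chassis
names and the load-accounting dictionary below are illustrative, not the
driver's actual data structures:

```python
import random


def select_least_loaded(candidates, load_by_chassis):
    """Pick the candidate currently hosting the fewest gateways."""
    return min(candidates, key=lambda c: load_by_chassis.get(c, 0))


def select_random(candidates):
    """Pick any eligible candidate at random."""
    return random.choice(candidates)


# Hypothetical gateway counts per chassis.
load = {"C1": 5, "C2": 2, "C3": 0}
candidates = ["C1", "C2", "C3"]

# The least-loaded scheduler favours the freshly added, empty chassis.
assert select_least_loaded(candidates, load) == "C3"
# The random scheduler returns some eligible chassis.
assert select_random(candidates) in candidates
```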
+
+A few points to consider for the design:
+
+* If there are 2 Chassis C1 and C2, where the routers are already balanced,
+ and a new chassis C3 is added, then routers should be rescheduled only from
+ C1 to C3 and C2 to C3. Rescheduling from C1 to C2 and vice-versa should not
+ be allowed.
+
+* In order to reschedule the router's chassis, the ``master`` chassis for a
+ gateway router will be left untouched. However, for the scenario where all
+ routers are scheduled in only one chassis which is available as gateway,
+ the addition of the second gateway chassis would schedule the router
+ gateway ports at a lower priority on the new chassis.
+
+The following scenarios have been considered in the design:
+
+* Case #1:
+ - System has only one chassis C1 and all router gateway ports are scheduled
+ on it. We add a new chassis C2.
+ - Behavior: All the routers scheduled on C1 will also be scheduled on C2
+ with priority 1.
+* Case #2:
+ - System has 2 chassis C1 and C2 during installation. C1 goes down.
+ - Behavior: In this case, all routers would be rescheduled to C2.
+ Once C1 is back up, routers would be rescheduled on it. However,
+ since C2 is now the new master, routers on C1 would have lower priority.
+* Case #3:
+ - System has 2 chassis C1 and C2 during installation. C3 is added to it.
+  - Behavior: In this case, routers would not move their master chassis
+    associations. Routers which have their master on C1 would remain
+    there, and the same for routers on C2. However, lower-prioritized
+    candidates of existing gateways would be scheduled on the chassis C3,
+    depending on the type of scheduler used (Random or LeastLoaded).
+
+
+Rebalancing of Gateway Chassis
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Rebalancing is the second part of the design: it assigns a new master to
+already scheduled router gateway ports. Downtime is expected in this
+operation. Rebalancing of routers can be achieved using an external CLI
+script. A similar approach has been implemented for DHCP rescheduling
+`[4]`_. The master chassis gateway can only be moved to another, previously
+scheduled gateway. Rebalancing occurs only if the number of scheduled
+master chassis ports per provider network hosted by a given chassis is
+higher than the average number of hosted master gateway ports per chassis
+per provider network.
+
+This dependency is determined by the formula::
+
+    avg_gw_per_chassis = num_gw_by_provider_net / num_chassis_with_provider_net
+
+Where:
+
+- avg_gw_per_chassis - average number of scheduled master gateway chassis
+  within the same provider network.
+- num_gw_by_provider_net - number of master chassis gateways scheduled on
+  a given provider network.
+- num_chassis_with_provider_net - number of chassis that have connectivity
+  to a given provider network.
+
+The rebalancing occurs only if::
+
+    num_gw_by_provider_net_by_chassis > avg_gw_per_chassis
+
+Where:
+
+- num_gw_by_provider_net_by_chassis - number of master gateways hosted on
+  a given chassis for a given provider network.
+- avg_gw_per_chassis - average number of scheduled master gateway chassis
+  within the same provider network.
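
The condition above can be expressed directly in code. The helper below is
illustrative only, with names following the formula:

```python
def should_rebalance(num_gw_by_provider_net_by_chassis,
                     num_gw_by_provider_net,
                     num_chassis_with_provider_net):
    """Return True when this chassis hosts more master gateways for a
    provider network than the per-chassis average for that network."""
    avg_gw_per_chassis = (num_gw_by_provider_net /
                          num_chassis_with_provider_net)
    return num_gw_by_provider_net_by_chassis > avg_gw_per_chassis


# Hypothetical numbers for Case #3 below: 5 master gateways on one provider
# network, a third chassis just added, and C1 currently hosting 3 of them;
# 3 > 5/3, so some gateways should move to C3.
assert should_rebalance(3, 5, 3) is True
# Two chassis hosting one gateway each: 1 > 2/2 is false, no rebalancing.
assert should_rebalance(1, 2, 2) is False
```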
+
+
+The following scenarios have been considered in the design:
+
+* Case #1:
+  - System has only two chassis, C1 and C2, hosting the same number
+    of gateways.
+ - Behavior: Rebalancing doesn't occur.
+* Case #2:
+ - System has only two chassis C1 and C2. C1 hosts 3 gateways.
+ C2 hosts 2 gateways.
+  - Behavior: Rebalancing doesn't occur, to avoid continuously moving
+    gateways between chassis in a loop.
+* Case #3:
+  - System has two chassis C1 and C2. In the meantime, a third chassis C3
+    has been added to the system.
+ - Behavior: Rebalancing should occur. Gateways from C1 and C2 should be
+ moved to C3 up to avg_gw_per_chassis.
+* Case #4:
+ - System has two chassis C1 and C2. C1 is connected to provnet1, but C2
+ is connected to provnet2.
+  - Behavior: Rebalancing shouldn't occur because there is no other chassis
+    within the same provider network.
+
+References
+~~~~~~~~~~
+.. _`[0]`: https://opendev.org/openstack/neutron/src/commit/f73f39f2cfcd4eace2bda14c99ead9a8cc8560f4/neutron/common/ovn/constants.py#L171
+.. _`[1]`: https://opendev.org/openstack/neutron/src/commit/f73f39f2cfcd4eace2bda14c99ead9a8cc8560f4/neutron/services/ovn_l3/plugin.py#L318
+.. _`[2]`: https://bugs.launchpad.net/networking-ovn/+bug/1762694
+.. _`[3]`: https://developer.openstack.org/api-ref/network/v2/index.html?expanded=schedule-router-to-an-l3-agent-detail#schedule-router-to-an-l3-agent
+.. _`[4]`: https://opendev.org/x/osops-tools-contrib/src/branch/master/neutron/dhcp_agents_balancer.py
+.. _`[5]`: http://www.openvswitch.org/support/dist-docs/ovn-nb.5.txt
diff --git a/doc/source/contributor/internals/ovn/loadbalancer.rst b/doc/source/contributor/internals/ovn/loadbalancer.rst
new file mode 100644
index 00000000000..df46c06715c
--- /dev/null
+++ b/doc/source/contributor/internals/ovn/loadbalancer.rst
@@ -0,0 +1,316 @@
+.. _loadbalancer:
+
+==================================
+OpenStack LoadBalancer API and OVN
+==================================
+
+Introduction
+------------
+
+Load balancing is essential for enabling simple or automatic delivery
+scaling and availability since application delivery, scaling and
+availability are considered vital features of any cloud.
+Octavia is an open source, operator-scale load balancing solution designed
+to work with OpenStack.
+
+The purpose of this document is to propose a design for how we can use OVN
+as the backend for OpenStack's LoadBalancer API provided by Octavia.
+
+Octavia LoadBalancers Today
+---------------------------
+
+A Detailed design analysis of Octavia is available here:
+
+https://docs.openstack.org/octavia/queens/contributor/design/version0.5/component-design.html
+
+Currently, Octavia uses the built-in Amphora driver to fulfill
+load-balancing requests in OpenStack. An amphora can be a virtual machine,
+container, dedicated hardware, appliance or device that actually performs the
+task of load balancing in the Octavia system. More specifically, an amphora
+takes requests from clients on the front-end and distributes these to back-end
+systems. Amphorae communicate with their controllers over the LoadBalancer's
+network through a driver interface on the controller.
+
+Amphorae need a placeholder, such as a separate VM or container, for
+deployment, so that they can handle the LoadBalancer's requests. Along with
+this, they also need a separate network (termed the lb-mgmt-network) which
+handles all Amphorae requests.
+
+Amphorae can handle L4 (TCP/UDP) as well as L7 (HTTP)
+LoadBalancer requests and provide monitoring features using HealthMonitors.
+
+Octavia with OVN
+----------------
+
+The OVN native LoadBalancer currently supports L4 protocols, with support
+for L7 protocols planned for future releases. It also does not currently
+have any monitoring facility. However, it does not need any extra
+hardware/VM/container for deployment, which is a major advantage when
+compared with Amphorae. Also, it does not need any special network to
+handle the LoadBalancer's requests as they are taken care of by OpenFlow
+rules directly. And, though OVN does not yet support TLS, it is in the
+works and once implemented can be integrated with Octavia.
+
+The following section details how OVN can be used as an Octavia driver.
+
+Overview of Proposed Approach
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The OVN Driver for Octavia runs under the scope of Octavia. Octavia API
+receives and forwards calls to the OVN Driver.
+
+**Step 1** - Creating a LoadBalancer
+
+Octavia API receives and issues a LoadBalancer creation request on
+a network to the OVN Provider driver. OVN driver creates a LoadBalancer
+in the OVN NorthBound DB and asynchronously updates the Octavia DB
+with the status response. A VIP port is created in Neutron when the
+LoadBalancer creation is complete. The VIP information however is not updated
+in the NorthBound DB until the Members are associated with the
+LoadBalancer's Pool.
+
+**Step 2** - Creating LoadBalancer entities (Pools, Listeners, Members)
+
+Once a LoadBalancer is created by OVN in its NorthBound DB, users can now
+create Pools, Listeners and Members associated with the LoadBalancer using
+the Octavia API. With the creation of each entity, the LoadBalancer's
+*external_ids* column in the NorthBound DB would be updated and corresponding
+Logical and Openflow rules would be added for handling them.
+
+**Step 3** - LoadBalancer request processing
+
+When a user sends a request to the VIP IP address, OVN pipeline takes care of
+load balancing the VIP request to one of the backend members.
+More information about this can be found in the ovn-northd man pages.
+
+OVN LoadBalancer Driver Logic
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+* On startup: Open and maintain a connection to the OVN Northbound DB
+ (using the ovsdbapp library). On first connection, and anytime a reconnect
+ happens:
+
+ * Do a full sync.
+
+* Register a callback when a new interface is added to a router or deleted
+ from a router.
+
+* When a new LoadBalancer L1 is created, create a Row in OVN's
+ ``Load_Balancer`` table and update its entries for name and network
+ references. If the network on which the LoadBalancer is created, is
+ associated with a router, say R1, then add the router reference to the
+ LoadBalancer's *external_ids* and associate the LoadBalancer to the router.
+  Also associate the LoadBalancer L1 with all those networks which have an
+  interface on the router R1. This is required so that Logical Flows for
+  inter-network communication using the LoadBalancer L1 are possible.
+ Also, during this time, a new port is created via Neutron which acts as a
+ VIP Port. The information of this new port is not visible on the OVN's
+ NorthBound DB till a member is added to the LoadBalancer.
+
+* If a new network interface is added to the router R1 described above, all
+ the LoadBalancers on that network are associated with the router R1 and all
+ the LoadBalancers on the router are associated with the new network.
+
+* If a network interface is removed from the router R1, then all the
+ LoadBalancers which have been solely created on that network (identified
+ using the *ls_ref* attribute in the LoadBalancer's *external_ids*) are
+ removed from the router. Similarly those LoadBalancers which are associated
+ with the network but not actually created on that network are removed from
+ the network.
+
+* A LoadBalancer can either be deleted with all its child entities using
+ the *cascade* option, or its members/pools/listeners can be individually
+ deleted. When the LoadBalancer is deleted, its references and
+ associations from all networks and routers are removed. This might change
+ in the future once the association of LoadBalancers with networks/routers
+ are changed to *weak* from *strong* [3]. Also the VIP port is deleted
+ when the LoadBalancer is deleted.
+
+OVN LoadBalancer at work
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+OVN Northbound schema [5] has a table to store LoadBalancers.
+The table looks like::
+
+ "Load_Balancer": {
+ "columns": {
+ "name": {"type": "string"},
+ "vips": {
+ "type": {"key": "string", "value": "string",
+ "min": 0, "max": "unlimited"}},
+ "protocol": {
+ "type": {"key": {"type": "string",
+ "enum": ["set", ["tcp", "udp"]]},
+ "min": 0, "max": 1}},
+ "external_ids": {
+ "type": {"key": "string", "value": "string",
+ "min": 0, "max": "unlimited"}}},
+ "isRoot": true},
+
+There is a ``load_balancer`` column in the Logical_Switch table (which
+corresponds to a Neutron network) as well as the Logical_Router table
+(which corresponds to a Neutron router) referring back to the 'Load_Balancer'
+table.
+
+The OVN driver updates the OVN Northbound DB. When a LoadBalancer is created,
+a row in this table is created, and when the listeners and members are added,
+the ``vips`` column is updated accordingly. The Logical_Switch's
+``load_balancer`` column is also updated accordingly.
+
+The ovn-northd service, which monitors for changes to the OVN Northbound DB,
+generates OVN logical flows to enable load balancing, and ovn-controller,
+running on each compute node, translates the logical flows into actual
+OpenFlow rules.
+
+The status of each entity in the Octavia DB is managed according to [4].
+
+Below are a few examples of what happens when LoadBalancer commands are
+executed and what changes in the Load_Balancer Northbound DB table.
+
+1. Create a LoadBalancer::
+
+ $ openstack loadbalancer create --provider ovn --vip-subnet-id=private lb1
+
+ $ ovn-nbctl list load_balancer
+ _uuid : 9dd65bae-2501-43f2-b34e-38a9cb7e4251
+ external_ids : {
+ lr_ref="neutron-52b6299c-6e38-4226-a275-77370296f257",
+ ls_refs="{\"neutron-2526c68a-5a9e-484c-8e00-0716388f6563\": 1}",
+ neutron:vip="10.0.0.10",
+ neutron:vip_port_id="2526c68a-5a9e-484c-8e00-0716388f6563"}
+ name : "973a201a-8787-4f6e-9b8f-ab9f93c31f44"
+ protocol : []
+ vips : {}
+
+2. Create a pool::
+
+ $ openstack loadbalancer pool create --name p1 --loadbalancer lb1
+ --protocol TCP --lb-algorithm SOURCE_IP_PORT
+
+ $ ovn-nbctl list load_balancer
+ _uuid : 9dd65bae-2501-43f2-b34e-38a9cb7e4251
+ external_ids : {
+ lr_ref="neutron-52b6299c-6e38-4226-a275-77370296f257",
+ ls_refs="{\"neutron-2526c68a-5a9e-484c-8e00-0716388f6563\": 1}",
+ "pool_f2ddf7a6-4047-4cc9-97be-1d1a6c47ece9"="", neutron:vip="10.0.0.10",
+ neutron:vip_port_id="2526c68a-5a9e-484c-8e00-0716388f6563"}
+ name : "973a201a-8787-4f6e-9b8f-ab9f93c31f44"
+ protocol : []
+ vips : {}
+
+3. Create a member::
+
+ $ openstack loadbalancer member create --address 10.0.0.107
+ --subnet-id 2d54ec67-c589-473b-bc67-41f3d1331fef --protocol-port 80 p1
+
+ $ ovn-nbctl list load_balancer
+ _uuid : 9dd65bae-2501-43f2-b34e-38a9cb7e4251
+ external_ids : {
+ lr_ref="neutron-52b6299c-6e38-4226-a275-77370296f257",
+ ls_refs="{\"neutron-2526c68a-5a9e-484c-8e00-0716388f6563\": 2}",
+ "pool_f2ddf7a6-4047-4cc9-97be-1d1a6c47ece9"=
+ "member_579c0c9f-d37d-4ba5-beed-cabf6331032d_10.0.0.107:80",
+ neutron:vip="10.0.0.10",
+ neutron:vip_port_id="2526c68a-5a9e-484c-8e00-0716388f6563"}
+ name : "973a201a-8787-4f6e-9b8f-ab9f93c31f44"
+ protocol : []
+ vips : {}
+
+4. Create another member::
+
+ $ openstack loadbalancer member create --address 20.0.0.107
+ --subnet-id c2e2da10-1217-4fe2-837a-1c45da587df7 --protocol-port 80 p1
+
+ $ ovn-nbctl list load_balancer
+ _uuid : 9dd65bae-2501-43f2-b34e-38a9cb7e4251
+ external_ids : {
+ lr_ref="neutron-52b6299c-6e38-4226-a275-77370296f257",
+ ls_refs="{\"neutron-2526c68a-5a9e-484c-8e00-0716388f6563\": 2,
+ \"neutron-12c42705-3e15-4e2d-8fc0-070d1b80b9ef\": 1}",
+ "pool_f2ddf7a6-4047-4cc9-97be-1d1a6c47ece9"=
+ "member_579c0c9f-d37d-4ba5-beed-cabf6331032d_10.0.0.107:80,
+ member_d100f2ed-9b55-4083-be78-7f203d095561_20.0.0.107:80",
+ neutron:vip="10.0.0.10",
+ neutron:vip_port_id="2526c68a-5a9e-484c-8e00-0716388f6563"}
+ name : "973a201a-8787-4f6e-9b8f-ab9f93c31f44"
+ protocol : []
+ vips : {}
+
+5. Create a listener::
+
+ $ openstack loadbalancer listener create --name l1 --protocol TCP
+ --protocol-port 82 --default-pool p1 lb1
+
+ $ ovn-nbctl list load_balancer
+ _uuid : 9dd65bae-2501-43f2-b34e-38a9cb7e4251
+ external_ids : {
+ lr_ref="neutron-52b6299c-6e38-4226-a275-77370296f257",
+ ls_refs="{\"neutron-2526c68a-5a9e-484c-8e00-0716388f6563\": 2,
+ \"neutron-12c42705-3e15-4e2d-8fc0-070d1b80b9ef\": 1}",
+ "pool_f2ddf7a6-4047-4cc9-97be-1d1a6c47ece9"="10.0.0.107:80,20.0.0.107:80",
+ "listener_12345678-2501-43f2-b34e-38a9cb7e4132"=
+ "82:pool_f2ddf7a6-4047-4cc9-97be-1d1a6c47ece9",
+ neutron:vip="10.0.0.10",
+ neutron:vip_port_id="2526c68a-5a9e-484c-8e00-0716388f6563"}
+ name : "973a201a-8787-4f6e-9b8f-ab9f93c31f44"
+ protocol : []
+ vips : {"10.0.0.10:82"="10.0.0.107:80,20.0.0.107:80"}
+
+As explained earlier in the design section:
+
+- If a network N1 has a LoadBalancer LB1 associated to it and one of
+ its interfaces is added to a router R1, LB1 is associated with R1 as well.
+
+- If a network N2 has a LoadBalancer LB2 and one of its interfaces is added
+ to the router R1, then R1 will have both LoadBalancers LB1 and LB2. N1 and
+  N2 will also have both LoadBalancers associated with them. However, note
+  that though network N1 would have both the LB1 and LB2 LoadBalancers
+  associated with it, only LB1 has a direct reference to the network N1,
+  since LB1 was created on N1. This is visible in the ``ls_ref`` key of
+  the ``external_ids`` column in LB1's entry in the ``load_balancer``
+  table.
+
+- If a network N3 is added to the router R1, N3 will also have both
+ LoadBalancers (LB1, LB2) associated to it.
+
+- If the interface to network N2 is removed from R1, network N2 will now only
+ have LB2 associated with it. Networks N1 and N3 and router R1 will have
+ LoadBalancer LB1 associated with them.
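
The per-network reference counting visible in the ``ls_refs`` examples
above can be sketched as follows. ``bump_ls_ref`` is an illustrative
helper, not the driver's actual code; it only shows how the JSON map of
logical-switch names to counts behaves:

```python
import json


def bump_ls_ref(external_ids, network, delta):
    """Adjust the reference count for a logical switch in ls_refs;
    drop the network entry when its count reaches zero."""
    refs = json.loads(external_ids.get("ls_refs", "{}"))
    refs[network] = refs.get(network, 0) + delta
    if refs[network] <= 0:
        del refs[network]
    external_ids["ls_refs"] = json.dumps(refs)
    return external_ids


ids = {}
bump_ls_ref(ids, "neutron-net1", 1)   # LB created on net1
bump_ls_ref(ids, "neutron-net1", 1)   # member added on net1
bump_ls_ref(ids, "neutron-net2", 1)   # member added on net2
assert json.loads(ids["ls_refs"]) == {"neutron-net1": 2, "neutron-net2": 1}

bump_ls_ref(ids, "neutron-net2", -1)  # member on net2 removed
assert json.loads(ids["ls_refs"]) == {"neutron-net1": 2}
```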
+
+Limitations
+-----------
+
+The following actions are not supported by the OVN driver:
+
+- Creating a LoadBalancer/Listener/Pool with L7 Protocol
+
+- Creating HealthMonitors
+
+- Currently only one algorithm is supported for pool management
+ (Source IP Port)
+
+- Creating Listeners and Pools with different protocols. They should be of the
+ same protocol type.
+
+The following issue exists with OVN's integration with Octavia:
+
+- If creation/deletion of a LoadBalancer, Listener, Pool or Member fails, then
+ the corresponding object will remain in the DB in a PENDING_* state.
+
+Support Matrix
+--------------
+
+A detailed matrix of the operations supported by the OVN provider driver in
+Octavia can be found in
+https://docs.openstack.org/octavia/latest/user/feature-classification/index.html
+
+Other References
+----------------
+[1] Octavia API:
+https://docs.openstack.org/api-ref/load-balancer/v2/
+
+[2] Octavia Glossary:
+https://docs.openstack.org/octavia/queens/reference/glossary.html
+
+[3] https://github.com/openvswitch/ovs/commit/612f80fa8ebf88dad2e204364c6c02b451dca36c
+
+[4] https://docs.openstack.org/api-ref/load-balancer/v2/index.html#status-codes
+
+[5] https://github.com/openvswitch/ovs/blob/d1b235d7a6246e00d4afc359071d3b6b3ed244c3/ovn/ovn-nb.ovsschema#L117
diff --git a/doc/source/contributor/internals/ovn/metadata_api.rst b/doc/source/contributor/internals/ovn/metadata_api.rst
new file mode 100644
index 00000000000..5524980caf5
--- /dev/null
+++ b/doc/source/contributor/internals/ovn/metadata_api.rst
@@ -0,0 +1,363 @@
+.. _metadata_api:
+
+==============================
+OpenStack Metadata API and OVN
+==============================
+
+Introduction
+------------
+
+OpenStack Nova presents a metadata API to VMs similar to what is available on
+Amazon EC2. Neutron is involved in this process because the source IP address
+is not enough to uniquely identify the source of a metadata request since
+networks can have overlapping IP addresses. Neutron is responsible for
+intercepting metadata API requests and adding HTTP headers which uniquely
+identify the source of the request before forwarding it to the metadata API
+server.
+
+The purpose of this document is to propose a design for how to enable this
+functionality when OVN is used as the backend for OpenStack Neutron.
+
+Neutron and Metadata Today
+--------------------------
+
+The following blog post describes how VMs access the metadata API through
+Neutron today.
+
+https://www.suse.com/communities/blog/vms-get-access-metadata-neutron/
+
+In summary, we run a metadata proxy in either the router namespace or DHCP
+namespace. The DHCP namespace can be used when there's no router connected to
+the network. The one downside to the DHCP namespace approach is that it
+requires pushing a static route to the VM through DHCP so that it knows to
+route metadata requests to the DHCP server IP address.
+
+* The instance sends an HTTP request for metadata to 169.254.169.254
+
+* This request either hits the router or DHCP namespace depending on the route
+ in the instance
+
+* The metadata proxy service in the namespace adds the following info to the
+ request:
+
+ * Instance IP (X-Forwarded-For header)
+
+ * Router or Network-ID (X-Neutron-Network-Id or X-Neutron-Router-Id header)
+
+* The metadata proxy service sends this request to the metadata agent (outside
+ the namespace) via a UNIX domain socket.
+
+* The neutron-metadata-agent service forwards the request to the Nova metadata
+ API service by adding some new headers (instance ID and Tenant ID) to the
+ request [0].
+
+For proper operation, Neutron and Nova must be configured to communicate
+with a shared secret. Neutron uses this secret to sign the Instance-ID
+header of the metadata request to prevent spoofing. This secret is configured
+through ``metadata_proxy_shared_secret`` in both the Nova and Neutron
+configuration files (optional).
+
+[0] https://opendev.org/openstack/neutron/src/commit/f73f39f2cfcd4eace2bda14c99ead9a8cc8560f4/neutron/agent/metadata/agent.py#L175
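
A minimal sketch of how such a signature can be computed with the Python
standard library follows. The function name and sample values here are
illustrative; the exact header wiring belongs to the metadata agent code:

```python
import hashlib
import hmac


def sign_instance_id(shared_secret, instance_id):
    """Compute an HMAC-SHA256 signature over the instance ID using the
    shared secret, so the metadata API can verify the request really
    came from Neutron and was not spoofed."""
    return hmac.new(shared_secret.encode("utf-8"),
                    instance_id.encode("utf-8"),
                    hashlib.sha256).hexdigest()


sig = sign_instance_id("s3cr3t", "8ce2-example-instance-uuid")
# The receiving side recomputes the HMAC with its own copy of the secret
# and compares; a hex-encoded SHA-256 digest is 64 characters long.
assert len(sig) == 64
```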
+
+Neutron and Metadata with OVN
+-----------------------------
+
+The current metadata API approach does not translate directly to OVN. There
+are no Neutron agents in use with OVN. Further, OVN does not use network
+namespaces that we could take advantage of, the way the original
+implementation uses the router and DHCP namespaces.
+
+We must use a modified approach that fits the OVN model. This section details
+a proposed approach.
+
+Overview of Proposed Approach
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The proposed approach would be similar to the *isolated network* case in the
+current ML2+OVS implementation. Therefore, we would be running a metadata
+proxy (haproxy) instance on every hypervisor for each network a VM on that
+host is connected to.
+
+The downside of this approach is that we'll be running more metadata proxies
+than we do now in the case of routed networks (one per virtual router), but
+since haproxy is very lightweight and the proxies will be idle most of the
+time, it shouldn't be a big issue overall. However, the major benefit of this
+approach is that we don't have to implement any scheduling logic to distribute
+metadata proxies across the nodes, nor any HA logic. This can be
+evolved in the future as explained below in this document.
+
+Also, this approach relies on a new feature in OVN that we must implement
+first so that an OVN port can be present on *every* chassis (similar to
+*localnet* ports). This new type of logical port would be *localport* and we
+will never forward packets over a tunnel for these ports. We would only send
+packets to the local instance of a *localport*.
+
+**Step 1** - Create a port for the metadata proxy
+
+When using the DHCP agent today, Neutron automatically creates a port for the
+DHCP agent to use. We could do the same thing for use with the metadata proxy
+(haproxy). We'll create an OVN *localport* which will be present on every
+chassis and this port will have the same MAC/IP address on every host.
+Eventually, we can share the same neutron port for both DHCP and metadata.
+
+**Step 2** - Routing metadata API requests to the correct Neutron port
+
+This works similarly to the current approach.
+
+We would program OVN to include a static route in DHCP responses that routes
+metadata API requests to the *localport* that is hosting the metadata API
+proxy.
+
+Also, in case DHCP isn't enabled or the client ignores the route info, we
+will program a static route in the OVN logical router which will still get
+metadata requests directed to the right place.
+
+If the DHCP route does not work and the network is isolated, VMs won't get
+metadata, but this already happens with the current implementation so this
+approach doesn't introduce a regression.
+
+**Step 3** - Management of the namespaces and haproxy instances
+
+We propose a new agent called ``neutron-ovn-metadata-agent``.
+We will run this agent on every hypervisor and it will be responsible for
+spawning and managing the OVS interfaces, network namespaces and haproxy
+processes used to proxy metadata API requests.
+
+**Step 4** - Metadata API request processing
+
+Similar to the existing neutron metadata agent, ``neutron-ovn-metadata-agent``
+must act as an intermediary between haproxy and the Nova metadata API service.
+``neutron-ovn-metadata-agent`` is the process that will have access to the
+host networks where the Nova metadata API exists, while each haproxy will be
+in a network namespace unable to reach that host network. Haproxy
+will add the necessary headers to the metadata API request and then forward it
+to ``neutron-ovn-metadata-agent`` over a UNIX domain socket, which matches the
+behavior of the current metadata agent.
+
+
+Metadata Proxy Management Logic
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+In neutron-ovn-metadata-agent.
+
+* On startup:
+
+ * Do a full sync. Ensure we have all the required metadata proxies running.
+ For that, the agent would watch the ``Port_Binding`` table of the OVN
+ Southbound database and look for all rows with the ``chassis`` column set
+ to the host the agent is running on. For all those entries, make sure a
+ metadata proxy instance is spawned for every ``datapath`` (Neutron
+ network) those ports are attached to. The agent will keep record of the
+ list of networks it currently has proxies running on by updating the
+ ``external-ids`` key ``neutron-metadata-proxy-networks`` of the OVN
+ ``Chassis`` record in the OVN Southbound database that corresponds to this
+ host. As an example, this key would look like
+ ``neutron-metadata-proxy-networks=NET1_UUID,NET4_UUID`` meaning that this
+ chassis is hosting one or more VM's connected to networks 1 and 4 so we
+ should have a metadata proxy instance running for each. Ensure any running
+ metadata proxies no longer needed are torn down.
+
+* Open and maintain a connection to the OVN Northbound database (using the
+ ovsdbapp library). On first connection, and anytime a reconnect happens:
+
+ * Do a full sync.
+
+* Register a callback for creates/updates/deletes to Logical_Switch_Port rows
+ to detect when metadata proxies should be started or torn down.
+ ``neutron-ovn-metadata-agent`` will watch OVN Southbound database
+ (``Port_Binding`` table) to detect when a port gets bound to its chassis. At
+ that point, the agent will make sure that there's a metadata proxy
+ attached to the OVN *localport* for the network which this port is connected
+ to.
+
+* When a new network is created, we must create an OVN *localport* for use
+  as a metadata proxy. This port will be owned by ``network:dhcp`` so that it
+  is auto-deleted upon the removal of the network, and it will remain ``DOWN``
+ and not bound to any chassis. The metadata port will be created regardless of
+ the DHCP setting of the subnets within the network as long as the metadata
+ service is enabled.
+
+* When a network is deleted, we must tear down the metadata proxy instance (if
+ present) on the host and delete the corresponding OVN *localport* (which will
+ happen automatically as it's owned by ``network:dhcp``).
+
+Launching a metadata proxy includes:
+
+* Creating a network namespace::
+
+ $ sudo ip netns add
+
+* Creating a VETH pair (OVS upgrades that upgrade the kernel module will make
+  internal ports go away and then be brought back by OVS scripts. This may
+  cause some disruption, so veth pairs are preferred over internal ports)::
+
+ $ sudo ip link add 0 type veth peer name 1
+
+* Creating an OVS interface and placing one end in that namespace::
+
+ $ sudo ovs-vsctl add-port br-int 0
+ $ sudo ip link set 1 netns
+
+* Setting the IP and MAC addresses on that interface::
+
+ $ sudo ip netns exec \
+ > ip link set 1 address
+ $ sudo ip netns exec \
+ > ip addr add / dev 1
+
+* Bringing the VETH pair up::
+
+ $ sudo ip netns exec ip link set 1 up
+ $ sudo ip link set 0 up
+
+* Set ``external-ids:iface-id=NEUTRON_PORT_UUID`` on the OVS interface so that
+ OVN is able to correlate this new OVS interface with the correct OVN logical
+ port::
+
+ $ sudo ovs-vsctl set Interface 0 external_ids:iface-id=
+
+* Starting haproxy in this network namespace.
+
+* Add the network UUID to ``external-ids:neutron-metadata-proxy-networks`` on
+ the Chassis table for our chassis in OVN Southbound database.
+
+Tearing down a metadata proxy includes:
+
+* Removing the network UUID from our chassis.
+
+* Stopping haproxy.
+
+* Deleting the OVS interface.
+
+* Deleting the network namespace.
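
The bookkeeping on the ``neutron-metadata-proxy-networks`` key performed in
the last launching step and the first teardown step can be sketched as set
operations over the comma-separated value. The helpers below are
illustrative only; the actual OVSDB write (via ovsdbapp) is omitted:

```python
KEY = "neutron-metadata-proxy-networks"


def add_network(external_ids, net_uuid):
    """Return updated external-ids with net_uuid added to the
    comma-separated network list on the Chassis record."""
    nets = set(filter(None, external_ids.get(KEY, "").split(",")))
    nets.add(net_uuid)
    return dict(external_ids, **{KEY: ",".join(sorted(nets))})


def remove_network(external_ids, net_uuid):
    """Return updated external-ids with net_uuid removed from the list."""
    nets = set(filter(None, external_ids.get(KEY, "").split(",")))
    nets.discard(net_uuid)
    return dict(external_ids, **{KEY: ",".join(sorted(nets))})


ids = add_network({}, "NET1_UUID")
ids = add_network(ids, "NET4_UUID")
assert ids[KEY] == "NET1_UUID,NET4_UUID"

ids = remove_network(ids, "NET1_UUID")
assert ids[KEY] == "NET4_UUID"
```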
+
+**Other considerations**
+
+This feature will be enabled by default when using the ``ovn`` driver, but
+there should be a way to disable it so that operators who don't need metadata
+don't have to deal with its complexity (haproxy instances, network
+namespaces, etcetera). In that case, the agent would not create the neutron
+ports needed for metadata.
+
+There could be a race condition when the first VM for a certain network boots
+on a hypervisor if it does so before the metadata proxy instance has been
+spawned.
+
+Right now, the ``vif-plugged`` event to Nova is sent out when the ``up``
+column in the OVN Northbound database's Logical_Switch_Port table changes to
+True, indicating that the VIF is now up. To overcome this race condition we
+want to wait until all network UUIDs to which this VM is connected are
+present in ``external-ids:neutron-metadata-proxy-networks`` on the Chassis
+table for our chassis in the OVN Southbound database. This will delay the
+event to Nova until the metadata proxy instance is up and running on the
+host, ensuring that the VM will be able to get the metadata on boot.
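That readiness check amounts to a set comparison; a rough illustration (names are hypothetical, and the real agent logic differs in detail):

```python
def metadata_ready(vm_network_ids, chassis_external_ids):
    """Return True once every network this VM is attached to appears in
    the chassis' neutron-metadata-proxy-networks external-id."""
    raw = chassis_external_ids.get("neutron-metadata-proxy-networks", "")
    proxied = {n for n in raw.split(",") if n}
    return set(vm_network_ids) <= proxied
```

The ``vif-plugged`` notification would be held back until this returns True for the booting VM.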
+
+Alternatives Considered
+-----------------------
+
+Alternative 1: Build metadata support into ovn-controller
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+We've been building some features useful to OpenStack directly into OVN. DHCP
+and DNS are key examples of things we've replaced by building them into
+ovn-controller. The metadata API case has some key differences that make this
+a less attractive solution:
+
+The metadata API is an OpenStack-specific feature. DHCP and DNS, by
+contrast, are more clearly useful outside of OpenStack. Building metadata API
+proxy support into ovn-controller would mean embedding an HTTP and TCP stack
+into ovn-controller, which is a significant degree of undesired complexity.
+
+This option has been ruled out for these reasons.
+
+Alternative 2: Distributed metadata and High Availability
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+In this approach, we would spawn a metadata proxy per virtual router or per
+network (if isolated), thus reducing the number of metadata proxy instances
+running in the cloud. However, scheduling and HA have to be considered. Also,
+we wouldn't need the OVN *localport* implementation.
+
+``neutron-ovn-metadata-agent`` would run on any host that we wish to be able
+to host metadata API proxies. These hosts must also be running ovn-controller.
+
+Each of these hosts will have a Chassis record in the OVN southbound database
+created by ovn-controller. The Chassis table has a column called
+``external_ids`` which can be used for general metadata however we see fit.
+``neutron-ovn-metadata-agent`` will update its corresponding Chassis record
+with an external-id of ``neutron-metadata-proxy-host=true`` to indicate that
+this OVN chassis is one capable of hosting metadata proxy instances.
+
+Once we have a way to determine hosts capable of hosting metadata API proxies,
+we can add logic to the ovn ML2 driver that schedules metadata API
+proxies. This would be triggered by Neutron API requests.
+
+The output of the scheduling process would be setting an ``external_ids`` key
+on a Logical_Switch_Port in the OVN northbound database that corresponds with
+a metadata proxy. The key could be something like
+``neutron-metadata-proxy-chassis=CHASSIS_HOSTNAME``.
+
+``neutron-ovn-metadata-agent`` on each host would also be watching for updates
+to these Logical_Switch_Port rows. When it detects that a metadata proxy has
+been scheduled locally, it will kick off the process to spawn the local
+haproxy instance and get it plugged into OVN.
+
+HA must also be considered. We must know when a host goes down so that all
+metadata proxies scheduled to that host can be rescheduled. This is almost
+the exact same problem we have with L3 HA. When a host goes down, we need to
+trigger rescheduling gateways to other hosts. We should ensure that the
+approach used for rescheduling L3 gateways can be utilized for rescheduling
+metadata proxies, as well.
+
+The following changes would be made in neutron-server (the ovn mechanism
+driver).
+
+Introduce a new ovn driver configuration option:
+
+* ``[ovn] isolated_metadata=[True|False]``
+
+Events that trigger scheduling a new metadata proxy:
+
+* If ``isolated_metadata`` is True
+
+ * When a new network is created, we must create an OVN logical port for use
+ as a metadata proxy and then schedule this to one of the
+ ``neutron-ovn-metadata-agent`` instances.
+
+* If ``isolated_metadata`` is False
+
+ * When a network is attached to or removed from a logical router, ensure
+ that at least one of the networks has a metadata proxy port already
+ created. If not, pick a network and create a metadata proxy port and then
+ schedule it to an agent. At this point, we need to update the static route
+ for metadata API.
+
+Events that trigger unscheduling an existing metadata proxy:
+
+* When a network is deleted, delete the metadata proxy port if it exists and
+ unschedule it from a ``neutron-ovn-metadata-agent``.
+
+To schedule a new metadata proxy:
+
+* Determine the list of available OVN Chassis that can host metadata proxies
+ by reading the ``Chassis`` table of the OVN Southbound database. Look for
+ chassis that have an external-id of ``neutron-metadata-proxy-host=true``.
+
+* Of the available OVN chassis, choose the one "least loaded", or currently
+ hosting the fewest number of metadata proxies.
+
+* Set ``neutron-metadata-proxy-chassis=CHASSIS_HOSTNAME`` as an external-id on
+ the Logical_Switch_Port in the OVN Northbound database that corresponds to
+ the neutron port used for this metadata proxy. ``CHASSIS_HOSTNAME`` maps to
+ the hostname row of a Chassis record in the OVN Southbound database.
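The "least loaded" selection could be sketched as follows (illustrative only; the function name, and using a per-chassis list of proxied networks as the load indicator, are assumptions for the example):

```python
def schedule_metadata_proxy(chassis_rows):
    """Pick the candidate chassis currently hosting the fewest proxies.

    chassis_rows maps a chassis hostname to its external_ids dict; only
    chassis advertising neutron-metadata-proxy-host=true are candidates.
    """
    candidates = {
        name: ids for name, ids in chassis_rows.items()
        if ids.get("neutron-metadata-proxy-host") == "true"
    }
    if not candidates:
        return None

    def load(item):
        # Count the proxies already hosted on this chassis.
        raw = item[1].get("neutron-metadata-proxy-networks", "")
        return len([n for n in raw.split(",") if n])

    # sorted() makes ties break deterministically by hostname.
    return min(sorted(candidates.items()), key=load)[0]
```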
+
+This approach has been ruled out because of its complexity, although we have
+analyzed it in detail because, eventually and depending on the implementation
+of L3 HA, we may want to evolve towards it.
+
+Other References
+----------------
+
+* Haproxy config --
+ https://review.openstack.org/#/c/431691/34/neutron/agent/metadata/driver.py
+
+* https://engineeringblog.yelp.com/2015/04/true-zero-downtime-haproxy-reloads.html
diff --git a/doc/source/contributor/internals/ovn/native_dhcp.rst b/doc/source/contributor/internals/ovn/native_dhcp.rst
new file mode 100644
index 00000000000..1560449472a
--- /dev/null
+++ b/doc/source/contributor/internals/ovn/native_dhcp.rst
@@ -0,0 +1,53 @@
+.. _native_dhcp:
+
+=============================================
+Using the native DHCP feature provided by OVN
+=============================================
+
+DHCPv4
+------
+
+OVN implements native DHCPv4 support which caters to the common use case of
+providing an IP address to a booting instance by providing stateless replies
+to DHCPv4 requests based on statically configured address mappings. To do
+this, it allows a short list of DHCPv4 options to be configured and applied
+at each compute host running ovn-controller.
+
+The OVN northbound db provides a 'DHCP_Options' table to store the DHCP
+options, and each logical switch port has a reference to this table.
+
+When a subnet is created and enable_dhcp is True, a new entry is created in
+this table. The 'options' column stores the DHCPv4 options. These DHCPv4
+options are included in the DHCPv4 reply by the ovn-controller when the VIF
+attached to the logical switch port sends a DHCPv4 request.
+
+In order to map the DHCP_Options row with the subnet, the OVN ML2 driver
+stores the subnet id in the 'external_ids' column.
+
+When a new port is created, the 'dhcpv4_options' column of the logical switch
+port refers to the DHCP_Options row created for the subnet of the port.
+If the port has multiple IPv4 subnets, then the first subnet in the 'fixed_ips'
+is used.
+
+If the port has extra DHCPv4 options defined, then a new entry is created
+in the DHCP_Options table for the port. The default DHCP options are obtained
+from the subnet's DHCP_Options row and then overridden by the extra DHCPv4
+options of the port. In order to map the port's DHCP_Options row with the
+port, the OVN ML2 driver stores both the subnet id and port id in the
+'external_ids' column.
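Conceptually, the effective options for such a port are the subnet defaults with the port's extra options layered on top; a minimal sketch (not the driver's actual code):

```python
def effective_dhcp_options(subnet_options, port_extra_options):
    """Merge a subnet's default DHCPv4 options with a port's extra
    DHCP options; port-level values override the subnet defaults."""
    merged = dict(subnet_options)   # copy, so the subnet row is untouched
    merged.update(port_extra_options)
    return merged
```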
+
+If an admin wants to disable native OVN DHCPv4 for any particular port, then
+the admin needs to define 'dhcp_disabled' with the value 'true' in the extra
+DHCP options. For example::
+
+    $ neutron port-update <port-id> \
+      --extra-dhcp-opt ip_version=4,opt_name=dhcp_disabled,opt_value=true
+
+
+DHCPv6
+------
+
+OVN implements native DHCPv6 support similar to DHCPv4. When a v6 subnet is
+created, the OVN ML2 driver will insert a new entry into the DHCP_Options
+table only when the subnet's 'ipv6_address_mode' is not 'slaac' and
+enable_dhcp is True.
diff --git a/doc/source/contributor/internals/ovn/ovn_worker.rst b/doc/source/contributor/internals/ovn/ovn_worker.rst
new file mode 100644
index 00000000000..ae8d3b06578
--- /dev/null
+++ b/doc/source/contributor/internals/ovn/ovn_worker.rst
@@ -0,0 +1,84 @@
+.. _ovn_worker:
+
+===========================================
+OVN Neutron Worker and Port status handling
+===========================================
+
+When a logical switch port's VIF is attached to or removed from the OVN
+integration bridge, ovn-northd updates 'Logical_Switch_Port.up' to 'True'
+or 'False' accordingly.
+
+In order for the OVN Neutron ML2 driver to update the corresponding neutron
+port's status to 'ACTIVE' or 'DOWN' in the db, it needs to monitor the
+OVN Northbound db. A neutron worker is created for this purpose.
+
+The implementation of the ovn worker can be found here -
+'networking_ovn.ovsdb.worker.OvnWorker'.
+
+The Neutron service will create 'n' API workers, 'm' RPC workers and one ovn
+worker (all of these workers are separate processes).
+
+API workers and RPC workers will create an ovsdb idl client object
+('ovs.db.idl.Idl') to connect to the OVN_Northbound db.
+See 'networking_ovn.ovsdb.impl_idl_ovn.OvsdbNbOvnIdl' and
+'ovsdbapp.backend.ovs_idl.connection.Connection' classes for more details.
+
+The ovn worker will create a 'networking_ovn.ovsdb.ovsdb_monitor.OvnIdl'
+class object (which inherits from 'ovs.db.idl.Idl') to connect to the
+OVN_Northbound db. On receiving OVN_Northbound db updates from the
+ovsdb-server, the 'notify' function of 'OvnIdl' is called by the parent
+class object.
+
+OvnIdl.notify() function passes the received events to the
+ovsdb_monitor.OvnDbNotifyHandler class.
+ovsdb_monitor.OvnDbNotifyHandler checks for any changes in
+the 'Logical_Switch_Port.up' and updates the neutron port's status accordingly.
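The status translation itself is a simple mapping; conceptually (a sketch of the idea, not the actual handler code):

```python
def port_status_update(old_up, new_up, port_id):
    """Given the old and new values of Logical_Switch_Port.up for a port,
    return the Neutron status update to apply, or None if nothing changed."""
    if old_up == new_up:
        return None
    return (port_id, "ACTIVE" if new_up else "DOWN")
```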
+
+If the 'notify_nova_on_port_status_changes' configuration option is set, then
+neutron will notify nova of port status changes.
+
+ovsdb locks
+-----------
+
+If there are multiple neutron servers running, then each neutron server will
+have one ovn worker which listens for the notify events. When the
+'Logical_Switch_Port.up' is updated by ovn-northd, we do not want all the
+neutron servers to handle the event and update the neutron port status.
+In order for only one neutron server to handle the events, ovsdb locks are
+used.
+
+At start, each neutron server's ovn worker will try to acquire a lock with
+the id 'neutron_ovn_event_lock'. The ovn worker that has acquired the lock
+will handle the notify events.
+
+In case the neutron server with the lock dies, ovsdb-server will assign the
+lock to another neutron server in the queue.
+
+More details about ovsdb locks can be found in [1] and [2].
+
+[1] - https://tools.ietf.org/html/draft-pfaff-ovsdb-proto-04#section-4.1.8
+[2] - https://github.com/openvswitch/ovs/blob/branch-2.4/python/ovs/db/idl.py#L67
+
+
+One thing to note is that the ovn worker (with OvnIdl) does not carry out
+any transactions to the OVN Northbound db.
+
+Since the API and RPC workers are not configured with any locks, the use of
+the ovsdb lock on the OVN_Northbound and OVN_Southbound DBs by the ovn
+workers will not have any side effects on the transactions done by these API
+and RPC workers.
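The lock-gated event handling can be sketched as follows (simplified; the actual lock negotiation is done by the ovs.db.idl library against ovsdb-server, and the class here is our own illustration):

```python
class EventDispatcher:
    """Dispatch OVN db events only when this server holds the ovsdb lock,
    so exactly one neutron server processes each notification."""

    def __init__(self, has_lock=False):
        self.has_lock = has_lock   # granted/revoked by ovsdb-server
        self.handled = []

    def notify(self, event):
        if not self.has_lock:
            return False           # another neutron server owns the lock
        self.handled.append(event)
        return True
```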
+
+Handling port status changes when neutron server(s) are down
+------------------------------------------------------------
+
+When the neutron server starts, the ovn worker will receive a dump of all
+logical switch ports as events. 'ovsdb_monitor.OvnDbNotifyHandler' will then
+sync up any inconsistencies in the port status.
+
+OVN Southbound DB Access
+------------------------
+
+The OVN Neutron ML2 driver has a need to acquire chassis information (hostname
+and physnets combinations). This is required initially to support routed
+networks. Thus, the plugin will initiate and maintain a connection to the OVN
+SB DB during startup.
diff --git a/doc/source/contributor/internals/upgrade.rst b/doc/source/contributor/internals/upgrade.rst
index 5e524a76e8d..8a60d1a80d2 100644
--- a/doc/source/contributor/internals/upgrade.rst
+++ b/doc/source/contributor/internals/upgrade.rst
@@ -28,8 +28,7 @@
considerations specific to that choice of backend. For example, OVN does
not use Neutron agents, but does have a local controller that runs on each
compute node. OVN supports rolling upgrades, but information about how that
- works should be covered in the documentation for networking-ovn, the OVN
- Neutron plugin.
+ works should be covered in the documentation for the OVN Neutron plugin.
Upgrade strategy
================
diff --git a/doc/source/contributor/ovn_vagrant/index.rst b/doc/source/contributor/ovn_vagrant/index.rst
new file mode 100644
index 00000000000..cd6831e02c5
--- /dev/null
+++ b/doc/source/contributor/ovn_vagrant/index.rst
@@ -0,0 +1,20 @@
+..
+
+================================================
+Deploying a development environment with vagrant
+================================================
+
+
+The vagrant directory contains a set of vagrant configurations which will
+help you deploy Neutron with the ovn driver for testing or development
+purposes.
+
+We provide a sparse multinode architecture with clear separation between
+services. In the future we will include all-in-one and multi-gateway
+architectures.
+
+
+.. toctree::
+ :maxdepth: 2
+
+ prerequisites
+ sparse-architecture
diff --git a/doc/source/contributor/ovn_vagrant/prerequisites.rst b/doc/source/contributor/ovn_vagrant/prerequisites.rst
new file mode 100644
index 00000000000..7d480e9aa14
--- /dev/null
+++ b/doc/source/contributor/ovn_vagrant/prerequisites.rst
@@ -0,0 +1,29 @@
+.. _prerequisites:
+
+=====================
+Vagrant prerequisites
+=====================
+
+These are the prerequisites for using the Vagrant file definitions:
+
+#. Install VirtualBox and Vagrant. Alternatively, you can use the Parallels
+   or libvirt Vagrant plugins.
+
+#. Install plug-ins for Vagrant::
+
+ $ vagrant plugin install vagrant-cachier
+ $ vagrant plugin install vagrant-vbguest
+
+#. On Linux hosts, you can enable instances to access external networks such
+ as the Internet by enabling IP forwarding and configuring SNAT from the IP
+ address range of the provider network interface (typically vboxnet1) on
+ the host to the external network interface on the host. For example, if
+ the ``eth0`` network interface on the host provides external network
+ connectivity::
+
+ # sysctl -w net.ipv4.ip_forward=1
+ # sysctl -p
+ # iptables -t nat -A POSTROUTING -s 10.10.0.0/16 -o eth0 -j MASQUERADE
+
+ Note: These commands do not persist after rebooting the host.
diff --git a/doc/source/contributor/ovn_vagrant/sparse-architecture.rst b/doc/source/contributor/ovn_vagrant/sparse-architecture.rst
new file mode 100644
index 00000000000..0d66f08ed26
--- /dev/null
+++ b/doc/source/contributor/ovn_vagrant/sparse-architecture.rst
@@ -0,0 +1,106 @@
+.. _sparse-architecture:
+
+===================
+Sparse architecture
+===================
+
+The Vagrant scripts deploy OpenStack with Open Virtual Network (OVN)
+using four nodes (five if you use the optional ovn-vtep node) to implement a
+minimal variant of the reference architecture:
+
+#. ovn-db: Database node containing the OVN northbound (NB) and southbound (SB)
+ databases via the Open vSwitch (OVS) database and ``ovn-northd`` services.
+#. ovn-controller: Controller node containing the Identity service, Image
+ service, control plane portion of the Compute service, control plane
+ portion of the Networking service including the ``ovn`` ML2
+ driver, and the dashboard. In addition, the controller node is configured
+ as an NFS server to support instance live migration between the two
+ compute nodes.
+#. ovn-compute1 and ovn-compute2: Two compute nodes containing the Compute
+ hypervisor, ``ovn-controller`` service for OVN, metadata agents for the
+ Networking service, and OVS services. In addition, the compute nodes are
+ configured as NFS clients to support instance live migration between them.
+#. ovn-vtep: Optional. A node to run the HW VTEP simulator. This node is not
+   started by default but can be started by running ``vagrant up ovn-vtep``
+   after doing a normal ``vagrant up``.
+
+During deployment, Vagrant creates three VirtualBox networks:
+
+#. Vagrant management network for deployment and VM access to external
+ networks such as the Internet. Becomes the VM ``eth0`` network interface.
+#. OpenStack management network for the OpenStack control plane, OVN
+ control plane, and OVN overlay networks. Becomes the VM ``eth1`` network
+ interface.
+#. OVN provider network that connects OpenStack instances to external networks
+ such as the Internet. Becomes the VM ``eth2`` network interface.
+
+Requirements
+------------
+
+The default configuration requires approximately 12 GB of RAM and supports
+launching approximately four OpenStack instances using the ``m1.tiny``
+flavor. You can change the amount of resources for each VM in the
+``instances.yml`` file.
+
+Deployment
+----------
+
+#. Follow the prerequisites described in
+   :doc:`/contributor/ovn_vagrant/prerequisites`.
+
+#. Clone the ``neutron`` repository locally and change to the
+ ``neutron/tools/ovn_vagrant/sparse`` directory::
+
+ $ git clone https://opendev.org/openstack/neutron.git
+ $ cd neutron/tools/ovn_vagrant/sparse
+
+#. If necessary, adjust any configuration in the ``instances.yml`` file.
+
+ * If you change any IP addresses or networks, avoid conflicts with the
+ host.
+ * For evaluating large MTUs, adjust the ``mtu`` option. You must also
+ change the MTU on the equivalent ``vboxnet`` interfaces on the host
+ to the same value after Vagrant creates them. For example::
+
+ # ip link set dev vboxnet0 mtu 9000
+ # ip link set dev vboxnet1 mtu 9000
+
+#. Launch the VMs and grab some coffee::
+
+ $ vagrant up
+
+#. After the process completes, you can use the ``vagrant status`` command
+ to determine the VM status::
+
+ $ vagrant status
+ Current machine states:
+
+ ovn-db running (virtualbox)
+ ovn-controller running (virtualbox)
+ ovn-vtep running (virtualbox)
+ ovn-compute1 running (virtualbox)
+ ovn-compute2 running (virtualbox)
+
+#. You can access the VMs using the following commands::
+
+ $ vagrant ssh ovn-db
+ $ vagrant ssh ovn-controller
+ $ vagrant ssh ovn-vtep
+ $ vagrant ssh ovn-compute1
+ $ vagrant ssh ovn-compute2
+
+ Note: If you prefer to use the VM console, the password for the ``root``
+ account is ``vagrant``. Since ovn-controller is set as the primary
+ in the Vagrantfile, the command ``vagrant ssh`` (without specifying
+ the name) will connect ssh to that virtual machine.
+
+#. Access OpenStack services via command-line tools on the ``ovn-controller``
+ node or via the dashboard from the host by pointing a web browser at the
+ IP address of the ``ovn-controller`` node.
+
+ Note: By default, OpenStack includes two accounts: ``admin`` and ``demo``,
+ both using password ``password``.
+
+#. After completing your tasks, you can destroy the VMs::
+
+ $ vagrant destroy
diff --git a/doc/source/contributor/testing/index.rst b/doc/source/contributor/testing/index.rst
index fe03db3dd7b..e09fea4d671 100644
--- a/doc/source/contributor/testing/index.rst
+++ b/doc/source/contributor/testing/index.rst
@@ -36,3 +36,4 @@ Testing
template_model_sync_test
db_transient_failure_injection
ci_scenario_jobs
+ ovn_devstack
diff --git a/doc/source/contributor/testing/ovn_devstack.rst b/doc/source/contributor/testing/ovn_devstack.rst
new file mode 100644
index 00000000000..83e89d50795
--- /dev/null
+++ b/doc/source/contributor/testing/ovn_devstack.rst
@@ -0,0 +1,602 @@
+.. _ovn_devstack:
+
+=====================
+Testing with DevStack
+=====================
+
+This document describes how to test OpenStack with OVN using DevStack. We will
+start by describing how to test on a single host.
+
+Single Node Test Environment
+----------------------------
+
+1. Create a test system.
+
+It's best to use a throwaway dev system for running DevStack. Your best bet is
+to use either CentOS 7 or the latest Ubuntu LTS (16.04, Xenial).
+
+2. Create the ``stack`` user.
+
+::
+
+ $ git clone https://opendev.org/openstack/devstack.git
+ $ sudo ./devstack/tools/create-stack-user.sh
+
+3. Switch to the ``stack`` user and clone DevStack and Neutron.
+
+::
+
+ $ sudo su - stack
+ $ git clone https://opendev.org/openstack/devstack.git
+ $ git clone https://opendev.org/openstack/neutron.git
+
+4. Configure DevStack to use networking-ovn.
+
+The ovn driver comes with a sample DevStack configuration file you can start
+with. For example, you may want to set some values for the various PASSWORD
+variables in that file so DevStack doesn't have to prompt you for them. Feel
+free to edit it if you'd like, but it should work as-is.
+
+::
+
+ $ cd devstack
+ $ cp ../neutron/devstack/ovn.conf.sample local.conf
+
+5. Run DevStack.
+
+This is going to take a while. It installs a bunch of packages, clones a bunch
+of git repos, and installs everything from these git repos.
+
+::
+
+ $ ./stack.sh
+
+Once DevStack completes successfully, you should see output that looks
+something like this::
+
+ This is your host IP address: 172.16.189.6
+ This is your host IPv6 address: ::1
+ Horizon is now available at http://172.16.189.6/dashboard
+ Keystone is serving at http://172.16.189.6/identity/
+ The default users are: admin and demo
+ The password: password
+ 2017-03-09 15:10:54.117 | stack.sh completed in 2110 seconds.
+
+Environment Variables
+---------------------
+
+Once DevStack finishes successfully, we're ready to start interacting with
+OpenStack APIs. OpenStack provides a set of command line tools for interacting
+with these APIs. DevStack provides a file you can source to set up the right
+environment variables to make the OpenStack command line tools work.
+
+::
+
+ $ . openrc
+
+If you're curious which environment variables are set, they generally start
+with an ``OS_`` prefix::
+
+ $ env | grep OS
+ OS_REGION_NAME=RegionOne
+ OS_IDENTITY_API_VERSION=2.0
+ OS_PASSWORD=password
+ OS_AUTH_URL=http://192.168.122.8:5000/v2.0
+ OS_USERNAME=demo
+ OS_TENANT_NAME=demo
+ OS_VOLUME_API_VERSION=2
+ OS_CACERT=/opt/stack/data/CA/int-ca/ca-chain.pem
+ OS_NO_CACHE=1
+
+Default Network Configuration
+-----------------------------
+
+By default, DevStack creates networks called ``private`` and ``public``.
+Run the following command to see the existing networks::
+
+ $ openstack network list
+ +--------------------------------------+---------+----------------------------------------------------------------------------+
+ | ID | Name | Subnets |
+ +--------------------------------------+---------+----------------------------------------------------------------------------+
+ | 40080dad-0064-480a-b1b0-592ae51c1471 | private | 5ff81545-7939-4ae0-8365-1658d45fa85c, da34f952-3bfc-45bb-b062-d2d973c1a751 |
+ | 7ec986dd-aae4-40b5-86cf-8668feeeab67 | public | 60d0c146-a29b-4cd3-bd90-3745603b1a4b, f010c309-09be-4af2-80d6-e6af9c78bae7 |
+ +--------------------------------------+---------+----------------------------------------------------------------------------+
+
+A Neutron network is implemented as an OVN logical switch. The ovn driver
+creates logical switches with a name in the format ``neutron-<network-UUID>``.
+We can use ``ovn-nbctl`` to list the configured logical switches and see that
+their names correlate with the output from ``openstack network list``::
+
+ $ ovn-nbctl ls-list
+ 71206f5c-b0e6-49ce-b572-eb2e964b2c4e (neutron-40080dad-0064-480a-b1b0-592ae51c1471)
+ 8d8270e7-fd51-416f-ae85-16565200b8a4 (neutron-7ec986dd-aae4-40b5-86cf-8668feeeab67)
+
+ $ ovn-nbctl get Logical_Switch neutron-40080dad-0064-480a-b1b0-592ae51c1471 external_ids
+ {"neutron:network_name"=private}
+
+Booting VMs
+-----------
+
+In this section we'll go through the steps to create two VMs that have a
+virtual NIC attached to the ``private`` Neutron network.
+
+DevStack uses libvirt as the Nova backend by default. If KVM is available, it
+will be used. Otherwise, it will just run qemu emulated guests. This is
+perfectly fine for our testing, as we only need these VMs to be able to send
+and receive a small amount of traffic so performance is not very important.
+
+1. Get the Network UUID.
+
+Start by getting the UUID for the ``private`` network from the output of
+``openstack network list`` from earlier and save it off::
+
+ $ PRIVATE_NET_ID=$(openstack network show private -c id -f value)
+
+2. Create an SSH keypair.
+
+Next create an SSH keypair in Nova. Later, when we boot a VM, we'll ask that
+the public key be put in the VM so we can SSH into it.
+
+::
+
+ $ openstack keypair create demo > id_rsa_demo
+ $ chmod 600 id_rsa_demo
+
+3. Choose a flavor.
+
+We need minimal resources for these test VMs, so the ``m1.nano`` flavor is
+sufficient.
+
+::
+
+ $ openstack flavor list
+ +----+-----------+-------+------+-----------+-------+-----------+
+ | ID | Name | RAM | Disk | Ephemeral | VCPUs | Is Public |
+ +----+-----------+-------+------+-----------+-------+-----------+
+ | 1 | m1.tiny | 512 | 1 | 0 | 1 | True |
+ | 2 | m1.small | 2048 | 20 | 0 | 1 | True |
+ | 3 | m1.medium | 4096 | 40 | 0 | 2 | True |
+ | 4 | m1.large | 8192 | 80 | 0 | 4 | True |
+ | 42 | m1.nano | 64 | 0 | 0 | 1 | True |
+ | 5 | m1.xlarge | 16384 | 160 | 0 | 8 | True |
+ | 84 | m1.micro | 128 | 0 | 0 | 1 | True |
+ | c1 | cirros256 | 256 | 0 | 0 | 1 | True |
+ | d1 | ds512M | 512 | 5 | 0 | 1 | True |
+ | d2 | ds1G | 1024 | 10 | 0 | 1 | True |
+ | d3 | ds2G | 2048 | 10 | 0 | 2 | True |
+ | d4 | ds4G | 4096 | 20 | 0 | 4 | True |
+ +----+-----------+-------+------+-----------+-------+-----------+
+
+ $ FLAVOR_ID=$(openstack flavor show m1.nano -c id -f value)
+
+4. Choose an image.
+
+DevStack imports the CirrOS image by default, which is perfect for our testing.
+It's a very small test image.
+
+::
+
+ $ openstack image list
+ +--------------------------------------+--------------------------+--------+
+ | ID | Name | Status |
+ +--------------------------------------+--------------------------+--------+
+ | 849a8db2-3754-4cf6-9271-491fa4ff7195 | cirros-0.3.5-x86_64-disk | active |
+ +--------------------------------------+--------------------------+--------+
+
+ $ IMAGE_ID=$(openstack image list -c ID -f value)
+
+5. Set up security rules so that we can access the VMs we will boot up next.
+
+By default, DevStack does not allow users to access VMs. To enable access, we
+will need to add rules. We will allow both ICMP and SSH.
+
+::
+
+ $ openstack security group rule create --ingress --ethertype IPv4 --dst-port 22 --protocol tcp default
+ $ openstack security group rule create --ingress --ethertype IPv4 --protocol ICMP default
+ $ openstack security group rule list
+ +--------------------------------------+-------------+-----------+------------+--------------------------------------+--------------------------------------+
+ | ID | IP Protocol | IP Range | Port Range | Remote Security Group | Security Group |
+ +--------------------------------------+-------------+-----------+------------+--------------------------------------+--------------------------------------+
+ ...
+ | ade97198-db44-429e-9b30-24693d86d9b1 | tcp | 0.0.0.0/0 | 22:22 | None | a47b14da-5607-404a-8de4-3a0f1ad3649c |
+ | d0861a98-f90e-4d1a-abfb-827b416bc2f6 | icmp | 0.0.0.0/0 | | None | a47b14da-5607-404a-8de4-3a0f1ad3649c |
+ ...
+ +--------------------------------------+-------------+-----------+------------+--------------------------------------+--------------------------------------+
+
+6. Boot some VMs.
+
+Now we will boot two VMs. We'll name them ``test1`` and ``test2``.
+
+::
+
+ $ openstack server create --nic net-id=$PRIVATE_NET_ID --flavor $FLAVOR_ID --image $IMAGE_ID --key-name demo test1
+ +-----------------------------+-----------------------------------------------------------------+
+ | Field | Value |
+ +-----------------------------+-----------------------------------------------------------------+
+ | OS-DCF:diskConfig | MANUAL |
+ | OS-EXT-AZ:availability_zone | |
+ | OS-EXT-STS:power_state | NOSTATE |
+ | OS-EXT-STS:task_state | scheduling |
+ | OS-EXT-STS:vm_state | building |
+ | OS-SRV-USG:launched_at | None |
+ | OS-SRV-USG:terminated_at | None |
+ | accessIPv4 | |
+ | accessIPv6 | |
+ | addresses | |
+ | adminPass | BzAWWA6byGP6 |
+ | config_drive | |
+ | created | 2017-03-09T16:56:08Z |
+ | flavor | m1.nano (42) |
+ | hostId | |
+ | id | d8b8084e-58ff-44f4-b029-a57e7ef6ba61 |
+ | image | cirros-0.3.5-x86_64-disk (849a8db2-3754-4cf6-9271-491fa4ff7195) |
+ | key_name | demo |
+ | name | test1 |
+ | progress | 0 |
+ | project_id | b6522570f7344c06b1f24303abf3c479 |
+ | properties | |
+ | security_groups | name='default' |
+ | status | BUILD |
+ | updated | 2017-03-09T16:56:08Z |
+ | user_id | c68f77f1d85e43eb9e5176380a68ac1f |
+ | volumes_attached | |
+ +-----------------------------+-----------------------------------------------------------------+
+
+ $ openstack server create --nic net-id=$PRIVATE_NET_ID --flavor $FLAVOR_ID --image $IMAGE_ID --key-name demo test2
+ +-----------------------------+-----------------------------------------------------------------+
+ | Field | Value |
+ +-----------------------------+-----------------------------------------------------------------+
+ | OS-DCF:diskConfig | MANUAL |
+ | OS-EXT-AZ:availability_zone | |
+ | OS-EXT-STS:power_state | NOSTATE |
+ | OS-EXT-STS:task_state | scheduling |
+ | OS-EXT-STS:vm_state | building |
+ | OS-SRV-USG:launched_at | None |
+ | OS-SRV-USG:terminated_at | None |
+ | accessIPv4 | |
+ | accessIPv6 | |
+ | addresses | |
+ | adminPass | YB8dmt5v88JV |
+ | config_drive | |
+ | created | 2017-03-09T16:56:50Z |
+ | flavor | m1.nano (42) |
+ | hostId | |
+ | id | 170d4f37-9299-4a08-b48b-2b90fce8e09b |
+ | image | cirros-0.3.5-x86_64-disk (849a8db2-3754-4cf6-9271-491fa4ff7195) |
+ | key_name | demo |
+ | name | test2 |
+ | progress | 0 |
+ | project_id | b6522570f7344c06b1f24303abf3c479 |
+ | properties | |
+ | security_groups | name='default' |
+ | status | BUILD |
+ | updated | 2017-03-09T16:56:51Z |
+ | user_id | c68f77f1d85e43eb9e5176380a68ac1f |
+ | volumes_attached | |
+ +-----------------------------+-----------------------------------------------------------------+
+
+Once both VMs have been started, they will have a status of ``ACTIVE``::
+
+ $ openstack server list
+ +--------------------------------------+-------+--------+---------------------------------------------------------+--------------------------+
+ | ID | Name | Status | Networks | Image Name |
+ +--------------------------------------+-------+--------+---------------------------------------------------------+--------------------------+
+ | 170d4f37-9299-4a08-b48b-2b90fce8e09b | test2 | ACTIVE | private=fd5d:9d1b:457c:0:f816:3eff:fe24:49df, 10.0.0.3 | cirros-0.3.5-x86_64-disk |
+ | d8b8084e-58ff-44f4-b029-a57e7ef6ba61 | test1 | ACTIVE | private=fd5d:9d1b:457c:0:f816:3eff:fe3f:953d, 10.0.0.10 | cirros-0.3.5-x86_64-disk |
+ +--------------------------------------+-------+--------+---------------------------------------------------------+--------------------------+
+
+Our two VMs have addresses of ``10.0.0.3`` and ``10.0.0.10``. If we list
+Neutron ports, there are two new ports with these addresses associated
+with them::
+
+ $ openstack port list
+ +--------------------------------------+------+-------------------+-----------------------------------------------------------------------------------------------------+--------+
+ | ID | Name | MAC Address | Fixed IP Addresses | Status |
+ +--------------------------------------+------+-------------------+-----------------------------------------------------------------------------------------------------+--------+
+ ...
+ | 97c970b0-485d-47ec-868d-783c2f7acde3 | | fa:16:3e:3f:95:3d | ip_address='10.0.0.10', subnet_id='da34f952-3bfc-45bb-b062-d2d973c1a751' | ACTIVE |
+ | | | | ip_address='fd5d:9d1b:457c:0:f816:3eff:fe3f:953d', subnet_id='5ff81545-7939-4ae0-8365-1658d45fa85c' | |
+ | e003044d-334a-4de3-96d9-35b2d2280454 | | fa:16:3e:24:49:df | ip_address='10.0.0.3', subnet_id='da34f952-3bfc-45bb-b062-d2d973c1a751' | ACTIVE |
+ | | | | ip_address='fd5d:9d1b:457c:0:f816:3eff:fe24:49df', subnet_id='5ff81545-7939-4ae0-8365-1658d45fa85c' | |
+ ...
+ +--------------------------------------+------+-------------------+-----------------------------------------------------------------------------------------------------+--------+
+
+ $ TEST1_PORT_ID=97c970b0-485d-47ec-868d-783c2f7acde3
+ $ TEST2_PORT_ID=e003044d-334a-4de3-96d9-35b2d2280454
+
+Now we can look at OVN using ``ovn-nbctl`` to see the logical switch ports
+that were created for these two Neutron ports. The first part of the output
+is the OVN logical switch port UUID. The second part in parentheses is the
+logical switch port name. Neutron sets the logical switch port name equal to
+the Neutron port ID.
+
+::
+
+ $ ovn-nbctl lsp-list neutron-$PRIVATE_NET_ID
+ ...
+ fde1744b-e03b-46b7-b181-abddcbe60bf2 (97c970b0-485d-47ec-868d-783c2f7acde3)
+ 7ce284a8-a48a-42f5-bf84-b2bca62cd0fe (e003044d-334a-4de3-96d9-35b2d2280454)
+ ...
+
+
+These two ports correspond to the two VMs we created.
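+
+Because Neutron sets the logical switch port name to the Neutron port ID,
+the two listings can be correlated mechanically. A minimal shell sketch,
+reusing the UUIDs from the example output above:
+
+```shell
+# The name in parentheses in the lsp-list output is the Neutron port ID,
+# so a simple grep matches a Neutron port to its logical switch port.
+# UUIDs are copied from the example output above.
+TEST1_PORT_ID=97c970b0-485d-47ec-868d-783c2f7acde3
+lsp_list='fde1744b-e03b-46b7-b181-abddcbe60bf2 (97c970b0-485d-47ec-868d-783c2f7acde3)
+7ce284a8-a48a-42f5-bf84-b2bca62cd0fe (e003044d-334a-4de3-96d9-35b2d2280454)'
+echo "$lsp_list" | grep "$TEST1_PORT_ID"
+```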
+
+VM Connectivity
+---------------
+
+We can connect to our VMs by associating a floating IP address from the public
+network.
+
+::
+
+ $ openstack floating ip create --port $TEST1_PORT_ID public
+ +---------------------+--------------------------------------+
+ | Field | Value |
+ +---------------------+--------------------------------------+
+ | created_at | 2017-03-09T18:58:12Z |
+ | description | |
+ | fixed_ip_address | 10.0.0.10 |
+ | floating_ip_address | 172.24.4.8 |
+ | floating_network_id | 7ec986dd-aae4-40b5-86cf-8668feeeab67 |
+ | id | 24ff0799-5a72-4a5b-abc0-58b301c9aee5 |
+ | name | None |
+ | port_id | 97c970b0-485d-47ec-868d-783c2f7acde3 |
+ | project_id | b6522570f7344c06b1f24303abf3c479 |
+ | revision_number | 1 |
+ | router_id | ee51adeb-0dd8-4da0-ab6f-7ce60e00e7b0 |
+ | status | DOWN |
+ | updated_at | 2017-03-09T18:58:12Z |
+ +---------------------+--------------------------------------+
+
+DevStack does not wire up the public network by default, so we must do
+that before connecting to this floating IP address.
+
+::
+
+ $ sudo ip link set br-ex up
+ $ sudo ip route add 172.24.4.0/24 dev br-ex
+ $ sudo ip addr add 172.24.4.1/24 dev br-ex
+
+Now you should be able to connect to the VM via its floating IP address.
+First, ping the address.
+
+::
+
+ $ ping -c 1 172.24.4.8
+ PING 172.24.4.8 (172.24.4.8) 56(84) bytes of data.
+ 64 bytes from 172.24.4.8: icmp_seq=1 ttl=63 time=0.823 ms
+
+ --- 172.24.4.8 ping statistics ---
+ 1 packets transmitted, 1 received, 0% packet loss, time 0ms
+ rtt min/avg/max/mdev = 0.823/0.823/0.823/0.000 ms
+
+Now SSH to the VM::
+
+ $ ssh -i id_rsa_demo cirros@172.24.4.8 hostname
+ test1
+
+Adding Another Compute Node
+---------------------------
+
+After completing the earlier instructions for setting up DevStack, you can use
+a second VM to emulate an additional compute node. This is important for OVN
+testing as it exercises the tunnels created by OVN between the hypervisors.
+
+Just as before, create a throwaway VM but make sure that this VM has a
+different host name. If both VMs share a host name, Nova will treat them as a
+single compute host, and only one hypervisor will appear when you query the
+hypervisor list later.
+Once the VM is set up, create the ``stack`` user::
+
+ $ git clone https://opendev.org/openstack/devstack.git
+ $ sudo ./devstack/tools/create-stack-user.sh
+
+Switch to the ``stack`` user and clone DevStack and neutron::
+
+ $ sudo su - stack
+ $ git clone https://opendev.org/openstack/devstack.git
+ $ git clone https://opendev.org/openstack/neutron.git
+
+Neutron comes with another sample configuration file that can be used
+for this::
+
+ $ cd devstack
+ $ cp ../neutron/devstack/ovn-computenode.conf.sample local.conf
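+
+Before running DevStack on this node, edit ``local.conf``. The two key
+settings are plain shell variable assignments; a hedged sketch with
+placeholder addresses (substitute your own):
+
+```shell
+# Hypothetical local.conf fragment for the second node; replace the
+# addresses with the real ones for your environment.
+SERVICE_HOST=172.16.189.6   # IP address of the main DevStack host
+HOST_IP=172.16.189.30       # IP address of this new compute host
+```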
+
+You must set ``SERVICE_HOST`` in ``local.conf`` to the IP address of the main
+DevStack host. You must also set ``HOST_IP`` to the IP address of this new
+host. See the text in the sample configuration file for more
+information. Once that is complete, run DevStack::
+
+ $ cd devstack
+ $ ./stack.sh
+
+This should complete in less time than before, as it's only running a single
+OpenStack service (nova-compute) along with OVN (ovn-controller, ovs-vswitchd,
+ovsdb-server). The final output will look something like this::
+
+
+ This is your host IP address: 172.16.189.30
+ This is your host IPv6 address: ::1
+ 2017-03-09 18:39:27.058 | stack.sh completed in 1149 seconds.
+
+Now go back to your main DevStack host. You can use admin credentials to
+verify that the additional hypervisor has been added to the deployment::
+
+ $ cd devstack
+ $ . openrc admin
+
+ $ openstack hypervisor list
+ +----+------------------------+-----------------+---------------+-------+
+ | ID | Hypervisor Hostname | Hypervisor Type | Host IP | State |
+ +----+------------------------+-----------------+---------------+-------+
+ | 1 | centos7-ovn-devstack | QEMU | 172.16.189.6 | up |
+ | 2 | centos7-ovn-devstack-2 | QEMU | 172.16.189.30 | up |
+ +----+------------------------+-----------------+---------------+-------+
+
+You can also look at OVN and OVS to see that the second host has shown up. For
+example, there will be a second entry in the Chassis table of the
+OVN_Southbound database. You can use the ``ovn-sbctl`` utility to list
+chassis, their configuration, and the ports bound to each of them::
+
+ $ ovn-sbctl show
+
+ Chassis "ddc8991a-d838-4758-8d15-71032da9d062"
+ hostname: "centos7-ovn-devstack"
+ Encap vxlan
+ ip: "172.16.189.6"
+ options: {csum="true"}
+ Encap geneve
+ ip: "172.16.189.6"
+ options: {csum="true"}
+ Port_Binding "97c970b0-485d-47ec-868d-783c2f7acde3"
+ Port_Binding "e003044d-334a-4de3-96d9-35b2d2280454"
+ Port_Binding "cr-lrp-08d1f28d-cc39-4397-b12b-7124080899a1"
+ Chassis "b194d07e-0733-4405-b795-63b172b722fd"
+ hostname: "centos7-ovn-devstack-2.os1.phx2.redhat.com"
+ Encap geneve
+ ip: "172.16.189.30"
+ options: {csum="true"}
+ Encap vxlan
+ ip: "172.16.189.30"
+ options: {csum="true"}
+
+You can also see a tunnel created to the other compute node::
+
+ $ ovs-vsctl show
+ ...
+ Bridge br-int
+ fail_mode: secure
+ ...
+ Port "ovn-b194d0-0"
+ Interface "ovn-b194d0-0"
+ type: geneve
+ options: {csum="true", key=flow, remote_ip="172.16.189.30"}
+ ...
+ ...
+
+Provider Networks
+-----------------
+
+Neutron has a "provider networks" API extension that lets you specify
+some additional attributes on a network. These attributes let you
+map a Neutron network to a physical network in your environment.
+The OVN ML2 driver supports this API extension; it currently
+supports "flat" and "vlan" networks.
+
+Here is how you can test it:
+
+First you must create an OVS bridge that provides connectivity to the
+provider network on every host running ovn-controller. For trivial
+testing this could just be a dummy bridge. In a real environment, you
+would want to add a local network interface to the bridge, as well.
+
+::
+
+ $ ovs-vsctl add-br br-provider
+
+ovn-controller on each host must be configured with a mapping between
+a network name and the bridge that provides connectivity to that network.
+In this case we'll create a mapping from the network name ``providernet``
+to the bridge ``br-provider``.
+
+::
+
+ $ ovs-vsctl set open . \
+ external-ids:ovn-bridge-mappings=providernet:br-provider
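+
+``ovn-bridge-mappings`` accepts a comma-separated list of
+``network:bridge`` pairs, so one chassis can provide connectivity to
+several physical networks. A small sketch of the syntax (the second
+pair is hypothetical):
+
+```shell
+# ovn-bridge-mappings is a comma-separated list of network:bridge pairs;
+# "providernet:br-provider" is from the example above, the second pair
+# is a hypothetical extra physical network.
+mappings="providernet:br-provider,physnet2:br-ex"
+# Show one mapping per line for readability.
+echo "$mappings" | tr ',' '\n'
+```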
+
+If you want to enable this chassis to host a gateway router for
+external connectivity, set ``ovn-cms-options`` to ``enable-chassis-as-gw``.
+
+::
+
+ $ ovs-vsctl set open . \
+ external-ids:ovn-cms-options="enable-chassis-as-gw"
+
+Now create a Neutron provider network.
+
+::
+
+ $ openstack network create provider --share \
+ --provider-physical-network providernet \
+ --provider-network-type flat
+
+Alternatively, you can define connectivity to a VLAN instead of a flat network:
+
+::
+
+    $ openstack network create provider-101 --share \
+      --provider-physical-network providernet \
+      --provider-network-type vlan \
+      --provider-segment 101
+
+Observe that the OVN ML2 driver created a special logical switch port of type
+localnet on the logical switch to model the connection to the physical network.
+
+::
+
+ $ ovn-nbctl show
+ ...
+ switch 5bbccbbd-f5ca-411b-bad9-01095d6f1316 (neutron-729dbbee-db84-4a3d-afc3-82c0b3701074)
+ port provnet-729dbbee-db84-4a3d-afc3-82c0b3701074
+ addresses: ["unknown"]
+ ...
+
+ $ ovn-nbctl lsp-get-type provnet-729dbbee-db84-4a3d-afc3-82c0b3701074
+ localnet
+
+ $ ovn-nbctl lsp-get-options provnet-729dbbee-db84-4a3d-afc3-82c0b3701074
+ network_name=providernet
+
+If VLAN is used, there will be a VLAN tag shown on the localnet port as well.
+
+Finally, create a Neutron port on the provider network.
+
+::
+
+ $ openstack port create --network provider myport
+
+or if you followed the VLAN example, it would be:
+
+::
+
+ $ openstack port create --network provider-101 myport
+
+Skydive
+-------
+
+`Skydive <https://github.com/skydive-project/skydive>`_ is an open source
+real-time network topology and protocols analyzer. It aims to provide a
+comprehensive way of understanding what is happening in the network
+infrastructure. Skydive works by using agents to collect host-local
+information and send it to a central analyzer for further analysis. It
+uses Elasticsearch to store the data.
+
+To enable Skydive support with OVN and devstack, enable it on the control
+and compute nodes.
+
+On the control node, enable it as follows:
+
+::
+
+ enable_plugin skydive https://github.com/skydive-project/skydive.git
+ enable_service skydive-analyzer
+
+On the compute nodes, enable it as follows:
+
+::
+
+ enable_plugin skydive https://github.com/skydive-project/skydive.git
+ enable_service skydive-agent
+
+Troubleshooting
+---------------
+
+If you run into any problems, take a look at our :doc:`/admin/ovn/troubleshooting`
+page.
+
+Additional Resources
+--------------------
+
+See the documentation and other references linked
+from the :doc:`/admin/ovn/ovn` page.
diff --git a/doc/source/index.rst b/doc/source/index.rst
index 74bb20d9094..a6a4c20f80f 100644
--- a/doc/source/index.rst
+++ b/doc/source/index.rst
@@ -34,7 +34,7 @@ Networking Guide
----------------
.. toctree::
- :maxdepth: 2
+ :maxdepth: 3
admin/index
@@ -53,6 +53,14 @@ CLI Reference
cli/index
+OVN Driver
+----------
+
+.. toctree::
+ :maxdepth: 2
+
+ ovn/index
+
Neutron Feature Classification
------------------------------
diff --git a/doc/source/install/index.rst b/doc/source/install/index.rst
index c9e823fa362..67168ccec53 100644
--- a/doc/source/install/index.rst
+++ b/doc/source/install/index.rst
@@ -13,6 +13,7 @@ Networking service Installation Guide
install-obs.rst
install-rdo.rst
install-ubuntu.rst
+ ovn/index.rst
This chapter explains how to install and configure the Networking
service (neutron) using the :ref:`provider networks ` or
diff --git a/doc/source/install/ovn/figures/ovn-initial-resources.png b/doc/source/install/ovn/figures/ovn-initial-resources.png
new file mode 100644
index 00000000000..51a169f3aec
Binary files /dev/null and b/doc/source/install/ovn/figures/ovn-initial-resources.png differ
diff --git a/doc/source/install/ovn/figures/ovn-initial-resources.svg b/doc/source/install/ovn/figures/ovn-initial-resources.svg
new file mode 100644
index 00000000000..6e55a65a83f
--- /dev/null
+++ b/doc/source/install/ovn/figures/ovn-initial-resources.svg
@@ -0,0 +1,1596 @@
+
+
+
+
diff --git a/doc/source/install/ovn/figures/tripleo-ovn-arch.png b/doc/source/install/ovn/figures/tripleo-ovn-arch.png
new file mode 100644
index 00000000000..bf9daf013a8
Binary files /dev/null and b/doc/source/install/ovn/figures/tripleo-ovn-arch.png differ
diff --git a/doc/source/install/ovn/figures/tripleo-ovn-arch.svg b/doc/source/install/ovn/figures/tripleo-ovn-arch.svg
new file mode 100644
index 00000000000..243c97a1560
--- /dev/null
+++ b/doc/source/install/ovn/figures/tripleo-ovn-arch.svg
@@ -0,0 +1,3175 @@
+
+
+
+
diff --git a/doc/source/install/ovn/index.rst b/doc/source/install/ovn/index.rst
new file mode 100644
index 00000000000..9bc9763064e
--- /dev/null
+++ b/doc/source/install/ovn/index.rst
@@ -0,0 +1,11 @@
+..
+
+=========================
+OVN Install Documentation
+=========================
+
+.. toctree::
+ :maxdepth: 1
+
+ manual_install.rst
+ tripleo_install.rst
diff --git a/doc/source/install/ovn/manual_install.rst b/doc/source/install/ovn/manual_install.rst
new file mode 100644
index 00000000000..e3e0c32dbde
--- /dev/null
+++ b/doc/source/install/ovn/manual_install.rst
@@ -0,0 +1,347 @@
+.. _manual_install:
+
+==============================
+Manual Install & Configuration
+==============================
+
+This document discusses what is required for a manual installation of OVN,
+or for its integration into a production OpenStack deployment tool, on
+conventional architectures that include the following types of nodes:
+
+* Controller - Runs OpenStack control plane services such as REST APIs
+ and databases.
+
+* Network - Runs the layer-2, layer-3 (routing), DHCP, and metadata agents
+  for the Networking service. Some agents are optional. Usually provides
+ connectivity between provider (public) and project (private) networks
+ via NAT and floating IP addresses.
+
+ .. note::
+
+ Some tools deploy these services on controller nodes.
+
+* Compute - Runs the hypervisor and layer-2 agent for the Networking
+ service.
+
+Packaging
+---------
+
+Open vSwitch (OVS) includes OVN beginning with version 2.5 and considers
+it experimental. The Networking service integration for OVN is now one of
+the in-tree Neutron drivers, so it is delivered with the ``neutron`` package;
+older versions of this integration were delivered as a separate package,
+typically ``networking-ovn``.
+
+Building OVS from source automatically installs OVN. For deployment tools
+using distribution packages, the ``openvswitch-ovn`` package for RHEL/CentOS
+and compatible distributions automatically installs ``openvswitch`` as a
+dependency. Ubuntu/Debian includes ``ovn-central``, ``ovn-host``,
+``ovn-docker``, and ``ovn-common`` packages that pull in the appropriate Open
+vSwitch dependencies as needed.
+
+A ``python-networking-ovn`` RPM may be obtained for Fedora or CentOS from
+the RDO project. A package based on the ``master`` branch of
+``networking-ovn`` can be found at https://trunk.rdoproject.org/.
+
+Fedora and CentOS RPM builds of OVS and OVN from the ``master`` branch of
+``ovs`` can be found in this COPR repository:
+https://copr.fedorainfracloud.org/coprs/leifmadsen/ovs-master/.
+
+Controller nodes
+----------------
+
+Each controller node runs the OVS service (including dependent services such
+as ``ovsdb-server``) and the ``ovn-northd`` service. However, only a single
+instance of the ``ovsdb-server`` and ``ovn-northd`` services can operate in
+a deployment. Nevertheless, deployment tools can implement active/passive
+high-availability using a management tool that monitors service health
+and automatically starts these services on another node after failure of the
+primary node. See the :doc:`/ovn/faq/index` for more information.
+
+#. Install the ``openvswitch-ovn`` and ``networking-ovn`` packages.
+
+#. Start the OVS service. The central OVS service starts the ``ovsdb-server``
+ service that manages OVN databases.
+
+ Using the *systemd* unit:
+
+ .. code-block:: console
+
+ # systemctl start openvswitch
+
+ Using the ``ovs-ctl`` script:
+
+ .. code-block:: console
+
+ # /usr/share/openvswitch/scripts/ovs-ctl start --system-id="random"
+
+#. Configure the ``ovsdb-server`` component. By default, the ``ovsdb-server``
+ service only permits local access to databases via Unix socket. However,
+ OVN services on compute nodes require access to these databases.
+
+ * Permit remote database access.
+
+ .. code-block:: console
+
+ # ovn-nbctl set-connection ptcp:6641:0.0.0.0 -- \
+ set connection . inactivity_probe=60000
+ # ovn-sbctl set-connection ptcp:6642:0.0.0.0 -- \
+ set connection . inactivity_probe=60000
+ # if using the VTEP functionality:
+ # ovs-appctl -t ovsdb-server ovsdb-server/add-remote ptcp:6640:0.0.0.0
+
+ Replace ``0.0.0.0`` with the IP address of the management network
+ interface on the controller node to avoid listening on all interfaces.
+
+ .. note::
+
+        Permit remote access to the following TCP ports: 6640 (OVS), for
+        VTEP gateways (if you use VTEPs); 6642 (southbound DB), for hosts
+        running ``neutron-server``, gateway nodes running ``ovn-controller``,
+        and compute node services such as ``ovn-controller`` and
+        ``ovn-metadata-agent``; 6641 (northbound DB), for hosts running
+        ``neutron-server``.
+
+#. Start the ``ovn-northd`` service.
+
+ Using the *systemd* unit:
+
+ .. code-block:: console
+
+ # systemctl start ovn-northd
+
+ Using the ``ovn-ctl`` script:
+
+ .. code-block:: console
+
+ # /usr/share/openvswitch/scripts/ovn-ctl start_northd
+
+ Options for *start_northd*:
+
+ .. code-block:: console
+
+ # /usr/share/openvswitch/scripts/ovn-ctl start_northd --help
+ # ...
+ # DB_NB_SOCK="/usr/local/etc/openvswitch/nb_db.sock"
+ # DB_NB_PID="/usr/local/etc/openvswitch/ovnnb_db.pid"
+      # DB_SB_SOCK="/usr/local/etc/openvswitch/sb_db.sock"
+ # DB_SB_PID="/usr/local/etc/openvswitch/ovnsb_db.pid"
+ # ...
+
+#. Configure the Networking server component. The Networking service
+ implements OVN as an ML2 driver. Edit the ``/etc/neutron/neutron.conf``
+ file:
+
+ * Enable the ML2 core plug-in.
+
+ .. code-block:: ini
+
+ [DEFAULT]
+ ...
+ core_plugin = neutron.plugins.ml2.plugin.Ml2Plugin
+
+ * Enable the OVN layer-3 service.
+
+ .. code-block:: ini
+
+ [DEFAULT]
+ ...
+ service_plugins = networking_ovn.l3.l3_ovn.OVNL3RouterPlugin
+
+#. Configure the ML2 plug-in. Edit the
+ ``/etc/neutron/plugins/ml2/ml2_conf.ini`` file:
+
+ * Configure the OVN mechanism driver, network type drivers, self-service
+ (tenant) network types, and enable the port security extension.
+
+ .. code-block:: ini
+
+ [ml2]
+ ...
+ mechanism_drivers = ovn
+ type_drivers = local,flat,vlan,geneve
+ tenant_network_types = geneve
+ extension_drivers = port_security
+ overlay_ip_version = 4
+
+ .. note::
+
+ To enable VLAN self-service networks, make sure that OVN
+ version 2.11 (or higher) is used, then add ``vlan`` to the
+ ``tenant_network_types`` option. The first network type in the
+ list becomes the default self-service network type.
+
+ To use IPv6 for all overlay (tunnel) network endpoints,
+ set the ``overlay_ip_version`` option to ``6``.
+
+ * Configure the Geneve ID range and maximum header size. The IP version
+ overhead (20 bytes for IPv4 (default) or 40 bytes for IPv6) is added
+ to the maximum header size based on the ML2 ``overlay_ip_version``
+ option.
+
+ .. code-block:: ini
+
+ [ml2_type_geneve]
+ ...
+ vni_ranges = 1:65536
+ max_header_size = 38
+
+ .. note::
+
+ The Networking service uses the ``vni_ranges`` option to allocate
+ network segments. However, OVN ignores the actual values. Thus, the ID
+ range only determines the quantity of Geneve networks in the
+ environment. For example, a range of ``5001:6000`` defines a maximum
+ of 1000 Geneve networks.
+
+ * Optionally, enable support for VLAN provider and self-service
+ networks on one or more physical networks. If you specify only
+ the physical network, only administrative (privileged) users can
+ manage VLAN networks. Additionally specifying a VLAN ID range for
+ a physical network enables regular (non-privileged) users to
+ manage VLAN networks. The Networking service allocates the VLAN ID
+ for each self-service network using the VLAN ID range for the
+ physical network.
+
+ .. code-block:: ini
+
+ [ml2_type_vlan]
+ ...
+ network_vlan_ranges = PHYSICAL_NETWORK:MIN_VLAN_ID:MAX_VLAN_ID
+
+ Replace ``PHYSICAL_NETWORK`` with the physical network name and
+ optionally define the minimum and maximum VLAN IDs. Use a comma
+ to separate each physical network.
+
+ For example, to enable support for administrative VLAN networks
+ on the ``physnet1`` network and self-service VLAN networks on
+ the ``physnet2`` network using VLAN IDs 1001 to 2000:
+
+ .. code-block:: ini
+
+ network_vlan_ranges = physnet1,physnet2:1001:2000
+
+ * Enable security groups.
+
+ .. code-block:: ini
+
+ [securitygroup]
+ ...
+ enable_security_group = true
+
+ .. note::
+
+ The ``firewall_driver`` option under ``[securitygroup]`` is ignored
+ since the OVN ML2 driver itself handles security groups.
+
+ * Configure OVS database access and L3 scheduler
+
+ .. code-block:: ini
+
+ [ovn]
+ ...
+ ovn_nb_connection = tcp:IP_ADDRESS:6641
+ ovn_sb_connection = tcp:IP_ADDRESS:6642
+ ovn_l3_scheduler = OVN_L3_SCHEDULER
+
+ .. note::
+
+ Replace ``IP_ADDRESS`` with the IP address of the controller node that
+ runs the ``ovsdb-server`` service. Replace ``OVN_L3_SCHEDULER`` with
+ ``leastloaded`` if you want the scheduler to select a compute node with
+ the least number of gateway ports or ``chance`` if you want the
+ scheduler to randomly select a compute node from the available list of
+ compute nodes.
+
+   * Set ``ovn-cms-options`` to ``enable-chassis-as-gw`` in the
+     ``external_ids`` column of the ``Open_vSwitch`` table. If this chassis
+     also has proper bridge mappings, it will be eligible to host gateway
+     routers.
+
+ .. code-block:: console
+
+ # ovs-vsctl set open . external-ids:ovn-cms-options=enable-chassis-as-gw
+
+#. Start the ``neutron-server`` service.
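+
+As a quick check of the ``vni_ranges`` note above, the number of Geneve
+networks a range permits is simply the size of the ID range:
+
+```shell
+# OVN ignores the actual ID values, so a vni_ranges entry only fixes the
+# quantity of Geneve networks. Using the range from the note above:
+range="5001:6000"
+min=${range%%:*}
+max=${range##*:}
+echo $(( max - min + 1 ))   # prints 1000
+```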
+
+Network nodes
+-------------
+
+Deployments using OVN native layer-3 and DHCP services do not require
+conventional network nodes because connectivity to external networks
+(including VTEP gateways) and routing occurs on compute nodes.
+
+Compute nodes
+-------------
+
+Each compute node runs the OVS and ``ovn-controller`` services. The
+``ovn-controller`` service replaces the conventional OVS layer-2 agent.
+
+#. Install the ``openvswitch-ovn`` and ``networking-ovn`` packages.
+
+#. Start the OVS service.
+
+ Using the *systemd* unit:
+
+ .. code-block:: console
+
+ # systemctl start openvswitch
+
+ Using the ``ovs-ctl`` script:
+
+ .. code-block:: console
+
+ # /usr/share/openvswitch/scripts/ovs-ctl start --system-id="random"
+
+#. Configure the OVS service.
+
+ * Use OVS databases on the controller node.
+
+ .. code-block:: console
+
+ # ovs-vsctl set open . external-ids:ovn-remote=tcp:IP_ADDRESS:6642
+
+ Replace ``IP_ADDRESS`` with the IP address of the controller node
+ that runs the ``ovsdb-server`` service.
+
+ * Enable one or more overlay network protocols. At a minimum, OVN requires
+ enabling the ``geneve`` protocol. Deployments using VTEP gateways should
+ also enable the ``vxlan`` protocol.
+
+ .. code-block:: console
+
+ # ovs-vsctl set open . external-ids:ovn-encap-type=geneve,vxlan
+
+ .. note::
+
+ Deployments without VTEP gateways can safely enable both protocols.
+
+ * Configure the overlay network local endpoint IP address.
+
+ .. code-block:: console
+
+ # ovs-vsctl set open . external-ids:ovn-encap-ip=IP_ADDRESS
+
+ Replace ``IP_ADDRESS`` with the IP address of the overlay network
+ interface on the compute node.
+
+#. Start the ``ovn-controller`` service.
+
+ Using the *systemd* unit:
+
+ .. code-block:: console
+
+ # systemctl start ovn-controller
+
+ Using the ``ovn-ctl`` script:
+
+ .. code-block:: console
+
+ # /usr/share/openvswitch/scripts/ovn-ctl start_controller
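+
+The three ``external-ids`` keys configured above can also be applied in a
+single ``ovs-vsctl`` call. A hedged sketch with placeholder addresses:
+
+```shell
+# Placeholder addresses; substitute your controller and compute node IPs.
+OVN_REMOTE="tcp:192.0.2.10:6642"   # controller running ovsdb-server
+ENCAP_TYPE="geneve,vxlan"
+ENCAP_IP="192.0.2.20"              # overlay interface on this node
+# With a live Open vSwitch, all three keys can be set at once:
+#   ovs-vsctl set open . \
+#     external-ids:ovn-remote="$OVN_REMOTE" \
+#     external-ids:ovn-encap-type="$ENCAP_TYPE" \
+#     external-ids:ovn-encap-ip="$ENCAP_IP"
+```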
+
+Verify operation
+----------------
+
+#. Each compute node should contain an ``ovn-controller`` instance.
+
+ .. code-block:: console
+
+ # ovn-sbctl show
+