Merge networking-ovn documentation into neutron

It also adds two samples of devstack's local.conf file
for deploying Neutron with the OVN mechanism driver.

PNG files had to be created from the existing SVG ones
in order to pass the PDF doc build.

Co-Authored-By: Aaron Rosen <aaronorosen@gmail.com>
Co-Authored-By: Akihiro Motoki <amotoki@gmail.com>
Co-Authored-By: Amitabha Biswas <abiswas@us.ibm.com>
Co-Authored-By: Andreas Jaeger <aj@suse.com>
Co-Authored-By: Anh Tran <anhtt@vn.fujitsu.com>
Co-Authored-By: Assaf Muller <amuller@redhat.com>
Co-Authored-By: Babu Shanmugam <bschanmu@redhat.com>
Co-Authored-By: Brian Haley <bhaley@redhat.com>
Co-Authored-By: Chandra S Vejendla <csvejend@us.ibm.com>
Co-Authored-By: Daniel Alvarez <dalvarez@redhat.com>
Co-Authored-By: Dong Jun <dongj@dtdream.com>
Co-Authored-By: Emilien Macchi <emilien@redhat.com>
Co-Authored-By: Flavio Fernandes <flavio@flaviof.com>
Co-Authored-By: Gal Sagie <gal.sagie@huawei.com>
Co-Authored-By: Gary Kotton <gkotton@vmware.com>
Co-Authored-By: Guoshuai Li <ligs@dtdream.com>
Co-Authored-By: Han Zhou <zhouhan@gmail.com>
Co-Authored-By: Hong Hui Xiao <xiaohhui@cn.ibm.com>
Co-Authored-By: Jakub Libosvar <libosvar@redhat.com>
Co-Authored-By: Jeff Feng <jianhua@us.ibm.com>
Co-Authored-By: Jenkins <jenkins@review.openstack.org>
Co-Authored-By: Jonathan Herlin <jonte@jherlin.se>
Co-Authored-By: Kyle Mestery <mestery@mestery.com>
Co-Authored-By: Le Hou <houl7@chinaunicom.cn>
Co-Authored-By: Lucas Alvares Gomes <lucasagomes@gmail.com>
Co-Authored-By: Matthew Kassawara <mkassawara@gmail.com>
Co-Authored-By: Miguel Angel Ajo <majopela@redhat.com>
Co-Authored-By: Murali Rangachari <muralirdev@gmail.com>
Co-Authored-By: Numan Siddique <nusiddiq@redhat.com>
Co-Authored-By: Reedip <rbanerje@redhat.com>
Co-Authored-By: Richard Theis <rtheis@us.ibm.com>
Co-Authored-By: Russell Bryant <rbryant@redhat.com>
Co-Authored-By: Ryan Moats <rmoats@us.ibm.com>
Co-Authored-By: Simon Pasquier <spasquier@mirantis.com>
Co-Authored-By: Terry Wilson <twilson@redhat.com>
Co-Authored-By: Tong Li <litong01@us.ibm.com>
Co-Authored-By: Yunxiang Tao <taoyunxiang@cmss.chinamobile.com>
Co-Authored-By: Yushiro FURUKAWA <y.furukawa_2@jp.fujitsu.com>
Co-Authored-By: chen-li <shchenli@cn.ibm.com>
Co-Authored-By: gong yong sheng <gong.yongsheng@99cloud.net>
Co-Authored-By: lidong <lidongbj@inspur.com>
Co-Authored-By: lzklibj <lzklibj@cn.ibm.com>
Co-Authored-By: melissaml <ma.lei@99cloud.net>
Co-Authored-By: pengyuesheng <pengyuesheng@gohighsec.com>
Co-Authored-By: reedip <rbanerje@redhat.com>
Co-Authored-By: venkata anil <anilvenkata@redhat.com>
Co-Authored-By: xurong00037997 <xu.rong@zte.com.cn>
Co-Authored-By: zhangdebo <zhangdebo@inspur.com>
Co-Authored-By: zhangyanxian <zhang.yanxian@zte.com.cn>
Co-Authored-By: zhangyanxian <zhangyanxianmail@163.com>

Change-Id: Ia121ec5146c1d35b3282e44fd1eb98932939ea8c
Partially-Implements: blueprint neutron-ovn-merge
Slawek Kaplonski 2020-01-08 17:10:46 +01:00 committed by Akihiro Motoki
parent afc788bcd2
commit cd66232c2b
74 changed files with 41191 additions and 3 deletions


@ -231,6 +231,16 @@ or adding subports to an existing trunk.
| tags           | []                                                                                              |
+----------------+-------------------------------------------------------------------------------------------------+

* When using the OVN driver, additional logical switch port information
  is available using the following commands:

  .. code-block:: console

     $ ovn-nbctl lsp-get-parent 61d8e620-fe3a-4d8f-b9e6-e1b0dea6d9e3
     73fb9d54-43a7-4bb1-a8dc-569e0e0a0a38

     $ ovn-nbctl lsp-get-tag 61d8e620-fe3a-4d8f-b9e6-e1b0dea6d9e3

Launch an instance on the trunk
-------------------------------


@ -19,4 +19,5 @@ manage OpenStack Networking (neutron).
   ops
   migration
   misc
   ovn/index
   archives/index


@ -0,0 +1,29 @@
.. _ovn_dpdk:

===================
DPDK Support in OVN
===================

Configuration Settings
----------------------

The following configuration parameter needs to be set in the Neutron ML2
plugin configuration file under the 'ovn' section to enable DPDK support.

**vhost_sock_dir**
   This is the directory path in which the vswitch daemon on each compute
   node creates the virtio sockets. Follow the instructions in
   INSTALL.DPDK.md in the Open vSwitch source tree to learn how to
   configure DPDK support in the vswitch daemons.
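
For illustration, a minimal sketch of the corresponding setting in the ML2
plugin configuration, assuming the vswitch daemons create their vhost-user
sockets under ``/var/run/openvswitch``:

.. code-block:: ini

   [ovn]
   # Directory where the vswitch daemons create the vhost-user (virtio)
   # sockets; must match the directory configured on the compute nodes.
   vhost_sock_dir = /var/run/openvswitch
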
Configuration Settings in compute hosts
---------------------------------------

Compute nodes configured with OVS DPDK should set the datapath_type to
"netdev" for the integration bridge (managed by OVN) and for all other
bridges connected to the integration bridge via patch ports. The following
command can be used to set the datapath_type:

.. code-block:: console

   $ sudo ovs-vsctl set Bridge br-int datapath_type=netdev
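
The same setting applies to any other bridge patched into the integration
bridge, for example a provider bridge (the name ``br-provider`` below is
only illustrative):

.. code-block:: console

   $ sudo ovs-vsctl set Bridge br-provider datapath_type=netdev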


@ -0,0 +1,102 @@
.. _features:

Features
========

Open Virtual Network (OVN) offers the following virtual network
services:

* Layer-2 (switching)

  Native implementation. Replaces the conventional Open vSwitch (OVS)
  agent.

* Layer-3 (routing)

  Native implementation that supports distributed routing. Replaces the
  conventional Neutron L3 agent. This includes transparent L3HA
  :doc:`routing` support, based on BFD monitoring integrated in core OVN.

* DHCP

  Native distributed implementation. Replaces the conventional Neutron DHCP
  agent. Note that the native implementation does not yet support DNS
  features.

* DPDK

  OVN and the OVN mechanism driver may be used with OVS using either the
  Linux kernel datapath or the DPDK datapath.

* Trunk driver

  Uses OVN's parent port and port tagging functionality to support the
  trunk service plugin. The 'trunk' service plugin must be enabled in the
  Neutron configuration to use this feature (see the example after this
  list).

* VLAN tenant networks

  The OVN driver supports VLAN tenant networks when used with OVN
  version 2.11 (or higher).

* DNS

  Native implementation. Since version 2.8, OVN contains a built-in
  DNS implementation.
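
As referenced in the trunk driver item above, a minimal sketch of enabling
the trunk service plugin, assuming the plugin list is kept in the
``service_plugins`` option of ``neutron.conf``:

.. code-block:: ini

   [DEFAULT]
   # Append 'trunk' to whatever service plugins are already configured.
   service_plugins = trunk
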
The following Neutron API extensions are supported with OVN:

+----------------------------------+---------------------------+
| Extension Name | Extension Alias |
+==================================+===========================+
| Allowed Address Pairs | allowed-address-pairs |
+----------------------------------+---------------------------+
| Auto Allocated Topology Services | auto-allocated-topology |
+----------------------------------+---------------------------+
| Availability Zone | availability_zone |
+----------------------------------+---------------------------+
| Default Subnetpools | default-subnetpools |
+----------------------------------+---------------------------+
| Multi Provider Network | multi-provider |
+----------------------------------+---------------------------+
| Network IP Availability | network-ip-availability |
+----------------------------------+---------------------------+
| Neutron external network | external-net |
+----------------------------------+---------------------------+
| Neutron Extra DHCP opts | extra_dhcp_opt |
+----------------------------------+---------------------------+
| Neutron Extra Route | extraroute |
+----------------------------------+---------------------------+
| Neutron L3 external gateway | ext-gw-mode |
+----------------------------------+---------------------------+
| Neutron L3 Router | router |
+----------------------------------+---------------------------+
| Network MTU | net-mtu |
+----------------------------------+---------------------------+
| Port Binding | binding |
+----------------------------------+---------------------------+
| Port Security | port-security |
+----------------------------------+---------------------------+
| Provider Network | provider |
+----------------------------------+---------------------------+
| Quality of Service | qos |
+----------------------------------+---------------------------+
| Quota management support | quotas |
+----------------------------------+---------------------------+
| RBAC Policies | rbac-policies |
+----------------------------------+---------------------------+
| Resource revision numbers | standard-attr-revisions |
+----------------------------------+---------------------------+
| security-group | security-group |
+----------------------------------+---------------------------+
| standard-attr-description | standard-attr-description |
+----------------------------------+---------------------------+
| Subnet Allocation | subnet_allocation |
+----------------------------------+---------------------------+
| Tag support | standard-attr-tag |
+----------------------------------+---------------------------+
| Time Stamp Fields | standard-attr-timestamp |
+----------------------------------+---------------------------+
| Domain Name System (DNS) | dns_integration |
+----------------------------------+---------------------------+
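
The set of extensions actually exposed by a running deployment can be
cross-checked from the client side, for example:

.. code-block:: console

   $ openstack extension list --network -c Name -c Alias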

(Binary PNG figures and their large SVG sources added; contents not shown by the diff viewer.)

@ -0,0 +1,14 @@
===============================
OVN Driver Administration Guide
===============================

.. toctree::
   :maxdepth: 1

   ovn
   features
   routing
   tutorial
   refarch/refarch
   dpdk
   troubleshooting


@ -0,0 +1,72 @@
.. _ovn_ovn:

===============
OVN information
===============

The original OVN project announcement can be found here:
* https://networkheresy.com/2015/01/13/ovn-bringing-native-virtual-networking-to-ovs/
The OVN architecture is described here:
* http://www.openvswitch.org/support/dist-docs/ovn-architecture.7.html
Here are two tutorials that help with learning different aspects of OVN:
* http://blog.spinhirne.com/p/blog-series.html#introToOVN
* http://docs.openvswitch.org/en/stable/tutorials/ovn-sandbox/
There is also an in-depth tutorial on using OVN with OpenStack:
* http://docs.openvswitch.org/en/stable/tutorials/ovn-openstack/
OVN DB schemas and other man pages:
* http://www.openvswitch.org/support/dist-docs/ovn-nb.5.html
* http://www.openvswitch.org/support/dist-docs/ovn-sb.5.html
* http://www.openvswitch.org/support/dist-docs/ovn-nbctl.8.html
* http://www.openvswitch.org/support/dist-docs/ovn-sbctl.8.html
* http://www.openvswitch.org/support/dist-docs/ovn-northd.8.html
* http://www.openvswitch.org/support/dist-docs/ovn-controller.8.html
* http://www.openvswitch.org/support/dist-docs/ovn-controller-vtep.8.html
or find a full list of OVS and OVN man pages here:
* http://docs.openvswitch.org/en/latest/ref/
The Open vSwitch web page includes a list of presentations, some of which are
about OVN:
* http://openvswitch.org/support/
Here are some direct links to past OVN presentations:
* `OVN talk at OpenStack Summit in Boston, Spring 2017
<https://www.youtube.com/watch?v=sgc7myiX6ts>`_
* `OVN talk at OpenStack Summit in Barcelona, Fall 2016
<https://www.youtube.com/watch?v=q3cJ6ezPnCU>`_
* `OVN talk at OpenStack Summit in Austin, Spring 2016
<https://www.youtube.com/watch?v=okralc7LrZo>`_
* OVN Project Update at the OpenStack Summit in Tokyo, Fall 2015 -
`Slides <http://openvswitch.org/support/slides/OVN_Tokyo.pdf>`__ -
`Video <https://www.youtube.com/watch?v=3IrG2xghJjs>`__
* OVN at OpenStack Summit in Vancouver, Spring 2015 -
`Slides <http://openvswitch.org/support/slides/OVN-Vancouver.pdf>`__ -
`Video <https://www.youtube.com/watch?v=kEzXTq2fPDg>`__
* `OVS Conference 2015 <https://www.youtube.com/watch?v=JLGZOYi_Cqc>`_
These blog resources may also help with testing and understanding OVN:
* http://networkop.co.uk/blog/2016/11/27/ovn-part1/
* http://networkop.co.uk/blog/2016/12/10/ovn-part2/
* https://blog.russellbryant.net/2016/12/19/comparing-openstack-neutron-ml2ovs-and-ovn-control-plane/
* https://blog.russellbryant.net/2016/11/11/ovn-logical-flows-and-ovn-trace/
* https://blog.russellbryant.net/2016/09/29/ovs-2-6-and-the-first-release-of-ovn/
* http://galsagie.github.io/2015/11/23/ovn-l3-deepdive/
* http://blog.russellbryant.net/2015/10/22/openstack-security-groups-using-ovn-acls/
* http://galsagie.github.io/sdn/openstack/ovs/2015/05/30/ovn-deep-dive/
* http://blog.russellbryant.net/2015/05/14/an-ez-bake-ovn-for-openstack/
* http://galsagie.github.io/sdn/openstack/ovs/2015/04/26/ovn-containers/
* http://blog.russellbryant.net/2015/04/21/ovn-and-openstack-status-2015-04-21/
* http://blog.russellbryant.net/2015/04/08/ovn-and-openstack-integration-development-update/

(Additional binary figures added; contents not shown by the diff viewer.)

@ -0,0 +1,982 @@
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!-- Created with Inkscape (http://www.inkscape.org/) -->
<svg
xmlns:osb="http://www.openswatchbook.org/uri/2009/osb"
xmlns:dc="http://purl.org/dc/elements/1.1/"
xmlns:cc="http://creativecommons.org/ns#"
xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
xmlns:svg="http://www.w3.org/2000/svg"
xmlns="http://www.w3.org/2000/svg"
xmlns:xlink="http://www.w3.org/1999/xlink"
xmlns:sodipodi="http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd"
xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape"
width="210mm"
height="297mm"
viewBox="0 0 744.09448 1052.3622"
id="svg6654"
version="1.1"
inkscape:version="0.92.2 (5c3e80d, 2017-08-06)"
sodipodi:docname="ovn-compute1.svg"
inkscape:export-filename="/Users/ajo/Documents/work/redhat/ovn/docs/networking-ovn/doc/source/admin/refarch/figures/ovn-compute1.png"
inkscape:export-xdpi="77.139999"
inkscape:export-ydpi="77.139999">
<defs
id="defs6656">
<linearGradient
id="linearGradient8990"
osb:paint="solid">
<stop
style="stop-color:#000000;stop-opacity:1;"
offset="0"
id="stop8992" />
</linearGradient>
<filter
style="color-interpolation-filters:sRGB"
inkscape:label="Drop Shadow"
id="filter9167">
<feFlood
flood-opacity="0.498039"
flood-color="rgb(0,0,0)"
result="flood"
id="feFlood9169" />
<feComposite
in="flood"
in2="SourceGraphic"
operator="in"
result="composite1"
id="feComposite9171" />
<feGaussianBlur
in="composite1"
stdDeviation="1.7"
result="blur"
id="feGaussianBlur9173" />
<feOffset
dx="2.7"
dy="3.2"
result="offset"
id="feOffset9175" />
<feComposite
in="SourceGraphic"
in2="offset"
operator="over"
result="composite2"
id="feComposite9177" />
</filter>
<linearGradient
spreadMethod="pad"
id="linearGradient9873"
y2="0.13733999"
gradientUnits="userSpaceOnUse"
y1="44.836544"
gradientTransform="translate(-5.5836,1.0285)"
x2="428.06"
x1="509.15939"
inkscape:collect="always">
<stop
id="stop9875"
style="stop-color:#b58900;stop-opacity:1"
offset="0" />
<stop
id="stop9877"
style="stop-color:#856500;stop-opacity:1"
offset="1" />
</linearGradient>
<linearGradient
xlink:href="#linearGradient9407"
inkscape:collect="always"
x1="509.15939"
x2="428.06"
gradientTransform="translate(-5.5836,1.0285)"
y1="44.836544"
gradientUnits="userSpaceOnUse"
y2="0.13733999"
id="linearGradient12509">
<stop
offset="0"
style="stop-color:#2aa198;stop-opacity:1"
id="stop9403" />
<stop
offset="1"
style="stop-color:#1c6c66;stop-opacity:1"
id="stop9405" />
</linearGradient>
<filter
id="filter9167-2"
inkscape:label="Drop Shadow"
style="color-interpolation-filters:sRGB;">
<feFlood
id="feFlood9169-8"
result="flood"
flood-color="rgb(0,0,0)"
flood-opacity="0.498039" />
<feComposite
id="feComposite9171-5"
result="composite1"
operator="in"
in2="SourceGraphic"
in="flood" />
<feGaussianBlur
id="feGaussianBlur9173-9"
result="blur"
stdDeviation="1.7"
in="composite1" />
<feOffset
id="feOffset9175-4"
result="offset"
dy="3.2"
dx="2.7" />
<feComposite
id="feComposite9177-3"
result="composite2"
operator="over"
in2="offset"
in="SourceGraphic" />
</filter>
<linearGradient
id="linearGradient9407"
y2=".13734"
gradientUnits="userSpaceOnUse"
y1="47.867"
gradientTransform="translate(-5.5836,1.0285)"
x2="428.06"
x1="513.2"
inkscape:collect="always">
<stop
id="stop5707"
style="stop-color:#d3d3d3"
offset="0" />
<stop
id="stop5709"
style="stop-color:#ffffff"
offset="1" />
</linearGradient>
<linearGradient
inkscape:collect="always"
xlink:href="#linearGradient12509"
id="linearGradient14089"
gradientUnits="userSpaceOnUse"
gradientTransform="matrix(1.2133594,0,0,1.011658,-16.747638,-13.472448)"
x1="509.15939"
y1="44.836544"
x2="428.06"
y2="0.13733999" />
<linearGradient
inkscape:collect="always"
xlink:href="#linearGradient9873"
id="linearGradient14135"
gradientUnits="userSpaceOnUse"
gradientTransform="matrix(1.2133594,0,0,1.011658,-233.96282,9.2513029)"
x1="509.15939"
y1="44.836544"
x2="428.06"
y2="0.13733999" />
<filter
style="color-interpolation-filters:sRGB"
inkscape:label="Drop Shadow"
id="filter9167-0">
<feFlood
flood-opacity="0.498039"
flood-color="rgb(0,0,0)"
result="flood"
id="feFlood9169-4" />
<feComposite
in="flood"
in2="SourceGraphic"
operator="in"
result="composite1"
id="feComposite9171-1" />
<feGaussianBlur
in="composite1"
stdDeviation="1.7"
result="blur"
id="feGaussianBlur9173-0" />
<feOffset
dx="2.7"
dy="3.2"
result="offset"
id="feOffset9175-8" />
<feComposite
in="SourceGraphic"
in2="offset"
operator="over"
result="composite2"
id="feComposite9177-1" />
</filter>
</defs>
<sodipodi:namedview
id="base"
pagecolor="#ffffff"
bordercolor="#666666"
borderopacity="1.0"
inkscape:pageopacity="0.0"
inkscape:pageshadow="2"
inkscape:zoom="0.49497475"
inkscape:cx="551.3951"
inkscape:cy="539.11138"
inkscape:document-units="px"
inkscape:current-layer="g9020"
showgrid="false"
inkscape:window-width="1440"
inkscape:window-height="802"
inkscape:window-x="13"
inkscape:window-y="119"
inkscape:window-maximized="0"
showguides="true"
inkscape:guide-bbox="true"
inkscape:snap-perpendicular="true"
inkscape:snap-tangential="true"
gridtolerance="10000"
objecttolerance="3">
<sodipodi:guide
position="305.35713,461.38394"
orientation="0,1"
id="guide5597"
inkscape:locked="false" />
</sodipodi:namedview>
<metadata
id="metadata6659">
<rdf:RDF>
<cc:Work
rdf:about="">
<dc:format>image/svg+xml</dc:format>
<dc:type
rdf:resource="http://purl.org/dc/dcmitype/StillImage" />
<dc:title></dc:title>
</cc:Work>
</rdf:RDF>
</metadata>
<g
inkscape:label="Layer 1"
inkscape:groupmode="layer"
id="layer1"
transform="translate(0,-3.464567e-6)">
<g
id="g9020"
transform="matrix(1,0,0,1.03616,-11.428571,294.19464)">
<path
style="fill:#8c8c8c;fill-opacity:0.61568627;fill-rule:evenodd;stroke:#8b8b8b;stroke-width:3.68398499;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
d="m 124.90482,-113.21997 -7.94053,-53.90853"
id="path5575"
inkscape:connector-curvature="0"
sodipodi:nodetypes="cc" />
<path
inkscape:connector-curvature="0"
id="path5577"
d="M 182.19941,-84.429925 116.96429,-167.1285"
style="fill:#8c8c8c;fill-opacity:0.61568627;fill-rule:evenodd;stroke:#8b8b8b;stroke-width:3.68398499;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
sodipodi:nodetypes="cc" />
<text
xml:space="preserve"
style="font-style:normal;font-weight:normal;line-height:0%;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
x="35.524117"
y="107.22813"
id="text9193"
transform="scale(1.0179194,0.98239605)"><tspan
sodipodi:role="line"
id="tspan9195"
x="35.524117"
y="107.22813"
style="font-size:39.29584122px;line-height:1.25"> </tspan></text>
<text
xml:space="preserve"
style="font-style:normal;font-weight:normal;line-height:0%;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
x="32.015564"
y="139.50685"
id="text9207"
transform="scale(1.0179194,0.98239605)"><tspan
sodipodi:role="line"
id="tspan9209"
x="32.015564"
y="139.50685"
style="font-size:39.29584122px;line-height:1.25"> </tspan></text>
<rect
transform="matrix(0.94433144,0,0,0.85207019,-18.952158,-372.93854)"
id="rect6478"
width="172.35803"
height="88.208359"
x="66.423546"
y="151.98941"
style="fill:#fcf4d7;fill-opacity:1;stroke:#657b83;stroke-width:3.28554845;stroke-linejoin:bevel;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1;filter:url(#filter9167)"
ry="9.3594418"
rx="9.0842314" />
<text
xml:space="preserve"
style="font-style:normal;font-weight:normal;font-size:10.25380516px;line-height:0%;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#657b83;fill-opacity:1;stroke:none;stroke-width:0.85448378px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
x="120.05329"
y="-222.97113"
id="text6480"
transform="scale(1.0208028,0.97962114)"><tspan
sodipodi:role="line"
x="120.05329"
y="-222.97113"
style="font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;font-size:14.69022655px;line-height:125%;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Bold';text-align:center;writing-mode:lr-tb;text-anchor:middle;fill:#657b83;fill-opacity:1;stroke-width:0.85448378px"
id="tspan6486">networking-ovn-</tspan><tspan
sodipodi:role="line"
x="120.05329"
y="-204.60835"
style="font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;font-size:14.69022655px;line-height:125%;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Bold';text-align:center;writing-mode:lr-tb;text-anchor:middle;fill:#657b83;fill-opacity:1;stroke-width:0.85448378px"
id="tspan5468">metadata-agent</tspan></text>
<rect
rx="9.0842314"
ry="9.3594418"
style="fill:#fcf4d7;fill-opacity:1;stroke:#657b83;stroke-width:3.28554845;stroke-linejoin:bevel;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1;filter:url(#filter9167)"
y="151.98941"
x="66.423546"
height="316.35751"
width="173.87082"
id="rect6500"
transform="matrix(0.94433144,0,0,0.85207019,217.74428,-372.38709)" />
<text
transform="scale(1.0208028,0.97962114)"
id="text6502"
y="-221.00081"
x="353.32568"
style="font-style:normal;font-weight:normal;font-size:10.25380516px;line-height:0%;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#657b83;fill-opacity:1;stroke:none;stroke-width:0.85448378px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
xml:space="preserve"><tspan
id="tspan6506"
style="font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;font-size:14.69022655px;line-height:125%;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Bold';text-align:center;writing-mode:lr-tb;text-anchor:middle;fill:#657b83;fill-opacity:1;stroke-width:0.85448378px"
y="-221.00081"
x="353.32568"
sodipodi:role="line">Open vSwitch</tspan></text>
<g
id="g4571"
transform="translate(137.63971,-242.67808)">
<rect
transform="matrix(0.94433144,0,0,0.85207019,231.45857,3.7443199)"
id="rect7536"
width="130.00002"
height="54.228699"
x="87.602554"
y="166.55212"
style="fill:#27958c;fill-opacity:1;stroke:#657b83;stroke-width:3.28554845;stroke-linejoin:bevel;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1;filter:url(#filter9167)"
ry="9.3594418"
rx="9.0842314" />
<text
xml:space="preserve"
style="font-style:normal;font-weight:normal;font-size:10.25380516px;line-height:0%;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#657b83;fill-opacity:1;stroke:none;stroke-width:0.85448378px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
x="365.36102"
y="179.28119"
id="text7538"
transform="scale(1.0208028,0.97962114)"><tspan
sodipodi:role="line"
x="365.36102"
y="179.28119"
style="font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;font-size:14.69022655px;line-height:125%;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Bold';text-align:center;writing-mode:lr-tb;text-anchor:middle;fill:#ffffff;fill-opacity:1;stroke-width:0.85448378px"
id="tspan7544">Interface</tspan></text>
</g>
<path
style="fill:none;fill-rule:evenodd;stroke:#6b6b6b;stroke-width:3.92958403;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
d="M 450.40442,4.6714687 428.62301,4.4851456"
id="path7797"
inkscape:connector-curvature="0"
sodipodi:nodetypes="cc" />
<g
id="g5535"
transform="translate(107.94643,-10.857395)"
style="opacity:0.74199997">
<rect
transform="matrix(0.94433144,0,0,0.77783252,-77.523588,-192.42798)"
id="rect6490"
width="169.33247"
height="206.32814"
x="66.423546"
y="151.98941"
style="fill:#fcf4d7;fill-opacity:1;stroke:#657b83;stroke-width:3.43876505;stroke-linejoin:bevel;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1;filter:url(#filter9167)"
ry="10.252723"
rx="9.0842314" />
<text
xml:space="preserve"
style="font-style:normal;font-weight:normal;font-size:10.25380516px;line-height:0%;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#657b83;fill-opacity:1;stroke:none;stroke-width:0.85448378px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
x="62.675419"
y="-50.124882"
id="text6492"
transform="scale(1.0208028,0.97962114)"><tspan
sodipodi:role="line"
x="62.675419"
y="-50.124882"
style="font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;font-size:14.69022655px;line-height:125%;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Bold';text-align:center;writing-mode:lr-tb;text-anchor:middle;fill:#657b83;fill-opacity:1;stroke-width:0.85448378px"
id="tspan6498">METADATA</tspan><tspan
id="tspan5523"
sodipodi:role="line"
x="62.675423"
y="-31.762098"
style="font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;font-size:14.69022655px;line-height:125%;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Bold';text-align:center;writing-mode:lr-tb;text-anchor:middle;fill:#657b83;fill-opacity:1;stroke-width:0.85448378px">Namespace</tspan><tspan
id="tspan5525"
sodipodi:role="line"
x="62.675419"
y="-13.399316"
style="font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;font-size:14.69022655px;line-height:125%;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Bold';text-align:center;writing-mode:lr-tb;text-anchor:middle;fill:#657b83;fill-opacity:1;stroke-width:0.85448378px">ovn-meta</tspan></text>
<rect
transform="matrix(0.94433144,0,0,0.85207019,-68.234968,-109.16686)"
id="rect7634"
width="145.61551"
height="50.451252"
x="66.423546"
y="151.98941"
style="fill:#e7dbb1;fill-opacity:1;stroke:#657b83;stroke-width:3.28554845;stroke-linejoin:bevel;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1;filter:url(#filter9167)"
ry="9.3594418"
rx="9.0842314" />
<text
xml:space="preserve"
style="font-style:normal;font-weight:normal;font-size:10.25380516px;line-height:0%;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#657b83;fill-opacity:1;stroke:none;stroke-width:0.85448378px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
x="63.858227"
y="43.302197"
id="text7636"
transform="scale(1.0208028,0.97962114)"><tspan
sodipodi:role="line"
x="63.858227"
y="43.302197"
style="font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;font-size:14.69022655px;line-height:125%;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Bold';text-align:center;writing-mode:lr-tb;text-anchor:middle;fill:#657b83;fill-opacity:1;stroke-width:0.85448378px"
id="tspan7638">haproxy</tspan></text>
</g>
<g
id="g4548"
transform="matrix(1.055721,0,0,1,-18.655123,-74.192207)"
style="stroke-width:0.97325224">
<rect
transform="matrix(0.94433144,0,0,0.60726809,211.03447,-201.38876)"
id="rect7560"
width="142.05502"
height="95.604927"
x="87.602554"
y="166.55212"
style="fill:#f4e6b6;fill-opacity:1;stroke:#657b83;stroke-width:3.78774524;stroke-linejoin:bevel;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1;filter:url(#filter9167)"
ry="13.132422"
rx="8.6047659" />
<text
xml:space="preserve"
style="font-style:normal;font-weight:normal;font-size:10.25380516px;line-height:0%;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#657b83;fill-opacity:1;stroke:none;stroke-width:0.83162826px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
x="351.14102"
y="-99.812614"
id="text7660"
transform="scale(1.0208028,0.97962114)"><tspan
sodipodi:role="line"
x="351.14102"
y="-96.866386"
style="font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;font-size:14.69022655px;line-height:125%;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Bold';text-align:center;writing-mode:lr-tb;text-anchor:middle;fill:#657b83;fill-opacity:1;stroke-width:0.83162826px"
id="tspan7662" /><tspan
id="tspan7664"
sodipodi:role="line"
x="351.14102"
y="-86.410278"
style="font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;font-size:14.69022655px;line-height:125%;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Bold';text-align:center;writing-mode:lr-tb;text-anchor:middle;fill:#657b83;fill-opacity:1;stroke-width:0.83162826px">Integration</tspan><tspan
sodipodi:role="line"
x="351.14102"
y="-68.047493"
style="font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;font-size:14.69022655px;line-height:125%;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Bold';text-align:center;writing-mode:lr-tb;text-anchor:middle;fill:#657b83;fill-opacity:1;stroke-width:0.83162826px"
id="tspan4447">Bridge</tspan><tspan
sodipodi:role="line"
x="351.14102"
y="-49.684456"
style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:14.69050503px;line-height:125%;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:center;writing-mode:lr-tb;text-anchor:middle;fill:#657b83;fill-opacity:1;stroke-width:0.83162826px"
id="tspan4449">br-int</tspan></text>
</g>
<g
id="g4588"
transform="translate(125.89286,-78.069848)">
<path
d="m 603.98713,24.025163 c 15.16699,22.762287 -21.36726,20.981771 -32.66364,19.193165 -12.13359,7.587414 -27.30059,5.058278 -33.36738,-2.529146 -9.67048,6.899494 -39.25218,7.890924 -33.36739,-12.645714 -12.13359,-5.05829 -18.20039,-22.7623053 3.0334,-30.3497407 -7.12242,-14.8612563 17.38744,-23.0779423 30.33399,-15.1748703 6.06679,-17.704015 32.48163,-13.436841 38.73043,1.040491 15.16699,-7.587435 46.28966,3.960843 27.30059,15.1748703 18.20039,7.5874354 18.20039,20.2331607 0,25.2914507 z"
style="fill:#666666;stroke-width:1.10792816"
sodipodi:nodetypes="ccccccccc"
id="path14073"
inkscape:connector-curvature="0" />
<path
d="m 602.07002,21.019527 c 15.16699,22.762291 -21.37939,20.980765 -32.66364,19.192159 -12.13359,7.58742 -27.30058,5.058278 -33.36738,-2.529136 -9.67047,6.899484 -39.25218,7.890914 -33.36738,-12.645729 -12.1336,-5.05829 -18.20039,-22.7623055 3.03339,-30.3497409 -7.12241,-14.8612561 17.37531,-23.0769311 30.33399,-15.1748701 6.0668,-17.704015 32.48163,-13.43583 38.73043,1.040996 15.16699,-7.587435 46.28966,3.960843 27.30059,15.1748702 18.20039,7.5874354 18.20039,20.2331608 0,25.2914508 z"
style="fill:url(#linearGradient14089);fill-opacity:1;stroke-width:1.10792816"
sodipodi:nodetypes="ccccccccc"
id="path14075"
inkscape:connector-curvature="0" />
<path
d="m 607.32386,25.036821 c 13.6503,22.762285 -22.31367,22.170469 -34.88408,20.246293 -13.51682,8.16507 -28.12567,6.888364 -34.88408,-1.27772 -10.77463,7.425554 -40.95088,3.793712 -37.91748,-16.439428 -13.51683,-5.44272 -20.62711,-27.24192758 3.0334,-35.408031 -7.93537,-15.99229 17.4117,-23.678867 31.85068,-15.17487 6.0668,-15.17487 33.9862,-14.314961 40.95088,1.265078 16.9021,-8.165597 48.53438,6.322863 33.36738,18.9685883 9.1002,3.7937172 18.75854,22.3778757 -1.5167,27.8205957 z"
style="fill:none;stroke:#b2b2b2;stroke-width:2.15154552;stroke-linejoin:round;stroke-dasharray:6.4545173, 2.15150576"
sodipodi:nodetypes="ccccccccc"
id="path14077"
inkscape:connector-curvature="0" />
<text
xml:space="preserve"
style="font-style:normal;font-weight:normal;line-height:0%;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
x="503.30649"
y="18.388397"
id="text14083"
transform="scale(1.0179195,0.98239596)"><tspan
sodipodi:role="line"
id="tspan14081"
x="503.30649"
y="18.388397"
style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:19.64792061px;line-height:125%;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';text-align:start;writing-mode:lr-tb;text-anchor:start;fill:#ffffff;fill-opacity:1">Internet</tspan></text>
<path
inkscape:connector-curvature="0"
inkscape:connector-type="polyline"
id="path14157"
d="m 455.08232,1.5504455 32.18752,0.3015936"
style="fill:#268f87;fill-opacity:1;fill-rule:evenodd;stroke:#268f87;stroke-width:5.8943758;stroke-linecap:round;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:5.89437557, 11.78875115;stroke-dashoffset:0;stroke-opacity:1"
sodipodi:nodetypes="cc" />
<rect
rx="20.855568"
ry="18.818075"
style="fill:none;fill-opacity:1;stroke:#657b83;stroke-width:2.9471879;stroke-linejoin:bevel;stroke-miterlimit:4;stroke-dasharray:2.94718803, 2.94718803;stroke-dashoffset:0;stroke-opacity:1"
y="-26.079771"
x="151.95367"
height="237.07312"
width="469.03488"
id="rect14091" />
<text
transform="scale(1.0179195,0.98239596)"
id="text14097"
y="175.72643"
x="158.17299"
style="font-style:normal;font-weight:normal;line-height:0%;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
xml:space="preserve"><tspan
style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:17.19193077px;line-height:125%;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';text-align:start;writing-mode:lr-tb;text-anchor:start;fill:#999999"
y="175.72643"
x="158.17299"
sodipodi:role="line"
id="tspan14103">only mandatory for distributed floating IP</tspan><tspan
style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:17.19193077px;line-height:125%;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';text-align:start;writing-mode:lr-tb;text-anchor:start;fill:#999999"
y="197.21634"
x="158.17299"
sodipodi:role="line"
id="tspan5461">or direct vm connectivity</tspan></text>
</g>
<g
id="g4525"
transform="translate(217.23968,-162.65573)">
<g
id="g4597"
transform="translate(122.16532,-11.881596)">
<path
inkscape:connector-curvature="0"
id="path14117"
sodipodi:nodetypes="ccccccccc"
style="fill:#666666;stroke-width:1.10792816"
d="m 386.77195,46.748913 c 15.16699,22.762305 -21.36726,20.981787 -32.66364,19.193175 -12.13359,7.587435 -27.30059,5.05829 -33.36738,-2.529145 -9.67048,6.899508 -39.25218,7.890933 -33.36739,-12.645725 -12.13359,-5.05829 -18.20039,-22.762305 3.0334,-30.34974 -7.12242,-14.8612563 17.38744,-23.0779426 30.33399,-15.1748703 6.06679,-17.7040147 32.48163,-13.436841 38.73043,1.040491 15.16699,-7.5874354 46.28966,3.9608433 27.30059,15.1748703 18.20039,7.587435 18.20039,20.23316 0,25.29145 z" />
<path
inkscape:connector-curvature="0"
id="path14119"
sodipodi:nodetypes="ccccccccc"
style="fill:url(#linearGradient14135);fill-opacity:1;stroke-width:1.10792816"
d="m 384.85484,43.743277 c 15.16699,22.762305 -21.37939,20.980775 -32.66364,19.192164 -12.13359,7.587435 -27.30058,5.05829 -33.36738,-2.529145 -9.67047,6.899507 -39.25218,7.890932 -33.36738,-12.645725 -12.1336,-5.05829 -18.20039,-22.762305 3.0334,-30.34974 -7.12242,-14.8612565 17.3753,-23.0769314 30.33398,-15.1748705 6.0668,-17.7040145 32.48163,-13.4358295 38.73043,1.0409961 15.16699,-7.5874354 46.28966,3.9608432 27.30059,15.1748704 18.20039,7.587435 18.20039,20.23316 0,25.29145 z" />
<path
inkscape:connector-curvature="0"
id="path14121"
sodipodi:nodetypes="ccccccccc"
style="fill:none;stroke:#b2b2b2;stroke-width:2.15154552;stroke-linejoin:round;stroke-miterlimit:4;stroke-dasharray:none"
d="m 390.10868,47.760571 c 13.6503,22.762305 -22.31367,22.170485 -34.88408,20.246311 -13.51682,8.165092 -28.12567,6.88838 -34.88408,-1.277724 -10.77463,7.42557 -40.95088,3.793718 -37.91748,-16.439442 -13.51683,-5.44272 -20.62711,-27.241927 3.0334,-35.40803 -7.93537,-15.9922907 17.4117,-23.6788674 31.85068,-15.17487058 6.0668,-15.17486942 33.9862,-14.31496042 40.95088,1.2650781 16.9021,-8.16559692 48.53438,6.32286328 33.36738,18.96858848 9.1002,3.793717 18.75854,22.377875 -1.5167,27.820595 z" />
<text
transform="scale(1.0179195,0.98239596)"
id="text14131"
y="29.834743"
x="289.91501"
style="font-style:normal;font-weight:normal;line-height:0%;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
xml:space="preserve"><tspan
style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:19.64792061px;line-height:125%;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';text-align:start;writing-mode:lr-tb;text-anchor:start;fill:#ffffff;fill-opacity:1"
y="29.834743"
x="289.91501"
id="tspan14127"
sodipodi:role="line">Overlay</tspan><tspan
style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:19.64792061px;line-height:125%;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';text-align:start;writing-mode:lr-tb;text-anchor:start;fill:#ffffff;fill-opacity:1"
y="54.394646"
x="289.91501"
sodipodi:role="line"
id="tspan14129">Network</tspan></text>
<path
inkscape:connector-curvature="0"
inkscape:connector-type="polyline"
id="path14133"
d="m 229.38344,27.624211 38.70368,0.251814"
style="fill:none;fill-opacity:1;fill-rule:evenodd;stroke:#b58900;stroke-width:5.8943758;stroke-linecap:round;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:5.89437557, 11.78875115;stroke-dashoffset:0;stroke-opacity:1"
sodipodi:nodetypes="cc" />
<ellipse
style="opacity:0.74199997;fill:#9d7700;fill-opacity:1;stroke:#9d7700;stroke-width:2.1514473;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:0.74117647"
id="path5585"
cx="-279.22641"
cy="338.68857"
rx="8.7053566"
ry="8.401557" />
<ellipse
ry="8.401557"
rx="8.7053566"
cy="337.65451"
cx="-59.315712"
id="ellipse5587"
style="opacity:0.74199997;fill:#27958c;fill-opacity:1;stroke:#27948c;stroke-width:2.1514473;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:0.74117647" />
<text
xml:space="preserve"
style="font-style:normal;font-weight:normal;font-size:36.83985138px;line-height:125%;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;fill:#999999;fill-opacity:1;stroke:none;stroke-width:0.92099625px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
x="-257.6015"
y="351.20505"
id="text5591"
transform="scale(1.0179194,0.98239605)"><tspan
sodipodi:role="line"
id="tspan5589"
x="-257.6015"
y="351.20505"
style="font-size:18.41992569px;fill:#999999;fill-opacity:1;stroke-width:0.92099625px">Overlay network</tspan></text>
<text
transform="scale(1.0179194,0.98239605)"
id="text5595"
y="351.11148"
x="-39.806793"
style="font-style:normal;font-weight:normal;font-size:36.83985138px;line-height:125%;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;fill:#999999;fill-opacity:1;stroke:none;stroke-width:0.92099625px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
xml:space="preserve"><tspan
style="font-size:18.41992569px;fill:#999999;fill-opacity:1;stroke-width:0.92099625px"
y="351.11148"
x="-39.806793"
id="tspan5593"
sodipodi:role="line">Provider network</tspan></text>
<ellipse
style="opacity:0.74199997;fill:#ac9d93;fill-opacity:1;stroke:#ac9d93;stroke-width:2.1514473;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
id="ellipse5599"
cx="163.67535"
cy="337.00824"
rx="8.7053566"
ry="8.401557" />
<text
xml:space="preserve"
style="font-style:normal;font-weight:normal;font-size:36.83985138px;line-height:125%;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;fill:#999999;fill-opacity:1;stroke:none;stroke-width:0.92099625px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
x="179.25868"
y="350.45361"
id="text5603"
transform="scale(1.0179194,0.98239605)"><tspan
sodipodi:role="line"
id="tspan5601"
x="179.25868"
y="350.45361"
style="font-size:18.41992569px;fill:#999999;fill-opacity:1;stroke-width:0.92099625px">Other prov. network</tspan></text>
</g>
</g>
<g
id="g4562"
transform="translate(0.66964285,-3.1021108)">
<rect
rx="8.6565285"
ry="13.906972"
style="fill:#f4e6b6;fill-opacity:1;stroke:#657b83;stroke-width:3.9095521;stroke-linejoin:bevel;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1;filter:url(#filter9167)"
y="166.55212"
x="87.602554"
height="95.604927"
width="142.05502"
id="rect4550"
transform="matrix(0.99098897,0,0,0.57344625,202.58296,-193.06908)" />
<text
transform="scale(1.0208028,0.97962114)"
id="text4560"
y="-90.576546"
x="351.14102"
style="font-style:normal;font-weight:normal;font-size:10.25380516px;line-height:0%;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#657b83;fill-opacity:1;stroke:none;stroke-width:0.85448378px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
xml:space="preserve"><tspan
id="tspan4552"
style="font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;font-size:14.69022655px;line-height:125%;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Bold';text-align:center;writing-mode:lr-tb;text-anchor:middle;fill:#657b83;fill-opacity:1;stroke-width:0.85448378px"
y="-87.630318"
x="351.14102"
sodipodi:role="line" /><tspan
id="tspan4556"
style="font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;font-size:14.69022655px;line-height:125%;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Bold';text-align:center;writing-mode:lr-tb;text-anchor:middle;fill:#657b83;fill-opacity:1;stroke-width:0.85448378px"
y="-77.17421"
x="351.14102"
sodipodi:role="line">Provider Bridge</tspan><tspan
id="tspan4558"
style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:14.69050503px;line-height:125%;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:center;writing-mode:lr-tb;text-anchor:middle;fill:#657b83;fill-opacity:1;stroke-width:0.85448378px"
y="-58.811169"
x="351.14102"
sodipodi:role="line">br-provider</tspan></text>
</g>
<g
transform="translate(139.41506,-315.33983)"
id="g4605">
<rect
rx="9.0842314"
ry="9.3594418"
style="fill:#9d7700;fill-opacity:1;stroke:#657b83;stroke-width:3.28554845;stroke-linejoin:bevel;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1;filter:url(#filter9167)"
y="166.55212"
x="87.602554"
height="54.228699"
width="130.00002"
id="rect4599"
transform="matrix(0.94433144,0,0,0.85207019,228.30407,-7.5441497)" />
<text
transform="scale(1.0208028,0.97962114)"
id="text4603"
y="168.0854"
x="362.89484"
style="font-style:normal;font-weight:normal;font-size:10.25380516px;line-height:0%;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#657b83;fill-opacity:1;stroke:none;stroke-width:0.85448378px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
xml:space="preserve"><tspan
id="tspan4601"
style="font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;font-size:14.69022655px;line-height:125%;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Bold';text-align:center;writing-mode:lr-tb;text-anchor:middle;fill:#ffffff;fill-opacity:1;stroke-width:0.85448378px"
y="168.0854"
x="362.89484"
sodipodi:role="line">Interface</tspan></text>
</g>
<path
inkscape:connector-curvature="0"
id="path4607"
d="m 449.18003,-153.8888 -15.59214,-0.18632"
style="fill:none;fill-rule:evenodd;stroke:#6b6b6b;stroke-width:3.92958403;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
sodipodi:nodetypes="cc" />
<path
sodipodi:nodetypes="cc"
style="fill:none;fill-rule:evenodd;stroke:#6b6b6b;stroke-width:3.92958403;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
d="m 449.4574,-72.101922 -18.94036,-0.18632"
id="path4609"
inkscape:connector-curvature="0" />
<g
transform="translate(150.67485,53.286774)"
id="g4627">
<g
transform="translate(122.16532,-11.881596)"
id="g4625">
<g
id="g5455"
transform="translate(-158.15201,-7.3117511)">
<path
inkscape:connector-curvature="0"
id="path4611"
sodipodi:nodetypes="ccccccccc"
style="fill:#666666;stroke-width:1.10792816"
d="m 612.28092,2.4226158 c 15.16699,22.7623052 -21.36726,20.9817872 -32.66364,19.1931752 -12.13359,7.587435 -27.30059,5.05829 -33.36738,-2.529145 -9.67048,6.899508 -39.25218,7.890933 -33.36739,-12.6457253 -12.13359,-5.0582899 -18.20039,-22.7623047 3.0334,-30.3497397 -7.12242,-14.861256 17.38744,-23.077942 30.33399,-15.17487 6.06679,-17.704015 32.48163,-13.436841 38.73043,1.040491 15.16699,-7.587435 46.28966,3.960843 27.30059,15.17487 18.20039,7.587435 18.20039,20.2331598 0,25.2914498 z" />
<path
inkscape:connector-curvature="0"
id="path4613"
sodipodi:nodetypes="ccccccccc"
style="fill:#ac9d93;fill-opacity:1;stroke-width:1.10792816"
d="M 610.36381,-0.58302018 C 625.5308,22.179285 588.98442,20.397755 577.70017,18.609144 c -12.13359,7.587435 -27.30058,5.05829 -33.36738,-2.529145 -9.67047,6.899507 -39.25218,7.890932 -33.36738,-12.6457252 -12.1336,-5.05829 -18.20039,-22.7623048 3.0334,-30.3497398 -7.12242,-14.861256 17.3753,-23.076931 30.33398,-15.17487 6.0668,-17.704015 32.48163,-13.43583 38.73043,1.040996 15.16699,-7.587435 46.28966,3.960843 27.30059,15.17487 18.20039,7.587435 18.20039,20.2331598 0,25.29144982 z" />
<path
inkscape:connector-curvature="0"
id="path4615"
sodipodi:nodetypes="ccccccccc"
style="fill:none;stroke:#b2b2b2;stroke-width:2.1514473;stroke-linejoin:round;stroke-miterlimit:4;stroke-dasharray:4.30289442, 2.15144722;stroke-dashoffset:0"
d="m 615.61765,3.4342738 c 13.6503,22.7623052 -22.31367,22.1704852 -34.88408,20.2463112 -13.51682,8.165092 -28.12567,6.88838 -34.88408,-1.277724 -10.77463,7.42557 -40.95088,3.793718 -37.91748,-16.4394423 -13.51683,-5.44271988 -20.62711,-27.2419267 3.0334,-35.4080297 -7.93537,-15.99229 17.4117,-23.678867 31.85068,-15.17487 6.0668,-15.17487 33.9862,-14.314961 40.95088,1.265078 16.9021,-8.165597 48.53438,6.322863 33.36738,18.968588 9.1002,3.793717 18.75854,22.3778748 -1.5167,27.8205948 z" />
<text
transform="scale(1.0179195,0.98239596)"
id="text4621"
y="-15.285862"
x="511.4541"
style="font-style:normal;font-weight:normal;line-height:0%;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
xml:space="preserve"><tspan
style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:19.64792061px;line-height:125%;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';text-align:start;writing-mode:lr-tb;text-anchor:start;fill:#ffffff;fill-opacity:1"
y="-15.285862"
x="511.4541"
sodipodi:role="line"
id="tspan4619">Other</tspan><tspan
id="tspan5446"
style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:19.64792061px;line-height:125%;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';text-align:start;writing-mode:lr-tb;text-anchor:start;fill:#ffffff;fill-opacity:1"
y="9.2740383"
x="511.4541"
sodipodi:role="line">network</tspan></text>
<path
inkscape:connector-curvature="0"
inkscape:connector-type="polyline"
id="path4623"
d="m 465.55578,-29.964124 37.64177,0.519513"
style="fill:#ac9d93;fill-opacity:1;fill-rule:evenodd;stroke:#ac9d93;stroke-width:5.8943758;stroke-linecap:round;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:5.89437557, 11.78875115;stroke-dashoffset:0;stroke-opacity:1"
sodipodi:nodetypes="cc" />
</g>
</g>
</g>
<g
id="g4635"
transform="translate(137.97739,-165.1877)">
<rect
transform="matrix(0.94433144,0,0,0.85207019,231.45857,3.7443199)"
id="rect4629"
width="130.00002"
height="54.228699"
x="87.602554"
y="166.55212"
style="fill:#ac9d93;fill-opacity:1;stroke:#657b83;stroke-width:3.28554845;stroke-linejoin:bevel;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1;filter:url(#filter9167)"
ry="9.3594418"
rx="9.0842314" />
<text
xml:space="preserve"
style="font-style:normal;font-weight:normal;font-size:10.25380516px;line-height:0%;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#657b83;fill-opacity:1;stroke:none;stroke-width:0.85448378px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
x="365.36102"
y="179.28119"
id="text4633"
transform="scale(1.0208028,0.97962114)"><tspan
sodipodi:role="line"
x="365.36102"
y="179.28119"
style="font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;font-size:14.69022655px;line-height:125%;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Bold';text-align:center;writing-mode:lr-tb;text-anchor:middle;fill:#ffffff;fill-opacity:1;stroke-width:0.85448378px"
id="tspan4631">Interface</tspan></text>
</g>
<g
id="g4562-4"
transform="translate(0.32692379,67.663612)">
<rect
rx="8.6565285"
ry="13.906972"
style="fill:#f4e6b6;fill-opacity:1;stroke:#657b83;stroke-width:3.9095521;stroke-linejoin:bevel;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1;filter:url(#filter9167-0)"
y="166.55212"
x="87.602554"
height="95.604927"
width="142.05502"
id="rect4550-8"
transform="matrix(0.99098897,0,0,0.57344625,202.58296,-193.06908)" />
<text
transform="scale(1.0208028,0.97962114)"
id="text4560-8"
y="-90.576546"
x="351.14102"
style="font-style:normal;font-weight:normal;font-size:10.25380516px;line-height:0%;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#657b83;fill-opacity:1;stroke:none;stroke-width:0.85448378px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
xml:space="preserve"><tspan
id="tspan4552-4"
style="font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;font-size:14.69022655px;line-height:125%;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Bold';text-align:center;writing-mode:lr-tb;text-anchor:middle;fill:#657b83;fill-opacity:1;stroke-width:0.85448378px"
y="-87.630318"
x="351.14102"
sodipodi:role="line" /><tspan
id="tspan4556-5"
style="font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;font-size:14.69022655px;line-height:125%;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Bold';text-align:center;writing-mode:lr-tb;text-anchor:middle;fill:#657b83;fill-opacity:1;stroke-width:0.85448378px"
y="-77.17421"
x="351.14102"
sodipodi:role="line">Provider Bridge</tspan><tspan
id="tspan4558-7"
style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:14.69050503px;line-height:125%;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:center;writing-mode:lr-tb;text-anchor:middle;fill:#657b83;fill-opacity:1;stroke-width:0.85448378px"
y="-58.811169"
x="351.14102"
sodipodi:role="line">br-other</tspan></text>
</g>
<g
transform="translate(66.696428,-35.415792)"
id="g5553"
style="opacity:0.83800001">
<rect
rx="9.0842314"
ry="10.252723"
style="fill:#fcf4d7;fill-opacity:1;stroke:#657b83;stroke-width:3.43876505;stroke-linejoin:bevel;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1;filter:url(#filter9167)"
y="151.98941"
x="66.423546"
height="206.32814"
width="169.33247"
id="rect5537"
transform="matrix(0.94433144,0,0,0.77783252,-77.523588,-192.42798)" />
<text
transform="scale(1.0208028,0.97962114)"
id="text5545"
y="-50.124882"
x="62.675419"
style="font-style:normal;font-weight:normal;font-size:10.25380516px;line-height:0%;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#657b83;fill-opacity:1;stroke:none;stroke-width:0.85448378px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
xml:space="preserve"><tspan
id="tspan5539"
style="font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;font-size:14.69022655px;line-height:125%;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Bold';text-align:center;writing-mode:lr-tb;text-anchor:middle;fill:#657b83;fill-opacity:1;stroke-width:0.85448378px"
y="-50.124882"
x="62.675419"
sodipodi:role="line">METADATA</tspan><tspan
style="font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;font-size:14.69022655px;line-height:125%;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Bold';text-align:center;writing-mode:lr-tb;text-anchor:middle;fill:#657b83;fill-opacity:1;stroke-width:0.85448378px"
y="-31.762098"
x="62.675423"
sodipodi:role="line"
id="tspan5541">Namespace</tspan><tspan
style="font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;font-size:14.69022655px;line-height:125%;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Bold';text-align:center;writing-mode:lr-tb;text-anchor:middle;fill:#657b83;fill-opacity:1;stroke-width:0.85448378px"
y="-13.399316"
x="62.675419"
sodipodi:role="line"
id="tspan5543">ovn-meta</tspan></text>
<rect
rx="9.0842314"
ry="9.3594418"
style="fill:#e7dbb1;fill-opacity:1;stroke:#657b83;stroke-width:3.28554845;stroke-linejoin:bevel;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1;filter:url(#filter9167)"
y="151.98941"
x="66.423546"
height="50.451252"
width="145.61551"
id="rect5547"
transform="matrix(0.94433144,0,0,0.85207019,-68.234968,-109.16686)" />
<text
transform="scale(1.0208028,0.97962114)"
id="text5551"
y="43.302197"
x="63.858227"
style="font-style:normal;font-weight:normal;font-size:10.25380516px;line-height:0%;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#657b83;fill-opacity:1;stroke:none;stroke-width:0.85448378px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
xml:space="preserve"><tspan
id="tspan5549"
style="font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;font-size:14.69022655px;line-height:125%;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Bold';text-align:center;writing-mode:lr-tb;text-anchor:middle;fill:#657b83;fill-opacity:1;stroke-width:0.85448378px"
y="43.302197"
x="63.858227"
sodipodi:role="line">haproxy</tspan></text>
</g>
<path
style="fill:#333333;fill-rule:evenodd;stroke:#4d4d4d;stroke-width:3.68398523;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
d="m 82.317868,-134.67442 34.633892,-32.4376"
id="path5573"
inkscape:connector-curvature="0" />
<g
id="g5571"
transform="translate(23.035715,-61.525245)"
style="opacity:1">
<rect
transform="matrix(0.94433144,0,0,0.77783252,-77.523588,-192.42798)"
id="rect5555"
width="169.33247"
height="206.32814"
x="66.423546"
y="151.98941"
style="fill:#fcf4d7;fill-opacity:1;stroke:#657b83;stroke-width:3.43876505;stroke-linejoin:bevel;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1;filter:url(#filter9167)"
ry="10.252723"
rx="9.0842314" />
<text
xml:space="preserve"
style="font-style:normal;font-weight:normal;font-size:10.25380516px;line-height:0%;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#657b83;fill-opacity:1;stroke:none;stroke-width:0.85448378px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
x="62.675419"
y="-50.124882"
id="text5563"
transform="scale(1.0208028,0.97962114)"><tspan
sodipodi:role="line"
x="62.675419"
y="-50.124882"
style="font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;font-size:14.69022655px;line-height:125%;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Bold';text-align:center;writing-mode:lr-tb;text-anchor:middle;fill:#657b83;fill-opacity:1;stroke-width:0.85448378px"
id="tspan5557">METADATA</tspan><tspan
id="tspan5559"
sodipodi:role="line"
x="62.675423"
y="-31.762098"
style="font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;font-size:14.69022655px;line-height:125%;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Bold';text-align:center;writing-mode:lr-tb;text-anchor:middle;fill:#657b83;fill-opacity:1;stroke-width:0.85448378px">Namespace</tspan><tspan
id="tspan5561"
sodipodi:role="line"
x="62.675419"
y="-13.399316"
style="font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;font-size:14.69022655px;line-height:125%;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Bold';text-align:center;writing-mode:lr-tb;text-anchor:middle;fill:#657b83;fill-opacity:1;stroke-width:0.85448378px">ovn-meta</tspan></text>
<rect
transform="matrix(0.94433144,0,0,0.85207019,-68.234968,-109.16686)"
id="rect5565"
width="145.61551"
height="50.451252"
x="66.423546"
y="151.98941"
style="fill:#e7dbb1;fill-opacity:1;stroke:#657b83;stroke-width:3.28554845;stroke-linejoin:bevel;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1;filter:url(#filter9167)"
ry="9.3594418"
rx="9.0842314" />
<text
xml:space="preserve"
style="font-style:normal;font-weight:normal;font-size:10.25380516px;line-height:0%;font-family:sans-serif;text-align:center;letter-spacing:0px;word-spacing:0px;text-anchor:middle;fill:#657b83;fill-opacity:1;stroke:none;stroke-width:0.85448378px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
x="63.858227"
y="43.302197"
id="text5569"
transform="scale(1.0208028,0.97962114)"><tspan
sodipodi:role="line"
x="63.858227"
y="43.302197"
style="font-style:normal;font-variant:normal;font-weight:bold;font-stretch:normal;font-size:14.69022655px;line-height:125%;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Bold';text-align:center;writing-mode:lr-tb;text-anchor:middle;fill:#657b83;fill-opacity:1;stroke-width:0.85448378px"
id="tspan5567">haproxy</tspan></text>
</g>
<path
inkscape:connector-curvature="0"
id="path5579"
d="m 291.74449,-143.81413 -121.59162,21.65491"
style="fill:#333333;fill-rule:evenodd;stroke:#4d4d4d;stroke-width:3.68398523;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
sodipodi:nodetypes="cc" />
<path
sodipodi:nodetypes="cc"
inkscape:connector-curvature="0"
id="path5581"
d="m 291.74449,-143.81413 -80.60948,38.07029"
style="fill:#8c8c8c;fill-opacity:0.61568627;fill-rule:evenodd;stroke:#8b8b8b;stroke-width:3.68398499;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1" />
<path
style="fill:#8c8c8c;fill-opacity:0.61568627;fill-rule:evenodd;stroke:#8b8b8b;stroke-width:3.68398499;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
d="m 291.74449,-143.81413 -40.02912,60.689883"
id="path5583"
inkscape:connector-curvature="0"
sodipodi:nodetypes="cc" />
</g>
</g>
</svg>

Binary file not shown.

File diff suppressed because it is too large.

Binary file not shown.
View File

@ -0,0 +1,860 @@
(SVG source of figures/ovn-services.svg omitted. The figure shows where the OVN services run: the Controller Node hosts neutron-server with the Networking ML2 Plug-in, the OVN Mechanism Driver, and the OVN Layer-3 Service Plug-in; the Database Node hosts the OVN Northbound Service (ovn-northd) and the OVN Northbound and Southbound Databases (ovsdb-server); the Gateway Nodes host the OVN Controller Service (ovn-controller), the OVS Local Database (ovsdb-server), and the OVS Data Plane (ovs-vswitchd); the Compute Nodes host the same OVN/OVS services plus nova-compute, the KVM Hypervisor, and the OVN Metadata Agent.)


View File

@ -0,0 +1,774 @@
.. _refarch-launch-instance-provider-network:
Launch an instance on a provider network
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#. On the controller node, source the credentials for a regular
(non-privileged) project. The following example uses the ``demo``
project.
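For example, assuming the ``demo-openrc`` credentials file used throughout the installation guide (the file name is an assumption and may differ in your deployment):
.. code-block:: console
   $ . demo-openrc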
#. On the controller node, launch an instance using the UUID of the
provider network.
.. code-block:: console
$ openstack server create --flavor m1.tiny --image cirros \
--nic net-id=0243277b-4aa8-46d8-9e10-5c9ad5e01521 \
--security-group default --key-name mykey provider-instance
+--------------------------------------+-----------------------------------------------+
| Property | Value |
+--------------------------------------+-----------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | nova |
| OS-EXT-STS:power_state | 0 |
| OS-EXT-STS:task_state | scheduling |
| OS-EXT-STS:vm_state | building |
| OS-SRV-USG:launched_at | - |
| OS-SRV-USG:terminated_at | - |
| accessIPv4 | |
| accessIPv6 | |
| adminPass | hdF4LMQqC5PB |
| config_drive | |
| created | 2015-09-17T21:58:18Z |
| flavor | m1.tiny (1) |
| hostId | |
| id | 181c52ba-aebc-4c32-a97d-2e8e82e4eaaf |
| image | cirros (38047887-61a7-41ea-9b49-27987d5e8bb9) |
| key_name | mykey |
| metadata | {} |
| name | provider-instance |
| os-extended-volumes:volumes_attached | [] |
| progress | 0 |
| security_groups | default |
| status | BUILD |
| tenant_id | f5b2ccaa75ac413591f12fcaa096aa5c |
| updated | 2015-09-17T21:58:18Z |
| user_id | 684286a9079845359882afc3aa5011fb |
+--------------------------------------+-----------------------------------------------+
OVN operations
^^^^^^^^^^^^^^
The OVN mechanism driver and OVN perform the following operations when
launching an instance.
#. The OVN mechanism driver creates a logical port for the instance.
.. code-block:: console
_uuid : cc891503-1259-47a1-9349-1c0293876664
addresses : ["fa:16:3e:1c:ca:6a 203.0.113.103"]
enabled : true
external_ids : {"neutron:port_name"=""}
name : "cafd4862-c69c-46e4-b3d2-6141ce06b205"
options : {}
parent_name : []
port_security : ["fa:16:3e:1c:ca:6a 203.0.113.103"]
tag : []
type : ""
up : true
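The port record above can be read back from the OVN northbound database. A minimal sketch, assuming ``ovn-nbctl`` is installed on the node running the northbound ``ovsdb-server`` (recent OVN releases name the table ``Logical_Switch_Port``; older ones use ``Logical_Port``):
.. code-block:: console
   # ovn-nbctl list Logical_Switch_Port cafd4862-c69c-46e4-b3d2-6141ce06b205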
#. The OVN mechanism driver updates the appropriate Address Set
entry with the address of this instance:
.. code-block:: console
_uuid : d0becdea-e1ed-48c4-9afc-e278cdef4629
addresses : ["203.0.113.103"]
external_ids : {"neutron:security_group_name"=default}
name : "as_ip4_90a78a43_b549_4bee_8822_21fcccab58dc"
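The address sets maintained by the driver can be inspected the same way (a sketch, assuming ``ovn-nbctl`` is available):
.. code-block:: console
   # ovn-nbctl list Address_Set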
#. The OVN mechanism driver creates ACL entries for this port and
any other ports in the project.
.. code-block:: console
_uuid : f8d27bfc-4d74-4e73-8fac-c84585443efd
action : drop
direction : from-lport
external_ids : {"neutron:lport"="cafd4862-c69c-46e4-b3d2-6141ce06b205"}
log : false
match : "inport == \"cafd4862-c69c-46e4-b3d2-6141ce06b205\" && ip"
priority : 1001
_uuid : a61d0068-b1aa-4900-9882-e0671d1fc131
action : allow
direction : to-lport
external_ids : {"neutron:lport"="cafd4862-c69c-46e4-b3d2-6141ce06b205"}
log : false
match : "outport == \"cafd4862-c69c-46e4-b3d2-6141ce06b205\" && ip4 && ip4.src == 203.0.113.0/24 && udp && udp.src == 67 && udp.dst == 68"
priority : 1002
_uuid : a5a787b8-7040-4b63-a20a-551bd73eb3d1
action : allow-related
direction : from-lport
external_ids : {"neutron:lport"="cafd4862-c69c-46e4-b3d2-6141ce06b205"}
log : false
match : "inport == \"cafd4862-c69c-46e4-b3d2-6141ce06b205\" && ip6"
priority : 1002
_uuid : 7b3f63b8-e69a-476c-ad3d-37de043232b2
action : allow-related
direction : to-lport
external_ids : {"neutron:lport"="cafd4862-c69c-46e4-b3d2-6141ce06b205"}
log : false
match : "outport == \"cafd4862-c69c-46e4-b3d2-6141ce06b205\" && ip4 && ip4.src == $as_ip4_90a78a43_b549_4bee_8822_21fcccab58dc"
priority : 1002
_uuid : 36dbb1b1-cd30-4454-a0bf-923646eb7c3f
action : allow
direction : from-lport
external_ids : {"neutron:lport"="cafd4862-c69c-46e4-b3d2-6141ce06b205"}
log : false
match : "inport == \"cafd4862-c69c-46e4-b3d2-6141ce06b205\" && ip4 && (ip4.dst == 255.255.255.255 || ip4.dst == 203.0.113.0/24) && udp && udp.src == 68 && udp.dst == 67"
priority : 1002
_uuid : 05a92f66-be48-461e-a7f1-b07bfbd3e667
action : allow-related
direction : from-lport
external_ids : {"neutron:lport"="cafd4862-c69c-46e4-b3d2-6141ce06b205"}
log : false
match : "inport == \"cafd4862-c69c-46e4-b3d2-6141ce06b205\" && ip4"
priority : 1002
_uuid : 37f18377-d6c3-4c44-9e4d-2170710e50ff
action : drop
direction : to-lport
external_ids : {"neutron:lport"="cafd4862-c69c-46e4-b3d2-6141ce06b205"}
log : false
match : "outport == \"cafd4862-c69c-46e4-b3d2-6141ce06b205\" && ip"
priority : 1001
_uuid : 6d4db3cf-c1f1-4006-ad66-ae582a6acd21
action : allow-related
direction : to-lport
external_ids : {"neutron:lport"="cafd4862-c69c-46e4-b3d2-6141ce06b205"}
log : false
match : "outport == \"cafd4862-c69c-46e4-b3d2-6141ce06b205\" && ip6 && ip6.src == $as_ip6_90a78a43_b549_4bee_8822_21fcccab58dc"
priority : 1002
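The ACLs attached to a logical switch can also be listed per switch. A sketch, assuming ``ovn-nbctl`` is available and using the logical switch name shown in the next step:
.. code-block:: console
   # ovn-nbctl acl-list neutron-670efade-7cd0-4d87-8a04-27f366eb8941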
#. The OVN mechanism driver updates the logical switch information with
the UUIDs of these objects.
.. code-block:: console
_uuid : 924500c4-8580-4d5f-a7ad-8769f6e58ff5
acls : [05a92f66-be48-461e-a7f1-b07bfbd3e667,
36dbb1b1-cd30-4454-a0bf-923646eb7c3f,
37f18377-d6c3-4c44-9e4d-2170710e50ff,
7b3f63b8-e69a-476c-ad3d-37de043232b2,
a5a787b8-7040-4b63-a20a-551bd73eb3d1,
a61d0068-b1aa-4900-9882-e0671d1fc131,
f8d27bfc-4d74-4e73-8fac-c84585443efd]
external_ids : {"neutron:network_name"=provider}
name : "neutron-670efade-7cd0-4d87-8a04-27f366eb8941"
ports : [38cf8b52-47c4-4e93-be8d-06bf71f6a7c9,
5e144ab9-3e08-4910-b936-869bbbf254c8,
a576b812-9c3e-4cfb-9752-5d8500b3adf9,
cc891503-1259-47a1-9349-1c0293876664]
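A summary of the switch, its ports, and their addresses can be obtained with (a sketch, assuming ``ovn-nbctl`` is available):
.. code-block:: console
   # ovn-nbctl show neutron-670efade-7cd0-4d87-8a04-27f366eb8941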
#. The OVN northbound service creates port bindings for the logical
ports and adds them to the appropriate multicast group.
* Port bindings
.. code-block:: console
_uuid : e73e3fcd-316a-4418-bbd5-a8a42032b1c3
chassis : fc5ab9e7-bc28-40e8-ad52-2949358cc088
datapath : bd0ab2b3-4cf4-4289-9529-ef430f6a89e6
logical_port : "cafd4862-c69c-46e4-b3d2-6141ce06b205"
mac : ["fa:16:3e:1c:ca:6a 203.0.113.103"]
options : {}
parent_port : []
tag : []
tunnel_key : 4
type : ""
* Multicast groups
.. code-block:: console
_uuid : 39b32ccd-fa49-4046-9527-13318842461e
datapath : bd0ab2b3-4cf4-4289-9529-ef430f6a89e6
name : _MC_flood
ports : [030024f4-61c3-4807-859b-07727447c427,
904c3108-234d-41c0-b93c-116b7e352a75,
cc5bcd19-bcae-4e29-8cee-3ec8a8a75d46,
e73e3fcd-316a-4418-bbd5-a8a42032b1c3]
tunnel_key : 65535
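Both objects live in the OVN southbound database and can be listed there. A sketch, assuming ``ovn-sbctl`` runs on the node hosting the southbound ``ovsdb-server``:
.. code-block:: console
   # ovn-sbctl list Port_Binding
   # ovn-sbctl list Multicast_Group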
#. The OVN northbound service translates the Address Set change into
the new Address Set in the OVN southbound database.
.. code-block:: console
_uuid : 2addbee3-7084-4fff-8f7b-15b1efebdaff
addresses : ["203.0.113.103"]
name : "as_ip4_90a78a43_b549_4bee_8822_21fcccab58dc"
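The translated entry can be verified with (a sketch, assuming ``ovn-sbctl`` is available):
.. code-block:: console
   # ovn-sbctl list Address_Set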
#. The OVN northbound service translates the ACL and logical port objects
into logical flows in the OVN southbound database.
.. code-block:: console
Datapath: bd0ab2b3-4cf4-4289-9529-ef430f6a89e6 Pipeline: ingress
table= 0( ls_in_port_sec_l2), priority= 50,
match=(inport == "cafd4862-c69c-46e4-b3d2-6141ce06b205" &&
eth.src == {fa:16:3e:1c:ca:6a}),
action=(next;)
table= 1( ls_in_port_sec_ip), priority= 90,
match=(inport == "cafd4862-c69c-46e4-b3d2-6141ce06b205" &&
eth.src == fa:16:3e:1c:ca:6a && ip4.src == {203.0.113.103}),
action=(next;)
table= 1( ls_in_port_sec_ip), priority= 90,
match=(inport == "cafd4862-c69c-46e4-b3d2-6141ce06b205" &&
eth.src == fa:16:3e:1c:ca:6a && ip4.src == 0.0.0.0 &&
ip4.dst == 255.255.255.255 && udp.src == 68 && udp.dst == 67),
action=(next;)
table= 1( ls_in_port_sec_ip), priority= 80,
match=(inport == "cafd4862-c69c-46e4-b3d2-6141ce06b205" &&
eth.src == fa:16:3e:1c:ca:6a && ip),
action=(drop;)
table= 2( ls_in_port_sec_nd), priority= 90,
match=(inport == "cafd4862-c69c-46e4-b3d2-6141ce06b205" &&
eth.src == fa:16:3e:1c:ca:6a &&
arp.sha == fa:16:3e:1c:ca:6a && (arp.spa == 203.0.113.103 )),
action=(next;)
table= 2( ls_in_port_sec_nd), priority= 80,
match=(inport == "cafd4862-c69c-46e4-b3d2-6141ce06b205" &&
(arp || nd)),
action=(drop;)
table= 3( ls_in_pre_acl), priority= 110,
match=(nd),
action=(next;)
table= 3( ls_in_pre_acl), priority= 100,
match=(ip),
action=(reg0[0] = 1; next;)
table= 6( ls_in_acl), priority=65535,
match=(ct.inv),
action=(drop;)
table= 6( ls_in_acl), priority=65535,
match=(nd),
action=(next;)
table= 6( ls_in_acl), priority=65535,
match=(ct.est && !ct.rel && !ct.new && !ct.inv),
action=(next;)
table= 6( ls_in_acl), priority=65535,
match=(!ct.est && ct.rel && !ct.new && !ct.inv),
action=(next;)
table= 6( ls_in_acl), priority= 2002,
match=(ct.new && (inport == "cafd4862-c69c-46e4-b3d2-6141ce06b205"
&& ip6)),
action=(reg0[1] = 1; next;)
table= 6( ls_in_acl), priority= 2002,
match=(inport == "cafd4862-c69c-46e4-b3d2-6141ce06b205" && ip4 &&
(ip4.dst == 255.255.255.255 || ip4.dst == 203.0.113.0/24) &&
udp && udp.src == 68 && udp.dst == 67),
action=(reg0[1] = 1; next;)
table= 6( ls_in_acl), priority= 2002,
match=(ct.new && (inport == "cafd4862-c69c-46e4-b3d2-6141ce06b205" &&
ip4)),
action=(reg0[1] = 1; next;)
table= 6( ls_in_acl), priority= 2001,
match=(inport == "cafd4862-c69c-46e4-b3d2-6141ce06b205" && ip),
action=(drop;)
table= 6( ls_in_acl), priority= 1,
match=(ip),
action=(reg0[1] = 1; next;)
table= 9( ls_in_arp_rsp), priority= 50,
match=(arp.tpa == 203.0.113.103 && arp.op == 1),
action=(eth.dst = eth.src; eth.src = fa:16:3e:1c:ca:6a;
arp.op = 2; /* ARP reply */ arp.tha = arp.sha;
arp.sha = fa:16:3e:1c:ca:6a; arp.tpa = arp.spa;
arp.spa = 203.0.113.103; outport = inport;
inport = ""; /* Allow sending out inport. */ output;)
table=10( ls_in_l2_lkup), priority= 50,
match=(eth.dst == fa:16:3e:1c:ca:6a),
action=(outport = "cafd4862-c69c-46e4-b3d2-6141ce06b205"; output;)
Datapath: bd0ab2b3-4cf4-4289-9529-ef430f6a89e6 Pipeline: egress
table= 1( ls_out_pre_acl), priority= 110,
match=(nd),
action=(next;)
table= 1( ls_out_pre_acl), priority= 100,
match=(ip),
action=(reg0[0] = 1; next;)
table= 4( ls_out_acl), priority=65535,
match=(!ct.est && ct.rel && !ct.new && !ct.inv),
action=(next;)
table= 4( ls_out_acl), priority=65535,
match=(ct.est && !ct.rel && !ct.new && !ct.inv),
action=(next;)
table= 4( ls_out_acl), priority=65535,
match=(ct.inv),
action=(drop;)
table= 4( ls_out_acl), priority=65535,
match=(nd),
action=(next;)
table= 4( ls_out_acl), priority= 2002,
match=(ct.new &&
(outport == "cafd4862-c69c-46e4-b3d2-6141ce06b205" && ip6 &&
ip6.src == $as_ip6_90a78a43_b549_4bee_8822_21fcccab58dc)),
action=(reg0[1] = 1; next;)
table= 4( ls_out_acl), priority= 2002,
match=(ct.new &&
(outport == "cafd4862-c69c-46e4-b3d2-6141ce06b205" && ip4 &&
ip4.src == $as_ip4_90a78a43_b549_4bee_8822_21fcccab58dc)),
action=(reg0[1] = 1; next;)
table= 4( ls_out_acl), priority= 2002,
match=(outport == "cafd4862-c69c-46e4-b3d2-6141ce06b205" && ip4 &&
ip4.src == 203.0.113.0/24 && udp && udp.src == 67 &&
udp.dst == 68),
action=(reg0[1] = 1; next;)
table= 4( ls_out_acl), priority= 2001,
match=(outport == "cafd4862-c69c-46e4-b3d2-6141ce06b205" && ip),
action=(drop;)
table= 4( ls_out_acl), priority= 1,
match=(ip),
action=(reg0[1] = 1; next;)
table= 6( ls_out_port_sec_ip), priority= 90,
match=(outport == "cafd4862-c69c-46e4-b3d2-6141ce06b205" &&
eth.dst == fa:16:3e:1c:ca:6a &&
ip4.dst == {255.255.255.255, 224.0.0.0/4, 203.0.113.103}),
action=(next;)
table= 6( ls_out_port_sec_ip), priority= 80,
match=(outport == "cafd4862-c69c-46e4-b3d2-6141ce06b205" &&
eth.dst == fa:16:3e:1c:ca:6a && ip),
action=(drop;)
table= 7( ls_out_port_sec_l2), priority= 50,
match=(outport == "cafd4862-c69c-46e4-b3d2-6141ce06b205" &&
eth.dst == {fa:16:3e:1c:ca:6a}),
action=(output;)
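The complete set of logical flows for this datapath can be dumped from the southbound database. A sketch, assuming ``ovn-sbctl`` is available; the datapath can be given by its UUID:
.. code-block:: console
   # ovn-sbctl lflow-list bd0ab2b3-4cf4-4289-9529-ef430f6a89e6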
#. The OVN controller service on each compute node translates these objects
into flows on the integration bridge ``br-int``. Exact flows depend on
whether the compute node containing the instance also contains a DHCP agent
on the subnet.
* On the compute node containing the instance, the Compute service creates
a port that connects the instance to the integration bridge and OVN
creates the following flows:
.. code-block:: console
# ovs-ofctl show br-int
OFPT_FEATURES_REPLY (xid=0x2): dpid:000022024a1dc045
n_tables:254, n_buffers:256
capabilities: FLOW_STATS TABLE_STATS PORT_STATS QUEUE_STATS ARP_MATCH_IP
actions: output enqueue set_vlan_vid set_vlan_pcp strip_vlan mod_dl_src mod_dl_dst mod_nw_src mod_nw_dst mod_nw_tos mod_tp_src mod_tp_dst
9(tapcafd4862-c6): addr:fe:16:3e:1c:ca:6a
config: 0
state: 0
current: 10MB-FD COPPER
speed: 10 Mbps now, 0 Mbps max
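The OpenFlow entries in the following listing are typically captured with a command along these lines (a sketch; the exact table numbers vary between OVN releases):
.. code-block:: console
   # ovs-ofctl dump-flows br-int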
.. code-block:: console
cookie=0x0, duration=184.992s, table=0, n_packets=175, n_bytes=15270,
idle_age=15, priority=100,in_port=9
actions=load:0x3->NXM_NX_REG5[],load:0x4->OXM_OF_METADATA[],
load:0x4->NXM_NX_REG6[],resubmit(,16)
cookie=0x0, duration=191.687s, table=16, n_packets=175, n_bytes=15270,
idle_age=15, priority=50,reg6=0x4,metadata=0x4,
dl_src=fa:16:3e:1c:ca:6a
actions=resubmit(,17)
cookie=0x0, duration=191.687s, table=17, n_packets=2, n_bytes=684,
idle_age=112, priority=90,udp,reg6=0x4,metadata=0x4,
dl_src=fa:16:3e:1c:ca:6a,nw_src=0.0.0.0,
nw_dst=255.255.255.255,tp_src=68,tp_dst=67
actions=resubmit(,18)
cookie=0x0, duration=191.687s, table=17, n_packets=146, n_bytes=12780,
idle_age=20, priority=90,ip,reg6=0x4,metadata=0x4,
dl_src=fa:16:3e:1c:ca:6a,nw_src=203.0.113.103
actions=resubmit(,18)
cookie=0x0, duration=191.687s, table=17, n_packets=17, n_bytes=1386,
idle_age=92, priority=80,ipv6,reg6=0x4,metadata=0x4,
dl_src=fa:16:3e:1c:ca:6a
actions=drop
cookie=0x0, duration=191.687s, table=17, n_packets=0, n_bytes=0,
idle_age=191, priority=80,ip,reg6=0x4,metadata=0x4,
dl_src=fa:16:3e:1c:ca:6a
actions=drop
cookie=0x0, duration=191.687s, table=18, n_packets=10, n_bytes=420,
idle_age=15, priority=90,arp,reg6=0x4,metadata=0x4,
dl_src=fa:16:3e:1c:ca:6a,arp_spa=203.0.113.103,
arp_sha=fa:16:3e:1c:ca:6a
actions=resubmit(,19)
cookie=0x0, duration=191.687s, table=18, n_packets=0, n_bytes=0,
idle_age=191, priority=80,icmp6,reg6=0x4,metadata=0x4,
icmp_type=136,icmp_code=0
actions=drop
cookie=0x0, duration=191.687s, table=18, n_packets=0, n_bytes=0,
idle_age=191, priority=80,icmp6,reg6=0x4,metadata=0x4,
icmp_type=135,icmp_code=0
actions=drop
cookie=0x0, duration=191.687s, table=18, n_packets=0, n_bytes=0,
idle_age=191, priority=80,arp,reg6=0x4,metadata=0x4
actions=drop
cookie=0x0, duration=75.033s, table=19, n_packets=0, n_bytes=0,
idle_age=75, priority=110,icmp6,metadata=0x4,icmp_type=135,
icmp_code=0
actions=resubmit(,20)
cookie=0x0, duration=75.032s, table=19, n_packets=0, n_bytes=0,
idle_age=75, priority=110,icmp6,metadata=0x4,icmp_type=136,
icmp_code=0
actions=resubmit(,20)
cookie=0x0, duration=75.032s, table=19, n_packets=34, n_bytes=5170,
idle_age=49, priority=100,ip,metadata=0x4
actions=load:0x1->NXM_NX_REG0[0],resubmit(,20)
cookie=0x0, duration=75.032s, table=19, n_packets=0, n_bytes=0,
idle_age=75, priority=100,ipv6,metadata=0x4
actions=load:0x1->NXM_NX_REG0[0],resubmit(,20)
cookie=0x0, duration=75.032s, table=22, n_packets=0, n_bytes=0,
idle_age=75, priority=65535,icmp6,metadata=0x4,icmp_type=136,
icmp_code=0
actions=resubmit(,23)
cookie=0x0, duration=75.032s, table=22, n_packets=0, n_bytes=0,
idle_age=75, priority=65535,icmp6,metadata=0x4,icmp_type=135,
icmp_code=0
actions=resubmit(,23)
cookie=0x0, duration=75.032s, table=22, n_packets=13, n_bytes=1118,
idle_age=49, priority=65535,ct_state=-new+est-rel-inv+trk,
metadata=0x4
actions=resubmit(,23)
cookie=0x0, duration=75.032s, table=22, n_packets=0, n_bytes=0,
idle_age=75, priority=65535,ct_state=-new-est+rel-inv+trk,
metadata=0x4
actions=resubmit(,23)
cookie=0x0, duration=75.032s, table=22, n_packets=0, n_bytes=0,
idle_age=75, priority=65535,ct_state=+inv+trk,metadata=0x4
actions=drop
cookie=0x0, duration=75.033s, table=22, n_packets=0, n_bytes=0,
idle_age=75, priority=2002,ct_state=+new+trk,ipv6,reg6=0x4,
metadata=0x4
actions=load:0x1->NXM_NX_REG0[1],resubmit(,23)
cookie=0x0, duration=75.032s, table=22, n_packets=15, n_bytes=1816,
idle_age=49, priority=2002,ct_state=+new+trk,ip,reg6=0x4,
metadata=0x4
actions=load:0x1->NXM_NX_REG0[1],resubmit(,23)
cookie=0x0, duration=75.032s, table=22, n_packets=0, n_bytes=0,
idle_age=75, priority=2002,udp,reg6=0x4,metadata=0x4,
nw_dst=203.0.113.0/24,tp_src=68,tp_dst=67
actions=load:0x1->NXM_NX_REG0[1],resubmit(,23)
cookie=0x0, duration=75.032s, table=22, n_packets=0, n_bytes=0,
idle_age=75, priority=2002,udp,reg6=0x4,metadata=0x4,
nw_dst=255.255.255.255,tp_src=68,tp_dst=67
actions=load:0x1->NXM_NX_REG0[1],resubmit(,23)
cookie=0x0, duration=75.033s, table=22, n_packets=0, n_bytes=0,
idle_age=75, priority=2001,ip,reg6=0x4,metadata=0x4
actions=drop
cookie=0x0, duration=75.032s, table=22, n_packets=0, n_bytes=0,
idle_age=75, priority=2001,ipv6,reg6=0x4,metadata=0x4
actions=drop
cookie=0x0, duration=75.032s, table=22, n_packets=6, n_bytes=2236,
idle_age=54, priority=1,ip,metadata=0x4
actions=load:0x1->NXM_NX_REG0[1],resubmit(,23)
cookie=0x0, duration=75.032s, table=22, n_packets=0, n_bytes=0,
idle_age=75, priority=1,ipv6,metadata=0x4
actions=load:0x1->NXM_NX_REG0[1],resubmit(,23)
cookie=0x0, duration=67.064s, table=25, n_packets=0, n_bytes=0,
idle_age=67, priority=50,arp,metadata=0x4,arp_tpa=203.0.113.103,
arp_op=1
actions=move:NXM_OF_ETH_SRC[]->NXM_OF_ETH_DST[],
mod_dl_src:fa:16:3e:1c:ca:6a,load:0x2->NXM_OF_ARP_OP[],
move:NXM_NX_ARP_SHA[]->NXM_NX_ARP_THA[],
load:0xfa163ed63dca->NXM_NX_ARP_SHA[],
move:NXM_OF_ARP_SPA[]->NXM_OF_ARP_TPA[],
load:0xc0a81268->NXM_OF_ARP_SPA[],
move:NXM_NX_REG6[]->NXM_NX_REG7[],load:0->NXM_NX_REG6[],
load:0->NXM_OF_IN_PORT[],resubmit(,32)
cookie=0x0, duration=75.033s, table=26, n_packets=19, n_bytes=2776,
idle_age=44, priority=50,metadata=0x4,dl_dst=fa:16:3e:1c:ca:6a
actions=load:0x4->NXM_NX_REG7[],resubmit(,32)
cookie=0x0, duration=221031.310s, table=33, n_packets=72, n_bytes=6292,
idle_age=20, hard_age=65534, priority=100,reg7=0x3,metadata=0x4
actions=load:0x1->NXM_NX_REG7[],resubmit(,33)
cookie=0x0, duration=184.992s, table=34, n_packets=2, n_bytes=684,
idle_age=112, priority=100,reg6=0x4,reg7=0x4,metadata=0x4
actions=drop
cookie=0x0, duration=75.034s, table=49, n_packets=0, n_bytes=0,
idle_age=75, priority=110,icmp6,metadata=0x4,icmp_type=135,
icmp_code=0
actions=resubmit(,50)
cookie=0x0, duration=75.033s, table=49, n_packets=0, n_bytes=0,
idle_age=75, priority=110,icmp6,metadata=0x4,icmp_type=136,
icmp_code=0
actions=resubmit(,50)
cookie=0x0, duration=75.033s, table=49, n_packets=38, n_bytes=6566,
idle_age=49, priority=100,ip,metadata=0x4
actions=load:0x1->NXM_NX_REG0[0],resubmit(,50)
cookie=0x0, duration=75.033s, table=49, n_packets=0, n_bytes=0,
idle_age=75, priority=100,ipv6,metadata=0x4
actions=load:0x1->NXM_NX_REG0[0],resubmit(,50)
cookie=0x0, duration=75.033s, table=52, n_packets=0, n_bytes=0,
idle_age=75, priority=65535,ct_state=-new-est+rel-inv+trk,
metadata=0x4
actions=resubmit(,53)
cookie=0x0, duration=75.033s, table=52, n_packets=13, n_bytes=1118,
idle_age=49, priority=65535,ct_state=-new+est-rel-inv+trk,
metadata=0x4
actions=resubmit(,53)
cookie=0x0, duration=75.033s, table=52, n_packets=0, n_bytes=0,
idle_age=75, priority=65535,icmp6,metadata=0x4,icmp_type=135,
icmp_code=0
actions=resubmit(,53)
cookie=0x0, duration=75.033s, table=52, n_packets=0, n_bytes=0,
idle_age=75, priority=65535,icmp6,metadata=0x4,icmp_type=136,
icmp_code=0
actions=resubmit(,53)
cookie=0x0, duration=75.033s, table=52, n_packets=0, n_bytes=0,
idle_age=75, priority=65535,ct_state=+inv+trk,metadata=0x4
actions=drop
cookie=0x0, duration=75.034s, table=52, n_packets=4, n_bytes=1538,
idle_age=54, priority=2002,udp,reg7=0x4,metadata=0x4,
nw_src=203.0.113.0/24,tp_src=67,tp_dst=68
actions=load:0x1->NXM_NX_REG0[1],resubmit(,53)
cookie=0x0, duration=75.033s, table=52, n_packets=0, n_bytes=0,
idle_age=75, priority=2002,ct_state=+new+trk,ip,reg7=0x4,
metadata=0x4,nw_src=203.0.113.103
actions=load:0x1->NXM_NX_REG0[1],resubmit(,53)
cookie=0x0, duration=2.041s, table=52, n_packets=0, n_bytes=0,
idle_age=2, priority=2002,ct_state=+new+trk,ipv6,reg7=0x4,
metadata=0x4,ipv6_src=::2/::2
actions=load:0x1->NXM_NX_REG0[1],resubmit(,53)
cookie=0x0, duration=75.033s, table=52, n_packets=2, n_bytes=698,
idle_age=54, priority=2001,ip,reg7=0x4,metadata=0x4
actions=drop
cookie=0x0, duration=75.033s, table=52, n_packets=0, n_bytes=0,
idle_age=75, priority=2001,ipv6,reg7=0x4,metadata=0x4
actions=drop
cookie=0x0, duration=75.034s, table=52, n_packets=0, n_bytes=0,
idle_age=75, priority=1,ipv6,metadata=0x4
actions=load:0x1->NXM_NX_REG0[1],resubmit(,53)
cookie=0x0, duration=75.033s, table=52, n_packets=19, n_bytes=3212,
idle_age=49, priority=1,ip,metadata=0x4
actions=load:0x1->NXM_NX_REG0[1],resubmit(,53)
cookie=0x0, duration=75.034s, table=54, n_packets=17, n_bytes=2656,
idle_age=49, priority=90,ip,reg7=0x4,metadata=0x4,
dl_dst=fa:16:3e:1c:ca:6a,nw_dst=203.0.113.103
actions=resubmit(,55)
cookie=0x0, duration=75.033s, table=54, n_packets=0, n_bytes=0,
idle_age=75, priority=90,ip,reg7=0x4,metadata=0x4,
dl_dst=fa:16:3e:1c:ca:6a,nw_dst=255.255.255.255
actions=resubmit(,55)
cookie=0x0, duration=75.033s, table=54, n_packets=0, n_bytes=0,
idle_age=75, priority=90,ip,reg7=0x4,metadata=0x4,
dl_dst=fa:16:3e:1c:ca:6a,nw_dst=224.0.0.0/4
actions=resubmit(,55)
cookie=0x0, duration=75.033s, table=54, n_packets=0, n_bytes=0,
idle_age=75, priority=80,ip,reg7=0x4,metadata=0x4,
dl_dst=fa:16:3e:1c:ca:6a
actions=drop
cookie=0x0, duration=75.033s, table=54, n_packets=0, n_bytes=0,
idle_age=75, priority=80,ipv6,reg7=0x4,metadata=0x4,
dl_dst=fa:16:3e:1c:ca:6a
actions=drop
cookie=0x0, duration=75.033s, table=55, n_packets=21, n_bytes=2860,
idle_age=44, priority=50,reg7=0x4,metadata=0x4,
dl_dst=fa:16:3e:1c:ca:6a
actions=resubmit(,64)
cookie=0x0, duration=184.992s, table=64, n_packets=166, n_bytes=15088,
idle_age=15, priority=100,reg7=0x4,metadata=0x4
actions=output:9
* For each compute node that only contains a DHCP agent on the subnet, OVN
creates the following flows:
.. code-block:: console
cookie=0x0, duration=189.649s, table=16, n_packets=0, n_bytes=0,
idle_age=189, priority=50,reg6=0x4,metadata=0x4,
dl_src=fa:16:3e:1c:ca:6a
actions=resubmit(,17)
cookie=0x0, duration=189.650s, table=17, n_packets=0, n_bytes=0,
idle_age=189, priority=90,udp,reg6=0x4,metadata=0x4,
dl_src=fa:16:3e:1c:ca:6a,nw_src=0.0.0.0,
nw_dst=255.255.255.255,tp_src=68,tp_dst=67
actions=resubmit(,18)
cookie=0x0, duration=189.649s, table=17, n_packets=0, n_bytes=0,
idle_age=189, priority=90,ip,reg6=0x4,metadata=0x4,
dl_src=fa:16:3e:1c:ca:6a,nw_src=203.0.113.103
actions=resubmit(,18)
cookie=0x0, duration=189.650s, table=17, n_packets=0, n_bytes=0,
idle_age=189, priority=80,ipv6,reg6=0x4,metadata=0x4,
dl_src=fa:16:3e:1c:ca:6a
actions=drop
cookie=0x0, duration=189.650s, table=17, n_packets=0, n_bytes=0,
idle_age=189, priority=80,ip,reg6=0x4,metadata=0x4,
dl_src=fa:16:3e:1c:ca:6a
actions=drop
cookie=0x0, duration=189.650s, table=18, n_packets=0, n_bytes=0,
idle_age=189, priority=90,arp,reg6=0x4,metadata=0x4,
dl_src=fa:16:3e:1c:ca:6a,arp_spa=203.0.113.103,
arp_sha=fa:16:3e:1c:ca:6a
actions=resubmit(,19)
cookie=0x0, duration=189.650s, table=18, n_packets=0, n_bytes=0,
idle_age=189, priority=80,icmp6,reg6=0x4,metadata=0x4,
icmp_type=136,icmp_code=0
actions=drop
cookie=0x0, duration=189.650s, table=18, n_packets=0, n_bytes=0,
idle_age=189, priority=80,icmp6,reg6=0x4,metadata=0x4,
icmp_type=135,icmp_code=0
actions=drop
cookie=0x0, duration=189.649s, table=18, n_packets=0, n_bytes=0,
idle_age=189, priority=80,arp,reg6=0x4,metadata=0x4
actions=drop
cookie=0x0, duration=79.452s, table=19, n_packets=0, n_bytes=0,
idle_age=79, priority=110,icmp6,metadata=0x4,icmp_type=135,
icmp_code=0
actions=resubmit(,20)
cookie=0x0, duration=79.450s, table=19, n_packets=0, n_bytes=0,
idle_age=79, priority=110,icmp6,metadata=0x4,icmp_type=136,
icmp_code=0
actions=resubmit(,20)
cookie=0x0, duration=79.452s, table=19, n_packets=0, n_bytes=0,
idle_age=79, priority=100,ipv6,metadata=0x4
actions=load:0x1->NXM_NX_REG0[0],resubmit(,20)
cookie=0x0, duration=79.450s, table=19, n_packets=18, n_bytes=3164,
idle_age=57, priority=100,ip,metadata=0x4
actions=load:0x1->NXM_NX_REG0[0],resubmit(,20)
cookie=0x0, duration=79.450s, table=22, n_packets=6, n_bytes=510,
idle_age=57, priority=65535,ct_state=-new+est-rel-inv+trk,
metadata=0x4
actions=resubmit(,23)
cookie=0x0, duration=79.450s, table=22, n_packets=0, n_bytes=0,
idle_age=79, priority=65535,ct_state=-new-est+rel-inv+trk,
metadata=0x4
actions=resubmit(,23)
cookie=0x0, duration=79.450s, table=22, n_packets=0, n_bytes=0,
idle_age=79, priority=65535,icmp6,metadata=0x4,icmp_type=136,
icmp_code=0
actions=resubmit(,23)
cookie=0x0, duration=79.450s, table=22, n_packets=0, n_bytes=0,
idle_age=79, priority=65535,icmp6,metadata=0x4,icmp_type=135,
icmp_code=0
actions=resubmit(,23)
cookie=0x0, duration=79.450s, table=22, n_packets=0, n_bytes=0,
idle_age=79, priority=65535,ct_state=+inv+trk,metadata=0x4
actions=drop
cookie=0x0, duration=79.453s, table=22, n_packets=0, n_bytes=0,
idle_age=79, priority=2002,ct_state=+new+trk,ipv6,reg6=0x4,
metadata=0x4
actions=load:0x1->NXM_NX_REG0[1],resubmit(,23)
cookie=0x0, duration=79.450s, table=22, n_packets=0, n_bytes=0,
idle_age=79, priority=2002,ct_state=+new+trk,ip,reg6=0x4,
metadata=0x4
actions=load:0x1->NXM_NX_REG0[1],resubmit(,23)
cookie=0x0, duration=79.450s, table=22, n_packets=0, n_bytes=0,
idle_age=79, priority=2002,udp,reg6=0x4,metadata=0x4,
nw_dst=203.0.113.0/24,tp_src=68,tp_dst=67
actions=load:0x1->NXM_NX_REG0[1],resubmit(,23)
cookie=0x0, duration=79.450s, table=22, n_packets=0, n_bytes=0,
idle_age=79, priority=2002,udp,reg6=0x4,metadata=0x4,
nw_dst=255.255.255.255,tp_src=68,tp_dst=67
actions=load:0x1->NXM_NX_REG0[1],resubmit(,23)
cookie=0x0, duration=79.452s, table=22, n_packets=0, n_bytes=0,
idle_age=79, priority=2001,ip,reg6=0x4,metadata=0x4
actions=drop
cookie=0x0, duration=79.450s, table=22, n_packets=0, n_bytes=0,
idle_age=79, priority=2001,ipv6,reg6=0x4,metadata=0x4
actions=drop
cookie=0x0, duration=79.450s, table=22, n_packets=0, n_bytes=0,
idle_age=79, priority=1,ipv6,metadata=0x4
actions=load:0x1->NXM_NX_REG0[1],resubmit(,23)
cookie=0x0, duration=79.450s, table=22, n_packets=12, n_bytes=2654,
idle_age=57, priority=1,ip,metadata=0x4
actions=load:0x1->NXM_NX_REG0[1],resubmit(,23)
cookie=0x0, duration=71.483s, table=25, n_packets=0, n_bytes=0,
idle_age=71, priority=50,arp,metadata=0x4,arp_tpa=203.0.113.103,
arp_op=1
actions=move:NXM_OF_ETH_SRC[]->NXM_OF_ETH_DST[],
mod_dl_src:fa:16:3e:1c:ca:6a,load:0x2->NXM_OF_ARP_OP[],
move:NXM_NX_ARP_SHA[]->NXM_NX_ARP_THA[],
load:0xfa163ed63dca->NXM_NX_ARP_SHA[],
move:NXM_OF_ARP_SPA[]->NXM_OF_ARP_TPA[],
load:0xc0a81268->NXM_OF_ARP_SPA[],
move:NXM_NX_REG6[]->NXM_NX_REG7[],load:0->NXM_NX_REG6[],
load:0->NXM_OF_IN_PORT[],resubmit(,32)
cookie=0x0, duration=79.450s, table=26, n_packets=8, n_bytes=1258,
idle_age=57, priority=50,metadata=0x4,dl_dst=fa:16:3e:1c:ca:6a
actions=load:0x4->NXM_NX_REG7[],resubmit(,32)
cookie=0x0, duration=182.952s, table=33, n_packets=74, n_bytes=7040,
idle_age=18, priority=100,reg7=0x4,metadata=0x4
actions=load:0x1->NXM_NX_REG7[],resubmit(,33)
cookie=0x0, duration=79.451s, table=49, n_packets=0, n_bytes=0,
idle_age=79, priority=110,icmp6,metadata=0x4,icmp_type=135,
icmp_code=0
actions=resubmit(,50)
cookie=0x0, duration=79.450s, table=49, n_packets=0, n_bytes=0,
idle_age=79, priority=110,icmp6,metadata=0x4,icmp_type=136,
icmp_code=0
actions=resubmit(,50)
cookie=0x0, duration=79.450s, table=49, n_packets=18, n_bytes=3164,
idle_age=57, priority=100,ip,metadata=0x4
actions=load:0x1->NXM_NX_REG0[0],resubmit(,50)
cookie=0x0, duration=79.450s, table=49, n_packets=0, n_bytes=0,
idle_age=79, priority=100,ipv6,metadata=0x4
actions=load:0x1->NXM_NX_REG0[0],resubmit(,50)
cookie=0x0, duration=79.450s, table=52, n_packets=0, n_bytes=0,
idle_age=79, priority=65535,ct_state=-new-est+rel-inv+trk,
metadata=0x4
actions=resubmit(,53)
cookie=0x0, duration=79.450s, table=52, n_packets=6, n_bytes=510,
idle_age=57, priority=65535,ct_state=-new+est-rel-inv+trk,
metadata=0x4
actions=resubmit(,53)
cookie=0x0, duration=79.450s, table=52, n_packets=0, n_bytes=0,
idle_age=79, priority=65535,icmp6,metadata=0x4,icmp_type=135,
icmp_code=0
actions=resubmit(,53)
cookie=0x0, duration=79.450s, table=52, n_packets=0, n_bytes=0,
idle_age=79, priority=65535,icmp6,metadata=0x4,icmp_type=136,
icmp_code=0
actions=resubmit(,53)
cookie=0x0, duration=79.450s, table=52, n_packets=0, n_bytes=0,
idle_age=79, priority=65535,ct_state=+inv+trk,metadata=0x4
actions=drop
cookie=0x0, duration=79.452s, table=52, n_packets=0, n_bytes=0,
idle_age=79, priority=2002,udp,reg7=0x4,metadata=0x4,
nw_src=203.0.113.0/24,tp_src=67,tp_dst=68
actions=load:0x1->NXM_NX_REG0[1],resubmit(,53)
cookie=0x0, duration=79.450s, table=52, n_packets=0, n_bytes=0,
idle_age=79, priority=2002,ct_state=+new+trk,ip,reg7=0x4,
metadata=0x4,nw_src=203.0.113.103
actions=load:0x1->NXM_NX_REG0[1],resubmit(,53)
cookie=0x0, duration=71.483s, table=52, n_packets=0, n_bytes=0,
idle_age=71, priority=2002,ct_state=+new+trk,ipv6,reg7=0x4,
metadata=0x4
actions=load:0x1->NXM_NX_REG0[1],resubmit(,53)
cookie=0x0, duration=79.450s, table=52, n_packets=0, n_bytes=0,
idle_age=79, priority=2001,ipv6,reg7=0x4,metadata=0x4
actions=drop
cookie=0x0, duration=79.450s, table=52, n_packets=0, n_bytes=0,
idle_age=79, priority=2001,ip,reg7=0x4,metadata=0x4
actions=drop
cookie=0x0, duration=79.453s, table=52, n_packets=0, n_bytes=0,
idle_age=79, priority=1,ipv6,metadata=0x4
actions=load:0x1->NXM_NX_REG0[1],resubmit(,53)
cookie=0x0, duration=79.450s, table=52, n_packets=12, n_bytes=2654,
idle_age=57, priority=1,ip,metadata=0x4
actions=load:0x1->NXM_NX_REG0[1],resubmit(,53)
cookie=0x0, duration=79.452s, table=54, n_packets=0, n_bytes=0,
idle_age=79, priority=90,ip,reg7=0x4,metadata=0x4,
dl_dst=fa:16:3e:1c:ca:6a,nw_dst=255.255.255.255
actions=resubmit(,55)
cookie=0x0, duration=79.452s, table=54, n_packets=0, n_bytes=0,
idle_age=79, priority=90,ip,reg7=0x4,metadata=0x4,
dl_dst=fa:16:3e:1c:ca:6a,nw_dst=203.0.113.103
actions=resubmit(,55)
cookie=0x0, duration=79.452s, table=54, n_packets=0, n_bytes=0,
idle_age=79, priority=90,ip,reg7=0x4,metadata=0x4,
dl_dst=fa:16:3e:1c:ca:6a,nw_dst=224.0.0.0/4
actions=resubmit(,55)
cookie=0x0, duration=79.450s, table=54, n_packets=0, n_bytes=0,
idle_age=79, priority=80,ip,reg7=0x4,metadata=0x4,
dl_dst=fa:16:3e:1c:ca:6a
actions=drop
cookie=0x0, duration=79.450s, table=54, n_packets=0, n_bytes=0,
idle_age=79, priority=80,ipv6,reg7=0x4,metadata=0x4,
dl_dst=fa:16:3e:1c:ca:6a
actions=drop
cookie=0x0, duration=79.450s, table=55, n_packets=0, n_bytes=0,
idle_age=79, priority=50,reg7=0x4,metadata=0x4,
dl_dst=fa:16:3e:1c:ca:6a
actions=resubmit(,64)
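The OpenFlow listings above are sample captures; cookies, durations, and packet
counters will differ on any real deployment. To take an equivalent snapshot on a
compute node, the integration bridge can be dumped directly (a generic OVS
command, shown here as a sketch):
.. code-block:: console
# ovs-ofctl dump-flows br-int
Depending on the OpenFlow versions enabled on ``br-int``, adding
``-O OpenFlow13`` to the command may be required.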
.. _refarch-launch-instance-selfservice-network:
Launch an instance on a self-service network
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
To launch an instance on a self-service network, follow the same steps as
:ref:`launching an instance on the provider network
<refarch-launch-instance-provider-network>`, but using the UUID of the
self-service network.
OVN operations
^^^^^^^^^^^^^^
The OVN mechanism driver and OVN perform the following operations when
launching an instance.
#. The OVN mechanism driver creates a logical port for the instance.
.. code-block:: console
_uuid : c754d1d2-a7fb-4dd0-b14c-c076962b06b9
addresses : ["fa:16:3e:15:7d:13 192.168.1.5"]
enabled : true
external_ids : {"neutron:port_name"=""}
name : "eaf36f62-5629-4ec4-b8b9-5e562c40e7ae"
options : {}
parent_name : []
port_security : ["fa:16:3e:15:7d:13 192.168.1.5"]
tag : []
type : ""
up : true
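To read this record back outside of this example, the northbound database can be
queried directly; the command below is a generic sketch that lists every logical
switch port, so expect to filter the output:
.. code-block:: console
# ovn-nbctl list Logical_Switch_Port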
#. The OVN mechanism driver updates the appropriate Address Set object(s)
with the address of the new instance:
.. code-block:: console
_uuid : d0becdea-e1ed-48c4-9afc-e278cdef4629
addresses : ["192.168.1.5", "203.0.113.103"]
external_ids : {"neutron:security_group_name"=default}
name : "as_ip4_90a78a43_b549_4bee_8822_21fcccab58dc"
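The Address_Set rows can be verified the same way (sketch):
.. code-block:: console
# ovn-nbctl list Address_Set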
#. The OVN mechanism driver creates ACL entries for this port and
any other ports in the project.
.. code-block:: console
_uuid : 00ecbe8f-c82a-4e18-b688-af2a1941cff7
action : allow
direction : from-lport
external_ids : {"neutron:lport"="eaf36f62-5629-4ec4-b8b9-5e562c40e7ae"}
log : false
match : "inport == \"eaf36f62-5629-4ec4-b8b9-5e562c40e7ae\" && ip4 && (ip4.dst == 255.255.255.255 || ip4.dst == 192.168.1.0/24) && udp && udp.src == 68 && udp.dst == 67"
priority : 1002
_uuid : 2bf5b7ed-008e-4676-bba5-71fe58897886
action : allow-related
direction : from-lport
external_ids : {"neutron:lport"="eaf36f62-5629-4ec4-b8b9-5e562c40e7ae"}
log : false
match : "inport == \"eaf36f62-5629-4ec4-b8b9-5e562c40e7ae\" && ip4"
priority : 1002
_uuid : 330b4e27-074f-446a-849b-9ab0018b65c5
action : allow
direction : to-lport
external_ids : {"neutron:lport"="eaf36f62-5629-4ec4-b8b9-5e562c40e7ae"}
log : false
match : "outport == \"eaf36f62-5629-4ec4-b8b9-5e562c40e7ae\" && ip4 && ip4.src == 192.168.1.0/24 && udp && udp.src == 67 && udp.dst == 68"
priority : 1002
_uuid : 683f52f2-4be6-4bd7-a195-6c782daa7840
action : allow-related
direction : from-lport
external_ids : {"neutron:lport"="eaf36f62-5629-4ec4-b8b9-5e562c40e7ae"}
log : false
match : "inport == \"eaf36f62-5629-4ec4-b8b9-5e562c40e7ae\" && ip6"
priority : 1002
_uuid : 8160f0b4-b344-43d5-bbd4-ca63a71aa4fc
action : drop
direction : to-lport
external_ids : {"neutron:lport"="eaf36f62-5629-4ec4-b8b9-5e562c40e7ae"}
log : false
match : "outport == \"eaf36f62-5629-4ec4-b8b9-5e562c40e7ae\" && ip"
priority : 1001
_uuid : 97c6b8ca-14ea-4812-8571-95d640a88f4f
action : allow-related
direction : to-lport
external_ids : {"neutron:lport"="eaf36f62-5629-4ec4-b8b9-5e562c40e7ae"}
log : false
match : "outport == \"eaf36f62-5629-4ec4-b8b9-5e562c40e7ae\" && ip6"
priority : 1002
_uuid : 9cfd8eb5-5daa-422e-8fe8-bd22fd7fa826
action : allow-related
direction : to-lport
external_ids : {"neutron:lport"="eaf36f62-5629-4ec4-b8b9-5e562c40e7ae"}
log : false
match : "outport == \"eaf36f62-5629-4ec4-b8b9-5e562c40e7ae\" && ip4 && ip4.src == 0.0.0.0/0 && icmp4"
priority : 1002
_uuid : f72c2431-7a64-4cea-b84a-118bdc761be2
action : drop
direction : from-lport
external_ids : {"neutron:lport"="eaf36f62-5629-4ec4-b8b9-5e562c40e7ae"}
log : false
match : "inport == \"eaf36f62-5629-4ec4-b8b9-5e562c40e7ae\" && ip"
priority : 1001
_uuid : f94133fa-ed27-4d5e-a806-0d528e539cb3
action : allow-related
direction : to-lport
external_ids : {"neutron:lport"="eaf36f62-5629-4ec4-b8b9-5e562c40e7ae"}
log : false
match : "outport == \"eaf36f62-5629-4ec4-b8b9-5e562c40e7ae\" && ip4 && ip4.src == $as_ip4_90a78a43_b549_4bee_8822_21fcccab58dc"
priority : 1002
_uuid : 7f7a92ff-b7e9-49b0-8be0-0dc388035df3
action : allow-related
direction : to-lport
external_ids : {"neutron:lport"="eaf36f62-5629-4ec4-b8b9-5e562c40e7ae"}
log : false
match : "outport == \"eaf36f62-5629-4ec4-b8b9-5e562c40e7ae\" && ip6 && ip6.src == $as_ip6_90a78a43_b549_4bee_8822_21fcccab58dc"
priority : 1002
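A sketch for listing these ACLs on a live system; the second command takes the
logical switch name (``neutron-<network-id>``, shown in the next step for this
example) and must be adapted to your environment:
.. code-block:: console
# ovn-nbctl list ACL
# ovn-nbctl acl-list neutron-6cc81cae-8c5f-4c09-aaf2-35d0aa95c084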
#. The OVN mechanism driver updates the logical switch information with
the UUIDs of these objects.
.. code-block:: console
_uuid : 15e2c80b-1461-4003-9869-80416cd97de5
acls : [00ecbe8f-c82a-4e18-b688-af2a1941cff7,
2bf5b7ed-008e-4676-bba5-71fe58897886,
330b4e27-074f-446a-849b-9ab0018b65c5,
683f52f2-4be6-4bd7-a195-6c782daa7840,
7f7a92ff-b7e9-49b0-8be0-0dc388035df3,
8160f0b4-b344-43d5-bbd4-ca63a71aa4fc,
97c6b8ca-14ea-4812-8571-95d640a88f4f,
9cfd8eb5-5daa-422e-8fe8-bd22fd7fa826,
f72c2431-7a64-4cea-b84a-118bdc761be2,
f94133fa-ed27-4d5e-a806-0d528e539cb3]
external_ids : {"neutron:network_name"="selfservice"}
name : "neutron-6cc81cae-8c5f-4c09-aaf2-35d0aa95c084"
ports : [2df457a5-f71c-4a2f-b9ab-d9e488653872,
67c2737c-b380-492b-883b-438048b48e56,
c754d1d2-a7fb-4dd0-b14c-c076962b06b9]
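The switch row above, including its ``acls`` and ``ports`` references, can be
read back with either of the following; ``ovn-nbctl show`` gives a condensed
per-switch view (sketch):
.. code-block:: console
# ovn-nbctl list Logical_Switch
# ovn-nbctl show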
#. With address sets, the OVN mechanism driver no longer needs to create
separate ACLs for other instances in the project; the ACLs that reference
the address set cover them automatically.
#. The OVN northbound service translates the updated Address Set object(s)
into updated Address Set objects in the OVN southbound database:
.. code-block:: console
_uuid : 2addbee3-7084-4fff-8f7b-15b1efebdaff
addresses : ["192.168.1.5", "203.0.113.103"]
name : "as_ip4_90a78a43_b549_4bee_8822_21fcccab58dc"
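A sketch for checking the southbound copy, from any host that can reach the
southbound database:
.. code-block:: console
# ovn-sbctl list Address_Set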
#. The OVN northbound service adds a Port Binding for the new Logical
Switch Port object:
.. code-block:: console
_uuid : 7a558e7b-ed7a-424f-a0cf-ab67d2d832d7
chassis : b67d6da9-0222-4ab1-a852-ab2607610bf8
datapath : 3f6e16b5-a03a-48e5-9b60-7b7a0396c425
logical_port : "e9cb7857-4cb1-4e91-aae5-165a7ab5b387"
mac : ["fa:16:3e:b6:91:70 192.168.1.5"]
options : {}
parent_port : []
tag : []
tunnel_key : 3
type : ""
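The binding, including the chassis that claimed the port, can be inspected
with (sketch):
.. code-block:: console
# ovn-sbctl list Port_Binding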
#. The OVN northbound service updates the flooding multicast group
for the logical datapath with the new port binding:
.. code-block:: console
_uuid : c08d0102-c414-4a47-98d9-dd3fa9f9901c
datapath : 0b214af6-8910-489c-926a-fd0ed16a8251
name : _MC_flood
ports : [3e463ca0-951c-46fd-b6cf-05392fa3aa1f,
794a6f03-7941-41ed-b1c6-0e00c1e18da0,
fa7b294d-2a62-45ae-8de3-a41c002de6de]
tunnel_key : 65535
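Likewise for the multicast group membership (sketch):
.. code-block:: console
# ovn-sbctl list Multicast_Group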
#. The OVN northbound service adds Logical Flows based on the updated
Address Set, ACL and Logical_Switch_Port objects:
.. code-block:: console
Datapath: 3f6e16b5-a03a-48e5-9b60-7b7a0396c425 Pipeline: ingress
table= 0( ls_in_port_sec_l2), priority= 50,
match=(inport == "e9cb7857-4cb1-4e91-aae5-165a7ab5b387" &&
eth.src == {fa:16:3e:b6:a3:54}),
action=(next;)
table= 1( ls_in_port_sec_ip), priority= 90,
match=(inport == "e9cb7857-4cb1-4e91-aae5-165a7ab5b387" &&
eth.src == fa:16:3e:b6:a3:54 && ip4.src == 0.0.0.0 &&
ip4.dst == 255.255.255.255 && udp.src == 68 && udp.dst == 67),
action=(next;)
table= 1( ls_in_port_sec_ip), priority= 90,
match=(inport == "e9cb7857-4cb1-4e91-aae5-165a7ab5b387" &&
eth.src == fa:16:3e:b6:a3:54 && ip4.src == {192.168.1.5}),
action=(next;)
table= 1( ls_in_port_sec_ip), priority= 80,
match=(inport == "e9cb7857-4cb1-4e91-aae5-165a7ab5b387" &&
eth.src == fa:16:3e:b6:a3:54 && ip),
action=(drop;)
table= 2( ls_in_port_sec_nd), priority= 90,
match=(inport == "e9cb7857-4cb1-4e91-aae5-165a7ab5b387" &&
eth.src == fa:16:3e:b6:a3:54 && arp.sha == fa:16:3e:b6:a3:54 &&
(arp.spa == 192.168.1.5 )),
action=(next;)
table= 2( ls_in_port_sec_nd), priority= 80,
match=(inport == "e9cb7857-4cb1-4e91-aae5-165a7ab5b387" &&
(arp || nd)),
action=(drop;)
table= 3( ls_in_pre_acl), priority= 110, match=(nd),
action=(next;)
table= 3( ls_in_pre_acl), priority= 100, match=(ip),
action=(reg0[0] = 1; next;)
table= 6( ls_in_acl), priority=65535,
match=(!ct.est && ct.rel && !ct.new && !ct.inv),
action=(next;)
table= 6( ls_in_acl), priority=65535,
match=(ct.est && !ct.rel && !ct.new && !ct.inv),
action=(next;)
table= 6( ls_in_acl), priority=65535, match=(ct.inv),
action=(drop;)
table= 6( ls_in_acl), priority=65535, match=(nd),
action=(next;)
table= 6( ls_in_acl), priority= 2002,
match=(ct.new && (inport == "e9cb7857-4cb1-4e91-aae5-165a7ab5b387" &&
ip6)),
action=(reg0[1] = 1; next;)
table= 6( ls_in_acl), priority= 2002,
match=(inport == "e9cb7857-4cb1-4e91-aae5-165a7ab5b387" && ip4 &&
(ip4.dst == 255.255.255.255 || ip4.dst == 192.168.1.0/24) &&
udp && udp.src == 68 && udp.dst == 67),
action=(reg0[1] = 1; next;)
table= 6( ls_in_acl), priority= 2002,
match=(ct.new && (inport == "e9cb7857-4cb1-4e91-aae5-165a7ab5b387" &&
ip4)),
action=(reg0[1] = 1; next;)
table= 6( ls_in_acl), priority= 2001,
match=(inport == "e9cb7857-4cb1-4e91-aae5-165a7ab5b387" && ip),
action=(drop;)
table= 6( ls_in_acl), priority= 1, match=(ip),
action=(reg0[1] = 1; next;)
table= 9( ls_in_arp_nd_rsp), priority= 50,
match=(arp.tpa == 192.168.1.5 && arp.op == 1),
action=(eth.dst = eth.src; eth.src = fa:16:3e:b6:a3:54; arp.op = 2; /* ARP reply */ arp.tha = arp.sha; arp.sha = fa:16:3e:b6:a3:54; arp.tpa = arp.spa; arp.spa = 192.168.1.5; outport = inport; inport = ""; /* Allow sending out inport. */ output;)
table=10( ls_in_l2_lkup), priority= 50,
match=(eth.dst == fa:16:3e:b6:a3:54),
action=(outport = "e9cb7857-4cb1-4e91-aae5-165a7ab5b387"; output;)
Datapath: 3f6e16b5-a03a-48e5-9b60-7b7a0396c425 Pipeline: egress
table= 1( ls_out_pre_acl), priority= 110, match=(nd),
action=(next;)
table= 1( ls_out_pre_acl), priority= 100, match=(ip),
action=(reg0[0] = 1; next;)
table= 4( ls_out_acl), priority=65535, match=(nd),
action=(next;)
table= 4( ls_out_acl), priority=65535,
match=(!ct.est && ct.rel && !ct.new && !ct.inv),
action=(next;)
table= 4( ls_out_acl), priority=65535,
match=(ct.est && !ct.rel && !ct.new && !ct.inv),
action=(next;)
table= 4( ls_out_acl), priority=65535, match=(ct.inv),
action=(drop;)
table= 4( ls_out_acl), priority= 2002,
match=(ct.new &&
(outport == "e9cb7857-4cb1-4e91-aae5-165a7ab5b387" && ip6 &&
ip6.src == $as_ip6_90a78a43_b549_4bee_8822_21fcccab58dc)),
action=(reg0[1] = 1; next;)
table= 4( ls_out_acl), priority= 2002,
match=(ct.new &&
(outport == "e9cb7857-4cb1-4e91-aae5-165a7ab5b387" && ip4 &&
ip4.src == $as_ip4_90a78a43_b549_4bee_8822_21fcccab58dc)),
action=(reg0[1] = 1; next;)
table= 4( ls_out_acl), priority= 2002,
match=(outport == "e9cb7857-4cb1-4e91-aae5-165a7ab5b387" && ip4 &&
ip4.src == 192.168.1.0/24 && udp && udp.src == 67 && udp.dst == 68),
action=(reg0[1] = 1; next;)
table= 4( ls_out_acl), priority= 2001,
match=(outport == "e9cb7857-4cb1-4e91-aae5-165a7ab5b387" && ip),
action=(drop;)
table= 4( ls_out_acl), priority= 1, match=(ip),
action=(reg0[1] = 1; next;)
table= 6( ls_out_port_sec_ip), priority= 90,
match=(outport == "e9cb7857-4cb1-4e91-aae5-165a7ab5b387" &&
eth.dst == fa:16:3e:b6:a3:54 &&
ip4.dst == {255.255.255.255, 224.0.0.0/4, 192.168.1.5}),
action=(next;)
table= 6( ls_out_port_sec_ip), priority= 80,
match=(outport == "e9cb7857-4cb1-4e91-aae5-165a7ab5b387" &&
eth.dst == fa:16:3e:b6:a3:54 && ip),
action=(drop;)
table= 7( ls_out_port_sec_l2), priority= 50,
match=(outport == "e9cb7857-4cb1-4e91-aae5-165a7ab5b387" &&
eth.dst == {fa:16:3e:b6:a3:54}),
action=(output;)
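The listing above follows the output format of ``ovn-sbctl lflow-list``; a
sketch for regenerating it, where the trailing datapath UUID is optional and
taken from this example:
.. code-block:: console
# ovn-sbctl lflow-list 3f6e16b5-a03a-48e5-9b60-7b7a0396c425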
#. The OVN controller service on each compute node translates these objects
into flows on the integration bridge ``br-int``. Exact flows depend on
whether the compute node containing the instance also contains a DHCP agent
on the subnet.
* On the compute node containing the instance, the Compute service creates
a port that connects the instance to the integration bridge and OVN
creates the following flows:
.. code-block:: console
# ovs-ofctl show br-int
OFPT_FEATURES_REPLY (xid=0x2): dpid:000022024a1dc045
n_tables:254, n_buffers:256
capabilities: FLOW_STATS TABLE_STATS PORT_STATS QUEUE_STATS ARP_MATCH_IP
actions: output enqueue set_vlan_vid set_vlan_pcp strip_vlan mod_dl_src mod_dl_dst mod_nw_src mod_nw_dst mod_nw_tos mod_tp_src mod_tp_dst
12(tapeaf36f62-56): addr:fe:16:3e:15:7d:13
config: 0
state: 0
current: 10MB-FD COPPER
.. code-block:: console
cookie=0x0, duration=179.460s, table=0, n_packets=122, n_bytes=10556,
idle_age=1, priority=100,in_port=12
actions=load:0x4->NXM_NX_REG5[],load:0x5->OXM_OF_METADATA[],
load:0x3->NXM_NX_REG6[],resubmit(,16)
cookie=0x0, duration=187.408s, table=16, n_packets=122, n_bytes=10556,
idle_age=1, priority=50,reg6=0x3,metadata=0x5,
dl_src=fa:16:3e:15:7d:13
actions=resubmit(,17)
cookie=0x0, duration=187.408s, table=17, n_packets=2, n_bytes=684,
idle_age=84, priority=90,udp,reg6=0x3,metadata=0x5,
dl_src=fa:16:3e:15:7d:13,nw_src=0.0.0.0,nw_dst=255.255.255.255,
tp_src=68,tp_dst=67
actions=resubmit(,18)
cookie=0x0, duration=187.408s, table=17, n_packets=98, n_bytes=8276,
idle_age=1, priority=90,ip,reg6=0x3,metadata=0x5,
dl_src=fa:16:3e:15:7d:13,nw_src=192.168.1.5
actions=resubmit(,18)
cookie=0x0, duration=187.408s, table=17, n_packets=17, n_bytes=1386,
idle_age=55, priority=80,ipv6,reg6=0x3,metadata=0x5,
dl_src=fa:16:3e:15:7d:13
actions=drop
cookie=0x0, duration=187.408s, table=17, n_packets=0, n_bytes=0,
idle_age=187, priority=80,ip,reg6=0x3,metadata=0x5,
dl_src=fa:16:3e:15:7d:13
actions=drop
cookie=0x0, duration=187.408s, table=18, n_packets=5, n_bytes=210,
idle_age=10, priority=90,arp,reg6=0x3,metadata=0x5,
dl_src=fa:16:3e:15:7d:13,arp_spa=192.168.1.5,
arp_sha=fa:16:3e:15:7d:13
actions=resubmit(,19)
cookie=0x0, duration=187.408s, table=18, n_packets=0, n_bytes=0,
idle_age=187, priority=80,icmp6,reg6=0x3,metadata=0x5,
icmp_type=135,icmp_code=0
actions=drop
cookie=0x0, duration=187.408s, table=18, n_packets=0, n_bytes=0,
idle_age=187, priority=80,icmp6,reg6=0x3,metadata=0x5,
icmp_type=136,icmp_code=0
actions=drop
cookie=0x0, duration=187.408s, table=18, n_packets=0, n_bytes=0,
idle_age=187, priority=80,arp,reg6=0x3,metadata=0x5
actions=drop
cookie=0x0, duration=47.068s, table=19, n_packets=0, n_bytes=0,
idle_age=47, priority=110,icmp6,metadata=0x5,icmp_type=135,
icmp_code=0
actions=resubmit(,20)
cookie=0x0, duration=47.068s, table=19, n_packets=0, n_bytes=0,
idle_age=47, priority=110,icmp6,metadata=0x5,icmp_type=136,
icmp_code=0
actions=resubmit(,20)
cookie=0x0, duration=47.068s, table=19, n_packets=33, n_bytes=4081,
idle_age=0, priority=100,ip,metadata=0x5
actions=load:0x1->NXM_NX_REG0[0],resubmit(,20)
cookie=0x0, duration=47.068s, table=19, n_packets=0, n_bytes=0,
idle_age=47, priority=100,ipv6,metadata=0x5
actions=load:0x1->NXM_NX_REG0[0],resubmit(,20)
cookie=0x0, duration=47.068s, table=22, n_packets=15, n_bytes=1392,
idle_age=0, priority=65535,ct_state=-new+est-rel-inv+trk,
metadata=0x5
actions=resubmit(,23)
cookie=0x0, duration=47.068s, table=22, n_packets=0, n_bytes=0,
idle_age=47, priority=65535,ct_state=-new-est+rel-inv+trk,
metadata=0x5
actions=resubmit(,23)
cookie=0x0, duration=47.068s, table=22, n_packets=0, n_bytes=0,
idle_age=47, priority=65535,icmp6,metadata=0x5,icmp_type=135,
icmp_code=0
actions=resubmit(,23)
cookie=0x0, duration=47.068s, table=22, n_packets=0, n_bytes=0,
idle_age=47, priority=65535,icmp6,metadata=0x5,icmp_type=136,
icmp_code=0
actions=resubmit(,23)
cookie=0x0, duration=47.068s, table=22, n_packets=0, n_bytes=0,
idle_age=47, priority=65535,ct_state=+inv+trk,metadata=0x5
actions=drop
cookie=0x0, duration=47.068s, table=22, n_packets=0, n_bytes=0,
idle_age=47, priority=2002,ct_state=+new+trk,ipv6,reg6=0x3,
metadata=0x5
actions=load:0x1->NXM_NX_REG0[1],resubmit(,23)
cookie=0x0, duration=47.068s, table=22, n_packets=16, n_bytes=1922,
idle_age=2, priority=2002,ct_state=+new+trk,ip,reg6=0x3,
metadata=0x5
actions=load:0x1->NXM_NX_REG0[1],resubmit(,23)
cookie=0x0, duration=47.068s, table=22, n_packets=0, n_bytes=0,
idle_age=47, priority=2002,udp,reg6=0x3,metadata=0x5,
nw_dst=255.255.255.255,tp_src=68,tp_dst=67
actions=load:0x1->NXM_NX_REG0[1],resubmit(,23)
cookie=0x0, duration=47.068s, table=22, n_packets=0, n_bytes=0,
idle_age=47, priority=2002,udp,reg6=0x3,metadata=0x5,
nw_dst=192.168.1.0/24,tp_src=68,tp_dst=67
actions=load:0x1->NXM_NX_REG0[1],resubmit(,23)
cookie=0x0, duration=47.069s, table=22, n_packets=0, n_bytes=0,
idle_age=47, priority=2001,ipv6,reg6=0x3,metadata=0x5
actions=drop
cookie=0x0, duration=47.068s, table=22, n_packets=0, n_bytes=0,
idle_age=47, priority=2001,ip,reg6=0x3,metadata=0x5
actions=drop
cookie=0x0, duration=47.068s, table=22, n_packets=2, n_bytes=767,
idle_age=27, priority=1,ip,metadata=0x5
actions=load:0x1->NXM_NX_REG0[1],resubmit(,23)
cookie=0x0, duration=47.068s, table=22, n_packets=0, n_bytes=0,
idle_age=47, priority=1,ipv6,metadata=0x5
actions=load:0x1->NXM_NX_REG0[1],resubmit(,23)
cookie=0x0, duration=179.457s, table=25, n_packets=2, n_bytes=84,
idle_age=33, priority=50,arp,metadata=0x5,arp_tpa=192.168.1.5,
arp_op=1
actions=move:NXM_OF_ETH_SRC[]->NXM_OF_ETH_DST[],
mod_dl_src:fa:16:3e:15:7d:13,load:0x2->NXM_OF_ARP_OP[],
move:NXM_NX_ARP_SHA[]->NXM_NX_ARP_THA[],
load:0xfa163e157d13->NXM_NX_ARP_SHA[],
move:NXM_OF_ARP_SPA[]->NXM_OF_ARP_TPA[],
load:0xc0a80105->NXM_OF_ARP_SPA[],
move:NXM_NX_REG6[]->NXM_NX_REG7[],
load:0->NXM_NX_REG6[],load:0->NXM_OF_IN_PORT[],resubmit(,32)
cookie=0x0, duration=187.408s, table=26, n_packets=50, n_bytes=4806,
idle_age=1, priority=50,metadata=0x5,dl_dst=fa:16:3e:15:7d:13
actions=load:0x3->NXM_NX_REG7[],resubmit(,32)
cookie=0x0, duration=469.575s, table=33, n_packets=74, n_bytes=7040,
idle_age=305, priority=100,reg7=0x4,metadata=0x4
actions=load:0x1->NXM_NX_REG7[],resubmit(,33)
cookie=0x0, duration=179.460s, table=34, n_packets=2, n_bytes=684,
idle_age=84, priority=100,reg6=0x3,reg7=0x3,metadata=0x5
actions=drop
cookie=0x0, duration=47.069s, table=49, n_packets=0, n_bytes=0,
idle_age=47, priority=110,icmp6,metadata=0x5,icmp_type=135,
icmp_code=0
actions=resubmit(,50)
cookie=0x0, duration=47.068s, table=49, n_packets=0, n_bytes=0,
idle_age=47, priority=110,icmp6,metadata=0x5,icmp_type=136,
icmp_code=0
actions=resubmit(,50)
cookie=0x0, duration=47.068s, table=49, n_packets=34, n_bytes=4455,
idle_age=0, priority=100,ip,metadata=0x5
actions=load:0x1->NXM_NX_REG0[0],resubmit(,50)
cookie=0x0, duration=47.068s, table=49, n_packets=0, n_bytes=0,
idle_age=47, priority=100,ipv6,metadata=0x5
actions=load:0x1->NXM_NX_REG0[0],resubmit(,50)
cookie=0x0, duration=47.069s, table=52, n_packets=0, n_bytes=0,
idle_age=47, priority=65535,ct_state=+inv+trk,metadata=0x5
actions=drop
cookie=0x0, duration=47.069s, table=52, n_packets=0, n_bytes=0,
idle_age=47, priority=65535,icmp6,metadata=0x5,icmp_type=136,
icmp_code=0
actions=resubmit(,53)
cookie=0x0, duration=47.068s, table=52, n_packets=0, n_bytes=0,
idle_age=47, priority=65535,icmp6,metadata=0x5,icmp_type=135,
icmp_code=0
actions=resubmit(,53)
cookie=0x0, duration=47.068s, table=52, n_packets=22, n_bytes=2000,
idle_age=0, priority=65535,ct_state=-new+est-rel-inv+trk,
metadata=0x5
actions=resubmit(,53)
cookie=0x0, duration=47.068s, table=52, n_packets=0, n_bytes=0,
idle_age=47, priority=65535,ct_state=-new-est+rel-inv+trk,
metadata=0x5
actions=resubmit(,53)
cookie=0x0, duration=47.068s, table=52, n_packets=0, n_bytes=0,
idle_age=47, priority=2002,ct_state=+new+trk,ip,reg7=0x3,
metadata=0x5,nw_src=192.168.1.5
actions=load:0x1->NXM_NX_REG0[1],resubmit(,53)
cookie=0x0, duration=47.068s, table=52, n_packets=0, n_bytes=0,
idle_age=47, priority=2002,ct_state=+new+trk,ip,reg7=0x3,
metadata=0x5,nw_src=203.0.113.103
actions=load:0x1->NXM_NX_REG0[1],resubmit(,53)
cookie=0x0, duration=47.068s, table=52, n_packets=3, n_bytes=1141,
idle_age=27, priority=2002,udp,reg7=0x3,metadata=0x5,
nw_src=192.168.1.0/24,tp_src=67,tp_dst=68
actions=load:0x1->NXM_NX_REG0[1],resubmit(,53)
cookie=0x0, duration=39.497s, table=52, n_packets=0, n_bytes=0,
idle_age=39, priority=2002,ct_state=+new+trk,ipv6,reg7=0x3,
metadata=0x5
actions=load:0x1->NXM_NX_REG0[1],resubmit(,53)
cookie=0x0, duration=47.068s, table=52, n_packets=0, n_bytes=0,
idle_age=47, priority=2001,ip,reg7=0x3,metadata=0x5
actions=drop
cookie=0x0, duration=47.068s, table=52, n_packets=0, n_bytes=0,
idle_age=47, priority=2001,ipv6,reg7=0x3,metadata=0x5
actions=drop
cookie=0x0, duration=47.068s, table=52, n_packets=9, n_bytes=1314,
idle_age=2, priority=1,ip,metadata=0x5
actions=load:0x1->NXM_NX_REG0[1],resubmit(,53)
cookie=0x0, duration=47.068s, table=52, n_packets=0, n_bytes=0,
idle_age=47, priority=1,ipv6,metadata=0x5
actions=load:0x1->NXM_NX_REG0[1],resubmit(,53)
cookie=0x0, duration=47.068s, table=54, n_packets=23, n_bytes=2945,
idle_age=0, priority=90,ip,reg7=0x3,metadata=0x5,
dl_dst=fa:16:3e:15:7d:13,nw_dst=192.168.1.5
actions=resubmit(,55)
cookie=0x0, duration=47.068s, table=54, n_packets=0, n_bytes=0,
idle_age=47, priority=90,ip,reg7=0x3,metadata=0x5,
dl_dst=fa:16:3e:15:7d:13,nw_dst=255.255.255.255
actions=resubmit(,55)
cookie=0x0, duration=47.068s, table=54, n_packets=0, n_bytes=0,
idle_age=47, priority=90,ip,reg7=0x3,metadata=0x5,
dl_dst=fa:16:3e:15:7d:13,nw_dst=224.0.0.0/4
actions=resubmit(,55)
cookie=0x0, duration=47.068s, table=54, n_packets=0, n_bytes=0,
idle_age=47, priority=80,ip,reg7=0x3,metadata=0x5,
dl_dst=fa:16:3e:15:7d:13
actions=drop
cookie=0x0, duration=47.068s, table=54, n_packets=0, n_bytes=0,
idle_age=47, priority=80,ipv6,reg7=0x3,metadata=0x5,
dl_dst=fa:16:3e:15:7d:13
actions=drop
cookie=0x0, duration=47.068s, table=55, n_packets=25, n_bytes=3029,
idle_age=0, priority=50,reg7=0x3,metadata=0x5,
dl_dst=fa:16:3e:15:7d:13
actions=resubmit(,64)
cookie=0x0, duration=179.460s, table=64, n_packets=116, n_bytes=10623,
idle_age=1, priority=100,reg7=0x3,metadata=0x5
actions=output:12
* For each compute node that only contains a DHCP agent on the subnet,
OVN creates the following flows:
.. code-block:: console
cookie=0x0, duration=192.587s, table=16, n_packets=0, n_bytes=0,
idle_age=192, priority=50,reg6=0x3,metadata=0x5,
dl_src=fa:16:3e:15:7d:13
actions=resubmit(,17)
cookie=0x0, duration=192.587s, table=17, n_packets=0, n_bytes=0,
idle_age=192, priority=90,ip,reg6=0x3,metadata=0x5,
dl_src=fa:16:3e:15:7d:13,nw_src=192.168.1.5
actions=resubmit(,18)
cookie=0x0, duration=192.587s, table=17, n_packets=0, n_bytes=0,
idle_age=192, priority=90,udp,reg6=0x3,metadata=0x5,
dl_src=fa:16:3e:15:7d:13,nw_src=0.0.0.0,
nw_dst=255.255.255.255,tp_src=68,tp_dst=67
actions=resubmit(,18)
cookie=0x0, duration=192.587s, table=17, n_packets=0, n_bytes=0,
idle_age=192, priority=80,ipv6,reg6=0x3,metadata=0x5,
dl_src=fa:16:3e:15:7d:13
actions=drop
cookie=0x0, duration=192.587s, table=17, n_packets=0, n_bytes=0,
idle_age=192, priority=80,ip,reg6=0x3,metadata=0x5,
dl_src=fa:16:3e:15:7d:13
actions=drop
cookie=0x0, duration=192.587s, table=18, n_packets=0, n_bytes=0,
idle_age=192, priority=90,arp,reg6=0x3,metadata=0x5,
dl_src=fa:16:3e:15:7d:13,arp_spa=192.168.1.5,
arp_sha=fa:16:3e:15:7d:13
actions=resubmit(,19)
cookie=0x0, duration=192.587s, table=18, n_packets=0, n_bytes=0,
idle_age=192, priority=80,arp,reg6=0x3,metadata=0x5
actions=drop
cookie=0x0, duration=192.587s, table=18, n_packets=0, n_bytes=0,
idle_age=192, priority=80,icmp6,reg6=0x3,metadata=0x5,
icmp_type=135,icmp_code=0
actions=drop
cookie=0x0, duration=192.587s, table=18, n_packets=0, n_bytes=0,
idle_age=192, priority=80,icmp6,reg6=0x3,metadata=0x5,
icmp_type=136,icmp_code=0
actions=drop
cookie=0x0, duration=47.068s, table=19, n_packets=0, n_bytes=0,
idle_age=47, priority=110,icmp6,metadata=0x5,icmp_type=135,
icmp_code=0
actions=resubmit(,20)
cookie=0x0, duration=47.068s, table=19, n_packets=0, n_bytes=0,
idle_age=47, priority=110,icmp6,metadata=0x5,icmp_type=136,
icmp_code=0
actions=resubmit(,20)
cookie=0x0, duration=47.068s, table=19, n_packets=33, n_bytes=4081,
idle_age=0, priority=100,ip,metadata=0x5
actions=load:0x1->NXM_NX_REG0[0],resubmit(,20)
cookie=0x0, duration=47.068s, table=19, n_packets=0, n_bytes=0,
idle_age=47, priority=100,ipv6,metadata=0x5
actions=load:0x1->NXM_NX_REG0[0],resubmit(,20)
cookie=0x0, duration=47.068s, table=22, n_packets=15, n_bytes=1392,
idle_age=0, priority=65535,ct_state=-new+est-rel-inv+trk,
metadata=0x5
actions=resubmit(,23)
cookie=0x0, duration=47.068s, table=22, n_packets=0, n_bytes=0,
idle_age=47, priority=65535,ct_state=-new-est+rel-inv+trk,
metadata=0x5
actions=resubmit(,23)
cookie=0x0, duration=47.068s, table=22, n_packets=0, n_bytes=0,
idle_age=47, priority=65535,icmp6,metadata=0x5,icmp_type=135,
icmp_code=0
actions=resubmit(,23)
cookie=0x0, duration=47.068s, table=22, n_packets=0, n_bytes=0,
idle_age=47, priority=65535,icmp6,metadata=0x5,icmp_type=136,
icmp_code=0
actions=resubmit(,23)
cookie=0x0, duration=47.068s, table=22, n_packets=0, n_bytes=0,
idle_age=47, priority=65535,ct_state=+inv+trk,metadata=0x5
actions=drop
cookie=0x0, duration=47.068s, table=22, n_packets=0, n_bytes=0,
idle_age=47, priority=2002,ct_state=+new+trk,ipv6,reg6=0x3,
metadata=0x5
actions=load:0x1->NXM_NX_REG0[1],resubmit(,23)
cookie=0x0, duration=47.068s, table=22, n_packets=16, n_bytes=1922,
idle_age=2, priority=2002,ct_state=+new+trk,ip,reg6=0x3,
metadata=0x5
actions=load:0x1->NXM_NX_REG0[1],resubmit(,23)
cookie=0x0, duration=47.068s, table=22, n_packets=0, n_bytes=0,
idle_age=47, priority=2002,udp,reg6=0x3,metadata=0x5,
nw_dst=255.255.255.255,tp_src=68,tp_dst=67
actions=load:0x1->NXM_NX_REG0[1],resubmit(,23)
cookie=0x0, duration=47.068s, table=22, n_packets=0, n_bytes=0,
idle_age=47, priority=2002,udp,reg6=0x3,metadata=0x5,
nw_dst=192.168.1.0/24,tp_src=68,tp_dst=67
actions=load:0x1->NXM_NX_REG0[1],resubmit(,23)
cookie=0x0, duration=47.069s, table=22, n_packets=0, n_bytes=0,
idle_age=47, priority=2001,ipv6,reg6=0x3,metadata=0x5
actions=drop
cookie=0x0, duration=47.068s, table=22, n_packets=0, n_bytes=0,
idle_age=47, priority=2001,ip,reg6=0x3,metadata=0x5
actions=drop
cookie=0x0, duration=47.068s, table=22, n_packets=2, n_bytes=767,
idle_age=27, priority=1,ip,metadata=0x5
actions=load:0x1->NXM_NX_REG0[1],resubmit(,23)
cookie=0x0, duration=47.068s, table=22, n_packets=0, n_bytes=0,
idle_age=47, priority=1,ipv6,metadata=0x5
actions=load:0x1->NXM_NX_REG0[1],resubmit(,23)
cookie=0x0, duration=179.457s, table=25, n_packets=2, n_bytes=84,
idle_age=33, priority=50,arp,metadata=0x5,arp_tpa=192.168.1.5,
arp_op=1
actions=move:NXM_OF_ETH_SRC[]->NXM_OF_ETH_DST[],
mod_dl_src:fa:16:3e:15:7d:13,load:0x2->NXM_OF_ARP_OP[],
move:NXM_NX_ARP_SHA[]->NXM_NX_ARP_THA[],
load:0xfa163e157d13->NXM_NX_ARP_SHA[],
move:NXM_OF_ARP_SPA[]->NXM_OF_ARP_TPA[],
load:0xc0a80105->NXM_OF_ARP_SPA[],
move:NXM_NX_REG6[]->NXM_NX_REG7[],
load:0->NXM_NX_REG6[],load:0->NXM_OF_IN_PORT[],resubmit(,32)
cookie=0x0, duration=192.587s, table=26, n_packets=61, n_bytes=5607,
idle_age=6, priority=50,metadata=0x5,dl_dst=fa:16:3e:15:7d:13
actions=load:0x3->NXM_NX_REG7[],resubmit(,32)
cookie=0x0, duration=184.640s, table=32, n_packets=61, n_bytes=5607,
idle_age=6, priority=100,reg7=0x3,metadata=0x5
actions=load:0x5->NXM_NX_TUN_ID[0..23],
set_field:0x3/0xffffffff->tun_metadata0,
move:NXM_NX_REG6[0..14]->NXM_NX_TUN_METADATA0[16..30],output:4
cookie=0x0, duration=47.069s, table=49, n_packets=0, n_bytes=0,
idle_age=47, priority=110,icmp6,metadata=0x5,icmp_type=135,
icmp_code=0
actions=resubmit(,50)
cookie=0x0, duration=47.068s, table=49, n_packets=0, n_bytes=0,
idle_age=47, priority=110,icmp6,metadata=0x5,icmp_type=136,
icmp_code=0
actions=resubmit(,50)
cookie=0x0, duration=47.068s, table=49, n_packets=34, n_bytes=4455,
idle_age=0, priority=100,ip,metadata=0x5
actions=load:0x1->NXM_NX_REG0[0],resubmit(,50)
cookie=0x0, duration=47.068s, table=49, n_packets=0, n_bytes=0,
idle_age=47, priority=100,ipv6,metadata=0x5
actions=load:0x1->NXM_NX_REG0[0],resubmit(,50)
cookie=0x0, duration=192.587s, table=52, n_packets=0, n_bytes=0,
idle_age=192, priority=65535,ct_state=+inv+trk,
metadata=0x5
actions=drop
cookie=0x0, duration=192.587s, table=52, n_packets=0, n_bytes=0,
idle_age=192, priority=65535,ct_state=-new-est+rel-inv+trk,
metadata=0x5
actions=resubmit(,50)
cookie=0x0, duration=192.587s, table=52, n_packets=27, n_bytes=2316,
idle_age=6, priority=65535,ct_state=-new+est-rel-inv+trk,
metadata=0x5
actions=resubmit(,50)
cookie=0x0, duration=192.587s, table=52, n_packets=0, n_bytes=0,
idle_age=192, priority=2002,ct_state=+new+trk,icmp,reg7=0x3,
metadata=0x5
actions=load:0x1->NXM_NX_REG0[1],resubmit(,50)
cookie=0x0, duration=192.587s, table=52, n_packets=0, n_bytes=0,
idle_age=192, priority=2002,ct_state=+new+trk,ipv6,reg7=0x3,
metadata=0x5
actions=load:0x1->NXM_NX_REG0[1],resubmit(,50)
cookie=0x0, duration=192.587s, table=52, n_packets=0, n_bytes=0,
idle_age=192, priority=2002,udp,reg7=0x3,metadata=0x5,
nw_src=192.168.1.0/24,tp_src=67,tp_dst=68
actions=load:0x1->NXM_NX_REG0[1],resubmit(,50)
cookie=0x0, duration=192.587s, table=52, n_packets=0, n_bytes=0,
idle_age=192, priority=2002,ct_state=+new+trk,ip,reg7=0x3,
metadata=0x5,nw_src=203.0.113.103
actions=load:0x1->NXM_NX_REG0[1],resubmit(,50)
cookie=0x0, duration=192.587s, table=52, n_packets=0, n_bytes=0,
idle_age=192, priority=2001,ip,reg7=0x3,metadata=0x5
actions=drop
cookie=0x0, duration=192.587s, table=52, n_packets=0, n_bytes=0,
idle_age=192, priority=2001,ipv6,reg7=0x3,metadata=0x5
actions=drop
cookie=0x0, duration=192.587s, table=52, n_packets=25, n_bytes=2604,
idle_age=6, priority=1,ip,metadata=0x5
actions=load:0x1->NXM_NX_REG0[1],resubmit(,53)
cookie=0x0, duration=192.587s, table=52, n_packets=0, n_bytes=0,
idle_age=192, priority=1,ipv6,metadata=0x5
actions=load:0x1->NXM_NX_REG0[1],resubmit(,53)
cookie=0x0, duration=192.587s, table=54, n_packets=0, n_bytes=0,
idle_age=192, priority=90,ip,reg7=0x3,metadata=0x5,
dl_dst=fa:16:3e:15:7d:13,nw_dst=224.0.0.0/4
actions=resubmit(,55)
cookie=0x0, duration=192.587s, table=54, n_packets=0, n_bytes=0,
idle_age=192, priority=90,ip,reg7=0x3,metadata=0x5,
dl_dst=fa:16:3e:15:7d:13,nw_dst=255.255.255.255
actions=resubmit(,55)
cookie=0x0, duration=192.587s, table=54, n_packets=0, n_bytes=0,
idle_age=192, priority=90,ip,reg7=0x3,metadata=0x5,
dl_dst=fa:16:3e:15:7d:13,nw_dst=192.168.1.5
actions=resubmit(,55)
cookie=0x0, duration=192.587s, table=54, n_packets=0, n_bytes=0,
idle_age=192, priority=80,ipv6,reg7=0x3,metadata=0x5,
dl_dst=fa:16:3e:15:7d:13
actions=drop
cookie=0x0, duration=192.587s, table=54, n_packets=0, n_bytes=0,
idle_age=192, priority=80,ip,reg7=0x3,metadata=0x5,
dl_dst=fa:16:3e:15:7d:13
actions=drop
cookie=0x0, duration=192.587s, table=55, n_packets=0, n_bytes=0,
idle_age=192, priority=50,reg7=0x3,metadata=0x5,
dl_dst=fa:16:3e:15:7d:13
actions=resubmit(,64)
* For each compute node that contains neither the instance nor a DHCP
agent on the subnet, OVN creates the following flows:
.. code-block:: console
cookie=0x0, duration=189.763s, table=52, n_packets=0, n_bytes=0,
idle_age=189, priority=2002,ct_state=+new+trk,ipv6,reg7=0x4,
metadata=0x4
actions=load:0x1->NXM_NX_REG0[1],resubmit(,53)
cookie=0x0, duration=189.763s, table=52, n_packets=0, n_bytes=0,
idle_age=189, priority=2002,ct_state=+new+trk,ip,reg7=0x4,
metadata=0x4,nw_src=192.168.1.5
actions=load:0x1->NXM_NX_REG0[1],resubmit(,53)
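When troubleshooting why traffic does or does not reach the instance, tracing a
single packet through these tables is often easier than reading the full dump.
The sketch below uses standard OVS tooling; the ``in_port`` and source MAC come
from this example and must be adapted, and the flow description is deliberately
minimal (an ARP broadcast from the instance port):
.. code-block:: console
# ovs-appctl ofproto/trace br-int \
in_port=12,dl_src=fa:16:3e:15:7d:13,dl_dst=ff:ff:ff:ff:ff:ff,arp
Where available, ``ovn-trace`` performs a similar simulation at the logical
flow level instead of the OpenFlow level.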
.. _refarch-provider-networks:
Provider networks
-----------------
A provider (external) network bridges instances to physical network
infrastructure that provides layer-3 services. In most cases, provider networks
implement layer-2 segmentation using VLAN IDs. A provider network maps to a
provider bridge on each compute node that supports launching instances on the
provider network. You can create more than one provider bridge, each one
requiring a unique name and underlying physical network interface to prevent
switching loops. Provider networks and bridges can use arbitrary names,
but each mapping must reference valid provider network and bridge names.
Each provider bridge can contain one ``flat`` (untagged) network and up to
the maximum number of ``vlan`` (tagged) networks that the physical network
infrastructure supports, typically around 4000.
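For example, a node that attaches to two physical networks would carry a
comma-separated mapping similar to the sketch below; ``provider2`` and
``br-provider2`` are illustrative names, not part of this walkthrough:
.. code-block:: console
# ovs-vsctl set Open_vSwitch . external-ids:ovn-bridge-mappings=provider:br-provider,provider2:br-provider2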
Creating a provider network involves several commands at the host, OVS,
and Networking service levels that yield a series of operations at the
OVN level to create the virtual network components. The following example
creates a ``flat`` provider network ``provider`` using the provider bridge
``br-provider`` and binds a subnet to it.
Create a provider network
~~~~~~~~~~~~~~~~~~~~~~~~~
#. On each compute node, create the provider bridge, map the provider
network to it, and add the underlying physical or logical (typically
a bond) network interface to it.
.. code-block:: console
# ovs-vsctl --may-exist add-br br-provider -- set bridge br-provider \
protocols=OpenFlow13
# ovs-vsctl set Open_vSwitch . external-ids:ovn-bridge-mappings=provider:br-provider
# ovs-vsctl --may-exist add-port br-provider INTERFACE_NAME
Replace ``INTERFACE_NAME`` with the name of the underlying network
interface.
.. note::
These commands provide no output if successful.
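To confirm the mapping and the interface were recorded, the settings can be
read back; a sketch:
.. code-block:: console
# ovs-vsctl get Open_vSwitch . external-ids:ovn-bridge-mappings
# ovs-vsctl list-ports br-provider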
#. On the controller node, source the administrative project credentials.
#. On the controller node, set ``ovn-cms-options`` to ``enable-chassis-as-gw``
to enable this chassis to host gateway routers for external connectivity.
.. code-block:: console
# ovs-vsctl set Open_vSwitch . external-ids:ovn-cms-options="enable-chassis-as-gw"
.. note::
This command provides no output if successful.
#. On the controller node, create the provider network in the Networking
service. In this case, instances and routers in other projects can use
the network.
.. code-block:: console
$ openstack network create --external --share \
--provider-physical-network provider --provider-network-type flat \
provider
+---------------------------+--------------------------------------+
| Field | Value |
+---------------------------+--------------------------------------+
| admin_state_up | UP |
| availability_zone_hints | |
| availability_zones | nova |
| created_at | 2016-06-15 15:50:37+00:00 |
| description | |
| id | 0243277b-4aa8-46d8-9e10-5c9ad5e01521 |
| ipv4_address_scope | None |
| ipv6_address_scope | None |
| is_default | False |
| mtu | 1500 |
| name | provider |
| project_id | b1ebf33664df402693f729090cfab861 |
| provider:network_type | flat |
| provider:physical_network | provider |
| provider:segmentation_id | None |
| qos_policy_id | None |
| router:external | External |
| shared | True |
| status | ACTIVE |
| subnets | 32a61337-c5a3-448a-a1e7-c11d6f062c21 |
| tags | [] |
| updated_at | 2016-06-15 15:50:37+00:00 |
+---------------------------+--------------------------------------+
.. note::
The value of ``--provider-physical-network`` must refer to the
provider network name in the mapping.
OVN operations
^^^^^^^^^^^^^^
.. todo: I don't like going this deep with headers, so a future patch
will probably break this content into multiple files.
The OVN mechanism driver and OVN perform the following operations during
creation of a provider network.
#. The mechanism driver translates the network into a logical switch
in the OVN northbound database.
.. code-block:: console
_uuid : 98edf19f-2dbc-4182-af9b-79cafa4794b6
acls : []
external_ids : {"neutron:network_name"=provider}
load_balancer : []
name : "neutron-e4abf6df-f8cf-49fd-85d4-3ea399f4d645"
ports : [92ee7c2f-cd22-4cac-a9d9-68a374dc7b17]
.. note::
The ``neutron:network_name`` field in ``external_ids`` contains
the network name and ``name`` contains the network UUID.
#. In addition, because the provider network is handled by a separate
bridge, the following logical port is created in the OVN northbound
database.
.. code-block:: console
_uuid : 92ee7c2f-cd22-4cac-a9d9-68a374dc7b17
addresses : [unknown]
enabled : []
external_ids : {}
name : "provnet-e4abf6df-f8cf-49fd-85d4-3ea399f4d645"
options : {network_name=provider}
parent_name : []
port_security : []
tag : []
type : localnet
up : false
#. The OVN northbound service translates these objects into datapath bindings,
port bindings, and the appropriate multicast groups in the OVN southbound
database.
* Datapath bindings
.. code-block:: console
_uuid : f1f0981f-a206-4fac-b3a1-dc2030c9909f
external_ids : {logical-switch="98edf19f-2dbc-4182-af9b-79cafa4794b6"}
tunnel_key : 109
* Port bindings
.. code-block:: console
_uuid : 8427506e-46b5-41e5-a71b-a94a6859e773
chassis : []
datapath : f1f0981f-a206-4fac-b3a1-dc2030c9909f
logical_port : "provnet-e4abf6df-f8cf-49fd-85d4-3ea399f4d645"
mac : [unknown]
options : {network_name=provider}
parent_port : []
tag : []
tunnel_key : 1
type : localnet
* Logical flows
.. code-block:: console
Datapath: f1f0981f-a206-4fac-b3a1-dc2030c9909f Pipeline: ingress
table= 0( ls_in_port_sec_l2), priority= 100, match=(eth.src[40]),
action=(drop;)
table= 0( ls_in_port_sec_l2), priority= 100, match=(vlan.present),
action=(drop;)
table= 0( ls_in_port_sec_l2), priority= 50,
match=(inport == "provnet-e4abf6df-f8cf-49fd-85d4-3ea399f4d645"),
action=(next;)
table= 1( ls_in_port_sec_ip), priority= 0, match=(1),
action=(next;)
table= 2( ls_in_port_sec_nd), priority= 0, match=(1),
action=(next;)
table= 3( ls_in_pre_acl), priority= 0, match=(1),
action=(next;)
table= 4( ls_in_pre_lb), priority= 0, match=(1),
action=(next;)
table= 5( ls_in_pre_stateful), priority= 100, match=(reg0[0] == 1),
action=(ct_next;)
table= 5( ls_in_pre_stateful), priority= 0, match=(1),
action=(next;)
table= 6( ls_in_acl), priority= 0, match=(1),
action=(next;)
table= 7( ls_in_lb), priority= 0, match=(1),
action=(next;)
table= 8( ls_in_stateful), priority= 100, match=(reg0[1] == 1),
action=(ct_commit; next;)
table= 8( ls_in_stateful), priority= 100, match=(reg0[2] == 1),
action=(ct_lb;)
table= 8( ls_in_stateful), priority= 0, match=(1),
action=(next;)
table= 9( ls_in_arp_rsp), priority= 100,
match=(inport == "provnet-e4abf6df-f8cf-49fd-85d4-3ea399f4d645"),
action=(next;)
table= 9( ls_in_arp_rsp), priority= 0, match=(1),
action=(next;)
table=10( ls_in_l2_lkup), priority= 100, match=(eth.mcast),
action=(outport = "_MC_flood"; output;)
table=10( ls_in_l2_lkup), priority= 0, match=(1),
action=(outport = "_MC_unknown"; output;)
Datapath: f1f0981f-a206-4fac-b3a1-dc2030c9909f Pipeline: egress
table= 0( ls_out_pre_lb), priority= 0, match=(1),
action=(next;)
table= 1( ls_out_pre_acl), priority= 0, match=(1),
action=(next;)
table= 2(ls_out_pre_stateful), priority= 100, match=(reg0[0] == 1),
action=(ct_next;)
table= 2(ls_out_pre_stateful), priority= 0, match=(1),
action=(next;)
table= 3( ls_out_lb), priority= 0, match=(1),
action=(next;)
table= 4( ls_out_acl), priority= 0, match=(1),
action=(next;)
table= 5( ls_out_stateful), priority= 100, match=(reg0[1] == 1),
action=(ct_commit; next;)
table= 5( ls_out_stateful), priority= 100, match=(reg0[2] == 1),
action=(ct_lb;)
table= 5( ls_out_stateful), priority= 0, match=(1),
action=(next;)
table= 6( ls_out_port_sec_ip), priority= 0, match=(1),
action=(next;)
table= 7( ls_out_port_sec_l2), priority= 100, match=(eth.mcast),
action=(output;)
table= 7( ls_out_port_sec_l2), priority= 50,
match=(outport == "provnet-e4abf6df-f8cf-49fd-85d4-3ea399f4d645"),
action=(output;)
* Multicast groups
.. code-block:: console
_uuid : 0102f08d-c658-4d0a-a18a-ec8adcaddf4f
datapath : f1f0981f-a206-4fac-b3a1-dc2030c9909f
name : _MC_unknown
ports : [8427506e-46b5-41e5-a71b-a94a6859e773]
tunnel_key : 65534
_uuid : fbc38e51-ac71-4c57-a405-e6066e4c101e
datapath : f1f0981f-a206-4fac-b3a1-dc2030c9909f
name : _MC_flood
ports : [8427506e-46b5-41e5-a71b-a94a6859e773]
tunnel_key : 65535
Create a subnet on the provider network
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The provider network requires at least one subnet that contains the IP
address allocation available for instances, default gateway IP address,
and metadata such as name resolution.
#. On the controller node, create a subnet bound to the provider network
``provider``.
.. code-block:: console
$ openstack subnet create --network provider --subnet-range \
203.0.113.0/24 --allocation-pool start=203.0.113.101,end=203.0.113.250 \
--dns-nameserver 8.8.8.8,8.8.4.4 --gateway 203.0.113.1 provider-v4
+-------------------+--------------------------------------+
| Field | Value |
+-------------------+--------------------------------------+
| allocation_pools | 203.0.113.101-203.0.113.250 |
| cidr | 203.0.113.0/24 |
| created_at | 2016-06-15 15:50:45+00:00 |
| description | |
| dns_nameservers | 8.8.8.8, 8.8.4.4 |
| enable_dhcp | True |
| gateway_ip | 203.0.113.1 |
| host_routes | |
| id | 32a61337-c5a3-448a-a1e7-c11d6f062c21 |
| ip_version | 4 |
| ipv6_address_mode | None |
| ipv6_ra_mode | None |
| name | provider-v4 |
| network_id | 0243277b-4aa8-46d8-9e10-5c9ad5e01521 |
| project_id | b1ebf33664df402693f729090cfab861 |
| subnetpool_id | None |
| updated_at | 2016-06-15 15:50:45+00:00 |
+-------------------+--------------------------------------+
If using DHCP to manage instance IP addresses, adding a subnet causes a series
of operations in the Networking service and OVN.
* The Networking service schedules the network on an appropriate number of DHCP
agents. The example environment contains three DHCP agents.
* Each DHCP agent spawns a network namespace with a ``dnsmasq`` process using
an IP address from the subnet allocation.
* The OVN mechanism driver creates a logical switch port object in the OVN
northbound database for each ``dnsmasq`` process.
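On a node hosting one of these agents, the namespace and its addresses can be
spot-checked with standard tools. The sketch below assumes the usual
``qdhcp-<network-id>`` namespace naming used by the DHCP agent; the UUID is the
example provider network:
.. code-block:: console
# ip netns | grep qdhcp
# ip netns exec qdhcp-0243277b-4aa8-46d8-9e10-5c9ad5e01521 ip addr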
OVN operations
^^^^^^^^^^^^^^
The OVN mechanism driver and OVN perform the following operations
during creation of a subnet on the provider network.
#. If the subnet uses DHCP for IP address management, the OVN mechanism driver
creates a logical port for each DHCP agent serving the subnet and binds it
to the logical switch. In this example, the subnet contains two DHCP agents.
.. code-block:: console
_uuid : 5e144ab9-3e08-4910-b936-869bbbf254c8
addresses : ["fa:16:3e:57:f9:ca 203.0.113.101"]
enabled : true
external_ids : {"neutron:port_name"=""}
name : "6ab052c2-7b75-4463-b34f-fd3426f61787"
options : {}
parent_name : []
port_security : []
tag : []
type : ""
up : true
_uuid : 38cf8b52-47c4-4e93-be8d-06bf71f6a7c9
addresses : ["fa:16:3e:e0:eb:6d 203.0.113.102"]
enabled : true
external_ids : {"neutron:port_name"=""}
name : "94aee636-2394-48bc-b407-8224ab6bb1ab"
options : {}
parent_name : []
port_security : []
tag : []
type : ""
up : true
_uuid : 924500c4-8580-4d5f-a7ad-8769f6e58ff5
acls : []
external_ids : {"neutron:network_name"=provider}
load_balancer : []
name : "neutron-670efade-7cd0-4d87-8a04-27f366eb8941"
ports : [38cf8b52-47c4-4e93-be8d-06bf71f6a7c9,
5e144ab9-3e08-4910-b936-869bbbf254c8,
a576b812-9c3e-4cfb-9752-5d8500b3adf9]
#. The OVN northbound service creates port bindings for these logical
ports and adds them to the appropriate multicast group.
* Port bindings
.. code-block:: console
_uuid : 030024f4-61c3-4807-859b-07727447c427
chassis : fc5ab9e7-bc28-40e8-ad52-2949358cc088
datapath : bd0ab2b3-4cf4-4289-9529-ef430f6a89e6
logical_port : "6ab052c2-7b75-4463-b34f-fd3426f61787"
mac : ["fa:16:3e:57:f9:ca 203.0.113.101"]
options : {}
parent_port : []
tag : []
tunnel_key : 2
type : ""
_uuid : cc5bcd19-bcae-4e29-8cee-3ec8a8a75d46
chassis : 6a9d0619-8818-41e6-abef-2f3d9a597c03
datapath : bd0ab2b3-4cf4-4289-9529-ef430f6a89e6
logical_port : "94aee636-2394-48bc-b407-8224ab6bb1ab"
mac : ["fa:16:3e:e0:eb:6d 203.0.113.102"]
options : {}
parent_port : []
tag : []
tunnel_key : 3
type : ""
* Multicast groups
.. code-block:: console
_uuid : 39b32ccd-fa49-4046-9527-13318842461e
datapath : bd0ab2b3-4cf4-4289-9529-ef430f6a89e6
name : _MC_flood
ports : [030024f4-61c3-4807-859b-07727447c427,
904c3108-234d-41c0-b93c-116b7e352a75,
cc5bcd19-bcae-4e29-8cee-3ec8a8a75d46]
tunnel_key : 65535
#. The OVN northbound service translates the logical ports into
additional logical flows in the OVN southbound database.
.. code-block:: console
Datapath: bd0ab2b3-4cf4-4289-9529-ef430f6a89e6 Pipeline: ingress
table= 0( ls_in_port_sec_l2), priority= 50,
match=(inport == "94aee636-2394-48bc-b407-8224ab6bb1ab"),
action=(next;)
table= 0( ls_in_port_sec_l2), priority= 50,
match=(inport == "6ab052c2-7b75-4463-b34f-fd3426f61787"),
action=(next;)
table= 9( ls_in_arp_rsp), priority= 50,
match=(arp.tpa == 203.0.113.101 && arp.op == 1),
action=(eth.dst = eth.src; eth.src = fa:16:3e:57:f9:ca;
arp.op = 2; /* ARP reply */ arp.tha = arp.sha;
arp.sha = fa:16:3e:57:f9:ca; arp.tpa = arp.spa;
arp.spa = 203.0.113.101; outport = inport; inport = "";
/* Allow sending out inport. */ output;)
table= 9( ls_in_arp_rsp), priority= 50,
match=(arp.tpa == 203.0.113.102 && arp.op == 1),
action=(eth.dst = eth.src; eth.src = fa:16:3e:e0:eb:6d;
arp.op = 2; /* ARP reply */ arp.tha = arp.sha;
arp.sha = fa:16:3e:e0:eb:6d; arp.tpa = arp.spa;
arp.spa = 203.0.113.102; outport = inport;
inport = ""; /* Allow sending out inport. */ output;)
table=10( ls_in_l2_lkup), priority= 50,
match=(eth.dst == fa:16:3e:57:f9:ca),
action=(outport = "6ab052c2-7b75-4463-b34f-fd3426f61787"; output;)
table=10( ls_in_l2_lkup), priority= 50,
match=(eth.dst == fa:16:3e:e0:eb:6d),
action=(outport = "94aee636-2394-48bc-b407-8224ab6bb1ab"; output;)
Datapath: bd0ab2b3-4cf4-4289-9529-ef430f6a89e6 Pipeline: egress
table= 7( ls_out_port_sec_l2), priority= 50,
match=(outport == "6ab052c2-7b75-4463-b34f-fd3426f61787"),
action=(output;)
table= 7( ls_out_port_sec_l2), priority= 50,
match=(outport == "94aee636-2394-48bc-b407-8224ab6bb1ab"),
action=(output;)
#. For each compute node without a DHCP agent on the subnet:
* The OVN controller service translates the logical flows into flows on the
integration bridge ``br-int``.
.. code-block:: console
cookie=0x0, duration=22.303s, table=32, n_packets=0, n_bytes=0,
idle_age=22, priority=100,reg7=0xffff,metadata=0x4
actions=load:0x4->NXM_NX_TUN_ID[0..23],
set_field:0xffff/0xffffffff->tun_metadata0,
move:NXM_NX_REG6[0..14]->NXM_NX_TUN_METADATA0[16..30],
output:5,output:4,resubmit(,33)
#. For each compute node with a DHCP agent on a subnet:
* Creation of a DHCP network namespace adds two virtual switch ports.
The first port connects the DHCP agent with its ``dnsmasq`` process to the
integration bridge, and the second port patches the integration bridge
to the provider bridge ``br-provider``.
.. code-block:: console
# ovs-ofctl show br-int
OFPT_FEATURES_REPLY (xid=0x2): dpid:000022024a1dc045
n_tables:254, n_buffers:256
capabilities: FLOW_STATS TABLE_STATS PORT_STATS QUEUE_STATS ARP_MATCH_IP
actions: output enqueue set_vlan_vid set_vlan_pcp strip_vlan mod_dl_src mod_dl_dst mod_nw_src mod_nw_dst mod_nw_tos mod_tp_src mod_tp_dst
7(tap6ab052c2-7b): addr:00:00:00:00:10:7f
config: PORT_DOWN
state: LINK_DOWN
speed: 0 Mbps now, 0 Mbps max
8(patch-br-int-to): addr:6a:8c:30:3f:d7:dd
config: 0
state: 0
speed: 0 Mbps now, 0 Mbps max
# ovs-ofctl -O OpenFlow13 show br-provider
OFPT_FEATURES_REPLY (OF1.3) (xid=0x2): dpid:0000080027137c4a
n_tables:254, n_buffers:256
capabilities: FLOW_STATS TABLE_STATS PORT_STATS GROUP_STATS QUEUE_STATS
OFPST_PORT_DESC reply (OF1.3) (xid=0x3):
1(patch-provnet-0): addr:fa:42:c5:3f:d7:6f
config: 0
state: 0
speed: 0 Mbps now, 0 Mbps max
* The OVN controller service translates these logical flows into flows on
the integration bridge.
.. code-block:: console
cookie=0x0, duration=17.731s, table=0, n_packets=3, n_bytes=258,
idle_age=16, priority=100,in_port=7
actions=load:0x2->NXM_NX_REG5[],load:0x4->OXM_OF_METADATA[],
load:0x2->NXM_NX_REG6[],resubmit(,16)
cookie=0x0, duration=17.730s, table=0, n_packets=15, n_bytes=954,
idle_age=2, priority=100,in_port=8,vlan_tci=0x0000/0x1000
actions=load:0x1->NXM_NX_REG5[],load:0x4->OXM_OF_METADATA[],
load:0x1->NXM_NX_REG6[],resubmit(,16)
cookie=0x0, duration=17.730s, table=0, n_packets=0, n_bytes=0,
idle_age=17, priority=100,in_port=8,dl_vlan=0
actions=strip_vlan,load:0x1->NXM_NX_REG5[],
load:0x4->OXM_OF_METADATA[],load:0x1->NXM_NX_REG6[],
resubmit(,16)
cookie=0x0, duration=17.732s, table=16, n_packets=0, n_bytes=0,
idle_age=17, priority=100,metadata=0x4,
dl_src=01:00:00:00:00:00/01:00:00:00:00:00
actions=drop
cookie=0x0, duration=17.732s, table=16, n_packets=0, n_bytes=0,
idle_age=17, priority=100,metadata=0x4,vlan_tci=0x1000/0x1000
actions=drop
cookie=0x0, duration=17.732s, table=16, n_packets=3, n_bytes=258,
idle_age=16, priority=50,reg6=0x2,metadata=0x4 actions=resubmit(,17)
cookie=0x0, duration=17.732s, table=16, n_packets=0, n_bytes=0,
idle_age=17, priority=50,reg6=0x3,metadata=0x4 actions=resubmit(,17)
cookie=0x0, duration=17.732s, table=16, n_packets=15, n_bytes=954,
idle_age=2, priority=50,reg6=0x1,metadata=0x4 actions=resubmit(,17)
cookie=0x0, duration=21.714s, table=17, n_packets=18, n_bytes=1212,
idle_age=6, priority=0,metadata=0x4 actions=resubmit(,18)
cookie=0x0, duration=21.714s, table=18, n_packets=18, n_bytes=1212,
idle_age=6, priority=0,metadata=0x4 actions=resubmit(,19)
cookie=0x0, duration=21.714s, table=19, n_packets=18, n_bytes=1212,
idle_age=6, priority=0,metadata=0x4 actions=resubmit(,20)
cookie=0x0, duration=21.714s, table=20, n_packets=18, n_bytes=1212,
idle_age=6, priority=0,metadata=0x4 actions=resubmit(,21)
cookie=0x0, duration=21.714s, table=21, n_packets=0, n_bytes=0,
idle_age=21, priority=100,ip,reg0=0x1/0x1,metadata=0x4
actions=ct(table=22,zone=NXM_NX_REG5[0..15])
cookie=0x0, duration=21.714s, table=21, n_packets=0, n_bytes=0,
idle_age=21, priority=100,ipv6,reg0=0x1/0x1,metadata=0x4
actions=ct(table=22,zone=NXM_NX_REG5[0..15])
cookie=0x0, duration=21.714s, table=21, n_packets=18, n_bytes=1212,
idle_age=6, priority=0,metadata=0x4 actions=resubmit(,22)
cookie=0x0, duration=21.714s, table=22, n_packets=18, n_bytes=1212,
idle_age=6, priority=0,metadata=0x4 actions=resubmit(,23)
cookie=0x0, duration=21.714s, table=23, n_packets=18, n_bytes=1212,
idle_age=6, priority=0,metadata=0x4 actions=resubmit(,24)
cookie=0x0, duration=21.714s, table=24, n_packets=0, n_bytes=0,
idle_age=21, priority=100,ipv6,reg0=0x4/0x4,metadata=0x4
actions=ct(table=25,zone=NXM_NX_REG5[0..15],nat)
cookie=0x0, duration=21.714s, table=24, n_packets=0, n_bytes=0,
idle_age=21, priority=100,ip,reg0=0x4/0x4,metadata=0x4
actions=ct(table=25,zone=NXM_NX_REG5[0..15],nat)
cookie=0x0, duration=21.714s, table=24, n_packets=0, n_bytes=0,
idle_age=21, priority=100,ip,reg0=0x2/0x2,metadata=0x4
actions=ct(commit,zone=NXM_NX_REG5[0..15]),resubmit(,25)
cookie=0x0, duration=21.714s, table=24, n_packets=0, n_bytes=0,
idle_age=21, priority=100,ipv6,reg0=0x2/0x2,metadata=0x4
actions=ct(commit,zone=NXM_NX_REG5[0..15]),resubmit(,25)
cookie=0x0, duration=21.714s, table=24, n_packets=18, n_bytes=1212,
idle_age=6, priority=0,metadata=0x4 actions=resubmit(,25)
cookie=0x0, duration=21.714s, table=25, n_packets=15, n_bytes=954,
idle_age=6, priority=100,reg6=0x1,metadata=0x4 actions=resubmit(,26)
cookie=0x0, duration=21.714s, table=25, n_packets=0, n_bytes=0,
idle_age=21, priority=50,arp,metadata=0x4,
arp_tpa=203.0.113.101,arp_op=1
actions=move:NXM_OF_ETH_SRC[]->NXM_OF_ETH_DST[],
mod_dl_src:fa:16:3e:f9:5d:f3,load:0x2->NXM_OF_ARP_OP[],
move:NXM_NX_ARP_SHA[]->NXM_NX_ARP_THA[],
load:0xfa163ef95df3->NXM_NX_ARP_SHA[],
move:NXM_OF_ARP_SPA[]->NXM_OF_ARP_TPA[],
load:0xc0a81264->NXM_OF_ARP_SPA[],
move:NXM_NX_REG6[]->NXM_NX_REG7[],
load:0->NXM_NX_REG6[],load:0->NXM_OF_IN_PORT[],resubmit(,32)
cookie=0x0, duration=21.714s, table=25, n_packets=0, n_bytes=0,
idle_age=21, priority=50,arp,metadata=0x4,
arp_tpa=203.0.113.102,arp_op=1
actions=move:NXM_OF_ETH_SRC[]->NXM_OF_ETH_DST[],
mod_dl_src:fa:16:3e:f0:a5:9f,
load:0x2->NXM_OF_ARP_OP[],
move:NXM_NX_ARP_SHA[]->NXM_NX_ARP_THA[],
load:0xfa163ef0a59f->NXM_NX_ARP_SHA[],
move:NXM_OF_ARP_SPA[]->NXM_OF_ARP_TPA[],
load:0xc0a81265->NXM_OF_ARP_SPA[],
move:NXM_NX_REG6[]->NXM_NX_REG7[],
load:0->NXM_NX_REG6[],load:0->NXM_OF_IN_PORT[],resubmit(,32)
cookie=0x0, duration=21.714s, table=25, n_packets=3, n_bytes=258,
idle_age=20, priority=0,metadata=0x4 actions=resubmit(,26)
cookie=0x0, duration=21.714s, table=26, n_packets=18, n_bytes=1212,
idle_age=6, priority=100,metadata=0x4,
dl_dst=01:00:00:00:00:00/01:00:00:00:00:00
actions=load:0xffff->NXM_NX_REG7[],resubmit(,32)
cookie=0x0, duration=21.714s, table=26, n_packets=0, n_bytes=0,
idle_age=21, priority=50,metadata=0x4,dl_dst=fa:16:3e:f0:a5:9f
actions=load:0x3->NXM_NX_REG7[],resubmit(,32)
cookie=0x0, duration=21.714s, table=26, n_packets=0, n_bytes=0,
idle_age=21, priority=50,metadata=0x4,dl_dst=fa:16:3e:f9:5d:f3
actions=load:0x2->NXM_NX_REG7[],resubmit(,32)
cookie=0x0, duration=21.714s, table=26, n_packets=0, n_bytes=0,
idle_age=21, priority=0,metadata=0x4
actions=load:0xfffe->NXM_NX_REG7[],resubmit(,32)
cookie=0x0, duration=17.731s, table=33, n_packets=0, n_bytes=0,
idle_age=17, priority=100,reg7=0x2,metadata=0x4
actions=load:0x2->NXM_NX_REG5[],resubmit(,34)
cookie=0x0, duration=118.126s, table=33, n_packets=0, n_bytes=0,
idle_age=118, hard_age=17, priority=100,reg7=0xfffe,metadata=0x4
actions=load:0x1->NXM_NX_REG5[],load:0x1->NXM_NX_REG7[],
resubmit(,34),load:0xfffe->NXM_NX_REG7[]
cookie=0x0, duration=118.126s, table=33, n_packets=18, n_bytes=1212,
idle_age=2, hard_age=17, priority=100,reg7=0xffff,metadata=0x4
actions=load:0x2->NXM_NX_REG5[],load:0x2->NXM_NX_REG7[],
resubmit(,34),load:0x1->NXM_NX_REG5[],load:0x1->NXM_NX_REG7[],
resubmit(,34),load:0xffff->NXM_NX_REG7[]
cookie=0x0, duration=17.730s, table=33, n_packets=0, n_bytes=0,
idle_age=17, priority=100,reg7=0x1,metadata=0x4
actions=load:0x1->NXM_NX_REG5[],resubmit(,34)
cookie=0x0, duration=17.697s, table=33, n_packets=0, n_bytes=0,
idle_age=17, priority=100,reg7=0x3,metadata=0x4
actions=load:0x1->NXM_NX_REG7[],resubmit(,33)
cookie=0x0, duration=17.731s, table=34, n_packets=3, n_bytes=258,
idle_age=16, priority=100,reg6=0x2,reg7=0x2,metadata=0x4
actions=drop
cookie=0x0, duration=17.730s, table=34, n_packets=15, n_bytes=954,
idle_age=2, priority=100,reg6=0x1,reg7=0x1,metadata=0x4
actions=drop
cookie=0x0, duration=21.714s, table=48, n_packets=18, n_bytes=1212,
idle_age=6, priority=0,metadata=0x4 actions=resubmit(,49)
cookie=0x0, duration=21.714s, table=49, n_packets=18, n_bytes=1212,
idle_age=6, priority=0,metadata=0x4 actions=resubmit(,50)
cookie=0x0, duration=21.714s, table=50, n_packets=0, n_bytes=0,
idle_age=21, priority=100,ip,reg0=0x1/0x1,metadata=0x4
actions=ct(table=51,zone=NXM_NX_REG5[0..15])
cookie=0x0, duration=21.714s, table=50, n_packets=0, n_bytes=0,
idle_age=21, priority=100,ipv6,reg0=0x1/0x1,metadata=0x4
actions=ct(table=51,zone=NXM_NX_REG5[0..15])
cookie=0x0, duration=21.714s, table=50, n_packets=18, n_bytes=1212,
idle_age=6, priority=0,metadata=0x4 actions=resubmit(,51)
cookie=0x0, duration=21.714s, table=51, n_packets=18, n_bytes=1212,
idle_age=6, priority=0,metadata=0x4 actions=resubmit(,52)
cookie=0x0, duration=21.714s, table=52, n_packets=18, n_bytes=1212,
idle_age=6, priority=0,metadata=0x4 actions=resubmit(,53)
cookie=0x0, duration=21.714s, table=53, n_packets=0, n_bytes=0,
idle_age=21, priority=100,ip,reg0=0x4/0x4,metadata=0x4
actions=ct(table=54,zone=NXM_NX_REG5[0..15],nat)
cookie=0x0, duration=21.714s, table=53, n_packets=0, n_bytes=0,
idle_age=21, priority=100,ipv6,reg0=0x4/0x4,metadata=0x4
actions=ct(table=54,zone=NXM_NX_REG5[0..15],nat)
cookie=0x0, duration=21.714s, table=53, n_packets=0, n_bytes=0,
idle_age=21, priority=100,ipv6,reg0=0x2/0x2,metadata=0x4
actions=ct(commit,zone=NXM_NX_REG5[0..15]),resubmit(,54)
cookie=0x0, duration=21.714s, table=53, n_packets=0, n_bytes=0,
idle_age=21, priority=100,ip,reg0=0x2/0x2,metadata=0x4
actions=ct(commit,zone=NXM_NX_REG5[0..15]),resubmit(,54)
cookie=0x0, duration=21.714s, table=53, n_packets=18, n_bytes=1212,
idle_age=6, priority=0,metadata=0x4 actions=resubmit(,54)
cookie=0x0, duration=21.714s, table=54, n_packets=18, n_bytes=1212,
idle_age=6, priority=0,metadata=0x4 actions=resubmit(,55)
cookie=0x0, duration=21.714s, table=55, n_packets=18, n_bytes=1212,
idle_age=6, priority=100,metadata=0x4,
dl_dst=01:00:00:00:00:00/01:00:00:00:00:00
actions=resubmit(,64)
cookie=0x0, duration=21.714s, table=55, n_packets=0, n_bytes=0,
idle_age=21, priority=50,reg7=0x3,metadata=0x4
actions=resubmit(,64)
cookie=0x0, duration=21.714s, table=55, n_packets=0, n_bytes=0,
idle_age=21, priority=50,reg7=0x2,metadata=0x4
actions=resubmit(,64)
cookie=0x0, duration=21.714s, table=55, n_packets=0, n_bytes=0,
idle_age=21, priority=50,reg7=0x1,metadata=0x4
actions=resubmit(,64)
cookie=0x0, duration=21.712s, table=64, n_packets=15, n_bytes=954,
idle_age=6, priority=100,reg7=0x3,metadata=0x4 actions=output:7
cookie=0x0, duration=21.711s, table=64, n_packets=3, n_bytes=258,
idle_age=20, priority=100,reg7=0x1,metadata=0x4 actions=output:8
@ -0,0 +1,311 @@
.. _refarch-refarch:
======================
Reference architecture
======================
The reference architecture defines the minimum environment necessary
to deploy OpenStack with Open Virtual Network (OVN) integration for
the Networking service in production with sufficient expectations
of scale and performance. For evaluation purposes, you can deploy this
environment using the :doc:`Installation Guide </install/ovn/index>` or
`Vagrant <https://github.com/openstack/neutron/tree/master/tools/ovn_vagrant>`_.
Any scaling or performance evaluations should use bare metal instead of
virtual machines.
Layout
------
The reference architecture includes a minimum of four nodes.
The controller node contains the following components that provide enough
functionality to launch basic instances:
* One network interface for management
* Identity service
* Image service
* Networking management with ML2 mechanism driver for OVN (control plane)
* Compute management (control plane)
The database node contains the following components:
* One network interface for management
* OVN northbound service (``ovn-northd``)
* Open vSwitch (OVS) database service (``ovsdb-server``) for the OVN
northbound database (``ovnnb.db``)
* Open vSwitch (OVS) database service (``ovsdb-server``) for the OVN
southbound database (``ovnsb.db``)
.. note::
For functional evaluation only, you can combine the controller and
database nodes.
The two compute nodes contain the following components:
* Two or three network interfaces for management, overlay networks, and
optionally provider networks
* Compute management (hypervisor)
* Hypervisor (KVM)
* OVN controller service (``ovn-controller``)
* OVS data plane service (``ovs-vswitchd``)
* OVS database service (``ovsdb-server``) with OVS local configuration
(``conf.db``) database
* OVN metadata agent (``ovn-metadata-agent``)
The gateway nodes contain the following components:
* Three network interfaces for management, overlay networks and provider
networks.
* OVN controller service (``ovn-controller``)
* OVS data plane service (``ovs-vswitchd``)
* OVS database service (``ovsdb-server``) with OVS local configuration
(``conf.db``) database
.. note::
Each OVN metadata agent provides the metadata service locally on its compute
node in a lightweight way. Each network accessed by the instances on the
compute node has a corresponding ovn-metadata-$net_uuid namespace, and
inside it a ``haproxy`` instance funnels the requests to the
ovn-metadata-agent over a UNIX socket.
Such a namespace can be very helpful for debugging, to reach the local
instances on the compute node. If you log in as root on the compute node,
you can run:
ip netns exec ovn-metadata-$net_uuid ssh user@my.instance.ip.address
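A quick way to find the metadata namespaces present on a compute node and to
inspect one of them (the namespace name below is illustrative) is:

.. code-block:: console

   # ip netns list | grep ovn-metadata
   # ip netns exec ovn-metadata-<net_uuid> ip addr show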
Hardware layout
~~~~~~~~~~~~~~~
.. image:: figures/ovn-hw.png
:alt: Hardware layout
:align: center
Service layout
~~~~~~~~~~~~~~
.. image:: figures/ovn-services.png
:alt: Service layout
:align: center
Networking service with OVN integration
---------------------------------------
The reference architecture deploys the Networking service with OVN
integration as described in the following scenarios:
.. image:: figures/ovn-architecture1.png
:alt: Architecture for Networking service with OVN integration
:align: center
With the OVN driver, all east/west (E/W) traffic that traverses a virtual
router is fully distributed, going from compute node to compute node
without passing through the gateway nodes.
North/south (N/S) traffic that needs SNAT (without floating IPs) always
passes through the centralized gateway nodes, although as soon as there is
more than one gateway node the OVN driver makes use of the HA capabilities
of OVN.
Centralized Floating IPs
~~~~~~~~~~~~~~~~~~~~~~~~
In this architecture, all the N/S router traffic (SNAT and floating
IPs) goes through the gateway nodes.
The compute nodes do not need connectivity to the external network,
although it can be provided if direct connectivity to that network is
wanted for some instances.
For external connectivity, gateway nodes have to set ``ovn-cms-options``
to ``enable-chassis-as-gw`` in the ``external_ids`` column of the
``Open_vSwitch`` table, for example:
.. code-block:: console
$ ovs-vsctl set open . external-ids:ovn-cms-options="enable-chassis-as-gw"
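To verify the setting on a gateway node, you can read the value back, for
example:

.. code-block:: console

   $ ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options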
Distributed Floating IPs (DVR)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In this architecture, the floating IP N/S traffic flows directly
to and from the compute nodes through the specific provider network
bridge. In this case, the compute nodes need connectivity to the
external network.
Each compute node contains the following network components:
.. image:: figures/ovn-compute1.png
:alt: Compute node network components
:align: center
.. note::
The Networking service creates a unique network namespace for each
virtual network that enables the metadata service.
Several external connections can optionally be created via provider
bridges. These can be used for direct VM connectivity to specific
networks or for distributed floating IPs.
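As a sketch, mapping a physical network name to a provider bridge on a node
is done through the ``ovn-bridge-mappings`` option; the ``provider`` network
name and ``br-provider`` bridge below are illustrative:

.. code-block:: console

   $ ovs-vsctl set open . external-ids:ovn-bridge-mappings=provider:br-provider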
.. _refarch_database-access:
Accessing OVN database content
------------------------------
OVN stores configuration data in a collection of OVS database tables.
The following commands show the contents of the most common database
tables in the northbound and southbound databases. The example database
output in this section uses these commands with various output filters.
.. code-block:: console
$ ovn-nbctl list Logical_Switch
$ ovn-nbctl list Logical_Switch_Port
$ ovn-nbctl list ACL
$ ovn-nbctl list Address_Set
$ ovn-nbctl list Logical_Router
$ ovn-nbctl list Logical_Router_Port
$ ovn-nbctl list Gateway_Chassis
$ ovn-sbctl list Chassis
$ ovn-sbctl list Encap
$ ovn-sbctl list Address_Set
$ ovn-sbctl lflow-list
$ ovn-sbctl list Multicast_Group
$ ovn-sbctl list Datapath_Binding
$ ovn-sbctl list Port_Binding
$ ovn-sbctl list MAC_Binding
$ ovn-sbctl list Gateway_Chassis
.. note::
By default, you must run these commands from the node containing
the OVN databases.
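Alternatively, these commands accept a ``--db`` option so that they can be
run from another host; the address below is only an example for the
database node:

.. code-block:: console

   $ ovn-nbctl --db=tcp:192.0.2.10:6641 list Logical_Switch
   $ ovn-sbctl --db=tcp:192.0.2.10:6642 list Chassis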
.. _refarch-adding-compute-node:
Adding a compute node
---------------------
When you add a compute node to the environment, the OVN controller
service on it connects to the OVN southbound database and registers
the node as a chassis.
.. code-block:: console
_uuid : 9be8639d-1d0b-4e3d-9070-03a655073871
encaps : [2fcefdf4-a5e7-43ed-b7b2-62039cc7e32e]
external_ids : {ovn-bridge-mappings=""}
hostname : "compute1"
name : "410ee302-850b-4277-8610-fa675d620cb7"
vtep_logical_switches: []
The ``encaps`` field value refers to tunnel endpoint information
for the compute node.
.. code-block:: console
_uuid : 2fcefdf4-a5e7-43ed-b7b2-62039cc7e32e
ip : "10.0.0.32"
options : {}
type : geneve
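For a condensed view of the registered chassis, their encapsulations, and
the ports bound to them, you can also run the following command on the
database node:

.. code-block:: console

   $ ovn-sbctl show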
Security Groups/Rules
---------------------
Each security group maps to two Address_Sets in the OVN NB and SB
databases, one for IPv4 and another for IPv6. They hold the IP addresses
of the ports that belong to the security group, so that rules with
``remote_group_id`` can be applied efficiently.
.. todo: add block with openstack security group rule example
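For example, a rule that relies on ``remote_group_id`` (here, allowing SSH
between members of the same ``default`` group) can be created as follows;
this is only an illustrative sketch:

.. code-block:: console

   $ openstack security group rule create --protocol tcp --dst-port 22 \
     --remote-group default default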
OVN operations
~~~~~~~~~~~~~~
#. Creating a security group causes the OVN mechanism driver to create
two new entries in the Address_Set table of the northbound DB:
.. code-block:: console
_uuid : 9a9d01bd-4afc-4d12-853a-cd21b547911d
addresses : []
external_ids : {"neutron:security_group_name"=default}
name : "as_ip4_90a78a43_b549_4bee_8822_21fcccab58dc"
_uuid : 27a91327-636e-4125-99f0-6f2937a3b6d8
addresses : []
external_ids : {"neutron:security_group_name"=default}
name : "as_ip6_90a78a43_b549_4bee_8822_21fcccab58dc"
In the above entries, the address set name includes the protocol (IPv4
or IPv6, written as ``ip4`` or ``ip6``) and the UUID of the OpenStack
security group, with dashes translated to underscores.
#. In turn, these new entries will be translated by the OVN northd daemon
into entries in the southbound DB:
.. code-block:: console
_uuid : 886d7b3a-e460-470f-8af2-7c7d88ce45d2
addresses : []
name : "as_ip4_90a78a43_b549_4bee_8822_21fcccab58dc"
_uuid : 355ddcba-941d-4f1c-b823-dc811cec59ca
addresses : []
name : "as_ip6_90a78a43_b549_4bee_8822_21fcccab58dc"
Networks
--------
.. toctree::
:maxdepth: 1
provider-networks
selfservice-networks
Routers
-------
.. toctree::
:maxdepth: 1
routers
.. todo: Explain L3HA modes available starting at OVS 2.8
Instances
---------
Launching an instance causes the same series of operations regardless
of the network. The following example uses the ``provider`` provider
network, ``cirros`` image, ``m1.tiny`` flavor, ``default`` security
group, and ``mykey`` key.
.. toctree::
:maxdepth: 1
launch-instance-provider-network
launch-instance-selfservice-network
.. todo: Add north-south when OVN gains support for it.
Traffic flows
-------------
East-west for instances on the same provider network
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
East-west for instances on different provider networks
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
East-west for instances on the same self-service network
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
East-west for instances on different self-service networks
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@ -0,0 +1,855 @@
.. _refarch-routers:
Routers
-------
Routers pass traffic between layer-3 networks.
Create a router
~~~~~~~~~~~~~~~
#. On the controller node, source the credentials for a regular
(non-privileged) project. The following example uses the ``demo``
project.
#. On the controller node, create a router in the Networking service.
.. code-block:: console
$ openstack router create router
+-----------------------+--------------------------------------+
| Field | Value |
+-----------------------+--------------------------------------+
| admin_state_up | UP |
| description | |
| external_gateway_info | null |
| headers | |
| id | 24addfcd-5506-405d-a59f-003644c3d16a |
| name | router |
| project_id | b1ebf33664df402693f729090cfab861 |
| routes | |
| status | ACTIVE |
+-----------------------+--------------------------------------+
OVN operations
^^^^^^^^^^^^^^
The OVN mechanism driver and OVN perform the following operations when
creating a router.
#. The OVN mechanism driver translates the router into a logical
router object in the OVN northbound database.
.. code-block:: console
_uuid : 1c2e340d-dac9-496b-9e86-1065f9dab752
default_gw : []
enabled : []
external_ids : {"neutron:router_name"="router"}
name : "neutron-a24fd760-1a99-4eec-9f02-24bb284ff708"
ports : []
static_routes : []
#. The OVN northbound service translates this object into logical flows
and datapath bindings in the OVN southbound database.
* Datapath bindings
.. code-block:: console
_uuid : 4a7485c6-a1ef-46a5-b57c-5ddb6ac15aaa
external_ids : {logical-router="1c2e340d-dac9-496b-9e86-1065f9dab752"}
tunnel_key : 3
* Logical flows
.. code-block:: console
Datapath: 4a7485c6-a1ef-46a5-b57c-5ddb6ac15aaa Pipeline: ingress
table= 0( lr_in_admission), priority= 100,
match=(vlan.present || eth.src[40]),
action=(drop;)
table= 1( lr_in_ip_input), priority= 100,
match=(ip4.mcast || ip4.src == 255.255.255.255 ||
ip4.src == 127.0.0.0/8 || ip4.dst == 127.0.0.0/8 ||
ip4.src == 0.0.0.0/8 || ip4.dst == 0.0.0.0/8),
action=(drop;)
table= 1( lr_in_ip_input), priority= 50, match=(ip4.mcast),
action=(drop;)
table= 1( lr_in_ip_input), priority= 50, match=(eth.bcast),
action=(drop;)
table= 1( lr_in_ip_input), priority= 30,
match=(ip4 && ip.ttl == {0, 1}), action=(drop;)
table= 1( lr_in_ip_input), priority= 0, match=(1),
action=(next;)
table= 2( lr_in_unsnat), priority= 0, match=(1),
action=(next;)
table= 3( lr_in_dnat), priority= 0, match=(1),
action=(next;)
table= 5( lr_in_arp_resolve), priority= 0, match=(1),
action=(get_arp(outport, reg0); next;)
table= 6( lr_in_arp_request), priority= 100,
match=(eth.dst == 00:00:00:00:00:00),
action=(arp { eth.dst = ff:ff:ff:ff:ff:ff; arp.spa = reg1;
arp.op = 1; output; };)
table= 6( lr_in_arp_request), priority= 0, match=(1),
action=(output;)
Datapath: 4a7485c6-a1ef-46a5-b57c-5ddb6ac15aaa Pipeline: egress
table= 0( lr_out_snat), priority= 0, match=(1),
action=(next;)
#. The OVN controller service on each compute node translates these objects
into flows on the integration bridge ``br-int``.
.. code-block:: console
# ovs-ofctl dump-flows br-int
cookie=0x0, duration=6.402s, table=16, n_packets=0, n_bytes=0,
idle_age=6, priority=100,metadata=0x5,vlan_tci=0x1000/0x1000
actions=drop
cookie=0x0, duration=6.402s, table=16, n_packets=0, n_bytes=0,
idle_age=6, priority=100,metadata=0x5,
dl_src=01:00:00:00:00:00/01:00:00:00:00:00
actions=drop
cookie=0x0, duration=6.402s, table=17, n_packets=0, n_bytes=0,
idle_age=6, priority=100,ip,metadata=0x5,nw_dst=127.0.0.0/8
actions=drop
cookie=0x0, duration=6.402s, table=17, n_packets=0, n_bytes=0,
idle_age=6, priority=100,ip,metadata=0x5,nw_dst=0.0.0.0/8
actions=drop
cookie=0x0, duration=6.402s, table=17, n_packets=0, n_bytes=0,
idle_age=6, priority=100,ip,metadata=0x5,nw_dst=224.0.0.0/4
actions=drop
cookie=0x0, duration=6.402s, table=17, n_packets=0, n_bytes=0,
idle_age=6, priority=50,ip,metadata=0x5,nw_dst=224.0.0.0/4
actions=drop
cookie=0x0, duration=6.402s, table=17, n_packets=0, n_bytes=0,
idle_age=6, priority=100,ip,metadata=0x5,nw_src=255.255.255.255
actions=drop
cookie=0x0, duration=6.402s, table=17, n_packets=0, n_bytes=0,
idle_age=6, priority=100,ip,metadata=0x5,nw_src=127.0.0.0/8
actions=drop
cookie=0x0, duration=6.402s, table=17, n_packets=0, n_bytes=0,
idle_age=6, priority=100,ip,metadata=0x5,nw_src=0.0.0.0/8
actions=drop
cookie=0x0, duration=6.402s, table=17, n_packets=0, n_bytes=0,
idle_age=6, priority=90,arp,metadata=0x5,arp_op=2
actions=push:NXM_NX_REG0[],push:NXM_OF_ETH_SRC[],
push:NXM_NX_ARP_SHA[],push:NXM_OF_ARP_SPA[],
pop:NXM_NX_REG0[],pop:NXM_OF_ETH_SRC[],
controller(userdata=00.00.00.01.00.00.00.00),
pop:NXM_OF_ETH_SRC[],pop:NXM_NX_REG0[]
cookie=0x0, duration=6.402s, table=17, n_packets=0, n_bytes=0,
idle_age=6, priority=50,metadata=0x5,dl_dst=ff:ff:ff:ff:ff:ff
actions=drop
cookie=0x0, duration=6.402s, table=17, n_packets=0, n_bytes=0,
idle_age=6, priority=30,ip,metadata=0x5,nw_ttl=0
actions=drop
cookie=0x0, duration=6.402s, table=17, n_packets=0, n_bytes=0,
idle_age=6, priority=30,ip,metadata=0x5,nw_ttl=1
actions=drop
cookie=0x0, duration=6.402s, table=17, n_packets=0, n_bytes=0,
idle_age=6, priority=0,metadata=0x5
actions=resubmit(,18)
cookie=0x0, duration=6.402s, table=18, n_packets=0, n_bytes=0,
idle_age=6, priority=0,metadata=0x5
actions=resubmit(,19)
cookie=0x0, duration=6.402s, table=19, n_packets=0, n_bytes=0,
idle_age=6, priority=0,metadata=0x5
actions=resubmit(,20)
cookie=0x0, duration=6.402s, table=22, n_packets=0, n_bytes=0,
idle_age=6, priority=0,metadata=0x5
actions=resubmit(,32)
cookie=0x0, duration=6.402s, table=48, n_packets=0, n_bytes=0,
idle_age=6, priority=0,metadata=0x5
actions=resubmit(,49)
Attach a self-service network to the router
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Self-service networks, particularly subnets, must interface with a
router to enable connectivity with other self-service and provider
networks.
#. On the controller node, add the self-service network subnet
``selfservice-v4`` to the router ``router``.
.. code-block:: console
$ openstack router add subnet router selfservice-v4
.. note::
This command provides no output.
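To confirm the attachment, you can list the ports owned by the router; the
output should include a port with an IP address on the ``selfservice-v4``
subnet:

.. code-block:: console

   $ openstack port list --router router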
OVN operations
^^^^^^^^^^^^^^
The OVN mechanism driver and OVN perform the following operations when
adding a subnet as an interface on a router.
#. The OVN mechanism driver translates the operation into logical
objects and devices in the OVN northbound database and performs a
series of operations on them.
* Create a logical port.
.. code-block:: console
_uuid : 4c9e70b1-fff0-4d0d-af8e-42d3896eb76f
addresses : ["fa:16:3e:0c:55:62 192.168.1.1"]
enabled : true
external_ids : {"neutron:port_name"=""}
name : "5b72d278-5b16-44a6-9aa0-9e513a429506"
options : {router-port="lrp-5b72d278-5b16-44a6-9aa0-9e513a429506"}
parent_name : []
port_security : []
tag : []
type : router
up : false
* Add the logical port to the logical switch.
.. code-block:: console
_uuid : 0ab40684-7cf8-4d6c-ae8b-9d9143762d37
acls : []
external_ids : {"neutron:network_name"="selfservice"}
name : "neutron-d5aadceb-d8d6-41c8-9252-c5e0fe6c26a5"
ports : [1ed7c28b-dc69-42b8-bed6-46477bb8b539,
4c9e70b1-fff0-4d0d-af8e-42d3896eb76f,
ae10a5e0-db25-4108-b06a-d2d5c127d9c4]
* Create a logical router port object.
.. code-block:: console
_uuid : f60ccb93-7b3d-4713-922c-37104b7055dc
enabled : []
external_ids : {}
mac : "fa:16:3e:0c:55:62"
name : "lrp-5b72d278-5b16-44a6-9aa0-9e513a429506"
network : "192.168.1.1/24"
peer : []
* Add the logical router port to the logical router object.
.. code-block:: console
_uuid : 1c2e340d-dac9-496b-9e86-1065f9dab752
default_gw : []
enabled : []
external_ids : {"neutron:router_name"="router"}
name : "neutron-a24fd760-1a99-4eec-9f02-24bb284ff708"
ports : [f60ccb93-7b3d-4713-922c-37104b7055dc]
static_routes : []
#. The OVN northbound service translates these objects into logical flows,
datapath bindings, and the appropriate multicast groups in the OVN
southbound database.
* Logical flows in the logical router datapath
.. code-block:: console
Datapath: 4a7485c6-a1ef-46a5-b57c-5ddb6ac15aaa Pipeline: ingress
table= 0( lr_in_admission), priority= 50,
match=((eth.mcast || eth.dst == fa:16:3e:0c:55:62) &&
inport == "lrp-5b72d278-5b16-44a6-9aa0-9e513a429506"),
action=(next;)
table= 1( lr_in_ip_input), priority= 100,
match=(ip4.src == {192.168.1.1, 192.168.1.255}), action=(drop;)
table= 1( lr_in_ip_input), priority= 90,
match=(ip4.dst == 192.168.1.1 && icmp4.type == 8 &&
icmp4.code == 0),
action=(ip4.dst = ip4.src; ip4.src = 192.168.1.1; ip.ttl = 255;
icmp4.type = 0;
inport = ""; /* Allow sending out inport. */ next; )
table= 1( lr_in_ip_input), priority= 90,
match=(inport == "lrp-5b72d278-5b16-44a6-9aa0-9e513a429506" &&
arp.tpa == 192.168.1.1 && arp.op == 1),
action=(eth.dst = eth.src; eth.src = fa:16:3e:0c:55:62;
arp.op = 2; /* ARP reply */ arp.tha = arp.sha;
arp.sha = fa:16:3e:0c:55:62; arp.tpa = arp.spa;
arp.spa = 192.168.1.1;
outport = "lrp-5b72d278-5b16-44a6-9aa0-9e513a429506";
inport = ""; /* Allow sending out inport. */ output;)
table= 1( lr_in_ip_input), priority= 60,
match=(ip4.dst == 192.168.1.1), action=(drop;)
table= 4( lr_in_ip_routing), priority= 24,
match=(ip4.dst == 192.168.1.0/255.255.255.0),
action=(ip.ttl--; reg0 = ip4.dst; reg1 = 192.168.1.1;
eth.src = fa:16:3e:0c:55:62;
outport = "lrp-5b72d278-5b16-44a6-9aa0-9e513a429506";
next;)
Datapath: 4a7485c6-a1ef-46a5-b57c-5ddb6ac15aaa Pipeline: egress
table= 1( lr_out_delivery), priority= 100,
match=(outport == "lrp-5b72d278-5b16-44a6-9aa0-9e513a429506),
action=(output;)
* Logical flows in the logical switch datapath
.. code-block:: console
Datapath: 611d35e8-b1e1-442c-bc07-7c6192ad6216 Pipeline: ingress
table= 0( ls_in_port_sec_l2), priority= 50,
match=(inport == "5b72d278-5b16-44a6-9aa0-9e513a429506"),
action=(next;)
table= 3( ls_in_pre_acl), priority= 110,
match=(ip && inport == "5b72d278-5b16-44a6-9aa0-9e513a429506"),
action=(next;)
table= 9( ls_in_arp_rsp), priority= 50,
match=(arp.tpa == 192.168.1.1 && arp.op == 1),
action=(eth.dst = eth.src; eth.src = fa:16:3e:0c:55:62;
arp.op = 2; /* ARP reply */ arp.tha = arp.sha;
arp.sha = fa:16:3e:0c:55:62; arp.tpa = arp.spa;
arp.spa = 192.168.1.1; outport = inport;
inport = ""; /* Allow sending out inport. */ output;)
table=10( ls_in_l2_lkup), priority= 50,
match=(eth.dst == fa:16:3e:fa:76:8f),
action=(outport = "f112b99a-8ccc-4c52-8733-7593fa0966ea"; output;)
Datapath: 611d35e8-b1e1-442c-bc07-7c6192ad6216 Pipeline: egress
table= 1( ls_out_pre_acl), priority= 110,
match=(ip && outport == "f112b99a-8ccc-4c52-8733-7593fa0966ea"),
action=(next;)
table= 7( ls_out_port_sec_l2), priority= 50,
match=(outport == "f112b99a-8ccc-4c52-8733-7593fa0966ea"),
action=(output;)
* Port bindings
.. code-block:: console
_uuid : 0f86395b-a0d8-40fd-b22c-4c9e238a7880
chassis : []
datapath : 4a7485c6-a1ef-46a5-b57c-5ddb6ac15aaa
logical_port : "lrp-5b72d278-5b16-44a6-9aa0-9e513a429506"
mac : []
options : {peer="5b72d278-5b16-44a6-9aa0-9e513a429506"}
parent_port : []
tag : []
tunnel_key : 1
type : patch
_uuid : 8d95ab8c-c2ea-4231-9729-7ecbfc2cd676
chassis : []
datapath : 4aef86e4-e54a-4c83-bb27-d65c670d4b51
logical_port : "5b72d278-5b16-44a6-9aa0-9e513a429506"
mac : ["fa:16:3e:0c:55:62 192.168.1.1"]
options : {peer="lrp-5b72d278-5b16-44a6-9aa0-9e513a429506"}
parent_port : []
tag : []
tunnel_key : 3
type : patch
* Multicast groups
.. code-block:: console
_uuid : 4a6191aa-d8ac-4e93-8306-b0d8fbbe4e35
datapath : 4aef86e4-e54a-4c83-bb27-d65c670d4b51
name : _MC_flood
ports : [8d95ab8c-c2ea-4231-9729-7ecbfc2cd676,
be71fac3-9f04-41c9-9951-f3f7f1fa1ec5,
da5c1269-90b7-4df2-8d76-d4575754b02d]
tunnel_key : 65535
In addition, if the self-service network contains ports with IP addresses
(typically instances or DHCP servers), OVN creates a logical flow for
each port, similar to the following example.
.. code-block:: console
Datapath: 4a7485c6-a1ef-46a5-b57c-5ddb6ac15aaa Pipeline: ingress
table= 5( lr_in_arp_resolve), priority= 100,
match=(outport == "lrp-f112b99a-8ccc-4c52-8733-7593fa0966ea" &&
reg0 == 192.168.1.11),
action=(eth.dst = fa:16:3e:b6:91:70; next;)
#. On each compute node, the OVN controller service creates patch ports,
similar to the following example.
.. code-block:: console
7(patch-f112b99a-): addr:4e:01:91:2a:73:66
config: 0
state: 0
speed: 0 Mbps now, 0 Mbps max
8(patch-lrp-f112b): addr:be:9d:7b:31:bb:87
config: 0
state: 0
speed: 0 Mbps now, 0 Mbps max
#. On all compute nodes, the OVN controller service creates the
following additional flows:
.. code-block:: console
cookie=0x0, duration=6.667s, table=0, n_packets=0, n_bytes=0,
idle_age=6, priority=100,in_port=8
actions=load:0x9->OXM_OF_METADATA[],load:0x1->NXM_NX_REG6[],
resubmit(,16)
cookie=0x0, duration=6.667s, table=0, n_packets=0, n_bytes=0,
idle_age=6, priority=100,in_port=7
actions=load:0x7->OXM_OF_METADATA[],load:0x4->NXM_NX_REG6[],
resubmit(,16)
cookie=0x0, duration=6.674s, table=16, n_packets=0, n_bytes=0,
idle_age=6, priority=50,reg6=0x4,metadata=0x7
actions=resubmit(,17)
cookie=0x0, duration=6.674s, table=16, n_packets=0, n_bytes=0,
idle_age=6, priority=50,reg6=0x1,metadata=0x9,
dl_dst=fa:16:3e:fa:76:8f
actions=resubmit(,17)
cookie=0x0, duration=6.674s, table=16, n_packets=0, n_bytes=0,
idle_age=6, priority=50,reg6=0x1,metadata=0x9,
dl_dst=01:00:00:00:00:00/01:00:00:00:00:00
actions=resubmit(,17)
cookie=0x0, duration=6.674s, table=17, n_packets=0, n_bytes=0,
idle_age=6, priority=100,ip,metadata=0x9,nw_src=192.168.1.1
actions=drop
cookie=0x0, duration=6.673s, table=17, n_packets=0, n_bytes=0,
idle_age=6, priority=100,ip,metadata=0x9,nw_src=192.168.1.255
actions=drop
cookie=0x0, duration=6.673s, table=17, n_packets=0, n_bytes=0,
idle_age=6, priority=90,arp,reg6=0x1,metadata=0x9,
arp_tpa=192.168.1.1,arp_op=1
actions=move:NXM_OF_ETH_SRC[]->NXM_OF_ETH_DST[],
mod_dl_src:fa:16:3e:fa:76:8f,load:0x2->NXM_OF_ARP_OP[],
move:NXM_NX_ARP_SHA[]->NXM_NX_ARP_THA[],
load:0xfa163efa768f->NXM_NX_ARP_SHA[],
move:NXM_OF_ARP_SPA[]->NXM_OF_ARP_TPA[],
load:0xc0a80101->NXM_OF_ARP_SPA[],load:0x1->NXM_NX_REG7[],
load:0->NXM_NX_REG6[],load:0->NXM_OF_IN_PORT[],resubmit(,32)
cookie=0x0, duration=6.673s, table=17, n_packets=0, n_bytes=0,
idle_age=6, priority=90,icmp,metadata=0x9,nw_dst=192.168.1.1,
icmp_type=8,icmp_code=0
actions=move:NXM_OF_IP_SRC[]->NXM_OF_IP_DST[],mod_nw_src:192.168.1.1,
load:0xff->NXM_NX_IP_TTL[],load:0->NXM_OF_ICMP_TYPE[],
load:0->NXM_NX_REG6[],load:0->NXM_OF_IN_PORT[],resubmit(,18)
cookie=0x0, duration=6.674s, table=17, n_packets=0, n_bytes=0,
idle_age=6, priority=60,ip,metadata=0x9,nw_dst=192.168.1.1
actions=drop
cookie=0x0, duration=6.674s, table=20, n_packets=0, n_bytes=0,
idle_age=6, priority=24,ip,metadata=0x9,nw_dst=192.168.1.0/24
actions=dec_ttl(),move:NXM_OF_IP_DST[]->NXM_NX_REG0[],
load:0xc0a80101->NXM_NX_REG1[],mod_dl_src:fa:16:3e:fa:76:8f,
load:0x1->NXM_NX_REG7[],resubmit(,21)
cookie=0x0, duration=6.674s, table=21, n_packets=0, n_bytes=0,
idle_age=6, priority=100,reg0=0xc0a80103,reg7=0x1,metadata=0x9
actions=mod_dl_dst:fa:16:3e:d5:00:02,resubmit(,22)
cookie=0x0, duration=6.674s, table=21, n_packets=0, n_bytes=0,
idle_age=6, priority=100,reg0=0xc0a80102,reg7=0x1,metadata=0x9
actions=mod_dl_dst:fa:16:3e:82:8b:0e,resubmit(,22)
cookie=0x0, duration=6.673s, table=21, n_packets=0, n_bytes=0,
idle_age=6, priority=100,reg0=0xc0a8010b,reg7=0x1,metadata=0x9
actions=mod_dl_dst:fa:16:3e:b6:91:70,resubmit(,22)
cookie=0x0, duration=6.673s, table=25, n_packets=0, n_bytes=0,
idle_age=6, priority=50,arp,metadata=0x7,arp_tpa=192.168.1.1,
arp_op=1
actions=move:NXM_OF_ETH_SRC[]->NXM_OF_ETH_DST[],
mod_dl_src:fa:16:3e:fa:76:8f,load:0x2->NXM_OF_ARP_OP[],
move:NXM_NX_ARP_SHA[]->NXM_NX_ARP_THA[],
load:0xfa163efa768f->NXM_NX_ARP_SHA[],
move:NXM_OF_ARP_SPA[]->NXM_OF_ARP_TPA[],
load:0xc0a80101->NXM_OF_ARP_SPA[],
move:NXM_NX_REG6[]->NXM_NX_REG7[],load:0->NXM_NX_REG6[],
load:0->NXM_OF_IN_PORT[],resubmit(,32)
cookie=0x0, duration=6.674s, table=26, n_packets=0, n_bytes=0,
idle_age=6, priority=50,metadata=0x7,dl_dst=fa:16:3e:fa:76:8f
actions=load:0x4->NXM_NX_REG7[],resubmit(,32)
cookie=0x0, duration=6.667s, table=33, n_packets=0, n_bytes=0,
idle_age=6, priority=100,reg7=0x4,metadata=0x7
actions=resubmit(,34)
cookie=0x0, duration=6.667s, table=33, n_packets=0, n_bytes=0,
idle_age=6, priority=100,reg7=0x1,metadata=0x9
actions=resubmit(,34)
cookie=0x0, duration=6.667s, table=34, n_packets=0, n_bytes=0,
idle_age=6, priority=100,reg6=0x4,reg7=0x4,metadata=0x7
actions=drop
cookie=0x0, duration=6.667s, table=34, n_packets=0, n_bytes=0,
idle_age=6, priority=100,reg6=0x1,reg7=0x1,metadata=0x9
actions=drop
cookie=0x0, duration=6.674s, table=49, n_packets=0, n_bytes=0,
idle_age=6, priority=110,ipv6,reg7=0x4,metadata=0x7
actions=resubmit(,50)
cookie=0x0, duration=6.673s, table=49, n_packets=0, n_bytes=0,
idle_age=6, priority=110,ip,reg7=0x4,metadata=0x7
actions=resubmit(,50)
cookie=0x0, duration=6.673s, table=49, n_packets=0, n_bytes=0,
idle_age=6, priority=100,reg7=0x1,metadata=0x9
actions=resubmit(,64)
cookie=0x0, duration=6.673s, table=55, n_packets=0, n_bytes=0,
idle_age=6, priority=50,reg7=0x4,metadata=0x7
actions=resubmit(,64)
cookie=0x0, duration=6.667s, table=64, n_packets=0, n_bytes=0,
idle_age=6, priority=100,reg7=0x4,metadata=0x7
actions=output:7
cookie=0x0, duration=6.667s, table=64, n_packets=0, n_bytes=0,
idle_age=6, priority=100,reg7=0x1,metadata=0x9
actions=output:8
#. On compute nodes not containing a port on the network, the OVN controller
also creates additional flows.
.. code-block:: console
cookie=0x0, duration=6.673s, table=16, n_packets=0, n_bytes=0,
idle_age=6, priority=100,metadata=0x7,
dl_src=01:00:00:00:00:00/01:00:00:00:00:00
actions=drop
cookie=0x0, duration=6.674s, table=16, n_packets=0, n_bytes=0,
idle_age=6, priority=100,metadata=0x7,vlan_tci=0x1000/0x1000
actions=drop
cookie=0x0, duration=6.674s, table=16, n_packets=0, n_bytes=0,
idle_age=6, priority=50,reg6=0x3,metadata=0x7,
dl_src=fa:16:3e:b6:91:70
actions=resubmit(,17)
cookie=0x0, duration=6.674s, table=16, n_packets=0, n_bytes=0,
idle_age=6, priority=50,reg6=0x2,metadata=0x7
actions=resubmit(,17)
cookie=0x0, duration=6.674s, table=16, n_packets=0, n_bytes=0,
idle_age=6, priority=50,reg6=0x1,metadata=0x7
actions=resubmit(,17)
cookie=0x0, duration=6.674s, table=17, n_packets=0, n_bytes=0,
idle_age=6, priority=90,ip,reg6=0x3,metadata=0x7,
dl_src=fa:16:3e:b6:91:70,nw_src=192.168.1.11
actions=resubmit(,18)
cookie=0x0, duration=6.674s, table=17, n_packets=0, n_bytes=0,
idle_age=6, priority=90,udp,reg6=0x3,metadata=0x7,
dl_src=fa:16:3e:b6:91:70,nw_src=0.0.0.0,
nw_dst=255.255.255.255,tp_src=68,tp_dst=67
actions=resubmit(,18)
cookie=0x0, duration=6.674s, table=17, n_packets=0, n_bytes=0,
idle_age=6, priority=80,ip,reg6=0x3,metadata=0x7,
dl_src=fa:16:3e:b6:91:70
actions=drop
cookie=0x0, duration=6.673s, table=17, n_packets=0, n_bytes=0,
idle_age=6, priority=80,ipv6,reg6=0x3,metadata=0x7,
dl_src=fa:16:3e:b6:91:70
actions=drop
cookie=0x0, duration=6.670s, table=17, n_packets=0, n_bytes=0,
idle_age=6, priority=0,metadata=0x7
actions=resubmit(,18)
cookie=0x0, duration=6.674s, table=18, n_packets=0, n_bytes=0,
idle_age=6, priority=90,arp,reg6=0x3,metadata=0x7,
dl_src=fa:16:3e:b6:91:70,arp_spa=192.168.1.11,
arp_sha=fa:16:3e:b6:91:70
actions=resubmit(,19)
cookie=0x0, duration=6.673s, table=18, n_packets=0, n_bytes=0,
idle_age=6, priority=80,icmp6,reg6=0x3,metadata=0x7,icmp_type=135,
icmp_code=0
actions=drop
cookie=0x0, duration=6.673s, table=18, n_packets=0, n_bytes=0,
idle_age=6, priority=80,icmp6,reg6=0x3,metadata=0x7,icmp_type=136,
icmp_code=0
actions=drop
cookie=0x0, duration=6.673s, table=18, n_packets=0, n_bytes=0,
idle_age=6, priority=80,arp,reg6=0x3,metadata=0x7
actions=drop
cookie=0x0, duration=6.673s, table=18, n_packets=0, n_bytes=0,
idle_age=6, priority=0,metadata=0x7
actions=resubmit(,19)
cookie=0x0, duration=6.673s, table=19, n_packets=0, n_bytes=0,
idle_age=6, priority=110,icmp6,metadata=0x7,icmp_type=136,icmp_code=0
actions=resubmit(,20)
cookie=0x0, duration=6.673s, table=19, n_packets=0, n_bytes=0,
idle_age=6, priority=110,icmp6,metadata=0x7,icmp_type=135,icmp_code=0
actions=resubmit(,20)
cookie=0x0, duration=6.674s, table=19, n_packets=0, n_bytes=0,
idle_age=6, priority=100,ip,metadata=0x7
actions=load:0x1->NXM_NX_REG0[0],resubmit(,20)
cookie=0x0, duration=6.670s, table=19, n_packets=0, n_bytes=0,
idle_age=6, priority=100,ipv6,metadata=0x7
actions=load:0x1->NXM_NX_REG0[0],resubmit(,20)
cookie=0x0, duration=6.674s, table=19, n_packets=0, n_bytes=0,
idle_age=6, priority=0,metadata=0x7
actions=resubmit(,20)
cookie=0x0, duration=6.673s, table=20, n_packets=0, n_bytes=0,
idle_age=6, priority=0,metadata=0x7
actions=resubmit(,21)
cookie=0x0, duration=6.674s, table=21, n_packets=0, n_bytes=0,
idle_age=6, priority=100,ipv6,reg0=0x1/0x1,metadata=0x7
actions=ct(table=22,zone=NXM_NX_REG5[0..15])
cookie=0x0, duration=6.670s, table=21, n_packets=0, n_bytes=0,
idle_age=6, priority=100,ip,reg0=0x1/0x1,metadata=0x7
actions=ct(table=22,zone=NXM_NX_REG5[0..15])
cookie=0x0, duration=6.674s, table=21, n_packets=0, n_bytes=0,
idle_age=6, priority=0,metadata=0x7
actions=resubmit(,22)
cookie=0x0, duration=6.674s, table=22, n_packets=0, n_bytes=0,
idle_age=6, priority=65535,ct_state=-new+est-rel-inv+trk,metadata=0x7
actions=resubmit(,23)
cookie=0x0, duration=6.673s, table=22, n_packets=0, n_bytes=0,
idle_age=6, priority=65535,ct_state=-new-est+rel-inv+trk,metadata=0x7
actions=resubmit(,23)
cookie=0x0, duration=6.673s, table=22, n_packets=0, n_bytes=0,
idle_age=6, priority=65535,ct_state=+inv+trk,metadata=0x7
actions=drop
cookie=0x0, duration=6.673s, table=22, n_packets=0, n_bytes=0,
idle_age=6, priority=65535,icmp6,metadata=0x7,icmp_type=135,
icmp_code=0
actions=resubmit(,23)
cookie=0x0, duration=6.673s, table=22, n_packets=0, n_bytes=0,
idle_age=6, priority=65535,icmp6,metadata=0x7,icmp_type=136,
icmp_code=0
actions=resubmit(,23)
cookie=0x0, duration=6.674s, table=22, n_packets=0, n_bytes=0,
idle_age=6, priority=2002,udp,reg6=0x3,metadata=0x7,
nw_dst=255.255.255.255,tp_src=68,tp_dst=67
actions=load:0x1->NXM_NX_REG0[1],resubmit(,23)
cookie=0x0, duration=6.674s, table=22, n_packets=0, n_bytes=0,
idle_age=6, priority=2002,udp,reg6=0x3,metadata=0x7,
nw_dst=192.168.1.0/24,tp_src=68,tp_dst=67
actions=load:0x1->NXM_NX_REG0[1],resubmit(,23)
cookie=0x0, duration=6.673s, table=22, n_packets=0, n_bytes=0,
idle_age=6, priority=2002,ct_state=+new+trk,ipv6,reg6=0x3,metadata=0x7
actions=load:0x1->NXM_NX_REG0[1],resubmit(,23)
cookie=0x0, duration=6.673s, table=22, n_packets=0, n_bytes=0,
idle_age=6, priority=2002,ct_state=+new+trk,ip,reg6=0x3,metadata=0x7
actions=load:0x1->NXM_NX_REG0[1],resubmit(,23)
cookie=0x0, duration=6.674s, table=22, n_packets=0, n_bytes=0,
idle_age=6, priority=2001,ip,reg6=0x3,metadata=0x7
actions=drop
cookie=0x0, duration=6.673s, table=22, n_packets=0, n_bytes=0,
idle_age=6, priority=2001,ipv6,reg6=0x3,metadata=0x7
actions=drop
cookie=0x0, duration=6.674s, table=22, n_packets=0, n_bytes=0,
idle_age=6, priority=1,ipv6,metadata=0x7
actions=load:0x1->NXM_NX_REG0[1],resubmit(,23)
cookie=0x0, duration=6.673s, table=22, n_packets=0, n_bytes=0,
idle_age=6, priority=1,ip,metadata=0x7
actions=load:0x1->NXM_NX_REG0[1],resubmit(,23)
cookie=0x0, duration=6.673s, table=22, n_packets=0, n_bytes=0,
idle_age=6, priority=0,metadata=0x7
actions=resubmit(,23)
cookie=0x0, duration=6.673s, table=23, n_packets=0, n_bytes=0,
idle_age=6, priority=0,metadata=0x7
actions=resubmit(,24)
cookie=0x0, duration=6.674s, table=24, n_packets=0, n_bytes=0,
idle_age=6, priority=100,ipv6,reg0=0x2/0x2,metadata=0x7
actions=ct(commit,zone=NXM_NX_REG5[0..15]),resubmit(,25)
cookie=0x0, duration=6.674s, table=24, n_packets=0, n_bytes=0,
idle_age=6, priority=100,ip,reg0=0x2/0x2,metadata=0x7
actions=ct(commit,zone=NXM_NX_REG5[0..15]),resubmit(,25)
cookie=0x0, duration=6.673s, table=24, n_packets=0, n_bytes=0,
idle_age=6, priority=100,ipv6,reg0=0x4/0x4,metadata=0x7
actions=ct(table=25,zone=NXM_NX_REG5[0..15],nat)
cookie=0x0, duration=6.670s, table=24, n_packets=0, n_bytes=0,
idle_age=6, priority=100,ip,reg0=0x4/0x4,metadata=0x7
actions=ct(table=25,zone=NXM_NX_REG5[0..15],nat)
cookie=0x0, duration=6.674s, table=24, n_packets=0, n_bytes=0,
idle_age=6, priority=0,metadata=0x7
actions=resubmit(,25)
cookie=0x0, duration=6.673s, table=25, n_packets=0, n_bytes=0,
idle_age=6, priority=50,arp,metadata=0x7,arp_tpa=192.168.1.11,
arp_op=1
actions=move:NXM_OF_ETH_SRC[]->NXM_OF_ETH_DST[],
mod_dl_src:fa:16:3e:b6:91:70,load:0x2->NXM_OF_ARP_OP[],
move:NXM_NX_ARP_SHA[]->NXM_NX_ARP_THA[],
load:0xfa163eb69170->NXM_NX_ARP_SHA[],
move:NXM_OF_ARP_SPA[]->NXM_OF_ARP_TPA[],
load:0xc0a8010b->NXM_OF_ARP_SPA[],
move:NXM_NX_REG6[]->NXM_NX_REG7[],load:0->NXM_NX_REG6[],
load:0->NXM_OF_IN_PORT[],resubmit(,32)
cookie=0x0, duration=6.670s, table=25, n_packets=0, n_bytes=0,
idle_age=6, priority=50,arp,metadata=0x7,arp_tpa=192.168.1.3,arp_op=1
actions=move:NXM_OF_ETH_SRC[]->NXM_OF_ETH_DST[],
mod_dl_src:fa:16:3e:d5:00:02,load:0x2->NXM_OF_ARP_OP[],
move:NXM_NX_ARP_SHA[]->NXM_NX_ARP_THA[],
load:0xfa163ed50002->NXM_NX_ARP_SHA[],
move:NXM_OF_ARP_SPA[]->NXM_OF_ARP_TPA[],
load:0xc0a80103->NXM_OF_ARP_SPA[],
move:NXM_NX_REG6[]->NXM_NX_REG7[],load:0->NXM_NX_REG6[],
load:0->NXM_OF_IN_PORT[],resubmit(,32)
cookie=0x0, duration=6.670s, table=25, n_packets=0, n_bytes=0,
idle_age=6, priority=50,arp,metadata=0x7,arp_tpa=192.168.1.2,
arp_op=1
actions=move:NXM_OF_ETH_SRC[]->NXM_OF_ETH_DST[],
mod_dl_src:fa:16:3e:82:8b:0e,load:0x2->NXM_OF_ARP_OP[],
move:NXM_NX_ARP_SHA[]->NXM_NX_ARP_THA[],
load:0xfa163e828b0e->NXM_NX_ARP_SHA[],
move:NXM_OF_ARP_SPA[]->NXM_OF_ARP_TPA[],
load:0xc0a80102->NXM_OF_ARP_SPA[],
move:NXM_NX_REG6[]->NXM_NX_REG7[],load:0->NXM_NX_REG6[],
load:0->NXM_OF_IN_PORT[],resubmit(,32)
cookie=0x0, duration=6.674s, table=25, n_packets=0, n_bytes=0,
idle_age=6, priority=0,metadata=0x7
actions=resubmit(,26)
cookie=0x0, duration=6.674s, table=26, n_packets=0, n_bytes=0,
idle_age=6, priority=100,metadata=0x7,
dl_dst=01:00:00:00:00:00/01:00:00:00:00:00
actions=load:0xffff->NXM_NX_REG7[],resubmit(,32)
cookie=0x0, duration=6.674s, table=26, n_packets=0, n_bytes=0,
idle_age=6, priority=50,metadata=0x7,dl_dst=fa:16:3e:d5:00:02
actions=load:0x2->NXM_NX_REG7[],resubmit(,32)
cookie=0x0, duration=6.673s, table=26, n_packets=0, n_bytes=0,
idle_age=6, priority=50,metadata=0x7,dl_dst=fa:16:3e:b6:91:70
actions=load:0x3->NXM_NX_REG7[],resubmit(,32)
cookie=0x0, duration=6.670s, table=26, n_packets=0, n_bytes=0,
idle_age=6, priority=50,metadata=0x7,dl_dst=fa:16:3e:82:8b:0e
actions=load:0x1->NXM_NX_REG7[],resubmit(,32)
cookie=0x0, duration=6.674s, table=32, n_packets=0, n_bytes=0,
idle_age=6, priority=100,reg7=0x3,metadata=0x7
actions=load:0x7->NXM_NX_TUN_ID[0..23],
set_field:0x3/0xffffffff->tun_metadata0,
move:NXM_NX_REG6[0..14]->NXM_NX_TUN_METADATA0[16..30],output:3
cookie=0x0, duration=6.673s, table=32, n_packets=0, n_bytes=0,
idle_age=6, priority=100,reg7=0x2,metadata=0x7
actions=load:0x7->NXM_NX_TUN_ID[0..23],
set_field:0x2/0xffffffff->tun_metadata0,
move:NXM_NX_REG6[0..14]->NXM_NX_TUN_METADATA0[16..30],output:3
cookie=0x0, duration=6.670s, table=32, n_packets=0, n_bytes=0,
idle_age=6, priority=100,reg7=0x1,metadata=0x7
actions=load:0x7->NXM_NX_TUN_ID[0..23],
set_field:0x1/0xffffffff->tun_metadata0,
move:NXM_NX_REG6[0..14]->NXM_NX_TUN_METADATA0[16..30],output:5
cookie=0x0, duration=6.674s, table=48, n_packets=0, n_bytes=0,
idle_age=6, priority=0,metadata=0x7
actions=resubmit(,49)
cookie=0x0, duration=6.674s, table=49, n_packets=0, n_bytes=0,
idle_age=6, priority=110,icmp6,metadata=0x7,icmp_type=135,icmp_code=0
actions=resubmit(,50)
cookie=0x0, duration=6.673s, table=49, n_packets=0, n_bytes=0,
idle_age=6, priority=110,icmp6,metadata=0x7,icmp_type=136,icmp_code=0
actions=resubmit(,50)
cookie=0x0, duration=6.674s, table=49, n_packets=0, n_bytes=0,
idle_age=6, priority=100,ipv6,metadata=0x7
actions=load:0x1->NXM_NX_REG0[0],resubmit(,50)
cookie=0x0, duration=6.673s, table=49, n_packets=0, n_bytes=0,
idle_age=6, priority=100,ip,metadata=0x7
actions=load:0x1->NXM_NX_REG0[0],resubmit(,50)
cookie=0x0, duration=6.674s, table=49, n_packets=0, n_bytes=0,
idle_age=6, priority=0,metadata=0x7
actions=resubmit(,50)
cookie=0x0, duration=6.674s, table=50, n_packets=0, n_bytes=0,
idle_age=6, priority=100,ip,reg0=0x1/0x1,metadata=0x7
actions=ct(table=51,zone=NXM_NX_REG5[0..15])
cookie=0x0, duration=6.673s, table=50, n_packets=0, n_bytes=0,
idle_age=6, priority=100,ipv6,reg0=0x1/0x1,metadata=0x7
actions=ct(table=51,zone=NXM_NX_REG5[0..15])
cookie=0x0, duration=6.673s, table=50, n_packets=0, n_bytes=0,
idle_age=6, priority=0,metadata=0x7
actions=resubmit(,51)
cookie=0x0, duration=6.670s, table=51, n_packets=0, n_bytes=0,
idle_age=6, priority=0,metadata=0x7
actions=resubmit(,52)
cookie=0x0, duration=6.674s, table=52, n_packets=0, n_bytes=0,
idle_age=6, priority=65535,ct_state=+inv+trk,metadata=0x7
actions=drop
cookie=0x0, duration=6.674s, table=52, n_packets=0, n_bytes=0,
idle_age=6, priority=65535,ct_state=-new+est-rel-inv+trk,metadata=0x7
actions=resubmit(,53)
cookie=0x0, duration=6.673s, table=52, n_packets=0, n_bytes=0,
idle_age=6, priority=65535,ct_state=-new-est+rel-inv+trk,metadata=0x7
actions=resubmit(,53)
cookie=0x0, duration=6.673s, table=52, n_packets=0, n_bytes=0,
idle_age=6, priority=65535,icmp6,metadata=0x7,icmp_type=136,
icmp_code=0
actions=resubmit(,53)
cookie=0x0, duration=6.673s, table=52, n_packets=0, n_bytes=0,
idle_age=6, priority=65535,icmp6,metadata=0x7,icmp_type=135,
icmp_code=0
actions=resubmit(,53)
cookie=0x0, duration=6.674s, table=52, n_packets=0, n_bytes=0,
idle_age=6, priority=2002,ct_state=+new+trk,ip,reg7=0x3,metadata=0x7,
nw_src=192.168.1.11
actions=load:0x1->NXM_NX_REG0[1],resubmit(,53)
cookie=0x0, duration=6.670s, table=52, n_packets=0, n_bytes=0,
idle_age=6, priority=2002,ct_state=+new+trk,ip,reg7=0x3,metadata=0x7,
nw_src=192.168.1.11
actions=load:0x1->NXM_NX_REG0[1],resubmit(,53)
cookie=0x0, duration=6.670s, table=52, n_packets=0, n_bytes=0,
idle_age=6, priority=2002,udp,reg7=0x3,metadata=0x7,
nw_src=192.168.1.0/24,tp_src=67,tp_dst=68
actions=load:0x1->NXM_NX_REG0[1],resubmit(,53)
cookie=0x0, duration=6.670s, table=52, n_packets=0, n_bytes=0,
idle_age=6, priority=2002,ct_state=+new+trk,ipv6,reg7=0x3,
metadata=0x7
actions=load:0x1->NXM_NX_REG0[1],resubmit(,53)
cookie=0x0, duration=6.673s, table=52, n_packets=0, n_bytes=0,
idle_age=6, priority=2001,ip,reg7=0x3,metadata=0x7
actions=drop
cookie=0x0, duration=6.673s, table=52, n_packets=0, n_bytes=0,
idle_age=6, priority=2001,ipv6,reg7=0x3,metadata=0x7
actions=drop
cookie=0x0, duration=6.674s, table=52, n_packets=0, n_bytes=0,
idle_age=6, priority=1,ip,metadata=0x7
actions=load:0x1->NXM_NX_REG0[1],resubmit(,53)
cookie=0x0, duration=6.674s, table=52, n_packets=0, n_bytes=0,
idle_age=6, priority=1,ipv6,metadata=0x7
actions=load:0x1->NXM_NX_REG0[1],resubmit(,53)
cookie=0x0, duration=6.674s, table=52, n_packets=0, n_bytes=0,
idle_age=6, priority=0,metadata=0x7
actions=resubmit(,53)
cookie=0x0, duration=6.674s, table=53, n_packets=0, n_bytes=0,
idle_age=6, priority=100,ipv6,reg0=0x4/0x4,metadata=0x7
actions=ct(table=54,zone=NXM_NX_REG5[0..15],nat)
cookie=0x0, duration=6.674s, table=53, n_packets=0, n_bytes=0,
idle_age=6, priority=100,ip,reg0=0x4/0x4,metadata=0x7
actions=ct(table=54,zone=NXM_NX_REG5[0..15],nat)
cookie=0x0, duration=6.673s, table=53, n_packets=0, n_bytes=0,
idle_age=6, priority=100,ipv6,reg0=0x2/0x2,metadata=0x7
actions=ct(commit,zone=NXM_NX_REG5[0..15]),resubmit(,54)
cookie=0x0, duration=6.673s, table=53, n_packets=0, n_bytes=0,
idle_age=6, priority=100,ip,reg0=0x2/0x2,metadata=0x7
actions=ct(commit,zone=NXM_NX_REG5[0..15]),resubmit(,54)
cookie=0x0, duration=6.674s, table=53, n_packets=0, n_bytes=0,
idle_age=6, priority=0,metadata=0x7
actions=resubmit(,54)
cookie=0x0, duration=6.674s, table=54, n_packets=0, n_bytes=0,
idle_age=6, priority=90,ip,reg7=0x3,metadata=0x7,
dl_dst=fa:16:3e:b6:91:70,nw_dst=255.255.255.255
actions=resubmit(,55)
cookie=0x0, duration=6.673s, table=54, n_packets=0, n_bytes=0,
idle_age=6, priority=90,ip,reg7=0x3,metadata=0x7,
dl_dst=fa:16:3e:b6:91:70,nw_dst=192.168.1.11
actions=resubmit(,55)
cookie=0x0, duration=6.673s, table=54, n_packets=0, n_bytes=0,
idle_age=6, priority=90,ip,reg7=0x3,metadata=0x7,
dl_dst=fa:16:3e:b6:91:70,nw_dst=224.0.0.0/4
actions=resubmit(,55)
cookie=0x0, duration=6.670s, table=54, n_packets=0, n_bytes=0,
idle_age=6, priority=80,ip,reg7=0x3,metadata=0x7,
dl_dst=fa:16:3e:b6:91:70
actions=drop
cookie=0x0, duration=6.670s, table=54, n_packets=0, n_bytes=0,
idle_age=6, priority=80,ipv6,reg7=0x3,metadata=0x7,
dl_dst=fa:16:3e:b6:91:70
actions=drop
cookie=0x0, duration=6.674s, table=54, n_packets=0, n_bytes=0,
idle_age=6, priority=0,metadata=0x7
actions=resubmit(,55)
cookie=0x0, duration=6.673s, table=55, n_packets=0, n_bytes=0,
idle_age=6, priority=100,metadata=0x7,
dl_dst=01:00:00:00:00:00/01:00:00:00:00:00
actions=resubmit(,64)
cookie=0x0, duration=6.674s, table=55, n_packets=0, n_bytes=0,
idle_age=6, priority=50,reg7=0x3,metadata=0x7,
dl_dst=fa:16:3e:b6:91:70
actions=resubmit(,64)
cookie=0x0, duration=6.673s, table=55, n_packets=0, n_bytes=0,
idle_age=6, priority=50,reg7=0x1,metadata=0x7
actions=resubmit(,64)
cookie=0x0, duration=6.670s, table=55, n_packets=0, n_bytes=0,
idle_age=6, priority=50,reg7=0x2,metadata=0x7
actions=resubmit(,64)
#. On compute nodes containing a port on the network, the OVN controller
also creates an additional flow.
.. code-block:: console
cookie=0x0, duration=13.358s, table=52, n_packets=0, n_bytes=0,
idle_age=13, priority=2002,ct_state=+new+trk,ipv6,reg7=0x3,
metadata=0x7,ipv6_src=::
actions=load:0x1->NXM_NX_REG0[1],resubmit(,53)
.. todo: Future commit
Attach the router to a second self-service network
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. todo: Add after NAT patches merge.
Attach the router to an external network
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

@ -0,0 +1,517 @@
.. _refarch-selfservice-networks:
Self-service networks
---------------------
A self-service (project) network includes only virtual components, thus
enabling projects to manage them without additional configuration of the
underlying physical network. The OVN mechanism driver supports Geneve
and VLAN network types with a preference toward Geneve. Projects can
choose to isolate self-service networks, connect two or more together
via routers, or connect them to provider networks via routers with
appropriate capabilities. Similar to provider networks, self-service
networks can use arbitrary names.
.. note::
Similar to provider networks, self-service VLAN networks map to a
unique bridge on each compute node that supports launching instances
on those networks. Self-service VLAN networks also require several
commands at the host and OVS levels. The following example assumes
use of Geneve self-service networks.
Create a self-service network
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Creating a self-service network involves several commands at the
Networking service level that yield a series of operations at the OVN
level to create the virtual network components. The following example
creates a Geneve self-service network and binds a subnet to it. The
subnet uses DHCP to distribute IP addresses to instances.
#. On the controller node, source the credentials for a regular
(non-privileged) project. The following example uses the ``demo``
project.
#. On the controller node, create a self-service network in the Networking
service.
.. code-block:: console
$ openstack network create selfservice
+-------------------------+--------------------------------------+
| Field | Value |
+-------------------------+--------------------------------------+
| admin_state_up | UP |
| availability_zone_hints | |
| availability_zones | |
| created_at | 2016-06-09T15:42:41 |
| description | |
| id | f49791f7-e653-4b43-99b1-0f5557c313e4 |
| ipv4_address_scope | None |
| ipv6_address_scope | None |
| mtu | 1442 |
| name | selfservice |
| port_security_enabled | True |
| project_id | 1ef26f483b9d44e8ac0c97388d6cb609 |
| router_external | Internal |
| shared | False |
| status | ACTIVE |
| subnets | |
| tags | [] |
| updated_at | 2016-06-09T15:42:41 |
+-------------------------+--------------------------------------+
OVN operations
^^^^^^^^^^^^^^
The OVN mechanism driver and OVN perform the following operations
during creation of a self-service network.
#. The mechanism driver translates the network into a logical switch in
the OVN northbound database.
.. code-block:: console
uuid : 0ab40684-7cf8-4d6c-ae8b-9d9143762d37
acls : []
external_ids : {"neutron:network_name"="selfservice"}
name : "neutron-d5aadceb-d8d6-41c8-9252-c5e0fe6c26a5"
ports : []
#. The OVN northbound service translates this object into new datapath
bindings and logical flows in the OVN southbound database.
* Datapath bindings
.. code-block:: console
_uuid : 0b214af6-8910-489c-926a-fd0ed16a8251
external_ids : {logical-switch="15e2c80b-1461-4003-9869-80416cd97de5"}
tunnel_key : 5
* Logical flows
.. code-block:: console
Datapath: 0b214af6-8910-489c-926a-fd0ed16a8251 Pipeline: ingress
table= 0( ls_in_port_sec_l2), priority= 100, match=(eth.src[40]),
action=(drop;)
table= 0( ls_in_port_sec_l2), priority= 100, match=(vlan.present),
action=(drop;)
table= 1( ls_in_port_sec_ip), priority= 0, match=(1),
action=(next;)
table= 2( ls_in_port_sec_nd), priority= 0, match=(1),
action=(next;)
table= 3( ls_in_pre_acl), priority= 0, match=(1),
action=(next;)
table= 4( ls_in_pre_lb), priority= 0, match=(1),
action=(next;)
table= 5( ls_in_pre_stateful), priority= 100, match=(reg0[0] == 1),
action=(ct_next;)
table= 5( ls_in_pre_stateful), priority= 0, match=(1),
action=(next;)
table= 6( ls_in_acl), priority= 0, match=(1),
action=(next;)
table= 7( ls_in_lb), priority= 0, match=(1),
action=(next;)
table= 8( ls_in_stateful), priority= 100, match=(reg0[2] == 1),
action=(ct_lb;)
table= 8( ls_in_stateful), priority= 100, match=(reg0[1] == 1),
action=(ct_commit; next;)
table= 8( ls_in_stateful), priority= 0, match=(1),
action=(next;)
table= 9( ls_in_arp_rsp), priority= 0, match=(1),
action=(next;)
table=10( ls_in_l2_lkup), priority= 100, match=(eth.mcast),
action=(outport = "_MC_flood"; output;)
Datapath: 0b214af6-8910-489c-926a-fd0ed16a8251 Pipeline: egress
table= 0( ls_out_pre_lb), priority= 0, match=(1),
action=(next;)
table= 1( ls_out_pre_acl), priority= 0, match=(1),
action=(next;)
table= 2(ls_out_pre_stateful), priority= 100, match=(reg0[0] == 1),
action=(ct_next;)
table= 2(ls_out_pre_stateful), priority= 0, match=(1),
action=(next;)
table= 3( ls_out_lb), priority= 0, match=(1),
action=(next;)
table= 4( ls_out_acl), priority= 0, match=(1),
action=(next;)
table= 5( ls_out_stateful), priority= 100, match=(reg0[1] == 1),
action=(ct_commit; next;)
table= 5( ls_out_stateful), priority= 100, match=(reg0[2] == 1),
action=(ct_lb;)
table= 5( ls_out_stateful), priority= 0, match=(1),
action=(next;)
table= 6( ls_out_port_sec_ip), priority= 0, match=(1),
action=(next;)
table= 7( ls_out_port_sec_l2), priority= 100, match=(eth.mcast),
action=(output;)
.. note::
These actions do not create flows on any nodes.
Create a subnet on the self-service network
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
A self-service network requires at least one subnet. In most cases,
the environment provides suitable values for IP address allocation for
instances, default gateway IP address, and metadata such as name
resolution.
#. On the controller node, create a subnet bound to the self-service network
``selfservice``.
.. code-block:: console
$ openstack subnet create --network selfservice --subnet-range 192.168.1.0/24 selfservice-v4
+-------------------+--------------------------------------+
| Field | Value |
+-------------------+--------------------------------------+
| allocation_pools | 192.168.1.2-192.168.1.254 |
| cidr | 192.168.1.0/24 |
| created_at | 2016-06-16 00:19:08+00:00 |
| description | |
| dns_nameservers | |
| enable_dhcp | True |
| gateway_ip | 192.168.1.1 |
| headers | |
| host_routes | |
| id | 8f027f25-0112-45b9-a1b9-2f8097c57219 |
| ip_version | 4 |
| ipv6_address_mode | None |
| ipv6_ra_mode | None |
| name | selfservice-v4 |
| network_id | 8ed4e43b-63ef-41ed-808b-b59f1120aec0 |
| project_id | b1ebf33664df402693f729090cfab861 |
| subnetpool_id | None |
| updated_at | 2016-06-16 00:19:08+00:00 |
+-------------------+--------------------------------------+
OVN operations
^^^^^^^^^^^^^^
.. todo: Update this part with the new agentless DHCP details
The OVN mechanism driver and OVN perform the following operations
during creation of a subnet on a self-service network.
#. If the subnet uses DHCP for IP address management, create logical ports
ports for each DHCP agent serving the subnet and bind them to the logical
switch. In this example, the subnet contains two DHCP agents.
.. code-block:: console
_uuid : 1ed7c28b-dc69-42b8-bed6-46477bb8b539
addresses : ["fa:16:3e:94:db:5e 192.168.1.2"]
enabled : true
external_ids : {"neutron:port_name"=""}
name : "0cfbbdca-ff58-4cf8-a7d3-77daaebe3056"
options : {}
parent_name : []
port_security : []
tag : []
type : ""
up : true
_uuid : ae10a5e0-db25-4108-b06a-d2d5c127d9c4
addresses : ["fa:16:3e:90:bd:f1 192.168.1.3"]
enabled : true
external_ids : {"neutron:port_name"=""}
name : "74930ace-d939-4bca-b577-fccba24c3fca"
options : {}
parent_name : []
port_security : []
tag : []
type : ""
up : true
_uuid : 0ab40684-7cf8-4d6c-ae8b-9d9143762d37
acls : []
external_ids : {"neutron:network_name"="selfservice"}
name : "neutron-d5aadceb-d8d6-41c8-9252-c5e0fe6c26a5"
ports : [1ed7c28b-dc69-42b8-bed6-46477bb8b539,
ae10a5e0-db25-4108-b06a-d2d5c127d9c4]
#. The OVN northbound service creates port bindings for these logical
ports and adds them to the appropriate multicast group.
* Port bindings
.. code-block:: console
_uuid : 3e463ca0-951c-46fd-b6cf-05392fa3aa1f
chassis : 6a9d0619-8818-41e6-abef-2f3d9a597c03
datapath : 0b214af6-8910-489c-926a-fd0ed16a8251
logical_port : "a203b410-97c1-4e4a-b0c3-558a10841c16"
mac : ["fa:16:3e:a1:dc:58 192.168.1.3"]
options : {}
parent_port : []
tag : []
tunnel_key : 2
type : ""
_uuid : fa7b294d-2a62-45ae-8de3-a41c002de6de
chassis : d63e8ae8-caf3-4a6b-9840-5c3a57febcac
datapath : 0b214af6-8910-489c-926a-fd0ed16a8251
logical_port : "39b23721-46f4-4747-af54-7e12f22b3397"
mac : ["fa:16:3e:1a:b4:23 192.168.1.2"]
options : {}
parent_port : []
tag : []
tunnel_key : 1
type : ""
* Multicast groups
.. code-block:: console
_uuid : c08d0102-c414-4a47-98d9-dd3fa9f9901c
datapath : 0b214af6-8910-489c-926a-fd0ed16a8251
name : _MC_flood
ports : [3e463ca0-951c-46fd-b6cf-05392fa3aa1f,
fa7b294d-2a62-45ae-8de3-a41c002de6de]
tunnel_key : 65535
#. The OVN northbound service translates the logical ports into logical flows
in the OVN southbound database.
.. code-block:: console
Datapath: 0b214af6-8910-489c-926a-fd0ed16a8251 Pipeline: ingress
table= 0( ls_in_port_sec_l2), priority= 50,
match=(inport == "39b23721-46f4-4747-af54-7e12f22b3397"),
action=(next;)
table= 0( ls_in_port_sec_l2), priority= 50,
match=(inport == "a203b410-97c1-4e4a-b0c3-558a10841c16"),
action=(next;)
table= 9( ls_in_arp_rsp), priority= 50,
match=(arp.tpa == 192.168.1.2 && arp.op == 1),
action=(eth.dst = eth.src; eth.src = fa:16:3e:1a:b4:23;
arp.op = 2; /* ARP reply */ arp.tha = arp.sha;
arp.sha = fa:16:3e:1a:b4:23; arp.tpa = arp.spa;
arp.spa = 192.168.1.2; outport = inport;
inport = ""; /* Allow sending out inport. */ output;)
table= 9( ls_in_arp_rsp), priority= 50,
match=(arp.tpa == 192.168.1.3 && arp.op == 1),
action=(eth.dst = eth.src; eth.src = fa:16:3e:a1:dc:58;
arp.op = 2; /* ARP reply */ arp.tha = arp.sha;
arp.sha = fa:16:3e:a1:dc:58; arp.tpa = arp.spa;
arp.spa = 192.168.1.3; outport = inport;
inport = ""; /* Allow sending out inport. */ output;)
table=10( ls_in_l2_lkup), priority= 50,
match=(eth.dst == fa:16:3e:a1:dc:58),
action=(outport = "a203b410-97c1-4e4a-b0c3-558a10841c16"; output;)
table=10( ls_in_l2_lkup), priority= 50,
match=(eth.dst == fa:16:3e:1a:b4:23),
action=(outport = "39b23721-46f4-4747-af54-7e12f22b3397"; output;)
Datapath: 0b214af6-8910-489c-926a-fd0ed16a8251 Pipeline: egress
table= 7( ls_out_port_sec_l2), priority= 50,
match=(outport == "39b23721-46f4-4747-af54-7e12f22b3397"),
action=(output;)
table= 7( ls_out_port_sec_l2), priority= 50,
match=(outport == "a203b410-97c1-4e4a-b0c3-558a10841c16"),
action=(output;)
#. For each compute node without a DHCP agent on the subnet:
* The OVN controller service translates these objects into flows on the
integration bridge ``br-int``.
.. code-block:: console
# ovs-ofctl dump-flows br-int
cookie=0x0, duration=9.054s, table=32, n_packets=0, n_bytes=0,
idle_age=9, priority=100,reg7=0xffff,metadata=0x5
actions=load:0x5->NXM_NX_TUN_ID[0..23],
set_field:0xffff/0xffffffff->tun_metadata0,
move:NXM_NX_REG6[0..14]->NXM_NX_TUN_METADATA0[16..30],
output:4,output:3
#. For each compute node with a DHCP agent on the subnet:
* Creation of a DHCP network namespace adds a virtual switch port that
connects the DHCP agent with the ``dnsmasq`` process to the integration
bridge.
.. code-block:: console
# ovs-ofctl show br-int
OFPT_FEATURES_REPLY (xid=0x2): dpid:000022024a1dc045
n_tables:254, n_buffers:256
capabilities: FLOW_STATS TABLE_STATS PORT_STATS QUEUE_STATS ARP_MATCH_IP
actions: output enqueue set_vlan_vid set_vlan_pcp strip_vlan mod_dl_src mod_dl_dst mod_nw_src mod_nw_dst mod_nw_tos mod_tp_src mod_tp_dst
9(tap39b23721-46): addr:00:00:00:00:b0:5d
config: PORT_DOWN
state: LINK_DOWN
speed: 0 Mbps now, 0 Mbps max
* The OVN controller service translates these objects into flows on the
integration bridge.
.. code-block:: console
cookie=0x0, duration=21.074s, table=0, n_packets=8, n_bytes=648,
idle_age=11, priority=100,in_port=9
actions=load:0x2->NXM_NX_REG5[],load:0x5->OXM_OF_METADATA[],
load:0x1->NXM_NX_REG6[],resubmit(,16)
cookie=0x0, duration=21.076s, table=16, n_packets=0, n_bytes=0,
idle_age=21, priority=100,metadata=0x5,
dl_src=01:00:00:00:00:00/01:00:00:00:00:00
actions=drop
cookie=0x0, duration=21.075s, table=16, n_packets=0, n_bytes=0,
idle_age=21, priority=100,metadata=0x5,vlan_tci=0x1000/0x1000
actions=drop
cookie=0x0, duration=21.076s, table=16, n_packets=0, n_bytes=0,
idle_age=21, priority=50,reg6=0x2,metadata=0x5
actions=resubmit(,17)
cookie=0x0, duration=21.075s, table=16, n_packets=8, n_bytes=648,
idle_age=11, priority=50,reg6=0x1,metadata=0x5
actions=resubmit(,17)
cookie=0x0, duration=21.075s, table=17, n_packets=8, n_bytes=648,
idle_age=11, priority=0,metadata=0x5
actions=resubmit(,18)
cookie=0x0, duration=21.076s, table=18, n_packets=8, n_bytes=648,
idle_age=11, priority=0,metadata=0x5
actions=resubmit(,19)
cookie=0x0, duration=21.076s, table=19, n_packets=8, n_bytes=648,
idle_age=11, priority=0,metadata=0x5
actions=resubmit(,20)
cookie=0x0, duration=21.075s, table=20, n_packets=8, n_bytes=648,
idle_age=11, priority=0,metadata=0x5
actions=resubmit(,21)
cookie=0x0, duration=5.398s, table=21, n_packets=0, n_bytes=0,
idle_age=5, priority=100,ipv6,reg0=0x1/0x1,metadata=0x5
actions=ct(table=22,zone=NXM_NX_REG5[0..15])
cookie=0x0, duration=5.398s, table=21, n_packets=0, n_bytes=0,
idle_age=5, priority=100,ip,reg0=0x1/0x1,metadata=0x5
actions=ct(table=22,zone=NXM_NX_REG5[0..15])
cookie=0x0, duration=5.398s, table=22, n_packets=6, n_bytes=508,
idle_age=2, priority=0,metadata=0x5
actions=resubmit(,23)
cookie=0x0, duration=5.398s, table=23, n_packets=6, n_bytes=508,
idle_age=2, priority=0,metadata=0x5
actions=resubmit(,24)
cookie=0x0, duration=5.398s, table=24, n_packets=0, n_bytes=0,
idle_age=5, priority=100,ipv6,reg0=0x4/0x4,metadata=0x5
actions=ct(table=25,zone=NXM_NX_REG5[0..15],nat)
cookie=0x0, duration=5.398s, table=24, n_packets=0, n_bytes=0,
idle_age=5, priority=100,ip,reg0=0x4/0x4,metadata=0x5
actions=ct(table=25,zone=NXM_NX_REG5[0..15],nat)
cookie=0x0, duration=5.398s, table=24, n_packets=0, n_bytes=0,
idle_age=5, priority=100,ipv6,reg0=0x2/0x2,metadata=0x5
actions=ct(commit,zone=NXM_NX_REG5[0..15]),resubmit(,25)
cookie=0x0, duration=5.398s, table=24, n_packets=0, n_bytes=0,
idle_age=5, priority=100,ip,reg0=0x2/0x2,metadata=0x5
actions=ct(commit,zone=NXM_NX_REG5[0..15]),resubmit(,25)
cookie=0x0, duration=5.399s, table=24, n_packets=6, n_bytes=508,
idle_age=2, priority=0,metadata=0x5 actions=resubmit(,25)
cookie=0x0, duration=5.398s, table=25, n_packets=0, n_bytes=0,
idle_age=5, priority=50,arp,metadata=0x5,
arp_tpa=192.168.1.2,arp_op=1
actions=move:NXM_OF_ETH_SRC[]->NXM_OF_ETH_DST[],
mod_dl_src:fa:16:3e:82:8b:0e,load:0x2->NXM_OF_ARP_OP[],
move:NXM_NX_ARP_SHA[]->NXM_NX_ARP_THA[],
load:0xfa163e828b0e->NXM_NX_ARP_SHA[],
move:NXM_OF_ARP_SPA[]->NXM_OF_ARP_TPA[],
load:0xc0a80102->NXM_OF_ARP_SPA[],
move:NXM_NX_REG6[]->NXM_NX_REG7[],load:0->NXM_NX_REG6[],
load:0->NXM_OF_IN_PORT[],resubmit(,32)
cookie=0x0, duration=5.378s, table=25, n_packets=0, n_bytes=0,
idle_age=5, priority=50,arp,metadata=0x5,arp_tpa=192.168.1.3,
arp_op=1
actions=move:NXM_OF_ETH_SRC[]->NXM_OF_ETH_DST[],
mod_dl_src:fa:16:3e:d5:00:02,load:0x2->NXM_OF_ARP_OP[],
move:NXM_NX_ARP_SHA[]->NXM_NX_ARP_THA[],
load:0xfa163ed50002->NXM_NX_ARP_SHA[],
move:NXM_OF_ARP_SPA[]->NXM_OF_ARP_TPA[],
load:0xc0a80103->NXM_OF_ARP_SPA[],
move:NXM_NX_REG6[]->NXM_NX_REG7[],load:0->NXM_NX_REG6[],
load:0->NXM_OF_IN_PORT[],resubmit(,32)
cookie=0x0, duration=5.399s, table=25, n_packets=6, n_bytes=508,
idle_age=2, priority=0,metadata=0x5
actions=resubmit(,26)
cookie=0x0, duration=5.399s, table=26, n_packets=6, n_bytes=508,
idle_age=2, priority=100,metadata=0x5,
dl_dst=01:00:00:00:00:00/01:00:00:00:00:00
actions=load:0xffff->NXM_NX_REG7[],resubmit(,32)
cookie=0x0, duration=5.398s, table=26, n_packets=0, n_bytes=0,
idle_age=5, priority=50,metadata=0x5,dl_dst=fa:16:3e:d5:00:02
actions=load:0x2->NXM_NX_REG7[],resubmit(,32)
cookie=0x0, duration=5.398s, table=26, n_packets=0, n_bytes=0,
idle_age=5, priority=50,metadata=0x5,dl_dst=fa:16:3e:82:8b:0e
actions=load:0x1->NXM_NX_REG7[],resubmit(,32)
cookie=0x0, duration=21.038s, table=32, n_packets=0, n_bytes=0,
idle_age=21, priority=100,reg7=0x2,metadata=0x5
actions=load:0x5->NXM_NX_TUN_ID[0..23],
set_field:0x2/0xffffffff->tun_metadata0,
move:NXM_NX_REG6[0..14]->NXM_NX_TUN_METADATA0[16..30],output:4
cookie=0x0, duration=21.038s, table=32, n_packets=8, n_bytes=648,
idle_age=11, priority=100,reg7=0xffff,metadata=0x5
actions=load:0x5->NXM_NX_TUN_ID[0..23],
set_field:0xffff/0xffffffff->tun_metadata0,
move:NXM_NX_REG6[0..14]->NXM_NX_TUN_METADATA0[16..30],
output:4,resubmit(,33)
cookie=0x0, duration=5.397s, table=33, n_packets=12, n_bytes=1016,
idle_age=2, priority=100,reg7=0xffff,metadata=0x5
actions=load:0x1->NXM_NX_REG7[],resubmit(,34),
load:0xffff->NXM_NX_REG7[]
cookie=0x0, duration=5.397s, table=33, n_packets=0, n_bytes=0,
idle_age=5, priority=100,reg7=0x1,metadata=0x5
actions=resubmit(,34)
cookie=0x0, duration=21.074s, table=34, n_packets=8, n_bytes=648,
idle_age=11, priority=100,reg6=0x1,reg7=0x1,metadata=0x5
actions=drop
cookie=0x0, duration=21.076s, table=48, n_packets=8, n_bytes=648,
idle_age=11, priority=0,metadata=0x5 actions=resubmit(,49)
cookie=0x0, duration=21.075s, table=49, n_packets=8, n_bytes=648,
idle_age=11, priority=0,metadata=0x5 actions=resubmit(,50)
cookie=0x0, duration=5.398s, table=50, n_packets=0, n_bytes=0,
idle_age=5, priority=100,ipv6,reg0=0x1/0x1,metadata=0x5
actions=ct(table=51,zone=NXM_NX_REG5[0..15])
cookie=0x0, duration=5.398s, table=50, n_packets=0, n_bytes=0,
idle_age=5, priority=100,ip,reg0=0x1/0x1,metadata=0x5
actions=ct(table=51,zone=NXM_NX_REG5[0..15])
cookie=0x0, duration=5.398s, table=50, n_packets=6, n_bytes=508,
idle_age=3, priority=0,metadata=0x5
actions=resubmit(,51)
cookie=0x0, duration=5.398s, table=51, n_packets=6, n_bytes=508,
idle_age=3, priority=0,metadata=0x5
actions=resubmit(,52)
cookie=0x0, duration=5.398s, table=52, n_packets=6, n_bytes=508,
idle_age=3, priority=0,metadata=0x5
actions=resubmit(,53)
cookie=0x0, duration=5.399s, table=53, n_packets=0, n_bytes=0,
idle_age=5, priority=100,ipv6,reg0=0x4/0x4,metadata=0x5
actions=ct(table=54,zone=NXM_NX_REG5[0..15],nat)
cookie=0x0, duration=5.398s, table=53, n_packets=0, n_bytes=0,
idle_age=5, priority=100,ip,reg0=0x4/0x4,metadata=0x5
actions=ct(table=54,zone=NXM_NX_REG5[0..15],nat)
cookie=0x0, duration=5.398s, table=53, n_packets=0, n_bytes=0,
idle_age=5, priority=100,ip,reg0=0x2/0x2,metadata=0x5
actions=ct(commit,zone=NXM_NX_REG5[0..15]),resubmit(,54)
cookie=0x0, duration=5.398s, table=53, n_packets=0, n_bytes=0,
idle_age=5, priority=100,ipv6,reg0=0x2/0x2,metadata=0x5
actions=ct(commit,zone=NXM_NX_REG5[0..15]),resubmit(,54)
cookie=0x0, duration=5.398s, table=53, n_packets=6, n_bytes=508,
idle_age=3, priority=0,metadata=0x5
actions=resubmit(,54)
cookie=0x0, duration=5.398s, table=54, n_packets=6, n_bytes=508,
idle_age=3, priority=0,metadata=0x5
actions=resubmit(,55)
cookie=0x0, duration=5.398s, table=55, n_packets=6, n_bytes=508,
idle_age=3, priority=100,metadata=0x5,
dl_dst=01:00:00:00:00:00/01:00:00:00:00:00
actions=resubmit(,64)
cookie=0x0, duration=5.398s, table=55, n_packets=0, n_bytes=0,
idle_age=5, priority=50,reg7=0x1,metadata=0x5
actions=resubmit(,64)
cookie=0x0, duration=5.398s, table=55, n_packets=0, n_bytes=0,
idle_age=5, priority=50,reg7=0x2,metadata=0x5
actions=resubmit(,64)
cookie=0x0, duration=5.397s, table=64, n_packets=6, n_bytes=508,
idle_age=3, priority=100,reg7=0x1,metadata=0x5
actions=output:9

@ -0,0 +1,182 @@
.. _ovn_routing:
=======
Routing
=======
North/South
-----------
The different configurations are detailed in the :doc:`/admin/ovn/refarch/refarch`
Non distributed FIP
~~~~~~~~~~~~~~~~~~~
North/South traffic flows through the active chassis for each router for SNAT
traffic, and also for FIPs.
.. image:: figures/ovn-north-south.png
:alt: L3 North South non-distributed FIP
:align: center
Distributed Floating IP
~~~~~~~~~~~~~~~~~~~~~~~
In the following diagram we can see how VMs with no floating IP (VM1, VM6)
still communicate through the gateway nodes using SNAT on the edge routers
R1 and R2, while VM3, VM4, and VM5, which have an assigned floating IP, send
their traffic directly through the local provider bridge/interface to the
external network.
.. image:: figures/ovn-north-south-distributed-fip.png
:alt: L3 North South distributed FIP
:align: center
L3HA support
~~~~~~~~~~~~
The OVN driver implements L3 high availability in a transparent way. You
don't need to enable any config flags. As soon as you have more than
one chassis capable of acting as an L3 gateway to the specific external
network attached to the router, OVN will schedule the router gateway port
to multiple chassis, making use of the ``gateway_chassis`` column in OVN's
``Logical_Router_Port`` table.
In order to have external connectivity, either:
* some gateway nodes have ``ovn-cms-options`` with the value
``enable-chassis-as-gw`` in the Open_vSwitch table's external_ids column, or
* if no gateway node has the external_ids column set with that value, then
all nodes are eligible to host gateway chassis.
Example of how to enable a chassis to host gateways:
.. code-block:: console
$ ovs-vsctl set open . external-ids:ovn-cms-options="enable-chassis-as-gw"
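The resulting scheduling can be inspected directly in the OVN databases, for
example with the generic ``list`` commands (shown here only as an
illustration; the actual rows depend on your deployment):

.. code-block:: console

   $ ovn-nbctl list Gateway_Chassis
   $ ovn-sbctl list Chassis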
At a low level, the functionality is implemented mostly by OpenFlow rules
with bundle active_passive outputs. The ARP responder and router
enablement/disablement are handled by ovn-controller. Gratuitous ARPs for FIPs
and router external addresses are periodically sent by ovn-controller itself.
BFD monitoring
^^^^^^^^^^^^^^
OVN monitors the availability of the chassis via the BFD protocol, which is
encapsulated on top of the Geneve tunnels established from chassis to chassis.
.. image:: figures/ovn-l3ha-bfd.png
:alt: L3HA BFD monitoring
:align: center
Each chassis that is marked as a gateway chassis will monitor all the other
gateway chassis in the deployment as well as compute node chassis, to let the
gateways enable/disable routing of packets and ARP responses / announcements.
Each compute node chassis will monitor each gateway chassis via BFD to
automatically steer external traffic (snat/dnat) through the active chassis
for a given router.
.. image:: figures/ovn-l3ha-bfd-3gw.png
:alt: L3HA BFD monitoring (3 gateway nodes)
:align: center
The gateway nodes monitor each other in star topology. Compute nodes don't
monitor each other because that's not necessary.
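The BFD sessions run over the tunnel interfaces managed by Open vSwitch, so
their state can also be checked on any chassis with ``ovs-appctl`` (shown as
an illustration; the output lists each tunnel port and its BFD state):

.. code-block:: console

   # ovs-appctl bfd/show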
Failover (detected by BFD)
~~~~~~~~~~~~~~~~~~~~~~~~~~
Look at the following example:
.. image:: figures/ovn-l3ha-bfd-failover.png
:alt: L3HA BFD monitoring failover
:align: center
The compute nodes' BFD monitoring of the gateway nodes will detect that
the tunnel endpoint going to gateway node 1 is down, so traffic that
needs to get into the external network through the router will be directed
to the lower priority chassis for R1. R2 stays the same because gateway node
2 was already the highest priority chassis for R2.
Gateway node 2 will detect that the tunnel endpoint to gateway node 1 is down,
so it will become responsible for the external leg of R1, and its
ovn-controller will populate flows for the external ARP responder, traffic
forwarding (N/S) and periodic gratuitous ARPs.
Gateway node 2 will also bind the external port of the router (represented
as a chassis-redirect port in the Southbound database).
If gateway node 1 is still alive, it will detect the failure on interface 2
because it no longer sees any other nodes.
There is no mechanism yet to detect external network failure, so as a good
practice to detect network failure we recommend that all interfaces are
handled over a single bonded interface with VLANs.
Supported failure modes are:
- gateway chassis becomes disconnected from network (tunneling interface)
- ovs-vswitchd is stopped (it's responsible for BFD signaling)
- ovn-controller is stopped, as ovn-controller will remove itself as a
registered chassis.
.. note::
As with the VRRP or CARP protocols, this detection mechanism only works
for link failures, not for routing failures.
Failback
~~~~~~~~
L3HA behaviour is preemptive in OVN (at least for the time being): the
routers are balanced back to the original chassis once it recovers, avoiding
any of the gateway nodes becoming a bottleneck.
.. image:: figures/ovn-l3ha-bfd.png
:alt: L3HA BFD monitoring (Fail back)
:align: center
East/West
---------
East/West traffic with the OVN driver is completely distributed. That means
that routing happens internally on the compute nodes, without the need
to go through the gateway nodes.
Traffic going through a virtual router, different subnets
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Traffic going through a virtual router from one virtual network/subnet
to another flows directly from compute node to compute node, encapsulated as
usual, while all the routing operations, such as decreasing the TTL or
switching MAC addresses, are handled in OpenFlow at the source host of the
packet.
.. image:: figures/ovn-east-west-3.png
:alt: East/West traffic across subnets
:align: center
Traffic across the same subnet
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Traffic across the same subnet happens as described in the following diagram.
Although this kind of communication doesn't make use of routing at all (just
encapsulation), it's included here for completeness.
.. image:: figures/ovn-east-west-2.png
:alt: East/West traffic same subnet
:align: center
Traffic goes directly from instance to instance through br-int when both
instances live on the same host (VM1 and VM2), or via encapsulation when they
live on different hosts (VM3 and VM4).

@ -0,0 +1,45 @@
.. _ovn_troubleshooting:
===============
Troubleshooting
===============
The following section describes common problems that you might
encounter during or after the installation of the OVN ML2 driver with
Devstack, and possible solutions to these problems.
Launching VMs failure
-----------------------
Disable AppArmor
~~~~~~~~~~~~~~~~
On Ubuntu you might encounter libvirt permission errors (visible in the
nova-compute log) when trying to create OVS ports after launching a VM.
Disabling AppArmor might help with this problem; check out
https://help.ubuntu.com/community/AppArmor for instructions on how to
disable it.
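As a quick check before disabling anything, you can see whether AppArmor is
confining libvirt at all (``aa-status`` comes from the ``apparmor-utils``
package):

.. code-block:: console

   $ sudo aa-status | grep libvirt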
Multi-Node setup not working
-----------------------------
Geneve kernel module not supported
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
By default OVN creates tunnels between compute nodes using the Geneve
protocol. Older kernels (< 3.18) don't support the Geneve module, so
tunneling can't work. You can check it with the command
``lsmod | grep openvswitch`` (``geneve`` should show up in the result list),
as shown in the example below.
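For example (the exact output depends on the kernel and the modules loaded):

.. code-block:: console

   $ lsmod | grep openvswitch
   $ lsmod | grep geneve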
For more information about which upstream Kernel version is required for
support of each tunnel type, see the answer to "Why do tunnels not work when
using a kernel module other than the one packaged with Open vSwitch?" in the
`OVS FAQ <http://docs.openvswitch.org/en/latest/faq/>`__.
MTU configuration
~~~~~~~~~~~~~~~~~
This problem is not unique to OVN but is amplified due to the possibly larger
size of the Geneve header compared to other common tunneling protocols
(VXLAN). If you are using VMs as compute nodes, make sure that you either
lower the MTU size on the virtual interface or enable fragmentation on it.
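For example, to lower the MTU on the virtual interface (the interface name
``eth1`` and the value ``1400`` below are just placeholders; pick the ones
that match your environment):

.. code-block:: console

   # ip link set dev eth1 mtu 1400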

@ -0,0 +1,10 @@
.. _ovn_tutorial:
==========================
OpenStack and OVN Tutorial
==========================
The OVN project documentation includes an in-depth tutorial on using OVN with
OpenStack.
`OpenStack and OVN Tutorial <https://github.com/ovn-org/ovn/blob/master/Documentation/tutorials/ovn-openstack.rst>`_

@ -269,6 +269,7 @@ _config_generator_config_files = [
    'ml2_conf.ini',
    'neutron.conf',
    'openvswitch_agent.ini',
    'ovn.ini',
    'sriov_agent.ini',
]

@ -15,6 +15,7 @@ Sample Configuration Files
   samples/macvtap-agent.rst
   samples/openvswitch-agent.rst
   samples/sriov-agent.rst
   samples/ovn.rst

.. toctree::
   :maxdepth: 1

@ -0,0 +1,10 @@
.. _samples_ovn:
==============
Sample ovn.ini
==============
This sample configuration can also be viewed in `the raw format
<../../_static/config-samples/ovn.conf.sample>`_.
.. literalinclude:: ../../_static/config-samples/ovn.conf.sample

@ -61,6 +61,7 @@ the developer guide includes information about Neutron testing infrastructure.
   effective_neutron
   development_environment
   ovn_vagrant/index
   contribute
   neutron_api
   client_command_extensions

@ -68,3 +68,4 @@ Neutron Internals
   sriov_nic_agent
   tag
   upgrade
   ovn/index

@ -0,0 +1,186 @@
.. _acl_optimizations:
========================================
ACL Handling optimizations in ovn driver
========================================
This document presents the current problem with ACLs and the design changes
proposed to core OVN as well as the necessary modifications to be made to
ovn driver to improve their usage.
Problem description
===================
There are basically two problems being addressed in this spec:
1. While in Neutron, a ``Security Group Rule`` is tied to a
``Security Group``, in OVN ``ACLs`` are created per port. Therefore,
we'll typically have *many* more ACLs than Security Group Rules, resulting
in a performance hit as the number of ports grows.
2. An ACL in OVN is applied to a ``Logical Switch``. As a result,
``ovn driver`` has to figure out which Logical Switches to apply the
generated ACLs to, for each Security Rule.
Let's highlight both problems with an example:
- Neutron Networks: NA, NB, NC
- Neutron Security Group: SG1
- Number of Neutron Security Group Rules in SG1: 10
- Neutron Ports in NA: 100
- Neutron Ports in NB: 100
- Neutron Ports in NC: 100
- All ports belong to SG1
When we implement the above scenario in OVN, this is what we'll get:
- OVN Logical Switches: NA, NB, NC
- Number of ACL rows in Northbound DB ACL table: 3000 (10 rules * 100 ports *
3 networks)
- Number of elements in acl column on each Logical_Switch row: 1000 (10 rules
* 100 ports).
And this is what, for example, the ACL match fields for the default Neutron
Security Group would look like::
outport == <port1_uuid> && ip4 && ip4.src == $as_ip4_<sg1_uuid>
outport == <port2_uuid> && ip4 && ip4.src == $as_ip4_<sg1_uuid>
outport == <port3_uuid> && ip4 && ip4.src == $as_ip4_<sg1_uuid>
...
outport == <port300_uuid> && ip4 && ip4.src == $as_ip4_<sg1_uuid>
As you can see, all of them look the same except for the outport field which
is clearly redundant and makes the NB database grow a lot at scale.
Also, ``ovn driver`` had to figure out for each rule in SG1 which Logical
Switches it had to apply the ACLs on (NA, NB and NC). This can be really costly
when the number of networks and ports grows.
Proposed optimization
=====================
In the OpenStack context, we'll be facing this scenario most of the time:
the majority of the ACLs will look the same except for the
outport/inport fields in the match column. It would make sense to be able to
substitute all those ACLs by a single one which references all the ports
affected by that SG rule::
outport == @port_group1 && ip4 && ip4.src == $port_group1_ip4
Implementation Details
======================
Core OVN
--------
There's a series of patches in Core OVN that will enable us to achieve this
optimization:
https://github.com/openvswitch/ovs/commit/3d2848bafa93a2b483a4504c5de801454671dccf
https://github.com/openvswitch/ovs/commit/1beb60afd25a64f1779903b22b37ed3d9956d47c
https://github.com/openvswitch/ovs/commit/689829d53612a573f810271a01561f7b0948c8c8
In summary, these patches are:
- Adding a new entity called Port_Group which will hold a list of weak
references to the Logical Switch ports that belong to it.
- Automatically creating/updating two Address Sets (_ip4 and _ip6) in
Southbound database every time a new port is added to the group.
- Support adding a list of ACLs to a Port Group. As the SG rules may
span across different Logical Switches, we used to insert the ACLs in
all the Logical Switches that contain ports belonging to a SG. Figuring this
out is expensive and this new feature is a huge gain in terms of
performance when creating/deleting ports.
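For illustration only, the same capability can be exercised manually with
``ovn-nbctl``; the port group and port names below are made up and the exact
syntax may vary between OVN versions:

.. code-block:: console

   $ ovn-nbctl pg-add pg_sg1 lsp1 lsp2
   $ ovn-nbctl acl-add pg_sg1 to-lport 1002 \
         'outport == @pg_sg1 && ip4 && ip4.src == $pg_sg1_ip4' allow-related
   $ ovn-nbctl list Port_Group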
ovn driver
----------
In the OpenStack integration driver, the following changes are required to
accomplish this optimization:
- When a Neutron Security Group is created, create the equivalent Port Group
in OVN (pg-<security_group_id>), instead of creating a pair of Address Sets
for IPv4 and IPv6. This Port Group will reference the Neutron SG id in its
``external_ids`` column.
- When a Neutron Port is created, the equivalent Logical Port in OVN will be
added to those Port Groups associated to the Neutron Security Groups this
port belongs to.
- When a Neutron Port is deleted, we'll delete the associated Logical Port in
OVN. Since the schema includes a weak reference to the port, when the LSP
gets deleted, it will also be automatically deleted from any Port Group
entry where it was previously present.
- Instead of handling SG rules per port, we now need to handle them per SG
referencing the associated Port Group in the outport/inport fields. This
will be the biggest gain in terms of processing since we don't need to
iterate through all the ports anymore. For example:
.. code-block:: python
-def acl_direction(r, port):
+def acl_direction(r):
     if r['direction'] == 'ingress':
         portdir = 'outport'
     else:
         portdir = 'inport'
-    return '%s == "%s"' % (portdir, port['id'])
+    return '%s == "@%s"' % (portdir, utils.ovn_name(r['security_group_id']))
- Every time a SG rule is created, instead of figuring out the ports affected
by its SG and inserting an ACL row which will be referenced by different
Logical Switches, we will just reference it from the associated Port Group.
- For Neutron remote security groups, we just need to reference the
automatically created Address_Set for that Port Group.
As a bonus, we are tackling the race conditions that could happen in
Address_Sets right now when we're deleting and creating a port at the same
time. This is thanks to the fact that the Address_Sets in the SB table are
generated automatically by ovn-northd from the Port_Group contents and
Port Group is referencing actual Logical Switch Ports. More info at:
https://bugs.launchpad.net/networking-ovn/+bug/1611852
Backwards compatibility considerations
--------------------------------------
- If the schema doesn't include the ``Port_Group`` table, keep the old
behavior (Address Sets) for backwards compatibility.
- If the schema supports Port Groups, then a migration task will be performed
from an OvnWorker. This way we'll ensure that it'll happen only once across
the cloud thanks to the OVSDB lock. This will be done right at the beginning of
the ovn_db_sync process to make sure that when neutron-server starts,
everything is in place to work with Port Groups. This migration process will
perform the following steps:
* Create the default drop Port Group and add all ports with port
security enabled to it.
* Create a Port Group for every existing Neutron Security Group and
add all its Security Group Rules as ACLs to that Port Group.
* Delete all existing Address Sets in NorthBound database which correspond to
a Neutron Security Group.
* Delete all the ACLs in every Logical Switch (Neutron network).
We should eventually remove the backwards compatibility and migration path. At
that point we should require OVS >= 2.10 from neutron ovn driver.
Special cases
-------------
Ports with no security groups
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
When a port doesn't belong to any Security Group and port security is enabled,
we, by default, drop all the traffic to/from that port. In order to implement
this through Port Groups, we'll create a special Port Group with a fixed name
(``neutron_pg_drop``) which holds the ACLs to drop all the traffic.
This PG will be created automatically when we first need it, avoiding the need
to create it beforehand or during deployment.
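Once created, this special Port Group and its drop ACLs can be inspected with
the generic ``list`` command (illustrative only):

.. code-block:: console

   $ ovn-nbctl list Port_Group neutron_pg_drop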

@ -0,0 +1,263 @@
.. _data_model:
===========================================
Mapping between Neutron and OVN data models
===========================================
The primary job of the Neutron OVN ML2 driver is to translate requests for
resources into OVN's data model. Resources are created in OVN by updating the
appropriate tables in the OVN northbound database (an ovsdb database). This
document looks at the mappings between the data that exists in Neutron and what
the resulting entries in the OVN northbound DB would look like.
Network
-------
::
Neutron Network:
id
name
subnets
admin_state_up
status
tenant_id
Once a network is created, we should create an entry in the Logical Switch
table.
::
OVN northbound DB Logical Switch:
external_ids: {
'neutron:network_name': network.name
}
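For illustration, a roughly equivalent Logical Switch could be created by hand
with ``ovn-nbctl`` (the driver does this through the northbound API;
``<network_id>`` is a placeholder):

.. code-block:: console

   $ ovn-nbctl ls-add neutron-<network_id>
   $ ovn-nbctl list Logical_Switch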
Subnet
------
::
Neutron Subnet:
id
name
ip_version
network_id
cidr
gateway_ip
allocation_pools
dns_nameservers
host_routes
tenant_id
enable_dhcp
ipv6_ra_mode
ipv6_address_mode
Once a subnet is created, we should create an entry in the DHCP Options table
with the DHCPv4 or DHCPv6 options.
::
OVN northbound DB DHCP_Options:
cidr
options
external_ids: {
'subnet_id': subnet.id
}
Port
----
::
Neutron Port:
id
name
network_id
admin_state_up
mac_address
fixed_ips
device_id
device_owner
tenant_id
status
When a port is created, we should create an entry in the Logical Switch Ports
table in the OVN northbound DB.
::
OVN Northbound DB Logical Switch Port:
switch: reference to OVN Logical Switch
router_port: (empty)
name: port.id
up: (read-only)
macs: [port.mac_address]
port_security:
external_ids: {'neutron:port_name': port.name}
If the port has extra DHCP options defined, we should create an entry
in the DHCP Options table in the OVN northbound DB.
::
OVN northbound DB DHCP_Options:
cidr
options
external_ids: {
'subnet_id': subnet.id,
'port_id': port.id
}
Router
------
::
Neutron Router:
id
name
admin_state_up
status
tenant_id
external_gw_info:
network_id
external_fixed_ips: list of dicts
ip_address
subnet_id
::
OVN Northbound DB Logical Router:
ip:
default_gw:
external_ids:
Router Port
-----------
::
OVN Northbound DB Logical Router Port:
router: (reference to Logical Router)
network: (reference to network this port is connected to)
mac:
external_ids:
Security Groups
---------------
::
Neutron Port:
id
security_group: id
network_id
Neutron Security Group
id
name
tenant_id
security_group_rules
Neutron Security Group Rule
id
tenant_id
security_group_id
direction
remote_group_id
ethertype
protocol
port_range_min
port_range_max
remote_ip_prefix
::
OVN Northbound DB ACL Rule:
lswitch: (reference to Logical Switch - port.network_id)
priority: (0..65535)
match: boolean expressions according to security rule
Translation map (sg_rule ==> match expression)
-----------------------------------------------
sg_rule.direction="Ingress" => "inport=port.id"
sg_rule.direction="Egress" => "outport=port.id"
sg_rule.ethertype => "eth.type"
sg_rule.protocol => "ip.proto"
sg_rule.port_range_min/port_range_max =>
"port_range_min &lt;= tcp.src &lt;= port_range_max"
"port_range_min &lt;= udp.src &lt;= port_range_max"
sg_rule.remote_ip_prefix => "ip4.src/mask, ip4.dst/mask, ipv6.src/mask, ipv6.dst/mask"
(all match options for ACL can be found here:
http://openvswitch.org/support/dist-docs/ovn-nb.5.html)
action: "allow-related"
log: true/false
external_ids: {'neutron:port_id': port.id}
{'neutron:security_rule_id': security_rule.id}
Security groups map three Neutron objects to one OVN-NB object, which
enables us to do the mapping in various ways, depending on OVN capabilities.
The current implementation will use the first option in this list for
simplicity, but all options are kept here for future reference.
1) For every <neutron port, security rule> pair, define an ACL entry::
Leads to many ACL entries.
acl.match = sg_rule converted
example: ((inport==port.id) && (ip.proto == "tcp") &&
(1024 <= tcp.src <= 4095) && (ip.src==192.168.0.1/16))
external_ids: {'neutron:port_id': port.id}
{'neutron:security_rule_id': security_rule.id}
2) For every <neutron port, security group> pair, define an ACL entry::
Reduce the number of ACL entries.
Means we have to manage the match field in case specific rule changes
example: (((inport==port.id) && (ip.proto == "tcp") &&
(1024 <= tcp.src <= 4095) && (ip.src==192.168.0.1/16)) ||
((outport==port.id) && (ip.proto == "udp") && (1024 <= tcp.src <= 4095)) ||
((inport==port.id) && (ip.proto == 6) ) ||
((inport==port.id) && (eth.type == 0x86dd)))
(This example is a security group with four security rules)
external_ids: {'neutron:port_id': port.id}
{'neutron:security_group_id': security_group.id}
3) For every <lswitch, security group> pair, define an ACL entry::
Reduce even more the number of ACL entries.
Manage complexity increase
example: (((inport==port.id) && (ip.proto == "tcp") && (1024 <= tcp.src <= 4095)
&& (ip.src==192.168.0.1/16)) ||
((outport==port.id) && (ip.proto == "udp") && (1024 <= tcp.src <= 4095)) ||
((inport==port.id) && (ip.proto == 6) ) ||
((inport==port.id) && (eth.type == 0x86dd))) ||
(((inport==port2.id) && (ip.proto == "tcp") && (1024 <= tcp.src <= 4095)
&& (ip.src==192.168.0.1/16)) ||
((outport==port2.id) && (ip.proto == "udp") && (1024 <= tcp.src <= 4095)) ||
((inport==port2.id) && (ip.proto == 6) ) ||
((inport==port2.id) && (eth.type == 0x86dd)))
external_ids: {'neutron:security_group': security_group.id}
Which option to pick depends on OVN match field length capabilities, and the
trade-off between better performance due to fewer ACL entries and the
complexity required to manage them.
If the default behaviour is not "drop" for unmatched entries, a rule with the
lowest priority must be added to drop all traffic ("match==1").

Spoofing protection rules are added by OVN internally and we need to
ignore those automatically added rules in Neutron.

@ -0,0 +1,442 @@
.. _database_consistency:
================================
Neutron/OVN Database consistency
================================
This document presents the problem and proposes a solution for the data
consistency issue between the Neutron and OVN databases. Although the
focus of this document is OVN, this problem is common enough to be present
in other ML2 drivers (e.g. OpenDaylight, Big Switch, etc.). Some of them
already have a mechanism in place for dealing with it.
Problem description
===================
In a common Neutron deployment model, there can be multiple Neutron
API workers processing requests. For each request, the worker will update
the Neutron database and then invoke the ML2 driver to translate the
information to that specific SDN data model.
There are at least two situations that could lead to some inconsistency
between the Neutron and the SDN databases, for example:
.. _problem_1:
Problem 1: Neutron API workers race condition
---------------------------------------------
.. code-block:: python
# In Neutron:
with neutron_db_transaction:
    update_neutron_db()
    ml2_driver.update_port_precommit()
ml2_driver.update_port_postcommit()

# In the ML2 driver:
def update_port_postcommit():
    port = neutron_db.get_port()
    update_port_in_ovn(port)
Imagine the case where a port is being updated twice and each request
is being handled by a different API worker. The method responsible for
updating the resource in OVN (``update_port_postcommit``) is not
atomic and is invoked outside of the Neutron database transaction. This could
lead to a problem where the order in which the updates are committed to
the Neutron database is different from the order in which they are committed
to the OVN database, resulting in an inconsistency.
This problem has been reported at `bug #1605089
<https://bugs.launchpad.net/networking-ovn/+bug/1605089>`_.
.. _problem_2:
Problem 2: Backend failures
---------------------------
Another situation is when the changes are already committed in Neutron
but an exception is raised upon trying to update the OVN database (e.g.
lost connectivity to the ``ovsdb-server``). We currently don't have a
good way of handling this problem; obviously it would be possible to try
to immediately roll back the changes in the Neutron database and raise an
exception, but that rollback itself is an operation that could also fail.
Plus, rollbacks are not very straightforward when it comes to updates
or deletes. In a case where a VM is being torn down and OVN fails to
delete a port, re-creating that port in Neutron doesn't necessarily fix the
problem. The decommissioning of a VM involves many other things; in fact, we
could make things even worse by leaving some dirty data around. I believe
this is a problem that would be better dealt with by other methods.
Proposed change
===============
In order to fix the problems presented at the `Problem description`_
section this document proposes a solution based on the Neutron's
``revision_number`` attribute. In summary, for every resource in Neutron
there's an attribute called ``revision_number`` which gets incremented
on each update made on that resource. For example::
$ openstack port create --network nettest porttest
...
| revision_number | 2 |
...
$ openstack port set porttest --mac-address 11:22:33:44:55:66
$ mysql -e "use neutron; select standard_attr_id from ports where id=\"91c08021-ded3-4c5a-8d57-5b5c389f8e39\";"
+------------------+
| standard_attr_id |
+------------------+
| 1427 |
+------------------+
$ mysql -e "use neutron; SELECT revision_number FROM standardattributes WHERE id=1427;"
+-----------------+
| revision_number |
+-----------------+
| 3 |
+-----------------+
This document proposes a solution that will use the ``revision_number``
attribute for three things:
#. Perform a compare-and-swap operation based on the resource version
#. Guarantee the order of the updates (`Problem 1 <problem_1_>`_)
#. Detect when resources in Neutron and OVN are out-of-sync
But before any of the points above can be done, we need to change the
ovn driver code to:
#1 - Store the revision_number that corresponds to a change in OVNDB
----------------------------------------------------------------------
To be able to compare the version of the resource in Neutron against
the version in OVN we first need to know which version the OVN resource
is present at.
Fortunately, each table in the OVNDB contains a special column called
``external_ids`` which external systems (like Neutron)
can use to store information about its own resources that corresponds
to the entries in OVNDB.
So, every time a resource is created or updated in OVNDB by
ovn driver, the Neutron ``revision_number`` corresponding to that change
will be stored in the ``external_ids`` column of that resource. That
will allow ovn driver to look at both databases and detect whether
the version in OVN is up-to-date with Neutron or not.
#2 - Ensure correctness when updating OVN
-----------------------------------------
As stated in `Problem 1 <problem_1_>`_, simultaneous updates to a single
resource will race and, with the current code, the order in which these
updates are applied is not guaranteed to be the correct order. That
means that, if two or more updates arrive, we can't prevent an older
version of an update from being applied after a newer one.
This document proposes creating a special ``OVSDB command`` that runs
as part of the same transaction that is updating a resource in OVNDB to
prevent changes with a lower ``revision_number`` to be applied in case
the resource in OVN is at a higher ``revision_number`` already.
This new OVSDB command needs to basically do two things:
1. Add a verify operation to the ``external_ids`` column in OVNDB so
that if another client modifies that column mid-operation the transaction
will be restarted.
A better explanation of what "verify" does can be found in the docstring
of the `Transaction class`_ in the OVS code itself; I quote:
Because OVSDB handles multiple clients, it can happen that between
the time that OVSDB client A reads a column and writes a new value,
OVSDB client B has written that column. Client A's write should not
ordinarily overwrite client B's, especially if the column in question
is a "map" column that contains several more or less independent data
items. If client A adds a "verify" operation before it writes the
column, then the transaction fails in case client B modifies it first.
Client A will then see the new value of the column and compose a new
transaction based on the new contents written by client B.
2. Compare the ``revision_number`` from the update against what is
presently stored in OVNDB. If the version in OVNDB is already higher
than the version in the update, abort the transaction.
So basically this new command is responsible for guarding the OVN resource
by not allowing old changes to be applied on top of new ones. Here's a
scenario where two concurrent updates comes in the wrong order and how
the solution above will deal with it:
Neutron worker 1 (NW-1): Updates a port with address A (revision_number: 2)
Neutron worker 2 (NW-2): Updates a port with address B (revision_number: 3)
TXN 1: NW-2 transaction is committed first and the OVN resource now has RN 3
TXN 2: NW-1 transaction detects the change in the external_ids column and
is restarted
TXN 2: NW-1 the new command now sees that the OVN resource is at RN 3,
which is higher than the update version (RN 2) and aborts the transaction.
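A minimal sketch of what such a command could look like, assuming an
ovsdbapp-style command class; the names ``CheckRevisionNumberCommand``,
``REVISION_KEY`` and ``RevisionConflict`` are hypothetical:

.. code-block:: python

   from ovsdbapp.backend.ovs_idl import idlutils

   REVISION_KEY = 'neutron:revision_number'  # hypothetical external_ids key


   class RevisionConflict(RuntimeError):
       """Raised to abort the OVSDB transaction on an outdated revision."""


   class CheckRevisionNumberCommand(object):
       """Abort the transaction if OVN already holds a newer revision."""

       def __init__(self, api, port_id, neutron_revision):
           self.api = api
           self.port_id = port_id
           self.neutron_revision = neutron_revision

       def run_idl(self, txn):
           lsp = idlutils.row_by_value(self.api.idl, 'Logical_Switch_Port',
                                       'name', self.port_id)
           ovn_revision = int(lsp.external_ids.get(REVISION_KEY, -1))
           if ovn_revision > self.neutron_revision:
               # The OVN resource is newer than this update: abort.
               raise RevisionConflict(self.port_id)
           # verify() makes the transaction restart if another client
           # modified external_ids in the meantime.
           lsp.verify('external_ids')
           lsp.setkey('external_ids', REVISION_KEY,
                      str(self.neutron_revision))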
There's a bit more needed for the above to work with the current ovn driver
code; basically, we need to tidy up the code to do two more things.
1. Consolidate changes to a resource in a single transaction.
This is important regardless of this spec, having all changes to a
resource done in a single transaction minimizes the risk of having
half-changes written to the database in case of an eventual problem. This
`should be done already <https://review.openstack.org/#/c/515673>`_
but it's important to have it here in case we find more examples like
that as we code.
2. When doing partial updates, use the OVNDB as the source of comparison
to create the deltas.
Being able to do a partial update in a resource is important for
performance reasons; it's a way to minimize the number of changes that
will be performed in the database.
Right now, some of the update() methods in ovn driver creates the
deltas using the *current* and *original* parameters that are passed to
it. The *current* parameter is, as the name says, the current version
of the object present in the Neutron DB. The *original* parameter is
the previous version (current - 1) of that object.
The problem with creating the deltas by comparing these two objects is
that only the data in the Neutron DB is used for it. We need to stop
using the *original* object for it and instead we should create the
delta based on the *current* version of the Neutron DB against the data
stored in the OVNDB to be able to detect the real differences between
the two databases.
So in summary, to guarantee the correctness of the updates this document
proposes to:
#. Create a new OVSDB command that is responsible for comparing revision
numbers and aborting the transaction, when needed.
#. Consolidate changes to a resource in a single transaction (should be
done already)
#. When doing partial updates, create the deltas based in the current
version in the Neutron DB and the OVNDB.
#3 - Detect and fix out-of-sync resources
-----------------------------------------
When things are working as expected the above changes should ensure
that Neutron DB and OVNDB are in sync but, what happens when things go
bad? As per `Problem 2 <problem_2_>`_, things like temporarily losing
connectivity with the OVNDB could cause changes to fail to be committed
and the databases to get out-of-sync. We need to be able to detect the
resources that were affected by these failures and fix them.
We already have the means to do it: similar to what the
`ovn_db_sync.py`_ script does, we could fetch all the data from both
databases and compare each resource. But, depending on the size of the
deployment, this can be really slow and costly.
This document proposes an optimization for this problem to make it
efficient enough so that we can run it periodically (as a periodic task)
and not manually as a script anymore.
First, we need to create an additional table in the Neutron database
that would serve as a cache for the revision numbers in **OVNDB**.
The new table schema could look like this:
================ ======== =================================================
Column name      Type     Description
================ ======== =================================================
standard_attr_id Integer  Primary key. The reference ID from the
                          standardattributes table in Neutron for
                          that resource. ONDELETE SET NULL.
resource_uuid    String   The UUID of the resource
resource_type    String   The type of the resource (e.g. Port, Router, ...)
revision_number  Integer  The version of the object present in OVN
acquired_at      DateTime The time that the entry was created. For
                          troubleshooting purposes
updated_at       DateTime The time that the entry was updated. For
                          troubleshooting purposes
================ ======== =================================================
For the different actions: Create, update and delete; this table will be
used as:
1. Create:
In the create_*_precommit() method, we will create an entry in the new
table within the same Neutron transaction. The revision_number column
for the new entry will have a placeholder value until the resource is
successfully created in OVNDB.
In case we fail to create the resource in OVN (but succeed in Neutron)
we still have the entry logged in the new table and this problem can
be detected by fetching all resources where the revision_number column
value is equal to the placeholder value.
The pseudo-code will look something like this:
.. code-block:: python
def create_port_precommit(ctx, port):
    create_initial_revision(port['id'], revision_number=-1,
                            session=ctx.session)

def create_port_postcommit(ctx, port):
    create_port_in_ovn(port)
    bump_revision(port['id'], revision_number=port['revision_number'])
2. Update:
For updates it's simpler: we need to bump the revision number for
that resource **after** the OVN transaction is committed, in the
update_*_postcommit() method. That way, if an update fails to be applied
to OVN, the inconsistencies can be detected by a JOIN between the new
table and the ``standardattributes`` table where the revision_number
columns do not match (see the example queries after this list).
The pseudo-code will look something like this:
.. code-block:: python
def update_port_postcommit(ctx, port):
    update_port_in_ovn(port)
    bump_revision(port['id'], revision_number=port['revision_number'])
3. Delete:
The ``standard_attr_id`` column in the new table is a foreign key
constraint with a ``ONDELETE=SET NULL`` set. That means that, upon
Neutron deleting a resource the ``standard_attr_id`` column in the new
table will be set to *NULL*.
If deleting a resource succeeds in Neutron but fails in OVN, the
inconsistency can be detected by looking at all resources that have a
``standard_attr_id`` equal to NULL.
The pseudo-code will look something like this:
.. code-block:: python
def delete_port_postcommit(ctx, port):
    delete_port_in_ovn(port)
    delete_revision(port['id'])
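For illustration, the detection queries described above could look roughly
like this; the cache table name (``ovn_revision_numbers``) and the placeholder
value (-1) are the hypothetical ones used earlier in this document:

.. code-block:: console

   $ # creations that never made it to OVNDB (still at the placeholder value)
   $ mysql -e "use neutron; SELECT resource_uuid, resource_type FROM ovn_revision_numbers WHERE revision_number = -1;"
   $ # updates where the cached OVN revision lags behind Neutron
   $ mysql -e "use neutron; SELECT r.resource_uuid, r.resource_type FROM ovn_revision_numbers r JOIN standardattributes s ON r.standard_attr_id = s.id WHERE r.revision_number != s.revision_number;"
   $ # deletions that failed in OVNDB (orphaned cache entries)
   $ mysql -e "use neutron; SELECT resource_uuid, resource_type FROM ovn_revision_numbers WHERE standard_attr_id IS NULL;"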
With the above optimization it's possible to create a periodic task that
can run quite frequently to detect and fix the inconsistencies caused
by random backend failures.
.. note::
There's no lock linking both database updates in the postcommit()
methods. So, it's true that the method bumping the revision_number
column in the new table in Neutron DB could still race but, that
should be fine because this table acts like a cache and the real
revision_number has been written in OVNDB.
The mechanism that will detect and fix the out-of-sync resources should
detect this inconsistency as well and, based on the revision_number
in OVNDB, decide whether to sync the resource or only bump the
revision_number in the cache table (in case the resource is already
at the right version).
References
=========
* There's a chain of patches with a proof of concept for this approach,
they start at: https://review.openstack.org/#/c/517049/
Alternatives
============
Journaling
----------
An alternative solution to this problem is *journaling*. The basic
idea is to create another table in the Neutron database and log every
operation (create, update and delete) instead of passing it directly to
the SDN controller.
A separate thread (or multiple instances of it) is then responsible
for reading this table and applying the operations to the SDN backend.
This approach has been used and validated
by drivers such as `networking-odl
<https://docs.openstack.org/networking-odl/latest/contributor/drivers_architecture.html#v2-design>`_.
An attempt to implement this approach
in *ovn driver* can be found `here
<https://review.openstack.org/#/q/project:openstack/networking-ovn+topic:bug/1605089-journaling>`_.
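
For illustration, a journal worker boils down to something like the sketch
below. All helper names are hypothetical placeholders (not networking-odl's
actual code), and real drivers also handle retries, dependency ordering and
concurrency:

.. code-block:: python

   # A highly simplified sketch of a journal worker loop.
   import time


   def run_journal_worker(session, backend):
       while True:
           entry = pick_oldest_pending_entry(session)      # hypothetical helper
           if entry is None:
               time.sleep(1)
               continue
           if not dependencies_completed(session, entry):  # hypothetical helper
               defer_entry(session, entry)                  # hypothetical helper
               continue
           try:
               backend.apply(entry.operation, entry.resource_type, entry.data)
               mark_completed(session, entry)
           except Exception:
               mark_failed_for_retry(session, entry)
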
Some things to keep in mind about this approach:
* The code can get quite complex as this approach is not only about
applying the changes to the SDN backend asynchronously. The dependencies
between each resource as well as their operations also need to be
computed. For example, before attempting to create a router port the
router that this port belongs to needs to be created. Or, before
attempting to delete a network all the dependent resources on it
(subnets, ports, etc...) need to be processed first.
* The number of journal threads running can cause problems. In my tests
I had three controllers, each one with 24 CPU cores (Intel Xeon E5-2620
with hyperthreading enabled) and 64GB RAM. Running 1 journal thread
per Neutron API worker has caused ``ovsdb-server`` to misbehave
when under heavy pressure [1]_. Running multiple journal threads
seems to cause other types of problems `in other drivers as well
<https://bugs.launchpad.net/networking-odl/+bug/1683797>`_.
* When under heavy pressure [1]_, I noticed that the journal
threads could come to a halt (or be really slowed down) while the
API workers were handling a lot of requests. This resulted in some
operations taking more than a minute to be processed. This behaviour
can be seen `in this screenshot <http://i.imgur.com/GDG8Mic.png>`_.
.. TODO find a better place to host that image
* Given that the 1 journal thread per Neutron API worker approach
is problematic, determining the right number of journal threads is
also difficult. In my tests, I've noticed that 3 journal threads
per controller worked better but that number was purely based on
trial and error. In production this number should probably be
calculated based on the environment, perhaps something like `TripleO
<http://tripleo.org>`_ (or any upper layer) would be in a better
position to make that decision.
* At least temporarily, the data in the Neutron database is duplicated
between the normal tables and the journal one.
* Some operations like creating a new
resource via Neutron's API will return `HTTP 201
<https://en.wikipedia.org/wiki/List_of_HTTP_status_codes#2xx_Success>`_,
which indicates that the resource has been created and is ready to
be used, but as these resources are created asynchronously one could
argue that the HTTP codes are now misleading. As a note, the resource
will be created at the Neutron database by the time the HTTP request
returns but it may not be present in the SDN backend yet.
Given all considerations, this approach is still valid and the fact
that it's already been used by other ML2 drivers makes it more open for
collaboration and code sharing.
.. _`Transaction class`: https://github.com/openvswitch/ovs/blob/3728b3b0316b44d1f9181be115b63ea85ff5883c/python/ovs/db/idl.py#L1014-L1055
.. _`ovn_db_sync.py`: https://github.com/openstack/networking-ovn/blob/a9af75cd3ce6cd6685b6435b325c97cacc83ce0e/networking_ovn/ovn_db_sync.py
.. rubric:: Footnotes
.. [1] I ran the tests using `Browbeat
<https://github.com/openstack/browbeat>`_ which basically orchestrates
`OpenStack Rally <https://github.com/openstack/rally>`_ and monitors the
machine's resource usage.

@ -0,0 +1,142 @@
.. _distributed_ovsdb_events:
================================
Distributed OVSDB events handler
================================
This document presents the problem and proposes a solution for handling
OVSDB events in a distributed fashion in ovn driver.
Problem description
===================
In ovn driver, the OVSDB Monitor class is responsible for listening
to the OVSDB events and performing certain actions on them. We use it
extensively for various tasks including critical ones such as monitoring
for port binding events (in order to notify Neutron/Nova that a port
has been bound to a certain chassis). Currently, this class uses a
distributed OVSDB lock to ensure that only one instance handles those
events at a time.
The problem with this approach is that it creates a bottleneck because
even if we have multiple Neutron Workers running at the moment, only one
is actively handling those events. And, this problem is highlighted even
more when working with technologies such as containers which rely on
creating multiple ports at a time and waiting for them to be bound.
Proposed change
===============
In order to fix this problem, this document proposes using a `Consistent
Hash Ring`_ to split the load of handling events across multiple Neutron
Workers.
A new table called ``ovn_hash_ring`` will be created in the Neutron
Database where the Neutron Workers capable of handling OVSDB events will
be registered. The table will use the following schema:
================ ======== =================================================
Column name Type Description
================ ======== =================================================
node_uuid String Primary key. The unique identification of a
Neutron Worker.
hostname String The hostname of the machine this Node is running
on.
created_at DateTime The time that the entry was created. For
troubleshooting purposes.
updated_at DateTime The time that the entry was updated. Used as a
heartbeat to indicate that the Node is still
alive.
================ ======== =================================================
This table will be used to form the `Consistent Hash Ring`_. Fortunately,
we have an implementation already in the `tooz`_ library of OpenStack. It
was contributed by the `Ironic`_ team which also uses this data
structure in order to spread the API request load across multiple
Ironic Conductors.
Here's how a `Consistent Hash Ring`_ from `tooz`_ works::

    from tooz import hashring

    hring = hashring.HashRing({'worker1', 'worker2', 'worker3'})

    # Returns set(['worker3'])
    hring[b'event-id-1']

    # Returns set(['worker1'])
    hring[b'event-id-2']
How OVSDB Monitor will use the Ring
-----------------------------------
Every instance of the OVSDB Monitor class will be listening to a series
of events from the OVSDB database and each of them will have a unique
ID registered in the database which will be part of the `Consistent
Hash Ring`_.
When an event arrives, each OVSDB Monitor instance will hash the
event UUID and the ring will return one instance ID, which will then
be compared with its own ID; if they match, that instance will
process the event.
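
A minimal sketch of that decision, reusing the ``tooz`` hash ring shown
above (``node_uuid`` and the list of active node UUIDs would come from the
``ovn_hash_ring`` table; the function name is illustrative):

.. code-block:: python

   from tooz import hashring


   def should_handle_event(event_uuid, node_uuid, active_node_uuids):
       """Return True if this worker is the one responsible for the event."""
       ring = hashring.HashRing(active_node_uuids)
       # The ring returns the set of node UUIDs responsible for this key.
       return node_uuid in ring[event_uuid.encode('utf-8')]
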
Verifying status of OVSDB Monitor instance
------------------------------------------
A new maintenance task will be created in ovn driver which will
update the ``updated_at`` column from the ``ovn_hash_ring`` table for
the entries matching its hostname indicating that all Neutron Workers
running on that hostname are alive.
Note that only a single maintenance instance runs on each machine so
the writes to the Neutron database are optimized.
When forming the ring, the code should check for entries where the
value of ``updated_at`` column is newer than a given timeout. Entries
that haven't been updated in a certain time won't be part of the ring.
If the ring already exists it will be re-balanced.
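
As a sketch, forming the ring from the table could look like this (the
``OVNHashRing`` model name and the timeout value are assumptions for
illustration):

.. code-block:: python

   import datetime

   from tooz import hashring

   NODE_TIMEOUT = datetime.timedelta(seconds=60)  # illustrative value


   def build_hash_ring(session):
       limit = datetime.datetime.utcnow() - NODE_TIMEOUT
       nodes = session.query(OVNHashRing.node_uuid).filter(
           OVNHashRing.updated_at >= limit).all()
       # Nodes that have not heartbeated recently are simply left out,
       # which is equivalent to re-balancing the ring.
       return hashring.HashRing({row.node_uuid for row in nodes})
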
Clean up and minimizing downtime window
---------------------------------------
Apart from heartbeating, we need to make sure that we remove the Nodes
from the ring when the service is stopped or killed.
By stopping the ``neutron-server`` service, all Nodes sharing the same
hostname as the machine where the service is running will be removed
from the ``ovn_hash_ring`` table. This is done by handling the SIGTERM
event. Upon this event arriving, ovn driver should invoke the clean
up method and then let the process halt.
Unfortunately nothing can be done in case of a SIGKILL; this will leave
the nodes in the database and they will be part of the ring until the
timeout is reached or the service is restarted. This can introduce a
window of time which can result in some events being lost. The current
implementation shares the same problem: if the instance holding the
current OVSDB lock is killed abruptly, events will be lost until the lock
is moved on to the next instance which is alive. One could argue that
the current implementation aggravates the problem because all events
will be lost, whereas with the distributed mechanism only **some** events
will be lost. As far as distributed systems go, that's a normal scenario
and things are soon corrected.
Ideas for future improvements
-----------------------------
This section contains some ideas that can be added on top of this work
to further improve it:
* Listen to changes to the Chassis table in the OVSDB and force a ring
re-balance when a Chassis is added or removed from it.
* Cache the ring for a short while to minimize the database reads when
the service is under heavy load.
* To further minimize or avoid event losses, it would be possible to cache the
last X events to be reprocessed in case a node times out and the
ring re-balances.
.. _`Consistent Hash Ring`: https://en.wikipedia.org/wiki/Consistent_hashing
.. _`tooz`: https://github.com/openstack/tooz
.. _`Ironic`: https://github.com/openstack/ironic

@ -0,0 +1,18 @@
..
================
OVN Design Notes
================
.. toctree::
:maxdepth: 1
data_model
native_dhcp
ovn_worker
metadata_api
database_consistency
acl_optimizations
loadbalancer
distributed_ovsdb_events
l3_ha_rescheduling

@ -0,0 +1,166 @@
.. _l3_ha_rescheduling:
===================================
L3 HA Scheduling of Gateway Chassis
===================================
Problem Description
-------------------
Currently if a single network node is active in the system, gateway chassis
for the routers would be scheduled on that node. However, when a new node is
added to the system, neither rescheduling nor rebalancing occur automatically.
As a result, routers created on the first node are not in HA mode.
Side-effects of this behavior include:
* Skewed load across network nodes due to the lack of router rescheduling.
* If the active node where the gateway chassis for a router is scheduled
  goes down, then because of the lack of HA the North-South traffic from that
  router will be disrupted.
Overview of Proposed Approach
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Gateway scheduling has been proposed in `[2]`_. However, rebalancing or
rescheduling was not a part of that solution. This specification clarifies
what rescheduling and rebalancing are.
Rescheduling would automatically happen on every event triggered by
addition or deletion of chassis.
Rebalancing would be only triggered by manual operator action.
Rescheduling of Gateway Chassis
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In order to provide proper rescheduling of the gateway ports during
addition or deletion of a chassis, the following approach can be considered:
* Identify the number of chassis in which each router has been scheduled
- Consider router for scheduling if no. of chassis < *MAX_GW_CHASSIS*
*MAX_GW_CHASSIS* is defined in `[0]`_
* Find a list of chassis where router is scheduled and reschedule it
up to *MAX_GW_CHASSIS* gateways using list of available candidates.
Do not modify the master chassis association to not interrupt network flows.
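
A conceptual sketch of the per-router check described above
(``MAX_GW_CHASSIS`` comes from the constants module referenced in `[0]`_;
the value and data structure shown here are illustrative only):

.. code-block:: python

   MAX_GW_CHASSIS = 5  # illustrative value, see [0] for the real constant


   def routers_to_reschedule(gw_chassis_by_router):
       """Return routers hosted on fewer than MAX_GW_CHASSIS chassis.

       gw_chassis_by_router maps a router ID to the list of chassis
       currently hosting its gateway port.
       """
       return [router_id
               for router_id, chassis in gw_chassis_by_router.items()
               if len(chassis) < MAX_GW_CHASSIS]
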
Rescheduling is an event-triggered operation which will occur whenever a
chassis is added or removed. When it happens, ``schedule_unhosted_gateways()``
`[1]`_ will be called to host the unhosted gateways. Routers without gateway
ports are excluded from this operation because they are not connected to
provider networks and have no gateway ports. More information about
this can be found in the ``gateway_chassis`` table definition in the OVN
NorthBound DB `[5]`_.
Chassis which have the ``enable-chassis-as-gw`` flag enabled in their OVN
Southbound database table are eligible for hosting the routers.
Rescheduling of a router depends on the priorities currently set. Each chassis
is given a specific priority for the router's gateway and the priority
increases with increasing value (i.e. 1 < 2 < 3 ...). The highest prioritized
chassis hosts the gateway port. Other chassis are selected as slaves.
There are two approaches for rescheduling supported by ovn driver right
now:
* Least loaded - select least-loaded chassis first,
* Random - select chassis randomly.
A few points to consider for the design:
* If there are 2 Chassis C1 and C2, where the routers are already balanced,
and a new chassis C3 is added, then routers should be rescheduled only from
C1 to C3 and C2 to C3. Rescheduling from C1 to C2 and vice-versa should not
be allowed.
* In order to reschedule the router's chassis, the ``master`` chassis for a
gateway router will be left untouched. However, for the scenario where all
routers are scheduled in only one chassis which is available as gateway,
the addition of the second gateway chassis would schedule the router
gateway ports at a lower priority on the new chassis.
The following scenarios are possible and have been considered in the design:
* Case #1:
- System has only one chassis C1 and all router gateway ports are scheduled
on it. We add a new chassis C2.
- Behavior: All the routers scheduled on C1 will also be scheduled on C2
with priority 1.
* Case #2:
- System has 2 chassis C1 and C2 during installation. C1 goes down.
- Behavior: In this case, all routers would be rescheduled to C2.
Once C1 is back up, routers would be rescheduled on it. However,
since C2 is now the new master, routers on C1 would have lower priority.
* Case #3:
- System has 2 chassis C1 and C2 during installation. C3 is added to it.
- Behavior: In this case, routers would not move their master chassis
associations. So routers which have their master on C1, would remain
there, and the same for routers on C2. However, lower prioritized candidates
of existing gateways would be scheduled on the chassis C3, depending
on the type of used scheduler (Random or LeastLoaded).
Rebalancing of Gateway Chassis
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Rebalancing is the second part of the design and it assigns a new master to
already scheduled router gateway ports. Downtime is expected in this
operation. Rebalancing of routers can be achieved using an external CLI script.
A similar approach has been implemented for DHCP rescheduling `[4]`_.
The master gateway chassis can be moved only to another, previously scheduled
gateway chassis. Rebalancing occurs only if the number of master gateway
ports hosted by a given chassis for a given provider network is higher than
the average number of master gateway ports per chassis for that provider
network. This condition is expressed by the formula::

    avg_gw_per_chassis = num_gw_by_provider_net / num_chassis_with_provider_net

Where:

- avg_gw_per_chassis - average number of scheduled master gateway chassis
  within the same provider network.
- num_gw_by_provider_net - number of master chassis gateways scheduled in
  the given provider network.
- num_chassis_with_provider_net - number of chassis that have connectivity
  to the given provider network.
The rebalancing occurs only if::

    num_gw_by_provider_net_by_chassis > avg_gw_per_chassis

Where:

- num_gw_by_provider_net_by_chassis - number of master gateways hosted by
  the given chassis for the given provider network.
- avg_gw_per_chassis - average number of scheduled master gateway chassis
  within the same provider network.
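
A conceptual sketch of that check in plain Python (the data structures are
illustrative, not the driver's actual interfaces):

.. code-block:: python

   def chassis_to_rebalance(master_gw_count_by_chassis,
                            num_chassis_with_provider_net):
       """Return chassis hosting more master gateways than the average.

       master_gw_count_by_chassis maps a chassis name to the number of
       master gateway ports it hosts for a single provider network.
       """
       num_gw_by_provider_net = sum(master_gw_count_by_chassis.values())
       avg_gw_per_chassis = (
           num_gw_by_provider_net / float(num_chassis_with_provider_net))
       return [chassis
               for chassis, count in master_gw_count_by_chassis.items()
               if count > avg_gw_per_chassis]
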
The following scenarios are possible and have been considered in the design:
* Case #1:
- System has only two chassis C1 and C2. Chassis host the same number
of gateways.
- Behavior: Rebalancing doesn't occur.
* Case #2:
- System has only two chassis C1 and C2. C1 hosts 3 gateways.
C2 hosts 2 gateways.
- Behavior: Rebalancing doesn't occur, to avoid continuously moving gateways
  between chassis in a loop.
* Case #3:
- System has two chassis C1 and C2. In the meantime a third chassis C3 has been
added to the system.
- Behavior: Rebalancing should occur. Gateways from C1 and C2 should be
moved to C3 up to avg_gw_per_chassis.
* Case #4:
- System has two chassis C1 and C2. C1 is connected to provnet1, but C2
is connected to provnet2.
- Behavior: Rebalancing shouldn't occur because of lack of chassis within
same provider network.
References
~~~~~~~~~~
.. _`[0]`: https://opendev.org/openstack/neutron/src/commit/f73f39f2cfcd4eace2bda14c99ead9a8cc8560f4/neutron/common/ovn/constants.py#L171
.. _`[1]`: https://opendev.org/openstack/neutron/src/commit/f73f39f2cfcd4eace2bda14c99ead9a8cc8560f4/neutron/services/ovn_l3/plugin.py#L318
.. _`[2]`: https://bugs.launchpad.net/networking-ovn/+bug/1762694
.. _`[3]`: https://developer.openstack.org/api-ref/network/v2/index.html?expanded=schedule-router-to-an-l3-agent-detail#schedule-router-to-an-l3-agent
.. _`[4]`: https://opendev.org/x/osops-tools-contrib/src/branch/master/neutron/dhcp_agents_balancer.py
.. _`[5]`: http://www.openvswitch.org/support/dist-docs/ovn-nb.5.txt

@ -0,0 +1,316 @@
.. _loadbalancer:
==================================
OpenStack LoadBalancer API and OVN
==================================
Introduction
------------
Load balancing is essential for enabling simple or automatic delivery
scaling and availability since application delivery, scaling and
availability are considered vital features of any cloud.
Octavia is an open source, operator-scale load balancing solution designed
to work with OpenStack.
The purpose of this document is to propose a design for how we can use OVN
as the backend for OpenStack's LoadBalancer API provided by Octavia.
Octavia LoadBalancers Today
---------------------------
A Detailed design analysis of Octavia is available here:
https://docs.openstack.org/octavia/queens/contributor/design/version0.5/component-design.html
Currently, Octavia uses the built-in Amphora driver to fulfill the
load balancing requests in OpenStack. An amphora can be a virtual machine,
container, dedicated hardware, appliance or device that actually performs the
task of load balancing in the Octavia system. More specifically, an amphora
takes requests from clients on the front-end and distributes these to back-end
systems. Amphorae communicate with their controllers over the LoadBalancer's
network through a driver interface on the controller.
An amphora needs a placeholder, such as a separate VM/container, for
deployment so that it can handle the LoadBalancer's requests. Along with this,
it also needs a separate network (termed the lb-mgmt-network) which handles all
amphora requests.
Amphorae have the capability to handle L4 (TCP/UDP) as well as L7 (HTTP)
LoadBalancer requests and provide monitoring features using HealthMonitors.
Octavia with OVN
----------------
OVN native LoadBalancer currently supports L4 protocols, with support for L7
protocols aimed for in future releases. Currently it also does not have any
monitoring facility. However, it does not need any extra
hardware/VM/Container for deployment, which is a major positive point when
compared with Amphorae. Also, it does not need any special network to
handle the LoadBalancer's requests as they are taken care of by OpenFlow rules
directly. And, though OVN does not have support for TLS, it is in the works
and once implemented can be integrated with Octavia.
The following section details how OVN can be used as an Octavia driver.
Overview of Proposed Approach
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The OVN Driver for Octavia runs under the scope of Octavia. Octavia API
receives and forwards calls to the OVN Driver.
**Step 1** - Creating a LoadBalancer
Octavia API receives and issues a LoadBalancer creation request on
a network to the OVN Provider driver. OVN driver creates a LoadBalancer
in the OVN NorthBound DB and asynchronously updates the Octavia DB
with the status response. A VIP port is created in Neutron when the
LoadBalancer creation is complete. The VIP information however is not updated
in the NorthBound DB until the Members are associated with the
LoadBalancer's Pool.
**Step 2** - Creating LoadBalancer entities (Pools, Listeners, Members)
Once a LoadBalancer is created by OVN in its NorthBound DB, users can now
create Pools, Listeners and Members associated with the LoadBalancer using
the Octavia API. With the creation of each entity, the LoadBalancer's
*external_ids* column in the NorthBound DB would be updated and corresponding
Logical and Openflow rules would be added for handling them.
**Step 3** - LoadBalancer request processing
When a user sends a request to the VIP IP address, OVN pipeline takes care of
load balancing the VIP request to one of the backend members.
More information about this can be found in the ovn-northd man pages.
OVN LoadBalancer Driver Logic
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
* On startup: Open and maintain a connection to the OVN Northbound DB
(using the ovsdbapp library). On first connection, and anytime a reconnect
happens:
* Do a full sync.
* Register a callback when a new interface is added to a router or deleted
from a router.
* When a new LoadBalancer L1 is created, create a Row in OVN's
``Load_Balancer`` table and update its entries for name and network
references. If the network on which the LoadBalancer is created, is
associated with a router, say R1, then add the router reference to the
LoadBalancer's *external_ids* and associate the LoadBalancer to the router.
Also associate the LoadBalancer L1 with all those networks which have an
interface on the router R1. This is required so that Logical Flows for
inter-network communication while using the LoadBalancer L1 is possible.
Also, during this time, a new port is created via Neutron which acts as a
VIP Port. The information of this new port is not visible on the OVN's
NorthBound DB till a member is added to the LoadBalancer.
* If a new network interface is added to the router R1 described above, all
the LoadBalancers on that network are associated with the router R1 and all
the LoadBalancers on the router are associated with the new network.
* If a network interface is removed from the router R1, then all the
LoadBalancers which have been solely created on that network (identified
using the *ls_ref* attribute in the LoadBalancer's *external_ids*) are
removed from the router. Similarly those LoadBalancers which are associated
with the network but not actually created on that network are removed from
the network.
* LoadBalancer can either be deleted with all its children entities using
the *cascade* option, or its members/pools/listeners can be individually
deleted. When the LoadBalancer is deleted, its references and
associations from all networks and routers are removed. This might change
in the future once the association of LoadBalancers with networks/routers
are changed to *weak* from *strong* [3]. Also the VIP port is deleted
when the LoadBalancer is deleted.
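
As a purely conceptual illustration (plain Python, no OVSDB calls), the
``vips`` map of the ``Load_Balancer`` row shown in the examples below can be
thought of as being derived like this; the listener/pool dictionaries are
illustrative stand-ins for the Octavia objects handled by the driver:

.. code-block:: python

   def build_vips(vip_ip, listeners, pools):
       """Map "VIP_IP:listener_port" to a comma separated list of members."""
       vips = {}
       for listener in listeners:
           pool = pools[listener['default_pool_id']]
           backends = ','.join('%s:%s' % (m['address'], m['protocol_port'])
                               for m in pool['members'])
           vips['%s:%s' % (vip_ip, listener['protocol_port'])] = backends
       return vips
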
OVN LoadBalancer at work
~~~~~~~~~~~~~~~~~~~~~~~~
OVN Northbound schema [5] has a table to store LoadBalancers.
The table looks like::

    "Load_Balancer": {
        "columns": {
            "name": {"type": "string"},
            "vips": {
                "type": {"key": "string", "value": "string",
                         "min": 0, "max": "unlimited"}},
            "protocol": {
                "type": {"key": {"type": "string",
                                 "enum": ["set", ["tcp", "udp"]]},
                         "min": 0, "max": 1}},
            "external_ids": {
                "type": {"key": "string", "value": "string",
                         "min": 0, "max": "unlimited"}}},
        "isRoot": true},
There is a ``load_balancer`` column in the Logical_Switch table (which
corresponds to a Neutron network) as well as the Logical_Router table
(which corresponds to a Neutron router) referring back to the 'Load_Balancer'
table.
The OVN driver updates the OVN Northbound DB. When a LoadBalancer is created,
a row in this table is created. And when the listeners and members are added,
'vips' column is updated accordingly. And the Logical_Switch's
``load_balancer`` column is also updated accordingly.
ovn-northd service which monitors for changes to the OVN Northbound DB,
generates OVN logical flows to enable load balancing and ovn-controller
running on each compute node, translates the logical flows into actual
OpenFlow rules.
The status of each entity in the Octavia DB is managed according to [4].
Below are a few examples of what happens when LoadBalancer commands are
executed and what changes in the Load_Balancer Northbound DB table.
1. Create a LoadBalancer::
$ openstack loadbalancer create --provider ovn --vip-subnet-id=private lb1
$ ovn-nbctl list load_balancer
_uuid : 9dd65bae-2501-43f2-b34e-38a9cb7e4251
external_ids : {
lr_ref="neutron-52b6299c-6e38-4226-a275-77370296f257",
ls_refs="{\"neutron-2526c68a-5a9e-484c-8e00-0716388f6563\": 1}",
neutron:vip="10.0.0.10",
neutron:vip_port_id="2526c68a-5a9e-484c-8e00-0716388f6563"}
name : "973a201a-8787-4f6e-9b8f-ab9f93c31f44"
protocol : []
vips : {}
2. Create a pool::
$ openstack loadbalancer pool create --name p1 --loadbalancer lb1
--protocol TCP --lb-algorithm SOURCE_IP_PORT
$ ovn-nbctl list load_balancer
_uuid : 9dd65bae-2501-43f2-b34e-38a9cb7e4251
external_ids : {
lr_ref="neutron-52b6299c-6e38-4226-a275-77370296f257",
ls_refs="{\"neutron-2526c68a-5a9e-484c-8e00-0716388f6563\": 1}",
"pool_f2ddf7a6-4047-4cc9-97be-1d1a6c47ece9"="", neutron:vip="10.0.0.10",
neutron:vip_port_id="2526c68a-5a9e-484c-8e00-0716388f6563"}
name : "973a201a-8787-4f6e-9b8f-ab9f93c31f44"
protocol : []
vips : {}
3. Create a member::
$ openstack loadbalancer member create --address 10.0.0.107
--subnet-id 2d54ec67-c589-473b-bc67-41f3d1331fef --protocol-port 80 p1
$ ovn-nbctl list load_balancer
_uuid : 9dd65bae-2501-43f2-b34e-38a9cb7e4251
external_ids : {
lr_ref="neutron-52b6299c-6e38-4226-a275-77370296f257",
ls_refs="{\"neutron-2526c68a-5a9e-484c-8e00-0716388f6563\": 2}",
"pool_f2ddf7a6-4047-4cc9-97be-1d1a6c47ece9"=
"member_579c0c9f-d37d-4ba5-beed-cabf6331032d_10.0.0.107:80",
neutron:vip="10.0.0.10",
neutron:vip_port_id="2526c68a-5a9e-484c-8e00-0716388f6563"}
name : "973a201a-8787-4f6e-9b8f-ab9f93c31f44"
protocol : []
vips : {}
4. Create another member::
$ openstack loadbalancer member create --address 20.0.0.107
--subnet-id c2e2da10-1217-4fe2-837a-1c45da587df7 --protocol-port 80 p1
$ ovn-nbctl list load_balancer
_uuid : 9dd65bae-2501-43f2-b34e-38a9cb7e4251
external_ids : {
lr_ref="neutron-52b6299c-6e38-4226-a275-77370296f257",
ls_refs="{\"neutron-2526c68a-5a9e-484c-8e00-0716388f6563\": 2,
\"neutron-12c42705-3e15-4e2d-8fc0-070d1b80b9ef\": 1}",
"pool_f2ddf7a6-4047-4cc9-97be-1d1a6c47ece9"=
"member_579c0c9f-d37d-4ba5-beed-cabf6331032d_10.0.0.107:80,
member_d100f2ed-9b55-4083-be78-7f203d095561_20.0.0.107:80",
neutron:vip="10.0.0.10",
neutron:vip_port_id="2526c68a-5a9e-484c-8e00-0716388f6563"}
name : "973a201a-8787-4f6e-9b8f-ab9f93c31f44"
protocol : []
vips : {}
5. Create a listener::
$ openstack loadbalancer listener create --name l1 --protocol TCP
--protocol-port 82 --default-pool p1 lb1
$ ovn-nbctl list load_balancer
_uuid : 9dd65bae-2501-43f2-b34e-38a9cb7e4251
external_ids : {
lr_ref="neutron-52b6299c-6e38-4226-a275-77370296f257",
ls_refs="{\"neutron-2526c68a-5a9e-484c-8e00-0716388f6563\": 2,
\"neutron-12c42705-3e15-4e2d-8fc0-070d1b80b9ef\": 1}",
"pool_f2ddf7a6-4047-4cc9-97be-1d1a6c47ece9"="10.0.0.107:80,20.0.0.107:80",
"listener_12345678-2501-43f2-b34e-38a9cb7e4132"=
"82:pool_f2ddf7a6-4047-4cc9-97be-1d1a6c47ece9",
neutron:vip="10.0.0.10",
neutron:vip_port_id="2526c68a-5a9e-484c-8e00-0716388f6563"}
name : "973a201a-8787-4f6e-9b8f-ab9f93c31f44"
protocol : []
vips : {"10.0.0.10:82"="10.0.0.107:80,20.0.0.107:80"}
As explained earlier in the design section:
- If a network N1 has a LoadBalancer LB1 associated to it and one of
its interfaces is added to a router R1, LB1 is associated with R1 as well.
- If a network N2 has a LoadBalancer LB2 and one of its interfaces is added
to the router R1, then R1 will have both LoadBalancers LB1 and LB2. N1 and
N2 will also have both the LoadBalancers associated to them. However, kindly
note that though network N1 would have both LB1 and LB2 LoadBalancers
associated with it, only LB1 would be the LoadBalancer which has a direct
reference to the network N1, since LB1 was created on N1. This is visible
in the ``ls_ref`` key of the ``external_ids`` column in LB1's entry in
the ``load_balancer`` table.
- If a network N3 is added to the router R1, N3 will also have both
LoadBalancers (LB1, LB2) associated to it.
- If the interface to network N2 is removed from R1, network N2 will now only
have LB2 associated with it. Networks N1 and N3 and router R1 will have
LoadBalancer LB1 associated with them.
Limitations
-----------
The following actions are not supported by the OVN Driver:
- Creating a LoadBalancer/Listener/Pool with L7 Protocol
- Creating HealthMonitors
- Currently only one algorithm is supported for pool management
(Source IP Port)
- Creating Listeners and Pools with different protocols. They should be of the
same protocol type.
The following issue exists with OVN's integration with Octavia:
- If creation/deletion of a LoadBalancer, Listener, Pool or Member fails, then
the corresponding object will remain in the DB in a PENDING_* state.
Support Matrix
--------------
A detailed matrix of the operations supported by OVN Provider driver in Octavia
can be found in https://docs.openstack.org/octavia/latest/user/feature-classification/index.html
Other References
----------------
[1] Octavia API:
https://docs.openstack.org/api-ref/load-balancer/v2/
[2] Octavia Glossary:
https://docs.openstack.org/octavia/queens/reference/glossary.html
[3] https://github.com/openvswitch/ovs/commit/612f80fa8ebf88dad2e204364c6c02b451dca36c
[4] https://docs.openstack.org/api-ref/load-balancer/v2/index.html#status-codes
[5] https://github.com/openvswitch/ovs/blob/d1b235d7a6246e00d4afc359071d3b6b3ed244c3/ovn/ovn-nb.ovsschema#L117

@ -0,0 +1,363 @@
.. _metadata_api:
==============================
OpenStack Metadata API and OVN
==============================
Introduction
------------
OpenStack Nova presents a metadata API to VMs similar to what is available on
Amazon EC2. Neutron is involved in this process because the source IP address
is not enough to uniquely identify the source of a metadata request since
networks can have overlapping IP addresses. Neutron is responsible for
intercepting metadata API requests and adding HTTP headers which uniquely
identify the source of the request before forwarding it to the metadata API
server.
The purpose of this document is to propose a design for how to enable this
functionality when OVN is used as the backend for OpenStack Neutron.
Neutron and Metadata Today
--------------------------
The following blog post describes how VMs access the metadata API through
Neutron today.
https://www.suse.com/communities/blog/vms-get-access-metadata-neutron/
In summary, we run a metadata proxy in either the router namespace or DHCP
namespace. The DHCP namespace can be used when there's no router connected to
the network. The one downside to the DHCP namespace approach is that it
requires pushing a static route to the VM through DHCP so that it knows to
route metadata requests to the DHCP server IP address.
* The instance sends an HTTP request for metadata to 169.254.169.254
* This request either hits the router or DHCP namespace depending on the route
in the instance
* The metadata proxy service in the namespace adds the following info to the
request:
* Instance IP (X-Forwarded-For header)
* Router or Network-ID (X-Neutron-Network-Id or X-Neutron-Router-Id header)
* The metadata proxy service sends this request to the metadata agent (outside
the namespace) via a UNIX domain socket.
* The neutron-metadata-agent service forwards the request to the Nova metadata
API service by adding some new headers (instance ID and Tenant ID) to the
request [0].
For proper operation, Neutron and Nova must be configured to communicate
together with a shared secret. Neutron uses this secret to sign the Instance-ID
header of the metadata request to prevent spoofing. This secret is configured
through metadata_proxy_shared_secret on both nova and neutron configuration
files (optional).
[0] https://opendev.org/openstack/neutron/src/commit/f73f39f2cfcd4eace2bda14c99ead9a8cc8560f4/neutron/agent/metadata/agent.py#L175
Neutron and Metadata with OVN
-----------------------------
The current metadata API approach does not translate directly to OVN. There
are no Neutron agents in use with OVN. Further, OVN makes no use of its own
network namespaces that we could take advantage of like the original
implementation makes use of the router and dhcp namespaces.
We must use a modified approach that fits the OVN model. This section details
a proposed approach.
Overview of Proposed Approach
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The proposed approach would be similar to the *isolated network* case in the
current ML2+OVS implementation. Therefore, we would be running a metadata
proxy (haproxy) instance on every hypervisor for each network a VM on that
host is connected to.
The downside of this approach is that we'll be running more metadata proxies
than we're doing now in case of routed networks (one per virtual router) but
since haproxy is very lightweight and they will be idling most of the time,
it shouldn't be a big issue overall. However, the major benefit of this
approach is that we don't have to implement any scheduling logic to distribute
metadata proxies across the nodes, nor any HA logic. This, however, can be
evolved in the future as explained below in this document.
Also, this approach relies on a new feature in OVN that we must implement
first so that an OVN port can be present on *every* chassis (similar to
*localnet* ports). This new type of logical port would be *localport* and we
will never forward packets over a tunnel for these ports. We would only send
packets to the local instance of a *localport*.
**Step 1** - Create a port for the metadata proxy
When using the DHCP agent today, Neutron automatically creates a port for the
DHCP agent to use. We could do the same thing for use with the metadata proxy
(haproxy). We'll create an OVN *localport* which will be present on every
chassis and this port will have the same MAC/IP address on every host.
Eventually, we can share the same neutron port for both DHCP and metadata.
**Step 2** - Routing metadata API requests to the correct Neutron port
This works similarly to the current approach.
We would program OVN to include a static route in DHCP responses that routes
metadata API requests to the *localport* that is hosting the metadata API
proxy.
Also, in case DHCP isn't enabled or the client ignores the route info, we
will program a static route in the OVN logical router which will still get
metadata requests directed to the right place.
If the DHCP route does not work and the network is isolated, VMs won't get
metadata, but this already happens with the current implementation so this
approach doesn't introduce a regression.
**Step 3** - Management of the namespaces and haproxy instances
We propose a new agent called ``neutron-ovn-metadata-agent``.
We will run this agent on every hypervisor and it will be responsible for
spawning the haproxy instances and managing the OVS interfaces, network
namespaces and haproxy processes used to proxy metadata API requests.
**Step 4** - Metadata API request processing
Similar to the existing neutron metadata agent, ``neutron-ovn-metadata-agent``
must act as an intermediary between haproxy and the Nova metadata API service.
``neutron-ovn-metadata-agent`` is the process that will have access to the
host networks where the Nova metadata API exists. Each haproxy will be in a
network namespace not able to reach the appropriate host network. Haproxy
will add the necessary headers to the metadata API request and then forward it
to ``neutron-ovn-metadata-agent`` over a UNIX domain socket, which matches the
behavior of the current metadata agent.
Metadata Proxy Management Logic
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In ``neutron-ovn-metadata-agent``:
* On startup:
* Do a full sync. Ensure we have all the required metadata proxies running.
For that, the agent would watch the ``Port_Binding`` table of the OVN
Southbound database and look for all rows with the ``chassis`` column set
to the host the agent is running on. For all those entries, make sure a
metadata proxy instance is spawned for every ``datapath`` (Neutron
network) those ports are attached to. The agent will keep record of the
list of networks it currently has proxies running on by updating the
``external-ids`` key ``neutron-metadata-proxy-networks`` of the OVN
``Chassis`` record in the OVN Southbound database that corresponds to this
host. As an example, this key would look like
``neutron-metadata-proxy-networks=NET1_UUID,NET4_UUID`` meaning that this
chassis is hosting one or more VM's connected to networks 1 and 4 so we
should have a metadata proxy instance running for each. Ensure any running
metadata proxies no longer needed are torn down.
* Open and maintain a connection to the OVN Northbound database (using the
ovsdbapp library). On first connection, and anytime a reconnect happens:
* Do a full sync.
* Register a callback for creates/updates/deletes to Logical_Switch_Port rows
to detect when metadata proxies should be started or torn down.
``neutron-ovn-metadata-agent`` will watch OVN Southbound database
(``Port_Binding`` table) to detect when a port gets bound to its chassis. At
that point, the agent will make sure that there's a metadata proxy
attached to the OVN *localport* for the network which this port is connected
to.
* When a new network is created, we must create an OVN *localport* for use
as a metadata proxy. This port will be owned by ``network:dhcp`` so that it
gets auto deleted upon the removal of the network and it will remain ``DOWN``
and not bound to any chassis. The metadata port will be created regardless of
the DHCP setting of the subnets within the network as long as the metadata
service is enabled.
* When a network is deleted, we must tear down the metadata proxy instance (if
present) on the host and delete the corresponding OVN *localport* (which will
happen automatically as it's owned by ``network:dhcp``).
Launching a metadata proxy includes:
* Creating a network namespace::
$ sudo ip netns add <ns-name>
* Creating a VETH pair (OVS upgrades that upgrade the kernel module will make
  internal ports go away and then be brought back by OVS scripts. This may
  cause some disruption. Therefore, veth pairs are preferred over internal
  ports)::
$ sudo ip link add <iface-name>0 type veth peer name <iface-name>1
* Creating an OVS interface and placing one end in that namespace::
$ sudo ovs-vsctl add-port br-int <iface-name>0
$ sudo ip link set <iface-name>1 netns <ns-name>
* Setting the IP and MAC addresses on that interface::
$ sudo ip netns exec <ns-name> \
> ip link set <iface-name>1 address <neutron-port-mac>
$ sudo ip netns exec <ns-name> \
> ip addr add <neutron-port-ip>/<netmask> dev <iface-name>1
* Bringing the VETH pair up::
$ sudo ip netns exec <ns-name> ip link set <iface-name>1 up
$ sudo ip link set <iface-name>0 up
* Set ``external-ids:iface-id=NEUTRON_PORT_UUID`` on the OVS interface so that
OVN is able to correlate this new OVS interface with the correct OVN logical
port::
$ sudo ovs-vsctl set Interface <iface-name>0 external_ids:iface-id=<neutron-port-uuid>
* Starting haproxy in this network namespace.
* Add the network UUID to ``external-ids:neutron-metadata-proxy-networks`` on
the Chassis table for our chassis in OVN Southbound database.
Tearing down a metadata proxy includes:
* Removing the network UUID from our chassis.
* Stopping haproxy.
* Deleting the OVS interface.
* Deleting the network namespace.
**Other considerations**
This feature will be enabled by default when using the ``ovn`` driver, but
there should be a way to disable it so that operators who don't need metadata
don't have to deal with the complexity of it (haproxy instances, network
namespaces, etcetera). In this case, the agent would not create the neutron
ports needed for metadata.
There could be a race condition when the first VM for a certain network boots
on a hypervisor if it does so before the metadata proxy instance has been
spawned.
Right now, the ``vif-plugged`` event to Nova is sent out when the up column
in the OVN Northbound database's Logical_Switch_Port table changes to True,
indicating that the VIF is now up. To overcome this race condition we want
to wait until all network UUIDs to which this VM is connected are present
in ``external-ids:neutron-metadata-proxy-networks`` on the Chassis table
for our chassis in OVN Southbound database. This will delay the event to Nova
until the metadata proxy instance is up and running on the host ensuring the
VM will be able to get the metadata on boot.
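
A minimal sketch of that check; the helper name is hypothetical and the
``external-ids`` value format follows the ``NET1_UUID,NET4_UUID`` example
above:

.. code-block:: python

   def metadata_ready_for_port(chassis_external_ids, port_network_ids):
       """Return True once every network of the port has a local proxy."""
       value = chassis_external_ids.get('neutron-metadata-proxy-networks', '')
       provisioned = set(net for net in value.split(',') if net)
       return set(port_network_ids).issubset(provisioned)
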
Alternatives Considered
-----------------------
Alternative 1: Build metadata support into ovn-controller
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
We've been building some features useful to OpenStack directly into OVN. DHCP
and DNS are key examples of things we've replaced by building them into
ovn-controller. The metadata API case has some key differences that make this
a less attractive solution:
The metadata API is an OpenStack specific feature. DHCP and DNS by contrast
are more clearly useful outside of OpenStack. Building metadata API proxy
support into ovn-controller means embedding an HTTP and TCP stack into
ovn-controller. This is a significant degree of undesired complexity.
This option has been ruled out for these reasons.
Alternative 2: Distributed metadata and High Availability
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In this approach, we would spawn a metadata proxy per virtual router or per
network (if isolated), thus, improving the number of metadata proxy instances
running in the cloud. However, scheduling and HA have to be considered. Also,
we wouldn't need the OVN *localport* implementation.
``neutron-ovn-metadata-agent`` would run on any host that we wish to be able
to host metadata API proxies. These hosts must also be running ovn-controller.
Each of these hosts will have a Chassis record in the OVN southbound database
created by ovn-controller. The Chassis table has a column called
``external_ids`` which can be used for general metadata however we see fit.
``neutron-ovn-metadata-agent`` will update its corresponding Chassis record
with an external-id of ``neutron-metadata-proxy-host=true`` to indicate that
this OVN chassis is one capable of hosting metadata proxy instances.
Once we have a way to determine hosts capable of hosting metadata API proxies,
we can add logic to the ovn ML2 driver that schedules metadata API
proxies. This would be triggered by Neutron API requests.
The output of the scheduling process would be setting an ``external_ids`` key
on a Logical_Switch_Port in the OVN northbound database that corresponds with
a metadata proxy. The key could be something like
``neutron-metadata-proxy-chassis=CHASSIS_HOSTNAME``.
``neutron-ovn-metadata-agent`` on each host would also be watching for updates
to these Logical_Switch_Port rows. When it detects that a metadata proxy has
been scheduled locally, it will kick off the process to spawn the local
haproxy instance and get it plugged into OVN.
HA must also be considered. We must know when a host goes down so that all
metadata proxies scheduled to that host can be rescheduled. This is almost
the exact same problem we have with L3 HA. When a host goes down, we need to
trigger rescheduling gateways to other hosts. We should ensure that the
approach used for rescheduling L3 gateways can be utilized for rescheduling
metadata proxies, as well.
In neutron-server (ovn mechanism driver):
Introduce a new ovn driver configuration option:
* ``[ovn] isolated_metadata=[True|False]``
Events that trigger scheduling a new metadata proxy:
* If isolated_metadata is True
* When a new network is created, we must create an OVN logical port for use
as a metadata proxy and then schedule this to one of the
``neutron-ovn-metadata-agent`` instances.
* If isolated_metadata is False
* When a network is attached to or removed from a logical router, ensure
that at least one of the networks has a metadata proxy port already
created. If not, pick a network and create a metadata proxy port and then
schedule it to an agent. At this point, we need to update the static route
for metadata API.
Events that trigger unscheduling an existing metadata proxy:
* When a network is deleted, delete the metadata proxy port if it exists and
unschedule it from a ``neutron-ovn-metadata-agent``.
To schedule a new metadata proxy:
* Determine the list of available OVN Chassis that can host metadata proxies
by reading the ``Chassis`` table of the OVN Southbound database. Look for
chassis that have an external-id of ``neutron-metadata-proxy-host=true``.
* Of the available OVN chassis, choose the one "least loaded", or currently
hosting the fewest number of metadata proxies.
* Set ``neutron-metadata-proxy-chassis=CHASSIS_HOSTNAME`` as an external-id on
the Logical_Switch_Port in the OVN Northbound database that corresponds to
the neutron port used for this metadata proxy. ``CHASSIS_HOSTNAME`` maps to
the hostname row of a Chassis record in the OVN Southbound database.
This approach has been ruled out for its complexity although we have analyzed
the details deeply because, eventually, and depending on the implementation of
L3 HA, we will want to evolve to it.
Other References
----------------
* Haproxy config --
https://review.openstack.org/#/c/431691/34/neutron/agent/metadata/driver.py
* https://engineeringblog.yelp.com/2015/04/true-zero-downtime-haproxy-reloads.html

@ -0,0 +1,53 @@
.. _native_dhcp:
=============================================
Using the native DHCP feature provided by OVN
=============================================
DHCPv4
------
OVN implements native DHCPv4 support which caters to the common use case of
providing an IP address to a booting instance by providing stateless replies to
DHCPv4 requests based on statically configured address mappings. To do this it
allows a short list of DHCPv4 options to be configured and applied at each
compute host running ovn-controller.
OVN northbound db provides a table 'DHCP_Options' to store the DHCP options.
Logical switch port has a reference to this table.
When a subnet is created and enable_dhcp is True, a new entry is created in
this table. The 'options' column stores the DHCPv4 options. These DHCPv4
options are included in the DHCPv4 reply by the ovn-controller when the VIF
attached to the logical switch port sends a DHCPv4 request.
In order to map the DHCP_Options row with the subnet, the OVN ML2 driver
stores the subnet id in the 'external_ids' column.
When a new port is created, the 'dhcpv4_options' column of the logical switch
port refers to the DHCP_Options row created for the subnet of the port.
If the port has multiple IPv4 subnets, then the first subnet in the 'fixed_ips'
is used.
If the port has extra DHCPv4 options defined, then a new entry is created
in the DHCP_Options table for the port. The default DHCP options are obtained
from the subnet DHCP_Options table and the extra DHCPv4 options of the port
are overridden. In order to map the port DHCP_Options row with the port,
the OVN ML2 driver stores both the subnet id and port id in the 'external_ids'
column.
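
Conceptually, the options stored in the per-port ``DHCP_Options`` row are the
subnet options with the port's extra DHCP options layered on top, roughly as
in this sketch (function and parameter names are illustrative only):

.. code-block:: python

   def dhcpv4_options_for_port(subnet_dhcp_options, extra_dhcp_opts):
       """Illustrative only: merge subnet defaults with per-port overrides."""
       options = dict(subnet_dhcp_options)
       options.update(extra_dhcp_opts)
       return options
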
If the admin wants to disable native OVN DHCPv4 for any particular port, then
the admin needs to define 'dhcp_disabled' with the value 'true' in the extra
DHCP options, for example::

    neutron port-update <PORT_ID> \
        --extra-dhcp-opt ip_version=4,opt_name=dhcp_disabled,opt_value=true
DHCPv6
------
OVN implements native DHCPv6 support similar to DHCPv4. When a v6 subnet is
created, the OVN ML2 driver will insert a new entry into DHCP_Options table
only when the subnet 'ipv6_address_mode' is not 'slaac', and enable_dhcp is
True.

@ -0,0 +1,84 @@
.. _ovn_worker:
===========================================
OVN Neutron Worker and Port status handling
===========================================
When the logical switch port's VIF is attached to or removed from the OVN
integration bridge, ovn-northd updates the Logical_Switch_Port.up to 'True'
or 'False' accordingly.
In order for the OVN Neutron ML2 driver to update the corresponding neutron
port's status to 'ACTIVE' or 'DOWN' in the db, it needs to monitor the
OVN Northbound db. A neutron worker is created for this purpose.
The implementation of the ovn worker can be found here -
'networking_ovn.ovsdb.worker.OvnWorker'.
Neutron service will create 'n' api workers and 'm' rpc workers and 1 ovn
worker (all these workers are separate processes).
Api workers and rpc workers will create ovsdb idl client object
('ovs.db.idl.Idl') to connect to the OVN_Northbound db.
See 'networking_ovn.ovsdb.impl_idl_ovn.OvsdbNbOvnIdl' and
'ovsdbapp.backend.ovs_idl.connection.Connection' classes for more details.
Ovn worker will create 'networking_ovn.ovsdb.ovsdb_monitor.OvnIdl' class
object (which inherits from 'ovs.db.idl.Idl') to connect to the
OVN_Northbound db. On receiving the OVN_Northbound db updates from the
ovsdb-server, the 'notify' function of 'OvnIdl' is called by the parent class
object.
OvnIdl.notify() function passes the received events to the
ovsdb_monitor.OvnDbNotifyHandler class.
ovsdb_monitor.OvnDbNotifyHandler checks for any changes in
the 'Logical_Switch_Port.up' and updates the neutron port's status accordingly.
If 'notify_nova_on_port_status_changes' configuration is set, then neutron
would notify nova on port status changes.
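
A minimal sketch of that status update, with illustrative helper names and
assuming (as the OVN driver does) that the Logical_Switch_Port name carries
the Neutron port UUID:

.. code-block:: python

   def handle_lsp_up_change(core_plugin, admin_context, lsp_row):
       # Optional OVSDB columns may surface as lists in the Python IDL,
       # so normalize the 'up' value defensively.
       up = lsp_row.up
       if isinstance(up, (list, tuple)):
           up = bool(up) and bool(up[0])
       status = 'ACTIVE' if up else 'DOWN'
       core_plugin.update_port_status(admin_context, lsp_row.name, status)
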
ovsdb locks
-----------
If there are multiple neutron servers running, then each neutron server will
have one ovn worker which listens for the notify events. When the
'Logical_Switch_Port.up' is updated by ovn-northd, we do not want all the
neutron servers to handle the event and update the neutron port status.
In order for only one neutron server to handle the events, ovsdb locks are
used.
At start, each neutron server's ovn worker will try to acquire a lock with id -
'neutron_ovn_event_lock'. The ovn worker which has acquired the lock will
handle the notify events.
In case the neutron server with the lock dies, ovsdb-server will assign the
lock to another neutron server in the queue.
More details about the ovsdb locks can be found here [1] and [2]
[1] - https://tools.ietf.org/html/draft-pfaff-ovsdb-proto-04#section-4.1.8
[2] - https://github.com/openvswitch/ovs/blob/branch-2.4/python/ovs/db/idl.py#L67
One thing to note is that the ovn worker (with OvnIdl) does not carry out any
transactions to the OVN Northbound db.
Since the api and rpc workers are not configured with any locks,
using the ovsdb lock on the OVN_Northbound and OVN_Southbound DBs by the ovn
workers will not have any side effects on the transactions done by these api
and rpc workers.
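
A rough sketch of how the lock gates event handling, based on the
``set_lock()`` / ``has_lock`` API of the idl class referenced in [2]; the
subclass and helper names are illustrative, not the driver's actual code:

.. code-block:: python

   from ovs.db import idl as ovs_idl

   OVN_EVENT_LOCK = 'neutron_ovn_event_lock'


   class SketchOvnIdl(ovs_idl.Idl):
       """Illustrative subclass; the real class is ovsdb_monitor.OvnIdl."""

       def __init__(self, remote, schema_helper):
           super(SketchOvnIdl, self).__init__(remote, schema_helper)
           self.set_lock(OVN_EVENT_LOCK)

       def notify(self, event, row, updates=None):
           # Only the instance currently holding the lock reacts to updates.
           if not self.has_lock:
               return
           handle_event(event, row, updates)  # hypothetical helper
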
Handling port status changes when neutron server(s) are down
------------------------------------------------------------
When neutron server starts, ovn worker would receive a dump of all
logical switch ports as events. 'ovsdb_monitor.OvnDbNotifyHandler' would
sync up if there are any inconsistencies in the port status.
OVN Southbound DB Access
------------------------
The OVN Neutron ML2 driver has a need to acquire chassis information (hostname
and physnets combinations). This is required initially to support routed
networks. Thus, the plugin will initiate and maintain a connection to the OVN
SB DB during startup.

@ -28,8 +28,7 @@
considerations specific to that choice of backend. For example, OVN does
not use Neutron agents, but does have a local controller that runs on each
compute node. OVN supports rolling upgrades, but information about how that
works should be covered in the documentation for the OVN Neutron plugin.

Upgrade strategy
================

@ -0,0 +1,20 @@
..
================================================
Deploying a development environment with vagrant
================================================
The vagrant directory contains a set of vagrant configurations which will
help you deploy Neutron with ovn driver for testing or development purposes.
We provide a sparse multinode architecture with clear separation between
services. In the future we will include all-in-one and multi-gateway
architectures.
.. toctree::
:maxdepth: 2
prerequisites
sparse-architecture

@ -0,0 +1,29 @@
.. _prerequisites:
=====================
Vagrant prerequisites
=====================
These are the prerequisites for using the Vagrant file definitions:
#. Install `VirtualBox <https://www.virtualbox.org/wiki/Downloads>`_ and
`Vagrant <https://www.vagrantup.com/downloads.html>`_. Alternatively
you can use parallels or libvirt vagrant plugin.
#. Install plug-ins for Vagrant::
$ vagrant plugin install vagrant-cachier
$ vagrant plugin install vagrant-vbguest
#. On Linux hosts, you can enable instances to access external networks such
as the Internet by enabling IP forwarding and configuring SNAT from the IP
address range of the provider network interface (typically vboxnet1) on
the host to the external network interface on the host. For example, if
the ``eth0`` network interface on the host provides external network
connectivity::
# sysctl -w net.ipv4.ip_forward=1
# sysctl -p
# iptables -t nat -A POSTROUTING -s 10.10.0.0/16 -o eth0 -j MASQUERADE
Note: These commands do not persist after rebooting the host.

@ -0,0 +1,106 @@
.. _sparse-architecture:
===================
Sparse architecture
===================
The Vagrant scripts deploy OpenStack with Open Virtual Network (OVN)
using four nodes (five if you use the optional ovn-vtep node) to implement a
minimal variant of the reference architecture:
#. ovn-db: Database node containing the OVN northbound (NB) and southbound (SB)
databases via the Open vSwitch (OVS) database and ``ovn-northd`` services.
#. ovn-controller: Controller node containing the Identity service, Image
service, control plane portion of the Compute service, control plane
portion of the Networking service including the ``ovn`` ML2
driver, and the dashboard. In addition, the controller node is configured
as an NFS server to support instance live migration between the two
compute nodes.
#. ovn-compute1 and ovn-compute2: Two compute nodes containing the Compute
hypervisor, ``ovn-controller`` service for OVN, metadata agents for the
Networking service, and OVS services. In addition, the compute nodes are
configured as NFS clients to support instance live migration between them.
#. ovn-vtep: Optional. A node to run the HW VTEP simulator. This node is not
started by default but can be started by running "vagrant up ovn-vtep"
after doing a normal "vagrant up".
During deployment, Vagrant creates three VirtualBox networks:
#. Vagrant management network for deployment and VM access to external
networks such as the Internet. Becomes the VM ``eth0`` network interface.
#. OpenStack management network for the OpenStack control plane, OVN
control plane, and OVN overlay networks. Becomes the VM ``eth1`` network
interface.
#. OVN provider network that connects OpenStack instances to external networks
such as the Internet. Becomes the VM ``eth2`` network interface.
Requirements
------------
The default configuration requires approximately 12 GB of RAM and supports
launching approximately four OpenStack instances using the ``m1.tiny``
flavor. You can change the amount of resources for each VM in the
``instances.yml`` file.
Deployment
----------
#. Follow the pre-requisites described in
:doc:`/contributor/ovn_vagrant/prerequisites`
#. Clone the ``neutron`` repository locally and change to the
``neutron/tools/ovn_vagrant/sparse`` directory::
$ git clone https://opendev.org/openstack/neutron.git
$ cd neutron/tools/ovn_vagrant/sparse
#. If necessary, adjust any configuration in the ``instances.yml`` file.
* If you change any IP addresses or networks, avoid conflicts with the
host.
* For evaluating large MTUs, adjust the ``mtu`` option. You must also
change the MTU on the equivalent ``vboxnet`` interfaces on the host
to the same value after Vagrant creates them. For example::
# ip link set dev vboxnet0 mtu 9000
# ip link set dev vboxnet1 mtu 9000
#. Launch the VMs and grab some coffee::
$ vagrant up
#. After the process completes, you can use the ``vagrant status`` command
to determine the VM status::
$ vagrant status
Current machine states:
ovn-db running (virtualbox)
ovn-controller running (virtualbox)
ovn-vtep running (virtualbox)
ovn-compute1 running (virtualbox)
ovn-compute2 running (virtualbox)
#. You can access the VMs using the following commands::
$ vagrant ssh ovn-db
$ vagrant ssh ovn-controller
$ vagrant ssh ovn-vtep
$ vagrant ssh ovn-compute1
$ vagrant ssh ovn-compute2
Note: If you prefer to use the VM console, the password for the ``root``
account is ``vagrant``. Since ``ovn-controller`` is set as the primary
machine in the Vagrantfile, running ``vagrant ssh`` without specifying
a name will connect to that virtual machine.
#. Access OpenStack services via command-line tools on the ``ovn-controller``
node or via the dashboard from the host by pointing a web browser at the
IP address of the ``ovn-controller`` node.
Note: By default, OpenStack includes two accounts: ``admin`` and ``demo``,
both using password ``password``.
#. After completing your tasks, you can destroy the VMs::
$ vagrant destroy


@ -36,3 +36,4 @@ Testing
template_model_sync_test template_model_sync_test
db_transient_failure_injection db_transient_failure_injection
ci_scenario_jobs ci_scenario_jobs
ovn_devstack


@ -0,0 +1,602 @@
.. _ovn_devstack:
=====================
Testing with DevStack
=====================
This document describes how to test OpenStack with OVN using DevStack. We will
start by describing how to test on a single host.
Single Node Test Environment
----------------------------
1. Create a test system.
It's best to use a throwaway dev system for running DevStack. Either CentOS 7
or the latest Ubuntu LTS (16.04, Xenial) is a good choice.
2. Create the ``stack`` user.
::
$ git clone https://opendev.org/openstack/devstack.git
$ sudo ./devstack/tools/create-stack-user.sh
3. Switch to the ``stack`` user and clone DevStack and Neutron.
::
$ sudo su - stack
$ git clone https://opendev.org/openstack/devstack.git
$ git clone https://opendev.org/openstack/neutron.git
4. Configure DevStack to use the OVN driver.
The OVN driver comes with a sample DevStack configuration file you can start
with. For example, you may want to set values for the various PASSWORD
variables in that file so DevStack doesn't have to prompt you for them. Feel
free to edit it if you'd like, but it should work as-is.
::
$ cd devstack
$ cp ../neutron/devstack/ovn.conf.sample local.conf
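To avoid the password prompts, you can set the standard DevStack password
variables in the ``[[local|localrc]]`` section of your new ``local.conf``
(the values below are just placeholders)::
ADMIN_PASSWORD=password
DATABASE_PASSWORD=password
RABBIT_PASSWORD=password
SERVICE_PASSWORD=password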
5. Run DevStack.
This is going to take a while. It installs a bunch of packages, clones a bunch
of git repos, and installs everything from these git repos.
::
$ ./stack.sh
Once DevStack completes successfully, you should see output that looks
something like this::
This is your host IP address: 172.16.189.6
This is your host IPv6 address: ::1
Horizon is now available at http://172.16.189.6/dashboard
Keystone is serving at http://172.16.189.6/identity/
The default users are: admin and demo
The password: password
2017-03-09 15:10:54.117 | stack.sh completed in 2110 seconds.
Environment Variables
---------------------
Once DevStack finishes successfully, we're ready to start interacting with
OpenStack APIs. OpenStack provides a set of command line tools for interacting
with these APIs. DevStack provides a file you can source to set up the right
environment variables to make the OpenStack command line tools work.
::
$ . openrc
If you're curious what environment variables are set, they generally start with
an OS prefix::
$ env | grep OS
OS_REGION_NAME=RegionOne
OS_IDENTITY_API_VERSION=2.0
OS_PASSWORD=password
OS_AUTH_URL=http://192.168.122.8:5000/v2.0
OS_USERNAME=demo
OS_TENANT_NAME=demo
OS_VOLUME_API_VERSION=2
OS_CACERT=/opt/stack/data/CA/int-ca/ca-chain.pem
OS_NO_CACHE=1
Default Network Configuration
-----------------------------
By default, DevStack creates networks called ``private`` and ``public``.
Run the following command to see the existing networks::
$ openstack network list
+--------------------------------------+---------+----------------------------------------------------------------------------+
| ID | Name | Subnets |
+--------------------------------------+---------+----------------------------------------------------------------------------+
| 40080dad-0064-480a-b1b0-592ae51c1471 | private | 5ff81545-7939-4ae0-8365-1658d45fa85c, da34f952-3bfc-45bb-b062-d2d973c1a751 |
| 7ec986dd-aae4-40b5-86cf-8668feeeab67 | public | 60d0c146-a29b-4cd3-bd90-3745603b1a4b, f010c309-09be-4af2-80d6-e6af9c78bae7 |
+--------------------------------------+---------+----------------------------------------------------------------------------+
A Neutron network is implemented as an OVN logical switch. The OVN driver
creates logical switches with a name in the format ``neutron-<network UUID>``.
We can use ``ovn-nbctl`` to list the configured logical switches and see that
their names correlate with the output from ``openstack network list``::
$ ovn-nbctl ls-list
71206f5c-b0e6-49ce-b572-eb2e964b2c4e (neutron-40080dad-0064-480a-b1b0-592ae51c1471)
8d8270e7-fd51-416f-ae85-16565200b8a4 (neutron-7ec986dd-aae4-40b5-86cf-8668feeeab67)
$ ovn-nbctl get Logical_Switch neutron-40080dad-0064-480a-b1b0-592ae51c1471 external_ids
{"neutron:network_name"=private}
Booting VMs
-----------
In this section we'll go through the steps to create two VMs that have a
virtual NIC attached to the ``private`` Neutron network.
DevStack uses libvirt as the Nova backend by default. If KVM is available, it
will be used. Otherwise, it will just run qemu emulated guests. This is
perfectly fine for our testing, as we only need these VMs to be able to send
and receive a small amount of traffic so performance is not very important.
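If you are curious whether the test system can use KVM rather than plain QEMU
emulation, a quick informal check is whether the CPU exposes hardware
virtualization extensions; a non-zero count suggests KVM should be usable::
$ egrep -c '(vmx|svm)' /proc/cpuinfo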
1. Get the Network UUID.
Start by getting the UUID for the ``private`` network from the output of
``openstack network list`` from earlier and save it off::
$ PRIVATE_NET_ID=$(openstack network show private -c id -f value)
2. Create an SSH keypair.
Next create an SSH keypair in Nova. Later, when we boot a VM, we'll ask that
the public key be put in the VM so we can SSH into it.
::
$ openstack keypair create demo > id_rsa_demo
$ chmod 600 id_rsa_demo
3. Choose a flavor.
We need minimal resources for these test VMs, so the ``m1.nano`` flavor is
sufficient.
::
$ openstack flavor list
+----+-----------+-------+------+-----------+-------+-----------+
| ID | Name | RAM | Disk | Ephemeral | VCPUs | Is Public |
+----+-----------+-------+------+-----------+-------+-----------+
| 1 | m1.tiny | 512 | 1 | 0 | 1 | True |
| 2 | m1.small | 2048 | 20 | 0 | 1 | True |
| 3 | m1.medium | 4096 | 40 | 0 | 2 | True |
| 4 | m1.large | 8192 | 80 | 0 | 4 | True |
| 42 | m1.nano | 64 | 0 | 0 | 1 | True |
| 5 | m1.xlarge | 16384 | 160 | 0 | 8 | True |
| 84 | m1.micro | 128 | 0 | 0 | 1 | True |
| c1 | cirros256 | 256 | 0 | 0 | 1 | True |
| d1 | ds512M | 512 | 5 | 0 | 1 | True |
| d2 | ds1G | 1024 | 10 | 0 | 1 | True |
| d3 | ds2G | 2048 | 10 | 0 | 2 | True |
| d4 | ds4G | 4096 | 20 | 0 | 4 | True |
+----+-----------+-------+------+-----------+-------+-----------+
$ FLAVOR_ID=$(openstack flavor show m1.nano -c id -f value)
4. Choose an image.
DevStack imports the CirrOS image by default, which is perfect for our testing.
It's a very small test image.
::
$ openstack image list
+--------------------------------------+--------------------------+--------+
| ID | Name | Status |
+--------------------------------------+--------------------------+--------+
| 849a8db2-3754-4cf6-9271-491fa4ff7195 | cirros-0.3.5-x86_64-disk | active |
+--------------------------------------+--------------------------+--------+
$ IMAGE_ID=$(openstack image list -c ID -f value)
5. Set up security group rules so that we can access the VMs we will boot up next.
By default, DevStack does not allow users to access VMs; to enable access, we
will need to add rules. We will allow both ICMP and SSH.
::
$ openstack security group rule create --ingress --ethertype IPv4 --dst-port 22 --protocol tcp default
$ openstack security group rule create --ingress --ethertype IPv4 --protocol ICMP default
$ openstack security group rule list
+--------------------------------------+-------------+-----------+------------+--------------------------------------+--------------------------------------+
| ID | IP Protocol | IP Range | Port Range | Remote Security Group | Security Group |
+--------------------------------------+-------------+-----------+------------+--------------------------------------+--------------------------------------+
...
| ade97198-db44-429e-9b30-24693d86d9b1 | tcp | 0.0.0.0/0 | 22:22 | None | a47b14da-5607-404a-8de4-3a0f1ad3649c |
| d0861a98-f90e-4d1a-abfb-827b416bc2f6 | icmp | 0.0.0.0/0 | | None | a47b14da-5607-404a-8de4-3a0f1ad3649c |
...
+--------------------------------------+-------------+-----------+------------+--------------------------------------+--------------------------------------+
6. Boot some VMs.
Now we will boot two VMs. We'll name them ``test1`` and ``test2``.
::
$ openstack server create --nic net-id=$PRIVATE_NET_ID --flavor $FLAVOR_ID --image $IMAGE_ID --key-name demo test1
+-----------------------------+-----------------------------------------------------------------+
| Field | Value |
+-----------------------------+-----------------------------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | |
| OS-EXT-STS:power_state | NOSTATE |
| OS-EXT-STS:task_state | scheduling |
| OS-EXT-STS:vm_state | building |
| OS-SRV-USG:launched_at | None |
| OS-SRV-USG:terminated_at | None |
| accessIPv4 | |
| accessIPv6 | |
| addresses | |
| adminPass | BzAWWA6byGP6 |
| config_drive | |
| created | 2017-03-09T16:56:08Z |
| flavor | m1.nano (42) |
| hostId | |
| id | d8b8084e-58ff-44f4-b029-a57e7ef6ba61 |
| image | cirros-0.3.5-x86_64-disk (849a8db2-3754-4cf6-9271-491fa4ff7195) |
| key_name | demo |
| name | test1 |
| progress | 0 |
| project_id | b6522570f7344c06b1f24303abf3c479 |
| properties | |
| security_groups | name='default' |
| status | BUILD |
| updated | 2017-03-09T16:56:08Z |
| user_id | c68f77f1d85e43eb9e5176380a68ac1f |
| volumes_attached | |
+-----------------------------+-----------------------------------------------------------------+
$ openstack server create --nic net-id=$PRIVATE_NET_ID --flavor $FLAVOR_ID --image $IMAGE_ID --key-name demo test2
+-----------------------------+-----------------------------------------------------------------+
| Field | Value |
+-----------------------------+-----------------------------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | |
| OS-EXT-STS:power_state | NOSTATE |
| OS-EXT-STS:task_state | scheduling |
| OS-EXT-STS:vm_state | building |
| OS-SRV-USG:launched_at | None |
| OS-SRV-USG:terminated_at | None |
| accessIPv4 | |
| accessIPv6 | |
| addresses | |
| adminPass | YB8dmt5v88JV |
| config_drive | |
| created | 2017-03-09T16:56:50Z |
| flavor | m1.nano (42) |
| hostId | |
| id | 170d4f37-9299-4a08-b48b-2b90fce8e09b |
| image | cirros-0.3.5-x86_64-disk (849a8db2-3754-4cf6-9271-491fa4ff7195) |
| key_name | demo |
| name | test2 |
| progress | 0 |
| project_id | b6522570f7344c06b1f24303abf3c479 |
| properties | |
| security_groups | name='default' |
| status | BUILD |
| updated | 2017-03-09T16:56:51Z |
| user_id | c68f77f1d85e43eb9e5176380a68ac1f |
| volumes_attached | |
+-----------------------------+-----------------------------------------------------------------+
Once both VMs have been started, they will have a status of ``ACTIVE``::
$ openstack server list
+--------------------------------------+-------+--------+---------------------------------------------------------+--------------------------+
| ID | Name | Status | Networks | Image Name |
+--------------------------------------+-------+--------+---------------------------------------------------------+--------------------------+
| 170d4f37-9299-4a08-b48b-2b90fce8e09b | test2 | ACTIVE | private=fd5d:9d1b:457c:0:f816:3eff:fe24:49df, 10.0.0.3 | cirros-0.3.5-x86_64-disk |
| d8b8084e-58ff-44f4-b029-a57e7ef6ba61 | test1 | ACTIVE | private=fd5d:9d1b:457c:0:f816:3eff:fe3f:953d, 10.0.0.10 | cirros-0.3.5-x86_64-disk |
+--------------------------------------+-------+--------+---------------------------------------------------------+--------------------------+
Our two VMs have addresses of ``10.0.0.3`` and ``10.0.0.10``. If we list
Neutron ports, there are two new ports with these addresses associated
with them::
$ openstack port list
+--------------------------------------+------+-------------------+-----------------------------------------------------------------------------------------------------+--------+
| ID | Name | MAC Address | Fixed IP Addresses | Status |
+--------------------------------------+------+-------------------+-----------------------------------------------------------------------------------------------------+--------+
...
| 97c970b0-485d-47ec-868d-783c2f7acde3 | | fa:16:3e:3f:95:3d | ip_address='10.0.0.10', subnet_id='da34f952-3bfc-45bb-b062-d2d973c1a751' | ACTIVE |
| | | | ip_address='fd5d:9d1b:457c:0:f816:3eff:fe3f:953d', subnet_id='5ff81545-7939-4ae0-8365-1658d45fa85c' | |
| e003044d-334a-4de3-96d9-35b2d2280454 | | fa:16:3e:24:49:df | ip_address='10.0.0.3', subnet_id='da34f952-3bfc-45bb-b062-d2d973c1a751' | ACTIVE |
| | | | ip_address='fd5d:9d1b:457c:0:f816:3eff:fe24:49df', subnet_id='5ff81545-7939-4ae0-8365-1658d45fa85c' | |
...
+--------------------------------------+------+-------------------+-----------------------------------------------------------------------------------------------------+--------+
$ TEST1_PORT_ID=97c970b0-485d-47ec-868d-783c2f7acde3
$ TEST2_PORT_ID=e003044d-334a-4de3-96d9-35b2d2280454
Now we can look at OVN using ``ovn-nbctl`` to see the logical switch ports
that were created for these two Neutron ports. The first part of the output
is the OVN logical switch port UUID. The second part in parentheses is the
logical switch port name. Neutron sets the logical switch port name equal to
the Neutron port ID.
::
$ ovn-nbctl lsp-list neutron-$PRIVATE_NET_ID
...
fde1744b-e03b-46b7-b181-abddcbe60bf2 (97c970b0-485d-47ec-868d-783c2f7acde3)
7ce284a8-a48a-42f5-bf84-b2bca62cd0fe (e003044d-334a-4de3-96d9-35b2d2280454)
...
These two ports correspond to the two VMs we created.
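As an optional cross-check (the port ID and output below reuse the values from
this example; yours will differ), ``ovn-nbctl lsp-get-addresses`` prints the
MAC and IP addresses that OVN has associated with a logical switch port, which
should match the corresponding Neutron port::
$ ovn-nbctl lsp-get-addresses 97c970b0-485d-47ec-868d-783c2f7acde3
fa:16:3e:3f:95:3d 10.0.0.10 fd5d:9d1b:457c:0:f816:3eff:fe3f:953d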
VM Connectivity
---------------
We can connect to our VMs by associating a floating IP address from the public
network.
::
$ openstack floating ip create --port $TEST1_PORT_ID public
+---------------------+--------------------------------------+
| Field | Value |
+---------------------+--------------------------------------+
| created_at | 2017-03-09T18:58:12Z |
| description | |
| fixed_ip_address | 10.0.0.10 |
| floating_ip_address | 172.24.4.8 |
| floating_network_id | 7ec986dd-aae4-40b5-86cf-8668feeeab67 |
| id | 24ff0799-5a72-4a5b-abc0-58b301c9aee5 |
| name | None |
| port_id | 97c970b0-485d-47ec-868d-783c2f7acde3 |
| project_id | b6522570f7344c06b1f24303abf3c479 |
| revision_number | 1 |
| router_id | ee51adeb-0dd8-4da0-ab6f-7ce60e00e7b0 |
| status | DOWN |
| updated_at | 2017-03-09T18:58:12Z |
+---------------------+--------------------------------------+
DevStack does not wire up the public network by default, so we must do
that before connecting to this floating IP address.
::
$ sudo ip link set br-ex up
$ sudo ip route add 172.24.4.0/24 dev br-ex
$ sudo ip addr add 172.24.4.1/24 dev br-ex
Now you should be able to connect to the VM via its floating IP address.
First, ping the address.
::
$ ping -c 1 172.24.4.8
PING 172.24.4.8 (172.24.4.8) 56(84) bytes of data.
64 bytes from 172.24.4.8: icmp_seq=1 ttl=63 time=0.823 ms
--- 172.24.4.8 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.823/0.823/0.823/0.000 ms
Now SSH to the VM::
$ ssh -i id_rsa_demo cirros@172.24.4.8 hostname
test1
Adding Another Compute Node
---------------------------
After completing the earlier instructions for setting up DevStack, you can use
a second VM to emulate an additional compute node. This is important for OVN
testing as it exercises the tunnels created by OVN between the hypervisors.
Just as before, create a throwaway VM, but make sure that this VM has a
different host name. Using the same host name for both VMs will confuse Nova and
you will not see two hypervisors when you query ``openstack hypervisor list`` later.
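If you need to change it, one way to give the second VM a distinct host name
(assuming a systemd-based distribution; the name below is only an example) is::
$ sudo hostnamectl set-hostname ovn-devstack-2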
Once the VM is setup, create the ``stack`` user::
$ git clone https://opendev.org/openstack/devstack.git
$ sudo ./devstack/tools/create-stack-user.sh
Switch to the ``stack`` user and clone DevStack and neutron::
$ sudo su - stack
$ git clone https://opendev.org/openstack/devstack.git
$ git clone https://opendev.org/openstack/neutron.git
The OVN driver comes with another sample configuration file that can be used
for this::
$ cd devstack
$ cp ../neutron/devstack/ovn-computenode.conf.sample local.conf
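The next paragraph describes the two settings you must edit in this file. As a
sketch, reusing the example addresses that appear elsewhere in this walkthrough
(substitute your own), the relevant ``[[local|localrc]]`` entries look like::
SERVICE_HOST=172.16.189.6
HOST_IP=172.16.189.30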
You must set ``SERVICE_HOST`` in ``local.conf``. The value should be the IP address of
the main DevStack host. You must also set ``HOST_IP`` to the IP address of this
new host. See the text in the sample configuration file for more
information. Once that is complete, run DevStack::
$ cd devstack
$ ./stack.sh
This should complete in less time than before, as it's only running a single
OpenStack service (nova-compute) along with OVN (ovn-controller, ovs-vswitchd,
ovsdb-server). The final output will look something like this::
This is your host IP address: 172.16.189.30
This is your host IPv6 address: ::1
2017-03-09 18:39:27.058 | stack.sh completed in 1149 seconds.
Now go back to your main DevStack host. You can use admin credentials to
verify that the additional hypervisor has been added to the deployment::
$ cd devstack
$ . openrc admin
$ openstack hypervisor list
+----+------------------------+-----------------+---------------+-------+
| ID | Hypervisor Hostname | Hypervisor Type | Host IP | State |
+----+------------------------+-----------------+---------------+-------+
| 1 | centos7-ovn-devstack | QEMU | 172.16.189.6 | up |
| 2 | centos7-ovn-devstack-2 | QEMU | 172.16.189.30 | up |
+----+------------------------+-----------------+---------------+-------+
You can also look at OVN and OVS to see that the second host has shown up. For
example, there will be a second entry in the Chassis table of the
OVN_Southbound database. You can use the ``ovn-sbctl`` utility to list
chassis, their configuration, and the ports bound to each of them::
$ ovn-sbctl show
Chassis "ddc8991a-d838-4758-8d15-71032da9d062"
hostname: "centos7-ovn-devstack"
Encap vxlan
ip: "172.16.189.6"
options: {csum="true"}
Encap geneve
ip: "172.16.189.6"
options: {csum="true"}
Port_Binding "97c970b0-485d-47ec-868d-783c2f7acde3"
Port_Binding "e003044d-334a-4de3-96d9-35b2d2280454"
Port_Binding "cr-lrp-08d1f28d-cc39-4397-b12b-7124080899a1"
Chassis "b194d07e-0733-4405-b795-63b172b722fd"
hostname: "centos7-ovn-devstack-2.os1.phx2.redhat.com"
Encap geneve
ip: "172.16.189.30"
options: {csum="true"}
Encap vxlan
ip: "172.16.189.30"
options: {csum="true"}
You can also see a tunnel created to the other compute node::
$ ovs-vsctl show
...
Bridge br-int
fail_mode: secure
...
Port "ovn-b194d0-0"
Interface "ovn-b194d0-0"
type: geneve
options: {csum="true", key=flow, remote_ip="172.16.189.30"}
...
...
Provider Networks
-----------------
Neutron has a "provider networks" API extension that lets you specify
some additional attributes on a network. These attributes let you
map a Neutron network to a physical network in your environment.
The OVN ML2 driver supports this API extension. It currently
supports ``flat`` and ``vlan`` networks.
Here is how you can test it:
First you must create an OVS bridge that provides connectivity to the
provider network on every host running ovn-controller. For trivial
testing this could just be a dummy bridge. In a real environment, you
would want to add a local network interface to the bridge, as well.
::
$ ovs-vsctl add-br br-provider
ovn-controller on each host must be configured with a mapping between
a network name and the bridge that provides connectivity to that network.
In this case we'll create a mapping from the network name ``providernet``
to the bridge ``br-provider``.
::
$ ovs-vsctl set open . \
external-ids:ovn-bridge-mappings=providernet:br-provider
If you want to enable this chassis to host a gateway router for
external connectivity, then set ``ovn-cms-options`` to ``enable-chassis-as-gw``.
::
$ ovs-vsctl set open . \
external-ids:ovn-cms-options="enable-chassis-as-gw"
Now create a Neutron provider network.
::
$ openstack network create provider --share \
--provider-physical-network providernet \
--provider-network-type flat
Alternatively, you can define connectivity to a VLAN instead of a flat network:
::
$ openstack network create provider-101 --share \
--provider-physical-network providernet \
--provider-network-type vlan \
--provider-segment 101
Observe that the OVN ML2 driver created a special logical switch port of type
localnet on the logical switch to model the connection to the physical network.
::
$ ovn-nbctl show
...
switch 5bbccbbd-f5ca-411b-bad9-01095d6f1316 (neutron-729dbbee-db84-4a3d-afc3-82c0b3701074)
port provnet-729dbbee-db84-4a3d-afc3-82c0b3701074
addresses: ["unknown"]
...
$ ovn-nbctl lsp-get-type provnet-729dbbee-db84-4a3d-afc3-82c0b3701074
localnet
$ ovn-nbctl lsp-get-options provnet-729dbbee-db84-4a3d-afc3-82c0b3701074
network_name=providernet
If VLAN is used, there will be a VLAN tag shown on the localnet port as well.
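For example, you can read the tag back with the same generic ``ovn-nbctl get``
command used earlier for ``external_ids`` (the port name and the tag value
below are illustrative)::
$ ovn-nbctl get Logical_Switch_Port provnet-<network UUID> tag
101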
Finally, create a Neutron port on the provider network.
::
$ openstack port create --network provider myport
or if you followed the VLAN example, it would be:
::
$ openstack port create --network provider-101 myport
Skydive
-------
`Skydive <https://github.com/skydive-project/skydive>`_ is an open source
real-time network topology and protocols analyzer. It aims to provide a
comprehensive way of understanding what is happening in the network
infrastructure. Skydive works by using agents to collect host-local
information and sending this information to a central analyzer for
further analysis. It uses Elasticsearch to store the data.
To enable Skydive support with OVN and DevStack, enable it on the control
and compute nodes.
On the control node, enable it as follows:
::
enable_plugin skydive https://github.com/skydive-project/skydive.git
enable_service skydive-analyzer
On the compute nodes, enable it as follows:
::
enable_plugin skydive https://github.com/skydive-project/skydive.git
enable_service skydive-agent
Troubleshooting
---------------
If you run into any problems, take a look at our :doc:`/admin/ovn/troubleshooting`
page.
Additional Resources
--------------------
See the documentation and other references linked
from the :doc:`/admin/ovn/ovn` page.


@ -34,7 +34,7 @@ Networking Guide
---------------- ----------------
.. toctree:: .. toctree::
:maxdepth: 2 :maxdepth: 3
admin/index admin/index
@ -53,6 +53,14 @@ CLI Reference
cli/index cli/index
OVN Driver
----------
.. toctree::
:maxdepth: 2
ovn/index
Neutron Feature Classification Neutron Feature Classification
------------------------------ ------------------------------


@ -13,6 +13,7 @@ Networking service Installation Guide
install-obs.rst install-obs.rst
install-rdo.rst install-rdo.rst
install-ubuntu.rst install-ubuntu.rst
ovn/index.rst
This chapter explains how to install and configure the Networking This chapter explains how to install and configure the Networking
service (neutron) using the :ref:`provider networks <network1>` or service (neutron) using the :ref:`provider networks <network1>` or

(Image files added for the OVN documentation: PNG binaries and their SVG sources; contents not shown.)

@ -0,0 +1,11 @@
..
=========================
OVN Install Documentation
=========================
.. toctree::
:maxdepth: 1
manual_install.rst
tripleo_install.rst


@ -0,0 +1,347 @@
.. _manual_install:
==============================
Manual install & Configuration
==============================
This document discusses what is required for a manual installation of OVN, or
for its integration into a production OpenStack deployment tool, for
conventional architectures that include the following types of nodes:
* Controller - Runs OpenStack control plane services such as REST APIs
and databases.
* Network - Runs the layer-2, layer-3 (routing), DHCP, and metadata agents
for the Networking service; some agents are optional. Usually provides
connectivity between provider (public) and project (private) networks
via NAT and floating IP addresses.
.. note::
Some tools deploy these services on controller nodes.
* Compute - Runs the hypervisor and layer-2 agent for the Networking
service.
Packaging
---------
Open vSwitch (OVS) includes OVN beginning with version 2.5 and considers
it experimental. The Networking service integration for OVN is now one of
the in-tree Neutron drivers, so it is delivered with the ``neutron`` package,
but older versions of this integration were delivered as a separate
package, typically ``networking-ovn``.
Building OVS from source automatically installs OVN. For deployment tools
using distribution packages, the ``openvswitch-ovn`` package for RHEL/CentOS
and compatible distributions automatically installs ``openvswitch`` as a
dependency. Ubuntu/Debian includes ``ovn-central``, ``ovn-host``,
``ovn-docker``, and ``ovn-common`` packages that pull in the appropriate Open
vSwitch dependencies as needed.
A ``python-networking-ovn`` RPM may be obtained for Fedora or CentOS from
the RDO project. A package based on the ``master`` branch of
``networking-ovn`` can be found at https://trunk.rdoproject.org/.
Fedora and CentOS RPM builds of OVS and OVN from the ``master`` branch of
``ovs`` can be found in this COPR repository:
https://copr.fedorainfracloud.org/coprs/leifmadsen/ovs-master/.
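As a quick illustration only (exact package names vary by distribution and
release, as described above):
.. code-block:: console
# yum install openvswitch-ovn                        (RHEL/CentOS and compatible)
# apt-get install ovn-central ovn-host ovn-common    (Ubuntu/Debian)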
Controller nodes
----------------
Each controller node runs the OVS service (including dependent services such
as ``ovsdb-server``) and the ``ovn-northd`` service. However, only a single
instance of the ``ovsdb-server`` and ``ovn-northd`` services can operate in
a deployment. Deployment tools can, however, implement active/passive
high-availability using a management tool that monitors service health
and automatically starts these services on another node after failure of the
primary node. See the :doc:`/ovn/faq/index` for more information.
#. Install the ``openvswitch-ovn`` and ``networking-ovn`` packages.
#. Start the OVS service. The central OVS service starts the ``ovsdb-server``
service that manages OVN databases.
Using the *systemd* unit:
.. code-block:: console
# systemctl start openvswitch
Using the ``ovs-ctl`` script:
.. code-block:: console
# /usr/share/openvswitch/scripts/ovs-ctl start --system-id="random"
#. Configure the ``ovsdb-server`` component. By default, the ``ovsdb-server``
service only permits local access to databases via Unix socket. However,
OVN services on compute nodes require access to these databases.
* Permit remote database access.
.. code-block:: console
# ovn-nbctl set-connection ptcp:6641:0.0.0.0 -- \
set connection . inactivity_probe=60000
# ovn-sbctl set-connection ptcp:6642:0.0.0.0 -- \
set connection . inactivity_probe=60000
# if using the VTEP functionality:
# ovs-appctl -t ovsdb-server ovsdb-server/add-remote ptcp:6640:0.0.0.0
Replace ``0.0.0.0`` with the IP address of the management network
interface on the controller node to avoid listening on all interfaces.
.. note::
Permit remote access to the following TCP ports: 6640 (OVS) to VTEPs (if you use VTEPs);
6642 (SB DB) to hosts running neutron-server, gateway nodes that run ovn-controller,
and compute node services such as ovn-controller and ovn-metadata-agent; and 6641 (NB DB)
to hosts running neutron-server.
#. Start the ``ovn-northd`` service.
Using the *systemd* unit:
.. code-block:: console
# systemctl start ovn-northd
Using the ``ovn-ctl`` script:
.. code-block:: console
# /usr/share/openvswitch/scripts/ovn-ctl start_northd
Options for *start_northd*:
.. code-block:: console
# /usr/share/openvswitch/scripts/ovn-ctl start_northd --help
# ...
# DB_NB_SOCK="/usr/local/etc/openvswitch/nb_db.sock"
# DB_NB_PID="/usr/local/etc/openvswitch/ovnnb_db.pid"
# DB_SB_SOCK="/usr/local/etc/openvswitch/sb_db.sock"
# DB_SB_PID="/usr/local/etc/openvswitch/ovnsb_db.pid"
# ...
#. Configure the Networking server component. The Networking service
implements OVN as an ML2 driver. Edit the ``/etc/neutron/neutron.conf``
file:
* Enable the ML2 core plug-in.
.. code-block:: ini
[DEFAULT]
...
core_plugin = neutron.plugins.ml2.plugin.Ml2Plugin
* Enable the OVN layer-3 service.
.. code-block:: ini
[DEFAULT]
...
service_plugins = networking_ovn.l3.l3_ovn.OVNL3RouterPlugin
#. Configure the ML2 plug-in. Edit the
``/etc/neutron/plugins/ml2/ml2_conf.ini`` file:
* Configure the OVN mechanism driver, network type drivers, self-service
(tenant) network types, and enable the port security extension.
.. code-block:: ini
[ml2]
...
mechanism_drivers = ovn
type_drivers = local,flat,vlan,geneve
tenant_network_types = geneve
extension_drivers = port_security
overlay_ip_version = 4
.. note::
To enable VLAN self-service networks, make sure that OVN
version 2.11 (or higher) is used, then add ``vlan`` to the
``tenant_network_types`` option. The first network type in the
list becomes the default self-service network type.
To use IPv6 for all overlay (tunnel) network endpoints,
set the ``overlay_ip_version`` option to ``6``.
* Configure the Geneve ID range and maximum header size. The IP version
overhead (20 bytes for IPv4 (default) or 40 bytes for IPv6) is added
to the maximum header size based on the ML2 ``overlay_ip_version``
option.
.. code-block:: ini
[ml2_type_geneve]
...
vni_ranges = 1:65536
max_header_size = 38
.. note::
The Networking service uses the ``vni_ranges`` option to allocate
network segments. However, OVN ignores the actual values. Thus, the ID
range only determines the quantity of Geneve networks in the
environment. For example, a range of ``5001:6000`` defines a maximum
of 1000 Geneve networks.
* Optionally, enable support for VLAN provider and self-service
networks on one or more physical networks. If you specify only
the physical network, only administrative (privileged) users can
manage VLAN networks. Additionally specifying a VLAN ID range for
a physical network enables regular (non-privileged) users to
manage VLAN networks. The Networking service allocates the VLAN ID
for each self-service network using the VLAN ID range for the
physical network.
.. code-block:: ini
[ml2_type_vlan]
...
network_vlan_ranges = PHYSICAL_NETWORK:MIN_VLAN_ID:MAX_VLAN_ID
Replace ``PHYSICAL_NETWORK`` with the physical network name and
optionally define the minimum and maximum VLAN IDs. Use a comma
to separate each physical network.
For example, to enable support for administrative VLAN networks
on the ``physnet1`` network and self-service VLAN networks on
the ``physnet2`` network using VLAN IDs 1001 to 2000:
.. code-block:: ini
network_vlan_ranges = physnet1,physnet2:1001:2000
* Enable security groups.
.. code-block:: ini
[securitygroup]
...
enable_security_group = true
.. note::
The ``firewall_driver`` option under ``[securitygroup]`` is ignored
since the OVN ML2 driver itself handles security groups.
* Configure OVS database access and L3 scheduler
.. code-block:: ini
[ovn]
...
ovn_nb_connection = tcp:IP_ADDRESS:6641
ovn_sb_connection = tcp:IP_ADDRESS:6642
ovn_l3_scheduler = OVN_L3_SCHEDULER
.. note::
Replace ``IP_ADDRESS`` with the IP address of the controller node that
runs the ``ovsdb-server`` service. Replace ``OVN_L3_SCHEDULER`` with
``leastloaded`` if you want the scheduler to select a compute node with
the least number of gateway ports or ``chance`` if you want the
scheduler to randomly select a compute node from the available list of
compute nodes.
* Set ``ovn-cms-options`` to ``enable-chassis-as-gw`` in the ``external_ids`` column
of the ``Open_vSwitch`` table. If this chassis has proper bridge mappings,
it will then be eligible for scheduling gateway routers.
.. code-block:: console
# ovs-vsctl set open . external-ids:ovn-cms-options=enable-chassis-as-gw
#. Start the ``neutron-server`` service.
Network nodes
-------------
Deployments using OVN native layer-3 and DHCP services do not require
conventional network nodes because connectivity to external networks
(including VTEP gateways) and routing occurs on compute nodes.
Compute nodes
-------------
Each compute node runs the OVS and ``ovn-controller`` services. The
``ovn-controller`` service replaces the conventional OVS layer-2 agent.
#. Install the ``openvswitch-ovn`` and ``networking-ovn`` packages.
#. Start the OVS service.
Using the *systemd* unit:
.. code-block:: console
# systemctl start openvswitch
Using the ``ovs-ctl`` script:
.. code-block:: console
# /usr/share/openvswitch/scripts/ovs-ctl start --system-id="random"
#. Configure the OVS service.
* Use OVS databases on the controller node.
.. code-block:: console
# ovs-vsctl set open . external-ids:ovn-remote=tcp:IP_ADDRESS:6642
Replace ``IP_ADDRESS`` with the IP address of the controller node
that runs the ``ovsdb-server`` service.
* Enable one or more overlay network protocols. At a minimum, OVN requires
enabling the ``geneve`` protocol. Deployments using VTEP gateways should
also enable the ``vxlan`` protocol.
.. code-block:: console
# ovs-vsctl set open . external-ids:ovn-encap-type=geneve,vxlan
.. note::
Deployments without VTEP gateways can safely enable both protocols.
* Configure the overlay network local endpoint IP address.
.. code-block:: console
# ovs-vsctl set open . external-ids:ovn-encap-ip=IP_ADDRESS
Replace ``IP_ADDRESS`` with the IP address of the overlay network
interface on the compute node.
#. Start the ``ovn-controller`` service.
Using the *systemd* unit:
.. code-block:: console
# systemctl start ovn-controller
Using the ``ovn-ctl`` script:
.. code-block:: console
# /usr/share/openvswitch/scripts/ovn-ctl start_controller
Verify operation
----------------
#. Each compute node should contain an ``ovn-controller`` instance.
.. code-block:: console
# ovn-sbctl show
<output>


@ -0,0 +1,286 @@
.. _tripleo_install:
=============================
TripleO/RDO based deployments
=============================
`TripleO <http://tripleo.org/>`_ is a project aimed at installing,
upgrading and operating OpenStack clouds using OpenStack's own cloud
facilities as the foundation.
`RDO <http://rdoproject.org/>`_ is the OpenStack distribution that runs on
top of CentOS, and can be deployed via TripleO.
`TripleO Quickstart`_ is an easy way to try out TripleO in a libvirt
virtualized environment.
In this document we will stick to the details of installing a 3-controller +
1-compute deployment in high availability through TripleO Quickstart, but the
non-Quickstart details in this document also work with TripleO.
.. _`TripleO Quickstart`: https://github.com/openstack/tripleo-quickstart/blob/master/README.rst
.. note::
This deployment requires 32 GB of RAM for the VMs, so your host should have more
than 32 GB of RAM. If you have only 32 GB, we recommend trimming down the compute
node memory in ``config/nodes/3ctlr_1comp.yml`` to 2 GB and the controller nodes to 5 GB.
Deployment steps
================
#. Download the quickstart.sh script with curl:
.. code-block:: console
$ curl -O https://raw.githubusercontent.com/openstack/tripleo-quickstart/master/quickstart.sh
#. Install the necessary dependencies by running:
.. code-block:: console
$ bash quickstart.sh --install-deps
#. Clone the tripleo-quickstart and neutron repositories:
.. code-block:: console
$ git clone https://opendev.org/openstack/tripleo-quickstart
$ git clone https://opendev.org/openstack/neutron
#. Once you're done, run quickstart as follows (3 controller HA + 1 compute):
.. code-block:: console
# Exporting the tags is a workaround until the bug
# https://bugs.launchpad.net/tripleo/+bug/1737602 is resolved
$ export ansible_tags="untagged,provision,environment,libvirt,\
undercloud-scripts,undercloud-inventory,overcloud-scripts,\
undercloud-setup,undercloud-install,undercloud-post-install,\
overcloud-prep-config"
$ bash ./quickstart.sh --tags $ansible_tags --teardown all \
--release master-tripleo-ci \
--nodes tripleo-quickstart/config/nodes/3ctlr_1comp.yml \
--config neutron/tools/tripleo/ovn.yml \
$VIRTHOST
.. note::
When deploying directly on ``localhost`` use the loopback address
127.0.0.2 as your $VIRTHOST. The loopback address 127.0.0.1 is
reserved by ansible. Also make sure that 127.0.0.2 is accessible
via public keys::
$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
.. note::
You can adjust RAM/VCPUs if you want by editing
*config/nodes/3ctlr_1comp.yml* before running the above command. If
you have enough memory stick to the defaults. We recommend using 8GB
of RAM for the controller nodes.
#. When Quickstart has finished, you will have 5 VMs ready to be used: 1 for
the undercloud (TripleO's node to deploy your OpenStack from), 3 VMs for
controller nodes and 1 VM for the compute node.
#. Log in to the undercloud:
.. code-block:: console
$ ssh -F ~/.quickstart/ssh.config.ansible undercloud
#. Prepare overcloud container images:
.. code-block:: console
[stack@undercloud ~]$ ./overcloud-prep-containers.sh
#. Run inside the undercloud:
.. code-block:: console
[stack@undercloud ~]$ ./overcloud-deploy.sh
#. Grab a coffee; this may take around 1 hour (depending on your hardware).
#. If anything goes wrong, go to IRC on Freenode and ask in #oooq.
Description of the environment
==============================
Once deployed, two files are present in the undercloud home directory:
``stackrc`` and ``overcloudrc``. They let you connect to the APIs of the
undercloud (which manages the OpenStack nodes) and of the overcloud (where
your instances live).
We can find out the existing controller/computes this way:
.. code-block:: console
[stack@undercloud ~]$ source stackrc
(undercloud) [stack@undercloud ~]$ openstack server list -c Name -c Networks -c Flavor
+-------------------------+------------------------+--------------+
| Name | Networks | Flavor |
+-------------------------+------------------------+--------------+
| overcloud-controller-1 | ctlplane=192.168.24.16 | oooq_control |
| overcloud-controller-0 | ctlplane=192.168.24.14 | oooq_control |
| overcloud-controller-2 | ctlplane=192.168.24.12 | oooq_control |
| overcloud-novacompute-0 | ctlplane=192.168.24.13 | oooq_compute |
+-------------------------+------------------------+--------------+
Network architecture of the environment
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. image:: figures/tripleo-ovn-arch.png
:alt: TripleO Quickstart single NIC with vlans
:align: center
Connecting to one of the nodes via ssh
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
We can connect to any of the IP addresses shown in the ``openstack server list``
output above.
.. code-block:: console
(undercloud) [stack@undercloud ~]$ ssh heat-admin@192.168.24.16
Last login: Wed Feb 21 14:11:40 2018 from 192.168.24.1
[heat-admin@overcloud-controller-1 ~]$ ps fax | grep ovn-controller
20422 ? S<s 30:40 ovn-controller unix:/var/run/openvswitch/db.sock -vconsole:emer -vsyslog:err -vfile:info --no-chdir --log-file=/var/log/openvswitch/ovn-controller.log --pidfile=/var/run/openvswitch/ovn-controller.pid --detach
[heat-admin@overcloud-controller-1 ~]$ sudo ovs-vsctl show
bb413f44-b74f-4678-8d68-a2c6de725c73
Bridge br-ex
fail_mode: standalone
...
Port "patch-provnet-84d63c87-aad1-43d0-bdc9-dca5145b6fe6-to-br-int"
Interface "patch-provnet-84d63c87-aad1-43d0-bdc9-dca5145b6fe6-to-br-int"
type: patch
options: {peer="patch-br-int-to-provnet-84d63c87-aad1-43d0-bdc9-dca5145b6fe6"}
Port "eth0"
Interface "eth0"
...
Bridge br-int
fail_mode: secure
Port "ovn-c8b85a-0"
Interface "ovn-c8b85a-0"
type: geneve
options: {csum="true", key=flow, remote_ip="172.16.0.17"}
Port "ovn-b5643d-0"
Interface "ovn-b5643d-0"
type: geneve
options: {csum="true", key=flow, remote_ip="172.16.0.14"}
Port "ovn-14d60a-0"
Interface "ovn-14d60a-0"
type: geneve
options: {csum="true", key=flow, remote_ip="172.16.0.12"}
Port "patch-br-int-to-provnet-84d63c87-aad1-43d0-bdc9-dca5145b6fe6"
Interface "patch-br-int-to-provnet-84d63c87-aad1-43d0-bdc9-dca5145b6fe6"
type: patch
options: {peer="patch-provnet-84d63c87-aad1-43d0-bdc9-dca5145b6fe6-to-br-int"}
Port br-int
Interface br-int
type: internal
Initial resource creation
=========================
Well, now you have a virtual cloud with 3 controllers in HA, and one compute
node, but no instances or routers running. We can give it a try and create a
few resources:
.. image:: figures/ovn-initial-resources.png
:alt: Initial resources we can create
:align: center
You can use the following script to create the resources.
.. code-block:: console
ssh -F ~/.quickstart/ssh.config.ansible undercloud
source ~/overcloudrc
curl http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img \
> cirros-0.4.0-x86_64-disk.img
openstack image create "cirros" --file cirros-0.4.0-x86_64-disk.img \
--disk-format qcow2 --container-format bare --public
openstack network create public --provider-physical-network datacentre \
--provider-network-type vlan \
--provider-segment 10 \
--external --share
openstack subnet create --network public public --subnet-range 10.0.0.0/24 \
--allocation-pool start=10.0.0.20,end=10.0.0.250 \
--dns-nameserver 8.8.8.8 --gateway 10.0.0.1 \
--no-dhcp
openstack network create private
openstack subnet create --network private private \
--subnet-range 192.168.99.0/24
openstack router create router1
openstack router set --external-gateway public router1
openstack router add subnet router1 private
openstack security group create test
openstack security group rule create --ingress --protocol tcp \
--dst-port 22 test
openstack security group rule create --ingress --protocol icmp test
openstack security group rule create --egress test
openstack flavor create m1.tiny --disk 1 --vcpus 1 --ram 64
PRIV_NET=$(openstack network show private -c id -f value)
openstack server create --flavor m1.tiny --image cirros \
--nic net-id=$PRIV_NET --security-group test \
--wait cirros
openstack floating ip create --floating-ip-address 10.0.0.130 public
openstack server add floating ip cirros 10.0.0.130
.. note::
You can now log in to the instance if you want.
In a CirrOS 0.4.0 (or later) image, the login account is ``cirros`` and the
password is *gocubsgo*.
.. code-block:: console
(overcloud) [stack@undercloud ~]$ ssh cirros@10.0.0.130
cirros@10.0.0.130's password:
$ ip a | grep eth0 -A 10
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1442 qdisc pfifo_fast qlen 1000
link/ether fa:16:3e:85:b4:66 brd ff:ff:ff:ff:ff:ff
inet 192.168.99.5/24 brd 192.168.99.255 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::f816:3eff:fe85:b466/64 scope link
valid_lft forever preferred_lft forever
$ ping 10.0.0.1
PING 10.0.0.1 (10.0.0.1): 56 data bytes
64 bytes from 10.0.0.1: seq=0 ttl=63 time=2.145 ms
64 bytes from 10.0.0.1: seq=1 ttl=63 time=1.025 ms
64 bytes from 10.0.0.1: seq=2 ttl=63 time=0.836 ms
^C
--- 10.0.0.1 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.836/1.335/2.145 ms
$ ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8): 56 data bytes
64 bytes from 8.8.8.8: seq=0 ttl=52 time=3.943 ms
64 bytes from 8.8.8.8: seq=1 ttl=52 time=4.519 ms
64 bytes from 8.8.8.8: seq=2 ttl=52 time=3.778 ms
$ curl http://169.254.169.254/2009-04-04/meta-data/instance-id
i-00000002


@ -0,0 +1,111 @@
.. _ovn_faq:
==========================
Frequently Asked Questions
==========================
**Q: What are the key differences between ML2/ovs and ML2/ovn?**
+---------------+---------------------------+--------------------------------+
| Detail | ml2/ovs | ml2/ovn |
+===============+===========================+================================+
| agent/server | rabbit mq messaging + RPC.| ovsdb protocol on the |
| communication | | NorthBound and SouthBound |
| | | databases. |
+---------------+---------------------------+--------------------------------+
| l3ha | routers expose an "ha" | routers don't expose an "ha" |
| API | field that can be disabled| field, and will make use of HA |
| | or enabled by admin with a| as soon as there is more than |
| | deployment default. | one network node available. |
+---------------+---------------------------+--------------------------------+
| l3ha | qrouter namespace with | ovn-controller configures |
| dataplane | keepalive process and an | specific OpenFlow rules, and |
| | internal ha network for | enables BFD protocol over |
| | VRRP traffic. | tunnel endpoints to detect |
| | | connectivity issues to nodes. |
+---------------+---------------------------+--------------------------------+
| DVR | exposes the "distributed" | no "distributed" flag is shown |
| API | flag on routers only | or available on routers via |
| | modifiable by admin. | API. |
+---------------+---------------------------+--------------------------------+
| DVR | uses namespaces, veths, | Uses OpenFlow rules on the |
| dataplane | ip routing, ip rules and | compute nodes. |
| | iptables on the compute | |
| | nodes. | |
+---------------+---------------------------+--------------------------------+
| E/W traffic | goes through network nodes| completely distributed in |
| | when the router is not | all cases. |
| | distributed (DVR). | |
+---------------+---------------------------+--------------------------------+
| Metadata | Metadata service is | Metadata is completely |
| Service | provided by the qrouters | distributed across compute |
| | or dhcp namespaces in the | nodes, and served from the |
| | network nodes. | ovnmeta-xxxxx-xxxx namespace. |
+---------------+---------------------------+--------------------------------+
| DHCP | DHCP is provided via | DHCP is provided by OpenFlow |
| Service | qdhcp-xxxxx-xxx namespaces| and ovn-controller, being |
| | which run dnsmasq inside. | distributed across computes. |
+---------------+---------------------------+--------------------------------+
| Trunk | Trunk ports are built | Trunk ports live in br-int |
| Ports | by creating br-trunk-xxx | as OpenFlow rules, while |
| | bridges and patch ports. | subports are directly attached |
| | | to br-int. |
+---------------+---------------------------+--------------------------------+
**Q: Why can't I use the distributed or ha flags of routers?**
The OVN driver implements HA and distributed routing transparently for
administrators and users.
HA will be automatically used for routers as soon as more than two
gateway nodes are detected, and distributed floating IPs will be used
as soon as they are configured (see the next question).
**Q: Does OVN support DVR or distributed L3 routing?**
Yes, it's controlled by a single flag in configuration.
DVR will be used for floating IPs if the ``[ovn] enable_distributed_floating_ip``
option is set to ``True`` in the neutron server configuration. This is a
deployment-wide setting, in contrast to ML2/OVS, which allowed an administrator
to choose the behaviour per router. Note that the OVN driver does not expose the
"distributed" flag of routers through the API.
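For reference, enabling it amounts to a single option in the neutron server
configuration (a minimal sketch)::
[ovn]
enable_distributed_floating_ip = True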
**Q: Does OVN support integration with physical switches?**
OVN currently integrates with physical switches by optionally using them as
VTEP gateways from logical to physical networks and via integrations provided
by the Neutron ML2 framework, hierarchical port binding.
**Q: What's the status of HA for ovn driver and OVN?**
Typically, multiple copies of neutron-server are run across multiple servers
behind a load balancer. The ML2 mechanism driver provided by the OVN driver
supports this deployment model. DHCP and metadata services are distributed
across compute nodes and don't depend on the network nodes.
The network controller portion of OVN is distributed - an instance of the
ovn-controller service runs on every hypervisor. OVN also includes some
central components for control purposes.
ovn-northd is a centralized service that does some translation between the
northbound and southbound databases in OVN. Currently, you only run this
service once. You can manage it in an active/passive HA mode using something
like Pacemaker. The OVN project plans to allow this service to be horizontally
scaled both for scaling and HA reasons. This will allow it to be run in an
active/active HA mode.
OVN also makes use of ovsdb-server for the OVN northbound and southbound
databases. ovsdb-server supports active/passive HA using replication.
For more information, see:
http://docs.openvswitch.org/en/latest/topics/ovsdb-replication/
A typical deployment would use something like Pacemaker to manage the
active/passive HA process. Clients would be pointed at a virtual IP
address. When the HA manager detects a failure of the master, the
virtual IP would be moved and the passive replica would become the
new master.
See :doc:`/admin/ovn/ovn` for links to more details on OVN's architecture.

doc/source/ovn/index.rst

@ -0,0 +1,12 @@
.. meta::
:keywords: ovn, networking-ovn, OpenStack, neutron
==========
OVN Driver
==========
.. toctree::
:maxdepth: 1
migration.rst
faq/index.rst


@ -0,0 +1,360 @@
.. _ovn_migration:
Migration Strategy
==================
This document details an in-place migration strategy from ML2/OVS to ML2/OVN
in either ovs-firewall or ovs-hybrid mode for a TripleO OpenStack deployment.
For non-TripleO deployments, please refer to the file ``migration/README.rst``
and the ansible playbook ``migration/migrate-to-ovn.yml``.
Overview
--------
The migration process is orchestrated through the shell script
ovn_migration.sh, which is provided with the OVN driver.
The administrator uses ovn_migration.sh to perform readiness steps
and migration from the undercloud node.
The readiness steps, such as host inventory production, DHCP and MTU
adjustments, prepare the environment for the procedure.
Subsequent steps start the migration via Ansible.
Plan for a 24-hour wait after the setup-mtu-t1 step to allow VMs to catch up
with the new MTU size. The default neutron ML2/OVS configuration has a
dhcp_lease_duration of 86400 seconds (24h).
Also, if there are instances using static IP assignment, the administrator
should be ready to update the MTU of those instances to the new value of 8
bytes less than the ML2/OVS (VXLAN) MTU value. For example, the typical
1500 MTU network value that makes VXLAN tenant networks use 1450 bytes of MTU
will need to change to 1442 under Geneve. Or under the same overlay network,
a GRE encapsulated tenant network would use a 1458 MTU, but again a 1442 MTU
for Geneve.
If there are instances which use DHCP but don't support lease update during
the T1 period the administrator will need to reboot them to ensure that MTU
is updated inside those instances.
Steps for migration
-------------------
Perform the following steps in the overcloud/undercloud
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1. Ensure that you have updated to the latest openstack/neutron version.
Perform the following steps in the undercloud
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1. Install python-networking-ovn-migration-tool.
.. code-block:: console
yum install python-networking-ovn-migration-tool
2. Create a working directory on the undercloud, and copy the ansible playbooks
.. code-block:: console
mkdir ~/ovn_migration
cd ~/ovn_migration
cp -rfp /usr/share/ansible/networking-ovn-migration/playbooks .
3. Create the ``~/overcloud-deploy-ovn.sh`` script in your ``$HOME`` directory.
This script must source your stackrc file, and then execute an ``openstack
overcloud deploy`` with your original deployment parameters, plus
the following environment files, added to the end of the command
in the following order:
When your network topology is DVR and your compute nodes have connectivity
to the external network:
.. code-block:: console
-e /usr/share/openstack-tripleo-heat-templates/environments/services/neutron-ovn-dvr-ha.yaml \
-e $HOME/ovn-extras.yaml
When your compute nodes don't have external connectivity and you don't use
DVR:
.. code-block:: console
-e /usr/share/openstack-tripleo-heat-templates/environments/services/neutron-ovn-ha.yaml \
-e $HOME/ovn-extras.yaml
Make sure that all users have execution privileges on the script, because it
will be called by ovn_migration.sh/ansible during the migration process.
.. code-block:: console
$ chmod a+x ~/overcloud-deploy-ovn.sh
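As a rough sketch only (the elided line stands for whatever parameters your
original deployment used, and this example picks the DVR variant of the
environment files listed above), the script could look like:
.. code-block:: console
#!/bin/bash
source ~/stackrc
openstack overcloud deploy \
    ... your original deployment parameters ... \
    -e /usr/share/openstack-tripleo-heat-templates/environments/services/neutron-ovn-dvr-ha.yaml \
    -e $HOME/ovn-extras.yaml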
4. To configure the parameters of your migration you can set the environment
variables that will be used by ``ovn_migration.sh``. You can skip setting
any values matching the defaults.
* STACKRC_FILE - must point to your stackrc file in your undercloud.
Default: ~/stackrc
* OVERCLOUDRC_FILE - must point to your overcloudrc file in your
undercloud.
Default: ~/overcloudrc
* OVERCLOUD_OVN_DEPLOY_SCRIPT - must point to the script described in
step 3 above.
Default: ~/overcloud-deploy-ovn.sh
* PUBLIC_NETWORK_NAME - Name of your public network.
Default: 'public'.
To support migration validation, this network must have available
floating IPs, and those floating IPs must be pingable from the
undercloud. If that's not possible please configure VALIDATE_MIGRATION
to False.
* IMAGE_NAME - Name/ID of the Glance image to use for booting a test server.
Default: 'cirros'.
If the image does not exist, the tool will automatically download and use
CirrOS during the pre-validation / post-validation process.
* VALIDATE_MIGRATION - Create migration resources to validate the
migration. The migration script, before starting the migration, boots a
server and validates that the server is reachable after the migration.
Default: True.
* SERVER_USER_NAME - User name to use for logging into the migration
instances.
Default: 'cirros'.
* DHCP_RENEWAL_TIME - DHCP renewal time in seconds to configure in DHCP
agent configuration file. This renewal time is used only temporarily
during migration to ensure a synchronized MTU switch across the networks.
Default: 30
.. warning::
Please note that VALIDATE_MIGRATION requires enough quota (2
available floating ips, 2 networks, 2 subnets, 2 instances,
and 2 routers as admin).
For example:
.. code-block:: console
$ export PUBLIC_NETWORK_NAME=my-public-network
$ ovn_migration.sh .........
5. Run ``ovn_migration.sh generate-inventory`` to generate the inventory
file - ``hosts_for_migration`` and ``ansible.cfg``. Please review
``hosts_for_migration`` for correctness.
.. code-block:: console
$ ovn_migration.sh generate-inventory
At this step the script will inspect the TripleO ansible inventory
and generate an inventory of hosts, specifically tagged to work
with the migration playbooks.
6. Run ``ovn_migration.sh setup-mtu-t1``
.. code-block:: console
$ ovn_migration.sh setup-mtu-t1
This lowers the T1 parameter of the internal neutron DHCP servers by configuring
``dhcp_renewal_time`` in
/var/lib/config-data/puppet-generated/neutron/etc/neutron/dhcp_agent.ini
on all the nodes where the DHCP agent is running.
We lower the T1 parameter to make sure that the instances start refreshing
the DHCP lease more often (every 30 seconds by default) during the migration
process. The reason we force this is to make sure that the MTU update
happens quickly across the network during step 8. This is very important,
because during those 30 seconds there will be connectivity issues with
bigger packets (MTU mismatches across the network). This is also why
step 7 is very important: even though we reduce T1, the previous T1 value
the instances leased from the DHCP server will be much higher
(24h by default), and we need to wait those 24h to make sure they have
picked up the new T1. After migration, the DHCP T1 parameter returns to its normal value.
7. If you are using VXLAN or GRE tenant networking, ``wait at least 24 hours``
before continuing. This will allow VMs to catch up with the new MTU size
of the next step.
.. warning::
If you are using VXLAN or GRE networks, this 24-hour wait step is critical.
If you are using VLAN tenant networks you can proceed to the next step without delay.
.. warning::
If you have any instance with static IP assignment on VXLAN or
GRE tenant networks, you must manually modify the configuration of those
instances to configure the new Geneve MTU, which is the current VXLAN MTU
minus 8 bytes. For instance, if the VXLAN-based MTU was 1450, change it
to 1442.
If your instances don't honor the T1 parameter of DHCP, they will need
to be rebooted.
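As an illustration only, on a typical Linux guest the MTU of a statically
configured interface could be lowered with a command like the one below;
the interface name ``eth0`` and the value 1442 are assumptions, so use the
values matching your instance and network, and remember to also make the
change persistent in the guest's own network configuration:
.. code-block:: console
$ sudo ip link set dev eth0 mtu 1442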
.. note::
The 24-hour value is based on the default configuration. The actual time
depends on the ``dhcp_renewal_time`` option in
/var/lib/config-data/puppet-generated/neutron/etc/neutron/dhcp_agent.ini
and the ``dhcp_lease_duration`` option in
/var/lib/config-data/puppet-generated/neutron/etc/neutron/neutron.conf
(86400 seconds by default).
.. note::
Please note that migrating a deployment which uses VLAN for tenant/project
networks is not recommended at this time because of a bug in core OVN;
full support is being worked on here:
https://mail.openvswitch.org/pipermail/ovs-dev/2018-May/347594.html
One way to verify that the T1 parameter has propagated to existing VMs
is to connect to one of the compute nodes and run ``tcpdump`` on one
of the VM taps attached to a tenant network. If the T1 propagation was
successful, you should see requests happen on an interval of approximately
30 seconds.
.. code-block:: console
[heat-admin@overcloud-novacompute-0 ~]$ sudo tcpdump -i tap52e872c2-e6 port 67 or port 68 -n
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on tap52e872c2-e6, link-type EN10MB (Ethernet), capture size 262144 bytes
13:17:28.954675 IP 192.168.99.5.bootpc > 192.168.99.3.bootps: BOOTP/DHCP, Request from fa:16:3e:6b:41:3d, length 300
13:17:28.961321 IP 192.168.99.3.bootps > 192.168.99.5.bootpc: BOOTP/DHCP, Reply, length 355
13:17:56.241156 IP 192.168.99.5.bootpc > 192.168.99.3.bootps: BOOTP/DHCP, Request from fa:16:3e:6b:41:3d, length 300
13:17:56.249899 IP 192.168.99.3.bootps > 192.168.99.5.bootpc: BOOTP/DHCP, Reply, length 355
.. note::
This verification is not possible with cirros VMs. The cirros
udhcpc implementation does not obey DHCP option 58 (T1). Please
try this verification on a port that belongs to a full Linux VM.
We recommend checking all the different types of workloads your
system runs (Windows, different flavors of Linux, etc.).
8. Run ``ovn_migration.sh reduce-mtu``.
This lowers the MTU of the pre-migration VXLAN and GRE networks. The
tool will ignore non-VXLAN/GRE networks, so if you use VLAN for tenant
networks it is expected that this step does nothing.
.. code-block:: console
$ ovn_migration.sh reduce-mtu
This step goes network by network reducing the MTU, tagging the networks
that have already been handled with ``adapted_mtu``.
Every time a network is updated, all the existing L3/DHCP agents
connected to that network will update their internal leg MTU, and instances
will start fetching the new MTU as the DHCP T1 timer expires. As explained
before, instances not obeying the DHCP T1 parameter will need to be
restarted, and instances with static IP assignment will need to be updated
manually.
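You can spot-check that a given tenant network has been processed by
looking at its MTU and tags. A sketch, assuming a hypothetical network
named ``private``:
.. code-block:: console
$ openstack network show private -c mtu -c tags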
9. Make TripleO ``prepare the new container images`` for OVN.
If your deployment didn't have a containers-prepare-parameter.yaml, you can
create one with:
.. code-block:: console
$ test -f $HOME/containers-prepare-parameter.yaml || \
openstack tripleo container image prepare default \
--output-env-file $HOME/containers-prepare-parameter.yaml
If you had to create the file, please make sure it is included at the end of
your $HOME/overcloud-deploy-ovn.sh and $HOME/overcloud-deploy.sh, as shown
in the sketch below.
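For example, each deploy script would pass the file as an additional
environment file. The exact layout depends on your deploy scripts, so the
following check and its output are only a sketch:
.. code-block:: console
$ grep containers-prepare-parameter $HOME/overcloud-deploy-ovn.sh
    -e $HOME/containers-prepare-parameter.yaml \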
Change the neutron_driver in the containers-prepare-parameter.yaml file to
ovn:
.. code-block:: console
$ sed -i -E 's/neutron_driver:([ ]\w+)/neutron_driver: ovn/' $HOME/containers-prepare-parameter.yaml
You can verify with:
.. code-block:: console
$ grep neutron_driver $HOME/containers-prepare-parameter.yaml
neutron_driver: ovn
Then update the images:
.. code-block:: console
$ openstack tripleo container image prepare \
--environment-file $HOME/containers-prepare-parameter.yaml
.. note::
It's important to provide the full path to your containers-prepare-parameter.yaml;
otherwise the command will finish very quickly and won't work (the current
version doesn't seem to output any error).
During this step TripleO will build a list of containers, pull them from
the remote registry and push them to your deployment's local registry.
10. Run ``ovn_migration.sh start-migration`` to kick start the migration
process.
.. code-block:: console
$ ovn_migration.sh start-migration
During this step, this is what will happen:
* Create pre-migration resources (network and VM) to validate existing
deployment and final migration.
* Update the overcloud stack to deploy OVN alongside reference
implementation services using a temporary bridge "br-migration" instead
of br-int.
* Start the migration process:
1. Generate the OVN northbound database by running the neutron-ovn-db-sync
util.
2. Clone the existing resources from br-int to br-migration, so OVN
can find the same resource UUIDs over br-migration.
3. Re-assign ovn-controller to br-int instead of br-migration.
4. Clean up network namespaces (fip, snat, qrouter, qdhcp).
5. Remove any unnecessary patch ports on br-int.
6. Remove the br-tun and br-migration ovs bridges.
7. Delete the qr-*, ha-* and qg-* ports from br-int (via neutron netns
cleanup).
* Delete neutron agents and neutron HA internal networks from the database
via API.
* Validate connectivity on pre-migration resources.
* Delete pre-migration resources.
* Create post-migration resources.
* Validate connectivity on post-migration resources.
* Cleanup post-migration resources.
* Re-run the deployment tool to update OVN to use br-int; this step ensures
that the TripleO database is updated with the final integration bridge.
* Run an extra validation round to ensure the final state of the system is
fully operational.
Migration is complete!
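As an optional manual check once the script reports completion, you can
verify that only OVN-related agents remain registered. This is a sketch,
assuming the overcloud credentials are sourced on the undercloud:
.. code-block:: console
$ source ~/overcloudrc
$ openstack network agent list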
@ -25,6 +25,7 @@
admin/index
configuration/index
cli/index
ovn/index
reference/rest-api
feature_classification/index
contributor/index
tools/tripleo/ovn.yml Normal file
@ -0,0 +1,51 @@
# Summary of the feature set.
# Deploy an OpenStack environment with OVN configured in the containerized
# overcloud
# TODO (lucasagomes): Ideally this configuration file should live in
# the tripleo-quickstart repository. Delete it from the networking-ovn
# tree once it's moved.
deploy_timeout: 190
network_isolation: true
enable_pacemaker: true
overcloud_ipv6: false
containerized_overcloud: true
# This enables TLS for the undercloud which will also make haproxy bind
# to the configured public-vip and admin-vip.
undercloud_generate_service_certificate: true
# List of ntp servers to use in the undercloud
undercloud_undercloud_ntp_servers: pool.ntp.org
# This enables the deployment of the overcloud with SSL.
ssl_overcloud: false
# This featureset is extremely resource intensive, so we disable telemetry
# in order to reduce the overall memory footprint. This is not required
# in newton.
telemetry_args: >-
{% if release != 'newton' %}
-e {{ overcloud_templates_path }}/environments/disable-telemetry.yaml
{% endif %}
extra_args: >-
--ntp-server pool.ntp.org
-e {{ overcloud_templates_path }}/environments/docker.yaml
-e {{ overcloud_templates_path }}/environments/docker-ha.yaml
-e {{ overcloud_templates_path }}/environments/services/neutron-ovn-ha.yaml
prepare_service_env_args: >-
-e {{ overcloud_templates_path }}/environments/docker.yaml
-e {{ overcloud_templates_path }}/environments/docker-ha.yaml
-e {{ overcloud_templates_path }}/environments/services/neutron-ovn-ha.yaml
# If `run_tempest` is `true`, run tempest tests; otherwise do not
# run them.
tempest_config: true
test_ping: false
run_tempest: false
test_regex: ''
tempest_whitelist:
- 'tempest.scenario.test_network_basic_ops.TestNetworkBasicOps.test_network_basic_ops'