<?xml version="1.0" encoding="UTF-8"?>
<chapter xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
xml:id="ch_networking">
<title>Networking</title>
<para>This chapter describes the high-level concepts and components of OpenStack
Networking administration in a cloud system.</para>
<section xml:id="neworking-intro">
<title>Introduction to Networking</title>
<para>The OpenStack Networking project was created to provide a rich
API for defining network connectivity and
addressing in the cloud. It gives
operators the ability to leverage different networking
technologies to power their cloud networking.</para>
<para>For a detailed description of the OpenStack Networking API
abstractions and their attributes, see the <link
xlink:href="http://docs.openstack.org/api/openstack-network/2.0/content/"
xmlns:svg="http://www.w3.org/2000/svg"
xmlns:m="http://www.w3.org/1998/Math/MathML"
xmlns:html="http://www.w3.org/1999/xhtml"
xmlns:db="http://docbook.org/ns/docbook"
><citetitle>OpenStack Networking API Guide
(v2.0)</citetitle></link>.</para>
<section xml:id="section_networking-api">
<title>Networking API</title>
<para>Networking is a virtual network service that provides a powerful API to define the
network connectivity and addressing used by devices from other services, such as
OpenStack Compute.   </para>
<para>The Compute API has a virtual server abstraction to describe computing resources.
Similarly, the OpenStack Networking API has virtual network, subnet, and port
abstractions to describe networking resources. In more detail: <itemizedlist>
<listitem>
<para><emphasis role="bold">Network</emphasis>. An isolated L2 segment,
analogous to VLAN in the physical networking world.</para>
</listitem>
<listitem>
<para><emphasis role="bold">Subnet</emphasis>. A block of v4 or v6 IP
addresses and associated configuration state.</para>
</listitem>
<listitem>
<para><emphasis role="bold">Port</emphasis>. A connection point for
attaching a single device, such as the NIC of a virtual server, to a
virtual network. Also describes the associated network configuration,
such as the MAC and IP addresses to be used on that port.   </para>
</listitem>
</itemizedlist> You can configure rich network topologies by creating and
configuring networks and subnets, and then instructing other OpenStack services like
OpenStack Compute to attach virtual devices to ports on these networks.  In
particular, OpenStack Networking supports each tenant having multiple private
networks, and allows tenants to choose their own IP addressing scheme (even if those
IP addresses overlap with those used by other tenants). The OpenStack Networking
service: <itemizedlist>
<listitem>
<para>Enables advanced cloud networking use cases, such as building
multi-tiered web applications and allowing applications to be migrated
to the cloud without changing IP addresses.</para>
</listitem>
<listitem>
<para>Offers flexibility for the cloud administrator to customize network
offerings.</para>
</listitem>
<listitem>
<para>Provides a mechanism that lets cloud administrators expose additional
API capabilities through API extensions.  Commonly, new capabilities are
first introduced as an API extension, and over time become part of the
core OpenStack Networking API.</para>
</listitem>
</itemizedlist>
</para>
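<para>For example, a minimal workflow that exercises these three
abstractions might look like the following (the network name and
CIDR are illustrative):</para>
<screen><prompt>$</prompt> <userinput>neutron net-create net1</userinput>
<prompt>$</prompt> <userinput>neutron subnet-create net1 10.0.0.0/24</userinput>
<prompt>$</prompt> <userinput>neutron port-create net1</userinput></screen>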
</section>
<section xml:id="section_plugin-arch">
<title>Plugin architecture</title>
<para>Enhancing traditional networking solutions to
provide rich cloud networking is challenging.
Traditional networking is not designed to scale to
cloud proportions nor to handle automatic configuration.</para>
<para>The original OpenStack Compute network implementation assumed a
very basic model of performing all isolation through
Linux VLANs and iptables. OpenStack Networking introduces the
concept of a <emphasis role="italic"
>plugin</emphasis>, which is a back-end
implementation of the OpenStack Networking API. A plugin can use a
variety of technologies to implement the logical API
requests.  Some OpenStack Networking plugins might use basic Linux
VLANs and iptables, while others might use more
advanced technologies, such as L2-in-L3 tunneling or
OpenFlow, to provide similar benefits.</para>
<para>The following plugins are currently included in the OpenStack Networking distribution: <itemizedlist>
<listitem>
<para><emphasis role="bold">Big Switch Plugin (Floodlight REST Proxy)</emphasis>.
<link
xlink:href="http://www.openflowhub.org/display/floodlightcontroller/Quantum+REST+Proxy+Plugin"
>http://www.openflowhub.org/display/floodlightcontroller/Quantum+REST+Proxy+Plugin</link>
</para>
</listitem>
<listitem>
<para><emphasis role="bold">Brocade Plugin</emphasis>.
<link
xlink:href="https://github.com/brocade/brocade"
>https://github.com/brocade/brocade</link>
</para>
</listitem>
<listitem>
<para><emphasis role="bold">Cisco</emphasis>.
<link
xlink:href="http://wiki.openstack.org/cisco-neutron"
>http://wiki.openstack.org/cisco-neutron</link>
</para>
</listitem>
<listitem>
<para><emphasis role="bold">Cloudbase Hyper-V Plugin</emphasis>.
<link xlink:href="http://www.cloudbase.it/quantum-hyper-v-plugin/"
>http://www.cloudbase.it/quantum-hyper-v-plugin/</link>
</para>
</listitem>
<listitem>
<para><emphasis role="bold">Linux Bridge Plugin</emphasis>.
Documentation included in this guide and at
<link
xlink:href="http://wiki.openstack.org/Quantum-Linux-Bridge-Plugin"
>http://wiki.openstack.org/Quantum-Linux-Bridge-Plugin</link>
 </para>
</listitem>
<listitem>
<para><emphasis role="bold">Mellanox Plugin</emphasis>. <link
xlink:href="https://wiki.openstack.org/wiki/Mellanox-Neutron/">
https://wiki.openstack.org/wiki/Mellanox-Neutron/</link>
</para>
</listitem>
<listitem>
<para><emphasis role="bold">Midonet Plugin</emphasis>.
<link
xlink:href="http://www.midokura.com/">
http://www.midokura.com/</link>
</para>
</listitem>
<listitem>
<para><emphasis role="bold">NEC OpenFlow Plugin</emphasis>.
<link
xlink:href="http://wiki.openstack.org/Quantum-NEC-OpenFlow-Plugin"
>http://wiki.openstack.org/Quantum-NEC-OpenFlow-Plugin</link>
</para>
</listitem>
<listitem>
<para><emphasis role="bold">Nicira NVP Plugin</emphasis>.
Documentation included in this guide,
<link
xlink:href="http://www.vmware.com/products/datacenter-virtualization/nicira.html">
NVP Product Overview </link>, and
<link
xlink:href="http://www.nicira.com/support"
>NVP Product Support</link>.
</para>
</listitem>
<listitem>
<para><emphasis role="bold">Open vSwitch Plugin</emphasis>.
Documentation included in this guide.
</para>
</listitem>
<listitem>
<para><emphasis role="bold">PLUMgrid</emphasis>.
<link
xlink:href="https://wiki.openstack.org/wiki/PLUMgrid-Neutron"
>https://wiki.openstack.org/wiki/PLUMgrid-Neutron</link>
</para>
</listitem>
<listitem>
<para><emphasis role="bold">Ryu Plugin</emphasis>.
<link
xlink:href="https://github.com/osrg/ryu/wiki/OpenStack"
>https://github.com/osrg/ryu/wiki/OpenStack</link>
</para>
</listitem>
</itemizedlist>
</para>
<para>Plugins can have different properties for hardware requirements, features, performance,
scale, or operator tools. Because OpenStack Networking supports a large number of plugins,
the cloud administrator is able to weigh different options and decide which networking
technology is right for the deployment.
</para>
<?hard-pagebreak?>
<para>Not all OpenStack Networking plugins are compatible with all possible OpenStack Compute drivers:</para>
<table rules="all">
<caption>Plugin Compatibility with OpenStack Compute Drivers</caption>
<col/>
<col/>
<col/>
<col/>
<col/>
<col/>
<col/>
<thead>
<tr>
<th></th>
<th>Libvirt (KVM/QEMU)</th>
<th>XenServer</th>
<th>VMware</th>
<th>Hyper-V</th>
<th>Bare-metal</th>
<th>PowerVM</th>
</tr>
</thead>
<tbody>
<tr>
<td>Bigswitch / Floodlight</td>
<td>Yes</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>Brocade</td>
<td>Yes</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>Cisco</td>
<td>Yes</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>Cloudbase Hyper-V</td>
<td></td>
<td></td>
<td></td>
<td>Yes</td>
<td></td>
<td></td>
</tr>
<tr>
<td>Linux Bridge</td>
<td>Yes</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>Mellanox</td>
<td>Yes</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>Midonet</td>
<td>Yes</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>NEC OpenFlow</td>
<td>Yes</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>Nicira NVP</td>
<td>Yes</td>
<td>Yes</td>
<td>Yes</td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>Open vSwitch</td>
<td>Yes</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>Plumgrid</td>
<td>Yes</td>
<td></td>
<td>Yes</td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>Ryu</td>
<td>Yes</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
</tbody>
</table>
</section>
</section>
<section xml:id="section_networking-arch">
<title>Networking architecture</title>
<para>This section describes the high-level components of a Networking deployment. Before
you deploy Networking, it is useful to understand the different components that make up
the solution, and how these components interact with each other and with other OpenStack
services.</para>
<section xml:id="arch_overview">
<title>Overview</title>
<para>OpenStack Networking is a standalone service, just
like other OpenStack services such as OpenStack
Compute, OpenStack Image service, OpenStack Identity
service, or the OpenStack Dashboard. Like those
services, a deployment of OpenStack Networking often
involves deploying several processes on a variety of
hosts.</para>
<para>The main process of the OpenStack Networking server is
<literal>neutron-server</literal>, which is a
Python daemon that exposes the OpenStack Networking API and passes
user requests to the configured OpenStack Networking plugin for
additional processing. Typically, the plugin requires
access to a database for persistent storage (also similar
to other OpenStack services).</para>
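<para>As an illustration, the plugin choice and its database are set in the
service configuration. The following is a minimal sketch, assuming the
Open vSwitch plugin and a MySQL database; the file paths, plugin class
name, and credentials are typical Havana-era examples and can differ in
your deployment:</para>
<programlisting># /etc/neutron/neutron.conf
[DEFAULT]
core_plugin = neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2

# /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini
[database]
connection = mysql://neutron:NEUTRON_DBPASS@controller/neutron</programlisting>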
<para>If your deployment uses a controller host to run centralized
OpenStack Compute components, you can deploy the OpenStack Networking server on
that same host. However, OpenStack Networking is entirely
standalone and can be deployed on its own host as
well. OpenStack Networking also includes additional agents that
might be required, depending on your deployment: <itemizedlist>
<listitem>
<para><emphasis role="bold">plugin agent</emphasis>
(<literal>neutron-*-agent</literal>).
Runs on each hypervisor to perform local
vswitch configuration. Which agent runs
depends on the plugin you use; some
plugins do not require an agent.
</para>
</listitem>
<listitem>
<para><emphasis role="bold">dhcp agent</emphasis>
(<literal>neutron-dhcp-agent</literal>).
Provides DHCP services to tenant networks.
This agent is the same for all plugins.
</para>
</listitem>
<listitem>
<para><emphasis role="bold">l3
agent</emphasis>
(<literal>neutron-l3-agent</literal>).
Provides L3/NAT forwarding to give
external network access to VMs on tenant
networks. This agent is the same for all plugins.
</para>
</listitem>
</itemizedlist>
</para>
<para>These agents interact with the main <literal>neutron-server</literal> process through RPC (for example,
RabbitMQ or Qpid) or through the standard OpenStack Networking API. Further:
<itemizedlist>
<listitem>
<para>Networking relies on the OpenStack Identity service (keystone) for the
authentication and authorization of all API requests.</para>
</listitem>
<listitem>
<para>Compute (nova) interacts with OpenStack Networking through calls to
its standard API.  As part of creating a VM, the <systemitem
class="service">nova-compute</systemitem> service communicates with
the OpenStack Networking API to plug each virtual NIC on the VM into a
particular network.   </para>
</listitem>
<listitem><para>The Dashboard (Horizon) integrates with the OpenStack Networking API, allowing administrators
and tenant users to create and manage network services through the
Dashboard GUI.</para></listitem>
</itemizedlist>
  </para>
</section>
<section xml:id="services">
<title>Place services on physical hosts</title>
<para>Like other OpenStack services, Networking provides cloud administrators with
significant flexibility in deciding which individual services should run on which
physical devices. At one extreme, all service daemons can be run on a single
physical host for evaluation purposes. At the other, each service could have its own
physical hosts and, in some cases, be replicated across multiple hosts for
redundancy. For more information, see <citetitle
xmlns:svg="http://www.w3.org/2000/svg" xmlns:html="http://www.w3.org/1999/xhtml"
>OpenStack Configuration Reference</citetitle>.</para>
<para>In this guide, we focus primarily on a standard
architecture that includes a "cloud controller" host,
a "network gateway" host, and a set of hypervisors for
running VMs. The "cloud controller" and "network gateway" can be combined
in simple deployments. However, if you expect VMs to send significant amounts of
traffic to or from the Internet, a dedicated network gateway host is recommended
to avoid potential CPU contention between packet forwarding performed by
the <literal>neutron-l3-agent</literal> and other OpenStack services.</para>
</section>
<section xml:id="connectivity">
<title>Network connectivity for physical hosts</title>
<mediaobject>
<imageobject>
<imagedata scale="60" fileref="../common/figures/Neutron-PhysNet-Diagram.png"/>
</imageobject>
</mediaobject>
<para>A standard OpenStack Networking setup has up to four distinct physical data center networks:</para>
<itemizedlist>
<listitem>
<para><emphasis role="bold">Management
network</emphasis>. Used for internal
communication between OpenStack Components.  
IP addresses on this network should be
reachable only within the data center. 
</para>
</listitem>
<listitem>
<para><emphasis role="bold">Data
network</emphasis>. Used for VM data
communication within the cloud deployment. 
The IP addressing requirements of this network
depend on the OpenStack Networking plugin being used.  </para>
</listitem>
<listitem>
<para><emphasis role="bold">External
network</emphasis>. Used to provide VMs
with Internet access in some deployment
scenarios.  IP addresses on this network
should be reachable by anyone on the
Internet.  </para>
</listitem>
<listitem>
<para><emphasis role="bold">API
network</emphasis>. Exposes all OpenStack
APIs, including the OpenStack Networking API, to
tenants. IP addresses on this network
should be reachable by anyone on the
Internet. The API network may be the same as
the external network, because it is possible to
create an external-network subnet whose allocated
IP ranges use only part of the full
range of IP addresses in an IP block.</para>
</listitem>
</itemizedlist>
</section>
</section>
<section xml:id="section_networking-use">
<title>Use Networking</title>
<para>You can use OpenStack Networking in the following ways:
<itemizedlist>
<listitem><para>Expose the OpenStack Networking API to cloud tenants,
which enables them to build rich network topologies.
</para>
</listitem>
<listitem><para>Have the cloud administrator, or an automated
administrative tool, create network connectivity on behalf of tenants.
</para>
</listitem>
</itemizedlist>
</para>
<para>
Tenants and cloud administrators can both perform the following procedures.
</para>
<section xml:id="api_features">
<title>Core Networking API features</title>
<para>After you install and run OpenStack Networking, tenants
and administrators can perform create-read-update-delete (CRUD) API
networking operations by using either the
<command>neutron</command> CLI tool or the API.
Like other OpenStack CLI tools, the <command>neutron</command>
tool is just a basic wrapper around the OpenStack Networking API. Any
operation that can be performed using the CLI has an equivalent API call
that can be performed programmatically.
</para>
<para>The CLI includes a number of options. For details, refer to the
<citetitle>OpenStack End User Guide</citetitle>.
</para>
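<para>For example, listing networks through the CLI and calling the REST
API directly return the same information. The following sketch assumes a
valid token in the <literal>TOKEN</literal> environment variable, a host
named <literal>controller</literal>, and the default OpenStack Networking
API port 9696:</para>
<screen><prompt>$</prompt> <userinput>neutron net-list</userinput>
<prompt>$</prompt> <userinput>curl -s -H "X-Auth-Token: $TOKEN" http://controller:9696/v2.0/networks</userinput></screen>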
<section xml:id="api_abstractions">
<title>API abstractions</title>
<para>The OpenStack Networking v2.0 API provides control over both
L2 network topologies and the IP addresses used on those networks
(IP Address Management or IPAM). There is also an extension to
cover basic L3 forwarding and NAT, which provides capabilities
similar to <command>nova-network</command>.
</para>
<para>In the OpenStack Networking API:
<itemizedlist>
<listitem><para>A 'Network' is an isolated L2 network segment
(similar to a VLAN), which forms the basis for
describing the L2 network topology available in an OpenStack
Networking deployment.
</para></listitem>
<listitem><para>A 'Subnet' associates a block of IP addresses
and other network configuration (for example, default gateways
or dns-servers) with an OpenStack Networking network. Each
subnet represents an IPv4 or IPv6 address block and, if needed,
each OpenStack Networking network can have multiple subnets.
</para></listitem>
<listitem><para>A 'Port' represents an attachment port to a L2
OpenStack Networking network. When a port
is created on the network, by default it is allocated an
available fixed IP address out of one of the designated subnets
for each IP version (if one exists). When the port is destroyed,
its allocated addresses return to the pool of available IPs on
the subnet. Users of the OpenStack Networking API can either
choose a specific IP address from the block, or let OpenStack
Networking choose the first available IP address (see the
example after this list).
</para></listitem>
</itemizedlist>
</para>
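<para>For example, to request a specific fixed IP address for a port
instead of letting OpenStack Networking pick one (the subnet ID and
address are placeholders):</para>
<screen><prompt>$</prompt> <userinput>neutron port-create --fixed-ip subnet_id=<replaceable>subnet-id</replaceable>,ip_address=10.0.0.5 net1</userinput></screen>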
<para>The following table summarizes the attributes available for each
of these networking abstractions. For more information about
API abstractions and operations, see the
<link xlink:href="http://docs.openstack.org/api/openstack-network/2.0/content/">Networking API v2.0 Reference</link>.
</para>
<table rules="all">
<caption>Network attributes</caption>
<col width="20%"/>
<col width="15%"/>
<col width="17%"/>
<col width="47%"/>
<thead>
<tr>
<th>Attribute</th>
<th>Type</th>
<th>Default value</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td><systemitem>admin_state_up</systemitem></td>
<td>bool</td>
<td>True</td>
<td>Administrative state of the network. If specified as
False (down), this network does not forward
packets.
</td>
</tr>
<tr>
<td><systemitem>id</systemitem></td>
<td>uuid-str</td>
<td>Generated</td>
<td>UUID for this network.</td>
</tr>
<tr>
<td><systemitem>name</systemitem></td>
<td>string</td>
<td>None</td>
<td>Human-readable name for this network; is not required
to be unique.
</td>
</tr>
<tr>
<td><systemitem>shared</systemitem></td>
<td>bool</td>
<td>False</td>
<td>Specifies whether this network resource can
be accessed by any tenant. The default policy setting restricts
usage of this attribute to administrative users only.
</td>
</tr>
<tr>
<td><systemitem>status</systemitem></td>
<td>string</td>
<td>N/A</td>
<td>Indicates whether this network is
currently operational.</td>
</tr>
<tr>
<td><systemitem>subnets</systemitem></td>
<td>list(uuid-str)</td>
<td>Empty list</td>
<td>List of subnets associated with this network.
</td>
</tr>
<tr>
<td><systemitem>tenant_id</systemitem></td>
<td>uuid-str</td>
<td>N/A</td>
<td>Tenant owner of the network. Only administrative users
can set the tenant identifier; this cannot be changed
using authorization policies.
</td>
</tr>
</tbody>
</table>
<table rules="all">
<caption>Subnet Attributes</caption>
<col width="20%"/>
<col width="15%"/>
<col width="17%"/>
<col width="47%"/>
<thead>
<tr>
<th>Attribute</th>
<th>Type</th>
<th>Default Value</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td><systemitem>allocation_pools</systemitem></td>
<td>list(dict)</td>
<td>Every address in <systemitem>cidr</systemitem>,
excluding <systemitem>gateway_ip</systemitem> (if
configured).
</td>
<td><para>List of cidr sub-ranges that are available for dynamic
allocation to ports. Syntax:
<programlisting>[ { "start":"10.0.0.2",
"end": "10.0.0.254"} ]</programlisting></para>
</td>
</tr>
<tr>
<td><systemitem>cidr</systemitem></td>
<td>string</td>
<td>N/A</td>
<td>IP range for this subnet, based on the IP version.</td>
</tr>
<tr>
<td><systemitem>dns_nameservers</systemitem></td>
<td>list(string)</td>
<td>Empty list</td>
<td>List of DNS name servers used by hosts in this subnet.</td>
</tr>
<tr>
<td><systemitem>enable_dhcp</systemitem></td>
<td>bool</td>
<td>True</td>
<td>Specifies whether DHCP is enabled for this subnet.</td>
</tr>
<tr>
<td><systemitem>gateway_ip</systemitem></td>
<td>string</td>
<td>First address in <systemitem>cidr</systemitem>
</td>
<td>Default gateway used by devices in this subnet.</td>
</tr>
<tr>
<td><systemitem>host_routes</systemitem></td>
<td>list(dict)</td>
<td>Empty list</td>
<td>Routes that should be used by devices with
IPs from this subnet (not including local
subnet route).</td>
</tr>
<tr>
<td><systemitem>id</systemitem></td>
<td>uuid-string</td>
<td>Generated</td>
<td>UUID representing this subnet.</td>
</tr>
<tr>
<td><systemitem>ip_version</systemitem></td>
<td>int</td>
<td>4</td>
<td>IP version.</td>
</tr>
<tr>
<td><systemitem>name</systemitem></td>
<td>string</td>
<td>None</td>
<td>Human-readable name for this subnet (might
not be unique).
</td>
</tr>
<tr>
<td><systemitem>network_id</systemitem></td>
<td>uuid-string</td>
<td>N/A</td>
<td>Network with which this subnet is associated.</td>
</tr>
<tr>
<td><systemitem>tenant_id</systemitem></td>
<td>uuid-string</td>
<td>N/A</td>
<td>Owner of network. Only administrative users
can set the tenant identifier; this cannot be changed
using authorization policies.
</td>
</tr>
</tbody>
</table>
<table rules="all">
<caption>Port attributes</caption>
<col width="20%"/>
<col width="15%"/>
<col width="17%"/>
<col width="47%"/>
<thead>
<tr>
<th>Attribute</th>
<th>Type</th>
<th>Default Value</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td><systemitem>admin_state_up</systemitem></td>
<td>bool</td>
<td>True</td>
<td>Administrative state of this port. If specified as False
(down), this port does not forward packets.
</td>
</tr>
<tr>
<td><systemitem>device_id</systemitem></td>
<td>string</td>
<td>None</td>
<td>Identifies the device using this port (for example, a
virtual server's ID).
</td>
</tr>
<tr>
<td><systemitem>device_owner</systemitem></td>
<td>string</td>
<td>None</td>
<td>Identifies the entity using this port (for example, a
dhcp agent).</td>
</tr>
<tr>
<td><systemitem>fixed_ips</systemitem></td>
<td>list(dict)</td>
<td>Automatically allocated from pool</td>
<td>Specifies IP addresses for this port; associates
the port with the subnets containing the listed IP
addresses.
</td>
</tr>
<tr>
<td><systemitem>id</systemitem></td>
<td>uuid-string</td>
<td>Generated</td>
<td>UUID for this port.</td>
</tr>
<tr>
<td><systemitem>mac_address</systemitem></td>
<td>string</td>
<td>Generated</td>
<td>MAC address to use on this port.</td>
</tr>
<tr>
<td><systemitem>name</systemitem></td>
<td>string</td>
<td>None</td>
<td>Human-readable name for this port (might
not be unique).
</td>
</tr>
<tr>
<td><systemitem>network_id</systemitem></td>
<td>uuid-string</td>
<td>N/A</td>
<td>Network with which this port is associated.
</td>
</tr>
<tr>
<td><systemitem>status</systemitem></td>
<td>string</td>
<td>N/A</td>
<td>Indicates whether the network is currently
operational.
</td>
</tr>
<tr>
<td><systemitem>tenant_id</systemitem></td>
<td>uuid-string</td>
<td>N/A</td>
<td>Owner of the network. Only administrative users
can set the tenant identifier; this cannot be changed
using authorization policies.
</td>
</tr>
</tbody>
</table>
</section>
<section xml:id="basic_operations">
<title>Basic operations</title>
<para>Before going further, it is highly recommended that you first
read the few pages in the <link xlink:href="http://docs.openstack.org/user-guide/content/index.html">
OpenStack End User Guide</link> that are specific to OpenStack
Networking. OpenStack Networking's CLI has some advanced
capabilities that are described only in that guide.
</para>
<para>The following table provides just a few examples of the
<systemitem>neutron</systemitem> tool usage.
</para>
<table rules="all">
<caption>Basic OpenStack Networking operations</caption>
<col width="50%"/>
<col width="50%"/>
<thead>
<tr>
<th>Action</th>
<th>Command</th>
</tr>
</thead>
<tbody>
<tr>
<td>Create a network.</td>
<td><screen><prompt>$</prompt> <userinput>neutron net-create net1</userinput></screen></td>
</tr>
<tr>
<td>Create a subnet associated with net1.</td>
<td><screen><prompt>$</prompt> <userinput>neutron subnet-create net1 10.0.0.0/24</userinput></screen></td>
</tr>
<tr>
<td>List ports on a tenant.</td>
<td><screen><prompt>$</prompt> <userinput>neutron port-list</userinput></screen></td>
</tr>
<tr>
<td>List ports on a tenant, and display the <systemitem>id</systemitem>, <systemitem>fixed_ips</systemitem>, and
<systemitem>device_owner</systemitem> columns.</td>
<td><screen><prompt>$</prompt> <userinput>neutron port-list -c id -c fixed_ips -c device_owner</userinput></screen>
</td>
</tr>
<tr>
<td>Display details of a particular port.</td>
<td><screen><prompt>$</prompt> <userinput>neutron port-show <replaceable>port-id</replaceable></userinput></screen></td>
</tr>
</tbody>
</table>
<note>
<para>
The <systemitem>device_owner</systemitem> field describes who owns the
port. A port whose <systemitem>device_owner</systemitem> begins with:
<itemizedlist>
<listitem><para>"network:" is created by OpenStack
Networking.</para></listitem>
<listitem><para>"compute:" is created by OpenStack Compute.
</para></listitem>
</itemizedlist>
</para>
</note>
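<para>Based on this convention, you can, for example, list only the ports
that OpenStack Networking created for DHCP (the filter value is
illustrative):</para>
<screen><prompt>$</prompt> <userinput>neutron port-list -- --device_owner=network:dhcp</userinput></screen>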
</section>
<section xml:id="admin_api_config">
<title>Administrative operations</title>
<para>The cloud administrator can perform any <systemitem>neutron</systemitem>
call on behalf of tenants by specifying an OpenStack Identity <systemitem>tenant_id</systemitem> in the request, as follows:
</para>
<screen><prompt>$</prompt> <userinput>neutron net-create --tenant-id=<replaceable>tenant-id</replaceable> <replaceable>network-name</replaceable></userinput></screen>
<para>
For example:
</para>
<screen><prompt>$</prompt> <userinput>neutron net-create --tenant-id=5e4bbe24b67a4410bc4d9fae29ec394e net1</userinput></screen>
<note><para>To view all tenant IDs in OpenStack Identity, run the
following command as an OpenStack Identity (keystone) admin user:
</para>
<screen><prompt>$</prompt> <userinput>keystone tenant-list</userinput></screen>
</note>
</section>
<section xml:id="advanced_networking">
<title>Advanced operations</title>
<para>The following table provides a few advanced examples of using the
<systemitem>neutron</systemitem> tool to create and display
networks, subnets, and ports.</para>
<table rules="all">
<caption>Advanced OpenStack Networking operations</caption>
<col width="25%"/>
<col width="75%"/>
<thead>
<tr>
<th>Action</th>
<th>Command</th>
</tr>
</thead>
<tbody>
<tr>
<td>Create a "shared" network (that is, a network that can be used by all tenants).</td>
<td><screen><prompt>$</prompt> <userinput>neutron net-create --shared public-net</userinput></screen></td>
</tr>
<tr>
<td>Create a subnet that has a specific gateway IP address.</td>
<td><screen><prompt>$</prompt> <userinput>neutron subnet-create --gateway 10.0.0.254 net1 10.0.0.0/24</userinput></screen></td>
</tr>
<tr>
<td>Create a subnet that has no gateway IP address.</td>
<td><screen><prompt>$</prompt> <userinput>neutron subnet-create --no-gateway net1 10.0.0.0/24</userinput></screen></td>
</tr>
<tr>
<td>Create a subnet in which DHCP is disabled.</td>
<td><screen><prompt>$</prompt> <userinput>neutron subnet-create net1 10.0.0.0/24 --enable_dhcp False</userinput></screen></td>
</tr>
<tr>
<td>Create a subnet with a specific set of host routes.</td>
<td><screen><prompt>$</prompt> <userinput>neutron subnet-create test-net1 40.0.0.0/24 --host_routes type=dict list=true destination=40.0.1.0/24,nexthop=40.0.0.2</userinput></screen></td>
</tr>
<tr>
<td>Create a subnet with a specific set of DNS name servers.</td>
<td><screen><prompt>$</prompt> <userinput>neutron subnet-create test-net1 40.0.0.0/24 --dns_nameservers list=true 8.8.8.7 8.8.8.8</userinput></screen></td>
</tr>
<tr>
<td>Display all ports/IPs allocated on a network.</td>
<td><screen><prompt>$</prompt> <userinput>neutron port-list --network_id <replaceable>net-id</replaceable></userinput></screen></td>
</tr>
</tbody>
</table>
</section>
</section>
<section xml:id="using_nova_with_neutron">
<title>Use Compute with Networking</title>
<section xml:id="basic_workflow_with_nova">
<title>Basic operations</title>
<table rules="all">
<caption>Basic Compute/Networking operations</caption>
<col width="40%"/>
<col width="60%"/>
<thead>
<tr>
<th>Action</th>
<th>Command</th>
</tr>
</thead>
<tbody>
<tr>
<td>Check available networks.</td>
<td><screen><prompt>$</prompt> <userinput>neutron net-list</userinput></screen></td>
</tr>
<tr>
<td>Boot a VM with a single NIC on a selected OpenStack Networking network.</td>
<td><screen><prompt>$</prompt> <userinput>nova boot --image <replaceable>img</replaceable> --flavor <replaceable>flavor</replaceable> --nic net-id=<replaceable>net-id</replaceable> <replaceable>vm-name</replaceable></userinput></screen>
</td>
</tr>
<tr>
<td>Search for all ports with a <systemitem>device_id</systemitem> corresponding to the OpenStack Compute instance UUID.</td>
<td><screen><prompt>$</prompt> <userinput>neutron port-list --device_id=<replaceable>vm-id</replaceable></userinput></screen></td>
</tr>
<tr>
<td>Search for ports, but limit display to only the port's <systemitem>mac_address</systemitem>.</td>
<td><screen><prompt>$</prompt> <userinput>neutron port-list -c mac_address --device_id=<replaceable>vm-id</replaceable></userinput></screen></td>
</tr>
<tr>
<td>Temporarily disable a port from sending traffic.</td>
<td><screen><prompt>$</prompt> <userinput>neutron port-update <replaceable>port-id</replaceable> --admin_state_up=False</userinput></screen></td>
</tr>
<tr>
<td>Delete a VM.</td>
<td><screen><prompt>$</prompt> <userinput>nova delete <replaceable>vm-id</replaceable></userinput></screen></td>
</tr>
</tbody>
</table>
<note><para>When you:
<itemizedlist>
<listitem><para>Boot a Compute VM, a port on the network that
corresponds to the VM NIC is automatically created. You might
also need to configure <link linkend="enabling_ping_and_ssh">security group rules</link> to allow access to the VM.</para></listitem>
<listitem><para>Delete a Compute VM, the underlying OpenStack
Networking port is automatically deleted as well.</para></listitem>
</itemizedlist>
</para></note>
</section>
<section xml:id="advanceed_vm_creation">
<title>Advanced VM creation</title>
<table rules="all">
<caption>VM creation operations</caption>
<col width="40%"/>
<col width="60%"/>
<thead>
<tr>
<th>Action</th>
<th>Command</th>
</tr>
</thead>
<tbody>
<tr>
<td>Boot a VM with multiple NICs.</td>
<td><screen><prompt>$</prompt> <userinput>nova boot --image <replaceable>img</replaceable> --flavor <replaceable>flavor</replaceable> --nic net-id=<replaceable>net1-id</replaceable> --nic net-id=<replaceable>net2-id</replaceable> <replaceable>vm-name</replaceable></userinput></screen></td>
</tr>
<tr>
<td>Boot a VM with a specific IP address: first create an OpenStack
Networking port with a specific IP address, then boot
a VM specifying a <systemitem>port-id</systemitem> rather than a
<systemitem>net-id</systemitem>.</td>
<td><screen><prompt>$</prompt> <userinput>neutron port-create --fixed-ip subnet_id=<replaceable>subnet-id</replaceable>,ip_address=<replaceable>IP</replaceable> <replaceable>net-id</replaceable></userinput>
<prompt>$</prompt> <userinput>nova boot --image <replaceable>img</replaceable> --flavor <replaceable>flavor</replaceable> --nic port-id=<replaceable>port-id</replaceable> <replaceable>vm-name</replaceable></userinput></screen>
</td>
</tr>
<tr>
<td>Boot a VM that connects to all networks that are accessible to
the tenant who submits the request (without the
<systemitem>--nic</systemitem> option).
</td>
<td><screen><prompt>$</prompt> <userinput>nova boot --image <replaceable>img</replaceable> --flavor <replaceable>flavor</replaceable> <replaceable>vm-name</replaceable></userinput></screen>
</td>
</tr>
</tbody>
</table>
<note><para>OpenStack Networking does not currently support the <command>v4-fixed-ip</command> parameter of the <command>--nic</command> option for the <command>nova</command> command.
</para></note>
</section>
<section xml:id="enabling_ping_and_ssh">
<title>Security groups (enabling ping and SSH on VMs)</title>
<para>You must configure security group rules depending on the type of
plugin you are using. If you are using a plugin that:
</para>
<itemizedlist>
<listitem><para>Implements Networking security groups, you can configure security group rules directly by
using <command>neutron security-group-rule-create</command>. The
following example allows <command>ping</command> and
<command>ssh</command> access to your VMs.</para>
<screen><prompt>$</prompt> <userinput>neutron security-group-rule-create --protocol icmp --direction ingress default</userinput>
<prompt>$</prompt> <userinput>neutron security-group-rule-create --protocol tcp --port-range-min 22 --port-range-max 22 --direction ingress default</userinput></screen>
</listitem>
<listitem>
<para>Does not implement Networking security groups, you can configure security group rules
by using the <command>nova secgroup-add-rule</command> or
<command>euca-authorize</command> command. The following
<systemitem>nova</systemitem> commands allow <command>ping</command>
and <command>ssh</command> access to your VMs.</para>
<screen><prompt>$</prompt> <userinput>nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0</userinput>
<prompt>$</prompt> <userinput>nova secgroup-add-rule default tcp 22 22 0.0.0.0/0</userinput></screen>
</listitem>
</itemizedlist>
<note>
<para>If your plugin implements OpenStack Networking security groups,
you can also leverage Compute security groups by setting
<systemitem>security_group_api = neutron</systemitem> in
<filename>nova.conf</filename>. After setting this option, all Compute
security group commands are proxied to OpenStack Networking.
</para>
</note>
</section>
</section>
</section>
<section xml:id="section_networking-adv-features">
<title>Advanced features through API extensions</title>
<para>This section discusses two API extensions implemented by
several plugins.  We include them in this guide as they
provide capabilities similar to what was available in
nova-network and are thus likely to be relevant to a large
portion of the OpenStack community.  </para>
<section xml:id="provider_networks">
<title>Provider networks</title>
<para>Provider networks allow cloud administrators to create
OpenStack Networking networks that map directly to
physical networks in the data center.  This is commonly
used to give tenants direct access to a "public" network
that can be used to reach the Internet.  It may also be
used to integrate with VLANs in the network that already
have a defined meaning (e.g., allow a VM from the
"marketing" department to be placed on the same VLAN as
bare-metal marketing hosts in the same data
center).</para>
<para>The provider extension allows administrators to
explicitly manage the relationship between OpenStack
Networking virtual networks and underlying physical
mechanisms such as VLANs and tunnels. When this extension
is supported, OpenStack Networking client users with
administrative privileges see additional provider
attributes on all virtual networks, and are able to
specify these attributes in order to create provider
networks.</para>
<para>The provider extension is supported by the openvswitch
and linuxbridge plugins. Configuration of these plugins
requires familiarity with this extension.</para>
<section xml:id="provider_terminology">
<title>Terminology</title>
<para>A number of terms are used in the provider extension
and in the configuration of plugins supporting the
provider extension:<itemizedlist>
<listitem>
<para><emphasis role="bold">virtual
network</emphasis> - An OpenStack
Networking L2 network (identified by a
UUID and optional name) whose ports can be
attached as vNICs to OpenStack Compute
instances and to various OpenStack
Networking agents. The openvswitch and
linuxbridge plugins each support several
different mechanisms to realize virtual
networks.</para>
</listitem>
<listitem>
<para><emphasis role="bold">physical
network</emphasis> - A network
connecting virtualization hosts (i.e.
OpenStack Compute nodes) with each other
and with other network resources. Each
physical network may support multiple
virtual networks. The provider extension
and the plugin configurations identify
physical networks using simple string
names.</para>
</listitem>
<listitem>
<para><emphasis role="bold">tenant
network</emphasis> - A "normal"
virtual network created by/for a tenant.
The tenant is not aware of how that
network is physically realized.</para>
</listitem>
<listitem>
<para><emphasis role="bold">provider
network</emphasis> - A virtual network
administratively created to map to a
specific network in the data center,
typically to enable direct access to
non-OpenStack resources on that network.
Tenants can be given access to provider
networks.</para>
</listitem>
<listitem>
<para><emphasis role="bold">VLAN
network</emphasis> - A virtual network
realized as packets on a specific physical
network containing IEEE 802.1Q headers
with a specific VID field value. VLAN
networks sharing the same physical network
are isolated from each other at L2, and
can even have overlapping IP address
spaces. Each distinct physical network
supporting VLAN networks is treated as a
separate VLAN trunk, with a distinct space
of VID values. Valid VID values are 1
through 4094.</para>
</listitem>
<listitem>
<para><emphasis role="bold">flat
network</emphasis> - A virtual network
realized as packets on a specific physical
network containing no IEEE 802.1Q header.
Each physical network can realize at most
one flat network.</para>
</listitem>
<listitem>
<para><emphasis role="bold">local
network</emphasis> - A virtual network
that allows communication within each
host, but not across a network. Local
networks are intended mainly for
single-node test scenarios, but may have
other uses.</para>
</listitem>
<listitem>
<para><emphasis role="bold">GRE
network</emphasis> - A virtual network
realized as network packets encapsulated
using GRE. GRE networks are also referred
to as "tunnels". GRE tunnel packets are
routed by the host's IP routing table, so
GRE networks are not associated by
OpenStack Networking with specific
physical networks.</para>
</listitem>
</itemizedlist></para>
<para>Both the openvswitch and linuxbridge plugins support
VLAN networks, flat networks, and local networks. Only
the openvswitch plugin currently supports GRE
networks, provided that the host's Linux kernel
supports the required Open vSwitch features.</para>
</section>
<section xml:id="provider_attributes">
<title>Provider attributes</title>
<para>The provider extension extends the OpenStack
Networking network resource with the following three
additional attributes:</para>
<table rules="all">
<caption>Provider Network Attributes</caption>
<col width="25%"/>
<col width="10%"/>
<col width="25%"/>
<col width="40%"/>
<thead>
<tr>
<th>Attribute name</th>
<th>Type</th>
<th>Default Value</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>provider:network_type</td>
<td>String</td>
<td>N/A</td>
<td>The physical mechanism by which the
virtual network is realized. Possible
values are "flat", "vlan", "local", and
"gre", corresponding to flat networks,
VLAN networks, local networks, and GRE
networks as defined above. All types of
provider networks can be created by
administrators, while tenant networks can
be realized as "vlan", "gre", or "local"
network types depending on plugin
configuration.</td>
</tr>
<tr>
<td>provider:physical_network</td>
<td>String</td>
<td>If a physical network named "default" has
been configured, and if
provider:network_type is "flat" or "vlan",
then "default" is used.</td>
<td>The name of the physical network over
which the virtual network is realized for
flat and VLAN networks. Not applicable to
the "local" or "gre" network types.</td>
</tr>
<tr>
<td>provider:segmentation_id</td>
<td>Integer</td>
<td>N/A</td>
<td>For VLAN networks, the VLAN VID on the
physical network that realizes the virtual
network. Valid VLAN VIDs are 1 through
4094. For GRE networks, the tunnel ID.
Valid tunnel IDs are any 32 bit unsigned
integer. Not applicable to the "flat" or
"local" network types.</td>
</tr>
</tbody>
</table>
<para>The provider attributes are returned by OpenStack
Networking API operations when the client is
authorized for the
<code>extension:provider_network:view</code>
action via the OpenStack Networking policy
configuration. The provider attributes are only
accepted for network API operations if the client is
authorized for the
<code>extension:provider_network:set</code>
action. The default OpenStack Networking API policy
configuration authorizes both actions for users with
the admin role. See <xref linkend="section_auth"/> for
details on policy configuration.</para>
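<para>A sketch of the corresponding entries in the
<filename>/etc/neutron/policy.json</filename> file, shown with the
default admin-only rules (check the exact rule syntax against your
release):</para>
<programlisting>"extension:provider_network:view": "rule:admin_only",
"extension:provider_network:set": "rule:admin_only"</programlisting>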
</section>
<section xml:id="provider_api_workflow">
<title>Provider API workflow</title>
<para>Show all attributes of a network, including provider
attributes when invoked with the admin role:</para>
<para>
<screen><prompt>$</prompt> <userinput>neutron net-show &lt;name or net-id&gt;</userinput></screen>
</para>
<para>Create a local provider network (admin-only):</para>
<para>
<screen><prompt>$</prompt> <userinput>neutron net-create &lt;name&gt; --tenant_id &lt;tenant-id&gt; --provider:network_type local</userinput></screen>
</para>
<para>Create a flat provider network (admin-only):</para>
<para>
<screen><prompt>$</prompt> <userinput>neutron net-create &lt;name&gt; --tenant_id &lt;tenant-id&gt; --provider:network_type flat --provider:physical_network &lt;phys-net-name&gt;</userinput></screen>
</para>
<para>Create a VLAN provider network (admin-only):</para>
<para>
<screen><prompt>$</prompt> <userinput>neutron net-create &lt;name&gt; --tenant_id &lt;tenant-id&gt; --provider:network_type vlan --provider:physical_network &lt;phys-net-name&gt; --provider:segmentation_id &lt;VID&gt;</userinput></screen>
</para>
<para>Create a GRE provider network (admin-only):</para>
<para>
<screen><prompt>$</prompt> <userinput>neutron net-create &lt;name&gt; --tenant_id &lt;tenant-id&gt; --provider:network_type gre --provider:segmentation_id &lt;tunnel-id&gt;</userinput></screen>
</para>
<para>When creating flat networks or VLAN networks, &lt;phys-net-name&gt; must be known
to the plugin. See the <citetitle xmlns:svg="http://www.w3.org/2000/svg"
xmlns:html="http://www.w3.org/1999/xhtml">OpenStack Configuration
Reference</citetitle> for details on configuring network_vlan_ranges to
identify all physical networks. When creating VLAN networks, &lt;VID&gt; can
fall either within or outside any configured ranges of VLAN IDs from which
tenant networks are allocated. Similarly, when creating GRE networks,
&lt;tunnel-id&gt; can fall either within or outside any tunnel ID ranges from
which tenant networks are allocated.</para>
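<para>As an illustration, a network_vlan_ranges entry in the Open vSwitch
plugin configuration might look like the following, which defines a
physical network named <literal>physnet1</literal> with VIDs 1000 through
2999 available for allocation to tenant networks (the section name, file
location, and values are examples only):</para>
<programlisting>[ovs]
network_vlan_ranges = physnet1:1000:2999</programlisting>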
<para>Once provider networks have been created, subnets
can be allocated and they can be used similarly to
other virtual networks, subject to authorization
policy based on the specified
&lt;tenant_id&gt;.</para>
</section>
</section>
<section xml:id="l3_router_and_nat">
<title>L3 Routing and NAT</title>
<para>Just as the core OpenStack Networking API provides abstract L2 network segments that
are decoupled from the technology used to implement the L2 network, OpenStack
Networking includes an API extension that provides abstract L3 routers that API
users can dynamically provision and configure. These OpenStack Networking routers
can connect multiple L2 OpenStack Networking networks, and can also provide a
"gateway" that connects one or more private L2 networks to a shared "external"
network (e.g., a public network for access to the Internet). See the <citetitle
xmlns:svg="http://www.w3.org/2000/svg" xmlns:html="http://www.w3.org/1999/xhtml"
>OpenStack Configuration Reference</citetitle> for details on common models of
deploying Networking L3 routers.</para>
<para>The L3 router provides basic NAT capabilities on
"gateway" ports that uplink the router to external
networks. This router SNATs all traffic by default and
supports "floating IPs", which create a static one-to-one
mapping from a public IP on the external network to a
private IP on one of the other subnets attached to the
router. This allows a tenant to selectively expose VMs on
private networks to other hosts on the external network
(and often to all hosts on the Internet). Floating IPs can
be allocated and then mapped from one OpenStack Networking
port to another, as needed.</para>
<section xml:id="l3_api_abstractions">
<title>L3 API abstractions</title>
<table rules="all">
<caption>Router</caption>
<col width="20%"/>
<col width="20%"/>
<col width="20%"/>
<col width="40%"/>
<thead>
<tr>
<th>Attribute name</th>
<th>Type</th>
<th>Default Value</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>id</td>
<td>uuid-str</td>
<td>generated</td>
<td>UUID for the router.</td>
</tr>
<tr>
<td>name</td>
<td>String</td>
<td>None</td>
<td>Human-readable name for the router. Might
not be unique.</td>
</tr>
<tr>
<td>admin_state_up</td>
<td>Bool</td>
<td>True</td>
<td>The administrative state of the router. If
false (down), the router does not forward
packets.</td>
</tr>
<tr>
<td>status</td>
<td>String</td>
<td>N/A</td>
<td><para>Indicates whether the router is
currently operational.</para></td>
</tr>
<tr>
<td>tenant_id</td>
<td>uuid-str</td>
<td>N/A</td>
<td>Owner of the router. Only admin users can
specify a tenant_id other than its own.
</td>
</tr>
<tr>
<td>external_gateway_info</td>
<td>dict containing a 'network_id' key-value
pair</td>
<td>Null</td>
<td>External network that this router connects
to for gateway services (e.g., NAT)</td>
</tr>
</tbody>
</table>
<table rules="all">
<caption>Floating IP</caption>
<col width="20%"/>
<col width="20%"/>
<col width="20%"/>
<col width="40%"/>
<thead>
<tr>
<th>Attribute name</th>
<th>Type</th>
<th>Default Value</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>id</td>
<td>uuid-str</td>
<td>generated</td>
<td>UUID for the floating IP.</td>
</tr>
<tr>
<td>floating_ip_address</td>
<td>string (IP address)</td>
<td>allocated by OpenStack Networking</td>
<td>The external network IP address available
to be mapped to an internal IP
address.</td>
</tr>
<tr>
<td>floating_network_id</td>
<td>uuid-str</td>
<td>N/A</td>
<td><para>The network indicating the set of
subnets from which the floating IP
should be allocated</para></td>
</tr>
<tr>
<td>router_id</td>
<td>uuid-str</td>
<td>N/A</td>
<td>Read-only value indicating the router that
connects the external network to the
associated internal port, if a port is
associated.</td>
</tr>
<tr>
<td>port_id</td>
<td>uuid-str</td>
<td>Null</td>
<td>Indicates the internal OpenStack
Networking port associated with the
external floating IP.</td>
</tr>
<tr>
<td>fixed_ip_address</td>
<td>string (IP address)</td>
<td>Null</td>
<td>Indicates the IP address on the internal
port that is mapped to by the floating IP
(since an OpenStack Networking port might
have more than one IP address).</td>
</tr>
<tr>
<td>tenant_id</td>
<td>uuid-str</td>
<td>N/A</td>
<td>Owner of the Floating IP. Only admin users
can specify a tenant_id other than its
own.</td>
</tr>
</tbody>
</table>
</section>
<section xml:id="l3_workflow">
<title>Common L3 workflow</title>
<para>Create external networks (admin-only)</para>
<screen><prompt>$</prompt> <userinput>neutron net-create public --router:external=True</userinput>
<prompt>$</prompt> <userinput>neutron subnet-create public 172.16.1.0/24</userinput></screen>
<para>Viewing external networks:</para>
<screen><prompt>$</prompt> <userinput>neutron net-list -- --router:external=True</userinput></screen>
<para>Creating routers</para>
<para>Internal-only router to connect multiple L2 networks
privately.</para>
<screen><prompt>$</prompt> <userinput>neutron net-create net1</userinput>
<prompt>$</prompt> <userinput>neutron subnet-create net1 10.0.0.0/24</userinput>
<prompt>$</prompt> <userinput>neutron net-create net2</userinput>
<prompt>$</prompt> <userinput>neutron subnet-create net2 10.0.1.0/24</userinput>
<prompt>$</prompt> <userinput>neutron router-create router1</userinput>
<prompt>$</prompt> <userinput>neutron router-interface-add router1 &lt;subnet1-uuid&gt;</userinput>
<prompt>$</prompt> <userinput>neutron router-interface-add router1 &lt;subnet2-uuid&gt;</userinput></screen>
<para>The router gets an interface with the gateway_ip
address of the subnet, and this interface is
attached to a port on the L2 OpenStack Networking
network associated with the subnet. When uplinked to an
external network (see below), the router
also gets a gateway interface on that
network, which provides SNAT connectivity
to the external network as well as support for
floating IPs allocated on that external network.
Commonly, an external network maps to a
network in the provider's data center.</para>
<para>A router can also be connected to an “external
network”, allowing that router to act as a NAT gateway
for external connectivity.</para>
<screen><prompt>$</prompt> <userinput>neutron router-gateway-set router1 &lt;ext-net-id&gt;</userinput></screen>
<para>Viewing routers:</para>
<para>List all routers:
<screen><prompt>$</prompt> <userinput>neutron router-list</userinput></screen>
</para>
<para>Show a specific router:
<screen><prompt>$</prompt> <userinput>neutron router-show &lt;router_id&gt;</userinput></screen>
</para>
<para>Show all internal interfaces for a router:
<screen><prompt>$</prompt> <userinput>neutron port-list -- --device_id=&lt;router_id&gt;</userinput></screen>
</para>
<para>Associating / Disassociating Floating IPs:</para>
<para>First, identify the port-id representing the VM NIC
that the floating IP should map to:</para>
<screen><prompt>$</prompt> <userinput>neutron port-list -c id -c fixed_ips -- --device_id=&lt;instance_id&gt;</userinput></screen>
<para>This port must be on an OpenStack Networking subnet
that is attached to a router uplinked to the external
network that will be used to create the floating IP. 
Conceptually, this is because the router must be able
to perform the Destination NAT (DNAT) rewriting of
packets from the Floating IP address (chosen from a
subnet on the external network) to the internal Fixed
IP (chosen from a private subnet that is “behind” the
router).  </para>
<para>First create a floating IP that is unassociated, and then
associate it:</para>
<screen><prompt>$</prompt> <userinput>neutron floatingip-create &lt;ext-net-id&gt;</userinput>
<prompt>$</prompt> <userinput>neutron floatingip-associate &lt;floatingip-id&gt; &lt;internal VM port-id&gt;</userinput></screen>
<para>Alternatively, create the floating IP and associate it in a
single step:</para>
<screen><prompt>$</prompt> <userinput>neutron floatingip-create --port_id &lt;internal VM port-id&gt; &lt;ext-net-id&gt;</userinput></screen>
<para>Viewing Floating IP State:</para>
<screen><prompt>$</prompt> <userinput>neutron floatingip-list</userinput></screen>
<para>Find floating IP for a particular VM port:</para>
<screen><prompt>$</prompt> <userinput>neutron floatingip-list -- --port_id=&lt;port-id&gt;</userinput></screen>
<para>Disassociate a Floating IP:</para>
<screen><prompt>$</prompt> <userinput>neutron floatingip-disassociate &lt;floatingip-id&gt;</userinput></screen>
<para>L3 Tear Down</para>
<para>Delete the Floating IP:</para>
<screen><prompt>$</prompt> <userinput>neutron floatingip-delete &lt;floatingip-id&gt;</userinput></screen>
<para>Then clear the gateway:</para>
<screen><prompt>$</prompt> <userinput>neutron router-gateway-clear router1</userinput></screen>
<para>Then remove the interfaces from the router:</para>
<screen><prompt>$</prompt> <userinput>neutron router-interface-delete router1 &lt;subnet-id&gt;</userinput></screen>
<para>Finally, delete the router:</para>
<screen><prompt>$</prompt> <userinput>neutron router-delete router1</userinput></screen>
</section>
</section>
<section xml:id="securitygroups">
<title>Security groups</title>
<para>Security groups and security group rules allow
administrators and tenants to specify the type
of traffic and direction (ingress/egress) that is allowed
to pass through a port. A security group is a container
for security group rules.</para>
<para>When a port is created in OpenStack Networking, it is
associated with a security group. If a security group is
not specified, the port is associated with a 'default'
security group. By default, this group drops all
ingress traffic and allows all egress. Rules can be added
to this group to change this behavior.</para>
<para>To use the OpenStack Compute security group APIs, or to have OpenStack
Compute orchestrate the creation of new ports for instances on specific security groups,
additional configuration is needed. To enable this, edit the
<filename>/etc/nova/nova.conf</filename> file and set the option
security_group_api=neutron on every node running <systemitem class="service">nova-compute</systemitem> and <systemitem class="service">nova-api</systemitem>. Then,
restart <systemitem class="service">nova-api</systemitem> and <systemitem class="service">nova-compute</systemitem> to pick up this change. After that,
you can use both the OpenStack Compute and OpenStack
Networking security group APIs at the same time.</para>
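<para>A minimal sketch of the relevant <filename>nova.conf</filename>
setting, placed in the usual nova.conf layout:</para>
<programlisting>[DEFAULT]
security_group_api = neutron</programlisting>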
<note>
<itemizedlist>
<listitem><para>To use the OpenStack Compute security group
API with OpenStack Networking, the OpenStack Networking
plugin must implement the security group API. The
following plugins currently implement this: Nicira
NVP, Open vSwitch, Linux Bridge, NEC, and Ryu.</para></listitem>
<listitem><para>You must configure the correct firewall driver in the
<literal>securitygroup</literal> section of the plugin/agent configuration file.
Some plugins and agents, such as Linux Bridge Agent and Open vSwitch Agent, use
the no-operation driver as the default, which results in non-working security
groups.</para></listitem>
<listitem><para>When using the security group API through OpenStack
Compute, security groups are applied to all ports on
an instance. This is because OpenStack
Compute security group APIs are instance-based rather than
port-based, as in OpenStack Networking.</para></listitem>
</itemizedlist>
</note>
<section xml:id="securitygroup_api_abstractions">
<title>Security Group API Abstractions</title>
<table rules="all">
<caption>Security Group Attributes</caption>
<col width="20%"/>
<col width="20%"/>
<col width="20%"/>
<col width="40%"/>
<thead>
<tr>
<th>Attribute name</th>
<th>Type</th>
<th>Default Value</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>id</td>
<td>uuid-str</td>
<td>generated</td>
<td>UUID for the security group.</td>
</tr>
<tr>
<td>name</td>
<td>String</td>
<td>None</td>
<td>Human-readable name for the security
group. Might not be unique. Cannot be
named 'default', because that group is
automatically created for each tenant.</td>
</tr>
<tr>
<td>description</td>
<td>String</td>
<td>None</td>
<td>Human-readable description of a security
group.</td>
</tr>
<tr>
<td>tenant_id</td>
<td>uuid-str</td>
<td>N/A</td>
<td>Owner of the security group. Only admin
users can specify a tenant_id other than
their own.</td>
</tr>
</tbody>
</table>
<table rules="all">
<caption>Security Group Rules</caption>
<col width="20%"/>
<col width="20%"/>
<col width="20%"/>
<col width="40%"/>
<thead>
<tr>
<th>Attribute name</th>
<th>Type</th>
<th>Default Value</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>id</td>
<td>uuid-str</td>
<td>generated</td>
<td>UUID for the security group rule.</td>
</tr>
<tr>
<td>security_group_id</td>
<td>uuid-str or Integer</td>
<td>allocated by OpenStack Networking</td>
<td>The security group to associate the rule
with.</td>
</tr>
<tr>
<td>direction</td>
<td>String</td>
<td>N/A</td>
<td>The direction in which the traffic is
allowed (ingress/egress) from a VM.</td>
</tr>
<tr>
<td>protocol</td>
<td>String</td>
<td>None</td>
<td>IP Protocol (icmp, tcp, udp, etc).</td>
</tr>
<tr>
<td>port_range_min</td>
<td>Integer</td>
<td>None</td>
<td>Port at start of range.</td>
</tr>
<tr>
<td>port_range_max</td>
<td>Integer</td>
<td>None</td>
<td>Port at end of range.</td>
</tr>
<tr>
<td>ethertype</td>
<td>String</td>
<td>None</td>
<td>Ethertype in L2 packet (IPv4, IPv6,
and so on).</td>
</tr>
<tr>
<td>remote_ip_prefix</td>
<td>string (IP cidr)</td>
<td>None</td>
<td>CIDR for address range.</td>
</tr>
<tr>
<td>remote_group_id</td>
<td>uuid-str or Integer</td>
<td>allocated by OpenStack Networking or
OpenStack Compute</td>
<td>Source security group to apply to
rule.</td>
</tr>
<tr>
<td>tenant_id</td>
<td>uuid-str</td>
<td>N/A</td>
<td>Owner of the security group rule. Only
admin users can specify a tenant_id other
than their own.</td>
</tr>
</tbody>
</table>
</section>
<section xml:id="securitygroup_workflow">
<title>Common security group commands</title>
<para>Create a security group for our web servers:</para>
<screen><prompt>$</prompt> <userinput>neutron security-group-create webservers --description "security group for webservers"</userinput></screen>
<para>List security groups:</para>
<screen><prompt>$</prompt> <userinput>neutron security-group-list</userinput></screen>
<para>Create a security group rule to allow port 80
ingress:</para>
<screen><prompt>$</prompt> <userinput>neutron security-group-rule-create --direction ingress --protocol tcp --port_range_min 80 --port_range_max 80 &lt;security_group_uuid&gt;</userinput></screen>
<para>List security group rules:</para>
<screen><prompt>$</prompt> <userinput>neutron security-group-rule-list</userinput></screen>
<para>Delete a security group rule:</para>
<screen><prompt>$</prompt> <userinput>neutron security-group-rule-delete &lt;security_group_rule_uuid&gt;</userinput></screen>
<para>Delete a security group:</para>
<screen><prompt>$</prompt> <userinput>neutron security-group-delete &lt;security_group_uuid&gt;</userinput></screen>
<para>Create a port with two associated security
groups:</para>
<screen><prompt>$</prompt> <userinput>neutron port-create --security-group &lt;security_group_id1&gt; --security-group &lt;security_group_id2&gt; &lt;network_id&gt;</userinput></screen>
<para>Remove security groups from a port:</para>
<screen><prompt>$</prompt> <userinput>neutron port-update --no-security-groups &lt;port_id&gt;</userinput></screen>
</section>
</section>
<section xml:id="lbaas">
<title>Load-Balancer-as-a-Service</title>
<note>
<para>The Load-Balancer-as-a-Service (LBaaS) API
provisions and configures load balancers.
The Havana release offers a reference implementation that is based on
the HAProxy software load balancer.</para>
</note>
<section xml:id="lbaas_workflow">
<title>Common Load-Balancer-as-a-Service workflow</title>
<para>Create a load balancer pool using a specific provider:</para>
<screen><prompt>$</prompt> <userinput>neutron lb-pool-create --lb-method ROUND_ROBIN --name mypool --protocol HTTP --subnet-id &lt;subnet-uuid&gt; <parameter>--provider &lt;provider_name&gt;</parameter></userinput></screen>
<para><parameter>--provider</parameter> is an optional argument. If it is omitted, the pool is created with the default provider for the LBaaS service,
which must be configured in the <literal>[service_providers]</literal> section of the <filename>neutron.conf</filename> file.
If no default provider is specified for LBaaS, the <parameter>--provider</parameter> option is required for pool creation.</para>
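<para>A sketch of such a default provider entry, assuming the HAProxy reference
implementation that the Havana release ships (verify the driver path against your
installed release):</para>
<programlisting language="ini"># In neutron.conf
[service_providers]
# Format: service_type:name:driver_path[:default]
service_provider = LOADBALANCER:Haproxy:neutron.services.loadbalancer.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default</programlisting>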
<para>Associate two web servers with the pool:</para>
<screen><prompt>$</prompt> <userinput>neutron lb-member-create --address &lt;webserver one IP&gt; --protocol-port 80 mypool</userinput>
<prompt>$</prompt> <userinput>neutron lb-member-create --address &lt;webserver two IP&gt; --protocol-port 80 mypool</userinput></screen>
<para>Create a health monitor that checks whether
our instances are still running on the specified
protocol-port:</para>
<screen><prompt>$</prompt> <userinput>neutron lb-healthmonitor-create --delay 3 --type HTTP --max-retries 3 --timeout 3</userinput></screen>
<para>Associate the health monitor with the pool:</para>
<screen><prompt>$</prompt> <userinput>neutron lb-healthmonitor-associate &lt;healthmonitor-uuid&gt; mypool</userinput></screen>
<para>Create a virtual IP address (VIP) that, when accessed
through the load balancer, directs the requests to one
of the pool members:</para>
<screen><prompt>$</prompt> <userinput>neutron lb-vip-create --name myvip --protocol-port 80 --protocol HTTP --subnet-id &lt;subnet-uuid&gt; mypool</userinput></screen>
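<para>To verify the resulting configuration, list the pool, its members, and the
VIP with the corresponding list commands:</para>
<screen><prompt>$</prompt> <userinput>neutron lb-pool-list</userinput>
<prompt>$</prompt> <userinput>neutron lb-member-list</userinput>
<prompt>$</prompt> <userinput>neutron lb-vip-list</userinput></screen>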
</section>
</section>
<section xml:id="plugin_specific_extensions">
<title>Plugin specific extensions</title>
<?dbhtml stop-chunking?>
<para>Each vendor may choose to implement additional API
extensions to the core API. This section describes the
extensions for each plugin.</para>
<section xml:id="nicira_extensions">
<title>Nicira NVP extensions</title>
<para>This section describes the extensions that the Nicira NVP plugin provides.</para>
<section xml:id="nicira_nvp_plugin_qos_extension">
<title>Nicira NVP QoS extension</title>
<para>The Nicira NVP QoS extension rate-limits network
ports to guarantee a specific amount of bandwidth
for each port. By default, this extension is only
accessible to a tenant with an admin role, but this is
configurable through the
<filename>policy.json</filename> file. To use
this extension, create a queue and specify the
min/max bandwidth rates (kbps), and optionally set
the QoS marking and DSCP value (if your network
fabric uses these values to make forwarding
decisions). Once created, you can associate a
queue with a network. Then, when ports are created
on that network, they are automatically
associated with a queue of the size that was
associated with the network. Because one queue size
for every port on a network may not be
optimal, a scaling factor from the nova flavor
'rxtx_factor' is passed in from OpenStack Compute
when the port is created, to scale the queue.</para>
<para>Lastly, if you want to set a specific baseline QoS policy for the amount of
bandwidth a single port can use (unless a network queue is specified for the
network a port is created on), you can create a default queue in neutron, which
causes ports created thereafter to be associated with a queue of that size times the
rxtx scaling factor. Note that specifying a network queue or default queue does
not add queues to previously created ports; queues are created only for ports
created thereafter.</para>
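<para>The scaling factor comes from the flavor of the instance that owns the port.
For example, a flavor with a doubled factor can be created in OpenStack Compute as
follows (the name, ID, and sizing values are illustrative):</para>
<screen><prompt>$</prompt> <userinput>nova flavor-create m1.web 100 2048 20 2 --rxtx-factor 2.0</userinput></screen>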
<section xml:id="nicira_nvp_qos_api_abstractions">
<title>Nicira NVP QoS API abstractions</title>
<table rules="all">
<caption>Nicira NVP QoS Attributes</caption>
<col width="20%"/>
<col width="20%"/>
<col width="20%"/>
<col width="40%"/>
<thead>
<tr>
<th>Attribute name</th>
<th>Type</th>
<th>Default Value</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>id</td>
<td>uuid-str</td>
<td>generated</td>
<td>UUID for the QoS queue.</td>
</tr>
<tr>
<td>default</td>
<td>Boolean</td>
<td>False by default</td>
<td>If True, ports are created with
this queue size unless the network
that the port is created on is
associated with a queue at port
creation time.</td>
</tr>
<tr>
<td>name</td>
<td>String</td>
<td>None</td>
<td>Name for QoS queue.</td>
</tr>
<tr>
<td>min</td>
<td>Integer</td>
<td>0</td>
<td>Minimum Bandwidth Rate (kbps).
</td>
</tr>
<tr>
<td>max</td>
<td>Integer</td>
<td>N/A</td>
<td>Maximum Bandwidth Rate (kbps).
</td>
</tr>
<tr>
<td>qos_marking</td>
<td>String</td>
<td>untrusted by default</td>
<td>Whether QoS marking should be
trusted or untrusted.</td>
</tr>
<tr>
<td>dscp</td>
<td>Integer</td>
<td>0</td>
<td>DSCP Marking value.</td>
</tr>
<tr>
<td>tenant_id</td>
<td>uuid-str</td>
<td>N/A</td>
<td>The owner of the QoS queue.</td>
</tr>
</tbody>
</table>
</section>
<section xml:id="nicira_nvp_qos_walk_through">
<title>Nicira NVP QoS walkthrough</title>
<para>Create a QoS queue (admin-only):</para>
<screen><prompt>$</prompt> <userinput>neutron queue-create --min 10 --max 1000 myqueue</userinput></screen>
<para>Associate a queue with a network:</para>
<screen><prompt>$</prompt> <userinput>neutron net-create network --queue_id=&lt;queue_id&gt;</userinput></screen>
<para>Create a default system queue:</para>
<screen><prompt>$</prompt> <userinput>neutron queue-create --default True --min 10 --max 2000 default</userinput></screen>
<para>List QoS queues:</para>
<screen><prompt>$</prompt> <userinput>neutron queue-list</userinput></screen>
<para>Delete a QoS queue:</para>
<screen><prompt>$</prompt> <userinput>neutron queue-delete &lt;queue_id or name&gt;</userinput></screen>
</section>
</section>
</section>
</section>
</section>
<section xml:id="section_networking-adv-operational_features">
<title>Advanced operational features</title>
<section xml:id="section_adv_logging">
<title>Logging settings</title>
<para>Networking components use the Python logging module for logging. You can provide
the logging configuration in <filename>neutron.conf</filename> or as command-line options.
Command-line options override the settings in <filename>neutron.conf</filename>.</para>
<para>There are two ways to specify the logging configuration for
OpenStack Networking components:</para>
<orderedlist>
<listitem>
<para>Provide logging settings in a logging configuration file.</para>
<para>See the <link xlink:href="http://docs.python.org/howto/logging.html">Python Logging HOWTO</link> for details on the logging configuration file format.</para>
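<para>A minimal sketch of such a file, using the standard Python
<literal>fileConfig</literal> format (the log file path is illustrative):</para>
<programlisting language="ini">[loggers]
keys = root

[handlers]
keys = file

[formatters]
keys = default

[logger_root]
level = INFO
handlers = file

[handler_file]
class = FileHandler
level = INFO
formatter = default
args = ('/var/log/neutron/server.log', 'a')

[formatter_default]
format = %(asctime)s %(levelname)s [%(name)s] %(message)s</programlisting>
<para>Assuming the server supports the common <option>--log-config</option>
option, point it at this file on startup:</para>
<screen><prompt>$</prompt> <userinput>neutron-server --log-config /etc/neutron/logging.conf</userinput></screen>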
</listitem>
<listitem>
<para>Provide logging settings in <filename>neutron.conf</filename>:</para>
<programlisting language="ini">[DEFAULT]
# Default log level is WARNING
# Show debugging output in logs (sets DEBUG log level output)
# debug = False
# Show more verbose log output (sets INFO log level output) if debug is False
# verbose = False
# log_format = %(asctime)s %(levelname)8s [%(name)s] %(message)s
# log_date_format = %Y-%m-%d %H:%M:%S
# use_syslog = False
# syslog_log_facility = LOG_USER
# if use_syslog is False, we can set log_file and log_dir.
# if use_syslog is False and we do not set log_file,
# the log will be printed to stdout.
# log_file =
# log_dir =</programlisting>
</listitem>
</orderedlist>
</section>
<section xml:id="section_adv_notification">
<title>Notifications</title>
<para>Notifications can be sent when Networking resources, such as networks, subnets, and ports, are created, updated, or deleted.</para>
<section xml:id="section_adv_notification_overview">
<title>Notification options</title>
<para>To support the DHCP agent, the rpc_notifier driver must be set. To set up the notification,
edit the notification options in <filename>neutron.conf</filename>:</para>
<programlisting language="ini"># ============ Notification System Options =====================
# Notifications can be sent when network/subnet/port are created, updated or deleted.
# There are three methods of sending notifications: logging (via the
# log_file directive), rpc (via a message queue) and
# noop (no notifications sent, the default)
# Notification_driver can be defined multiple times
# Do nothing driver
# notification_driver = neutron.openstack.common.notifier.no_op_notifier
# Logging driver
# notification_driver = neutron.openstack.common.notifier.log_notifier
# RPC driver
notification_driver = neutron.openstack.common.notifier.rpc_notifier
# default_notification_level is used to form actual topic names or to set logging level
# default_notification_level = INFO
# default_publisher_id is a part of the notification payload
# host = myhost.com
# default_publisher_id = $host
# Defined in rpc_notifier for rpc way, can be comma separated values.
# The actual topic names will be %s.%(default_notification_level)s
notification_topics = notifications</programlisting>
</section>
<section xml:id="section_adv_notification_cases">
<title>Setting cases</title>
<section xml:id="section_adv_notification_cases_log_rpc">
<title>Logging and RPC</title>
<para>These options configure the OpenStack Networking server to send notifications through
logging and RPC. The logging options are described in the <citetitle
xmlns:svg="http://www.w3.org/2000/svg"
xmlns:html="http://www.w3.org/1999/xhtml">OpenStack Configuration
Reference</citetitle>. RPC notifications go to the
'notifications.info' queue bound to a topic exchange defined by
'control_exchange' in <filename>neutron.conf</filename>.</para>
<programlisting language="ini"># ============ Notification System Options =====================
# Notifications can be sent when network/subnet/port are created, updated or deleted.
# There are three methods of sending notifications: logging (via the
# log_file directive), rpc (via a message queue) and
# noop (no notifications sent, the default)
# Notification_driver can be defined multiple times
# Do nothing driver
# notification_driver = neutron.openstack.common.notifier.no_op_notifier
# Logging driver
notification_driver = neutron.openstack.common.notifier.log_notifier
# RPC driver
notification_driver = neutron.openstack.common.notifier.rpc_notifier
# default_notification_level is used to form actual topic names or to set logging level
default_notification_level = INFO
# default_publisher_id is a part of the notification payload
# host = myhost.com
# default_publisher_id = $host
# Defined in rpc_notifier for rpc way, can be comma separated values.
# The actual topic names will be %s.%(default_notification_level)s
notification_topics = notifications
</programlisting>
</section>
<section xml:id="ch_adv_notification_cases_multi_rpc_topics">
<title>Multiple RPC topics</title>
<para>These options configure the OpenStack Networking server to send notifications to
multiple RPC topics. RPC notifications go to the 'notifications_one.info' and
'notifications_two.info' queues bound to a topic exchange defined by 'control_exchange'
in <filename>neutron.conf</filename>.</para>
<programlisting language="ini"># ============ Notification System Options =====================
# Notifications can be sent when network/subnet/port are created, updated or deleted.
# There are three methods of sending notifications: logging (via the
# log_file directive), rpc (via a message queue) and
# noop (no notifications sent, the default)
# Notification_driver can be defined multiple times
# Do nothing driver
# notification_driver = neutron.openstack.common.notifier.no_op_notifier
# Logging driver
# notification_driver = neutron.openstack.common.notifier.log_notifier
# RPC driver
notification_driver = neutron.openstack.common.notifier.rabbit_notifier
# default_notification_level is used to form actual topic names or to set logging level
default_notification_level = INFO
# default_publisher_id is a part of the notification payload
# host = myhost.com
# default_publisher_id = $host
# Defined in rpc_notifier for rpc way, can be comma separated values.
# The actual topic names will be %s.%(default_notification_level)s
notification_topics = notifications_one,notifications_two
</programlisting>
</section>
</section>
</section>
</section>
<section xml:id="section_auth">
<title>Authentication and authorization</title>
<para>OpenStack Networking uses the OpenStack Identity service
(project name keystone) as the default authentication service.
When OpenStack Identity is enabled, users submitting requests
to the OpenStack Networking service must provide an
authentication token in the X-Auth-Token request header. The
token is obtained by
authenticating with the OpenStack Identity endpoint. For more
information about authentication with OpenStack Identity,
see the OpenStack Identity documentation. When
OpenStack Identity is enabled, it is not mandatory to specify
the tenant_id for resources in create requests, because the tenant
identifier is derived from the authentication token.
Note that the default authorization settings allow only
administrative users to create resources on behalf of a
different tenant. OpenStack Networking uses information
received from OpenStack Identity to authorize user requests.
OpenStack Networking handles two kinds of authorization
policies: <itemizedlist>
<listitem>
<para><emphasis role="bold"
>Operation-based</emphasis>: policies specify
access criteria for specific operations, possibly
with fine-grained control over specific
attributes; </para>
</listitem>
<listitem>
<para><emphasis role="bold">Resource-based:</emphasis>
access to a specific resource is granted or
denied according to the permissions
configured for the resource (currently available
only for the network resource). The actual
authorization policies enforced in OpenStack
Networking might vary from deployment to
deployment.</para>
</listitem>
</itemizedlist></para>
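<para>For example, a token obtained from OpenStack Identity is passed in the
X-Auth-Token header of every API request. Assuming the default API port (9696),
a request to list networks looks similar to the following (the host name and
token are illustrative):</para>
<screen><prompt>$</prompt> <userinput>curl -H "X-Auth-Token: &lt;token&gt;" http://&lt;neutron-server&gt;:9696/v2.0/networks</userinput></screen>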
<para>The policy engine reads entries from the
<filename>policy.json</filename> file. The actual
location of this file might vary from distribution to
distribution. Entries can be updated while the system is
running, and no service restart is required; that is,
every time the policy file is updated, the policies are
automatically reloaded. Currently, the only way to update
such policies is to edit the policy file. Note that this
section uses both the terms "policy" and "rule" to
refer to objects that are specified in the same way in the
policy file; in other words, there are no syntax differences
between a rule and a policy. A policy is something
that is matched directly by the OpenStack Networking policy
engine, whereas a rule is an element of such
policies, which is then evaluated. For instance, in
<code>create_subnet: [["admin_or_network_owner"]]</code>,
<emphasis role="italic">create_subnet</emphasis> is
regarded as a policy, whereas <emphasis role="italic"
>admin_or_network_owner</emphasis> is regarded as a
rule.</para>
<para>Policies are triggered by the OpenStack Networking policy
engine whenever one of them matches an OpenStack Networking
API operation or a specific attribute being used in a given
operation. For instance, the <code>create_subnet</code> policy
is triggered every time a <code>POST /v2.0/subnets</code>
request is sent to the OpenStack Networking server; on the
other hand, <code>create_network:shared</code> is triggered
every time the <emphasis role="italic">shared</emphasis>
attribute is explicitly specified (and set to a value
different from its default) in a <code>POST
/v2.0/networks</code> request. Policies can also relate
to specific API extensions;
for instance, <code>extension:provider_network:set</code> is
triggered if the attributes defined by the Provider Network
extensions are specified in an API request.</para>
<para>An authorization policy can be composed of one or more rules. If several rules are specified,
the policy evaluates successfully if any of the rules evaluates successfully; if an API
operation matches multiple policies, all of those policies must evaluate successfully. Also,
authorization rules are recursive: once a rule is matched, it can be resolved into
another rule, until a terminal rule is reached.</para>
<para>The OpenStack Networking policy engine currently defines the
following kinds of terminal rules:</para>
<para><itemizedlist>
<listitem>
<para><emphasis role="bold">Role-based rules</emphasis>: evaluate successfully if
the user submitting the request has the specified role. For instance
<code>"role:admin"</code>is successful if the user submitting the request is
an administrator.</para>
</listitem>
<listitem>
<para><emphasis role="bold">Field-based rules: </emphasis>evaluate successfully if a
field of the resource specified in the current request matches a specific value.
For instance <code>"field:networks:shared=True"</code> is successful if the
attribute <emphasis role="italic">shared</emphasis> of the <emphasis
role="italic">network</emphasis> resource is set to true.</para>
</listitem>
<listitem>
<para><emphasis role="bold">Generic rules:</emphasis> compare an attribute in the
resource with an attribute extracted from the user's security credentials and
evaluates successfully if the comparison is successful. For instance
<code>"tenant_id:%(tenant_id)s"</code> is successful if the tenant
identifier in the resource is equal to the tenant identifier of the user
submitting the request.</para>
</listitem>
</itemizedlist> The following is an extract from the default <filename>policy.json</filename> file:</para>
<para>
<programlisting language="bash">{
[1] "admin_or_owner": [["role:admin"], ["tenant_id:%(tenant_id)s"]],
"admin_or_network_owner": [["role:admin"], ["tenant_id:%(network_tenant_id)s"]],
"admin_only": [["role:admin"]], "regular_user": [],
"shared": [["field:networks:shared=True"]],
[2] "default": [["rule:admin_or_owner"]],
"create_subnet": [["rule:admin_or_network_owner"]],
"get_subnet": [["rule:admin_or_owner"], ["rule:shared"]],
"update_subnet": [["rule:admin_or_network_owner"]],
"delete_subnet": [["rule:admin_or_network_owner"]],
"create_network": [],
[3] "get_network": [["rule:admin_or_owner"], ["rule:shared"]],
[4] "create_network:shared": [["rule:admin_only"]],
"update_network": [["rule:admin_or_owner"]],
"delete_network": [["rule:admin_or_owner"]],
"create_port": [],
[5] "create_port:mac_address": [["rule:admin_or_network_owner"]],
"create_port:fixed_ips": [["rule:admin_or_network_owner"]],
"get_port": [["rule:admin_or_owner"]],
"update_port": [["rule:admin_or_owner"]],
"delete_port": [["rule:admin_or_owner"]]
}</programlisting>
</para>
<para>[1] A rule that evaluates successfully if the current user is an administrator or the
owner of the resource specified in the request (that is, the tenant identifiers are equal).</para>
<para>[2] The default policy, which is always evaluated if an API operation does not match any
of the policies in <filename>policy.json</filename>.</para>
<para>[3] This policy evaluates successfully if either <emphasis role="italic"
>admin_or_owner</emphasis> or <emphasis role="italic">shared</emphasis> evaluates
successfully.</para>
<para>[4] This policy restricts the ability to manipulate the <emphasis role="italic"
>shared</emphasis> attribute for a network to administrators only.</para>
<para>[5] This policy restricts the ability to manipulate the <emphasis role="italic"
>mac_address</emphasis> attribute for a port to administrators and the owner of the
network to which the port is attached.</para>
<para>In some cases, operations should be restricted to administrators only. As
a further example, consider how this sample policy file could be modified in a
scenario where tenants are allowed only to define networks and see their resources, and all
other operations can be performed only in an administrative context:</para>
<para>
<programlisting language="bash">{
"admin_or_owner": [["role:admin"], ["tenant_id:%(tenant_id)s"]],
"admin_only": [["role:admin"]], "regular_user": [],
"default": [["rule:admin_only"]],
"create_subnet": [["rule:admin_only"]],
"get_subnet": [["rule:admin_or_owner"]],
"update_subnet": [["rule:admin_only"]],
"delete_subnet": [["rule:admin_only"]],
"create_network": [],
"get_network": [["rule:admin_or_owner"]],
"create_network:shared": [["rule:admin_only"]],
"update_network": [["rule:admin_or_owner"]],
"delete_network": [["rule:admin_or_owner"]],
"create_port": [["rule:admin_only"]],
"get_port": [["rule:admin_or_owner"]],
"update_port": [["rule:admin_only"]],
"delete_port": [["rule:admin_only"]]
}</programlisting>
</para>
</section>
<section xml:id="section_high_avail">
<title>High Availability</title>
<para>Several aspects of a Networking deployment benefit from high availability to
withstand individual node failures. In general, <systemitem class="service">neutron-server</systemitem> and <systemitem class="service">neutron-dhcp-agent</systemitem>
can be run in an active-active fashion. The <systemitem class="service">neutron-l3-agent</systemitem> service can be run only as
active/passive, to avoid IP conflicts with respect to gateway IP addresses.</para>
<section xml:id="ha_pacemaker">
<title>OpenStack Networking High Availability with
Pacemaker</title>
<para>You can run some OpenStack Networking services in a
cluster with Pacemaker (active/passive, or active/active for the
OpenStack Networking server only).</para>
<para>You can download the latest resource agents here:<itemizedlist>
<listitem>
<para>neutron-server: <link
xlink:href="https://github.com/madkiss/openstack-resource-agents/blob/master/ocf/quantum-server"
>https://github.com/madkiss/openstack-resource-agents</link></para>
</listitem>
<listitem>
<para>neutron-dhcp-agent: <link
xlink:href="https://github.com/madkiss/openstack-resource-agents/blob/master/ocf/quantum-agent-dhcp"
>https://github.com/madkiss/openstack-resource-agents</link></para>
</listitem>
<listitem>
<para>neutron-l3-agent: <link
xlink:href="https://github.com/madkiss/openstack-resource-agents/blob/master/ocf/quantum-agent-l3"
>https://github.com/madkiss/openstack-resource-agents</link></para>
</listitem>
</itemizedlist></para>
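<para>For example, once the resource agents are installed, a primitive for the
OpenStack Networking server might be defined with the crm shell along the
following lines (a sketch only; the agent name and parameters depend on the
installed agent version, and older versions use the quantum-* names shown in
the URLs above):</para>
<screen><prompt>$</prompt> <userinput>crm configure primitive p_neutron-server ocf:openstack:neutron-server \
  params config="/etc/neutron/neutron.conf" \
  op monitor interval="30s" timeout="30s"</userinput></screen>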
<note><para>If you need more information about "<emphasis role="italic">How to build a
cluster</emphasis>", see the <link
xlink:href="http://www.clusterlabs.org/wiki/Documentation">Pacemaker
documentation</link>.</para></note>
</section>
</section>
<section xml:id="section_pagination_and_sorting_support">
<title>Plugin pagination and sorting support</title>
<table rules="all">
<caption>Plugins that support native pagination and
sorting</caption>
<thead>
<tr>
<th>Plugin</th>
<th>Support Native Pagination</th>
<th>Support Native Sorting</th>
</tr>
</thead>
<tbody>
<tr>
<td>Open vSwitch</td>
<td>True</td>
<td>True</td>
</tr>
<tr>
<td>LinuxBridge</td>
<td>True</td>
<td>True</td>
</tr>
</tbody>
</table>
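<para>When the plugin supports them, native pagination and sorting can be exercised
from the command line with the list options of the neutron client (assuming your
client version provides the <parameter>--page-size</parameter>,
<parameter>--sort-key</parameter>, and <parameter>--sort-dir</parameter>
options):</para>
<screen><prompt>$</prompt> <userinput>neutron net-list --page-size 10 --sort-key name --sort-dir asc</userinput></screen>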
</section>
</chapter>