<?xml version="1.0" encoding="UTF-8"?>
<chapter xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
xml:id="ch_networking">
<title>Networking</title>
<para>Learn OpenStack Networking concepts, architecture, and basic
and advanced neutron and nova command-line interface (CLI)
commands so that you can administer OpenStack Networking in a
cloud.</para>
<section xml:id="neworking-intro">
<title>Introduction to Networking</title>
<para>The OpenStack Networking service, code-named neutron,
provides an API for defining network connectivity and
addressing in the cloud. The OpenStack Networking service
enables operators to leverage different networking
technologies to power their cloud networking.</para>
<para>The OpenStack Networking service also provides an API to
configure and manage a variety of network services ranging
from L3 forwarding and NAT to load balancing, edge
firewalls, and IPsec VPN.</para>
<para>For a detailed description of the OpenStack Networking
API abstractions and their attributes, see the <link
xlink:href="http://docs.openstack.org/api/openstack-network/2.0/content/"
><citetitle>OpenStack Networking API v2.0
Reference</citetitle></link>.</para>
<section xml:id="section_networking-api">
<title>Networking API</title>
<para>Networking is a virtual network service that
provides a powerful API to define the network
connectivity and IP addressing used by devices from
other services, such as OpenStack Compute.</para>
<para>The Compute API has a virtual server abstraction to
describe computing resources. Similarly, the OpenStack
Networking API has virtual network, subnet, and port
abstractions to describe networking resources. In more
detail:</para>
<itemizedlist>
<listitem>
<para><emphasis role="bold">Network</emphasis>. An
isolated L2 segment, analogous to VLAN in the
physical networking world.</para>
</listitem>
<listitem>
<para><emphasis role="bold">Subnet</emphasis>. A
block of v4 or v6 IP addresses and associated
configuration state.</para>
</listitem>
<listitem>
<para><emphasis role="bold">Port</emphasis>. A
connection point for attaching a single
device, such as the NIC of a virtual server,
to a virtual network. Also describes the
associated network configuration, such as the
MAC and IP addresses to be used on that
port.</para>
</listitem>
</itemizedlist>
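<para>As a minimal illustration of how these three
abstractions fit together, you might create a network,
add a subnet to it, and then create a port on that
network (a sketch; the name
<literal>demo-net</literal> and the CIDR shown are
placeholders, not required values):</para>
<screen><prompt>$</prompt> <userinput>neutron net-create demo-net</userinput>
<prompt>$</prompt> <userinput>neutron subnet-create demo-net 192.168.1.0/24</userinput>
<prompt>$</prompt> <userinput>neutron port-create demo-net</userinput></screen>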
<para>You can configure rich network topologies by
creating and configuring networks and subnets, and
then instructing other OpenStack services like
OpenStack Compute to attach virtual devices to ports
on these networks. In particular, OpenStack Networking
supports each tenant having multiple private networks,
and allows tenants to choose their own IP addressing
scheme (even if those IP addresses overlap with those
used by other tenants). The OpenStack Networking
service:</para>
<itemizedlist>
<listitem>
<para>Enables advanced cloud networking use cases,
such as building multi-tiered web applications
and allowing applications to be migrated to
the cloud without changing IP
addresses.</para>
</listitem>
<listitem>
<para>Offers flexibility for the cloud
administrator to customize network
offerings.</para>
</listitem>
<listitem>
<para>Provides a mechanism that lets cloud
administrators expose additional API
capabilities through API extensions. At first,
new functionality is introduced as an API
extension. Over time, the functionality
becomes part of the core OpenStack Networking
API.</para>
</listitem>
</itemizedlist>
</section>
<section xml:id="section_plugin-arch">
<title>Plug-in architecture</title>
<para>Enhancing traditional networking solutions to
provide rich cloud networking is challenging.
Traditional networking is not designed to scale to
cloud proportions nor to handle automatic
configuration.</para>
<para>The original OpenStack Compute network
implementation assumed a very basic model of
performing all isolation through Linux VLANs and IP
tables. OpenStack Networking introduces the concept of
a <emphasis role="italic">plug-in</emphasis>, which is
a back-end implementation of the OpenStack Networking
API. A plug-in can use a variety of technologies to
implement the logical API requests. Some OpenStack
Networking plug-ins might use basic Linux VLANs and IP
tables, while others might use more advanced
technologies, such as L2-in-L3 tunneling or OpenFlow,
to provide similar benefits.</para>
<para>OpenStack Networking includes the following
plug-ins:</para>
<itemizedlist>
<listitem>
<para><emphasis role="bold">Big Switch Plug-in
(Floodlight REST Proxy)</emphasis>. <link
xlink:href="http://www.openflowhub.org/display/floodlightcontroller/Neutron+REST+Proxy+Plugin"
>http://www.openflowhub.org/display/floodlightcontroller/Neutron+REST+Proxy+Plugin</link>
</para>
</listitem>
<listitem>
<para><emphasis role="bold">Brocade
Plug-in</emphasis>. <link
xlink:href="https://github.com/brocade/brocade"
>https://github.com/brocade/brocade</link>
</para>
</listitem>
<listitem>
<para><emphasis role="bold">Cisco</emphasis>.
<link
xlink:href="http://wiki.openstack.org/cisco-neutron"
>http://wiki.openstack.org/cisco-neutron</link>
</para>
</listitem>
<listitem>
<para><emphasis role="bold">Cloudbase Hyper-V
Plug-in</emphasis>. <link
xlink:href="http://www.cloudbase.it/quantum-hyper-v-plugin/"
>http://www.cloudbase.it/quantum-hyper-v-plugin/</link>
</para>
</listitem>
<listitem>
<para><emphasis role="bold">Linux Bridge
Plug-in</emphasis>. Documentation included
in this guide at <link
xlink:href="http://wiki.openstack.org/Quantum-Linux-Bridge-Plugin"
>http://wiki.openstack.org/Quantum-Linux-Bridge-Plugin</link>
 </para>
</listitem>
<listitem>
<para><emphasis role="bold">Mellanox
Plug-in</emphasis>. <link
xlink:href="https://wiki.openstack.org/wiki/Mellanox-Neutron/"
>
https://wiki.openstack.org/wiki/Mellanox-Neutron/</link>
</para>
</listitem>
<listitem>
<para><emphasis role="bold">Midonet
Plug-in</emphasis>. <link
xlink:href="http://www.midokura.com/">
http://www.midokura.com/</link>
</para>
</listitem>
<listitem>
<para><emphasis role="bold">NEC OpenFlow
Plug-in</emphasis>. <link
xlink:href="http://wiki.openstack.org/Quantum-NEC-OpenFlow-Plugin"
>http://wiki.openstack.org/Quantum-NEC-OpenFlow-Plugin</link>
</para>
</listitem>
<listitem>
<para><emphasis role="bold">Nicira NVP
Plug-in</emphasis>. Documentation is
included in this guide, <link
xlink:href="http://www.vmware.com/products/datacenter-virtualization/nicira.html"
>NVP Product Overview</link>, and <link
xlink:href="http://www.nicira.com/support"
>NVP Product Support</link>
</para>
</listitem>
<listitem>
<para><emphasis role="bold">Open vSwitch
Plug-in</emphasis>. Documentation included
in this guide.</para>
</listitem>
<listitem>
<para><emphasis role="bold">PLUMgrid</emphasis>.
<link
xlink:href="https://wiki.openstack.org/wiki/PLUMgrid-Neutron"
>https://wiki.openstack.org/wiki/PLUMgrid-Neutron</link>
</para>
</listitem>
<listitem>
<para><emphasis role="bold">Ryu
Plug-in</emphasis>. <link
xlink:href="https://github.com/osrg/ryu/wiki/OpenStack"
>https://github.com/osrg/ryu/wiki/OpenStack</link>
</para>
</listitem>
</itemizedlist>
<para>Plug-ins can have different properties for hardware
requirements, features, performance, scale, or
operator tools. Because OpenStack Networking supports
a large number of plug-ins, the cloud administrator is
able to weigh different options and decide which
networking technology is right for the
deployment.</para>
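<para>The plug-in to load is selected with the
<literal>core_plugin</literal> option in
<filename>neutron.conf</filename>. The following is a
minimal sketch that assumes the Open vSwitch plug-in;
see the <citetitle>OpenStack Configuration
Reference</citetitle> for the exact class path and
configuration files of the plug-in that you
deploy:</para>
<programlisting># neutron.conf (sketch)
[DEFAULT]
# Back-end implementation of the Networking API; example value for Open vSwitch
core_plugin = neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2</programlisting>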
<?hard-pagebreak?>
<para>Not all OpenStack networking plug-ins are compatible
with all possible OpenStack compute drivers:</para>
<table rules="all">
<caption>Plug-in Compatibility with OpenStack Compute
Drivers</caption>
<thead>
<tr>
<th/>
<th>Libvirt (KVM/QEMU)</th>
<th>XenServer</th>
<th>VMware</th>
<th>Hyper-V</th>
<th>Bare-metal</th>
<th>PowerVM</th>
</tr>
</thead>
<tbody>
<tr>
<td>Bigswitch / Floodlight</td>
<td>Yes</td>
<td/>
<td/>
<td/>
<td/>
<td/>
</tr>
<tr>
<td>Brocade</td>
<td>Yes</td>
<td/>
<td/>
<td/>
<td/>
<td/>
</tr>
<tr>
<td>Cisco</td>
<td>Yes</td>
<td/>
<td/>
<td/>
<td/>
<td/>
</tr>
<tr>
<td>Cloudbase Hyper-V</td>
<td/>
<td/>
<td/>
<td>Yes</td>
<td/>
<td/>
</tr>
<tr>
<td>Linux Bridge</td>
<td>Yes</td>
<td/>
<td/>
<td/>
<td/>
<td/>
</tr>
<tr>
<td>Mellanox</td>
<td>Yes</td>
<td/>
<td/>
<td/>
<td/>
<td/>
</tr>
<tr>
<td>Midonet</td>
<td>Yes</td>
<td/>
<td/>
<td/>
<td/>
<td/>
</tr>
<tr>
<td>NEC OpenFlow</td>
<td>Yes</td>
<td/>
<td/>
<td/>
<td/>
<td/>
</tr>
<tr>
<td>Nicira NVP</td>
<td>Yes</td>
<td>Yes</td>
<td>Yes</td>
<td/>
<td/>
<td/>
</tr>
<tr>
<td>Open vSwitch</td>
<td>Yes</td>
<td/>
<td/>
<td/>
<td/>
<td/>
</tr>
<tr>
<td>Plumgrid</td>
<td>Yes</td>
<td/>
<td>Yes</td>
<td/>
<td/>
<td/>
</tr>
<tr>
<td>Ryu</td>
<td>Yes</td>
<td/>
<td/>
<td/>
<td/>
<td/>
</tr>
</tbody>
</table>
</section>
</section>
<section xml:id="section_networking-arch">
<title>Networking architecture</title>
<para>Before you deploy Networking, it helps to understand the
Networking components and how these components interact
with each other and with other OpenStack services.</para>
<section xml:id="arch_overview">
<title>Overview</title>
<para>OpenStack Networking is a standalone service, just
like other OpenStack services such as OpenStack
Compute, OpenStack Image service, OpenStack Identity
service, or the OpenStack Dashboard. Like those
services, a deployment of OpenStack Networking often
involves deploying several processes on a variety of
hosts.</para>
<para>The main process of the OpenStack Networking server
is <literal>neutron-server</literal>, which is a
Python daemon that exposes the OpenStack Networking
API and passes user requests to the configured
OpenStack Networking plug-in for additional
processing. Typically, the plug-in requires access to
a database for persistent storage (also similar to
other OpenStack services).</para>
<para>If your deployment uses a controller host to run
centralized OpenStack Compute components, you can
deploy the OpenStack Networking server on that same
host. However, OpenStack Networking is entirely
standalone and can be deployed on its own host as
well. OpenStack Networking also includes additional
agents that might be required, depending on your
deployment:</para>
<itemizedlist>
<listitem>
<para><emphasis role="bold">plug-in
agent</emphasis>
(<literal>neutron-*-agent</literal>). Runs
on each hypervisor to perform local vswitch
configuration. Which agent runs depends on the
plug-in that you use; some plug-ins do not
require an agent at all.
</para>
</listitem>
<listitem>
<para><emphasis role="bold">dhcp agent</emphasis>
(<literal>neutron-dhcp-agent</literal>).
Provides DHCP services to tenant networks.
This agent is the same for all plug-ins.
</para>
</listitem>
<listitem>
<para><emphasis role="bold">l3 agent</emphasis>
(<literal>neutron-l3-agent</literal>).
Provides L3/NAT forwarding to provide external
network access for VMs on tenant networks.
This agent is the same for all plug-ins.
</para>
</listitem>
</itemizedlist>
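<para>One way to confirm which of these agents are
running and registered with the server, assuming that
your plug-in supports the agent management extension,
is:</para>
<screen><prompt>$</prompt> <userinput>neutron agent-list</userinput></screen>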
<para>These agents interact with the main neutron process
through RPC (for example, rabbitmq or qpid) or through
the standard OpenStack Networking API. Further: <itemizedlist>
<listitem>
<para>Networking relies on the OpenStack
Identity service (keystone) for the
authentication and authorization of all
API requests.</para>
</listitem>
<listitem>
<para>Compute (nova) interacts with OpenStack
Networking through calls to its standard
API.  As part of creating a VM, the
<systemitem class="service"
>nova-compute</systemitem> service
communicates with the OpenStack Networking
API to plug each virtual NIC on the VM
into a particular network.   </para>
</listitem>
<listitem>
<para>The Dashboard (Horizon) integrates with
the OpenStack Networking API, allowing
administrators and tenant users to create
and manage network services through the
Dashboard GUI.</para>
</listitem>
</itemizedlist></para>
</section>
<section xml:id="networking-services">
<title>Place services on physical hosts</title>
<para>Like other OpenStack services, Networking enables
cloud administrators to run one or more services on
one or more physical devices. At one extreme, the
cloud administrator can run all service daemons on a
single physical host for evaluation purposes.
Alternatively, the cloud administrator can run each
service on its own physical host and, in some cases,
can replicate services across multiple hosts for
redundancy. For more information, see the <citetitle
xmlns:svg="http://www.w3.org/2000/svg"
xmlns:html="http://www.w3.org/1999/xhtml"
>OpenStack Configuration
Reference</citetitle>.</para>
<para>A standard architecture includes a cloud controller
host, a network gateway host, and a set of hypervisors
that run virtual machines. The cloud controller and
network gateway can be on the same host. However, if
you expect VMs to send significant traffic to or from
the Internet, a dedicated network gateway host helps
avoid CPU contention between the <systemitem
role="agent">neutron-l3-agent</systemitem> and
other OpenStack services that forward packets.</para>
</section>
<section xml:id="network-connectivity">
<title>Network connectivity for physical hosts</title>
<mediaobject>
<imageobject>
<imagedata scale="60"
fileref="../common/figures/Neutron-PhysNet-Diagram.png"
/>
</imageobject>
</mediaobject>
<para>A standard OpenStack Networking set up has one or
more of the following distinct physical data center
networks:</para>
<itemizedlist>
<listitem>
<para><emphasis role="bold">Management
network</emphasis>. Provides internal
communication between OpenStack Components. IP
addresses on this network should be reachable
only within the data center.  </para>
</listitem>
<listitem>
<para><emphasis role="bold">Data
network</emphasis>. Provides VM data
communication within the cloud deployment. 
The IP addressing requirements of this network
depend on the OpenStack Networking plug-in
being used. </para>
</listitem>
<listitem>
<para><emphasis role="bold">External
network</emphasis>. Provides VMs with
Internet access in some deployment scenarios. 
IP addresses on this network should be
reachable by anyone on the Internet. </para>
</listitem>
<listitem>
<para><emphasis role="bold">API
network</emphasis>. Exposes all OpenStack
APIs, including the OpenStack Networking API,
to tenants. IP addresses on this network
should be reachable by anyone on the
Internet. The API network may be the same as
the external network, because it is possible
to create an external-network subnet whose
allocated IP range uses only part of the IP
block.</para>
</listitem>
</itemizedlist>
</section>
</section>
<section xml:id="section_networking-use">
<title>Use Networking</title>
<para>You can use OpenStack Networking in the following ways: <itemizedlist>
<listitem>
<para>Expose the OpenStack Networking API to cloud
tenants, which enables them to build rich
network topologies.</para>
</listitem>
<listitem>
<para>Have the cloud administrator, or an
automated administrative tool, create network
connectivity on behalf of tenants.</para>
</listitem>
</itemizedlist></para>
<para>Both tenants and cloud administrators can perform the
following procedures.</para>
<section xml:id="api_features">
<title>Core Networking API features</title>
<para>After you install and run OpenStack Networking,
tenants and administrators can perform
create-read-update-delete (CRUD) API networking
operations by using either the
<command>neutron</command> CLI tool or the API.
Like other OpenStack CLI tools, the
<command>neutron</command> tool is just a basic
wrapper around the OpenStack Networking API. Any
operation that can be performed using the CLI has an
equivalent API call that can be performed
programmatically.</para>
<para>The CLI includes a number of options. For details,
refer to the <citetitle>OpenStack End User
Guide</citetitle>.</para>
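<para>For example, you can list the options that a
particular command accepts, or turn on verbose client
debugging to see the underlying API requests that a
command issues (the exact debug output varies between
client versions):</para>
<screen><prompt>$</prompt> <userinput>neutron help net-create</userinput>
<prompt>$</prompt> <userinput>neutron --debug net-list</userinput></screen>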
<section xml:id="api_abstractions">
<title>API abstractions</title>
<para>The OpenStack Networking v2.0 API provides
control over both L2 network topologies and the IP
addresses used on those networks (IP Address
Management or IPAM). There is also an extension to
cover basic L3 forwarding and NAT, which provides
capabilities similar to
<command>nova-network</command>.</para>
<para>In the OpenStack Networking API: <itemizedlist>
<listitem>
<para>A 'Network' is an isolated L2
network segment (similar to a VLAN),
which forms the basis for describing
the L2 network topology available in
an OpenStack Networking deployment.
</para>
</listitem>
<listitem>
<para>A 'Subnet' associates a block of IP
addresses and other network
configuration (for example, default
gateways or dns-servers) with an
OpenStack Networking network. Each
subnet represents an IPv4 or IPv6
address block and, if needed, each
OpenStack Networking network can have
multiple subnets.</para>
</listitem>
<listitem>
<para>A 'Port' represents an attachment
port to a L2 OpenStack Networking
network. When a port is created on the
network, by default it is allocated an
available fixed IP address out of one
of the designated subnets for each IP
version (if one exists). When the port
is destroyed, its allocated addresses
return to the pool of available IPs on
the subnet. Users of the OpenStack
Networking API can either choose a
specific IP address from the block, or
let OpenStack Networking choose the
first available IP address.</para>
</listitem>
</itemizedlist></para>
<para>The following table summarizes the attributes
available for each of the previous networking
abstractions. For more information about API
abstractions and operations, see the <link
xlink:href="http://docs.openstack.org/api/openstack-network/2.0/content/"
>Networking API v2.0 Reference</link>.</para>
<table rules="all">
<caption>Network attributes</caption>
<col width="20%"/>
<col width="15%"/>
<col width="17%"/>
<col width="47%"/>
<thead>
<tr>
<th>Attribute</th>
<th>Type</th>
<th>Default value</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td><systemitem>admin_state_up</systemitem></td>
<td>bool</td>
<td>True</td>
<td>Administrative state of the network.
If specified as False (down), this
network does not forward packets.
</td>
</tr>
<tr>
<td><systemitem>id</systemitem></td>
<td>uuid-str</td>
<td>Generated</td>
<td>UUID for this network.</td>
</tr>
<tr>
<td><systemitem>name</systemitem></td>
<td>string</td>
<td>None</td>
<td>Human-readable name for this network;
it is not required to be unique.</td>
</tr>
<tr>
<td><systemitem>shared</systemitem></td>
<td>bool</td>
<td>False</td>
<td>Specifies whether this network
resource can be accessed by any
tenant. The default policy setting
restricts usage of this attribute to
administrative users only.</td>
</tr>
<tr>
<td><systemitem>status</systemitem></td>
<td>string</td>
<td>N/A</td>
<td>Indicates whether this network is
currently operational.</td>
</tr>
<tr>
<td><systemitem>subnets</systemitem></td>
<td>list(uuid-str)</td>
<td>Empty list</td>
<td>List of subnets associated with this
network.</td>
</tr>
<tr>
<td><systemitem>tenant_id</systemitem></td>
<td>uuid-str</td>
<td>N/A</td>
<td>Tenant owner of the network. Only
administrative users can set the
tenant identifier; this cannot be
changed using authorization policies.
</td>
</tr>
</tbody>
</table>
<table rules="all">
<caption>Subnet Attributes</caption>
<col width="20%"/>
<col width="15%"/>
<col width="17%"/>
<col width="47%"/>
<thead>
<tr>
<th>Attribute</th>
<th>Type</th>
<th>Default Value</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td><systemitem>allocation_pools</systemitem></td>
<td>list(dict)</td>
<td>Every address in
<systemitem>cidr</systemitem>,
excluding
<systemitem>gateway_ip</systemitem>
(if configured).</td>
<td><para>List of cidr sub-ranges that are
available for dynamic allocation to
ports. Syntax:</para>
<programlisting>[ { "start":"10.0.0.2",
"end": "10.0.0.254"} ]</programlisting>
</td>
</tr>
<tr>
<td><systemitem>cidr</systemitem></td>
<td>string</td>
<td>N/A</td>
<td>IP range for this subnet, based on the
IP version.</td>
</tr>
<tr>
<td><systemitem>dns_nameservers</systemitem></td>
<td>list(string)</td>
<td>Empty list</td>
<td>List of DNS name servers used by hosts
in this subnet.</td>
</tr>
<tr>
<td><systemitem>enable_dhcp</systemitem></td>
<td>bool</td>
<td>True</td>
<td>Specifies whether DHCP is enabled for
this subnet.</td>
</tr>
<tr>
<td><systemitem>gateway_ip</systemitem></td>
<td>string</td>
<td>First address in
<systemitem>cidr</systemitem>
</td>
<td>Default gateway used by devices in
this subnet.</td>
</tr>
<tr>
<td><systemitem>host_routes</systemitem></td>
<td>list(dict)</td>
<td>Empty list</td>
<td>Routes that should be used by devices
with IPs from this subnet (not
including local subnet route).</td>
</tr>
<tr>
<td><systemitem>id</systemitem></td>
<td>uuid-string</td>
<td>Generated</td>
<td>UUID representing this subnet.</td>
</tr>
<tr>
<td><systemitem>ip_version</systemitem></td>
<td>int</td>
<td>4</td>
<td>IP version.</td>
</tr>
<tr>
<td><systemitem>name</systemitem></td>
<td>string</td>
<td>None</td>
<td>Human-readable name for this subnet
(might not be unique).</td>
</tr>
<tr>
<td><systemitem>network_id</systemitem></td>
<td>uuid-string</td>
<td>N/A</td>
<td>Network with which this subnet is
associated.</td>
</tr>
<tr>
<td><systemitem>tenant_id</systemitem></td>
<td>uuid-string</td>
<td>N/A</td>
<td>Owner of network. Only administrative
users can set the tenant identifier;
this cannot be changed using
authorization policies.</td>
</tr>
</tbody>
</table>
<table rules="all">
<caption>Port attributes</caption>
<col width="20%"/>
<col width="15%"/>
<col width="17%"/>
<col width="47%"/>
<thead>
<tr>
<th>Attribute</th>
<th>Type</th>
<th>Default Value</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td><systemitem>admin_state_up</systemitem></td>
<td>bool</td>
<td>True</td>
<td>Administrative state of this port. If
specified as False (down), this port
does not forward packets.</td>
</tr>
<tr>
<td><systemitem>device_id</systemitem></td>
<td>string</td>
<td>None</td>
<td>Identifies the device using this port
(for example, a virtual server's ID).
</td>
</tr>
<tr>
<td><systemitem>device_owner</systemitem></td>
<td>string</td>
<td>None</td>
<td>Identifies the entity using this port
(for example, a dhcp agent).</td>
</tr>
<tr>
<td><systemitem>fixed_ips</systemitem></td>
<td>list(dict)</td>
<td>Automatically allocated from pool</td>
<td>Specifies IP addresses for this port;
associates the port with the subnets
containing the listed IP addresses.
</td>
</tr>
<tr>
<td><systemitem>id</systemitem></td>
<td>uuid-string</td>
<td>Generated</td>
<td>UUID for this port.</td>
</tr>
<tr>
<td><systemitem>mac_address</systemitem></td>
<td>string</td>
<td>Generated</td>
<td>MAC address to use on this port.</td>
</tr>
<tr>
<td><systemitem>name</systemitem></td>
<td>string</td>
<td>None</td>
<td>Human-readable name for this port
(might not be unique).</td>
</tr>
<tr>
<td><systemitem>network_id</systemitem></td>
<td>uuid-string</td>
<td>N/A</td>
<td>Network with which this port is
associated.</td>
</tr>
<tr>
<td><systemitem>status</systemitem></td>
<td>string</td>
<td>N/A</td>
<td>Indicates whether the network is
currently operational.</td>
</tr>
<tr>
<td><systemitem>tenant_id</systemitem></td>
<td>uuid-string</td>
<td>N/A</td>
<td>Owner of the network. Only
administrative users can set the
tenant identifier; this cannot be
changed using authorization policies.
</td>
</tr>
</tbody>
</table>
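<para>Many of the attributes listed above can be changed
after creation. For example (a sketch; the names are
placeholders), you can rename a network or
administratively disable a port:</para>
<screen><prompt>$</prompt> <userinput>neutron net-update net1 --name net1-renamed</userinput>
<prompt>$</prompt> <userinput>neutron port-update <replaceable>port-id</replaceable> --admin_state_up=False</userinput></screen>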
</section>
<section xml:id="basic_operations">
<title>Basic Networking operations</title>
<para>To learn about advanced capabilities that are
available through the neutron command-line
interface (CLI), read the networking section in
the <link
xlink:href="http://docs.openstack.org/user-guide/content/index.html"
> OpenStack End User Guide</link>.</para>
<para>The following table shows example neutron
commands that enable you to complete basic
Networking operations:</para>
<table rules="all">
<caption>Basic Networking operations</caption>
<col width="40%"/>
<col width="60%"/>
<thead>
<tr>
<th>Operation</th>
<th>Command</th>
</tr>
</thead>
<tbody>
<tr>
<td>Creates a network.</td>
<td><screen><prompt>$</prompt> <userinput>neutron net-create net1</userinput></screen></td>
</tr>
<tr>
<td>Creates a subnet that is associated
with net1.</td>
<td><screen><prompt>$</prompt> <userinput>neutron subnet-create net1 10.0.0.0/24</userinput></screen></td>
</tr>
<tr>
<td>Lists ports for a specified
tenant.</td>
<td><screen><prompt>$</prompt> <userinput>neutron port-list</userinput></screen></td>
</tr>
<tr>
<td>Lists ports for a specified tenant and
displays the
<systemitem>id</systemitem>,
<systemitem>fixed_ips</systemitem>,
and
<systemitem>device_owner</systemitem>
columns.</td>
<td><screen><prompt>$</prompt> <userinput>neutron port-list -c id -c fixed_ips -c device_owner</userinput></screen>
</td>
</tr>
<tr>
<td>Shows information for a specified
port.</td>
<td><screen><prompt>$</prompt> <userinput>neutron port-show <replaceable>port-id</replaceable></userinput></screen></td>
</tr>
</tbody>
</table>
<note>
<para>The <systemitem>device_owner</systemitem>
field describes who owns the port. A port
whose <systemitem>device_owner</systemitem>
begins with: <itemizedlist>
<listitem>
<para><literal>network</literal> is
created by OpenStack
Networking.</para>
</listitem>
<listitem>
<para><literal>compute</literal> is
created by OpenStack Compute.
</para>
</listitem>
</itemizedlist>
</para>
</note>
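<para>For example, to list only the ports that OpenStack
Networking created for the DHCP service, you can filter
on the <systemitem>device_owner</systemitem> field (a
sketch; the owner values present depend on your
deployment):</para>
<screen><prompt>$</prompt> <userinput>neutron port-list -- --device_owner=network:dhcp</userinput></screen>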
</section>
<section xml:id="admin_api_config">
<title>Administrative operations</title>
<para>The cloud administrator can perform any
<systemitem>neutron</systemitem> call on
behalf of tenants by specifying an OpenStack
Identity <systemitem>tenant_id</systemitem> in the
request, as follows:</para>
<screen><prompt>$</prompt> <userinput>neutron net-create --tenant-id=<replaceable>tenant-id</replaceable> <replaceable>network-name</replaceable></userinput></screen>
<para>For example:</para>
<screen><prompt>$</prompt> <userinput>neutron net-create --tenant-id=5e4bbe24b67a4410bc4d9fae29ec394e net1</userinput></screen>
<note>
<para>To view all tenant IDs in OpenStack
Identity, run the following command as an
OpenStack Identity (keystone) admin
user:</para>
<screen><prompt>$</prompt> <userinput>keystone tenant-list</userinput></screen>
</note>
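<para>The same <literal>--tenant-id</literal> argument can
be passed to other create operations. For example (a
sketch), an administrator can create a subnet on the
tenant's network on that tenant's behalf:</para>
<screen><prompt>$</prompt> <userinput>neutron subnet-create --tenant-id=<replaceable>tenant-id</replaceable> net1 10.0.0.0/24</userinput></screen>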
</section>
<section xml:id="advanced_networking">
<title>Advanced Networking operations</title>
<para>The following table shows example neutron
commands that enable you to complete advanced
Networking operations:</para>
<table rules="all">
<caption>Advanced Networking operations</caption>
<col width="40%"/>
<col width="60%"/>
<thead>
<tr>
<th>Operation</th>
<th>Command</th>
</tr>
</thead>
<tbody>
<tr>
<td>Creates a network that all tenants can
use.</td>
<td><screen><prompt>$</prompt> <userinput>neutron net-create --shared public-net</userinput></screen></td>
</tr>
<tr>
<td>Creates a subnet with a specified
gateway IP address.</td>
<td><screen><prompt>$</prompt> <userinput>neutron subnet-create --gateway 10.0.0.254 net1 10.0.0.0/24</userinput></screen></td>
</tr>
<tr>
<td>Creates a subnet that has no gateway
IP address.</td>
<td><screen><prompt>$</prompt> <userinput>neutron subnet-create --no-gateway net1 10.0.0.0/24</userinput></screen></td>
</tr>
<tr>
<td>Creates a subnet with DHCP
disabled.</td>
<td><screen><prompt>$</prompt> <userinput>neutron subnet-create net1 10.0.0.0/24 --enable_dhcp False</userinput></screen></td>
</tr>
<tr>
<td>Creates a subnet with a specified set
of host routes.</td>
<td><screen><prompt>$</prompt> <userinput>neutron subnet-create test-net1 40.0.0.0/24 --host_routes type=dict list=true destination=40.0.1.0/24,nexthop=40.0.0.2</userinput></screen></td>
</tr>
<tr>
<td>Creates a subnet with a specified set
of dns name servers.</td>
<td><screen><prompt>$</prompt> <userinput>neutron subnet-create test-net1 40.0.0.0/24 --dns_nameservers list=true 8.8.8.7 8.8.8.8</userinput></screen></td>
</tr>
<tr>
<td>Displays all ports and IPs allocated
on a network.</td>
<td><screen><prompt>$</prompt> <userinput>neutron port-list --network_id <replaceable>net-id</replaceable></userinput></screen></td>
</tr>
</tbody>
</table>
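<para>These options can be combined. The following sketch
creates a shared network with a subnet that uses a
specific gateway and custom DNS name servers (the names
and addresses are placeholders):</para>
<screen><prompt>$</prompt> <userinput>neutron net-create --shared shared-net</userinput>
<prompt>$</prompt> <userinput>neutron subnet-create shared-net 10.10.0.0/24 --gateway 10.10.0.254 --dns_nameservers list=true 8.8.8.7 8.8.8.8</userinput></screen>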
</section>
</section>
<section xml:id="using_nova_with_neutron">
<title>Use Compute with Networking</title>
<section xml:id="basic_workflow_with_nova">
<title>Basic Compute and Networking operations</title>
<para>The following table shows example neutron and
nova commands that enable you to complete basic
Compute and Networking operations:</para>
<table rules="all">
<caption>Basic Compute/Networking
operations</caption>
<col width="40%"/>
<col width="60%"/>
<thead>
<tr>
<th>Action</th>
<th>Command</th>
</tr>
</thead>
<tbody>
<tr>
<td>Checks available networks.</td>
<td><screen><prompt>$</prompt> <userinput>neutron net-list</userinput></screen></td>
</tr>
<tr>
<td>Boots a VM with a single NIC on a
selected OpenStack Networking
network.</td>
<td><screen><prompt>$</prompt> <userinput>nova boot --image <replaceable>img</replaceable> --flavor <replaceable>flavor</replaceable> --nic net-id=<replaceable>net-id</replaceable> <replaceable>vm-name</replaceable></userinput></screen>
</td>
</tr>
<tr>
<td><para>Searches for ports with a
<systemitem>device_id</systemitem>
that matches the OpenStack Compute
instance UUID.</para><note>
<para>The
<systemitem>device_id</systemitem>
can also be a logical router
ID.</para>
</note></td>
<td><screen><prompt>$</prompt> <userinput>neutron port-list --device_id=<replaceable>vm-id</replaceable></userinput></screen></td>
</tr>
<tr>
<td>Searches for ports, but shows only the
<systemitem>mac_address</systemitem>
for the port.</td>
<td><screen><prompt>$</prompt> <userinput>neutron port-list --field mac_address --device_id=<replaceable>vm-id</replaceable></userinput></screen></td>
</tr>
<tr>
<td>Temporarily disables a port from
sending traffic.</td>
<td><screen><prompt>$</prompt> <userinput>neutron port-update <replaceable>port-id</replaceable> --admin_state_up=False</userinput></screen></td>
</tr>
</tbody>
</table>
<note>
<itemizedlist>
<listitem>
<para>When you boot a Compute VM, a port
on the network is automatically
created that corresponds to the VM NIC
and is automatically associated with
the default security group. You can
configure <link
linkend="enabling_ping_and_ssh"
>security group rules</link> to
enable users to access the VM.</para>
</listitem>
<listitem>
<para>When you delete a Compute VM, the
underlying OpenStack Networking port
is automatically deleted.</para>
</listitem>
</itemizedlist>
</note>
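<para>Putting these commands together, a typical check
after booting an instance is to confirm that its port
was created automatically (a sketch; the image, flavor,
and network identifiers are placeholders):</para>
<screen><prompt>$</prompt> <userinput>nova boot --image <replaceable>img</replaceable> --flavor <replaceable>flavor</replaceable> --nic net-id=<replaceable>net-id</replaceable> test-vm</userinput>
<prompt>$</prompt> <userinput>nova show test-vm</userinput>
<prompt>$</prompt> <userinput>neutron port-list --device_id=<replaceable>vm-id</replaceable></userinput></screen>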
</section>
<section xml:id="advanced_vm_creation">
<title>Advanced VM creation operations</title>
<para>The following table shows example nova and
neutron commands that enable you to complete
advanced VM creation operations:</para>
<table rules="all">
<caption>Advanced VM creation operations</caption>
<col width="40%"/>
<col width="60%"/>
<thead>
<tr>
<th>Operation</th>
<th>Command</th>
</tr>
</thead>
<tbody>
<tr>
<td>Boots a VM with multiple NICs.</td>
<td><screen><prompt>$</prompt> <userinput>nova boot --image <replaceable>img</replaceable> --flavor <replaceable>flavor</replaceable> --nic net-id=<replaceable>net1-id</replaceable> --nic net-id=<replaceable>net2-id</replaceable> <replaceable>vm-name</replaceable></userinput></screen></td>
</tr>
<tr>
<td>Boots a VM with a specific IP address.
First, create an OpenStack Networking
port with a specific IP address. Then,
boot a VM specifying a
<systemitem>port-id</systemitem>
rather than a
<systemitem>net-id</systemitem>.</td>
<td><screen><prompt>$</prompt> <userinput>neutron port-create --fixed-ip subnet_id=<replaceable>subnet-id</replaceable>,ip_address=<replaceable>IP</replaceable> <replaceable>net-id</replaceable></userinput>
<prompt>$</prompt> <userinput>nova boot --image <replaceable>img</replaceable> --flavor <replaceable>flavor</replaceable> --nic port-id=<replaceable>port-id</replaceable> <replaceable>vm-name</replaceable></userinput></screen>
</td>
</tr>
<tr>
<td>Boots a VM that connects to all
networks that are accessible to the
tenant who submits the request
(without the
<systemitem>--nic</systemitem>
option).</td>
<td><screen><prompt>$</prompt> <userinput>nova boot --image <replaceable>img</replaceable> --flavor <replaceable>flavor</replaceable> <replaceable>vm-name</replaceable></userinput></screen>
</td>
</tr>
</tbody>
</table>
<note>
<para>OpenStack Networking does not currently
support the <command>v4-fixed-ip</command>
parameter of the <command>--nic</command>
option for the <command>nova</command>
command.</para>
</note>
</section>
<section xml:id="enabling_ping_and_ssh">
<title>Security groups (enabling ping and SSH on
VMs)</title>
<para>You must configure security group rules
depending on the type of plug-in you are using. If
you are using a plug-in that:</para>
<itemizedlist>
<listitem>
<para>Implements Networking security groups,
you can configure security group rules
directly by using <command>neutron
security-group-rule-create</command>.
The following example allows
<command>ping</command> and
<command>ssh</command> access to your
VMs.</para>
<screen><prompt>$</prompt> <userinput>neutron security-group-rule-create --protocol icmp --direction ingress default</userinput>
<prompt>$</prompt> <userinput>neutron security-group-rule-create --protocol tcp --port-range-min 22 --port-range-max 22 --direction ingress default</userinput></screen>
</listitem>
<listitem>
<para>Does not implement Networking security
groups, you can configure security group
rules by using the <command>nova
secgroup-add-rule</command> or
<command>euca-authorize</command>
command. The following
<systemitem>nova</systemitem> commands
allow <command>ping</command> and
<command>ssh</command> access to your
VMs.</para>
<screen><prompt>$</prompt> <userinput>nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0</userinput>
<prompt>$</prompt> <userinput>nova secgroup-add-rule default tcp 22 22 0.0.0.0/0</userinput></screen>
</listitem>
</itemizedlist>
<note>
<para>If your plug-in implements OpenStack
Networking security groups, you can also
leverage Compute security groups by setting
<systemitem>security_group_api =
neutron</systemitem> in
<filename>nova.conf</filename>. After
setting this option, all Compute security
group commands are proxied to OpenStack
Networking.</para>
</note>
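<para>A minimal sketch of the
<filename>nova.conf</filename> setting mentioned in the
note above (place it in the
<literal>[DEFAULT]</literal> section and restart the
Compute services afterwards):</para>
<programlisting>[DEFAULT]
# Proxy Compute security group calls to OpenStack Networking
security_group_api = neutron</programlisting>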
</section>
</section>
</section>
<section xml:id="section_networking-adv-features">
<title>Advanced features through API extensions</title>
<para>Several plug-ins implement API extensions that provide
capabilities similar to those available in
nova-network. These plug-ins are likely to be of interest
to the OpenStack community.</para>
<section xml:id="provider_networks">
<title>Provider networks</title>
<para>Provider networks allow cloud administrators to
create OpenStack Networking networks that map directly
to physical networks in the data center.  This is
commonly used to give tenants direct access to a
public network that can be used to reach the
Internet.  It may also be used to integrate with VLANs
in the network that already have a defined meaning
(for example, allow a VM from the "marketing"
department to be placed on the same VLAN as bare-metal
marketing hosts in the same data center).</para>
<para>The provider extension allows administrators to
explicitly manage the relationship between OpenStack
Networking virtual networks and underlying physical
mechanisms such as VLANs and tunnels. When this
extension is supported, OpenStack Networking client
users with administrative privileges see additional
provider attributes on all virtual networks, and are
able to specify these attributes in order to create
provider networks.</para>
<para>The provider extension is supported by the
openvswitch and linuxbridge plug-ins. Configuration of
these plug-ins requires familiarity with this
extension.</para>
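<para>As an illustration, the following is a minimal
sketch of the kind of plug-in configuration involved,
assuming the Open vSwitch plug-in and a single physical
network named <literal>physnet1</literal>; the exact
option names, sections, and file locations for each
plug-in are described in the <citetitle>OpenStack
Configuration Reference</citetitle>:</para>
<programlisting># Open vSwitch plug-in configuration (sketch)
[OVS]
tenant_network_type = vlan
# Map the physical network name to a VLAN range and to a bridge on each host
network_vlan_ranges = physnet1:1000:2999
bridge_mappings = physnet1:br-eth1</programlisting>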
<section xml:id="provider_terminology">
<title>Terminology</title>
<para>A number of terms are used in the provider
extension and in the configuration of plug-ins
supporting the provider extension:<itemizedlist>
<listitem>
<para><emphasis role="bold">virtual
network</emphasis>. An OpenStack
Networking L2 network (identified by a
UUID and optional name) whose ports
can be attached as vNICs to OpenStack
Compute instances and to various
OpenStack Networking agents. The
openvswitch and linuxbridge plug-ins
each support several different
mechanisms to realize virtual
networks.</para>
</listitem>
<listitem>
<para><emphasis role="bold">physical
network</emphasis>. A network
connecting virtualization hosts (such
as OpenStack Compute nodes) with each
other and with other network
resources. Each physical network may
support multiple virtual networks. The
provider extension and the plug-in
configurations identify physical
networks using simple string
names.</para>
</listitem>
<listitem>
<para><emphasis role="bold">tenant
network</emphasis>. A "normal"
virtual network created by/for a
tenant. The tenant is not aware of how
that network is physically
realized.</para>
</listitem>
<listitem>
<para><emphasis role="bold">provider
network</emphasis>. A virtual
network administratively created to
map to a specific network in the data
center, typically to enable direct
access to non-OpenStack resources on
that network. Tenants can be given
access to provider networks.</para>
</listitem>
<listitem>
<para><emphasis role="bold">VLAN
network</emphasis>. A virtual
network realized as packets on a
specific physical network containing
IEEE 802.1Q headers with a specific
VID field value. VLAN networks sharing
the same physical network are isolated
from each other at L2, and can even
have overlapping IP address spaces.
Each distinct physical network
supporting VLAN networks is treated as
a separate VLAN trunk, with a distinct
space of VID values. Valid VID values
are 1 through 4094.</para>
</listitem>
<listitem>
<para><emphasis role="bold">flat
network</emphasis>. A virtual
network realized as packets on a
specific physical network containing
no IEEE 802.1Q header. Each physical
network can realize at most one flat
network.</para>
</listitem>
<listitem>
<para><emphasis role="bold">local
network</emphasis>. A virtual
network that allows communication
within each host, but not across a
network. Local networks are intended
mainly for single-node test scenarios,
but may have other uses.</para>
</listitem>
<listitem>
<para><emphasis role="bold">GRE
network</emphasis>. A virtual
network realized as network packets
encapsulated using GRE. GRE networks
are also referred to as "tunnels". GRE
tunnel packets are routed by the
host's IP routing table, so GRE
networks are not associated by
OpenStack Networking with specific
physical networks.</para>
</listitem>
</itemizedlist></para>
<para>Both the openvswitch and linuxbridge plug-ins
support VLAN networks, flat networks, and local
networks. Only the openvswitch plug-in currently
supports GRE networks, provided that the host's
Linux kernel supports the required Open vSwitch
features.</para>
</section>
<section xml:id="provider_attributes">
<title>Provider attributes</title>
<para>The provider extension extends the OpenStack
Networking network resource with the following
three additional attributes:</para>
<table rules="all">
<caption>Provider Network Attributes</caption>
<col width="25%"/>
<col width="10%"/>
<col width="25%"/>
<col width="40%"/>
<thead>
<tr>
<th>Attribute name</th>
<th>Type</th>
<th>Default Value</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>provider:network_type</td>
<td>String</td>
<td>N/A</td>
<td>The physical mechanism by which the
virtual network is realized. Possible
values are "flat", "vlan", "local",
and "gre", corresponding to flat
networks, VLAN networks, local
networks, and GRE networks as defined
above. All types of provider networks
can be created by administrators,
while tenant networks can be realized
as "vlan", "gre", or "local" network
types depending on plug-in
configuration.</td>
</tr>
<tr>
<td>provider:physical_network</td>
<td>String</td>
<td>If a physical network named "default"
has been configured, and if
provider:network_type is "flat" or
"vlan", then "default" is used.</td>
<td>The name of the physical network over
which the virtual network is realized
for flat and VLAN networks. Not
applicable to the "local" or "gre"
network types.</td>
</tr>
<tr>
<td>provider:segmentation_id</td>
<td>Integer</td>
<td>N/A</td>
<td>For VLAN networks, the VLAN VID on the
physical network that realizes the
virtual network. Valid VLAN VIDs are 1
through 4094. For GRE networks, the
tunnel ID. A valid tunnel ID is any
32-bit unsigned integer. Not applicable
to the "flat" or "local" network
types.</td>
</tr>
</tbody>
</table>
<para>The provider attributes are returned by
OpenStack Networking API operations when the
client is authorized for the
<code>extension:provider_network:view</code>
action through the OpenStack Networking policy
configuration. The provider attributes are only
accepted for network API operations if the client
is authorized for the
<code>extension:provider_network:set</code>
action. The default OpenStack Networking API
policy configuration authorizes both actions for
users with the admin role. See <xref
linkend="section_auth"/> for details on policy
configuration.</para>
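<para>In <filename>policy.json</filename> terms, the
default behavior described above corresponds to entries
of roughly the following form, where
<code>admin_only</code> is the rule that the default
policy file defines for administrative access (a sketch
of the policy syntax, not a verbatim copy of the
shipped file):</para>
<programlisting>"extension:provider_network:view": "rule:admin_only",
"extension:provider_network:set": "rule:admin_only",</programlisting>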
</section>
<section xml:id="provider_api_workflow">
<title>Provider Extension API operations</title>
<para>To use the provider extension with the default
policy settings, you must have the administrative
role.</para>
<para>The following table shows example neutron
commands that enable you to complete basic
provider extension API operations:</para>
<table rules="all">
<caption>Basic provider extension API
operations</caption>
<col width="40%"/>
<col width="60%"/>
<thead>
<tr>
<th>Operation</th>
<th>Command</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<para>Shows all attributes of a
network, including provider
attributes.</para></td>
<td>
<screen><prompt>$</prompt> <userinput>neutron net-show &lt;name or net-id&gt;</userinput></screen>
</td>
</tr>
<tr>
<td>
<para>Creates a local provider
network.</para></td>
<td>
<screen><prompt>$</prompt> <userinput>neutron net-create &lt;name&gt; --tenant_id &lt;tenant-id&gt; --provider:network_type local</userinput></screen>
</td>
</tr>
<tr>
<td>
<para>Creates a flat provider network.
When you create flat networks,
&lt;phys-net-name&gt; must be known
to the plug-in. See the <citetitle
xmlns:svg="http://www.w3.org/2000/svg"
xmlns:html="http://www.w3.org/1999/xhtml"
>OpenStack Configuration
Reference</citetitle> for
details.</para></td>
<td>
<screen><prompt>$</prompt> <userinput>neutron net-create &lt;name&gt; --tenant_id &lt;tenant-id&gt; --provider:network_type flat --provider:physical_network &lt;phys-net-name&gt;</userinput></screen>
</td>
</tr>
<tr>
<td>
<para>Creates a VLAN provider network.
When you create VLAN networks,
&lt;phys-net-name&gt; must be known
to the plug-in. See the <citetitle
xmlns:svg="http://www.w3.org/2000/svg"
xmlns:html="http://www.w3.org/1999/xhtml"
>OpenStack Configuration
Reference</citetitle> for details
on configuring network_vlan_ranges
to identify all physical networks.
When you create VLAN networks,
&lt;VID&gt; can fall either within
or outside any configured ranges of
VLAN IDs from which tenant networks
are allocated.</para></td>
<td>
<screen><prompt>$</prompt> <userinput>neutron net-create &lt;name&gt; --tenant_id &lt;tenant-id&gt; --provider:network_type vlan --provider:physical_network &lt;phys-net-name&gt; --provider:segmentation_id &lt;VID&gt;</userinput></screen>
</td>
</tr>
<tr>
<td>
<para>Creates a GRE provider network.
When you create GRE networks,
&lt;tunnel-id&gt; can be either
inside or outside any tunnel ID
ranges from which tenant networks
are allocated.</para>
<para>After you create provider
networks, you can allocate subnets,
which you can use in the same way
as other virtual networks, subject
to authorization policy based on
the specified
&lt;tenant_id&gt;.</para></td>
<td>
<screen><prompt>$</prompt> <userinput>neutron net-create &lt;name&gt; --tenant_id &lt;tenant-id&gt; --provider:network_type gre --provider:segmentation_id &lt;tunnel-id&gt;</userinput></screen>
</td>
</tr>
</tbody>
</table>
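<para>A typical workflow combines these operations:
create a VLAN provider network, add a subnet to it, and
then verify the provider attributes (a sketch; the
physical network name, VID, and addressing are
placeholders):</para>
<screen><prompt>$</prompt> <userinput>neutron net-create prov-vlan --tenant_id &lt;tenant-id&gt; --provider:network_type vlan --provider:physical_network physnet1 --provider:segmentation_id 1000</userinput>
<prompt>$</prompt> <userinput>neutron subnet-create prov-vlan 192.0.2.0/24</userinput>
<prompt>$</prompt> <userinput>neutron net-show prov-vlan</userinput></screen>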
</section>
</section>
<section xml:id="l3_router_and_nat">
<title>L3 Routing and NAT</title>
<para>Just like the core OpenStack Networking API provides
abstract L2 network segments that are decoupled from
the technology used to implement the L2 network,
OpenStack Networking includes an API extension that
provides abstract L3 routers that API users can
dynamically provision and configure. These OpenStack
Networking routers can connect multiple L2 OpenStack
Networking networks, and can also provide a "gateway"
that connects one or more private L2 networks to a
shared "external" network (for example, a public
network for access to the Internet). See the
<citetitle xmlns:svg="http://www.w3.org/2000/svg"
xmlns:html="http://www.w3.org/1999/xhtml"
>OpenStack Configuration Reference</citetitle> for
details on common models of deploying Networking L3
routers.</para>
<para>The L3 router provides basic NAT capabilities on
"gateway" ports that uplink the router to external
networks. This router SNATs all traffic by default,
and supports "Floating IPs", which creates a static
one-to-one mapping from a public IP on the external
network to a private IP on one of the other subnets
attached to the router. This allows a tenant to
selectively expose VMs on private networks to other
hosts on the external network (and often to all hosts
on the Internet). Floating IPs can be allocated and
then mapped from one OpenStack Networking port to
another, as needed.</para>
<section xml:id="l3_api_abstractions">
<title>L3 API abstractions</title>
<table rules="all">
<caption>Router</caption>
<col width="20%"/>
<col width="20%"/>
<col width="20%"/>
<col width="40%"/>
<thead>
<tr>
<th>Attribute name</th>
<th>Type</th>
<th>Default Value</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>id</td>
<td>uuid-str</td>
<td>generated</td>
<td>UUID for the router.</td>
</tr>
<tr>
<td>name</td>
<td>String</td>
<td>None</td>
<td>Human-readable name for the router.
Might not be unique.</td>
</tr>
<tr>
<td>admin_state_up</td>
<td>Bool</td>
<td>True</td>
<td>The administrative state of the router. If
false (down), the router does not
forward packets.</td>
</tr>
<tr>
<td>status</td>
<td>String</td>
<td>N/A</td>
<td><para>Indicates whether router is
currently operational.</para></td>
</tr>
<tr>
<td>tenant_id</td>
<td>uuid-str</td>
<td>N/A</td>
<td>Owner of the router. Only admin users
can specify a tenant_id other than its
own.</td>
</tr>
<tr>
<td>external_gateway_info</td>
<td>dict containing 'network_id' key-value
pair</td>
<td>Null</td>
<td>External network that this router
connects to for gateway services (for
example, NAT)</td>
</tr>
</tbody>
</table>
<table rules="all">
<caption>Floating IP</caption>
<col width="20%"/>
<col width="20%"/>
<col width="20%"/>
<col width="40%"/>
<thead>
<tr>
<th>Attribute name</th>
<th>Type</th>
<th>Default Value</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>id</td>
<td>uuid-str</td>
<td>generated</td>
<td>UUID for the floating IP.</td>
</tr>
<tr>
<td>floating_ip_address</td>
<td>string (IP address)</td>
<td>allocated by OpenStack Networking</td>
<td>The external network IP address
available to be mapped to an internal
IP address.</td>
</tr>
<tr>
<td>floating_network_id</td>
<td>uuid-str</td>
<td>N/A</td>
<td><para>The network indicating the set
of subnets from which the floating
IP should be allocated.</para></td>
</tr>
<tr>
<td>router_id</td>
<td>uuid-str</td>
<td>N/A</td>
<td>Read-only value indicating the router
that connects the external network to
the associated internal port, if a
port is associated.</td>
</tr>
<tr>
<td>port_id</td>
<td>uuid-str</td>
<td>Null</td>
<td>Indicates the internal OpenStack
Networking port associated with the
external floating IP.</td>
</tr>
<tr>
<td>fixed_ip_address</td>
<td>string (IP address)</td>
<td>Null</td>
<td>Indicates the IP address on the
internal port that is mapped to by the
floating IP (since an OpenStack
Networking port might have more than
one IP address).</td>
</tr>
<tr>
<td>tenant_id</td>
<td>uuid-str</td>
<td>N/A</td>
<td>Owner of the Floating IP. Only admin
users can specify a tenant_id other
than its own.</td>
</tr>
</tbody>
</table>
</section>
<section xml:id="l3_workflow">
<title>Basic L3 operations</title>
<para>External networks are visible to all users.
However, the default policy settings enable only
administrative users to create, update, and delete
external networks.</para>
<para>The following table shows example neutron
commands that enable you to complete basic L3
operations:</para>
<table rules="all">
<caption>Basic L3 operations</caption>
<col width="40%"/>
<col width="60%"/>
<thead>
<tr>
<th>Operation</th>
<th>Command</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<para>Creates external
networks.</para></td>
<td>
<screen><prompt>$</prompt> <userinput>neutron net-create public --router:external=True</userinput>
<prompt>$</prompt> <userinput>neutron subnet-create public 172.16.1.0/24</userinput></screen>
</td>
</tr>
<tr>
<td>
<para>Lists external
networks.</para></td>
<td>
<screen><prompt>$</prompt> <userinput>neutron net-list -- --router:external=True</userinput></screen>
</td>
</tr>
<tr>
<td><para>Creates an internal-only router
that connects to multiple L2
networks privately.</para></td>
<td>
<screen><prompt>$</prompt> <userinput>neutron net-create net1</userinput>
<prompt>$</prompt> <userinput>neutron subnet-create net1 10.0.0.0/24</userinput>
<prompt>$</prompt> <userinput>neutron net-create net2</userinput>
<prompt>$</prompt> <userinput>neutron subnet-create net2 10.0.1.0/24</userinput>
<prompt>$</prompt> <userinput>neutron router-create router1</userinput>
<prompt>$</prompt> <userinput>neutron router-interface-add router1 &lt;subnet1-uuid&gt;</userinput>
<prompt>$</prompt> <userinput>neutron router-interface-add router1 &lt;subnet2-uuid&gt;</userinput></screen>
</td>
</tr>
<tr>
<td>
<para>Connects a router to an external
network, which enables that router
to act as a NAT gateway for
external connectivity.</para></td>
<td>
<screen><prompt>$</prompt> <userinput>neutron router-gateway-set router1 &lt;ext-net-id&gt;</userinput></screen>
<para>The router obtains an interface
with the gateway_ip address of the
subnet, and this interface is
attached to a port on the L2
OpenStack Networking network
associated with the subnet. The
router also gets a gateway
interface to the specified external
network. This provides SNAT
connectivity to the external
network as well as support for
floating IPs allocated on that
external network. Commonly, an
external network maps to a network
in the provider's data center.</para>
</td>
</tr>
<tr>
<td>
<para>Lists routers.</para></td>
<td>
<screen><prompt>$</prompt> <userinput>neutron router-list</userinput></screen>
</td>
</tr>
<tr>
<td>
<para>Shows information for a
specified router.</para></td>
<td>
<screen><prompt>$</prompt> <userinput>neutron router-show &lt;router_id&gt;</userinput></screen>
</td>
</tr>
<tr>
<td>
<para>Shows all internal interfaces
for a router.</para>
</td>
<td>
<screen><prompt>$</prompt> <userinput>neutron router-port-list <replaceable>router-id</replaceable></userinput></screen>
</td>
</tr>
<tr>
<td>
<para>Identifies the
<literal>port-id</literal> that
represents the VM NIC to which the
floating IP should map.</para></td>
<td>
<screen><prompt>$</prompt> <userinput>neutron port-list -c id -c fixed_ips -- --device_id=&lt;instance_id&gt;</userinput></screen>
<para>This port must be on an
OpenStack Networking subnet that is
attached to a router uplinked to
the external network used to create
the floating IP.  Conceptually,
this is because the router must be
able to perform the Destination NAT
(DNAT) rewriting of packets from
the Floating IP address (chosen
from a subnet on the external
network) to the internal Fixed IP
(chosen from a private subnet that
is “behind” the router).</para>
</td>
</tr>
<tr>
<td>
<para>Creates a floating IP address
and associates it with a
port.</para></td>
<td>
<screen><prompt>$</prompt> <userinput>neutron floatingip-create &lt;ext-net-id&gt;</userinput>
<prompt>$</prompt> <userinput>neutron floatingip-associate &lt;floatingip-id&gt; &lt;internal VM port-id&gt;</userinput></screen>
</td>
</tr>
<tr>
<td>
<para>Creates a floating IP address
and associates it with a port, in a
single step.</para></td>
<td>
<screen><prompt>$</prompt> <userinput>neutron floatingip-create --port_id &lt;internal VM port-id&gt; &lt;ext-net-id&gt;</userinput></screen>
</td>
</tr>
<tr>
<td>
<para>Lists floating IPs.</para></td>
<td>
<screen><prompt>$</prompt> <userinput>neutron floatingip-list</userinput></screen>
</td>
</tr>
<tr>
<td>
<para>Finds floating IP for a
specified VM port.</para></td>
<td>
<screen><prompt>$</prompt> <userinput>neutron floatingip-list -- --port_id=ZZZ</userinput></screen>
</td>
</tr>
<tr>
<td>
<para>Disassociates a floating IP
address.</para></td>
<td>
<screen><prompt>$</prompt> <userinput>neutron floatingip-disassociate &lt;floatingip-id&gt;</userinput></screen>
</td>
</tr>
<tr>
<td>
<para>Deletes the floating IP
address.</para></td>
<td>
<screen><prompt>$</prompt> <userinput>neutron floatingip-delete &lt;floatingip-id&gt;</userinput></screen>
</td>
</tr>
<tr>
<td>
<para>Clears the gateway.</para></td>
<td>
<screen><prompt>$</prompt> <userinput>neutron router-gateway-clear router1</userinput></screen>
</td>
</tr>
<tr>
<td>
<para>Removes the interfaces from the
router.</para></td>
<td>
<screen><prompt>$</prompt> <userinput>neutron router-interface-delete router1 &lt;subnet-id&gt;</userinput></screen>
</td>
</tr>
<tr>
<td>
<para>Deletes the router.</para></td>
<td>
<screen><prompt>$</prompt> <userinput>neutron router-delete router1</userinput></screen>
</td>
</tr>
</tbody>
</table>
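<para>Putting the steps above together, a common
end-to-end sequence is to create the external network
and its subnet, create a router, attach a tenant subnet
to the router, set the router gateway, and then
allocate and associate a floating IP for a VM port (a
sketch that reuses the placeholder names from the
table):</para>
<screen><prompt>$</prompt> <userinput>neutron net-create public --router:external=True</userinput>
<prompt>$</prompt> <userinput>neutron subnet-create public 172.16.1.0/24</userinput>
<prompt>$</prompt> <userinput>neutron router-create router1</userinput>
<prompt>$</prompt> <userinput>neutron router-interface-add router1 &lt;subnet1-uuid&gt;</userinput>
<prompt>$</prompt> <userinput>neutron router-gateway-set router1 &lt;ext-net-id&gt;</userinput>
<prompt>$</prompt> <userinput>neutron floatingip-create --port_id &lt;internal VM port-id&gt; &lt;ext-net-id&gt;</userinput></screen>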
</section>
</section>
<section xml:id="securitygroups">
<title>Security groups</title>
<para>Security groups and security group rules allow
administrators and tenants to specify the
type of traffic and direction (ingress/egress) that is
allowed to pass through a port. A security group is a
container for security group rules.</para>
<para>When a port is created in OpenStack Networking, it
is associated with a security group. If a security
group is not specified, the port is associated with a
'default' security group. By default, this group drops
all ingress traffic and allows all egress. Rules can
be added to this group in order to change the
behavior.</para>
<para>To use the OpenStack Compute security group APIs or
use OpenStack Compute to orchestrate the creation of
ports for instances on specific security groups, you
must complete additional configuration. You must
configure the <filename>/etc/nova/nova.conf</filename>
file and set the
<code>security_group_api=neutron</code> option on
every node that runs <systemitem class="service"
>nova-compute</systemitem> and <systemitem
class="service">nova-api</systemitem>. After you
make this change, restart <systemitem class="service"
>nova-api</systemitem> and <systemitem
class="service">nova-compute</systemitem> to pick
up this change. Then, you can use both the OpenStack
Compute and OpenStack Network security group APIs at
the same time.</para>
<note>
<itemizedlist>
<listitem>
<para>To use the OpenStack Compute security
group API with OpenStack Networking, the
OpenStack Networking plug-in must
implement the security group API. The
following plug-ins currently implement
this: Nicira NVP, Open vSwitch, Linux
Bridge, NEC, and Ryu.</para>
</listitem>
<listitem>
<para>You must configure the correct firewall
driver in the
<literal>securitygroup</literal>
section of the plug-in/agent configuration
file. Some plug-ins and agents, such as
Linux Bridge Agent and Open vSwitch Agent,
use the no-operation driver as the
default, which results in non-working
security groups.</para>
</listitem>
<listitem>
<para>When using the security group API
through OpenStack Compute, security groups
are applied to all ports on an instance.
This is because the OpenStack Compute
security group APIs are instance based,
whereas the OpenStack Networking security
group APIs are port based.</para>
</listitem>
</itemizedlist>
</note>
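<para>For example, a minimal sketch of the firewall
driver setting mentioned above, assuming the Open
vSwitch agent; confirm the exact driver path for your
plug-in and release in the <citetitle>OpenStack
Configuration Reference</citetitle>:</para>
<programlisting>[securitygroup]
# An iptables-based driver; the default no-operation driver leaves security groups non-working
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver</programlisting>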
<section xml:id="securitygroup_api_abstractions">
<title>Security Group API Abstractions</title>
<table rules="all">
<caption>Security Group Attributes</caption>
<col width="20%"/>
<col width="20%"/>
<col width="20%"/>
<col width="40%"/>
<thead>
<tr>
<th>Attribute name</th>
<th>Type</th>
<th>Default Value</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>id</td>
<td>uuid-str</td>
<td>generated</td>
<td>UUID for the security group.</td>
</tr>
<tr>
<td>name</td>
<td>String</td>
<td>None</td>
<td>Human-readable name for the security
group. Might not be unique. Cannot be
named default as that is automatically
created for a tenant.</td>
</tr>
<tr>
<td>description</td>
<td>String</td>
<td>None</td>
<td>Human-readable description of a
security group.</td>
</tr>
<tr>
<td>tenant_id</td>
<td>uuid-str</td>
<td>N/A</td>
<td>Owner of the security group. Only
admin users can specify a tenant_id
other than their own.</td>
</tr>
</tbody>
</table>
<table rules="all">
<caption>Security Group Rules</caption>
<col width="20%"/>
<col width="20%"/>
<col width="20%"/>
<col width="40%"/>
<thead>
<tr>
<th>Attribute name</th>
<th>Type</th>
<th>Default Value</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>id</td>
<td>uuid-str</td>
<td>generated</td>
<td>UUID for the security group rule.</td>
</tr>
<tr>
<td>security_group_id</td>
<td>uuid-str or Integer</td>
<td>allocated by OpenStack Networking</td>
                            <td>The security group with which the rule
                                is associated.</td>
</tr>
<tr>
<td>direction</td>
<td>String</td>
<td>N/A</td>
                            <td>The direction in which traffic is allowed
                                (ingress/egress) with respect to a VM.</td>
</tr>
<tr>
<td>protocol</td>
<td>String</td>
<td>None</td>
<td>IP Protocol (icmp, tcp, udp, and so
on).</td>
</tr>
<tr>
<td>port_range_min</td>
<td>Integer</td>
<td>None</td>
<td>Port at start of range</td>
</tr>
<tr>
<td>port_range_max</td>
<td>Integer</td>
<td>None</td>
<td>Port at end of range</td>
</tr>
<tr>
<td>ethertype</td>
<td>String</td>
<td>None</td>
<td>ethertype in L2 packet (IPv4, IPv6,
and so on)</td>
</tr>
<tr>
<td>remote_ip_prefix</td>
<td>string (IP cidr)</td>
<td>None</td>
<td>CIDR for address range</td>
</tr>
<tr>
<td>remote_group_id</td>
<td>uuid-str or Integer</td>
<td>allocated by OpenStack Networking or
OpenStack Compute</td>
<td>Source security group to apply to
rule.</td>
</tr>
<tr>
<td>tenant_id</td>
<td>uuid-str</td>
<td>N/A</td>
                            <td>Owner of the security group rule. Only
                                admin users can specify a tenant_id
                                other than their own.</td>
</tr>
</tbody>
</table>
</section>
<section xml:id="securitygroup_workflow">
<title>Basic security group operations</title>
<para>The following table shows example neutron
commands that enable you to complete basic
security group operations:</para>
<table rules="all">
<caption>Basic security group operations</caption>
<col width="40%"/>
<col width="60%"/>
<thead>
<tr>
<th>Operation</th>
<th>Command</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<para>Creates a security group for our
web servers.</para></td>
<td>
<screen><prompt>$</prompt> <userinput>neutron security-group-create webservers --description "security group for webservers"</userinput></screen></td>
</tr>
<tr>
<td><para>Lists security
groups.</para></td>
<td><screen><prompt>$</prompt> <userinput>neutron security-group-list</userinput></screen>
</td>
</tr>
<tr>
<td>
<para>Creates a security group rule to
allow port 80 ingress.</para></td>
<td><screen><prompt>$</prompt> <userinput>neutron security-group-rule-create --direction ingress --protocol tcp --port_range_min 80 --port_range_max 80 &lt;security_group_uuid&gt;</userinput></screen>
</td>
</tr>
<tr>
<td>
<para>Lists security group
rules.</para></td>
<td><screen><prompt>$</prompt> <userinput>neutron security-group-rule-list</userinput></screen>
</td>
</tr>
<tr>
<td><para>Deletes a security group
rule.</para></td>
<td><screen><prompt>$</prompt> <userinput>neutron security-group-rule-delete &lt;security_group_rule_uuid&gt;</userinput></screen>
</td>
</tr>
<tr>
<td>
<para>Deletes a security
group.</para></td>
<td><screen><prompt>$</prompt> <userinput>neutron security-group-delete &lt;security_group_uuid&gt;</userinput></screen>
</td>
</tr>
<tr>
<td><para>Creates a port and associates
two security groups.</para></td>
<td><screen><prompt>$</prompt> <userinput>neutron port-create --security-group &lt;security_group_id1&gt; --security-group &lt;security_group_id2&gt; &lt;network_id&gt;</userinput></screen>
</td>
</tr>
<tr>
<td>
<para>Removes security groups from a
port.</para></td>
<td><screen><prompt>$</prompt> <userinput>neutron port-update --no-security-groups &lt;port_id&gt;</userinput></screen>
</td>
</tr>
</tbody>
</table>
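            <para>Once the groups and rules exist, you can launch an
                instance directly into a security group with the nova
                client; a sketch, assuming
                <code>security_group_api=neutron</code> is configured
                as described earlier and that the image, flavor, and
                network identifiers are placeholders:</para>
            <screen><prompt>$</prompt> <userinput>nova boot --image &lt;image&gt; --flavor &lt;flavor&gt; --nic net-id=&lt;network-id&gt; --security-groups webservers webserver-01</userinput></screen>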
</section>
</section>
<section xml:id="lbaas_workflow">
<title>Basic Load-Balancer-as-a-Service operations</title>
<note>
<para>The Load-Balancer-as-a-Service (LBaaS) API
provisions and configures load balancers. The
Havana release offers a reference implementation
that is based on the HAProxy software load
balancer.</para>
</note>
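        <para>The reference HAProxy provider is enabled through the
            <literal>[service_providers]</literal> section of
            <filename>neutron.conf</filename>, which also determines
            the default provider used by the pool-creation command in
            the table below. A sketch of the Havana-era setting;
            verify the driver path against your installed tree:</para>
        <programlisting language="ini">[service_providers]
# Register HAProxy as the default LBaaS provider
service_provider = LOADBALANCER:Haproxy:neutron.services.loadbalancer.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default</programlisting>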
<para>The following table shows example neutron commands
that enable you to complete basic LBaaS
operations:</para>
<table rules="all">
<caption>Basic LBaaS operations</caption>
<col width="40%"/>
<col width="60%"/>
<thead>
<tr>
<th>Operation</th>
<th>Command</th>
</tr>
</thead>
<tbody>
<tr>
<td>
                        <para>Creates a load balancer pool by
                            using a specific provider.</para>
                        <para><parameter>--provider</parameter> is
                            an optional argument. If not used, the
                            pool is created with the default provider
                            for the LBaaS service. You should
                            configure the default provider in the
                            <literal>[service_providers]</literal>
                            section of the
                            <filename>neutron.conf</filename>
                            file. If no default provider is
                            specified for LBaaS, the
                            <parameter>--provider</parameter>
                            option is required for pool
                            creation.</para></td>
<td>
<screen><prompt>$</prompt> <userinput>neutron lb-pool-create --lb-method ROUND_ROBIN --name mypool --protocol HTTP --subnet-id &lt;subnet-uuid&gt; <parameter>--provider &lt;provider_name&gt;</parameter></userinput></screen></td>
</tr>
<tr>
<td>
                        <para>Associates two web servers with
                            the pool.</para></td>
<td>
<screen><prompt>$</prompt> <userinput>neutron lb-member-create --address &lt;webserver one IP&gt; --protocol-port 80 mypool</userinput>
<prompt>$</prompt> <userinput>neutron lb-member-create --address &lt;webserver two IP&gt; --protocol-port 80 mypool</userinput></screen></td>
</tr>
<tr>
<td>
                        <para>Creates a health monitor that
                            checks whether member instances are
                            still responding on the specified
                            protocol and port.</para></td>
<td><screen><prompt>$</prompt> <userinput>neutron lb-healthmonitor-create --delay 3 --type HTTP --max-retries 3 --timeout 3</userinput></screen>
</td>
</tr>
<tr>
                    <td><para>Associates a health monitor with
                            the pool.</para></td>
<td><screen><prompt>$</prompt> <userinput>neutron lb-healthmonitor-associate &lt;healthmonitor-uuid&gt; mypool</userinput></screen>
</td>
</tr>
<tr>
<td>
<para>Creates a virtual IP (VIP) address
that, when accessed through the load
balancer, directs the requests to one
of the pool members.</para></td>
<td>
<screen><prompt>$</prompt> <userinput>neutron lb-vip-create --name myvip --protocol-port 80 --protocol HTTP --subnet-id &lt;subnet-uuid&gt; mypool</userinput></screen>
</td>
</tr>
</tbody>
</table>
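        <para>After you create the VIP, you can confirm that the
            pool, its members, and the VIP are active with the
            corresponding show and list commands; a short
            sketch:</para>
        <screen><prompt>$</prompt> <userinput>neutron lb-pool-show mypool</userinput>
<prompt>$</prompt> <userinput>neutron lb-member-list</userinput>
<prompt>$</prompt> <userinput>neutron lb-vip-list</userinput></screen>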
</section>
<section xml:id="plugin_specific_extensions">
<title>Plug-in specific extensions</title>
<?dbhtml stop-chunking?>
<para>Each vendor may choose to implement additional API
extensions to the core API. This section describes the
extensions for each plug-in.</para>
<section xml:id="nicira_extensions">
<title>Nicira NVP extensions</title>
            <para>This section describes the API extensions that the
                Nicira NVP plug-in provides.</para>
<section xml:id="nicira_nvp_plugin_qos_extension">
<title>Nicira NVP QoS extension</title>
<para>The Nicira NVP QoS extension rate-limits
network ports to guarantee a specific amount
of bandwidth for each port. This extension, by
default, is only accessible by a tenant with
an admin role but is configurable through the
<filename>policy.json</filename> file. To
use this extension, create a queue and specify
the min/max bandwidth rates (kbps) and
optionally set the QoS Marking and DSCP value
(if your network fabric uses these values to
                    make forwarding decisions). Once created, you
                    can associate a queue with a network. Ports
                    that are subsequently created on that network
                    are automatically associated with a queue of
                    the size that was associated with the network.
                    Because a single queue size for every port on
                    a network may not be optimal, OpenStack
                    Compute passes the 'rxtx_factor' scaling
                    factor from the nova flavor when it creates
                    the port, and the queue is scaled
                    accordingly.</para>
                <para>Lastly, if you want to set a baseline QoS
                    policy for the amount of bandwidth a single
                    port can use (unless a queue is associated
                    with the network that the port is created
                    on), you can create a default queue in
                    neutron. Ports are then associated with a
                    queue of that size multiplied by the rxtx
                    scaling factor. Note that after a network or
                    default queue is specified, queues are added
                    to ports that are subsequently created but
                    are not added to existing ports.</para>
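                <para>The rxtx scaling factor comes from the nova
                    flavor that an instance is booted with. For
                    instance, a sketch that creates a flavor whose
                    ports receive twice the queue size of the base
                    queue (the flavor name and sizes are
                    illustrative):</para>
                <screen><prompt>$</prompt> <userinput>nova flavor-create --rxtx-factor 2.0 m1.web.rxtx2 auto 2048 20 1</userinput></screen>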
<section xml:id="nicira_nvp_qos_api_abstractions">
<title>Nicira NVP QoS API abstractions</title>
<table rules="all">
<caption>Nicira NVP QoS
Attributes</caption>
<col width="20%"/>
<col width="20%"/>
<col width="20%"/>
<col width="40%"/>
<thead>
<tr>
<th>Attribute name</th>
<th>Type</th>
<th>Default Value</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>id</td>
<td>uuid-str</td>
<td>generated</td>
<td>UUID for the QoS queue.</td>
</tr>
<tr>
<td>default</td>
<td>Boolean</td>
<td>False by default</td>
                                <td>If True, ports are created with
                                    this queue size unless the network
                                    that the port is created on is
                                    associated with a queue at port
                                    creation time.</td>
</tr>
<tr>
<td>name</td>
<td>String</td>
<td>None</td>
<td>Name for QoS queue.</td>
</tr>
<tr>
<td>min</td>
<td>Integer</td>
<td>0</td>
<td>Minimum Bandwidth Rate (kbps).
</td>
</tr>
<tr>
<td>max</td>
<td>Integer</td>
<td>N/A</td>
<td>Maximum Bandwidth Rate (kbps).
</td>
</tr>
<tr>
<td>qos_marking</td>
<td>String</td>
<td>untrusted by default</td>
<td>Whether QoS marking should be
trusted or untrusted.</td>
</tr>
<tr>
<td>dscp</td>
<td>Integer</td>
<td>0</td>
<td>DSCP Marking value.</td>
</tr>
<tr>
<td>tenant_id</td>
<td>uuid-str</td>
<td>N/A</td>
<td>The owner of the QoS
queue.</td>
</tr>
</tbody>
</table>
</section>
<section xml:id="nicira_nvp_qos_walk_through">
<title>Common Nicira NVP QoS
operations</title>
<para>The following table shows example
neutron commands that enable you to
complete basic queue operations:</para>
<table rules="all">
<caption>Basic Nicira NVP QoS
operations</caption>
<col width="40%"/>
<col width="60%"/>
<thead>
<tr>
<th>Operation</th>
<th>Command</th>
</tr>
</thead>
<tbody>
<tr>
<td>
                                    <para>Creates a QoS queue
                                        (admin-only).</para></td>
                                <td>
                                    <screen><prompt>$</prompt> <userinput>neutron queue-create --min 10 --max 1000 myqueue</userinput></screen>
</td>
</tr>
<tr>
<td>
<para>Associates a queue with a
network.</para></td>
<td>
<screen><prompt>$</prompt> <userinput>neutron net-create network --queue_id=&lt;queue_id&gt;</userinput></screen>
</td>
</tr>
<tr>
<td>
<para>Creates a default system
queue.</para></td>
<td><screen><prompt>$</prompt> <userinput>neutron queue-create --default True --min 10 --max 2000 default</userinput></screen>
</td>
</tr>
<tr>
<td><para>Lists QoS
queues.</para></td>
<td><screen><prompt>$</prompt> <userinput>neutron queue-list</userinput></screen>
</td>
</tr>
<tr>
<td>
<para>Deletes a QoS
queue.</para></td>
<td>
                                    <screen><prompt>$</prompt> <userinput>neutron queue-delete &lt;queue_id or name&gt;</userinput></screen>
</td>
</tr>
</tbody>
</table>
</section>
</section>
</section>
</section>
</section>
<section xml:id="section_networking-adv-operational_features">
<title>Advanced operational features</title>
<section xml:id="section_adv_logging">
<title>Logging settings</title>
            <para>Networking components use the standard Python logging
                module for logging. You can provide the logging
                configuration in <filename>neutron.conf</filename> or
                as command-line options. Command-line options override
                the settings in <filename>neutron.conf</filename>.</para>
<para>To configure logging for OpenStack Networking
components, use one of the following methods:</para>
<itemizedlist>
<listitem>
<para>Provide logging settings in a logging
configuration file.</para>
                    <para>See the <link
                        xlink:href="http://docs.python.org/howto/logging.html"
                        >Python Logging HOWTO</link> for details about
                        the logging configuration file.</para>
</listitem>
<listitem>
                    <para>Provide logging settings in
                        <filename>neutron.conf</filename>:</para>
<programlisting language="ini">[DEFAULT]
# Default log level is WARNING
# Show debugging output in logs (sets DEBUG log level output)
# debug = False
# Show more verbose log output (sets INFO log level output) if debug is False
# verbose = False
# log_format = %(asctime)s %(levelname)8s [%(name)s] %(message)s
# log_date_format = %Y-%m-%d %H:%M:%S
# use_syslog = False
# syslog_log_facility = LOG_USER
# if use_syslog is False, we can set log_file and log_dir.
# if use_syslog is False and we do not set log_file,
# the log will be printed to stdout.
# log_file =
# log_dir =</programlisting>
</listitem>
</itemizedlist>
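            <para>Because command-line options take precedence, you
                can also override the file-based settings when you
                start a service; a sketch for <systemitem
                class="service">neutron-server</systemitem> (the log
                file path is an assumption):</para>
            <screen><prompt>$</prompt> <userinput>neutron-server --config-file /etc/neutron/neutron.conf --debug --log-file /var/log/neutron/server.log</userinput></screen>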
</section>
<section xml:id="section_adv_notification">
<title>Notifications</title>
<para>Notifications can be sent when Networking resources
such as network, subnet and port are created, updated
or deleted.</para>
<section xml:id="section_adv_notification_overview">
<title>Notification options</title>
                <para>To support the DHCP agent, the
                    <literal>rpc_notifier</literal> driver must be
                    set. To set up notifications, edit the
                    notification options in
                    <filename>neutron.conf</filename>:</para>
<programlisting language="ini"># ============ Notification System Options =====================
# Notifications can be sent when network/subnet/port are create, updated or deleted.
# There are three methods of sending notifications: logging (via the
# log_file directive), rpc (via a message queue) and
# noop (no notifications sent, the default)
# Notification_driver can be defined multiple times
# Do nothing driver
# notification_driver = neutron.openstack.common.notifier.no_op_notifier
# Logging driver
# notification_driver = neutron.openstack.common.notifier.log_notifier
# RPC driver
notification_driver = neutron.openstack.common.notifier.rpc_notifier
# default_notification_level is used to form actual topic names or to set logging level
# default_notification_level = INFO
# default_publisher_id is a part of the notification payload
# host = myhost.com
# default_publisher_id = $host
# Defined in rpc_notifier for rpc way, can be comma separated values.
# The actual topic names will be %s.%(default_notification_level)s
notification_topics = notifications</programlisting>
</section>
<section xml:id="section_adv_notification_cases">
<title>Setting cases</title>
<section
xml:id="section_adv_notification_cases_log_rpc">
<title>Logging and RPC</title>
<para>The following options configure the
OpenStack Networking server to send
notifications through logging and RPC. The
                    logging options are described in the <citetitle
                        xmlns:svg="http://www.w3.org/2000/svg"
                        xmlns:html="http://www.w3.org/1999/xhtml"
                        >OpenStack Configuration
                        Reference</citetitle>. RPC notifications
                    go to the 'notifications.info' queue, which is bound to a
                    topic exchange defined by 'control_exchange'
                    in <filename>neutron.conf</filename>.</para>
<programlisting language="ini"># ============ Notification System Options =====================
# Notifications can be sent when network/subnet/port are create, updated or deleted.
# There are three methods of sending notifications: logging (via the
# log_file directive), rpc (via a message queue) and
# noop (no notifications sent, the default)
# Notification_driver can be defined multiple times
# Do nothing driver
# notification_driver = neutron.openstack.common.notifier.no_op_notifier
# Logging driver
notification_driver = neutron.openstack.common.notifier.log_notifier
# RPC driver
notification_driver = neutron.openstack.common.notifier.rpc_notifier
# default_notification_level is used to form actual topic names or to set logging level
default_notification_level = INFO
# default_publisher_id is a part of the notification payload
# host = myhost.com
# default_publisher_id = $host
# Defined in rpc_notifier for rpc way, can be comma separated values.
# The actual topic names will be %s.%(default_notification_level)s
notification_topics = notifications</programlisting>
</section>
<section
xml:id="ch_adv_notification_cases_multi_rpc_topics">
<title>Multiple RPC topics</title>
<para>The following options configure the
OpenStack Networking server to send
                    notifications to multiple RPC topics. RPC
                    notifications go to the 'notifications_one.info'
                    and 'notifications_two.info' queues, which are bound to a
                    topic exchange defined by 'control_exchange'
                    in <filename>neutron.conf</filename>.</para>
<programlisting language="ini"># ============ Notification System Options =====================
# Notifications can be sent when network/subnet/port are create, updated or deleted.
# There are three methods of sending notifications: logging (via the
# log_file directive), rpc (via a message queue) and
# noop (no notifications sent, the default)
# Notification_driver can be defined multiple times
# Do nothing driver
# notification_driver = neutron.openstack.common.notifier.no_op_notifier
# Logging driver
# notification_driver = neutron.openstack.common.notifier.log_notifier
# RPC driver
notification_driver = neutron.openstack.common.notifier.rabbit_notifier
# default_notification_level is used to form actual topic names or to set logging level
default_notification_level = INFO
# default_publisher_id is a part of the notification payload
# host = myhost.com
# default_publisher_id = $host
# Defined in rpc_notifier for rpc way, can be comma separated values.
# The actual topic names will be %s.%(default_notification_level)s
notification_topics = notifications_one,notifications_two</programlisting>
</section>
</section>
</section>
</section>
<section xml:id="section_auth">
<title>Authentication and authorization</title>
        <para>OpenStack Networking uses the OpenStack Identity service
            (project name keystone) as the default authentication
            service. When OpenStack Identity is enabled, users who
            submit requests to the OpenStack Networking service must
            provide an authentication token in the X-Auth-Token
            request header. You obtain the token by authenticating
            against an OpenStack Identity endpoint. For more
            information about authentication with OpenStack Identity,
            see the OpenStack Identity documentation. When OpenStack
            Identity is enabled, it is not mandatory to specify the
            tenant_id for resources in create requests because the
            tenant ID is derived from the authentication token.</para>
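        <para>For example, a minimal sketch of an authenticated API
            call, assuming that <code>$OS_TOKEN</code> holds a token
            previously obtained from OpenStack Identity and that the
            OpenStack Networking endpoint is
            <code>controller:9696</code>:</para>
        <screen><prompt>$</prompt> <userinput>curl -s -H "X-Auth-Token: $OS_TOKEN" http://controller:9696/v2.0/networks</userinput></screen>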
<note>
<para>The default authorization settings only allow
administrative users to create resources on behalf of
a different tenant. OpenStack Networking uses
information received from OpenStack Identity to
authorize user requests. OpenStack Networking handles
                two kinds of authorization policies:</para>
</note>
<itemizedlist>
<listitem>
<para><emphasis role="bold">Operation-based</emphasis>
policies specify access criteria for specific
operations, possibly with fine-grained control
over specific attributes;</para>
</listitem>
<listitem>
<para><emphasis role="bold">Resource-based</emphasis>
                    policies specify whether access to a specific
                    resource is granted or not, according to the
permissions configured for the resource (currently
available only for the network resource). The
actual authorization policies enforced in
OpenStack Networking might vary from deployment to
deployment.</para>
</listitem>
</itemizedlist>
<para>The policy engine reads entries from the <emphasis
role="italic">policy.json</emphasis> file. The actual
location of this file might vary from distribution to
distribution. Entries can be updated while the system is
running, and no service restart is required. Every time
the policy file is updated, the policies are automatically
reloaded. Currently the only way of updating such policies
is to edit the policy file. In this section, the terms
<emphasis role="italic">policy</emphasis> and
<emphasis role="italic">rule</emphasis> refer to
objects that are specified in the same way in the policy
file. There are no syntax differences between a rule and a
            policy. A policy is something that is matched directly
            by the OpenStack Networking policy engine. A rule is an
            element in a policy that is evaluated. For instance, in
<code>create_subnet:
[["admin_or_network_owner"]]</code>, <emphasis
role="italic">create_subnet</emphasis> is a policy,
and <emphasis role="italic"
>admin_or_network_owner</emphasis> is a rule.</para>
<para>Policies are triggered by the OpenStack Networking
policy engine whenever one of them matches an OpenStack
Networking API operation or a specific attribute being
used in a given operation. For instance the
<code>create_subnet</code> policy is triggered every
time a <code>POST /v2.0/subnets</code> request is sent to
the OpenStack Networking server; on the other hand
<code>create_network:shared</code> is triggered every
time the <emphasis role="italic">shared</emphasis>
attribute is explicitly specified (and set to a value
different from its default) in a <code>POST
/v2.0/networks</code> request. It is also worth
            mentioning that policies can also relate to specific
            API extensions; for instance,
            <code>extension:provider_network:set</code> is
            triggered if the attributes defined by the Provider
            Network extensions are specified in an API request.</para>
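        <para>As an illustration, the following sketch of a request
            triggers both the <code>create_network</code> and the
            <code>create_network:shared</code> policies, because the
            <emphasis role="italic">shared</emphasis> attribute is
            explicitly set (the endpoint and token are assumptions,
            as in the earlier example):</para>
        <screen><prompt>$</prompt> <userinput>curl -s -X POST http://controller:9696/v2.0/networks -H "X-Auth-Token: $OS_TOKEN" -H "Content-Type: application/json" -d '{"network": {"name": "shared-net", "shared": true}}'</userinput></screen>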
        <para>An authorization policy can be composed of one or more
            rules. If more than one rule is specified, the policy
            evaluates successfully if any of its rules evaluates
            successfully; if an API operation matches multiple
            policies, then all those policies must evaluate
            successfully. Authorization rules are also recursive:
            once a rule is matched, it can be resolved into another
            rule, until a terminal rule is reached.</para>
<para>The OpenStack Networking policy engine currently defines
the following kinds of terminal rules:</para>
<para><itemizedlist>
<listitem>
<para><emphasis role="bold">Role-based
rules</emphasis> evaluate successfully if
the user who submits the request has the
specified role. For instance
                        <code>"role:admin"</code> is successful if
the user submitting the request is an
administrator.</para>
</listitem>
<listitem>
                    <para><emphasis role="bold">Field-based
                        rules</emphasis> evaluate successfully if a field of
the resource specified in the current request
matches a specific value. For instance
<code>"field:networks:shared=True"</code>
is successful if the attribute <emphasis
role="italic">shared</emphasis> of the
<emphasis role="italic">network</emphasis>
resource is set to true.</para>
</listitem>
<listitem>
<para><emphasis role="bold">Generic
rules</emphasis> compare an attribute in
the resource with an attribute extracted from
                        the user's security credentials and evaluate
                        successfully if the comparison is successful.
For instance
<code>"tenant_id:%(tenant_id)s"</code> is
successful if the tenant identifier in the
resource is equal to the tenant identifier of
the user submitting the request.</para>
</listitem>
            </itemizedlist> The following is an extract from the
            default <filename>policy.json</filename> file:</para>
<para>
<programlisting language="bash">{
[1] "admin_or_owner": [["role:admin"], ["tenant_id:%(tenant_id)s"]],
"admin_or_network_owner": [["role:admin"], ["tenant_id:%(network_tenant_id)s"]],
"admin_only": [["role:admin"]], "regular_user": [],
"shared": [["field:networks:shared=True"]],
[2] "default": [["rule:admin_or_owner"]],
"create_subnet": [["rule:admin_or_network_owner"]],
"get_subnet": [["rule:admin_or_owner"], ["rule:shared"]],
"update_subnet": [["rule:admin_or_network_owner"]],
"delete_subnet": [["rule:admin_or_network_owner"]],
"create_network": [],
[3] "get_network": [["rule:admin_or_owner"], ["rule:shared"]],
[4] "create_network:shared": [["rule:admin_only"]],
"update_network": [["rule:admin_or_owner"]],
"delete_network": [["rule:admin_or_owner"]],
"create_port": [],
[5] "create_port:mac_address": [["rule:admin_or_network_owner"]],
"create_port:fixed_ips": [["rule:admin_or_network_owner"]],
"get_port": [["rule:admin_or_owner"]],
"update_port": [["rule:admin_or_owner"]],
"delete_port": [["rule:admin_or_owner"]]
}</programlisting>
</para>
<para>[1] is a rule which evaluates successfully if the
current user is an administrator or the owner of the
resource specified in the request (tenant identifier is
equal).</para>
<para>[2] is the default policy which is always evaluated if
an API operation does not match any of the policies in
policy.json.</para>
<para>[3] This policy evaluates successfully if either
<emphasis role="italic">admin_or_owner</emphasis>, or
<emphasis role="italic">shared</emphasis> evaluates
successfully.</para>
<para>[4] This policy restricts the ability to manipulate the
<emphasis role="italic">shared</emphasis> attribute
for a network to administrators only.</para>
<para>[5] This policy restricts the ability to manipulate the
<emphasis role="italic">mac_address</emphasis>
attribute for a port only to administrators and the owner
of the network where the port is attached.</para>
<para>In some cases, some operations should be restricted to
administrators only. The following example shows you how
to modify a policy file to permit tenants to define
networks and see their resources and permit administrative
users to perform all other operations:</para>
<programlisting language="bash">{
"admin_or_owner": [["role:admin"], ["tenant_id:%(tenant_id)s"]],
"admin_only": [["role:admin"]], "regular_user": [],
"default": [["rule:admin_only"]],
"create_subnet": [["rule:admin_only"]],
"get_subnet": [["rule:admin_or_owner"]],
"update_subnet": [["rule:admin_only"]],
"delete_subnet": [["rule:admin_only"]],
"create_network": [],
"get_network": [["rule:admin_or_owner"]],
"create_network:shared": [["rule:admin_only"]],
"update_network": [["rule:admin_or_owner"]],
"delete_network": [["rule:admin_or_owner"]],
"create_port": [["rule:admin_only"]],
"get_port": [["rule:admin_or_owner"]],
"update_port": [["rule:admin_only"]],
"delete_port": [["rule:admin_only"]]
}</programlisting>
</section>
<section xml:id="section_high_avail">
<title>High Availability</title>
        <para>The use of high availability in a Networking deployment
            helps mitigate the impact of individual node failures. In
            general, you can run <systemitem class="service"
            >neutron-server</systemitem> and <systemitem
            class="service">neutron-dhcp-agent</systemitem> in an
            active-active fashion. You can run the <systemitem
            class="service">neutron-l3-agent</systemitem> service as
            active/passive, which avoids IP conflicts with respect to
            gateway IP addresses.</para>
<section xml:id="ha_pacemaker">
<title>OpenStack Networking High Availability with
Pacemaker</title>
            <para>You can run some OpenStack Networking services in a
                Pacemaker cluster (active/passive, or active/active
                for the OpenStack Networking server only).</para>
            <para>Download the latest resource agents:</para>
<itemizedlist>
<listitem>
<para>neutron-server: <link
xlink:href="https://github.com/madkiss/openstack-resource-agents/blob/master/ocf/neutron-server"
>https://github.com/madkiss/openstack-resource-agents</link></para>
</listitem>
<listitem>
<para>neutron-dhcp-agent : <link
xlink:href="https://github.com/madkiss/openstack-resource-agents/blob/master/ocf/neutron-agent-dhcp"
>https://github.com/madkiss/openstack-resource-agents</link></para>
</listitem>
<listitem>
<para>neutron-l3-agent : <link
xlink:href="https://github.com/madkiss/openstack-resource-agents/blob/master/ocf/neutron-agent-l3"
>https://github.com/madkiss/openstack-resource-agents</link></para>
</listitem>
</itemizedlist>
<note xmlns:db="http://docbook.org/ns/docbook">
<para>For information about how to build a cluster,
see <link
xlink:href="http://www.clusterlabs.org/wiki/Documentation"
>Pacemaker documentation</link>.</para>
</note>
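            <para>As a sketch, after you install a resource agent you
                can inspect the parameters it accepts and define a
                primitive for it with the crm shell. The resource name
                and operation timings below are illustrative, and
                parameter names vary between agent versions, so check
                the agent metadata first:</para>
            <screen><prompt>$</prompt> <userinput>crm ra info ocf:openstack:neutron-server</userinput>
<prompt>$</prompt> <userinput>crm configure primitive p_neutron-server ocf:openstack:neutron-server op monitor interval="30s" timeout="30s"</userinput></screen>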
</section>
</section>
<section xml:id="section_pagination_and_sorting_support">
<title>Plug-in pagination and sorting support</title>
<table rules="all">
<caption>Plug-ins that support native pagination and
sorting</caption>
<thead>
<tr>
<th>Plug-in</th>
                    <th>Supports native pagination</th>
                    <th>Supports native sorting</th>
</tr>
</thead>
<tbody>
<tr>
<td>Open vSwitch</td>
<td>True</td>
<td>True</td>
</tr>
<tr>
<td>LinuxBridge</td>
<td>True</td>
<td>True</td>
</tr>
</tbody>
</table>
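        <para>When a plug-in supports them natively, and pagination
            and sorting are enabled in your deployment, list requests
            are paged and ordered through query parameters; a sketch,
            reusing the token and endpoint assumptions from the
            authentication section:</para>
        <screen><prompt>$</prompt> <userinput>curl -s -H "X-Auth-Token: $OS_TOKEN" "http://controller:9696/v2.0/networks?limit=2&amp;sort_key=name&amp;sort_dir=asc"</userinput></screen>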
</section>
</chapter>