<?xml version="1.0" encoding="UTF-8"?>
<chapter xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
xml:id="ch_networking">
<?dbhtml stop-chunking?>
<title>Networking</title>
<para>Learn Networking concepts, architecture, and basic and
advanced neutron and nova command-line interface (CLI)
commands so that you can administer Networking in a
cloud.</para>
<section xml:id="section_networking-intro">
<title>Introduction to Networking</title>
<para>The Networking service, code-named Neutron, provides an
API for defining network connectivity and addressing in
the cloud. The Networking service enables operators to
leverage different networking technologies to power their
cloud networking. The Networking service also provides an API to configure
and manage a variety of network services ranging from L3
forwarding and NAT to load balancing, edge firewalls, and
IPsec VPN.</para>
<para>For a detailed description of the Networking API
abstractions and their attributes, see the <link
xlink:href="http://docs.openstack.org/api/openstack-network/2.0/content/"
><citetitle>OpenStack Networking API v2.0
Reference</citetitle></link>.</para>
<section xml:id="section_networking-api">
<title>Networking API</title>
<para>Networking is a virtual network service that
provides a powerful API to define the network
connectivity and IP addressing used by devices from
other services, such as Compute.</para>
<para>The Compute API has a virtual server abstraction to
describe computing resources. Similarly, the
Networking API has virtual network, subnet, and port
abstractions to describe networking resources.</para>
<para>
<table rules="all">
<caption>Networking resources</caption>
<col width="10%"/>
<col width="90%"/>
<thead>
<tr>
<th>Resource</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td><emphasis role="bold">Network</emphasis></td>
<td>An isolated L2 segment, analogous to a VLAN in the physical networking world.</td>
</tr>
<tr>
<td><emphasis role="bold">Subnet</emphasis></td>
<td>A block of v4 or v6 IP addresses and associated configuration state.</td>
</tr>
<tr>
<td><emphasis role="bold">Port</emphasis></td>
<td>A connection point for attaching a single device, such as the NIC of a virtual server, to a virtual network. Also describes the associated network configuration, such as the MAC and IP addresses to be used on that port.</td>
</tr>
</tbody>
</table>
</para>
<para>You can configure rich network topologies by
creating and configuring networks and subnets, and
then instructing other OpenStack services like Compute
to attach virtual devices to ports on these networks.</para>
<para>In particular, Networking supports each tenant having
multiple private networks, and allows tenants to
choose their own IP addressing scheme, even if those
IP addresses overlap with those used by other
tenants (see the example after the following list). The
Networking service:</para>
<itemizedlist>
<listitem>
<para>Enables advanced cloud networking use cases,
such as building multi-tiered web applications
and allowing applications to be migrated to
the cloud without changing IP
addresses.</para>
</listitem>
<listitem>
<para>Offers flexibility for the cloud
administrator to customize network
offerings.</para>
</listitem>
<listitem>
<para>Enables developers to extend the Networking
API. Over time, the extended functionality
becomes part of the core Networking
API.</para>
</listitem>
</itemizedlist>
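<para>For example, because tenant networks are isolated from one
another, two tenants can each create a private network that uses the
same CIDR. A minimal sketch, run once with each tenant's credentials
(the network name and CIDR are examples only):</para>
<screen><prompt>$</prompt> <userinput>neutron net-create private-net</userinput>
<prompt>$</prompt> <userinput>neutron subnet-create private-net 10.0.0.0/24</userinput></screen>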
</section>
<section xml:id="section_plugin-arch">
<title>Plug-in architecture</title>
<para>The original Compute network implementation assumed
a basic model of isolation through Linux VLANs and
iptables. Networking introduces the concept of a
<emphasis role="italic">plug-in</emphasis>, which
is a back-end implementation of the Networking API. A
plug-in can use a variety of technologies to implement
the logical API requests. Some Networking plug-ins
might use basic Linux VLANs and iptables, while
others might use more advanced technologies, such as
L2-in-L3 tunneling or OpenFlow, to provide similar
benefits.</para>
<para>
<table rules="all">
<caption>Available networking plug-ins</caption>
<col width="40%"/>
<col width="60%"/>
<thead>
<tr>
<th>Plug-in</th>
<th>Documentation</th>
</tr>
</thead>
<tbody>
<tr>
<td><emphasis role="bold">Big Switch Plug-in (Floodlight REST Proxy)</emphasis></td>
<td>
<link xlink:href="http://www.openflowhub.org/display/floodlightcontroller/Neutron+REST+Proxy+Plugin"
>http://www.openflowhub.org/display/floodlightcontroller/Neutron+REST+Proxy+Plugin</link>
</td>
</tr>
<tr>
<td><emphasis role="bold">Brocade Plug-in</emphasis></td>
<td><link xlink:href="https://github.com/brocade/brocade"
>https://github.com/brocade/brocade</link></td>
</tr>
<tr>
<td><emphasis role="bold">Cisco</emphasis></td>
<td><link xlink:href="http://wiki.openstack.org/cisco-neutron"
>http://wiki.openstack.org/cisco-neutron</link></td>
</tr>
<tr>
<td><emphasis role="bold">Cloudbase Hyper-V Plug-in</emphasis></td>
<td><link xlink:href="http://www.cloudbase.it/quantum-hyper-v-plugin/"
>http://www.cloudbase.it/quantum-hyper-v-plugin/</link></td>
</tr>
<tr>
<td><emphasis role="bold">Linux Bridge Plug-in</emphasis></td>
<td><link xlink:href="http://wiki.openstack.org/Quantum-Linux-Bridge-Plugin"
>http://wiki.openstack.org/Neutron-Linux-Bridge-Plugin</link></td>
</tr>
<tr>
<td><emphasis role="bold">Mellanox Plug-in</emphasis></td>
<td><link xlink:href="https://wiki.openstack.org/wiki/Mellanox-Neutron/"
>https://wiki.openstack.org/wiki/Mellanox-Neutron/</link></td>
</tr>
<tr>
<td><emphasis role="bold">Midonet Plug-in</emphasis></td>
<td><link xlink:href="http://www.midokura.com/"
>http://www.midokura.com/</link></td>
</tr>
<tr>
<td><emphasis role="bold">ML2 (Modular Layer 2) Plug-in</emphasis></td>
<td><link xlink:href="https://wiki.openstack.org/wiki/Neutron/ML2"
>https://wiki.openstack.org/wiki/Neutron/ML2</link></td>
</tr>
<tr>
<td><emphasis role="bold">NEC OpenFlow Plug-in</emphasis></td>
<td><link xlink:href="http://wiki.openstack.org/Quantum-NEC-OpenFlow-Plugin"
>http://wiki.openstack.org/Quantum-NEC-OpenFlow-Plugin</link></td>
</tr>
<tr>
<td><emphasis role="bold">Nicira NVP Plug-in</emphasis></td>
<td><link xlink:href="http://www.vmware.com/products/datacenter-virtualization/nicira.html"
>NVP Product Overview</link>, <link xlink:href="http://www.nicira.com/support"
>NVP Product Support</link></td>
</tr>
<tr>
<td><emphasis role="bold">Open vSwitch Plug-in</emphasis></td>
<td>Included in this guide</td>
</tr>
<tr>
<td><emphasis role="bold">PLUMgrid</emphasis></td>
<td><link xlink:href="https://wiki.openstack.org/wiki/PLUMgrid-Neutron"
>https://https://wiki.openstack.org/wiki/PLUMgrid-Neutron</link></td>
</tr>
<tr>
<td><emphasis role="bold">Ryu Plug-in</emphasis></td>
<td><link xlink:href="https://github.com/osrg/ryu/wiki/OpenStack"
>https://github.com/osrg/ryu/wiki/OpenStack</link></td>
</tr>
</tbody>
</table>
</para>
<para>Plug-ins can have different properties for hardware requirements, features,
performance, scale, or operator tools. Because Networking supports a large number of
plug-ins, the cloud administrator can weigh options to decide on the right
networking technology for the deployment.</para>
<para>In the Havana release, OpenStack Networking provides the <emphasis role="bold">Modular
Layer 2 (ML2)</emphasis> plug-in that can concurrently use multiple layer 2
networking technologies that are found in real-world data centers. It currently
works with the existing Open vSwitch, Linux Bridge, and Hyper-V L2 agents. The ML2
framework simplifies the addition of support for new L2 technologies and reduces the
effort that is required to add and maintain them compared to monolithic
plug-ins.</para>
<note>
<title>Plug-in deprecation notice</title>
<para>The Open vSwitch and Linux Bridge plug-ins are deprecated in the Havana
release and will be removed in the Icehouse release. All features have been
ported to the ML2 plug-in in the form of mechanism drivers. ML2 currently
provides Linux Bridge, Open vSwitch, and Hyper-V mechanism drivers.</para>
</note>
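<para>As a minimal sketch, an ML2 deployment that uses the Open vSwitch
mechanism driver with GRE tenant networks might set the following options
in the <filename>ml2_conf.ini</filename> file (the values shown are
examples only; see the <citetitle>OpenStack Configuration
Reference</citetitle> for the options that apply to your
deployment):</para>
<programlisting language="ini">[ml2]
# Network types that the ML2 plug-in can create and the drivers that implement them
type_drivers = gre
tenant_network_types = gre
mechanism_drivers = openvswitch

[ml2_type_gre]
# Range of GRE tunnel IDs available for tenant networks
tunnel_id_ranges = 1:1000</programlisting>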
<para>Not all Networking plug-ins are compatible with all
possible Compute drivers:</para>
<table rules="all">
<caption>Plug-in compatibility with Compute
drivers</caption>
<thead>
<tr>
<th/>
<th>Libvirt (KVM/QEMU)</th>
<th>XenServer</th>
<th>VMware</th>
<th>Hyper-V</th>
<th>Bare-metal</th>
<th>PowerVM</th>
</tr>
</thead>
<tbody>
<tr>
<td>Big Switch / Floodlight</td>
<td>Yes</td>
<td/>
<td/>
<td/>
<td/>
<td/>
</tr>
<tr>
<td>Brocade</td>
<td>Yes</td>
<td/>
<td/>
<td/>
<td/>
<td/>
</tr>
<tr>
<td>Cisco</td>
<td>Yes</td>
<td/>
<td/>
<td/>
<td/>
<td/>
</tr>
<tr>
<td>Cloudbase Hyper-V</td>
<td/>
<td/>
<td/>
<td>Yes</td>
<td/>
<td/>
</tr>
<tr>
<td>Linux Bridge</td>
<td>Yes</td>
<td/>
<td/>
<td/>
<td/>
<td/>
</tr>
<tr>
<td>Mellanox</td>
<td>Yes</td>
<td/>
<td/>
<td/>
<td/>
<td/>
</tr>
<tr>
<td>Midonet</td>
<td>Yes</td>
<td/>
<td/>
<td/>
<td/>
<td/>
</tr>
<tr>
<td>ML2</td>
<td>Yes</td>
<td/>
<td/>
<td>Yes</td>
<td/>
<td/>
</tr>
<tr>
<td>NEC OpenFlow</td>
<td>Yes</td>
<td/>
<td/>
<td/>
<td/>
<td/>
</tr>
<tr>
<td>Nicira NVP</td>
<td>Yes</td>
<td>Yes</td>
<td>Yes</td>
<td/>
<td/>
<td/>
</tr>
<tr>
<td>Open vSwitch</td>
<td>Yes</td>
<td/>
<td/>
<td/>
<td/>
<td/>
</tr>
<tr>
<td>PLUMgrid</td>
<td>Yes</td>
<td/>
<td>Yes</td>
<td/>
<td/>
<td/>
</tr>
<tr>
<td>Ryu</td>
<td>Yes</td>
<td/>
<td/>
<td/>
<td/>
<td/>
</tr>
</tbody>
</table>
</section>
</section>
<section xml:id="section_networking-arch">
<title>Networking architecture</title>
<para>Before you deploy Networking, it helps to understand the
Networking components and how these components interact
with each other and with other OpenStack services.</para>
<section xml:id="arch_overview">
<title>Overview</title>
<para>Networking is a standalone service, just like other
OpenStack services such as Compute, Image service,
Identity service, or the Dashboard. Like those
services, a deployment of Networking often involves
deploying several processes on a variety of
hosts.</para>
<para>The Networking server uses the <systemitem
class="service">neutron-server</systemitem> daemon
to expose the Networking API and to pass user requests
to the configured Networking plug-in for additional
processing. Typically, the plug-in requires access to
a database for persistent storage (also similar to
other OpenStack services).</para>
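<para>For example, a minimal sketch of the relevant settings in the
<filename>neutron.conf</filename> file selects the plug-in that
<systemitem class="service">neutron-server</systemitem> loads and points
it at a database (the connection string is a placeholder, and the exact
location of these options can vary with the release and plug-in):</para>
<programlisting language="ini">[DEFAULT]
# Back-end plug-in loaded by neutron-server
core_plugin = neutron.plugins.ml2.plugin.Ml2Plugin

[database]
# Persistent storage used by the plug-in
connection = mysql://neutron:NEUTRON_DBPASS@controller/neutron</programlisting>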
<para>If your deployment uses a controller host to run
centralized Compute components, you can deploy the
Networking server on that same host. However,
Networking is entirely standalone and can be deployed
on its own host as well. Depending on your deployment,
Networking can also include the following
agents.</para>
<para>
<table rules="all">
<caption>Networking agents</caption>
<col width="30%"/>
<col width="70%"/>
<thead>
<tr>
<th>Agent</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td><emphasis role="bold">plug-in agent</emphasis>
(<literal>neutron-*-agent</literal>)</td>
<td>Runs on each hypervisor to perform local vswitch
configuration. The agent that runs depends on
the plug-in that you use, and some plug-ins do
not require an agent.</td>
</tr>
<tr>
<td><emphasis role="bold">dhcp agent</emphasis>
(<literal>neutron-dhcp-agent</literal>)</td>
<td>Provides DHCP services to tenant networks.
Some plug-ins use this agent.</td>
</tr>
<tr>
<td><emphasis role="bold">l3 agent</emphasis>
(<literal>neutron-l3-agent</literal>)</td>
<td>Provides L3/NAT forwarding to provide external
network access for VMs on tenant networks.
Some plug-ins use this agent.</td>
</tr>
</tbody>
</table>
</para>
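<para>After the agents start, you can verify that they have registered
with the Networking server:</para>
<screen><prompt>$</prompt> <userinput>neutron agent-list</userinput></screen>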
<para>These agents interact with the main neutron process
through RPC (for example, RabbitMQ or Qpid) or through
the standard Networking API. Further:</para>
<itemizedlist>
<listitem>
<para>Networking relies on the Identity service
(Keystone) for the authentication and
authorization of all API requests.</para>
</listitem>
<listitem>
<para>Compute (Nova) interacts with Networking
through calls to its standard API.  As part of
creating a VM, the <systemitem class="service"
>nova-compute</systemitem> service
communicates with the Networking API to plug
each virtual NIC on the VM into a particular
network.  </para>
</listitem>
<listitem>
<para>The Dashboard (Horizon) integrates with the
Networking API, enabling administrators and
tenant users to create and manage network
services through a web-based GUI.</para>
</listitem>
</itemizedlist>
</section>
<section xml:id="networking-services">
<title>Place services on physical hosts</title>
<para>Like other OpenStack services, Networking enables
cloud administrators to run one or more services on
one or more physical devices. At one extreme, the
cloud administrator can run all service daemons on a
single physical host for evaluation purposes.
Alternatively the cloud administrator can run each
service on its own physical host and, in some cases,
can replicate services across multiple hosts for
redundancy. For more information, see the <citetitle
xmlns:svg="http://www.w3.org/2000/svg"
xmlns:html="http://www.w3.org/1999/xhtml"
>OpenStack Configuration
Reference</citetitle>.</para>
<para>A standard architecture includes a cloud controller
host, a network gateway host, and a set of hypervisors
that run virtual machines. The cloud controller and
network gateway can be on the same host. However, if
you expect VMs to send significant traffic to or from
the Internet, a dedicated network gateway host helps
avoid CPU contention between the <systemitem
class="service">neutron-l3-agent</systemitem> and
other OpenStack services that forward packets.</para>
</section>
<section xml:id="network-connectivity">
<title>Network connectivity for physical hosts</title>
<mediaobject>
<imageobject>
<imagedata scale="50"
fileref="../common/figures/Neutron-PhysNet-Diagram.png"
/>
</imageobject>
</mediaobject>
<para>A standard Networking setup has one or more of the
following distinct physical data center
networks.</para>
<para>
<table rules="all">
<caption>General distinct physical data center networks</caption>
<col width="20%"/>
<col width="80%"/>
<thead>
<tr>
<th>Network</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td><emphasis role="bold">Management network</emphasis></td>
<td>Provides internal communication between OpenStack components. IP
addresses on this network should be reachable only within the data center.</td>
</tr>
<tr>
<td><emphasis role="bold">Data network</emphasis></td>
<td>Provides VM data communication within the cloud deployment. The IP addressing
requirements of this network depend on the Networking plug-in that is used.</td>
</tr>
<tr>
<td><emphasis role="bold">External network</emphasis></td>
<td>Provides VMs with Internet access in some deployment scenarios.
Anyone on the Internet can reach IP addresses on this network.</td>
</tr>
<tr>
<td><emphasis role="bold">API network</emphasis></td>
<td>Exposes all OpenStack APIs, including the Networking API, to
tenants. IP addresses on this network should be reachable by anyone on the Internet. The
API network might be the same as the external network, because it is possible to create an
external-network subnet whose allocated IP range uses only part of the IP block.</td>
</tr>
</tbody>
</table>
</para>
</section>
</section>
<section xml:id="section_networking-use">
<title>Use Networking</title>
<para>You can use Networking in the following ways:</para>
<itemizedlist>
<listitem>
<para>Expose the Networking API to cloud tenants,
which enables them to build rich network
topologies.</para>
</listitem>
<listitem>
<para>Have the cloud administrator, or an automated
administrative tool, create network connectivity
on behalf of tenants.</para>
</listitem>
</itemizedlist>
<para>Either a tenant or a cloud administrator can perform the
following procedures.</para>
<section xml:id="api_features">
<title>Core Networking API features</title>
<para>After you install and run Networking, tenants and
administrators can perform create-read-update-delete
(CRUD) API networking operations by using the
Networking API directly or the neutron command-line
interface (CLI). The neutron CLI is a wrapper around
the Networking API. Every Networking API call has a
corresponding neutron command.</para>
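<para>For example, listing networks with the CLI is equivalent to issuing
the corresponding API request yourself (the <literal>controller</literal>
host name is a placeholder, <literal>9696</literal> is the default
Networking API port, and <replaceable>token</replaceable> is an Identity
token that you have already obtained):</para>
<screen><prompt>$</prompt> <userinput>neutron net-list</userinput>
<prompt>$</prompt> <userinput>curl -s -H "X-Auth-Token: <replaceable>token</replaceable>" http://controller:9696/v2.0/networks</userinput></screen>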
<para>The CLI includes a number of options. For details,
refer to the <link
xlink:href="http://docs.openstack.org/user-guide/content/"
><citetitle>OpenStack End User
Guide</citetitle></link>.</para>
<section xml:id="api_abstractions">
<title>API abstractions</title>
<para>The Networking v2.0 API provides control over
both L2 network topologies and the IP addresses
used on those networks (IP Address Management or
IPAM). There is also an extension to cover basic
L3 forwarding and NAT, which provides capabilities
similar to <command>nova-network</command>.</para>
<para><table rules="all">
<caption>API abstractions</caption>
<col width="10%"/>
<col width="90%"/>
<thead>
<tr>
<th>Abstraction</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td><emphasis role="bold">Network</emphasis></td>
<td>An isolated L2 network segment
(similar to a VLAN) that forms the basis
for describing the L2 network topology
available in a Networking deployment.</td>
</tr>
<tr>
<td><emphasis role="bold">Subnet</emphasis></td>
<td>Associates a block of IP
addresses and other network configuration,
such as default gateways or DNS servers,
with a Networking network. Each subnet
represents an IPv4 or IPv6 address block
and, if needed, each Networking network
can have multiple subnets.</td>
</tr>
<tr>
<td><emphasis role="bold">Port</emphasis></td>
<td>Represents an attachment port to an
L2 Networking network. When a port is
created on the network, by default it is
allocated an available fixed IP address
out of one of the designated subnets for
each IP version (if one exists). When the
port is destroyed, its allocated addresses
return to the pool of available IPs on the
subnet. Users of the Networking API can
either choose a specific IP address from
the block, or let Networking choose the
first available IP address.</td>
</tr>
</tbody>
</table></para>
<?hard-pagebreak?>
<para>The following table summarizes the attributes
available for each networking abstraction. For
information about API abstraction and operations,
see the <link
xlink:href="http://docs.openstack.org/api/openstack-network/2.0/content/"
>Networking API v2.0 Reference</link>.</para>
<table rules="all">
<caption>Network attributes</caption>
<col width="20%"/>
<col width="15%"/>
<col width="17%"/>
<col width="47%"/>
<thead>
<tr>
<th>Attribute</th>
<th>Type</th>
<th>Default value</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td><option>admin_state_up</option></td>
<td>bool</td>
<td>True</td>
<td>Administrative state of the network.
If specified as False (down), this
network does not forward packets.
</td>
</tr>
<tr>
<td><option>id</option></td>
<td>uuid-str</td>
<td>Generated</td>
<td>UUID for this network.</td>
</tr>
<tr>
<td><option>name</option></td>
<td>string</td>
<td>None</td>
<td>Human-readable name for this network;
is not required to be unique.</td>
</tr>
<tr>
<td><option>shared</option></td>
<td>bool</td>
<td>False</td>
<td>Specifies whether this network
resource can be accessed by any
tenant. The default policy setting
restricts usage of this attribute to
administrative users only.</td>
</tr>
<tr>
<td><option>status</option></td>
<td>string</td>
<td>N/A</td>
<td>Indicates whether this network is
currently operational.</td>
</tr>
<tr>
<td><option>subnets</option></td>
<td>list(uuid-str)</td>
<td>Empty list</td>
<td>List of subnets associated with this
network.</td>
</tr>
<tr>
<td><option>tenant_id</option></td>
<td>uuid-str</td>
<td>N/A</td>
<td>Tenant owner of the network. Only
administrative users can set the
tenant identifier; this cannot be
changed using authorization policies.
</td>
</tr>
</tbody>
</table>
<table rules="all">
<caption>Subnet Attributes</caption>
<col width="20%"/>
<col width="15%"/>
<col width="17%"/>
<col width="47%"/>
<thead>
<tr>
<th>Attribute</th>
<th>Type</th>
<th>Default Value</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td><option>allocation_pools</option></td>
<td>list(dict)</td>
<td>Every address in
<option>cidr</option>, excluding
<option>gateway_ip</option> (if
configured).</td>
<td><para>List of CIDR sub-ranges that are
available for dynamic allocation to
ports. Syntax:</para>
<programlisting language="json">[ { "start":"10.0.0.2",
"end": "10.0.0.254"} ]</programlisting>
</td>
</tr>
<tr>
<td><option>cidr</option></td>
<td>string</td>
<td>N/A</td>
<td>IP range for this subnet, based on the
IP version.</td>
</tr>
<tr>
<td><option>dns_nameservers</option></td>
<td>list(string)</td>
<td>Empty list</td>
<td>List of DNS name servers used by hosts
in this subnet.</td>
</tr>
<tr>
<td><option>enable_dhcp</option></td>
<td>bool</td>
<td>True</td>
<td>Specifies whether DHCP is enabled for
this subnet.</td>
</tr>
<tr>
<td><option>gateway_ip</option></td>
<td>string</td>
<td>First address in <option>cidr</option>
</td>
<td>Default gateway used by devices in
this subnet.</td>
</tr>
<tr>
<td><option>host_routes</option></td>
<td>list(dict)</td>
<td>Empty list</td>
<td>Routes that should be used by devices
with IPs from this subnet (not
including local subnet route).</td>
</tr>
<tr>
<td><option>id</option></td>
<td>uuid-string</td>
<td>Generated</td>
<td>UUID representing this subnet.</td>
</tr>
<tr>
<td><option>ip_version</option></td>
<td>int</td>
<td>4</td>
<td>IP version.</td>
</tr>
<tr>
<td><option>name</option></td>
<td>string</td>
<td>None</td>
<td>Human-readable name for this subnet
(might not be unique).</td>
</tr>
<tr>
<td><option>network_id</option></td>
<td>uuid-string</td>
<td>N/A</td>
<td>Network with which this subnet is
associated.</td>
</tr>
<tr>
<td><option>tenant_id</option></td>
<td>uuid-string</td>
<td>N/A</td>
<td>Owner of network. Only administrative
users can set the tenant identifier;
this cannot be changed using
authorization policies.</td>
</tr>
</tbody>
</table>
<?hard-pagebreak?>
<table rules="all">
<caption>Port attributes</caption>
<col width="20%"/>
<col width="15%"/>
<col width="17%"/>
<col width="47%"/>
<thead>
<tr>
<th>Attribute</th>
<th>Type</th>
<th>Default Value</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td><option>admin_state_up</option></td>
<td>bool</td>
<td>True</td>
<td>Administrative state of this port. If
specified as False (down), this port
does not forward packets.</td>
</tr>
<tr>
<td><option>device_id</option></td>
<td>string</td>
<td>None</td>
<td>Identifies the device using this port
(for example, a virtual server's ID).
</td>
</tr>
<tr>
<td><option>device_owner</option></td>
<td>string</td>
<td>None</td>
<td>Identifies the entity using this port
(for example, a DHCP agent).</td>
</tr>
<tr>
<td><option>fixed_ips</option></td>
<td>list(dict)</td>
<td>Automatically allocated from pool</td>
<td>Specifies IP addresses for this port;
associates the port with the subnets
containing the listed IP addresses.
</td>
</tr>
<tr>
<td><option>id</option></td>
<td>uuid-string</td>
<td>Generated</td>
<td>UUID for this port.</td>
</tr>
<tr>
<td><option>mac_address</option></td>
<td>string</td>
<td>Generated</td>
<td>MAC address to use on this port.</td>
</tr>
<tr>
<td><option>name</option></td>
<td>string</td>
<td>None</td>
<td>Human-readable name for this port
(might not be unique).</td>
</tr>
<tr>
<td><option>network_id</option></td>
<td>uuid-string</td>
<td>N/A</td>
<td>Network with which this port is
associated.</td>
</tr>
<tr>
<td><option>status</option></td>
<td>string</td>
<td>N/A</td>
<td>Indicates whether this port is
currently operational.</td>
</tr>
<tr>
<td><option>tenant_id</option></td>
<td>uuid-string</td>
<td>N/A</td>
<td>Owner of the network. Only
administrative users can set the
tenant identifier; this cannot be
changed using authorization policies.
</td>
</tr>
</tbody>
</table>
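<para>The following sketch ties the three abstractions together: it
creates a network, adds a subnet with an explicit gateway and allocation
pool, and then creates a port with a fixed IP address on that subnet (all
names and addresses are examples only):</para>
<screen><prompt>$</prompt> <userinput>neutron net-create example-net</userinput>
<prompt>$</prompt> <userinput>neutron subnet-create example-net 192.168.10.0/24 --gateway 192.168.10.1 \
--allocation-pool start=192.168.10.10,end=192.168.10.200</userinput>
<prompt>$</prompt> <userinput>neutron port-create example-net --fixed-ip ip_address=192.168.10.20</userinput></screen>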
</section>
<section xml:id="basic_operations">
<title>Basic Networking operations</title>
<para>To learn about advanced capabilities that are
available through the neutron command-line
interface (CLI), read the networking section in
the <link
xlink:href="http://docs.openstack.org/user-guide/content/index.html"
> OpenStack End User Guide</link>.</para>
<para>The following table shows example neutron
commands that enable you to complete basic
Networking operations:</para>
<table rules="all">
<caption>Basic Networking operations</caption>
<col width="40%"/>
<col width="60%"/>
<thead>
<tr>
<th>Operation</th>
<th>Command</th>
</tr>
</thead>
<tbody>
<tr>
<td>Creates a network.</td>
<td><screen><prompt>$</prompt> <userinput>neutron net-create net1</userinput></screen></td>
</tr>
<tr>
<td>Creates a subnet that is associated
with net1.</td>
<td><screen><prompt>$</prompt> <userinput>neutron subnet-create net1 10.0.0.0/24</userinput></screen></td>
</tr>
<tr>
<td>Lists ports for a specified
tenant.</td>
<td><screen><prompt>$</prompt> <userinput>neutron port-list</userinput></screen></td>
</tr>
<tr>
<td>Lists ports for a specified tenant and
displays the <option>id</option>,
<option>fixed_ips</option>, and
<option>device_owner</option>
columns.</td>
<td><screen><prompt>$</prompt> <userinput>neutron port-list -c id -c fixed_ips -c device_owner</userinput></screen>
</td>
</tr>
<tr>
<td>Shows information for a specified
port.</td>
<td><screen><prompt>$</prompt> <userinput>neutron port-show <replaceable>port-id</replaceable></userinput></screen></td>
</tr>
</tbody>
</table>
<note>
<para>The <option>device_owner</option> field
describes who owns the port. A port whose
<option>device_owner</option> begins
with:</para>
<itemizedlist>
<listitem>
<para><literal>network</literal> is
created by Networking.</para>
</listitem>
<listitem>
<para><literal>compute</literal> is
created by Compute.</para>
</listitem>
</itemizedlist>
</note>
</section>
<section xml:id="admin_api_config">
<title>Administrative operations</title>
<para>The cloud administrator can run any
<command>neutron</command> command on behalf
of tenants by specifying an Identity
<option>tenant_id</option> in the command, as
follows:</para>
<screen><prompt>$</prompt> <userinput>neutron net-create --tenant-id=<replaceable>tenant-id</replaceable> <replaceable>network-name</replaceable></userinput></screen>
<para>For example:</para>
<screen><prompt>$</prompt> <userinput>neutron net-create --tenant-id=5e4bbe24b67a4410bc4d9fae29ec394e net1</userinput></screen>
<note>
<para>To view all tenant IDs in Identity, run the
following command as an Identity Service admin
user:</para>
<screen><prompt>$</prompt> <userinput>keystone tenant-list</userinput></screen>
</note>
</section>
<?hard-pagebreak?>
<section xml:id="advanced_networking">
<title>Advanced Networking operations</title>
<para>The following table shows example neutron
commands that enable you to complete advanced
Networking operations:</para>
<table rules="all">
<caption>Advanced Networking operations</caption>
<col width="40%"/>
<col width="60%"/>
<thead>
<tr>
<th>Operation</th>
<th>Command</th>
</tr>
</thead>
<tbody>
<tr>
<td>Creates a network that all tenants can
use.</td>
<td><screen><prompt>$</prompt> <userinput>neutron net-create --shared public-net</userinput></screen></td>
</tr>
<tr>
<td>Creates a subnet with a specified
gateway IP address.</td>
<td><screen><prompt>$</prompt> <userinput>neutron subnet-create --gateway 10.0.0.254 net1 10.0.0.0/24</userinput></screen></td>
</tr>
<tr>
<td>Creates a subnet that has no gateway
IP address.</td>
<td><screen><prompt>$</prompt> <userinput>neutron subnet-create --no-gateway net1 10.0.0.0/24</userinput></screen></td>
</tr>
<tr>
<td>Creates a subnet with DHCP
disabled.</td>
<td><screen><prompt>$</prompt> <userinput>neutron subnet-create net1 10.0.0.0/24 --enable_dhcp False</userinput></screen></td>
</tr>
<tr>
<td>Creates a subnet with a specified set
of host routes.</td>
<td><screen><prompt>$</prompt> <userinput>neutron subnet-create test-net1 40.0.0.0/24 --host_routes type=dict list=true destination=40.0.1.0/24,nexthop=40.0.0.2</userinput></screen></td>
</tr>
<tr>
<td>Creates a subnet with a specified set
of DNS name servers.</td>
<td><screen><prompt>$</prompt> <userinput>neutron subnet-create test-net1 40.0.0.0/24 --dns_nameservers list=true 8.8.8.7 8.8.8.8</userinput></screen></td>
</tr>
<tr>
<td>Displays all ports and IPs allocated
on a network.</td>
<td><screen><prompt>$</prompt> <userinput>neutron port-list --network_id <replaceable>net-id</replaceable></userinput></screen></td>
</tr>
</tbody>
</table>
</section>
</section>
<?hard-pagebreak?>
<section xml:id="using_nova_with_neutron">
<title>Use Compute with Networking</title>
<section xml:id="basic_workflow_with_nova">
<title>Basic Compute and Networking operations</title>
<para>The following table shows example neutron and
nova commands that enable you to complete basic
Compute and Networking operations:</para>
<table rules="all">
<caption>Basic Compute/Networking
operations</caption>
<col width="40%"/>
<col width="60%"/>
<thead>
<tr>
<th>Action</th>
<th>Command</th>
</tr>
</thead>
<tbody>
<tr>
<td>Checks available networks.</td>
<td><screen><prompt>$</prompt> <userinput>neutron net-list</userinput></screen></td>
</tr>
<tr>
<td>Boots a VM with a single NIC on a
selected Networking network.</td>
<td><screen><prompt>$</prompt> <userinput>nova boot --image <replaceable>img</replaceable> --flavor <replaceable>flavor</replaceable> --nic net-id=<replaceable>net-id</replaceable> <replaceable>vm-name</replaceable></userinput></screen>
</td>
</tr>
<tr>
<td><para>Searches for ports with a
<option>device_id</option> that
matches the Compute instance UUID.
See <xref
linkend="network_compute_note"/>.
</para>
</td>
<td><screen><prompt>$</prompt> <userinput>neutron port-list --device_id=<replaceable>vm-id</replaceable></userinput></screen></td>
</tr>
<tr>
<td>Searches for ports, but shows only the
<option>mac_address</option> for
the port.</td>
<td><screen><prompt>$</prompt> <userinput>neutron port-list --field mac_address --device_id=<replaceable>vm-id</replaceable></userinput></screen></td>
</tr>
<tr>
<td>Temporarily disables a port from
sending traffic.</td>
<td><screen><prompt>$</prompt> <userinput>neutron port-update <replaceable>port-id</replaceable> --admin_state_up=False</userinput></screen></td>
</tr>
</tbody>
</table>
<note>
<para>The <option>device_id</option> can also be a
logical router ID.</para>
</note>
<note xml:id="network_compute_note">
<title>VM creation and deletion</title>
<itemizedlist>
<listitem>
<para>When you boot a Compute VM, a port
on the network that corresponds to the
VM NIC is automatically created and
associated with the default security
group. You can configure <link
linkend="enabling_ping_and_ssh"
>security group rules</link> to
enable users to access the VM.</para>
</listitem>
<listitem>
<para>When you delete a Compute VM, the
underlying Networking port is
automatically deleted.</para>
</listitem>
</itemizedlist>
</note>
</section>
<section xml:id="advanced_vm_creation">
<title>Advanced VM creation operations</title>
<para>The following table shows example nova and
neutron commands that enable you to complete
advanced VM creation operations:</para>
<table rules="all">
<caption>Advanced VM creation operations</caption>
<col width="40%"/>
<col width="60%"/>
<thead>
<tr>
<th>Operation</th>
<th>Command</th>
</tr>
</thead>
<tbody>
<tr>
<td>Boots a VM with multiple NICs.</td>
<td><screen><prompt>$</prompt> <userinput>nova boot --image <replaceable>img</replaceable> --flavor <replaceable>flavor</replaceable> --nic net-id=<replaceable>net1-id</replaceable> --nic net-id=<replaceable>net2-id</replaceable> <replaceable>vm-name</replaceable></userinput></screen></td>
</tr>
<tr>
<td>Boots a VM with a specific IP address.
First, create a Networking port with
a specific IP address. Then, boot a VM
specifying a <option>port-id</option>
rather than a
<option>net-id</option>.</td>
<td><screen><prompt>$</prompt> <userinput>neutron port-create --fixed-ip subnet_id=<replaceable>subnet-id</replaceable>,ip_address=<replaceable>IP</replaceable> <replaceable>net-id</replaceable></userinput>
<prompt>$</prompt> <userinput>nova boot --image <replaceable>img</replaceable> --flavor <replaceable>flavor</replaceable> --nic port-id=<replaceable>port-id</replaceable> <replaceable>vm-name</replaceable></userinput></screen>
</td>
</tr>
<tr>
<td>Boots a VM that connects to all
networks that are accessible to the
tenant who submits the request
(without the <option>--nic</option>
option).</td>
<td><screen><prompt>$</prompt> <userinput>nova boot --image <replaceable>img</replaceable> --flavor <replaceable>flavor</replaceable> <replaceable>vm-name</replaceable></userinput></screen>
</td>
</tr>
</tbody>
</table>
<note>
<para>Networking does not currently support the
<option>v4-fixed-ip</option> parameter
of the <option>--nic</option> option for the
<command>nova</command> command.</para>
</note>
</section>
<section xml:id="enabling_ping_and_ssh">
<title>Security groups (enabling ping and SSH on
VMs)</title>
<para>You must configure security group rules
depending on the type of plug-in you are using. If
you are using a plug-in that:</para>
<itemizedlist>
<listitem>
<para>Implements Networking security groups,
you can configure security group rules
directly by using <command>neutron
security-group-rule-create</command>.
The following example allows
<command>ping</command> and
<command>ssh</command> access to your
VMs.</para>
<screen><prompt>$</prompt> <userinput>neutron security-group-rule-create --protocol icmp \
--direction ingress default</userinput></screen>
<screen><prompt>$</prompt> <userinput>neutron security-group-rule-create --protocol tcp --port-range-min 22 \
--port-range-max 22 --direction ingress default</userinput></screen>
</listitem>
<listitem>
<para>Does not implement Networking security
groups, you can configure security group
rules by using the <command>nova
secgroup-add-rule</command> or
<command>euca-authorize</command>
command. The following
<command>nova</command> commands allow
<command>ping</command> and
<command>ssh</command> access to your
VMs.</para>
<screen><prompt>$</prompt> <userinput>nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0</userinput>
<prompt>$</prompt> <userinput>nova secgroup-add-rule default tcp 22 22 0.0.0.0/0</userinput></screen>
</listitem>
</itemizedlist>
<note>
<para>If your plug-in implements Networking
security groups, you can also leverage Compute
security groups by setting
<code>security_group_api = neutron</code>
in the <filename>nova.conf</filename> file.
After you set this option, all Compute
security group commands are proxied to
Networking.</para>
</note>
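<para>As a minimal sketch, the relevant <filename>nova.conf</filename>
settings look like the following (the driver class path can vary with
your release and hypervisor; check the <citetitle>OpenStack Configuration
Reference</citetitle>):</para>
<programlisting language="ini">[DEFAULT]
# Proxy Compute security group calls to Networking
security_group_api = neutron
# Let Networking enforce security group rules instead of nova-compute
firewall_driver = nova.virt.firewall.NoopFirewallDriver</programlisting>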
</section>
</section>
</section>
<xi:include href="section_networking_adv_features.xml"/>
<xi:include href="section_networking_adv_operational_features.xml"/>
<section xml:id="section_auth">
<title>Authentication and authorization</title>
<para>Networking uses the Identity Service as the default
authentication service. When the Identity Service is
enabled, users who submit requests to the Networking
service must provide an authentication token in the
<literal>X-Auth-Token</literal> request header. Users
obtain this token by authenticating with the Identity
Service endpoint. For more information about
authentication with the Identity Service, see <link
xlink:href="http://docs.openstack.org/api/openstack-identity-service/2.0/content/"
><citetitle>OpenStack Identity Service API v2.0
Reference</citetitle></link>. When the Identity
Service is enabled, it is not mandatory to specify the
tenant ID for resources in create requests because the
tenant ID is derived from the authentication token.</para>
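<para>For example, a client can obtain a token from the Identity Service
and then pass it to the Networking endpoint. The host name, credentials,
and subnets request below are placeholders; <literal>5000</literal> and
<literal>9696</literal> are the default Identity and Networking API
ports:</para>
<screen><prompt>$</prompt> <userinput>curl -s -X POST http://controller:5000/v2.0/tokens \
-H "Content-Type: application/json" \
-d '{"auth": {"tenantName": "demo", "passwordCredentials": {"username": "demo", "password": "<replaceable>password</replaceable>"}}}'</userinput>
<prompt>$</prompt> <userinput>curl -s -H "X-Auth-Token: <replaceable>token</replaceable>" http://controller:9696/v2.0/subnets</userinput></screen>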
<note>
<para>The default authorization settings only allow
administrative users to create resources on behalf of
a different tenant.</para>
</note>
<para>Networking uses information received from Identity to
authorize user requests. Networking handles two kinds of
authorization policies:</para>
<itemizedlist>
<listitem>
<para><emphasis role="bold">Operation-based</emphasis>
policies specify access criteria for specific
operations, possibly with fine-grained control
over specific attributes;</para>
</listitem>
<listitem>
<para><emphasis role="bold">Resource-based</emphasis>
policies specify whether access to specific
resource is granted or not according to the
permissions configured for the resource (currently
available only for the network resource). The
actual authorization policies enforced in
Networking might vary from deployment to
deployment.</para>
</listitem>
</itemizedlist>
<para>The policy engine reads entries from the
<filename>policy.json</filename> file. The actual
location of this file might vary from distribution to
distribution. Entries can be updated while the system is
running, and no service restart is required. Every time
the policy file is updated, the policies are automatically
reloaded. Currently the only way of updating such policies
is to edit the policy file. In this section, the terms
<emphasis role="italic">policy</emphasis> and
<emphasis role="italic">rule</emphasis> refer to
objects that are specified in the same way in the policy
file. There are no syntax differences between a rule and a
policy. A policy is something that is matched directly
by the Networking policy engine. A rule is an element of
a policy that is evaluated. For instance, in
<code>create_subnet:
[["admin_or_network_owner"]]</code>, <emphasis
role="italic">create_subnet</emphasis> is a policy,
and <emphasis role="italic"
>admin_or_network_owner</emphasis> is a rule.</para>
<para>Policies are triggered by the Networking policy engine
whenever one of them matches a Networking API operation
or a specific attribute being used in a given operation.
For instance the <code>create_subnet</code> policy is
triggered every time a <code>POST /v2.0/subnets</code>
request is sent to the Networking server; on the other
hand <code>create_network:shared</code> is triggered every
time the <emphasis role="italic">shared</emphasis>
attribute is explicitly specified (and set to a value
different from its default) in a <code>POST
/v2.0/networks</code> request. It is also worth
mentioning that policies can also relate to specific
API extensions; for instance
<code>extension:provider_network:set</code> is
triggered if the attributes defined by the Provider
Network extensions are specified in an API request.</para>
<para>An authorization policy can be composed of one or more
rules. If more than one rule is specified, the policy
succeeds if any of the rules evaluates successfully; if an
API operation matches multiple policies, then all the
policies must evaluate successfully. Also, authorization
rules are recursive: after a rule is matched, it
can resolve to another rule, until a terminal rule is
reached.</para>
<para>The Networking policy engine currently defines the
following kinds of terminal rules:</para>
<itemizedlist>
<listitem>
<para><emphasis role="bold">Role-based
rules</emphasis> evaluate successfully if the
user who submits the request has the specified
role. For instance <code>"role:admin"</code> is
successful if the user who submits the request is
an administrator.</para>
</listitem>
<listitem>
<para><emphasis role="bold">Field-based rules
</emphasis>evaluate successfully if a field of the
resource specified in the current request matches
a specific value. For instance
<code>"field:networks:shared=True"</code> is
successful if the <literal>shared</literal>
attribute of the <literal>network</literal>
resource is set to true.</para>
</listitem>
<listitem>
<para><emphasis role="bold">Generic rules</emphasis>
compare an attribute in the resource with an
attribute extracted from the user's security
credentials and evaluates successfully if the
comparison is successful. For instance
<code>"tenant_id:%(tenant_id)s"</code> is
successful if the tenant identifier in the
resource is equal to the tenant identifier of the
user submitting the request.</para>
</listitem>
</itemizedlist>
<para>The following is an extract from the default
<filename>policy.json</filename> file:</para>
<programlisting language="bash">{
[1] "admin_or_owner": [["role:admin"], ["tenant_id:%(tenant_id)s"]],
"admin_or_network_owner": [["role:admin"], ["tenant_id:%(network_tenant_id)s"]],
"admin_only": [["role:admin"]], "regular_user": [],
"shared": [["field:networks:shared=True"]],
[2] "default": [["rule:admin_or_owner"]],
"create_subnet": [["rule:admin_or_network_owner"]],
"get_subnet": [["rule:admin_or_owner"], ["rule:shared"]],
"update_subnet": [["rule:admin_or_network_owner"]],
"delete_subnet": [["rule:admin_or_network_owner"]],
"create_network": [],
[3] "get_network": [["rule:admin_or_owner"], ["rule:shared"]],
[4] "create_network:shared": [["rule:admin_only"]],
"update_network": [["rule:admin_or_owner"]],
"delete_network": [["rule:admin_or_owner"]],
"create_port": [],
[5] "create_port:mac_address": [["rule:admin_or_network_owner"]],
"create_port:fixed_ips": [["rule:admin_or_network_owner"]],
"get_port": [["rule:admin_or_owner"]],
"update_port": [["rule:admin_or_owner"]],
"delete_port": [["rule:admin_or_owner"]]
}</programlisting>
<para>[1] is a rule that evaluates successfully if the
current user is an administrator or the owner of the
resource specified in the request (that is, the tenant
identifiers match).</para>
<para>[2] is the default policy which is always evaluated if
an API operation does not match any of the policies in
<filename>policy.json</filename>.</para>
<para>[3] This policy evaluates successfully if either
<emphasis role="italic">admin_or_owner</emphasis>, or
<emphasis role="italic">shared</emphasis> evaluates
successfully.</para>
<para>[4] This policy restricts the ability to manipulate the
<emphasis role="italic">shared</emphasis> attribute
for a network to administrators only.</para>
<para>[5] This policy restricts the ability to manipulate the
<emphasis role="italic">mac_address</emphasis>
attribute for a port to administrators and the owner
of the network to which the port is attached.</para>
<para>In some cases, you might want to restrict certain operations to
administrators only. The following example shows you how
to modify a policy file so that tenants can define
networks and see their resources, while only administrative
users can perform all other operations:</para>
<programlisting language="bash">{
"admin_or_owner": [["role:admin"], ["tenant_id:%(tenant_id)s"]],
"admin_only": [["role:admin"]], "regular_user": [],
"default": [["rule:admin_only"]],
"create_subnet": [["rule:admin_only"]],
"get_subnet": [["rule:admin_or_owner"]],
"update_subnet": [["rule:admin_only"]],
"delete_subnet": [["rule:admin_only"]],
"create_network": [],
"get_network": [["rule:admin_or_owner"]],
"create_network:shared": [["rule:admin_only"]],
"update_network": [["rule:admin_or_owner"]],
"delete_network": [["rule:admin_or_owner"]],
"create_port": [["rule:admin_only"]],
"get_port": [["rule:admin_or_owner"]],
"update_port": [["rule:admin_only"]],
"delete_port": [["rule:admin_only"]]
}</programlisting>
</section>
<section xml:id="section_high_avail">
<title>High availability</title>
<para>Using high availability in a Networking deployment
helps mitigate the impact of individual node failures. In general, you
can run <systemitem class="service"
>neutron-server</systemitem> and <systemitem
class="service">neutron-dhcp-agent</systemitem> in an
active-active fashion. You can run the <systemitem
class="service">neutron-l3-agent</systemitem> service
as active/passive, which avoids IP conflicts with respect
to gateway IP addresses.</para>
<section xml:id="ha_pacemaker">
<title>Networking high availability with Pacemaker</title>
<para>You can run some Networking services in a cluster
with Pacemaker (active/passive, or active/active for the
Networking server only).</para>
<para>Download the latest resource agents:</para>
<itemizedlist>
<listitem>
<para>neutron-server: <link
xlink:href="https://github.com/madkiss/openstack-resource-agents/blob/master/ocf/neutron-server"
>https://github.com/madkiss/openstack-resource-agents</link></para>
</listitem>
<listitem>
<para>neutron-dhcp-agent: <link
xlink:href="https://github.com/madkiss/openstack-resource-agents/blob/master/ocf/neutron-agent-dhcp"
>https://github.com/madkiss/openstack-resource-agents</link></para>
</listitem>
<listitem>
<para>neutron-l3-agent: <link
xlink:href="https://github.com/madkiss/openstack-resource-agents/blob/master/ocf/neutron-agent-l3"
>https://github.com/madkiss/openstack-resource-agents</link></para>
</listitem>
</itemizedlist>
<note xmlns:db="http://docbook.org/ns/docbook">
<para>For information about how to build a cluster,
see <link
xlink:href="http://www.clusterlabs.org/wiki/Documentation"
>Pacemaker documentation</link>.</para>
</note>
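<para>As a minimal sketch, after you install the
<systemitem class="service">neutron-server</systemitem> resource agent
under the <literal>ocf:openstack</literal> provider directory, you can add
it as a cluster resource with the <command>crm</command> shell (the
parameters that your copy of the agent accepts can differ, so check the
agent metadata first):</para>
<screen><prompt>$</prompt> <userinput>crm configure primitive p_neutron-server ocf:openstack:neutron-server \
op monitor interval="30s" timeout="30s"</userinput></screen>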
</section>
</section>
<section xml:id="section_pagination_and_sorting_support">
<title>Plug-in pagination and sorting support</title>
<table rules="all">
<caption>Plug-ins that support native pagination and
sorting</caption>
<thead>
<tr>
<th>Plug-in</th>
<th>Supports native pagination</th>
<th>Supports native sorting</th>
</tr>
</thead>
<tbody>
<tr>
<td>ML2</td>
<td>True</td>
<td>True</td>
</tr>
<tr>
<td>Open vSwitch</td>
<td>True</td>
<td>True</td>
</tr>
<tr>
<td>Linux Bridge</td>
<td>True</td>
<td>True</td>
</tr>
</tbody>
</table>
</section>
</chapter>