<?xml version="1.0" encoding="UTF-8"?>
<chapter xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
xml:id="ch_adv_features">
<title>Advanced Features through API Extensions</title>
<para>This chapter describes several API extensions implemented by
multiple plugins. They are included in this guide because they
provide capabilities similar to those that were available in
nova-network and are therefore likely to be relevant to a large
portion of the OpenStack community.</para>
<section xml:id="provider_networks">
<title>Provider Networks</title>
<para>Provider networks allow cloud administrators to create
OpenStack Networking networks that map directly to
physical networks in the data center.  This is commonly
used to give tenants direct access to a "public" network
that can be used to reach the Internet.  It may also be
used to integrate with VLANs in the network that already
have a defined meaning (e.g., allow a VM from the
"marketing" department to be placed on the same VLAN as
bare-metal marketing hosts in the same data
center).</para>
<para>The provider extension allows administrators to
explicitly manage the relationship between OpenStack
Networking virtual networks and underlying physical
mechanisms such as VLANs and tunnels. When this extension
is supported, OpenStack Networking client users with
administrative privileges see additional provider
attributes on all virtual networks, and are able to
specify these attributes in order to create provider
networks.</para>
<para>The provider extension is supported by the openvswitch
and linuxbridge plugins. Configuration of these plugins
requires familiarity with this extension.</para>
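<para>For illustration only, a hypothetical excerpt from the
openvswitch plugin configuration that maps a physical network
name to a bridge and defines a VLAN range might look like the
following. The names <code>physnet1</code> and
<code>br-eth1</code> are placeholders, and the exact section and
option names should be verified against
<xref linkend="ovs_neutron_plugin"/>:</para>
<screen><computeroutput>[OVS]
tenant_network_type = vlan
network_vlan_ranges = physnet1:1000:2999
bridge_mappings = physnet1:br-eth1</computeroutput></screen>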
<section xml:id="provider_terminology">
<title>Terminology</title>
<para>A number of terms are used in the provider extension
and in the configuration of plugins supporting the
provider extension:<itemizedlist>
<listitem>
<para><emphasis role="bold">virtual
network</emphasis> - An OpenStack
Networking L2 network (identified by a
UUID and optional name) whose ports can be
attached as vNICs to OpenStack Compute
instances and to various OpenStack
Networking agents. The openvswitch and
linuxbridge plugins each support several
different mechanisms to realize virtual
networks.</para>
</listitem>
<listitem>
<para><emphasis role="bold">physical
network</emphasis> - A network
connecting virtualization hosts (i.e.
OpenStack Compute nodes) with each other
and with other network resources. Each
physical network may support multiple
virtual networks. The provider extension
and the plugin configurations identify
physical networks using simple string
names.</para>
</listitem>
<listitem>
<para><emphasis role="bold">tenant
network</emphasis> - A "normal"
virtual network created by/for a tenant.
The tenant is not aware of how that
network is physically realized.</para>
</listitem>
<listitem>
<para><emphasis role="bold">provider
network</emphasis> - A virtual network
administratively created to map to a
specific network in the data center,
typically to enable direct access to
non-OpenStack resources on that network.
Tenants can be given access to provider
networks.</para>
</listitem>
<listitem>
<para><emphasis role="bold">VLAN
network</emphasis> - A virtual network
realized as packets on a specific physical
network containing IEEE 802.1Q headers
with a specific VID field value. VLAN
networks sharing the same physical network
are isolated from each other at L2, and
can even have overlapping IP address
spaces. Each distinct physical network
supporting VLAN networks is treated as a
separate VLAN trunk, with a distinct space
of VID values. Valid VID values are 1
through 4094.</para>
</listitem>
<listitem>
<para><emphasis role="bold">flat
network</emphasis> - A virtual network
realized as packets on a specific physical
network containing no IEEE 802.1Q header.
Each physical network can realize at most
one flat network.</para>
</listitem>
<listitem>
<para><emphasis role="bold">local
network</emphasis> - A virtual network
that allows communication within each
host, but not across a network. Local
networks are intended mainly for
single-node test scenarios, but may have
other uses.</para>
</listitem>
<listitem>
<para><emphasis role="bold">GRE
network</emphasis> - A virtual network
realized as network packets encapsulated
using GRE. GRE networks are also referred
to as "tunnels". GRE tunnel packets are
routed by the host's IP routing table, so
GRE networks are not associated by
OpenStack Networking with specific
physical networks.</para>
</listitem>
</itemizedlist></para>
<para>Both the openvswitch and linuxbridge plugins support
VLAN networks, flat networks, and local networks. Only
the openvswitch plugin currently supports GRE
networks, provided that the host's Linux kernel
supports the required Open vSwitch features.</para>
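<para>As a sketch only, enabling GRE networks with the
openvswitch plugin typically involves tunneling options along
the lines of the following excerpt. The values shown are
assumptions and should be checked against
<xref linkend="ovs_neutron_plugin"/>:</para>
<screen><computeroutput>[OVS]
tenant_network_type = gre
enable_tunneling = True
tunnel_id_ranges = 1:1000
local_ip = &lt;data-network-IP-of-this-node&gt;</computeroutput></screen>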
</section>
<section xml:id="provider_attributes">
<title>Provider Attributes</title>
<para>The provider extension extends the OpenStack
Networking network resource with the following three
additional attributes:</para>
<table rules="all">
<caption>Provider Network Attributes</caption>
<col width="25%"/>
<col width="10%"/>
<col width="25%"/>
<col width="40%"/>
<thead>
<tr>
<th>Attribute name</th>
<th>Type</th>
<th>Default Value</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>provider:network_type</td>
<td>String</td>
<td>N/A</td>
<td>The physical mechanism by which the
virtual network is realized. Possible
values are "flat", "vlan", "local", and
"gre", corresponding to flat networks,
VLAN networks, local networks, and GRE
networks as defined above. All types of
provider networks can be created by
administrators, while tenant networks can
be realized as "vlan", "gre", or "local"
network types depending on plugin
configuration.</td>
</tr>
<tr>
<td>provider:physical_network</td>
<td>String</td>
<td>If a physical network named "default" has
been configured, and if
provider:network_type is "flat" or "vlan",
then "default" is used.</td>
<td>The name of the physical network over
which the virtual network is realized for
flat and VLAN networks. Not applicable to
the "local" or "gre" network types.</td>
</tr>
<tr>
<td>provider:segmentation_id</td>
<td>Integer</td>
<td>N/A</td>
<td>For VLAN networks, the VLAN VID on the
physical network that realizes the virtual
network. Valid VLAN VIDs are 1 through
4094. For GRE networks, the tunnel ID.
Valid tunnel IDs are any 32-bit unsigned
integer value. Not applicable to the "flat" or
"local" network types.</td>
</tr>
</tbody>
</table>
<para>The provider attributes are returned by OpenStack
Networking API operations when the client is
authorized for the
<code>extension:provider_network:view</code>
action via the OpenStack Networking policy
configuration. The provider attributes are only
accepted for network API operations if the client is
authorized for the
<code>extension:provider_network:set</code>
action. The default OpenStack Networking API policy
configuration authorizes both actions for users with
the admin role. See <xref linkend="ch_auth"/> for
details on policy configuration.</para>
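<para>For illustration, a <filename>policy.json</filename>
excerpt that grants both actions only to administrators might
contain rules such as the following. This is a sketch in the
style of the default policy, not a verbatim copy of the shipped
file:</para>
<screen><computeroutput>"extension:provider_network:view": "rule:admin_only",
"extension:provider_network:set": "rule:admin_only"</computeroutput></screen>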
</section>
<section xml:id="provider_api_workflow">
<title>Provider API Workflow</title>
<para>Show all attributes of a network, including provider
attributes when invoked with the admin role:</para>
<para>
<screen><userinput>neutron net-show &lt;name or net-id&gt;</userinput></screen>
</para>
<para>Create a local provider network (admin-only):</para>
<para>
<screen><userinput>neutron net-create &lt;name&gt; --tenant_id &lt;tenant-id&gt; --provider:network_type local</userinput></screen>
</para>
<para>Create a flat provider network (admin-only):</para>
<para>
<screen><userinput>neutron net-create &lt;name&gt; --tenant_id &lt;tenant-id&gt; --provider:network_type flat --provider:physical_network &lt;phys-net-name&gt;</userinput></screen>
</para>
<para>Create a VLAN provider network (admin-only):</para>
<para>
<screen><userinput>neutron net-create &lt;name&gt; --tenant_id &lt;tenant-id&gt; --provider:network_type vlan --provider:physical_network &lt;phys-net-name&gt; --provider:segmentation_id &lt;VID&gt;</userinput></screen>
</para>
<para>Create a GRE provider network (admin-only):</para>
<para>
<screen><userinput>neutron net-create &lt;name&gt; --tenant_id &lt;tenant-id&gt; --provider:network_type gre --provider:segmentation_id &lt;tunnel-id&gt;</userinput></screen>
</para>
<para>When creating flat networks or VLAN networks,
&lt;phys-net-name&gt; must be known to the plugin. See
<xref linkend="ovs_neutron_plugin"/> and <xref
linkend="linuxbridge_conf"/> for details on
configuring network_vlan_ranges to identify all
physical networks. When creating VLAN networks,
&lt;VID&gt; can fall either within or outside any
configured ranges of VLAN IDs from which tenant
networks are allocated. Similarly, when creating GRE
networks, &lt;tunnel-id&gt; can fall either within or
outside any tunnel ID ranges from which tenant
networks are allocated.</para>
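<para>For example, assuming a plugin configured with
<code>network_vlan_ranges = physnet1:1000:2999</code>, an
administrator could create a VLAN provider network on VID 3000
(outside the tenant VLAN range) as follows; the network and
physical network names are illustrative:</para>
<para>
<screen><userinput>neutron net-create provider-vlan-3000 --tenant_id &lt;tenant-id&gt; --provider:network_type vlan --provider:physical_network physnet1 --provider:segmentation_id 3000</userinput></screen>
</para>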
<para>After you create provider networks, you can allocate
subnets on them and use them much like other virtual
networks, subject to authorization policy based on the
specified &lt;tenant_id&gt;.</para>
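<para>For example, to allocate a subnet on the provider network
created above so that instances can attach to it (the name,
address range, and gateway are illustrative):</para>
<para>
<screen><userinput>neutron subnet-create provider-vlan-3000 192.0.2.0/24 --name provider-subnet --gateway 192.0.2.1</userinput></screen>
</para>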
</section>
</section>
<section xml:id="l3_router_and_nat">
<title>L3 Routing and NAT</title>
<para>Just as the core OpenStack Networking API provides
abstract L2 network segments that are decoupled from the
technology used to implement the L2 network, OpenStack
Networking includes an API extension that provides
abstract L3 routers that API users can dynamically
provision and configure. These OpenStack Networking
routers can connect multiple L2 OpenStack Networking
networks, and can also provide a "gateway" that connects
one or more private L2 networks to a shared "external"
network (e.g., a public network for access to the
Internet). See <xref linkend="use_cases_single_router"/>
and <xref linkend="use_cases_tenant_router"/> for details
on common models of deploying OpenStack Networking L3
routers.</para>
<para>The L3 router provides basic NAT capabilities on
"gateway" ports that uplink the router to external
networks. This router SNATs all traffic by default and
supports floating IPs, which create a static one-to-one
mapping from a public IP on the external network to a
private IP on one of the other subnets attached to the
router. This allows a tenant to selectively expose VMs on
private networks to other hosts on the external network
(and often to all hosts on the Internet). Floating IPs can
be allocated and then mapped from one OpenStack Networking
port to another, as needed.</para>
<section xml:id="l3_api_abstractions">
<title>L3 API Abstractions</title>
<table rules="all">
<caption>Router</caption>
<col width="20%"/>
<col width="20%"/>
<col width="20%"/>
<col width="40%"/>
<thead>
<tr>
<th>Attribute name</th>
<th>Type</th>
<th>Default Value</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>id</td>
<td>uuid-str</td>
<td>generated</td>
<td>UUID for the router.</td>
</tr>
<tr>
<td>name</td>
<td>String</td>
<td>None</td>
<td>Human-readable name for the router. Might
not be unique.</td>
</tr>
<tr>
<td>admin_state_up</td>
<td>Bool</td>
<td>True</td>
<td>The administrative state of the router. If
false (down), the router does not forward
packets.</td>
</tr>
<tr>
<td>status</td>
<td>String</td>
<td>N/A</td>
<td><para>Indicates whether the router is
currently operational.</para></td>
</tr>
<tr>
<td>tenant_id</td>
<td>uuid-str</td>
<td>N/A</td>
<td>Owner of the router. Only admin users can
specify a tenant_id other than their own.
</td>
</tr>
<tr>
<td>external_gateway_info</td>
<td>dict containing a 'network_id' key-value
pair</td>
<td>Null</td>
<td>External network that this router connects
to for gateway services (e.g., NAT).</td>
</tbody>
</table>
<table rules="all">
<caption>Floating IP</caption>
<col width="20%"/>
<col width="20%"/>
<col width="20%"/>
<col width="40%"/>
<thead>
<tr>
<th>Attribute name</th>
<th>Type</th>
<th>Default Value</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>id</td>
<td>uuid-str</td>
<td>generated</td>
<td>UUID for the floating IP.</td>
</tr>
<tr>
<td>floating_ip_address</td>
<td>string (IP address)</td>
<td>allocated by OpenStack Networking</td>
<td>The external network IP address available
to be mapped to an internal IP
address.</td>
</tr>
<tr>
<td>floating_network_id</td>
<td>uuid-str</td>
<td>N/A</td>
<td><para>The network indicating the set of
subnets from which the floating IP
should be allocated</para></td>
</tr>
<tr>
<td>router_id</td>
<td>uuid-str</td>
<td>N/A</td>
<td>Read-only value indicating the router that
connects the external network to the
associated internal port, if a port is
associated.</td>
</tr>
<tr>
<td>port_id</td>
<td>uuid-str</td>
<td>Null</td>
<td>Indicates the internal OpenStack
Networking port associated with the
external floating IP.</td>
</tr>
<tr>
<td>fixed_ip_address</td>
<td>string (IP address)</td>
<td>Null</td>
<td>Indicates the IP address on the internal
port that is mapped to by the floating IP
(since an OpenStack Networking port might
have more than one IP address).</td>
</tr>
<tr>
<td>tenant_id</td>
<td>uuid-str</td>
<td>N/A</td>
<td>Owner of the floating IP. Only admin users
can specify a tenant_id other than their
own.</td>
</tr>
</tbody>
</table>
</section>
<section xml:id="l3_workflow">
<title>Common L3 Workflow</title>
<para>Create external networks (admin-only):</para>
<screen><computeroutput>neutron net-create public --router:external=True
neutron subnet-create public 172.16.1.0/24 </computeroutput></screen>
<para>View external networks:</para>
<screen><computeroutput>neutron net-list -- --router:external=True</computeroutput></screen>
<para>Create routers:</para>
<para>An internal-only router can connect multiple L2 networks
privately:</para>
<screen><computeroutput>neutron net-create net1
neutron subnet-create net1 10.0.0.0/24
neutron net-create net2
neutron subnet-create net2 10.0.1.0/24
neutron router-create router1
neutron router-interface-add router1 &lt;subnet1-uuid&gt;
neutron router-interface-add router1 &lt;subnet2-uuid&gt;</computeroutput></screen>
<para>The router gets an interface with the gateway_ip
address of the subnet, and this interface is attached to
a port on the L2 OpenStack Networking network associated
with the subnet. The router also gets a gateway interface
to the specified external network. This provides SNAT
connectivity to the external network, as well as support
for floating IPs allocated on that external network (see
below). Commonly, an external network maps to a network
in the provider's data center.</para>
<para>A router can also be connected to an "external
network", allowing that router to act as a NAT gateway
for external connectivity.</para>
<screen><computeroutput>neutron router-gateway-set router1 &lt;ext-net-id&gt; </computeroutput></screen>
<para>View routers:</para>
<para>List all routers:
<screen><computeroutput>neutron router-list</computeroutput></screen></para>
<para>Show a specific router:
<screen><computeroutput>neutron router-show &lt;router_id&gt;</computeroutput></screen></para>
<para>Show all internal interfaces for a router:
<screen><computeroutput>neutron port-list -- --device_id=&lt;router_id&gt;</computeroutput></screen></para>
<para>Associate or disassociate floating IPs:</para>
<para>First, identify the port-id representing the VM NIC
that the floating IP should map to:</para>
<screen><computeroutput>neutron port-list -c id -c fixed_ips -- --device_id=&lt;instance_id&gt;</computeroutput></screen>
<para>This port must be on an OpenStack Networking subnet
that is attached to a router uplinked to the external
network that will be used to create the floating IP.
Conceptually, this is because the router must be able
to perform the Destination NAT (DNAT) rewriting of
packets from the floating IP address (chosen from a
subnet on the external network) to the internal fixed
IP (chosen from a private subnet that is "behind" the
router).</para>
<para>Create a floating IP that is not associated, then
associate it:</para>
<screen><computeroutput>neutron floatingip-create &lt;ext-net-id&gt;
neutron floatingip-associate &lt;floatingip-id&gt; &lt;internal VM port-id&gt; </computeroutput></screen>
<para>Create a floating IP and associate it in a single
step:</para>
<screen><computeroutput>neutron floatingip-create --port_id &lt;internal VM port-id&gt; &lt;ext-net-id&gt; </computeroutput></screen>
<para>View floating IP state:</para>
<screen><computeroutput>neutron floatingip-list</computeroutput></screen>
<para>Find the floating IP for a particular VM port:</para>
<screen><computeroutput>neutron floatingip-list -- --port_id=&lt;port_id&gt;</computeroutput></screen>
<para>Disassociate a floating IP:</para>
<screen><computeroutput>neutron floatingip-disassociate &lt;floatingip-id&gt;</computeroutput></screen>
<para>To tear down the L3 configuration, first delete the
floating IP:</para>
<screen><computeroutput>neutron floatingip-delete &lt;floatingip-id&gt; </computeroutput></screen>
<para>Then clear the gateway:</para>
<screen><computeroutput>neutron router-gateway-clear router1</computeroutput></screen>
<para>Then remove the interfaces from the router:</para>
<screen><computeroutput>neutron router-interface-delete router1 &lt;subnet-id&gt; </computeroutput></screen>
<para>Finally, delete the router:</para>
<screen><computeroutput>neutron router-delete router1</computeroutput></screen>
</section>
</section>
<section xml:id="securitygroups">
<title>Security Groups</title>
<para>Security groups and security group rules allow
administrators and tenants to specify the type of traffic
and direction (ingress/egress) that is allowed to pass
through a port. A security group is a container for
security group rules.</para>
<para>When a port is created in OpenStack Networking, it is
associated with a security group. If a security group is
not specified, the port is associated with a 'default'
security group. By default, this group drops all ingress
traffic and allows all egress traffic. Rules can be added
to this group to change this behavior.</para>
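<para>For example, to allow inbound SSH to instances that use
the 'default' group, a tenant might add a rule such as the
following (the port and group name are illustrative):</para>
<screen><computeroutput>neutron security-group-rule-create --direction ingress --protocol tcp --port_range_min 22 --port_range_max 22 default</computeroutput></screen>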
<para>To use the OpenStack Compute security group APIs, or to have OpenStack
Compute orchestrate the creation of new ports for instances on specific security groups,
additional configuration is needed. In
<filename>/etc/nova/nova.conf</filename>, set the option
security_group_api=neutron on every node running <systemitem class="service">nova-compute</systemitem> and <systemitem class="service">nova-api</systemitem>. After
making this change, restart <systemitem class="service">nova-api</systemitem> and <systemitem class="service">nova-compute</systemitem> to pick it up. You can then
use both the OpenStack Compute and OpenStack
Networking security group APIs at the same time.</para>
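<para>For example, the relevant excerpt of
<filename>/etc/nova/nova.conf</filename> might look like the
following; this sketch assumes the option belongs in the
<literal>DEFAULT</literal> section:</para>
<screen><computeroutput>[DEFAULT]
security_group_api = neutron</computeroutput></screen>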
<note>
<itemizedlist>
<listitem><para>To use the OpenStack Compute security group
API with OpenStack Networking, the OpenStack Networking
plugin must implement the security group API. The
following plugins currently implement this: Nicira
NVP, Open vSwitch, Linux Bridge, NEC, and Ryu.</para></listitem>
<listitem><para>You must configure the correct firewall driver in the
<literal>securitygroup</literal> section of the plugin/agent configuration file.
Some plugins and agents, such as the Linux Bridge agent and the Open vSwitch agent, use
the no-operation driver as the default, which results in non-working security
groups. See the configuration sketch after this note.</para></listitem>
<listitem><para>When using the security group API through OpenStack
Compute, security groups are applied to all ports on
an instance. This is because the OpenStack
Compute security group APIs are instance-based rather
than port-based, as they are in OpenStack Networking.</para></listitem>
</itemizedlist>
</note>
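<para>As a sketch, the <literal>securitygroup</literal> section
of the Open vSwitch agent configuration might be set as follows.
The driver class shown is the commonly used iptables-based
hybrid driver; verify the correct class for your plugin or agent
in its documentation:</para>
<screen><computeroutput>[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver</computeroutput></screen>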
<section xml:id="securitygroup_api_abstractions">
<title>Security Group API Abstractions</title>
<table rules="all">
<caption>Security Group Attributes</caption>
<col width="20%"/>
<col width="20%"/>
<col width="20%"/>
<col width="40%"/>
<thead>
<tr>
<th>Attribute name</th>
<th>Type</th>
<th>Default Value</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>id</td>
<td>uuid-str</td>
<td>generated</td>
<td>UUID for the security group.</td>
</tr>
<tr>
<td>name</td>
<td>String</td>
<td>None</td>
<td>Human-readable name for the security
group. Might not be unique. Cannot be
named 'default', because that group is
automatically created for each tenant.</td>
</tr>
<tr>
<td>description</td>
<td>String</td>
<td>None</td>
<td>Human-readable description of a security
group.</td>
</tr>
<tr>
<td>tenant_id</td>
<td>uuid-str</td>
<td>N/A</td>
<td>Owner of the security group. Only admin
users can specify a tenant_id other than
their own.</td>
</tr>
</tbody>
</table>
<table rules="all">
<caption>Security Group Rules</caption>
<col width="20%"/>
<col width="20%"/>
<col width="20%"/>
<col width="40%"/>
<thead>
<tr>
<th>Attribute name</th>
<th>Type</th>
<th>Default Value</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>id</td>
<td>uuid-str</td>
<td>generated</td>
<td>UUID for the security group rule.</td>
</tr>
<tr>
<td>security_group_id</td>
<td>uuid-str or Integer</td>
<td>allocated by OpenStack Networking</td>
<td>The security group to associate the rule
with.</td>
</tr>
<tr>
<td>direction</td>
<td>String</td>
<td>N/A</td>
<td>The direction in which traffic is allowed
(ingress/egress), relative to the VM.</td>
</tr>
<tr>
<td>protocol</td>
<td>String</td>
<td>None</td>
<td>IP protocol (icmp, tcp, udp, etc.).</td>
</tr>
<tr>
<td>port_range_min</td>
<td>Integer</td>
<td>None</td>
<td>Port at start of range</td>
</tr>
<tr>
<td>port_range_max</td>
<td>Integer</td>
<td>None</td>
<td>Port at end of range</td>
</tr>
<tr>
<td>ethertype</td>
<td>String</td>
<td>None</td>
<td>Ethertype in the L2 packet (IPv4, IPv6,
etc.).</td>
</tr>
<tr>
<td>remote_ip_prefix</td>
<td>string (IP cidr)</td>
<td>None</td>
<td>CIDR for address range</td>
</tr>
<tr>
<td>remote_group_id</td>
<td>uuid-str or Integer</td>
<td>allocated by OpenStack Networking or
OpenStack Compute</td>
<td>Source security group to apply to the
rule.</td>
</tr>
<tr>
<td>tenant_id</td>
<td>uuid-str</td>
<td>N/A</td>
<td>Owner of the security group rule. Only
admin users can specify a tenant_id other
than their own.</td>
</tr>
</tbody>
</table>
</section>
<section xml:id="securitygroup_workflow">
<title>Common Security Group Commands</title>
<para>Create a security group for our web servers:</para>
<screen><computeroutput>
neutron security-group-create webservers --description "security group for webservers"</computeroutput></screen>
<para>Viewing security groups:</para>
<screen><computeroutput>neutron security-group-list</computeroutput></screen>
<para>Create a security group rule to allow port 80
ingress:</para>
<screen><computeroutput>
neutron security-group-rule-create --direction ingress --protocol tcp --port_range_min 80 --port_range_max 80 &lt;security_group_uuid&gt;</computeroutput></screen>
<para>List security group rules:</para>
<screen><computeroutput>neutron security-group-rule-list </computeroutput></screen>
<para>Delete a security group rule:</para>
<screen><computeroutput>neutron security-group-rule-delete &lt;security_group_rule_uuid&gt;</computeroutput></screen>
<para>Delete a security group:</para>
<screen><computeroutput>neutron security-group-delete &lt;security_group_uuid&gt; </computeroutput></screen>
<para>Create a port and associate two security
groups:</para>
<screen><computeroutput>neutron port-create --security-group &lt;security_group_id1&gt; --security-group &lt;security_group_id2&gt; &lt;network_id&gt;</computeroutput></screen>
<para>Remove security groups from a port:</para>
<screen><computeroutput>neutron port-update --no-security-groups &lt;port_id&gt;</computeroutput></screen>
</section>
</section>
<section xml:id="lbaas">
<title>Load-Balancer-as-a-Service</title>
<note>
<para>The Load-Balancer-as-a-Service API is an
experimental API meant to give early adopters and
vendors a chance to build implementations against it. The
reference implementation should probably not be run in
production environments.</para>
</note>
<section xml:id="lbaas_workflow">
<title>Common Load-Balancer-as-a-Service Workflow</title>
<itemizedlist>
<listitem>
<para><emphasis role="bold">Find the correct subnet
ID.</emphasis> The load balancer virtual IP (vip) and
the instances that provide the balanced service must all
be on the same subnet. The first step then is to obtain
a list of available subnets and their IDs:</para>
<screen><computeroutput>
neutron subnet-list</computeroutput></screen>
</listitem>
<listitem>
<para><emphasis role="bold">Create a load balancer
pool</emphasis> using the appropriate subnet ID from the
list obtained above:</para>
<screen><computeroutput>
neutron lb-pool-create --lb-method ROUND_ROBIN --name mypool --protocol HTTP --subnet-id &lt;subnet-uuid&gt;</computeroutput></screen>
<para>Valid options for <code>--lb-method</code> depend on
the backend provider. For the reference implementation
based on HAProxy, valid options are ROUND_ROBIN,
LEAST_CONNECTIONS, or SOURCE_IP.</para>
<para>Valid options for <code>--protocol</code> are HTTP, HTTPS, or TCP.</para>
</listitem>
<listitem>
<para><emphasis role="bold">Associate servers</emphasis> with pool:</para>
<screen><computeroutput>
neutron lb-member-create --address &lt;webserver one IP&gt; --protocol-port 80 mypool
neutron lb-member-create --address &lt;webserver two IP&gt; --protocol-port 80 mypool</computeroutput></screen>
<para>Optionally, <code>--weight</code> may be specified as
an integer in the range 0..256. The weight of a member
determines the portion of requests or connections it
services compared to the other members of the pool. A
value of 0 means the member does not participate in
load balancing but still accepts persistent
connections.</para>
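<para>For example, to give one member twice the default share of
traffic (the weight value is illustrative):</para>
<screen><computeroutput>neutron lb-member-create --address &lt;webserver one IP&gt; --protocol-port 80 --weight 2 mypool</computeroutput></screen>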
</listitem>
<listitem>
<para><emphasis role="bold">Create a health
monitor</emphasis> which checks to make sure our
instances are still running on the specified
protocol-port:</para>
<screen><computeroutput>
neutron lb-healthmonitor-create --delay 3 --type HTTP --max-retries 3 --timeout 3</computeroutput></screen>
<para>Valid options for <code>--type</code> are PING,
TCP, HTTP, and HTTPS. It is also possible to set
<code>--url_path</code>, which defaults to "/" and, if
specified, must begin with a leading slash.</para>
</listitem>
<listitem>
<para><emphasis role="bold">Associate health monitor with pool:</emphasis></para>
<screen><computeroutput>
neutron lb-healthmonitor-associate &lt;healthmonitor-uuid&gt; mypool</computeroutput></screen>
</listitem>
<listitem>
<para><emphasis role="bold">Create a Virtual IP Address
(VIP)</emphasis> that when accessed via the load
balancer will direct the requests to one of the pool
members:</para>
<screen><computeroutput>
neutron lb-vip-create --name myvip --protocol-port 80 --protocol HTTP --subnet-id &lt;subnet-uuid&gt; mypool</computeroutput></screen>
<para>Values for <code>--protocol</code> here are the
same as in the pool creation step above.</para>
<para>Connection rate limiting can be implemented using
the <code>--connection-limit</code> flag and specifying
maximum connections per second.</para>
<para>As written above, the load balancer does not have
persistent sessions. To define persistent sessions so
that a given client always connects to the same
backend (so long as it is still operational), use the
following form:</para>
<screen><computeroutput>
neutron lb-vip-create --name myvip --protocol-port 80 --protocol HTTP --subnet-id &lt;subnet-uuid&gt; --session-persistence type=dict type=&lt;type&gt;,[cookie_name=&lt;name&gt;] mypool</computeroutput></screen>
<para>Valid session persistence types are APP_COOKIE,
HTTP_COOKIE, or SOURCE_IP.</para>
<para>The APP_COOKIE type reuses a cookie from your
application to manage persistence and requires the
additional option <code>cookie_name=&lt;name&gt;</code>
to inform the load balancer which cookie name to use.
The <code>cookie_name</code> option is unused with other
persistence types.</para>
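<para>For example, a sketch of APP_COOKIE persistence that
reuses an application cookie named
<code>JSESSIONID</code> (the cookie name is
illustrative):</para>
<screen><computeroutput>neutron lb-vip-create --name myvip --protocol-port 80 --protocol HTTP --subnet-id &lt;subnet-uuid&gt; --session-persistence type=dict type=APP_COOKIE,cookie_name=JSESSIONID mypool</computeroutput></screen>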
</listitem>
</itemizedlist>
</section>
</section>
<section xml:id="plugin_specific_extensions">
<title>Plugin-Specific Extensions</title>
<?dbhtml stop-chunking?>
<para>Each vendor may choose to implement additional API
extensions to the core API. This section describes the
extensions for each plugin.</para>
<section xml:id="nicira_extensions">
<title>Nicira NVP Extensions</title>
<para>This section describes API extensions that are specific to the Nicira NVP plugin.</para>
<section xml:id="nicira_nvp_plugin_qos_extension">
<title>Nicira NVP QoS Extension</title>
<para>The Nicira NVP QoS extension rate-limits network
ports to guarantee a specific amount of bandwidth
for each port. This extension, by default, is only
accessible to a tenant with an admin role, but this is
configurable through the
<filename>policy.json</filename> file. To use
this extension, create a queue and specify the
min/max bandwidth rates (kbps) and, optionally, set
the QoS marking and DSCP value (if your network
fabric uses these values to make forwarding
decisions). Once created, you can associate a
queue with a network. Then, when ports are created
on that network, they are automatically associated
with the queue size that was associated with the
network. Because a single queue size for every port
on a network may not be optimal, a scaling factor
from the nova flavor ('rxtx_factor') is passed in
from OpenStack Compute when the port is created, to
scale the queue.</para>
<para>Lastly, to set a baseline QoS policy for the amount of
bandwidth a single port can use (unless a network queue is specified for the
network that a port is created on), create a default queue in neutron.
Ports created thereafter are associated with a queue of that size times the
rxtx scaling factor. Note that specifying a network queue or default
queue does not add queues to previously created ports; queues are
created only for ports created afterward.</para>
<section xml:id="nicira_nvp_qos_api_abstractions">
<title>Nicira NVP QoS API Abstractions</title>
<table rules="all">
<caption>Nicira NVP QoS Attributes</caption>
<col width="20%"/>
<col width="20%"/>
<col width="20%"/>
<col width="40%"/>
<thead>
<tr>
<th>Attribute name</th>
<th>Type</th>
<th>Default Value</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>id</td>
<td>uuid-str</td>
<td>generated</td>
<td>UUID for the QoS queue.</td>
</tr>
<tr>
<td>default</td>
<td>Boolean</td>
<td>False by default</td>
<td>If True, ports are created with this
queue size unless the network that the
port is created on is associated with
its own queue at port creation time.</td>
</tr>
<tr>
<td>name</td>
<td>String</td>
<td>None</td>
<td>Name for QoS queue.</td>
</tr>
<tr>
<td>min</td>
<td>Integer</td>
<td>0</td>
<td>Minimum Bandwidth Rate (kbps).
</td>
</tr>
<tr>
<td>max</td>
<td>Integer</td>
<td>N/A</td>
<td>Maximum Bandwidth Rate (kbps).
</td>
</tr>
<tr>
<td>qos_marking</td>
<td>String</td>
<td>untrusted by default</td>
<td>Whether QoS marking should be
trusted or untrusted.</td>
</tr>
<tr>
<td>dscp</td>
<td>Integer</td>
<td>0</td>
<td>DSCP Marking value.</td>
</tr>
<tr>
<td>tenant_id</td>
<td>uuid-str</td>
<td>N/A</td>
<td>The owner of the QoS queue.</td>
</tr>
</tbody>
</table>
</section>
<section xml:id="nicira_nvp_qos_walk_through">
<title>Nicira NVP QoS Walk Through</title>
<para>Create a QoS queue (admin-only):</para>
<screen><computeroutput>neutron queue-create --min 10 --max 1000 myqueue</computeroutput></screen>
<para>Associate a queue with a network:</para>
<screen><computeroutput>neutron net-create network --queue_id=&lt;queue_id&gt;</computeroutput></screen>
<para>Create a default system queue:</para>
<screen><computeroutput>neutron queue-create --default True --min 10 --max 2000 default</computeroutput></screen>
<para>List QoS Queues:</para>
<screen><computeroutput>neutron queue-list</computeroutput></screen>
<para>Delete a QoS queue:</para>
<screen><computeroutput>neutron queue-delete &lt;queue_id or name&gt;</computeroutput></screen>
</section>
</section>
</section>
</section>
</chapter>