<?xml version="1.0" encoding="utf-8"?>
<section xmlns="http://docbook.org/ns/docbook" xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0"
xml:id="associate-network-node-concept-neutron">
<title>Concept Neutron</title>
<section xml:id="mnetworking-in-openstack">
<title>Networking in OpenStack</title>
<para>OpenStack Networking provides a rich tenant-facing API
for defining network connectivity and addressing in the
cloud. The OpenStack Networking project gives operators
the ability to leverage different networking technologies
to power their cloud networking. It is a virtual network
service that provides a powerful API to define the network
connectivity and addressing used by devices from other
services, such as OpenStack Compute. The API consists of
the following components.</para>
<itemizedlist>
<listitem>
<para><emphasis role="bold">Network:</emphasis> An
isolated L2 segment, analogous to VLAN in the physical
networking world.</para>
</listitem>
<listitem>
<para><emphasis role="bold">Subnet:</emphasis> A block
of v4 or v6 IP addresses and associated configuration
state.</para>
</listitem>
<listitem>
<para><emphasis role="bold">Port:</emphasis> A
connection point for attaching a single device, such
as the NIC of a virtual server, to a virtual network.
Also describes the associated network configuration,
such as the MAC and IP addresses to be used on that
port.</para>
</listitem>
</itemizedlist>
<para>You can configure rich network topologies by creating
and configuring networks and subnets, and then instructing
other OpenStack services like OpenStack Compute to attach
virtual devices to ports on these networks. In
particular, OpenStack Networking supports each tenant
having multiple private networks, and allows tenants to
choose their own IP addressing scheme, even if those IP
addresses overlap with those used by other tenants. This
enables very advanced cloud networking use cases, such as
building multi-tiered web applications and allowing
applications to be migrated to the cloud without changing
IP addresses.</para>
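<para>As a brief sketch of the API in action, a tenant could
create a network and an associated subnet with the
command-line client as follows; the names and addressing are
placeholders, and the client shown is the quantum CLI used at
the time of writing:</para>
<screen>$ quantum net-create net1
$ quantum subnet-create --name subnet1 net1 10.0.0.0/24</screen>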
<para><guilabel>Plugin Architecture: Flexibility to Choose
Different Network Technologies</guilabel></para>
<para>Enhancing traditional networking solutions to provide rich
cloud networking is challenging. Traditional networking is not
designed to scale to cloud proportions or to configure
automatically.</para>
<para>The original OpenStack Compute network implementation
assumed a very basic model of performing all isolation through
Linux VLANs and IP tables. OpenStack Networking introduces the
concept of a plugin, which is a pluggable back-end
implementation of the OpenStack Networking API. A plugin can
use a variety of technologies to implement the logical API
requests. Some OpenStack Networking plugins might use basic
Linux VLANs and IP tables, while others might use more
advanced technologies, such as L2-in-L3 tunneling or OpenFlow,
to provide similar benefits.</para>
<para>The current set of plugins includes:</para>
<itemizedlist>
<listitem>
<para><emphasis role="bold">Open vSwitch:</emphasis>
Documentation included in this guide.</para>
</listitem>
<listitem>
<para><emphasis role="bold">Cisco:</emphasis> Documented
externally at: <link
xlink:href="http://wiki.openstack.org/cisco-quantum"
>http://wiki.openstack.org/cisco-quantum</link></para>
</listitem>
<listitem>
<para><emphasis role="bold">Linux Bridge:</emphasis>
Documentation included in this guide and <link
xlink:href="http://wiki.openstack.org/Quantum-Linux-Bridge-Plugin"
>http://wiki.openstack.org/Quantum-Linux-Bridge-Plugin</link>
</para>
</listitem>
<listitem>
<para><emphasis role="bold">Nicira NVP:</emphasis>
Documentation include in this guide, <link
xlink:href="http://www.vmware.com/products/datacenter-virtualization/nicira.html"
>NVP Product Overview </link>, and <link
xlink:href="http://www.nicira.com/support">NVP
Product Support</link>.</para>
</listitem>
<listitem>
<para><emphasis role="bold">Ryu:</emphasis>
<link
xlink:href="https://github.com/osrg/ryu/wiki/OpenStack"
>https://github.com/osrg/ryu/wiki/OpenStack</link></para>
</listitem>
<listitem>
<para><emphasis role="bold">NEC OpenFlow:</emphasis>
<link
xlink:href="http://wiki.openstack.org/Quantum-NEC-OpenFlow-Plugin"
>http://wiki.openstack.org/Quantum-NEC-OpenFlow-Plugin</link></para>
</listitem>
<listitem>
<para><emphasis role="bold">Big Switch, Floodlight REST
Proxy:</emphasis>
<link
xlink:href="http://www.openflowhub.org/display/floodlightcontroller/Quantum+REST+Proxy+Plugin"
>http://www.openflowhub.org/display/floodlightcontroller/Quantum+REST+Proxy+Plugin</link></para>
</listitem>
<listitem>
<para><emphasis role="bold">PLUMgrid:</emphasis>
<link
xlink:href="https://wiki.openstack.org/wiki/Plumgrid-quantum"
>https://wiki.openstack.org/wiki/Plumgrid-quantum</link></para>
</listitem>
<listitem>
<para><emphasis role="bold">Hyper-V
Plugin</emphasis></para>
</listitem>
<listitem>
<para><emphasis role="bold">Brocade
Plugin</emphasis></para>
</listitem>
<listitem>
<para><emphasis role="bold">Midonet
Plugin</emphasis></para>
</listitem>
</itemizedlist>
<para>Plugins can have different properties in terms of hardware
requirements, features, performance, scale, operator tools, and
so on. Supporting many plugins enables the cloud administrator
to weigh different options and decide which networking
technology is right for the deployment.</para>
<para><guilabel>Components of OpenStack Networking</guilabel></para>
<para>To deploy OpenStack Networking, it is useful to understand
the different components that make up the solution and how
those components interact with each other and with other
OpenStack services.</para>
<para>OpenStack Networking is a standalone service, just like
other OpenStack services such as OpenStack Compute, OpenStack
Image service, OpenStack Identity service, and the OpenStack
Dashboard. Like those services, a deployment of OpenStack
Networking often involves deploying several processes on a
variety of hosts.</para>
<para>The main process of the OpenStack Networking server is
quantum-server, which is a Python daemon that exposes the
OpenStack Networking API and passes user requests to the
configured OpenStack Networking plugin for additional
processing. Typically, the plugin requires access to a
database for persistent storage, similar to other OpenStack
services.</para>
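<para>As an illustration, the plugin choice and database access are
set in the server's configuration files. The fragment below is a
hedged sketch for the Open vSwitch plugin; the exact file paths,
class path, and credentials vary by release and
distribution:</para>
<programlisting># /etc/quantum/quantum.conf (excerpt)
core_plugin = quantum.plugins.openvswitch.ovs_quantum_plugin.OVSQuantumPluginV2

# plugin configuration file (excerpt)
[DATABASE]
sql_connection = mysql://quantum:QUANTUM_DBPASS@controller/ovs_quantum</programlisting>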
<para>If your deployment uses a controller host to run centralized
OpenStack Compute components, you can deploy the OpenStack
Networking server on that same host. However, OpenStack
Networking is entirely standalone and can be deployed on its
own server as well. OpenStack Networking also includes
additional agents that might be required depending on your
deployment:</para>
<itemizedlist>
<listitem>
<para><emphasis role="bold">plugin agent
(quantum-*-agent):</emphasis>Runs on each
hypervisor to perform local vswitch configuration.
Agent to be run depends on which plugin you are using,
as some plugins do not require an agent.</para>
</listitem>
<listitem>
<para><emphasis role="bold">dhcp agent
(quantum-dhcp-agent):</emphasis>Provides DHCP
services to tenant networks. This agent is the same
across all plugins.</para>
</listitem>
<listitem>
<para><emphasis role="bold">l3 agent
(quantum-l3-agent):</emphasis>Provides L3/NAT
forwarding to provide external network access for VMs
on tenant networks. This agent is the same across all
plugins.</para>
</listitem>
</itemizedlist>
<para>These agents interact with the main quantum-server process
in the following ways:</para>
<itemizedlist>
<listitem>
<para>Through RPC (for example, RabbitMQ or Qpid).</para>
</listitem>
<listitem>
<para>Through the standard OpenStack Networking
API.</para>
</listitem>
</itemizedlist>
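<para>Once the agents are running and connected to the message
queue, you can check that they have registered with the server.
A sketch, assuming the agent management extension is available
in your release:</para>
<screen>$ quantum agent-list</screen>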
<para>OpenStack Networking relies on the OpenStack Identity
project (Keystone) for authentication and authorization of all
API requests.</para>
<para>OpenStack Compute interacts with OpenStack Networking
through calls to its standard API. As part of creating a VM,
nova-compute communicates with the OpenStack Networking API to
plug each virtual NIC on the VM into a particular
network.</para>
<para>The OpenStack Dashboard (Horizon) integrates with the
OpenStack Networking API, enabling administrators and tenant
users to create and manage network services through the
Horizon GUI.</para>
<para><emphasis role="bold">Place Services on Physical
Hosts</emphasis></para>
<para>Like other OpenStack services, OpenStack Networking provides
cloud administrators with significant flexibility in deciding
which individual services should run on which physical
devices. At one extreme, all service daemons can be run on a
single physical host for evaluation purposes. At the other,
each service could have its own physical host, and in some
cases be replicated across multiple hosts for redundancy.</para>
<para>In this guide, we focus primarily on a standard architecture
that includes a “cloud controller” host, a “network gateway”
host, and a set of hypervisors for running VMs. The "cloud
controller" and "network gateway" can be combined in simple
deployments, though if you expect VMs to send significant
amounts of traffic to or from the Internet, a dedicated
network gateway host is suggested to avoid potential CPU
contention between packet forwarding performed by the
quantum-l3-agent and other OpenStack services.</para>
<para><emphasis role="bold">Network Connectivity for Physical
Hosts</emphasis></para>
<figure>
<title>Network Diagram</title>
<mediaobject>
<imageobject>
<imagedata fileref="figures/image33.png"/>
</imageobject>
</mediaobject>
</figure>
<para>A standard OpenStack Networking setup has up to four
distinct physical data center networks:</para>
<itemizedlist>
<listitem>
<para><emphasis role="bold">Management
network:</emphasis> Used for internal communication
between OpenStack components. The IP addresses on this
network should be reachable only within the data
center.</para>
</listitem>
<listitem>
<para><emphasis role="bold">Data network:</emphasis> Used
for VM data communication within the cloud deployment.
The IP addressing requirements of this network depend
on the OpenStack Networking plugin in use.</para>
</listitem>
<listitem>
<para><emphasis role="bold">External
network:</emphasis> Used to provide VMs with Internet
access in some deployment scenarios. The IP addresses
on this network should be reachable by anyone on the
Internet.</para>
</listitem>
<listitem>
<para><emphasis role="bold">API network:</emphasis> Exposes
all OpenStack APIs, including the OpenStack Networking
API, to tenants. The IP addresses on this network
should be reachable by anyone on the Internet. The API
network may be the same as the external network,
because it is possible to create an external-network
subnet whose allocation ranges use only part of the
IP block.</para>
</listitem>
</itemizedlist>
</section>
<section xml:id="openstack-networking-concepts">
<title>OpenStack Networking Concepts</title>
<para><emphasis role="bold">Network Types</emphasis></para>
<para>The OpenStack Networking configuration provided by the
Rackspace Private Cloud cookbooks allows you to choose between
VLAN or GRE isolated networks, both provider- and
tenant-specific. From the provider side, an administrator can
also create a flat network.</para>
<para>The type of network that is used for private tenant networks
is determined by the network_type attribute, which can be
edited in the Chef override_attributes. This attribute sets
both the default provider network type and the only type of
network that tenants are able to create. Administrators can
always create flat and VLAN networks. GRE networks require
the network_type attribute to be set to gre.</para>
<para><emphasis role="bold">Namespaces</emphasis></para>
<para>For each network you create, the Network node (or Controller
node, if combined) will have a unique network namespace
(netns) created by the DHCP and Metadata agents. The netns
hosts an interface and IP addresses for dnsmasq and the
quantum-ns-metadata-proxy. You can list the namespaces with
the ip netns list command, and can interact with a namespace
with the ip netns exec &lt;namespace&gt; &lt;command&gt;
command.</para>
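<para>A short sketch of inspecting a DHCP namespace; the namespace
name shown is a placeholder (actual names embed the network
UUID):</para>
<screen>$ ip netns list
qdhcp-NETWORK_ID
$ ip netns exec qdhcp-NETWORK_ID ip addr</screen>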
<para><emphasis role="bold">Metadata</emphasis></para>
<para>Not all networks or VMs need metadata access. Rackspace
recommends that you use metadata if you are using a single
network. If you need metadata, you may also need a default
route. (If you don't need a default route, no-gateway will
do.)</para>
<para>To communicate with the metadata IP address inside the
namespace, instances need a route for the metadata network
that points to the dnsmasq IP address on the same namespaced
interface. OpenStack Networking only injects a route when you
do not specify a gateway-ip in the subnet.</para>
<para>If you need to use a default route and provide instances
with access to the metadata route, create the subnet without
specifying a gateway IP and with a static route from 0.0.0.0/0
to your gateway IP address. Adjust the DHCP allocation pool so
that it will not assign the gateway IP. With this
configuration, dnsmasq will pass both routes to instances.
This way, metadata will be routed correctly without any
changes on the external gateway.</para>
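<para>A hedged example of such a subnet: it is created without a
gateway, carries a default route that points at the real gateway
address, and excludes that address from the DHCP allocation pool
(all names and addresses are placeholders):</para>
<screen>$ quantum subnet-create --name subnet1 --no-gateway \
  --host-route destination=0.0.0.0/0,nexthop=10.0.0.1 \
  --allocation-pool start=10.0.0.2,end=10.0.0.254 \
  net1 10.0.0.0/24</screen>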
<para><emphasis role="bold">OVS Bridges</emphasis></para>
<para>An OVS bridge for provider traffic is created and configured
on the nodes where single-network-node and single-compute are
applied. Bridges are created, but physical interfaces are not
added. An OVS bridge is not created on a Controller-only
node.</para>
<para>When creating networks, you can specify the type and
properties, such as Flat vs. VLAN, Shared vs. Tenant, or
Provider vs. Overlay. These properties identify and determine
the behavior and resources of instances attached to the
network. The cookbooks will create bridges for the
configuration that you specify, although they do not add
physical interfaces to provider bridges. For example, if you
specify a network type of GRE, a br-tun tunnel bridge will be
created to handle overlay traffic.</para>
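<para>For example, you can list the bridges that were created and
manually attach a physical interface to a provider bridge; the
bridge and interface names below are assumptions for this
sketch:</para>
<screen>$ ovs-vsctl list-br
$ ovs-vsctl add-port br-eth1 eth1</screen>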
</section>
<section xml:id="neutron-use-cases">
<title>Neutron Use Cases</title>
<para>Now that you have seen what OpenStack Networking offers,
the following use cases show how these features are commonly
deployed.</para>
<para><guilabel><anchor xml:id="h.lrsgdytf1mh5"/>Use Case: Single Flat
Network</guilabel></para>
<para>In the simplest use case, a single OpenStack Networking
network exists. This is a "shared" network, meaning it is
visible to all tenants via the OpenStack Networking API.
Tenant VMs have a single NIC, and receive a fixed IP
address from the subnet(s) associated with that network.
This essentially maps to the FlatManager and
FlatDHCPManager models provided by OpenStack Compute.
Floating IPs are not supported.</para>
<para>It is common that such an OpenStack Networking network
is a "provider network", meaning it was created by the
OpenStack administrator to map directly to an existing
physical network in the data center. This allows the
provider to use a physical router on that data center
network as the gateway for VMs to reach the outside world.
For each subnet on an external network, the gateway
configuration on the physical router must be manually
configured outside of OpenStack.</para>
<figure>
<title>Single Flat Network</title>
<mediaobject>
<imageobject>
<imagedata fileref="figures/image34.png"/>
</imageobject>
</mediaobject>
</figure>
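<para>A sketch of how an administrator might create such a shared
provider network, assuming the plugin is configured with a
physical network labeled physnet1 (all names and addresses are
placeholders, and option spellings can vary slightly between
client releases):</para>
<screen>$ quantum net-create public01 --shared \
  --provider:network_type flat --provider:physical_network physnet1
$ quantum subnet-create --name public01-subnet \
  --gateway 192.168.100.1 public01 192.168.100.0/24</screen>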
<para><guilabel>Use Case: Multiple Flat
Networks</guilabel></para>
<para>This use case is very similar to the above Single Flat
Network use case, except that tenants see multiple shared
networks via the OpenStack Networking API and can choose
which network (or networks) to plug into.</para>
<figure>
<title>Multiple Flat Networks</title>
<mediaobject>
<imageobject>
<imagedata fileref="figures/image35.png"/>
</imageobject>
</mediaobject>
</figure>
<para><guilabel>Use Case: Mixed Flat and Private
Network</guilabel></para>
<para>This use case is an extension of the above flat network
use cases, in which tenants also optionally have access to
private per-tenant networks. In addition to seeing one or
more shared networks via the OpenStack Networking API,
tenants can create additional networks that are only
visible to users of that tenant. When creating VMs, those
VMs can have NICs on any of the shared networks and/or any
of the private networks belonging to the tenant. This
enables the creation of "multi-tier" topologies using VMs
with multiple NICs. It also supports a model where a VM
acting as a gateway can provide services such as routing,
NAT, or load balancing.</para>
<figure>
<title>Mixed Flat and Private Network</title>
<mediaobject>
<imageobject>
<imagedata fileref="figures/image36.png"/>
</imageobject>
</mediaobject>
</figure>
<para><guilabel>Use Case: Provider Router with Private
Networks</guilabel></para>
<para>This use case provides each tenant with one or more private
networks, which connect to the outside world via an
OpenStack Networking router. The case where each tenant
gets exactly one network in this form maps to the same
logical topology as the VlanManager in OpenStack Compute
(though, of course, OpenStack Networking does not require
VLANs). Using the OpenStack Networking API, the tenant sees
only a network for each private network assigned to that
tenant. The router object in the API is created and owned
by the cloud admin.</para>
<para>This model supports giving VMs public addresses using
"floating IPs", in which the router maps public addresses
from the external network to fixed IPs on private
networks. Hosts without floating IPs can still create
outbound connections to the external network, as the
provider router performs SNAT to the router's external IP.
The IP address of the physical router is used as the
gateway_ip of the external network subnet, so the provider
has a default router for Internet traffic.</para>
<para>The router provides L3 connectivity between private
networks, meaning that different tenants can reach each
other's instances unless additional filtering (for example,
security groups) is used. Because there is only a single
router, tenant networks cannot use overlapping IPs. Thus,
it is likely that the admin would create the private
networks on behalf of tenants.</para>
<figure>
<title>Provider Router with Private Networks</title>
<mediaobject>
<imageobject>
<imagedata fileref="figures/image37.png"/>
</imageobject>
</mediaobject>
</figure>
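<para>A sketch of the administrator-side commands for this
topology, assuming an external network named ext-net and a
tenant subnet already exist (names and IDs are
placeholders):</para>
<screen>$ quantum router-create provider-router
$ quantum router-gateway-set provider-router ext-net
$ quantum router-interface-add provider-router TENANT_SUBNET_ID</screen>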
<para><guilabel>Use Case: Per-tenant Routers with Private
Networks</guilabel></para>
<para>This is a more advanced router scenario in which each tenant
gets at least one router, and potentially has access to
the OpenStack Networking API to create additional routers.
Tenants can create their own networks and, potentially,
uplink those networks to a router. This model enables
tenant-defined multi-tier applications, with each tier
being a separate network behind the router. Because there
are multiple routers, tenant subnets can overlap
without conflicting, since access to external networks all
happens via SNAT or floating IPs. Each router uplink and
floating IP is allocated from the external network
subnet.</para>
<figure>
<title>Per-tenant Routers with Private Networks</title>
<mediaobject>
<imageobject>
<imagedata fileref="figures/image38.png"/>
</imageobject>
</mediaobject>
</figure>
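<para>In this model, a tenant can build a multi-tier topology
self-service. A sketch, with all names as placeholders:</para>
<screen>$ quantum router-create tenant-router
$ quantum router-gateway-set tenant-router ext-net
$ quantum net-create web-tier
$ quantum subnet-create --name web-subnet web-tier 10.1.0.0/24
$ quantum router-interface-add tenant-router web-subnet</screen>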
</section>
<section xml:id="security-in-neutron">
<title>Security in Neutron</title>
<para><guilabel>Security Groups</guilabel></para>
<para>Security groups and security group rules allow
administrators and tenants to specify the type
of traffic and direction (ingress/egress) that is allowed
to pass through a port. A security group is a container
for security group rules.</para>
<para>When a port is created in OpenStack Networking, it is
associated with a security group. If a security group is
not specified, the port is associated with a 'default'
security group. By default, this group drops all
ingress traffic and allows all egress. Rules can be added
to this group in order to change the behaviour.</para>
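<para>For example, a tenant could allow inbound SSH by adding a
rule to the default group; this is a sketch, and option
spellings can vary slightly between client releases:</para>
<screen>$ quantum security-group-rule-create --direction ingress \
  --protocol tcp --port-range-min 22 --port-range-max 22 default</screen>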
<para>To use the OpenStack Compute security
group APIs or have OpenStack Compute orchestrate the
creation of new ports for instances on specific security
groups, additional configuration is needed: set the
security_group_api=neutron option in
/etc/nova/nova.conf on every node running
nova-compute and nova-api, and then restart nova-api and
nova-compute to pick up the change. After this change, you
can use both the OpenStack Compute and OpenStack Networking
security group APIs at the same time.</para>
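<para>The corresponding nova.conf excerpt would look like the
following minimal sketch; the firewall driver line is a common
companion setting, shown here as an assumption rather than a
requirement:</para>
<programlisting># /etc/nova/nova.conf (excerpt)
security_group_api=neutron
# assumption: let OpenStack Networking handle filtering instead of nova
firewall_driver=nova.virt.firewall.NoopFirewallDriver</programlisting>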
<para><guilabel>Authentication and Authorization</guilabel></para>
<para>OpenStack Networking uses the OpenStack Identity service
(project name keystone) as the default authentication
service. When OpenStack Identity is enabled, users
submitting requests to the OpenStack Networking service
must provide an authentication token in the X-Auth-Token
request header. The token is obtained by authenticating
with the OpenStack Identity endpoint. For more information
concerning authentication with OpenStack Identity, refer
to the OpenStack Identity documentation. When OpenStack
Identity is enabled, it is not mandatory to specify
tenant_id for resources in create requests, because the
tenant identifier is derived from the authentication
token. Note that the default authorization settings only
allow administrative users to create resources on behalf
of a different tenant. OpenStack Networking uses
information received from OpenStack Identity to authorize
user requests. OpenStack Networking handles two kinds of
authorization policies:</para>
<itemizedlist>
<listitem>
<para><emphasis role="bold">Operation-based:</emphasis>
policies specify access criteria for specific
operations, possibly with fine-grained control over
specific attributes;</para>
</listitem>
<listitem>
<para><emphasis role="bold"
>Resource-based:</emphasis>whether access to specific
resource might be granted or not according to the
permissions configured for the resource (currently
available only for the network resource). The actual
authorization policies enforced in OpenStack
Networking might vary from deployment to
deployment.</para>
</listitem>
</itemizedlist>
<para>The policy engine reads entries from the policy.json
file. The actual location of this file might vary from
distribution to distribution. Entries can be updated while
the system is running, and no service restart is required;
every time the policy file is updated, the policies are
automatically reloaded. Currently the only way of updating
such policies is to edit the policy file. Note that in this
section we use both the terms "policy" and "rule" to refer
to objects that are specified in the same way in the policy
file; in other words, there are no syntax differences
between a rule and a policy. We define a policy as
something that is matched directly by the OpenStack
Networking policy engine, whereas a rule is an element of
such a policy that is then evaluated. For instance, in
create_subnet: [["admin_or_network_owner"]], create_subnet
is regarded as a policy, whereas admin_or_network_owner is
regarded as a rule.</para>
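<para>A policy.json fragment in the syntax used above, combining
the create_subnet policy with the rule it references (a sketch
modeled on default-style entries):</para>
<programlisting>{
    "admin_or_network_owner": [["role:admin"],
        ["tenant_id:%(network_tenant_id)s"]],
    "create_subnet": [["rule:admin_or_network_owner"]]
}</programlisting>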
<para>Policies are triggered by the OpenStack Networking
policy engine whenever one of them matches an OpenStack
Networking API operation or a specific attribute being
used in a given operation. For instance, the create_subnet
policy is triggered every time a POST /v2.0/subnets
request is sent to the OpenStack Networking server; on the
other hand, create_network:shared is triggered every time
the shared attribute is explicitly specified (and set to a
value different from its default) in a POST /v2.0/networks
request. It is also worth mentioning that policies can
also relate to specific API extensions; for instance,
extension:provider_network:set is triggered if the
attributes defined by the Provider Network extension are
specified in an API request.</para>
<para>An authorization policy can be composed of one or more
rules. If multiple rules are specified, the policy
evaluates successfully if any of the rules evaluates
successfully; if an API operation matches multiple
policies, then all the policies must evaluate
successfully. Authorization rules are also recursive:
once a rule is matched, it can resolve to another rule,
until a terminal rule is reached.</para>
<para>The OpenStack Networking policy engine currently defines
the following kinds of terminal rules:</para>
<itemizedlist>
<listitem>
<para><emphasis role="bold">Role-based
rules:</emphasis> evaluate successfully if the
user submitting the request has the specified role.
For instance "role:admin"is successful if the user
submitting the request is an administrator.</para>
</listitem>
<listitem>
<para><emphasis role="bold">Field-based
rules:</emphasis> evaluate successfully if a field
of the resource specified in the current request
matches a specific value. For instance
"field:networks:shared=True" is successful if the
attribute shared of the network resource is set to
true.</para>
</listitem>
<listitem>
<para><emphasis role="bold">Generic
rules:</emphasis>compare an attribute in the resource
with an attribute extracted from the user's security
credentials and evaluates successfully if the
comparison is successful. For instance
"tenant_id:%(tenant_id)s" is successful if the tenant
identifier in the resource is equal to the tenant
identifier of the user submitting the request.</para>
</listitem>
</itemizedlist>
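<para>The three kinds of terminal rules can appear together in
default-style policy entries, as in the following sketch:</para>
<programlisting>{
    "admin_or_owner": [["role:admin"], ["tenant_id:%(tenant_id)s"]],
    "shared": [["field:networks:shared=True"]],
    "get_network": [["rule:admin_or_owner"], ["rule:shared"]]
}</programlisting>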
</section>
<section xml:id="floating-ips">
<title>Floating IP Addresses And Security Rules</title>
<para>OpenStack Networking has the concept of fixed IPs and
floating IPs. Fixed IPs are assigned to an instance on
creation and stay the same until the instance is explicitly
terminated. Floating IPs are IP addresses that can be
dynamically associated with an instance. A floating IP can
be disassociated and associated with another instance at
any time.</para>
<para>Floating IPs currently carry out the following tasks; a
command-line sketch follows the list.</para>
<itemizedlist>
<listitem>
<para>Create IP ranges under a certain group
(available only to the admin role).</para>
</listitem>
<listitem>
<para>Allocate a floating IP to a certain tenant
(available only to the admin role).</para>
</listitem>
<listitem>
<para>Deallocate a floating IP from a certain
tenant.</para>
</listitem>
<listitem>
<para>Associate a floating IP with a given
instance.</para>
</listitem>
<listitem>
<para>Disassociate a floating IP from a certain
instance.</para>
</listitem>
</itemizedlist>
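<para>A sketch of the tenant-facing workflow with the
command-line client (IDs are placeholders):</para>
<screen>$ quantum floatingip-create EXTERNAL_NET_ID
$ quantum floatingip-associate FLOATINGIP_ID PORT_ID
$ quantum floatingip-disassociate FLOATINGIP_ID</screen>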
<para>nova-network-api supports the nova client floating IP
commands. nova-network-api invokes the quantum client
library to interact with the quantum server via its API.
The data about floating IPs is stored in the quantum
database, and the quantum agent running on the compute
host enforces the floating IP configuration.</para>
<para><guilabel>Multiple Floating
IP Pools</guilabel></para>
<para>The L3 API in OpenStack Networking supports multiple
floating IP pools. In OpenStack Networking, a floating
IP pool is represented as an external network, and a
floating IP is allocated from a subnet associated with
the external network. Because each L3 agent can be
associated with at most one external network, you must
run multiple L3 agents to define multiple
floating IP pools. The gateway_external_network_id
option in the L3 agent configuration file indicates the
external network that the L3 agent handles. You can run
multiple L3 agent instances on one host.</para>
<para>In addition, when you run multiple L3 agents, make
sure that handle_internal_only_routers is set to
True for only one L3 agent in an OpenStack Networking
deployment and set to False for all other L3 agents.
Since the default value of this parameter is True, you
need to configure it carefully.</para>
<para>Before starting the L3 agents, you need to create the
routers and external networks, update the
configuration files with the UUIDs of the external
networks, and then start the L3 agents.</para>
<para>For the first agent, invoke it with the following
l3_agent.ini, where handle_internal_only_routers is
True.</para>
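<para>The fragment below is a reconstruction from the
settings described above; the UUIDs and bridge names are
placeholders:</para>
<programlisting># l3_agent.ini for the first agent
handle_internal_only_routers = True
gateway_external_network_id = UUID_OF_EXTERNAL_NETWORK_1
external_network_bridge = br-ex

# l3_agent.ini for each additional agent
handle_internal_only_routers = False
gateway_external_network_id = UUID_OF_EXTERNAL_NETWORK_2
external_network_bridge = br-ex-2</programlisting>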
</section>
</section>