Installing OpenStack Networking Service

Initial prerequisites

If you are building a host from scratch and will use OpenStack Networking, we strongly recommend that you use Ubuntu 12.04 or 12.10, or Fedora 17 or 18. These platforms have OpenStack Networking packages and receive significant testing.

OpenStack Networking requires at least dnsmasq 2.59, which contains all the necessary options.
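If you are unsure which dnsmasq version a host provides, you can check it directly; this is just an optional sanity check:
$ dnsmasq --version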
Install packages (Ubuntu)

This procedure uses the Ubuntu Cloud Archive. You can read more about it at http://bit.ly/Q8OJ9M.

Point to the Havana PPAs:
$ echo deb http://ubuntu-cloud.archive.canonical.com/ubuntu precise-updates/havana main | sudo tee /etc/apt/sources.list.d/havana.list
$ sudo apt-get install ubuntu-cloud-keyring
$ sudo apt-get update
$ sudo apt-get upgrade

Install neutron-server

The neutron-server handles OpenStack Networking's
user requests and exposes the API.

To install the neutron-server

Install neutron-server and the CLI for accessing the API:
$ sudo apt-get install neutron-server python-neutronclient

You must also install the plugin you choose to use, for example:
$ sudo apt-get install neutron-plugin-<plugin-name>

Most plugins require that you install a database and
configure it in a plugin configuration file. For
example:
$ sudo apt-get install mysql-server python-mysqldb python-sqlalchemy

If you already use a database for other OpenStack services, you only need to create a neutron database:
$ mysql -u <user> -p<password> -e "create database neutron"
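If you prefer a dedicated database account instead of connecting as an existing user, the following GRANT statements are a minimal sketch; the neutron user name and <db-password> are placeholders you would substitute with your own values:
$ mysql -u root -p -e "GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY '<db-password>';"
$ mysql -u root -p -e "GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY '<db-password>';"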
Configure the database in the plugin's configuration file:

Find the plugin configuration file in
/etc/neutron/plugins/<plugin-name>
(for example,
/etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini).
Set in the file:
sql_connection = mysql://<user>:<password>@localhost/neutron?charset=utf8

RPC Setup

Many OpenStack Networking plugins use RPC to enable
agents to communicate with the main neutron-server process. If
your plugin requires agents, they can use the same RPC
mechanism used by other OpenStack components like Nova.

To use RabbitMQ as the message bus for RPC

Install RabbitMQ on a host reachable through the
management network (this step is not necessary if
RabbitMQ has already been installed for another
service, like Compute):
$ sudo apt-get install rabbitmq-server
$ rabbitmqctl change_password guest <password>

Update
/etc/neutron/neutron.conf with
the following values:
rabbit_host=<mgmt-IP-of-rabbit-host>
rabbit_password=<password>
rabbit_userid=guest

The /etc/neutron/neutron.conf
file should be copied to and used on all hosts running
neutron-server
or any neutron-*-agent binaries.

Plugin Configuration: OVS Plugin

If you use the Open vSwitch (OVS) plugin in a deployment
with multiple hosts, you will need to use either tunneling
or VLANs to isolate traffic from multiple networks.
Tunneling is easier to deploy because it does not require
configuring VLANs on network switches.

The following procedure uses tunneling:

To configure OpenStack Networking to use the OVS plugin

Edit
/etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini
to specify the following values (for
database configuration, see the database setup described above):
enable_tunneling=True
tenant_network_type=gre
tunnel_id_ranges=1:1000
# only required for nodes running agents
local_ip=<data-net-IP-address-of-node>

If you are using the neutron DHCP agent, add the following to /etc/neutron/dhcp_agent.ini:
dnsmasq_config_file=/etc/neutron/dnsmasq-neutron.conf

Create
/etc/neutron/dnsmasq-neutron.conf,
and add the following values to lower the MTU size on
instances and prevent packet fragmentation over the
GRE tunnel:
dhcp-option-force=26,1400

After performing that change on the node running neutron-server, restart neutron-server to pick up the new settings:
$ sudo service neutron-server restart
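Putting the database and tunneling settings together, a node's /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini might look roughly like the sketch below. This is only an illustration: section names and defaults can differ between releases and the values shown are placeholders, so compare against the sample file shipped with your packages:
[database]
sql_connection = mysql://<user>:<password>@localhost/neutron?charset=utf8
[ovs]
enable_tunneling = True
tenant_network_type = gre
tunnel_id_ranges = 1:1000
# only required for nodes running agents
local_ip = <data-net-IP-address-of-node>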
Plugin Configuration: Nicira NVP Plugin

To configure OpenStack Networking to use the NVP
plugin

Install the NVP plugin, as follows:
$ sudo apt-get install neutron-plugin-nicira

Edit
/etc/neutron/neutron.conf and
set:
core_plugin = neutron.plugins.nicira.NeutronPlugin.NvpPluginV2

Example neutron.conf file for NVP:
core_plugin = neutron.plugins.nicira.NeutronPlugin.NvpPluginV2
rabbit_host = 192.168.203.10
allow_overlapping_ips = True

To tell OpenStack Networking about a controller
cluster, create a new [cluster:<name>] section
in the
/etc/neutron/plugins/nicira/nvp.ini
file, and add the following entries (for database
configuration, see the database setup described above):

The UUID of the NVP Transport Zone that should be used by default when a tenant creates a network. This value can be retrieved from the NVP Manager Transport Zones page:
default_tz_uuid = <uuid_of_the_transport_zone>

A connection string indicating parameters to
be used by the NVP plugin when connecting to the
NVP webservice API. There will be one of these
lines in the file for each NVP controller in
your deployment. An NVP operator will likely
want to update the NVP controller IP and
password, but the remaining fields can be the
defaults:
nvp_controller_connection = <controller_node_ip>:<controller_port>:<api_user>:<api_password>:<request_timeout>:<http_timeout>:<retries>:<redirects>

The UUID of an NVP L3 Gateway Service that
should be used by default when a tenant creates
a router. This value can be retrieved from the
NVP Manager Gateway Services page:
default_l3_gw_service_uuid = <uuid_of_the_gateway_service>

Ubuntu packaging currently does not update
the neutron init script to point to the NVP
configuration file. Instead, you must manually
update
/etc/default/neutron-server
with the following:
NEUTRON_PLUGIN_CONFIG = /etc/neutron/plugins/nicira/nvp.ini

Restart neutron-server to pick up the new settings:
$ sudo service neutron-server restart

Example nvp.ini file:
[database]
sql_connection=mysql://root:root@127.0.0.1/neutron
[cluster:main]
default_tz_uuid = d3afb164-b263-4aaa-a3e4-48e0e09bb33c
default_l3_gw_service_uuid=5c8622cc-240a-40a1-9693-e6a5fca4e3cf
nvp_controller_connection=10.0.0.2:443:admin:admin:30:10:2:2
nvp_controller_connection=10.0.0.3:443:admin:admin:30:10:2:2
nvp_controller_connection=10.0.0.4:443:admin:admin:30:10:2:2

To debug nvp.ini configuration
issues, run the following command from the host running
neutron-server:
$ check-nvp-config <path/to/nvp.ini>

This
command tests whether neutron-server can log into all of the NVP
Controllers and the SQL server, and whether all of the UUID
values are correct.

Configuring Big Switch, Floodlight REST Proxy Plugin

To use the REST Proxy plugin with OpenStack Networking

Edit /etc/neutron/neutron.conf and set:
core_plugin = neutron.plugins.bigswitch.plugin.NeutronRestProxyV2

Edit the plugin configuration file,
/etc/neutron/plugins/bigswitch/restproxy.ini,
and specify a comma-separated list of
controller_ip:port pairs:
server = <controller-ip>:<port>
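For instance, a deployment with two controllers listening on port 80 might use a line like the following; the addresses here are placeholders:
server = 192.168.10.10:80,192.168.10.11:80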
For database configuration, see the database setup described above.

Restart neutron-server to pick up the new settings:
$ sudo service neutron-server restart

Configuring Ryu Plugin

To use the Ryu plugin with OpenStack
Networking

Install the Ryu plugin, as follows:
$ sudo apt-get install neutron-plugin-ryu

Edit /etc/neutron/neutron.conf and set:
core_plugin = neutron.plugins.ryu.ryu_neutron_plugin.RyuNeutronPluginV2

Edit
/etc/neutron/plugins/ryu/ryu.ini
(for database configuration, see the database setup described above), and update the following in the [ovs] section for the ryu-neutron-agent:

The
openflow_rest_api is
used to tell where Ryu is listening for its REST
API. Substitute
ip-address and
port-no based on your
Ryu setup.

The ovsdb_interface is
used for Ryu to access the
ovsdb-server.
Substitute eth0 based on your setup. The IP
address is derived from the interface name. If
you want to change this value irrespective of
the interface name,
ovsdb_ip can be
specified. If you use a non-default port for
ovsdb-server, it can
be specified by
ovsdb_port.

The tunnel_interface option
needs to be set to tell what IP address is used
for tunneling (if tunneling isn't used, this
value is ignored). The IP address is derived
from the network interface name.

You can use the same configuration file for many
Compute nodes by using a network interface name with a
different IP address:
openflow_rest_api = <ip-address>:<port-no>
ovsdb_interface = <eth0>
tunnel_interface = <eth0>

Restart neutron-server to pick up the new settings:
$ sudo service neutron-server restart

Configuring PLUMgrid Plugin

To use the PLUMgrid plugin with OpenStack
Networking

Edit /etc/neutron/neutron.conf and set:
core_plugin = neutron.plugins.plumgrid.plumgrid_plugin.plumgrid_plugin.NeutronPluginPLUMgridV2

Edit /etc/neutron/plugins/plumgrid/plumgrid.ini under the
[PLUMgridDirector] section, and specify the IP address,
port, admin user name, and password of the PLUMgrid Director:
[PLUMgridDirector]
director_server = "PLUMgrid-director-ip-address"
director_server_port = "PLUMgrid-director-port"
username = "PLUMgrid-director-admin-username"
password = "PLUMgrid-director-admin-password"
For database configuration, see the database setup described above.

Restart neutron-server to pick up the new settings:
$ sudo service neutron-server restart

Install Software on Data Forwarding Nodes

Plugins typically have requirements for particular
software that must be run on each node that handles data
packets. This includes any node running nova-compute, as well as nodes
running dedicated OpenStack Networking service agents like
neutron-dhcp-agent,
neutron-l3-agent, or
neutron-lbaas-agent (see below for
more information about individual service agents).

A data-forwarding node typically has a network interface
with an IP address on the “management network” and another
interface on the "data network".

In this section, you will learn how to install and
configure a subset of the available plugins, which may include
the installation of switching software (for example, Open
vSwitch) as well as agents used to communicate with the
neutron-server
process running elsewhere in the data center.

Node Setup: OVS Plugin

If you use the Open vSwitch plugin, you must also
install Open vSwitch as well as the
neutron-plugin-openvswitch-agent
agent on each data-forwarding node.

Do not install the openvswitch-brcompat package as it breaks the security groups functionality.

To set up each node for the OVS plugin

Install the OVS agent package (this pulls in the Open vSwitch software as a dependency):
$ sudo apt-get install neutron-plugin-openvswitch-agent

On each node running
neutron-plugin-openvswitch-agent: Replicate the
ovs_neutron_plugin.ini
file created in the first step onto the node.
If using tunneling, the node's
ovs_neutron_plugin.ini
file must also be updated with the node's IP
address configured on the data network using the
local_ip value.
Restart Open vSwitch to properly load the kernel
module:
$ sudo service openvswitch-switch restart

Restart the agent:
$ sudo service neutron-plugin-openvswitch-agent restart

All nodes running neutron-plugin-openvswitch-agent must have an OVS bridge named "br-int". To create the bridge, run:
$ sudo ovs-vsctl add-br br-int
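If you want to confirm that the bridge was created (an optional check), ovs-vsctl can report whether it exists:
$ sudo ovs-vsctl br-exists br-int && echo "br-int is present"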
Node Setup: Nicira NVP Plugin

If you use the Nicira NVP plugin, you must also install
Open vSwitch on each data-forwarding node. However, you do
not need to install an additional agent on each node.

It is critical that you are running a version of Open
vSwitch that is compatible with the current version of the
NVP Controller software. Do not use the version of Open
vSwitch installed by default on Ubuntu. Instead, use the
version of Open vSwitch provided on the Nicira support
portal for your version of the NVP Controller.

To set up each node for the Nicira NVP plugin

Ensure each data-forwarding node has an IP address
on the "management network", as well as an IP address
on the "data network" used for tunneling data traffic.
For full details on configuring your forwarding node,
please see the NVP Administrator
Guide.

Use the NVP Administrator
Guide to add the node as a "Hypervisor"
using the NVP Manager GUI. Even if your forwarding
node has no VMs and is only used for services agents
like neutron-dhcp-agent or
neutron-lbaas-agent, it
should still be added to NVP as a Hypervisor.

After following the NVP Administrator
Guide, use the page for this Hypervisor
in the NVP Manager GUI to confirm that the node is
properly connected to the NVP Controller Cluster and
that the NVP Controller Cluster can see the
integration bridge "br-int".Node Setup: Ryu PluginIf you use the Ryu plugin, you must install both Open
vSwitch and Ryu, in addition to the Ryu agent package.

To set up each node for the Ryu plugin

Install Ryu (there isn't currently a Ryu package for Ubuntu):
$ sudo pip install ryu

Install the Ryu agent and Open vSwitch packages:
$ sudo apt-get install neutron-plugin-ryu-agent openvswitch-switch python-openvswitch openvswitch-datapath-dkms

Replicate the
ovs_ryu_plugin.ini and
neutron.conf files created in
the above step on all nodes running
neutron-plugin-ryu-agent.
Restart Open vSwitch to properly load the kernel
module:
$ sudo service openvswitch-switch restart

Restart the agent:
$ sudo service neutron-plugin-ryu-agent restart

All nodes running
neutron-plugin-ryu-agent
also require that an OVS bridge named "br-int" exists
on each node. To create the bridge, run:
$ sudo ovs-vsctl add-br br-int

Install DHCP Agent

The DHCP service agent is compatible with all existing
plugins and is required for all deployments where VMs should
automatically receive IP addresses via DHCP.

To install and configure the DHCP agent

You must configure the host running the
neutron-dhcp-agent as a "data
forwarding node" according to your plugin's requirements
(see the node setup sections above).

Install the DHCP agent:
$ sudo apt-get install neutron-dhcp-agent

Finally, update any options in
/etc/neutron/dhcp_agent.ini that
depend on the plugin in use (see the sub-sections).
DHCP Agent Setup: OVS Plugin

The following DHCP agent options are required in the
/etc/neutron/dhcp_agent.ini file for
the OVS plugin:
[DEFAULT]
ovs_use_veth = True
enable_isolated_metadata = True
use_namespaces = True
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver

DHCP Agent Setup: NVP Plugin

The following DHCP agent options are required in the
/etc/neutron/dhcp_agent.ini file for
the NVP plugin:
[DEFAULT]
ovs_use_veth = True
enable_metadata_network = True
enable_isolated_metadata = True
use_namespaces = True
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver

DHCP Agent Setup: Ryu Plugin

The following DHCP agent options are required in the
/etc/neutron/dhcp_agent.ini file for
the Ryu plugin:
[DEFAULT]
ovs_use_veth = True
use_namespaces = True
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver

Install L3 Agent

Neutron has a widely used API extension to allow
administrators and tenants to create "routers" that connect to
L2 networks.

Many plugins rely on the L3 service agent to implement the
L3 functionality. However, the following plugins already have
built-in L3 capabilities:

Nicira NVP Plugin

Floodlight/BigSwitch Plugin, which supports both the
open source Floodlight controller and the proprietary
BigSwitch controller.

Only the proprietary BigSwitch controller
implements L3 functionality. When using Floodlight as
your OpenFlow controller, L3 functionality is not
available.

PLUMgrid Plugin

Do not configure or use
neutron-l3-agent if you use one of
these plugins.

To install the L3 Agent for all other plugins

Install the
neutron-l3-agent binary on
the network node:
$ sudo apt-get install neutron-l3-agent

To uplink the node that runs
neutron-l3-agent to the
external network, create a bridge named "br-ex" and
attach the NIC for the external network to this bridge.

For example, with Open vSwitch and NIC eth1
connected to the external network, run:
$ sudo ovs-vsctl add-br br-ex
$ sudo ovs-vsctl add-port br-ex eth1

Do not manually configure an IP address on the NIC
connected to the external network for the node running
neutron-l3-agent. Rather, you
must have a range of IP addresses from the external
network that can be used by OpenStack Networking for
routers that uplink to the external network. This range
must be large enough to have an IP address for each
router in the deployment, as well as each floating IP.
The neutron-l3-agent uses
the Linux IP stack and iptables to perform L3 forwarding
and NAT. In order to support multiple routers with
potentially overlapping IP addresses,
neutron-l3-agent defaults to
using Linux network namespaces to provide isolated
forwarding contexts. As a result, the IP addresses of
routers will not be visible simply by running
ip addr list or
ifconfig on the node. Similarly,
you will not be able to directly ping
fixed IPs.

To do either of these things, you must run the
command within a particular router's network namespace.
The namespace will have the name "qrouter-<UUID of
the router>". The following commands are examples of
running commands in the namespace of a router with UUID
47af3868-0fa8-4447-85f6-1304de32153b:
$ ip netns exec qrouter-47af3868-0fa8-4447-85f6-1304de32153b ip addr list
$ ip netns exec qrouter-47af3868-0fa8-4447-85f6-1304de32153b ping <fixed-ip>
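If you do not know a router's UUID, you can list the namespaces present on the network node first; this is simply an illustrative way to find the qrouter-* names:
$ ip netns | grep qrouter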
Install LBaaS Agent

Starting with the Havana release, the Neutron Load-Balancer-as-a-Service (LBaaS) supports
an agent scheduling mechanism, so several
neutron-lbaas-agents can be run on
several nodes (one agent per node).

To install the LBaaS agent and configure the
node

Install the agent by running:
$ sudo apt-get install neutron-lbaas-agent

If you are using an OVS-based plugin (OVS, NVP, Ryu, NEC, BigSwitch/Floodlight), you must set:
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver

If you are using a plugin that uses Linux Bridge, you must set:
interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver

To use the reference implementation, you must also set:
device_driver = neutron.plugins.services.agent_loadbalancer.drivers.haproxy.namespace_driver.HaproxyNSDriver
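These driver options go in the LBaaS agent's own configuration file. As a rough sketch, assuming the packaged /etc/neutron/lbaas_agent.ini and an OVS-based plugin, the relevant section might look like the following; check the sample file shipped with your packages for the exact layout:
[DEFAULT]
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
device_driver = neutron.plugins.services.agent_loadbalancer.drivers.haproxy.namespace_driver.HaproxyNSDriver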
Make sure to set the following parameter in neutron.conf on the host that runs neutron-server:
service_plugins = neutron.plugins.services.agent_loadbalancer.plugin.LoadBalancerPlugin

Install FWaaS Agent

The FWaaS agent is colocated with the Neutron L3 agent and does
not require any additional packages apart from those required for
the Neutron L3 agent. The FWaaS functionality can be enabled by
setting the configuration as described below.
Configuring FWaaS Service and Agent

Make sure to set the following parameter in
neutron.conf on the host that
runs neutron-server:
service_plugins = neutron.services.firewall.fwaas_plugin.FirewallPlugin

To use the reference implementation, you must also
add a FWaaS driver configuration to the
neutron.conf on every node
on which the Neutron L3 agent is deployed:
[fwaas]
driver = neutron.services.firewall.drivers.linux.iptables_fwaas.IptablesFwaasDriver
enabled = True

Install OpenStack Networking CLI Client

Install the OpenStack Networking CLI client by running:
$ sudo apt-get install python-pyparsing python-cliff python-neutronclient
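Once your OpenStack credentials are loaded into the environment (for example, OS_USERNAME, OS_PASSWORD, OS_TENANT_NAME, and OS_AUTH_URL from an openrc file), a quick way to confirm the client can reach the API is to list networks; this is purely an optional check:
$ neutron net-list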
Initialization and File Locations

You can start and stop OpenStack Networking services using
the 'service' command. For example:
$ sudo service neutron-server stop
$ sudo service neutron-server status
$ sudo service neutron-server start
$ sudo service neutron-server restart

Log files are in the
/var/log/neutron directory.

Configuration files are in the
/etc/neutron directory.

Installing Packages (Fedora)

You can retrieve the OpenStack packages for Fedora from:
https://apps.fedoraproject.org/packages/s/openstack
You can find additional information here: https://fedoraproject.org/wiki/OpenStack
RPC Setup

OpenStack Networking uses RPC to allow DHCP agents and any
plugin agents to communicate with the main neutron-server process.
Typically, the agent can use the same RPC mechanism used by
other OpenStack components like Nova.

To use Qpid AMQP as the message bus for RPC

Ensure that Qpid is installed on a host reachable
via the management network (if this is already the case
because of deploying another service like Nova, the
existing Qpid setup is sufficient):
$ sudo yum install qpid-cpp-server qpid-cpp-server-daemon
$ sudo chkconfig qpidd on
$ sudo service qpidd start

Update
/etc/neutron/neutron.conf with
the following values:
rpc_backend = neutron.openstack.common.rpc.impl_qpid
qpid_hostname = <mgmt-IP-of-qpid-host>

Fedora packaging includes utility scripts that
configure all of the necessary configuration files, and
which can also be used to understand how each OpenStack
Networking service is configured. The scripts use the
package openstack-utils. To
install the package, execute:
$ sudo yum install openstack-utils

Install neutron-server and plugin

To install and configure the Neutron server and
plugin

Install the server and relevant plugin.

The client is installed as a dependency for the
OpenStack Networking service. Each plugin has its own
package, named openstack-neutron-<plugin>. A
complete list of the supported plugins can be seen at:
https://fedoraproject.org/wiki/Neutron#Neutron_Plugins.
The following examples use the Open vSwitch
plugin:
$ sudo yum install openstack-neutron
$ sudo yum install openstack-neutron-openvswitch

Most plugins require that you install a database and
configure it in a plugin configuration file. The Fedora
packaging for OpenStack Networking includes server-setup
utility scripts that will take care of this. For
example:
$ sudo neutron-server-setup --plugin openvswitch

Enable and start the service:
$ sudo chkconfig neutron-server on
$ sudo service neutron-server start

Install neutron-plugin-*-agent

Some plugins utilize an agent that is run on any node that
handles data packets. This includes any node running
nova-compute, as
well as nodes running dedicated OpenStack Networking agents
like neutron-dhcp-agent and
neutron-l3-agent (see below). If
your plugin uses an agent, this section describes how to run
the agent for this plugin, as well as the basic configuration
options.

Open vSwitch Agent

To install and configure the Open vSwitch
agent

Install the OVS agent:
$ sudo yum install openstack-neutron-openvswitch

Run the agent setup script:
$ sudo neutron-node-setup --plugin openvswitch

All hosts running
neutron-plugin-openvswitch-agent
require the OVS bridge named "br-int". To create the
bridge, run:
$ sudo ovs-vsctl add-br br-int

Enable and start the agent:
$ sudo chkconfig neutron-openvswitch-agent on
$ sudo service neutron-openvswitch-agent start
$ sudo chkconfig openvswitch on
$ sudo service openvswitch start

Enable the OVS cleanup utility:
$ sudo chkconfig neutron-ovs-cleanup on

Install DHCP Agent

To install and configure the DHCP agent

The DHCP agent is part of the
openstack-neutron package;
install the package using:
$ sudo yum install openstack-neutron

Run the agent setup script:
$ sudo neutron-dhcp-setup --plugin openvswitch

Enable and start the agent:
$ sudo chkconfig neutron-dhcp-agent on
$ sudo service neutron-dhcp-agent start

Install L3 Agent

To install and configure the L3 agent

Create a bridge "br-ex" that will be used to uplink
this node running
neutron-l3-agent to the
external network, then attach the NIC connected to the
external network to this bridge. For example, with Open
vSwitch and NIC eth1 connected to the external network,
run:
$ sudo ovs-vsctl add-br br-ex
$ sudo ovs-vsctl add-port br-ex eth1

The node running neutron-l3-agent should not have an
IP address manually configured on the NIC connected to
the external network. Rather, you must have a range of
IP addresses from the external network that can be used
by OpenStack Networking for routers that uplink to the
external network. This range must be large enough to
have an IP address for each router in the deployment, as
well as each floating IP.

The L3 agent is part of the
openstack-neutron package;
install the package using:
$ sudo yum install openstack-neutron

Run the agent setup script:
$ sudo neutron-l3-setup --plugin openvswitch

Enable and start the agent:
$ sudo chkconfig neutron-l3-agent on
$ sudo service neutron-l3-agent start

Enable and start the metadata agent:
$ sudo chkconfig neutron-metadata-agent on
$ sudo service neutron-metadata-agent start

Install OpenStack Networking CLI client

Install the OpenStack Networking CLI client:
$ sudo yum install python-neutronclient

Initialization and File Locations

You can start and stop services by using the
service command. For example:
$ sudo service neutron-server stop
$ sudo service neutron-server status
$ sudo service neutron-server start
$ sudo service neutron-server restart

Log files are in the
/var/log/neutron directory.

Configuration files are in the
/etc/neutron directory.

Set up for deployment use cases

This section describes how to configure the OpenStack
Networking service and its components for some typical use
cases.

OpenStack Networking Deployment Use Cases
The following common use cases for OpenStack Networking are
not exhaustive, but can be combined to create more complex use cases.
Use Case: Single Flat Network

In the simplest use case, a single OpenStack Networking network is created. This is a
"shared" network, meaning it is visible to all tenants via the OpenStack Networking
API. Tenant VMs have a single NIC, and receive
a fixed IP address from the subnet(s) associated with that network.
This use case essentially maps to the FlatManager
and FlatDHCPManager models provided by OpenStack Compute. Floating IPs are not
supported.

This network type is often created by the OpenStack administrator
to map directly to an existing physical network in the data center (called a
"provider network"). This allows the provider to use a physical
router on that data center network as the gateway for VMs to reach
the outside world. For each subnet on an external network, the gateway
configuration on the physical router must be manually configured
outside of OpenStack.
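As a concrete illustration, an administrator might create such a shared network and subnet with the OpenStack Networking CLI roughly as follows. The provider attributes and the physical network name (physnet1 here) depend on your plugin and its configuration, and the names and addresses are placeholders:
$ neutron net-create --shared sharednet1 --provider:network_type flat --provider:physical_network physnet1
$ neutron subnet-create --name sharedsubnet1 --gateway 30.0.0.1 sharednet1 30.0.0.0/24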
Use Case: Multiple Flat Network

This use case is similar to the above Single Flat Network use case,
except that tenants can see multiple shared networks via the OpenStack Networking API
and can choose which network (or networks) to plug into.
Use Case: Mixed Flat and Private Network
This use case is an extension of the above Flat Network use cases.
In addition to being able to see one or more shared networks via
the OpenStack Networking API, tenants can also have access to private per-tenant
networks (only visible to tenant users).
Created VMs can have NICs on any of the shared networks and/or any of the private networks
belonging to the tenant. This enables the creation of "multi-tier"
topologies using VMs with multiple NICs. It also supports a model where
a VM acting as a gateway can provide services such as routing, NAT, or
load balancing.
Use Case: Provider Router with Private Networks
This use case provides each tenant with one or more private networks, which
connect to the outside world via an OpenStack Networking router.
When each tenant gets exactly one network, this architecture maps to the same
logical topology as the VlanManager in OpenStack Compute (although of course, OpenStack Networking doesn't
require VLANs). Using the OpenStack Networking API, the tenant can only see a
network for each private network assigned to that tenant. The router
object in the API is created and owned by the cloud administrator.
This model supports giving VMs public addresses using
"floating IPs", in which the router maps public addresses from the
external network to fixed IPs on private networks. Hosts without floating
IPs can still create outbound connections to the external network, because
the provider router performs SNAT to the router's external IP. The
IP address of the physical router is used as the gateway_ip of the
external network subnet, so the provider has a default router for
Internet traffic.
The router provides L3 connectivity between private networks, meaning
that different tenants can reach each other's instances unless additional
filtering is used (for example, security groups). Because there is only a single
router, tenant networks cannot use overlapping IPs. Thus, it is likely
that the administrator would create the private networks on behalf of the tenants.
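A sketch of how an administrator might build this topology with the CLI is shown below; the network names, CIDR values, and the external network's details are placeholders and will vary with your plugin and physical setup:
$ neutron router-create provider-router
$ neutron net-create ext-net --router:external=True
$ neutron subnet-create --disable-dhcp --gateway 172.24.4.1 ext-net 172.24.4.0/24
$ neutron router-gateway-set provider-router ext-net
$ neutron net-create --tenant-id <tenant-id> tenant-net1
$ neutron subnet-create --tenant-id <tenant-id> --name tenant-subnet1 tenant-net1 10.0.0.0/24
$ neutron router-interface-add provider-router tenant-subnet1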
Use Case: Per-tenant Routers with Private Networks
This use case represents a more advanced router scenario in which each tenant gets
at least one router, and potentially has access to the OpenStack Networking API to
create additional routers. The tenant can create their own networks,
potentially uplinking those networks to a router. This model enables
tenant-defined, multi-tier applications, with
each tier being a separate network behind the router. Because there are
multiple routers, tenant subnets can overlap without conflicting,
since access to external networks all happens via SNAT or floating IPs.
Each router uplink and floating IP is allocated from the external network
subnet.
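For example, a tenant with access to the API could create and uplink its own router roughly as follows, assuming the administrator has already created a shared external network named ext-net; all names and addresses here are placeholders:
$ neutron router-create tenant-router
$ neutron router-gateway-set tenant-router ext-net
$ neutron net-create tenant-net1
$ neutron subnet-create --name tenant-subnet1 tenant-net1 10.10.10.0/24
$ neutron router-interface-add tenant-router tenant-subnet1
$ neutron floatingip-create ext-net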