Overview

This chapter describes the high-level concepts and components of an OpenStack Networking deployment.
What is OpenStack Networking?

The OpenStack Networking project was created to provide a rich API for defining network connectivity and addressing in the cloud. The OpenStack Networking project gives operators the ability to leverage different networking technologies to power their cloud networking.

For a detailed description of the OpenStack Networking API abstractions and their attributes, see the OpenStack Networking API Guide (v2.0).
OpenStack Networking API: Rich Control over Network Functionality

OpenStack Networking is a virtual network service that provides a powerful API to define the network connectivity and addressing used by devices from other services, such as OpenStack Compute.

The OpenStack Compute API has a virtual server abstraction to describe computing resources. Similarly, the OpenStack Networking API has virtual network, subnet, and port abstractions to describe networking resources. In more detail:

Network. An isolated L2 segment, analogous to a VLAN in the physical networking world.
Subnet. A block of v4 or v6 IP addresses and associated configuration state.
Port. A connection point for attaching a single device, such as the NIC of a virtual server, to a virtual network. Also describes the associated network configuration, such as the MAC and IP addresses to be used on that port.

You can configure rich network topologies by creating and configuring networks and subnets, and then instructing other OpenStack services like OpenStack Compute to attach virtual devices to ports on these networks. In particular, OpenStack Networking supports each tenant having multiple private networks, and allows tenants to choose their own IP addressing scheme (even if those IP addresses overlap with those used by other tenants). The OpenStack Networking service:

Enables advanced cloud networking use cases, such as building multi-tiered web applications and allowing applications to be migrated to the cloud without changing IP addresses.
Offers flexibility for the cloud administrator to customize network offerings.
Provides a mechanism that lets cloud administrators expose additional API capabilities through API extensions. Commonly, new capabilities are first introduced as an API extension, and over time become part of the core OpenStack Networking API.
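As a rough illustration, the following minimal sketch creates each of these three abstractions through the Networking API using the python-neutronclient library. The credentials, controller URL, network name, and CIDR are placeholder assumptions for the example only, not values prescribed by this guide.

```python
# Minimal sketch, assuming python-neutronclient is installed and a keystone
# endpoint is reachable at the placeholder URL below.
from neutronclient.v2_0 import client

neutron = client.Client(username='demo',
                        password='secret',
                        tenant_name='demo',
                        auth_url='http://controller:5000/v2.0/')

# Network: an isolated L2 segment.
net = neutron.create_network({'network': {'name': 'private-net'}})
net_id = net['network']['id']

# Subnet: a block of IP addresses and associated configuration state.
neutron.create_subnet({'subnet': {'network_id': net_id,
                                  'ip_version': 4,
                                  'cidr': '10.0.0.0/24'}})

# Port: a connection point for attaching a single device to the network;
# it carries the MAC and fixed IP addresses used on that attachment.
port = neutron.create_port({'port': {'network_id': net_id}})
print(port['port']['mac_address'], port['port']['fixed_ips'])
```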
Plugin Architecture: Flexibility to Choose Different Network Technologies

Enhancing traditional networking solutions to provide rich cloud networking is challenging. Traditional networking is not designed to scale to cloud proportions or to handle automatic configuration. The original OpenStack Compute network implementation assumed a very basic model of performing all isolation through Linux VLANs and iptables. OpenStack Networking introduces the concept of a plugin, which is a back-end implementation of the OpenStack Networking API. A plugin can use a variety of technologies to implement the logical API requests. Some OpenStack Networking plugins might use basic Linux VLANs and iptables, while others might use more advanced technologies, such as L2-in-L3 tunneling or OpenFlow, to provide similar benefits. The following plugins are currently included in the OpenStack Networking distribution:

Big Switch Plugin (Floodlight REST Proxy). http://www.openflowhub.org/display/floodlightcontroller/Quantum+REST+Proxy+Plugin
Brocade Plugin. https://github.com/brocade/brocade
Cisco Plugin. http://wiki.openstack.org/cisco-neutron
Cloudbase Hyper-V Plugin. http://www.cloudbase.it/quantum-hyper-v-plugin/
Linux Bridge Plugin. Documentation included in this guide and at http://wiki.openstack.org/Neutron-Linux-Bridge-Plugin
Mellanox Plugin. https://wiki.openstack.org/wiki/Mellanox-Neutron/
Midonet Plugin. http://www.midokura.com/
NEC OpenFlow Plugin. http://wiki.openstack.org/Quantum-NEC-OpenFlow-Plugin
Nicira NVP Plugin. Documentation included in this guide, the NVP Product Overview, and NVP Product Support.
Open vSwitch Plugin. Documentation included in this guide.
PLUMgrid Plugin. https://wiki.openstack.org/wiki/PLUMgrid-Neutron
Ryu Plugin. https://github.com/osrg/ryu/wiki/OpenStack

Plugins can have different properties for hardware requirements, features, performance, scale, or operator tools. Because OpenStack Networking supports a large number of plugins, the cloud administrator is able to weigh different options and decide which networking technology is right for the deployment. Not all OpenStack Networking plugins are compatible with all possible OpenStack Compute drivers:
Plugin Compatibility with OpenStack Compute Drivers
Compute drivers considered: Libvirt (KVM/QEMU), XenServer, VMware, Hyper-V, Bare-metal, PowerVM.

Big Switch / Floodlight: Libvirt (KVM/QEMU)
Brocade: Libvirt (KVM/QEMU)
Cisco: Libvirt (KVM/QEMU)
Cloudbase Hyper-V: Hyper-V
Linux Bridge: Libvirt (KVM/QEMU)
Mellanox: Libvirt (KVM/QEMU)
Midonet: Libvirt (KVM/QEMU)
NEC OpenFlow: Libvirt (KVM/QEMU)
Nicira NVP: Libvirt (KVM/QEMU), XenServer, VMware
Open vSwitch: Libvirt (KVM/QEMU)
PLUMgrid: Libvirt (KVM/QEMU), VMware
Ryu: Libvirt (KVM/QEMU)

No plugin in this list indicates support for the Bare-metal or PowerVM drivers.
OpenStack Networking Architecture

This section describes the high-level components of an OpenStack Networking deployment. Before you deploy OpenStack Networking, it is useful to understand the different components that make up the solution, and how these components interact with each other and with other OpenStack services.
Overview

OpenStack Networking is a standalone service, just like other OpenStack services such as OpenStack Compute, OpenStack Image service, OpenStack Identity service, or the OpenStack Dashboard. Like those services, a deployment of OpenStack Networking often involves deploying several processes on a variety of hosts.

The main process of the OpenStack Networking server is neutron-server, a Python daemon that exposes the OpenStack Networking API and passes user requests to the configured OpenStack Networking plugin for additional processing. Typically, the plugin requires access to a database for persistent storage (also similar to other OpenStack services). If your deployment uses a controller host to run centralized OpenStack Compute components, you can deploy the OpenStack Networking server on that same host. However, OpenStack Networking is entirely standalone and can be deployed on its own host as well. OpenStack Networking also includes additional agents that might be required, depending on your deployment:

plugin agent (neutron-*-agent). Runs on each hypervisor to perform local vswitch configuration. The agent that runs depends on which plugin you are using, because some plugins do not require an agent.
dhcp agent (neutron-dhcp-agent). Provides DHCP services to tenant networks. This agent is the same for all plugins.
l3 agent (neutron-l3-agent). Provides L3/NAT forwarding to give VMs on tenant networks access to external networks. This agent is the same for all plugins.

These agents interact with the main neutron-server process through RPC (for example, rabbitmq or qpid) or through the standard OpenStack Networking API. Further:

OpenStack Networking relies on the OpenStack Identity service (keystone) for the authentication and authorization of all API requests.
OpenStack Compute (nova) interacts with OpenStack Networking through calls to its standard API. As part of creating a VM, the nova-compute service communicates with the OpenStack Networking API to plug each virtual NIC on the VM into a particular network.
The OpenStack Dashboard (horizon) integrates with the OpenStack Networking API, allowing administrators and tenant users to create and manage network services through the Dashboard GUI.
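As a rough sketch of how a client of the service authenticates through keystone and then talks to neutron-server, the example below uses python-keystoneclient and python-neutronclient. The endpoint URLs and credentials are assumptions made for the example only.

```python
# Minimal sketch: obtain a token from the Identity service, then call the
# Networking API that neutron-server exposes (placeholder URLs/credentials).
from keystoneclient.v2_0 import client as ks_client
from neutronclient.v2_0 import client as neutron_client

keystone = ks_client.Client(username='admin',
                            password='secret',
                            tenant_name='admin',
                            auth_url='http://controller:5000/v2.0/')

# Every Networking API request carries a token issued by keystone;
# neutron-server validates it and hands the request to the configured plugin.
neutron = neutron_client.Client(endpoint_url='http://controller:9696/',
                                token=keystone.auth_token)

for net in neutron.list_networks()['networks']:
    print(net['id'], net['name'])
```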
Place Services on Physical Hosts

Like other OpenStack services, OpenStack Networking provides cloud administrators with significant flexibility in deciding which individual services should run on which physical devices. At one extreme, all service daemons can be run on a single physical host for evaluation purposes. At the other extreme, each service can have its own physical host and, in some cases, be replicated across multiple hosts for redundancy. For more information, see the chapter on high availability.

In this guide, we focus primarily on a standard architecture that includes a "cloud controller" host, a "network gateway" host, and a set of hypervisors for running VMs. The "cloud controller" and "network gateway" can be combined in simple deployments. However, if you expect VMs to send significant amounts of traffic to or from the Internet, a dedicated network gateway host is recommended to avoid potential CPU contention between packet forwarding performed by the neutron-l3-agent and other OpenStack services.
Network Connectivity for Physical Hosts

A standard OpenStack Networking setup has up to four distinct physical data center networks:

Management network. Used for internal communication between OpenStack components. IP addresses on this network should be reachable only within the data center.
Data network. Used for VM data communication within the cloud deployment. The IP addressing requirements of this network depend on the OpenStack Networking plugin being used.
External network. Used to provide VMs with Internet access in some deployment scenarios. IP addresses on this network should be reachable by anyone on the Internet.
API network. Exposes all OpenStack APIs, including the OpenStack Networking API, to tenants. IP addresses on this network should be reachable by anyone on the Internet. The API network may be the same as the external network, because it is possible to create an external-network subnet whose allocation pool uses only part of the addresses in the IP block.
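The following sketch illustrates that last point: an external-network subnet whose allocation pool covers only part of the IP block, leaving the remaining addresses for the physical router and other hosts. The network name, CIDR, and pool boundaries are placeholder assumptions, and the client object is constructed as in the earlier examples.

```python
# Minimal sketch, assuming admin credentials (placeholders) and the
# external-network API extension provided by the deployed plugin.
from neutronclient.v2_0 import client

neutron = client.Client(username='admin', password='secret',
                        tenant_name='admin',
                        auth_url='http://controller:5000/v2.0/')

ext = neutron.create_network({'network': {'name': 'ext-net',
                                          'router:external': True}})

neutron.create_subnet({'subnet': {
    'network_id': ext['network']['id'],
    'ip_version': 4,
    'cidr': '203.0.113.0/24',
    'enable_dhcp': False,
    # Only part of the block is handed out by OpenStack Networking; the
    # rest of 203.0.113.0/24 remains available outside the cloud.
    'allocation_pools': [{'start': '203.0.113.101',
                          'end': '203.0.113.200'}],
}})
```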
OpenStack Networking Deployment Use Cases

The following common use cases for OpenStack Networking are not exhaustive, but can be combined to create more complex use cases.
Use Case: Single Flat Network

In the simplest use case, a single OpenStack Networking network is created. This is a "shared" network, meaning it is visible to all tenants via the OpenStack Networking API. Tenant VMs have a single NIC, and receive a fixed IP address from the subnet(s) associated with that network. This use case essentially maps to the FlatManager and FlatDHCPManager models provided by OpenStack Compute. Floating IPs are not supported.

This network type is often created by the OpenStack administrator to map directly to an existing physical network in the data center (called a "provider network"). This allows the provider to use a physical router on that data center network as the gateway for VMs to reach the outside world. For each subnet on an external network, the gateway configuration on the physical router must be manually configured outside of OpenStack.
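A minimal sketch of the administrator-side steps for this use case is shown below: a single shared network whose subnet points at the existing physical router as its gateway. The credentials, CIDR, and gateway address are placeholder assumptions; mapping the network onto a specific physical segment typically also involves plugin-specific provider attributes not shown here.

```python
# Minimal sketch, assuming admin credentials (placeholders).
from neutronclient.v2_0 import client

neutron = client.Client(username='admin', password='secret',
                        tenant_name='admin',
                        auth_url='http://controller:5000/v2.0/')

# A single network, shared with all tenants.
net = neutron.create_network({'network': {'name': 'shared-net',
                                          'shared': True}})

# The gateway is the existing physical router on the data center network,
# which is configured outside of OpenStack.
neutron.create_subnet({'subnet': {'network_id': net['network']['id'],
                                  'ip_version': 4,
                                  'cidr': '192.168.100.0/24',
                                  'gateway_ip': '192.168.100.1'}})
```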
Use Case: Multiple Flat Networks

This use case is similar to the Single Flat Network use case above, except that tenants can see multiple shared networks via the OpenStack Networking API and can choose which network (or networks) to plug into.
Use Case: Mixed Flat and Private Network

This use case is an extension of the above Flat Network use cases. In addition to being able to see one or more shared networks via the OpenStack Networking API, tenants can also have access to private per-tenant networks (only visible to tenant users). Created VMs can have NICs on any of the shared networks and/or any of the private networks belonging to the tenant. This enables the creation of "multi-tier" topologies using VMs with multiple NICs. It also supports a model where a VM acting as a gateway can provide services such as routing, NAT, or load balancing.
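The sketch below illustrates the tenant-side part of this use case: a private tenant network is created alongside an existing shared network, and one port is created on each. The shared network UUID, credentials, and CIDR are placeholder assumptions; handing both port IDs to OpenStack Compute at boot time would give the VM a NIC on each network.

```python
# Minimal sketch, assuming tenant credentials and an existing shared
# network whose UUID is known (all placeholders).
from neutronclient.v2_0 import client

neutron = client.Client(username='demo', password='secret',
                        tenant_name='demo',
                        auth_url='http://controller:5000/v2.0/')

shared_net_id = 'SHARED_NET_UUID'  # placeholder for the shared network

# A private, per-tenant network only visible to this tenant.
priv = neutron.create_network({'network': {'name': 'tenant-private'}})
neutron.create_subnet({'subnet': {'network_id': priv['network']['id'],
                                  'ip_version': 4,
                                  'cidr': '10.10.0.0/24'}})

# One port on each network; OpenStack Compute plugs a virtual NIC into
# each port when the VM is created.
port_shared = neutron.create_port({'port': {'network_id': shared_net_id}})
port_private = neutron.create_port(
    {'port': {'network_id': priv['network']['id']}})
```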
Use Case: Provider Router with Private Networks

This use case provides each tenant with one or more private networks, which connect to the outside world via an OpenStack Networking router. When each tenant gets exactly one network, this architecture maps to the same logical topology as the VlanManager in OpenStack Compute (although of course, OpenStack Networking doesn't require VLANs). Using the OpenStack Networking API, the tenant sees only a network for each private network assigned to that tenant. The router object in the API is created and owned by the cloud administrator.

This model supports giving VMs public addresses using "floating IPs", in which the router maps public addresses from the external network to fixed IPs on private networks. Hosts without floating IPs can still create outbound connections to the external network, because the provider router performs SNAT to the router's external IP. The IP address of the physical router is used as the gateway_ip of the external network subnet, so the provider has a default router for Internet traffic. The router provides L3 connectivity between private networks, meaning that different tenants can reach each other's instances unless additional filtering is used (for example, security groups). Because there is only a single router, tenant networks cannot use overlapping IPs. Thus, it is likely that the administrator would create the private networks on behalf of the tenants.
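A minimal sketch of the administrator-side steps for this topology follows: create the single provider router, uplink it to the external network, attach a tenant subnet, and map a floating IP to a VM port. The UUIDs and credentials are placeholder assumptions.

```python
# Minimal sketch, assuming admin credentials and pre-existing external
# network, tenant subnet, and VM port UUIDs (all placeholders).
from neutronclient.v2_0 import client

neutron = client.Client(username='admin', password='secret',
                        tenant_name='admin',
                        auth_url='http://controller:5000/v2.0/')

ext_net_id = 'EXTERNAL_NET_UUID'
tenant_subnet_id = 'TENANT_SUBNET_UUID'
vm_port_id = 'VM_PORT_UUID'

# The single provider router, created and owned by the administrator.
router = neutron.create_router({'router': {'name': 'provider-router'}})
router_id = router['router']['id']

# Uplink the router to the external network; hosts without floating IPs
# are SNATed to the router's external address for outbound traffic.
neutron.add_gateway_router(router_id, {'network_id': ext_net_id})

# Attach a tenant's private subnet to the router.
neutron.add_interface_router(router_id, {'subnet_id': tenant_subnet_id})

# Map a public address from the external network to the VM's fixed IP.
neutron.create_floatingip({'floatingip': {'floating_network_id': ext_net_id,
                                          'port_id': vm_port_id}})
```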
Use Case: Per-tenant Routers with Private Networks

This use case represents a more advanced router scenario in which each tenant gets at least one router, and potentially has access to the OpenStack Networking API to create additional routers. The tenant can create their own networks, potentially uplinking those networks to a router. This model enables tenant-defined, multi-tier applications, with each tier being a separate network behind the router. Because there are multiple routers, tenant subnets can overlap without conflicting, since access to external networks happens via SNAT or floating IPs. Each router uplink and floating IP is allocated from the external network subnet.
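For contrast with the provider-router example above, the sketch below shows the same kind of steps performed by a tenant: the tenant creates its own router, uplinks it to the external network, and puts each application tier on its own network behind that router. The credentials, external network UUID, names, and CIDRs are placeholder assumptions.

```python
# Minimal sketch of the tenant-side workflow (placeholder values throughout).
from neutronclient.v2_0 import client

neutron = client.Client(username='demo', password='secret',
                        tenant_name='demo',
                        auth_url='http://controller:5000/v2.0/')

ext_net_id = 'EXTERNAL_NET_UUID'

# The tenant's own router, uplinked to the external network.
router = neutron.create_router({'router': {'name': 'tenant-router'}})
router_id = router['router']['id']
neutron.add_gateway_router(router_id, {'network_id': ext_net_id})

# Each application tier is a separate tenant network behind the router.
for name, cidr in [('web', '10.1.0.0/24'), ('db', '10.2.0.0/24')]:
    net = neutron.create_network({'network': {'name': name}})
    subnet = neutron.create_subnet({'subnet': {
        'network_id': net['network']['id'],
        'ip_version': 4,
        'cidr': cidr}})
    neutron.add_interface_router(router_id,
                                 {'subnet_id': subnet['subnet']['id']})
```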