Networking in OpenStack

OpenStack Networking provides a rich tenant-facing API for defining network connectivity and addressing in the cloud, and gives operators the ability to leverage different networking technologies to power their cloud networking. It is a virtual network service that provides a powerful API to define the network connectivity and addressing used by devices from other services, such as OpenStack Compute. The API is built around the following components:

Network: An isolated L2 segment, analogous to a VLAN in the physical networking world.

Subnet: A block of v4 or v6 IP addresses and associated configuration state.

Port: A connection point for attaching a single device, such as the NIC of a virtual server, to a virtual network. A port also describes the associated network configuration, such as the MAC and IP addresses to be used on that port.

You can configure rich network topologies by creating and configuring networks and subnets, and then instructing other OpenStack services like OpenStack Compute to attach virtual devices to ports on these networks (the CLI sketch later in this section walks through these steps). In particular, OpenStack Networking supports each tenant having multiple private networks, and allows tenants to choose their own IP addressing scheme, even if those IP addresses overlap with those used by other tenants. This enables advanced cloud networking use cases, such as building multi-tiered web applications and migrating applications to the cloud without changing IP addresses.

Plugin Architecture: Flexibility to Choose Different Network Technologies

Enhancing traditional networking solutions to provide rich cloud networking is challenging. Traditional networking is not designed to scale to cloud proportions or to be configured automatically. The original OpenStack Compute network implementation assumed a very basic model of performing all isolation through Linux VLANs and IP tables. OpenStack Networking introduces the concept of a plugin, which is a pluggable back-end implementation of the OpenStack Networking API. A plugin can use a variety of technologies to implement the logical API requests. Some OpenStack Networking plugins might use basic Linux VLANs and IP tables, while others might use more advanced technologies, such as L2-in-L3 tunneling or OpenFlow, to provide similar benefits. The current set of plugins includes:

Open vSwitch: Documentation included in this guide.
Cisco: Documented externally at http://wiki.openstack.org/cisco-quantum
Linux Bridge: Documentation included in this guide and at http://wiki.openstack.org/Quantum-Linux-Bridge-Plugin
Nicira NVP: Documentation included in this guide, the NVP Product Overview, and NVP Product Support.
Ryu: https://github.com/osrg/ryu/wiki/OpenStack
NEC OpenFlow: http://wiki.openstack.org/Quantum-NEC-OpenFlow-Plugin
Big Switch, Floodlight REST Proxy: http://www.openflowhub.org/display/floodlightcontroller/Quantum+REST+Proxy+Plugin
PLUMgrid: https://wiki.openstack.org/wiki/Plumgrid-quantum
Hyper-V Plugin
Brocade Plugin
Midonet Plugin

Plugins can have different properties in terms of hardware requirements, features, performance, scale, and operator tools. Supporting many plugins enables the cloud administrator to weigh different options and decide which networking technology is right for the deployment.
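The plugin that backs the API is selected in the OpenStack Networking server configuration. As a minimal sketch, a configuration for the Open vSwitch plugin typically contains a core_plugin setting similar to the following; the file path and class name shown here are assumptions and can differ between releases and distributions.

    # /etc/quantum/quantum.conf (illustrative path; option and class names vary by release)
    [DEFAULT]
    # Back the Networking API with the Open vSwitch plugin
    core_plugin = quantum.plugins.openvswitch.ovs_quantum_plugin.OVSQuantumPluginV2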
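To illustrate how the network, subnet, and port abstractions are used together, the following sketch builds a simple tenant network with the quantum command-line client and boots a Compute instance attached to it. The names (net1, subnet1, vm1), the flavor, and the image and network IDs are placeholders, and the exact client options can vary between releases.

    # Create an isolated L2 network and an IPv4 subnet on it
    $ quantum net-create net1
    $ quantum subnet-create net1 10.0.0.0/24 --name subnet1

    # Optionally create a port on the network explicitly; Compute can also
    # create the port automatically when the instance boots
    $ quantum port-create net1

    # Boot a VM whose virtual NIC is plugged into a port on net1
    $ nova boot --image <image-id> --flavor m1.tiny --nic net-id=<net1-id> vm1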
Components of OpenStack Networking

To deploy OpenStack Networking, it is useful to understand the different components that make up the solution and how those components interact with each other and with other OpenStack services.

OpenStack Networking is a standalone service, just like other OpenStack services such as OpenStack Compute, OpenStack Image service, OpenStack Identity service, and the OpenStack Dashboard. Like those services, a deployment of OpenStack Networking often involves deploying several processes on a variety of hosts.

The main process of the OpenStack Networking server is quantum-server, a Python daemon that exposes the OpenStack Networking API and passes user requests to the configured OpenStack Networking plugin for additional processing. Typically, the plugin requires access to a database for persistent storage, similar to other OpenStack services.

If your deployment uses a controller host to run centralized OpenStack Compute components, you can deploy the OpenStack Networking server on that same host. However, OpenStack Networking is entirely standalone and can be deployed on its own server as well.

OpenStack Networking also includes additional agents that might be required depending on your deployment:

plugin agent (quantum-*-agent): Runs on each hypervisor to perform local vswitch configuration. The agent to run depends on which plugin you are using, as some plugins do not require an agent.

dhcp agent (quantum-dhcp-agent): Provides DHCP services to tenant networks. This agent is the same across all plugins.

l3 agent (quantum-l3-agent): Provides L3/NAT forwarding to give VMs on tenant networks access to external networks. This agent is the same across all plugins.

These agents interact with the main quantum-server process in two ways: through RPC (for example, RabbitMQ or Qpid) and through the standard OpenStack Networking API.

OpenStack Networking relies on the OpenStack Identity project (Keystone) for authentication and authorization of all API requests.

OpenStack Compute interacts with OpenStack Networking through calls to its standard API. As part of creating a VM, nova-compute communicates with the OpenStack Networking API to plug each virtual NIC on the VM into a particular network (see the configuration sketch at the end of this section).

The OpenStack Dashboard (Horizon) integrates with the OpenStack Networking API, allowing administrators and tenant users to create and manage network services through the Horizon GUI.

Place Services on Physical Hosts

Like other OpenStack services, OpenStack Networking provides cloud administrators with significant flexibility in deciding which individual services should run on which physical devices. At one extreme, all service daemons can run on a single physical host for evaluation purposes. At the other, each service can have its own physical host, and in some cases services can be replicated across multiple hosts for redundancy.

In this guide, we focus primarily on a standard architecture that includes a "cloud controller" host, a "network gateway" host, and a set of hypervisors for running VMs. The "cloud controller" and "network gateway" can be combined in simple deployments. However, if you expect VMs to send significant amounts of traffic to or from the Internet, a dedicated network gateway host is suggested to avoid potential CPU contention between the packet forwarding performed by quantum-l3-agent and other OpenStack services.
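As a sketch of the Compute integration mentioned above, the following nova.conf fragment shows the kind of settings that point nova-compute at the OpenStack Networking API and supply its Keystone credentials. The host names, tenant, user, and password are placeholders, and the exact option names can differ between OpenStack releases.

    # /etc/nova/nova.conf (illustrative fragment; option names vary by release)
    network_api_class=nova.network.quantumv2.api.API
    quantum_url=http://controller:9696
    quantum_auth_strategy=keystone
    quantum_admin_tenant_name=service
    quantum_admin_username=quantum
    quantum_admin_password=QUANTUM_PASS
    quantum_admin_auth_url=http://controller:35357/v2.0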
Network Connectivity for Physical Hosts

Network Diagram
A standard OpenStack Networking setup has up to four distinct physical data center networks:

Management network: Used for internal communication between OpenStack components. The IP addresses on this network should be reachable only within the data center.

Data network: Used for VM data communication within the cloud deployment. The IP addressing requirements of this network depend on the OpenStack Networking plugin in use.

External network: Used to provide VMs with Internet access in some deployment scenarios. The IP addresses on this network should be reachable by anyone on the Internet.

API network: Exposes all OpenStack APIs, including the OpenStack Networking API, to tenants. The IP addresses on this network should be reachable by anyone on the Internet. The API network may be the same as the external network, because it is possible to create a subnet for the external network whose allocation ranges use only a portion of the IP addresses in the IP block (see the sketch after this list).
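For example, assuming the external network is handled by the L3 agent, the following sketch creates an external network and a subnet whose allocation pool hands only part of the IP block to OpenStack Networking. The network name, CIDR, gateway, and pool boundaries are placeholders, and the exact client options can vary between releases.

    # Create the external network and mark it as external
    $ quantum net-create ext-net --router:external=True

    # Create its subnet, exposing only part of the 203.0.113.0/24 block
    # to OpenStack Networking via the allocation pool
    $ quantum subnet-create ext-net 203.0.113.0/24 --name ext-subnet \
        --gateway 203.0.113.1 \
        --allocation-pool start=203.0.113.100,end=203.0.113.150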