Networking Learn Networking concepts, architecture, and basic and advanced neutron and nova command-line interface (CLI) commands so that you can administer Networking in a cloud.
Introduction to Networking The Networking service, code-named Neutron, provides an API for defining network connectivity and addressing in the cloud. The Networking service enables operators to leverage different networking technologies to power their cloud networking. The Networking service also provides an API to configure and manage a variety of network services ranging from L3 forwarding and NAT to load balancing, edge firewalls, and IPSEC VPN. For a detailed description of the Networking API abstractions and their attributes, see the OpenStack Networking API v2.0 Reference.
Networking API Networking is a virtual network service that provides a powerful API to define the network connectivity and IP addressing used by devices from other services, such as Compute. The Compute API has a virtual server abstraction to describe computing resources. Similarly, the Networking API has virtual network, subnet, and port abstractions to describe networking resources. In more detail: Network. An isolated L2 segment, analogous to a VLAN in the physical networking world. Subnet. A block of v4 or v6 IP addresses and associated configuration state. Port. A connection point for attaching a single device, such as the NIC of a virtual server, to a virtual network. Also describes the associated network configuration, such as the MAC and IP addresses to be used on that port. You can configure rich network topologies by creating and configuring networks and subnets, and then instructing other OpenStack services like Compute to attach virtual devices to ports on these networks. In particular, Networking supports each tenant having multiple private networks, and allows tenants to choose their own IP addressing scheme (even if those IP addresses overlap with those used by other tenants). The Networking service: Enables advanced cloud networking use cases, such as building multi-tiered web applications and allowing applications to be migrated to the cloud without changing IP addresses. Offers flexibility for the cloud administrator to customize network offerings. Enables developers to extend the Networking API. Over time, the extended functionality becomes part of the core Networking API.
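For example, a tenant can combine these abstractions entirely through the CLI. The following minimal sketch assumes the neutron and nova clients are installed and authenticated, and uses placeholder names (private-net, img, flavor, vm1) chosen only for illustration:
$ neutron net-create private-net
$ neutron subnet-create private-net 192.168.10.0/24
$ nova boot --image img --flavor flavor --nic net-id=private-net-id vm1
The first two commands create an isolated L2 network and attach an IPv4 subnet to it; the nova boot command asks Networking to create a port on private-net for the VM NIC. Because tenants choose their own addressing, another tenant can create its own 192.168.10.0/24 subnet without any conflict.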
Plug-in architecture The original Compute network implementation assumed a basic model of isolation through Linux VLANs and IP tables. Networking introduces the concept of a plug-in, which is a back-end implementation of the Networking API. A plug-in can use a variety of technologies to implement the logical API requests. Some Networking plug-ins might use basic Linux VLANs and IP tables, while others might use more advanced technologies, such as L2-in-L3 tunneling or OpenFlow, to provide similar benefits. Networking includes the following plug-ins: Big Switch Plug-in (Floodlight REST Proxy). http://www.openflowhub.org/display/floodlightcontroller/Neutron+REST+Proxy+Plugin Brocade Plug-in. https://github.com/brocade/brocade Cisco. http://wiki.openstack.org/cisco-neutron Cloudbase Hyper-V Plug-in. http://www.cloudbase.it/quantum-hyper-v-plugin/ Linux Bridge Plug-in. Documentation included in this guide and at http://wiki.openstack.org/Neutron-Linux-Bridge-Plugin Mellanox Plug-in. https://wiki.openstack.org/wiki/Mellanox-Neutron/ Midonet Plug-in. http://www.midokura.com/ ML2 (Modular Layer 2) Plug-in. https://wiki.openstack.org/wiki/Neutron/ML2 NEC OpenFlow Plug-in. http://wiki.openstack.org/Quantum-NEC-OpenFlow-Plugin Nicira NVP Plug-in. Documentation is included in this guide, the NVP Product Overview, and NVP Product Support. Open vSwitch Plug-in. Documentation included in this guide. PLUMgrid. https://wiki.openstack.org/wiki/PLUMgrid-Neutron Ryu Plug-in. https://github.com/osrg/ryu/wiki/OpenStack Plug-ins can have different properties for hardware requirements, features, performance, scale, or operator tools. Because Networking supports a large number of plug-ins, the cloud administrator can weigh options to decide on the right networking technology for the deployment. In the Havana release, OpenStack Networking provides the Modular Layer 2 (ML2) plug-in, which can concurrently use multiple layer 2 networking technologies that are found in real-world data centers. It currently works with the existing Open vSwitch, Linux Bridge, and Hyper-V L2 agents. The ML2 framework simplifies the addition of support for new L2 technologies and reduces the effort that is required to add and maintain them compared to monolithic plug-ins. (A brief configuration sketch appears after the compatibility table below.) Plug-in deprecation notice: The Open vSwitch and Linux Bridge plug-ins are deprecated in the Havana release and will be removed in the Icehouse release. All features have been ported to the ML2 plug-in in the form of mechanism drivers. ML2 currently provides Linux Bridge, Open vSwitch, and Hyper-V mechanism drivers. Not all Networking plug-ins are compatible with all possible Compute drivers:
Plug-in compatibility with Compute drivers
Each plug-in is listed with the Compute drivers it supports, out of Libvirt (KVM/QEMU), XenServer, VMware, Hyper-V, Bare-metal, and PowerVM:
Bigswitch / Floodlight: Libvirt (KVM/QEMU)
Brocade: Libvirt (KVM/QEMU)
Cisco: Libvirt (KVM/QEMU)
Cloudbase Hyper-V: Hyper-V
Linux Bridge: Libvirt (KVM/QEMU)
Mellanox: Libvirt (KVM/QEMU)
Midonet: Libvirt (KVM/QEMU)
ML2: Libvirt (KVM/QEMU), Hyper-V
NEC OpenFlow: Libvirt (KVM/QEMU)
Nicira NVP: Libvirt (KVM/QEMU), XenServer, VMware
Open vSwitch: Libvirt (KVM/QEMU)
Plumgrid: Libvirt (KVM/QEMU), VMware
Ryu: Libvirt (KVM/QEMU)
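As a rough sketch of how the ML2 plug-in is typically selected and configured (the file paths, driver lists, and VLAN range below are illustrative assumptions; see the OpenStack Configuration Reference for authoritative options):
# /etc/neutron/neutron.conf
[DEFAULT]
core_plugin = neutron.plugins.ml2.plugin.Ml2Plugin
# /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
type_drivers = flat,vlan,gre
tenant_network_types = vlan
mechanism_drivers = openvswitch,linuxbridge
[ml2_type_vlan]
network_vlan_ranges = physnet1:1000:2999
Mechanism drivers replace the corresponding monolithic plug-ins, so the same agents (for example, the Open vSwitch or Linux Bridge agents) continue to run on the hypervisors.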
Networking architecture Before you deploy Networking, it helps to understand the Networking components and how these components interact with each other and with other OpenStack services.
Overview Networking is a standalone service, just like other OpenStack services such as Compute, Image service, Identity service, or the Dashboard. Like those services, a deployment of Networking often involves deploying several processes on a variety of hosts. The Networking server uses the neutron-server daemon to expose the Networking API and to pass user requests to the configured Networking plug-in for additional processing. Typically, the plug-in requires access to a database for persistent storage (also similar to other OpenStack services). If your deployment uses a controller host to run centralized Compute components, you can deploy the Networking server on that same host. However, Networking is entirely standalone and can be deployed on its own host as well. Depending on your deployment, Networking can also include the following agents: plug-in agent (neutron-*-agent). Runs on each hypervisor to perform local vswitch configuration. The agent that runs depends on the plug-in that you use, and some plug-ins do not require an agent. dhcp agent (neutron-dhcp-agent). Provides DHCP services to tenant networks. Some plug-ins use this agent. l3 agent (neutron-l3-agent). Provides L3/NAT forwarding to provide external network access for VMs on tenant networks. Some plug-ins use this agent. These agents interact with the main neutron process through RPC (for example, rabbitmq or qpid) or through the standard Networking API. Further: Networking relies on the Identity service (Keystone) for the authentication and authorization of all API requests. Compute (Nova) interacts with Networking through calls to its standard API.  As part of creating a VM, the nova-compute service communicates with the Networking API to plug each virtual NIC on the VM into a particular network.  The Dashboard (Horizon) integrates with the Networking API, enabling administrators and tenant users to create and manage network services through a web-based GUI.
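Once the server and agents are running, you can confirm which agents have registered with the Networking server; this assumes a plug-in, such as ML2, Open vSwitch, or Linux Bridge, that uses agents:
$ neutron agent-list
The output lists each plug-in, DHCP, and L3 agent together with the host it runs on and its alive status.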
Place services on physical hosts Like other OpenStack services, Networking enables cloud administrators to run one or more services on one or more physical devices. At one extreme, the cloud administrator can run all service daemons on a single physical host for evaluation purposes. Alternatively the cloud administrator can run each service on its own physical host and, in some cases, can replicate services across multiple hosts for redundancy. For more information, see the OpenStack Configuration Reference. A standard architecture includes a cloud controller host, a network gateway host, and a set of hypervisors that run virtual machines. The cloud controller and network gateway can be on the same host. However, if you expect VMs to send significant traffic to or from the Internet, a dedicated network gateway host helps avoid CPU contention between the neutron-l3-agent and other OpenStack services that forward packets.
Network connectivity for physical hosts A standard Networking setup has one or more of the following distinct physical data center networks: Management network. Provides internal communication between OpenStack components. IP addresses on this network should be reachable only within the data center. Data network. Provides VM data communication within the cloud deployment. The IP addressing requirements of this network depend on the Networking plug-in that is used. External network. Provides VMs with Internet access in some deployment scenarios. Anyone on the Internet can reach IP addresses on this network. API network. Exposes all OpenStack APIs, including the Networking API, to tenants. IP addresses on this network should be reachable by anyone on the Internet. The API network might be the same as the external network, because it is possible to create a subnet for the external network whose allocated IP ranges use less than the full range of IP addresses in the IP block.
Use Networking You can use Networking in the following ways: Expose the Networking API to cloud tenants, which enables them to build rich network topologies. Have the cloud administrator, or an automated administrative tool, create network connectivity on behalf of tenants. Both tenants and cloud administrators can perform the procedures that follow.
Core Networking API features After you install and run Networking, tenants and administrators can perform create-read-update-delete (CRUD) API networking operations by using the Networking API directly or the neutron command-line interface (CLI). The neutron CLI is a wrapper around the Networking API. Every Networking API call has a corresponding neutron command. The CLI includes a number of options. For details, refer to the OpenStack End User Guide.
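Because every API call has a corresponding command, the CLI's built-in help is a convenient quick reference:
$ neutron help
$ neutron help net-create
The first command lists all available subcommands; the second shows the options that a specific subcommand accepts.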
API abstractions The Networking v2.0 API provides control over both L2 network topologies and the IP addresses used on those networks (IP Address Management, or IPAM). There is also an extension to cover basic L3 forwarding and NAT, which provides capabilities similar to nova-network. In the Networking API: Network. An isolated L2 network segment (similar to a VLAN) that forms the basis for describing the L2 network topology available in a Networking deployment. Subnet. Associates a block of IP addresses and other network configuration, such as default gateways or dns-servers, with a Networking network. Each subnet represents an IPv4 or IPv6 address block and, if needed, each Networking network can have multiple subnets. Port. Represents an attachment port to a L2 Networking network. When a port is created on the network, by default it is allocated an available fixed IP address out of one of the designated subnets for each IP version (if one exists). When the port is destroyed, its allocated addresses return to the pool of available IPs on the subnet. Users of the Networking API can either choose a specific IP address from the block, or let Networking choose the first available IP address; a short CLI illustration of this behavior follows the attribute tables below. The following table summarizes the attributes available for each networking abstraction. For information about API abstraction and operations, see the Networking API v2.0 Reference.
Network attributes
Attribute Type Default value Description
admin_state_up bool True Administrative state of the network. If specified as False (down), this network does not forward packets.
id uuid-str Generated UUID for this network.
name string None Human-readable name for this network; is not required to be unique.
shared bool False Specifies whether this network resource can be accessed by any tenant. The default policy setting restricts usage of this attribute to administrative users only.
status string N/A Indicates whether this network is currently operational.
subnets list(uuid-str) Empty list List of subnets associated with this network.
tenant_id uuid-str N/A Tenant owner of the network. Only administrative users can set the tenant identifier; this cannot be changed using authorization policies.
Subnet Attributes
Attribute Type Default Value Description
allocation_pools list(dict) Every address in cidr, excluding gateway_ip (if configured). List of cidr sub-ranges that are available for dynamic allocation to ports. Syntax: [ { "start":"10.0.0.2", "end": "10.0.0.254"} ]
cidr string N/A IP range for this subnet, based on the IP version.
dns_nameservers list(string) Empty list List of DNS name servers used by hosts in this subnet.
enable_dhcp bool True Specifies whether DHCP is enabled for this subnet.
gateway_ip string First address in cidr Default gateway used by devices in this subnet.
host_routes list(dict) Empty list Routes that should be used by devices with IPs from this subnet (not including local subnet route).
id uuid-string Generated UUID representing this subnet.
ip_version int 4 IP version.
name string None Human-readable name for this subnet (might not be unique).
network_id uuid-string N/A Network with which this subnet is associated.
tenant_id uuid-string N/A Owner of network. Only administrative users can set the tenant identifier; this cannot be changed using authorization policies.
Port attributes
Attribute Type Default Value Description
admin_state_up bool True Administrative state of this port. If specified as False (down), this port does not forward packets.
device_id string None Identifies the device using this port (for example, a virtual server's ID).
device_owner string None Identifies the entity using this port (for example, a dhcp agent).
fixed_ips list(dict) Automatically allocated from pool Specifies IP addresses for this port; associates the port with the subnets containing the listed IP addresses.
id uuid-string Generated UUID for this port.
mac_address string Generated MAC address to use on this port.
name string None Human-readable name for this port (might not be unique).
network_id uuid-string N/A Network with which this port is associated.
status string N/A Indicates whether this port is currently operational.
tenant_id uuid-string N/A Owner of the port. Only administrative users can set the tenant identifier; this cannot be changed using authorization policies.
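As a short illustration of the address allocation behavior described above (net1 and port-id are placeholders), creating a port without specifying an address lets Networking pick a fixed IP from the subnet allocation pools:
$ neutron port-create net1
$ neutron port-show port-id
The fixed_ips field of the new port shows the subnet and address that were allocated; when the port is deleted, the address returns to the pool.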
Basic Networking operations To learn about advanced capabilities that are available through the neutron command-line interface (CLI), read the networking section in the OpenStack End User Guide. The following table shows example neutron commands that enable you to complete basic Networking operations:
Basic Networking operations
Operation Command
Creates a network. $ neutron net-create net1
Creates a subnet that is associated with net1. $ neutron subnet-create net1 10.0.0.0/24
Lists ports for a specified tenant. $ neutron port-list
Lists ports for a specified tenant and displays the id, fixed_ips, and device_owner columns. $ neutron port-list -c id -c fixed_ips -c device_owner
Shows information for a specified port. $ neutron port-show port-id
The device_owner field describes who owns the port. A port whose device_owner begins with network is created by Networking; a port whose device_owner begins with compute is created by Compute.
Administrative operations The cloud administrator can run any neutron command on behalf of tenants by specifying an Identity tenant ID in the command, as follows: $ neutron net-create --tenant-id=tenant-id network-name For example: $ neutron net-create --tenant-id=5e4bbe24b67a4410bc4d9fae29ec394e net1 To view all tenant IDs in Identity, run the following command as an Identity Service admin user: $ keystone tenant-list
Advanced Networking operations The following table shows example neutron commands that enable you to complete advanced Networking operations:
Advanced Networking operations
Operation Command
Creates a network that all tenants can use. $ neutron net-create --shared public-net
Creates a subnet with a specified gateway IP address. $ neutron subnet-create --gateway 10.0.0.254 net1 10.0.0.0/24
Creates a subnet that has no gateway IP address. $ neutron subnet-create --no-gateway net1 10.0.0.0/24
Creates a subnet with DHCP disabled. $ neutron subnet-create net1 10.0.0.0/24 --enable_dhcp False
Creates a subnet with a specified set of host routes. $ neutron subnet-create test-net1 40.0.0.0/24 --host_routes type=dict list=true destination=40.0.1.0/24,nexthop=40.0.0.2
Creates a subnet with a specified set of DNS name servers. $ neutron subnet-create test-net1 40.0.0.0/24 --dns_nameservers list=true 8.8.8.7 8.8.8.8
Displays all ports and IPs allocated on a network. $ neutron port-list --network_id net-id
Use Compute with Networking
Basic Compute and Networking operations The following table shows example neutron and nova commands that enable you to complete basic Compute and Networking operations:
Basic Compute/Networking operations
Action Command
Checks available networks. $ neutron net-list
Boots a VM with a single NIC on a selected Networking network. $ nova boot --image img --flavor flavor --nic net-id=net-id vm-name
Searches for ports with a device_id that matches the Compute instance UUID. See the note that follows this table. $ neutron port-list --device_id=vm-id
Searches for ports, but shows only the mac_address for the port. $ neutron port-list --field mac_address --device_id=vm-id
Temporarily disables a port from sending traffic. $ neutron port-update port-id --admin_state_up=False
The device_id can also be a logical router ID. VM creation and deletion When you boot a Compute VM, a port on the network that corresponds to the VM NIC is automatically created and associated with the default security group. You can configure security group rules to enable users to access the VM. When you delete a Compute VM, the underlying Networking port is automatically deleted.
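The following sketch shows this lifecycle (img, flavor, net-id, vm-id, and vm1 are placeholders): the port appears when the VM is booted and disappears when the VM is deleted.
$ nova boot --image img --flavor flavor --nic net-id=net-id vm1
$ neutron port-list --device_id=vm-id
$ nova delete vm1
$ neutron port-list --device_id=vm-id
The second port-list returns no rows once the VM, and therefore its port, has been deleted.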
Advanced VM creation operations The following table shows example nova and neutron commands that enable you to complete advanced VM creation operations:
Advanced VM creation operations
Operation Command
Boots a VM with multiple NICs. $ nova boot --image img --flavor flavor --nic net-id=net1-id --nic net-id=net2-id vm-name
Boots a VM with a specific IP address. First, create a Networking port with a specific IP address. Then, boot a VM specifying a port-id rather than a net-id. $ neutron port-create --fixed-ip subnet_id=subnet-id,ip_address=IP net-id $ nova boot --image img --flavor flavor --nic port-id=port-id vm-name
Boots a VM that connects to all networks that are accessible to the tenant who submits the request (without the --nic option). $ nova boot --image img --flavor flavor vm-name
Networking does not currently support the v4-fixed-ip parameter of the --nic option for the nova command.
Security groups (enabling ping and SSH on VMs) You must configure security group rules depending on the type of plug-in you are using. If you are using a plug-in that: Implements Networking security groups, you can configure security group rules directly by using neutron security-group-rule-create. The following example allows ping and ssh access to your VMs. $ neutron security-group-rule-create --protocol icmp \ --direction ingress default $ neutron security-group-rule-create --protocol tcp --port-range-min 22 \ --port-range-max 22 --direction ingress default Does not implement Networking security groups, you can configure security group rules by using the nova secgroup-add-rule or euca-authorize command. The following nova commands allow ping and ssh access to your VMs. $ nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0 $ nova secgroup-add-rule default tcp 22 22 0.0.0.0/0 If your plug-in implements Networking security groups, you can also leverage Compute security groups by setting security_group_api = neutron in the nova.conf file. After you set this option, all Compute security group commands are proxied to Networking.
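For example, to proxy Compute security group calls to Networking as described above, a minimal nova.conf sketch might look like the following; the firewall_driver line is an additional, commonly used assumption so that rules are enforced by Networking rather than by the Compute firewall:
[DEFAULT]
security_group_api = neutron
firewall_driver = nova.virt.firewall.NoopFirewallDriver
Restart the nova-compute and nova-api services after changing these options.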
Authentication and authorization Networking uses the Identity Service as the default authentication service. When the Identity Service is enabled, users who submit requests to the Networking service must provide an authentication token in the X-Auth-Token request header. Users obtain this token by authenticating with the Identity Service endpoint. For more information about authentication with the Identity Service, see the OpenStack Identity Service API v2.0 Reference. When the Identity Service is enabled, it is not mandatory to specify the tenant ID for resources in create requests because the tenant ID is derived from the authentication token. The default authorization settings only allow administrative users to create resources on behalf of a different tenant. Networking uses information received from Identity to authorize user requests. Networking handles two kinds of authorization policies: Operation-based policies specify access criteria for specific operations, possibly with fine-grained control over specific attributes. Resource-based policies specify whether access to a specific resource is granted or not, according to the permissions configured for the resource (currently available only for the network resource). The actual authorization policies enforced in Networking might vary from deployment to deployment. The policy engine reads entries from the policy.json file. The actual location of this file might vary from distribution to distribution. Entries can be updated while the system is running, and no service restart is required. Every time the policy file is updated, the policies are automatically reloaded. Currently the only way of updating such policies is to edit the policy file. In this section, the terms policy and rule refer to objects that are specified in the same way in the policy file. There are no syntax differences between a rule and a policy. A policy is something that is matched directly by the Networking policy engine. A rule is an element in a policy, which is evaluated. For instance, in create_subnet: [["admin_or_network_owner"]], create_subnet is a policy and admin_or_network_owner is a rule. Policies are triggered by the Networking policy engine whenever one of them matches a Networking API operation or a specific attribute being used in a given operation. For instance, the create_subnet policy is triggered every time a POST /v2.0/subnets request is sent to the Networking server; on the other hand, create_network:shared is triggered every time the shared attribute is explicitly specified (and set to a value different from its default) in a POST /v2.0/networks request. Policies can also be related to specific API extensions; for instance, extension:provider_network:set is triggered if the attributes defined by the Provider Network extension are specified in an API request. An authorization policy can be composed of one or more rules. If more rules are specified, evaluation of the policy succeeds if any of the rules evaluates successfully; if an API operation matches multiple policies, then all the policies must evaluate successfully. Authorization rules are also recursive: once a rule is matched, it can resolve to another rule, until a terminal rule is reached. The Networking policy engine currently defines the following kinds of terminal rules: Role-based rules evaluate successfully if the user who submits the request has the specified role.
For instance "role:admin" is successful if the user who submits the request is an administrator. Field-based rules evaluate successfully if a field of the resource specified in the current request matches a specific value. For instance "field:networks:shared=True" is successful if the shared attribute of the network resource is set to true. Generic rules compare an attribute in the resource with an attribute extracted from the user's security credentials and evaluates successfully if the comparison is successful. For instance "tenant_id:%(tenant_id)s" is successful if the tenant identifier in the resource is equal to the tenant identifier of the user submitting the request. The following is an extract from the default policy.json file: { [1] "admin_or_owner": [["role:admin"], ["tenant_id:%(tenant_id)s"]], "admin_or_network_owner": [["role:admin"], ["tenant_id:%(network_tenant_id)s"]], "admin_only": [["role:admin"]], "regular_user": [], "shared": [["field:networks:shared=True"]], [2] "default": [["rule:admin_or_owner"]], "create_subnet": [["rule:admin_or_network_owner"]], "get_subnet": [["rule:admin_or_owner"], ["rule:shared"]], "update_subnet": [["rule:admin_or_network_owner"]], "delete_subnet": [["rule:admin_or_network_owner"]], "create_network": [], [3] "get_network": [["rule:admin_or_owner"], ["rule:shared"]], [4] "create_network:shared": [["rule:admin_only"]], "update_network": [["rule:admin_or_owner"]], "delete_network": [["rule:admin_or_owner"]], "create_port": [], [5] "create_port:mac_address": [["rule:admin_or_network_owner"]], "create_port:fixed_ips": [["rule:admin_or_network_owner"]], "get_port": [["rule:admin_or_owner"]], "update_port": [["rule:admin_or_owner"]], "delete_port": [["rule:admin_or_owner"]] } [1] is a rule which evaluates successfully if the current user is an administrator or the owner of the resource specified in the request (tenant identifier is equal). [2] is the default policy which is always evaluated if an API operation does not match any of the policies in policy.json. [3] This policy evaluates successfully if either admin_or_owner, or shared evaluates successfully. [4] This policy restricts the ability to manipulate the shared attribute for a network to administrators only. [5] This policy restricts the ability to manipulate the mac_address attribute for a port only to administrators and the owner of the network where the port is attached. In some cases, some operations should be restricted to administrators only. The following example shows you how to modify a policy file to permit tenants to define networks and see their resources and permit administrative users to perform all other operations: { "admin_or_owner": [["role:admin"], ["tenant_id:%(tenant_id)s"]], "admin_only": [["role:admin"]], "regular_user": [], "default": [["rule:admin_only"]], "create_subnet": [["rule:admin_only"]], "get_subnet": [["rule:admin_or_owner"]], "update_subnet": [["rule:admin_only"]], "delete_subnet": [["rule:admin_only"]], "create_network": [], "get_network": [["rule:admin_or_owner"]], "create_network:shared": [["rule:admin_only"]], "update_network": [["rule:admin_or_owner"]], "delete_network": [["rule:admin_or_owner"]], "create_port": [["rule:admin_only"]], "get_port": [["rule:admin_or_owner"]], "update_port": [["rule:admin_only"]], "delete_port": [["rule:admin_only"]] }
High Availability Using high availability in a Networking deployment helps limit the impact of individual node failures. In general, you can run neutron-server and neutron-dhcp-agent in an active-active fashion. You can run the neutron-l3-agent service as active/passive, which avoids IP conflicts with respect to gateway IP addresses.
Networking High Availability with Pacemaker You can run some Networking services in a cluster (active/passive, or active/active for the Networking server only) with Pacemaker. Download the latest resource agents: neutron-server: https://github.com/madkiss/openstack-resource-agents neutron-dhcp-agent: https://github.com/madkiss/openstack-resource-agents neutron-l3-agent: https://github.com/madkiss/openstack-resource-agents For information about how to build a cluster, see the Pacemaker documentation.
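A minimal sketch, assuming the resource agents above are installed under the ocf:openstack provider and that the crm shell is used, might add the Networking server as a cluster resource like this (the monitor interval and timeout are illustrative; the agent may also require parameters, such as credentials used for monitoring):
$ crm configure primitive p_neutron-server ocf:openstack:neutron-server \
      op monitor interval="30s" timeout="30s"
Similar primitives can be defined for neutron-dhcp-agent and neutron-l3-agent, with the l3 agent typically constrained to run on one node at a time.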
Plug-in pagination and sorting support
Plug-ins that support native pagination and sorting
Plug-in Support Native Pagination Support Native Sorting
ML2 True True
Open vSwitch True True
Linux Bridge True True