Networking Learn OpenStack Networking concepts, architecture, and basic and advanced neutron and nova command-line interface (CLI) commands so that you can administer OpenStack Networking in a cloud.
Introduction to Networking The OpenStack Networking service, code-named neutron, provides an API for defining network connectivity and addressing in the cloud. The OpenStack Networking service enables operators to leverage different networking technologies to power their cloud networking. It also provides an API to configure and manage a variety of network services, ranging from L3 forwarding and NAT to load balancing, edge firewalls, and IPsec VPN. For a detailed description of the OpenStack Networking API abstractions and their attributes, see the OpenStack Networking API v2.0 Reference.
Networking API Networking is a virtual network service that provides a powerful API to define the network connectivity and IP addressing used by devices from other services, such as OpenStack Compute. The Compute API has a virtual server abstraction to describe computing resources. Similarly, the OpenStack Networking API has virtual network, subnet, and port abstractions to describe networking resources. In more detail: Network. An isolated L2 segment, analogous to a VLAN in the physical networking world. Subnet. A block of v4 or v6 IP addresses and associated configuration state. Port. A connection point for attaching a single device, such as the NIC of a virtual server, to a virtual network. Also describes the associated network configuration, such as the MAC and IP addresses to be used on that port. You can configure rich network topologies by creating and configuring networks and subnets, and then instructing other OpenStack services like OpenStack Compute to attach virtual devices to ports on these networks. In particular, OpenStack Networking supports each tenant having multiple private networks, and allows tenants to choose their own IP addressing scheme (even if those IP addresses overlap with those used by other tenants). The OpenStack Networking service: Enables advanced cloud networking use cases, such as building multi-tiered web applications and allowing applications to be migrated to the cloud without changing IP addresses. Offers flexibility for the cloud administrator to customize network offerings. Provides a mechanism that lets cloud administrators expose additional API capabilities through API extensions. At first, new functionality is introduced as an API extension. Over time, the functionality becomes part of the core OpenStack Networking API.
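For example, the following CLI sequence, a minimal sketch that uses placeholder names, creates each of the three abstractions in turn: a network, a subnet on that network, and a port attached to the network:
$ neutron net-create demo-net
$ neutron subnet-create demo-net 192.168.10.0/24 --name demo-subnet
$ neutron port-create demo-net --name demo-port
The port automatically receives a MAC address and an IP address from demo-subnet; attaching ports to instances is covered in the Compute examples later in this chapter.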
Plug-in architecture Enhancing traditional networking solutions to provide rich cloud networking is challenging. Traditional networking is not designed to scale to cloud proportions nor to handle automatic configuration. The original OpenStack Compute network implementation assumed a very basic model of performing all isolation through Linux VLANs and iptables. OpenStack Networking introduces the concept of a plug-in, which is a back-end implementation of the OpenStack Networking API. A plug-in can use a variety of technologies to implement the logical API requests. Some OpenStack Networking plug-ins might use basic Linux VLANs and iptables, while others might use more advanced technologies, such as L2-in-L3 tunneling or OpenFlow, to provide similar benefits. OpenStack Networking includes the following plug-ins: Big Switch Plug-in (Floodlight REST Proxy). http://www.openflowhub.org/display/floodlightcontroller/Neutron+REST+Proxy+Plugin Brocade Plug-in. https://github.com/brocade/brocade Cisco. http://wiki.openstack.org/cisco-neutron Cloudbase Hyper-V Plug-in. http://www.cloudbase.it/quantum-hyper-v-plugin/ Linux Bridge Plug-in. Documentation included in this guide at http://wiki.openstack.org/Neutron-Linux-Bridge-Plugin Mellanox Plug-in. https://wiki.openstack.org/wiki/Mellanox-Neutron/ Midonet Plug-in. http://www.midokura.com/ NEC OpenFlow Plug-in. http://wiki.openstack.org/Quantum-NEC-OpenFlow-Plugin Nicira NVP Plug-in. Documentation is included in this guide, NVP Product Overview, and NVP Product Support. Open vSwitch Plug-in. Documentation included in this guide. PLUMgrid. https://wiki.openstack.org/wiki/PLUMgrid-Neutron Ryu Plug-in. https://github.com/osrg/ryu/wiki/OpenStack Plug-ins can have different properties for hardware requirements, features, performance, scale, or operator tools. Because OpenStack Networking supports a large number of plug-ins, the cloud administrator can weigh the options and decide which networking technology is right for the deployment. Not all OpenStack Networking plug-ins are compatible with all possible OpenStack Compute drivers:
Plug-in Compatibility with OpenStack Compute Drivers
Libvirt (KVM/QEMU) XenServer VMware Hyper-V Bare-metal PowerVM
Bigswitch / Floodlight Yes
Brocade Yes
Cisco Yes
Cloudbase Hyper-V Yes
Linux Bridge Yes
Mellanox Yes
Midonet Yes
NEC OpenFlow Yes
Nicira NVP Yes Yes Yes
Open vSwitch Yes
Plumgrid Yes Yes
Ryu Yes
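Whichever plug-in you choose, it is selected through the core_plugin option in neutron.conf. The following snippet is an illustrative sketch only, assuming the Open vSwitch plug-in; the exact Python class path depends on the plug-in and release, so consult your plug-in's documentation and the OpenStack Configuration Reference:
[DEFAULT]
# Illustrative value for the Open vSwitch plug-in; other plug-ins use their own class paths.
core_plugin = neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2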
Networking architecture Before you deploy Networking, it helps to understand the Networking components and how these components interact with each other and with other OpenStack services.
Overview OpenStack Networking is a standalone service, just like other OpenStack services such as OpenStack Compute, OpenStack Image service, OpenStack Identity service, or the OpenStack Dashboard. Like those services, a deployment of OpenStack Networking often involves deploying several processes on a variety of hosts. The main process of the OpenStack Networking server is neutron-server, which is a Python daemon that exposes the OpenStack Networking API and passes user requests to the configured OpenStack Networking plug-in for additional processing. Typically, the plug-in requires access to a database for persistent storage (also similar to other OpenStack services). If your deployment uses a controller host to run centralized OpenStack Compute components, you can deploy the OpenStack Networking server on that same host. However, OpenStack Networking is entirely standalone and can be deployed on its own host as well. OpenStack Networking also includes additional agents that might be required, depending on your deployment: plug-in agent (neutron-*-agent). Runs on each hypervisor to perform local vswitch configuration. Which agent runs depends on the plug-in you are using; some plug-ins do not require an agent. dhcp agent (neutron-dhcp-agent). Provides DHCP services to tenant networks. This agent is the same for all plug-ins. l3 agent (neutron-l3-agent). Provides L3/NAT forwarding to provide external network access for VMs on tenant networks. This agent is the same for all plug-ins. These agents interact with the main neutron process through RPC (for example, RabbitMQ or Qpid) or through the standard OpenStack Networking API. Further: Networking relies on the OpenStack Identity service (keystone) for the authentication and authorization of all API requests. Compute (nova) interacts with OpenStack Networking through calls to its standard API. As part of creating a VM, the nova-compute service communicates with the OpenStack Networking API to plug each virtual NIC on the VM into a particular network. The Dashboard (Horizon) integrates with the OpenStack Networking API, allowing administrators and tenant users to create and manage network services through the Dashboard GUI.
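If your plug-in supports the agent management extension, you can confirm which agents are running and reporting to neutron-server from the CLI; a hedged example:
$ neutron agent-list
$ neutron agent-show <agent-id>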
Place services on physical hosts Like other OpenStack services, Networking enables cloud administrators to run one or more services on one or more physical devices. At one extreme, the cloud administrator can run all service daemons on a single physical host for evaluation purposes. Alternatively the cloud administrator can run each service on its own physical host and, in some cases, can replicate services across multiple hosts for redundancy. For more information, see the OpenStack Configuration Reference. A standard architecture includes a cloud controller host, a network gateway host, and a set of hypervisors that run virtual machines. The cloud controller and network gateway can be on the same host. However, if you expect VMs to send significant traffic to or from the Internet, a dedicated network gateway host helps avoid CPU contention between the neutron-l3-agent and other OpenStack services that forward packets.
Network connectivity for physical hosts A standard OpenStack Networking setup has one or more of the following distinct physical data center networks: Management network. Provides internal communication between OpenStack components. IP addresses on this network should be reachable only within the data center. Data network. Provides VM data communication within the cloud deployment. The IP addressing requirements of this network depend on the OpenStack Networking plug-in that is used. External network. Provides VMs with Internet access in some deployment scenarios. IP addresses on this network should be reachable by anyone on the Internet. API network. Exposes all OpenStack APIs, including the OpenStack Networking API, to tenants. IP addresses on this network should be reachable by anyone on the Internet. The API network may be the same as the external network, because it is possible to create an external-network subnet whose allocation pools use only part of the IP block.
Use Networking You can use OpenStack Networking in the following ways: Expose the OpenStack Networking API to cloud tenants, which enables them to build rich network topologies. Have the cloud administrator, or an automated administrative tool, create network connectivity on behalf of tenants. Both tenants and cloud administrators can perform the procedures that follow.
Core Networking API features After you install and run OpenStack Networking, tenants and administrators can perform create-read-update-delete (CRUD) API networking operations by using either the neutron CLI tool or the API. Like other OpenStack CLI tools, the neutron tool is just a basic wrapper around the OpenStack Networking API. Any operation that can be performed using the CLI has an equivalent API call that can be performed programmatically. The CLI includes a number of options. For details, refer to the OpenStack End User Guide.
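For example, the CLI command neutron net-list corresponds to a GET request against the networks collection of the API. The following is a minimal sketch, assuming a valid Identity token in the $OS_TOKEN variable and a Networking endpoint at controller:9696 (both placeholders):
$ neutron net-list
$ curl -s -H "X-Auth-Token: $OS_TOKEN" http://controller:9696/v2.0/networks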
API abstractions The OpenStack Networking v2.0 API provides control over both L2 network topologies and the IP addresses used on those networks (IP Address Management or IPAM). There is also an extension to cover basic L3 forwarding and NAT, which provides capabilities similar to nova-network. In the OpenStack Networking API: A 'Network' is an isolated L2 network segment (similar to a VLAN), which forms the basis for describing the L2 network topology available in an OpenStack Networking deployment. A 'Subnet' associates a block of IP addresses and other network configuration (for example, default gateways or dns-servers) with an OpenStack Networking network. Each subnet represents an IPv4 or IPv6 address block and, if needed, each OpenStack Networking network can have multiple subnets. A 'Port' represents an attachment port to a L2 OpenStack Networking network. When a port is created on the network, by default it is allocated an available fixed IP address out of one of the designated subnets for each IP version (if one exists). When the port is destroyed, its allocated addresses return to the pool of available IPs on the subnet. Users of the OpenStack Networking API can either choose a specific IP address from the block, or let OpenStack Networking choose the first available IP address. The following table summarizes the attributes available for each of the previous networking abstractions. For more information about API abstractions and operations, see the Networking API v2.0 Reference.
Network attributes
Attribute Type Default value Description
admin_state_up bool True Administrative state of the network. If specified as False (down), this network does not forward packets.
id uuid-str Generated UUID for this network.
name string None Human-readable name for this network; is not required to be unique.
shared bool False Specifies whether this network resource can be accessed by any tenant. The default policy setting restricts usage of this attribute to administrative users only.
status string N/A Indicates whether this network is currently operational.
subnets list(uuid-str) Empty list List of subnets associated with this network.
tenant_id uuid-str N/A Tenant owner of the network. Only administrative users can set the tenant identifier; this cannot be changed using authorization policies.
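Mutable attributes in this table, such as admin_state_up and name, can be changed after creation with neutron net-update. The following is a hedged sketch that mirrors the attribute syntax of the port-update example later in this section; net1 and the new name are placeholders:
$ neutron net-update net1 --admin_state_up=False
$ neutron net-update net1 --name=net1-production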
Subnet Attributes
Attribute Type Default Value Description
allocation_pools list(dict) Every address in cidr, excluding gateway_ip (if configured). List of cidr sub-ranges that are available for dynamic allocation to ports. Syntax: [ { "start":"10.0.0.2", "end": "10.0.0.254"} ]
cidr string N/A IP range for this subnet, based on the IP version.
dns_nameservers list(string) Empty list List of DNS name servers used by hosts in this subnet.
enable_dhcp bool True Specifies whether DHCP is enabled for this subnet.
gateway_ip string First address in cidr Default gateway used by devices in this subnet.
host_routes list(dict) Empty list Routes that should be used by devices with IPs from this subnet (not including local subnet route).
id uuid-string Generated UUID representing this subnet.
ip_version int 4 IP version.
name string None Human-readable name for this subnet (might not be unique).
network_id uuid-string N/A Network with which this subnet is associated.
tenant_id uuid-string N/A Owner of network. Only administrative users can set the tenant identifier; this cannot be changed using authorization policies.
Port attributes
Attribute Type Default Value Description
admin_state_up bool true Administrative state of this port. If specified as False (down), this port does not forward packets.
device_id string None Identifies the device using this port (for example, a virtual server's ID).
device_owner string None Identifies the entity using this port (for example, a dhcp agent).
fixed_ips list(dict) Automatically allocated from pool Specifies IP addresses for this port; associates the port with the subnets containing the listed IP addresses.
id uuid-string Generated UUID for this port.
mac_address string Generated Mac address to use on this port.
name string None Human-readable name for this port (might not be unique).
network_id uuid-string N/A Network with which this port is associated.
status string N/A Indicates whether this port is currently operational.
tenant_id uuid-string N/A Owner of the network. Only administrative users can set the tenant identifier; this cannot be changed using authorization policies.
Basic Networking operations To learn about advanced capabilities that are available through the neutron command-line interface (CLI), read the networking section in the OpenStack End User Guide. The following table shows example neutron commands that enable you to complete basic Networking operations:
Basic Networking operations
Operation Command
Creates a network. $ neutron net-create net1
Creates a subnet that is associated with net1. $ neutron subnet-create net1 10.0.0.0/24
Lists ports for a specified tenant. $ neutron port-list
Lists ports for a specified tenant and displays the id, fixed_ips, and device_owner columns. $ neutron port-list -c id -c fixed_ips -c device_owner
Shows information for a specified port. $ neutron port-show port-id
The device_owner field describes who owns the port. A port whose device_owner begins with network: was created by OpenStack Networking; a port whose device_owner begins with compute: was created by OpenStack Compute.
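To list only the ports that OpenStack Networking itself created, such as the DHCP agent's port, you can filter on device_owner using the same field-filter syntax shown elsewhere in this chapter; a hedged example:
$ neutron port-list -- --device_owner=network:dhcp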
Administrative operations The cloud administrator can perform any neutron call on behalf of tenants by specifying an OpenStack Identity tenant_id in the request, as follows: $ neutron net-create --tenant-id=tenant-id network-name For example: $ neutron net-create --tenant-id=5e4bbe24b67a4410bc4d9fae29ec394e net1 To view all tenant IDs in OpenStack Identity, run the following command as an OpenStack Identity (keystone) admin user: $ keystone tenant-list
Advanced Networking operations The following table shows example neutron commands that enable you to complete advanced Networking operations:
Advanced Networking operations
Operation Command
Creates a network that all tenants can use. $ neutron net-create --shared public-net
Creates a subnet with a specified gateway IP address. $ neutron subnet-create --gateway 10.0.0.254 net1 10.0.0.0/24
Creates a subnet that has no gateway IP address. $ neutron subnet-create --no-gateway net1 10.0.0.0/24
Creates a subnet with DHCP disabled. $ neutron subnet-create net1 10.0.0.0/24 --enable_dhcp False
Creates a subnet with a specified set of host routes. $ neutron subnet-create test-net1 40.0.0.0/24 --host_routes type=dict list=true destination=40.0.1.0/24,nexthop=40.0.0.2
Creates a subnet with a specified set of dns name servers. $ neutron subnet-create test-net1 40.0.0.0/24 --dns_nameservers list=true 8.8.8.7 8.8.8.8
Displays all ports and IPs allocated on a network. $ neutron port-list --network_id net-id
Use Compute with Networking
Basic Compute and Networking operations The following table shows example neutron and nova commands that enable you to complete basic Compute and Networking operations:
Basic Compute/Networking operations
Action Command
Checks available networks. $ neutron net-list
Boots a VM with a single NIC on a selected OpenStack Networking network. $ nova boot --image img --flavor flavor --nic net-id=net-id vm-name
Searches for ports with a device_id that matches the OpenStack Compute instance UUID. The device_id can also be a logical router ID. $ neutron port-list --device_id=vm-id
Searches for ports, but shows only the mac_address for the port. $ neutron port-list --field mac_address --device_id=vm-id
Temporarily disables a port from sending traffic. $ neutron port-update port-id --admin_state_up=False
When you boot a Compute VM, a port on the network is automatically created that corresponds to the VM NIC and is automatically associated with the default security group. You can configure security group rules to enable users to access the VM. When you delete a Compute VM, the underlying OpenStack Networking port is automatically deleted.
Advanced VM creation operations The following table shows example nova and neutron commands that enable you to complete advanced VM creation operations:
Advanced VM creation operations
Operation Command
Boots a VM with multiple NICs. $ nova boot --image img --flavor flavor --nic net-id=net1-id --nic net-id=net2-id vm-name
Boots a VM with a specific IP address. First, create an OpenStack Networking port with a specific IP address. Then, boot a VM specifying a port-id rather than a net-id. $ neutron port-create --fixed-ip subnet_id=subnet-id,ip_address=IP net-id $ nova boot --image img --flavor flavor --nic port-id=port-id vm-name
Boots a VM that connects to all networks that are accessible to the tenant who submits the request (without the --nic option). $ nova boot --image img --flavor flavor vm-name
OpenStack Networking does not currently support the v4-fixed-ip parameter of the --nic option for the nova command.
Security groups (enabling ping and SSH on VMs) You must configure security group rules depending on the type of plug-in you are using. If your plug-in implements Networking security groups, you can configure security group rules directly by using neutron security-group-rule-create. The following example allows ping and ssh access to your VMs: $ neutron security-group-rule-create --protocol icmp --direction ingress default $ neutron security-group-rule-create --protocol tcp --port-range-min 22 --port-range-max 22 --direction ingress default If your plug-in does not implement Networking security groups, you can configure security group rules by using the nova secgroup-add-rule or euca-authorize command. The following nova commands allow ping and ssh access to your VMs: $ nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0 $ nova secgroup-add-rule default tcp 22 22 0.0.0.0/0 If your plug-in implements OpenStack Networking security groups, you can also leverage Compute security groups by setting security_group_api = neutron in nova.conf. After you set this option, all Compute security group commands are proxied to OpenStack Networking.
Advanced features through API extensions Several plug-ins implement API extensions that provide capabilities similar to what was available in nova-network. These extensions are likely to be of interest to the OpenStack community.
Provider networks Provider networks allow cloud administrators to create OpenStack Networking networks that map directly to physical networks in the data center.  This is commonly used to give tenants direct access to a public network that can be used to reach the Internet.  It may also be used to integrate with VLANs in the network that already have a defined meaning (for example, allow a VM from the "marketing" department to be placed on the same VLAN as bare-metal marketing hosts in the same data center). The provider extension allows administrators to explicitly manage the relationship between OpenStack Networking virtual networks and underlying physical mechanisms such as VLANs and tunnels. When this extension is supported, OpenStack Networking client users with administrative privileges see additional provider attributes on all virtual networks, and are able to specify these attributes in order to create provider networks. The provider extension is supported by the openvswitch and linuxbridge plug-ins. Configuration of these plug-ins requires familiarity with this extension.
Terminology A number of terms are used in the provider extension and in the configuration of plug-ins supporting the provider extension: virtual network. An OpenStack Networking L2 network (identified by a UUID and optional name) whose ports can be attached as vNICs to OpenStack Compute instances and to various OpenStack Networking agents. The openvswitch and linuxbridge plug-ins each support several different mechanisms to realize virtual networks. physical network. A network connecting virtualization hosts (such as OpenStack Compute nodes) with each other and with other network resources. Each physical network may support multiple virtual networks. The provider extension and the plug-in configurations identify physical networks using simple string names. tenant network. A "normal" virtual network created by/for a tenant. The tenant is not aware of how that network is physically realized. provider network. A virtual network administratively created to map to a specific network in the data center, typically to enable direct access to non-OpenStack resources on that network. Tenants can be given access to provider networks. VLAN network. A virtual network realized as packets on a specific physical network containing IEEE 802.1Q headers with a specific VID field value. VLAN networks sharing the same physical network are isolated from each other at L2, and can even have overlapping IP address spaces. Each distinct physical network supporting VLAN networks is treated as a separate VLAN trunk, with a distinct space of VID values. Valid VID values are 1 through 4094. flat network. A virtual network realized as packets on a specific physical network containing no IEEE 802.1Q header. Each physical network can realize at most one flat network. local network. A virtual network that allows communication within each host, but not across a network. Local networks are intended mainly for single-node test scenarios, but may have other uses. GRE network. A virtual network realized as network packets encapsulated using GRE. GRE networks are also referred to as "tunnels". GRE tunnel packets are routed by the host's IP routing table, so GRE networks are not associated by OpenStack Networking with specific physical networks. Both the openvswitch and linuxbridge plug-ins support VLAN networks, flat networks, and local networks. Only the openvswitch plug-in currently supports GRE networks, provided that the host's Linux kernel supports the required Open vSwitch features.
Provider attributes The provider extension extends the OpenStack Networking network resource with the following three additional attributes:
Provider Network Attributes
Attribute name Type Default Value Description
provider:network_type String N/A The physical mechanism by which the virtual network is realized. Possible values are "flat", "vlan", "local", and "gre", corresponding to flat networks, VLAN networks, local networks, and GRE networks as defined above. All types of provider networks can be created by administrators, while tenant networks can be realized as "vlan", "gre", or "local" network types depending on plug-in configuration.
provider:physical_network String If a physical network named "default" has been configured, and if provider:network_type is "flat" or "vlan", then "default" is used. The name of the physical network over which the virtual network is realized for flat and VLAN networks. Not applicable to the "local" or "gre" network types.
provider:segmentation_id Integer N/A For VLAN networks, the VLAN VID on the physical network that realizes the virtual network. Valid VLAN VIDs are 1 through 4094. For GRE networks, the tunnel ID. Valid tunnel IDs are any 32 bit unsigned integer. Not applicable to the "flat" or "local" network types.
The provider attributes are returned by OpenStack Networking API operations when the client is authorized for the extension:provider_network:view action through the OpenStack Networking policy configuration. The provider attributes are only accepted for network API operations if the client is authorized for the extension:provider_network:set action. The default OpenStack Networking API policy configuration authorizes both actions for users with the admin role. See the Authentication and authorization section later in this chapter for details on policy configuration.
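The following is a sketch of how these two actions might appear in the policy.json file, using the rule syntax shown in the Authentication and authorization section; the exact default entries can vary between releases:
"extension:provider_network:view": [["rule:admin_only"]],
"extension:provider_network:set": [["rule:admin_only"]]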
Provider Extension API operations To use the provider extension with the default policy settings, you must have the administrative role. The following table shows example neutron commands that enable you to complete basic provider extension API operations:
Basic provider extension API operations
Operation Command
Shows all attributes of a network, including provider attributes. $ neutron net-show <name or net-id>
Creates a local provider network. $ neutron net-create <name> --tenant_id <tenant-id> --provider:network_type local
Creates a flat provider network. When you create flat networks, <phys-net-name> must be known to the plug-in. See the OpenStack Configuration Reference for details. $ neutron net-create <name> --tenant_id <tenant-id> --provider:network_type flat --provider:physical_network <phys-net-name>
Creates a VLAN provider network. When you create VLAN networks, <phys-net-name> must be known to the plug-in. See the OpenStack Configuration Reference for details on configuring network_vlan_ranges to identify all physical networks. When you create VLAN networks, <VID> can fall either within or outside any configured ranges of VLAN IDs from which tenant networks are allocated. $ neutron net-create <name> --tenant_id <tenant-id> --provider:network_type vlan --provider:physical_network <phys-net-name> --provider:segmentation_id <VID>
Creates a GRE provider network. When you create GRE networks, <tunnel-id> can be either inside or outside any tunnel ID ranges from which tenant networks are allocated. After you create provider networks, you can allocate subnets, which you can use in the same way as other virtual networks, subject to authorization policy based on the specified <tenant_id>. $ neutron net-create <name> --tenant_id <tenant-id> --provider:network_type gre --provider:segmentation_id <tunnel-id>
L3 Routing and NAT Just like the core OpenStack Networking API provides abstract L2 network segments that are decoupled from the technology used to implement the L2 network, OpenStack Networking includes an API extension that provides abstract L3 routers that API users can dynamically provision and configure. These OpenStack Networking routers can connect multiple L2 OpenStack Networking networks, and can also provide a "gateway" that connects one or more private L2 networks to a shared "external" network (for example, a public network for access to the Internet). See the OpenStack Configuration Reference for details on common models of deploying Networking L3 routers. The L3 router provides basic NAT capabilities on "gateway" ports that uplink the router to external networks. This router SNATs all traffic by default, and supports "Floating IPs", which creates a static one-to-one mapping from a public IP on the external network to a private IP on one of the other subnets attached to the router. This allows a tenant to selectively expose VMs on private networks to other hosts on the external network (and often to all hosts on the Internet). Floating IPs can be allocated and then mapped from one OpenStack Networking port to another, as needed.
L3 API abstractions
Router
Attribute name Type Default Value Description
id uuid-str generated UUID for the router.
name String None Human-readable name for the router. Might not be unique.
admin_state_up Bool True The administrative state of router. If false (down), the router does not forward packets.
status String N/A Indicates whether router is currently operational.
tenant_id uuid-str N/A Owner of the router. Only admin users can specify a tenant_id other than its own.
external_gateway_info dict containing a 'network_id' key-value pair Null External network that this router connects to for gateway services (for example, NAT).
Floating IP
Attribute name Type Default Value Description
id uuid-str generated UUID for the floating IP.
floating_ip_address string (IP address) allocated by OpenStack Networking The external network IP address available to be mapped to an internal IP address.
floating_network_id uuid-str N/A The network indicating the set of subnets from which the floating IP should be allocated
router_id uuid-str N/A Read-only value indicating the router that connects the external network to the associated internal port, if a port is associated.
port_id uuid-str Null Indicates the internal OpenStack Networking port associated with the external floating IP.
fixed_ip_address string (IP address) Null Indicates the IP address on the internal port that is mapped to by the floating IP (since an OpenStack Networking port might have more than one IP address).
tenant_id uuid-str N/A Owner of the Floating IP. Only admin users can specify a tenant_id other than its own.
Basic L3 operations External networks are visible to all users. However, the default policy settings enable only administrative users to create, update, and delete external networks. The following table shows example neutron commands that enable you to complete basic L3 operations:
Basic L3 operations
Operation Command
Creates external networks. $ neutron net-create public --router:external=True $ neutron subnet-create public 172.16.1.0/24
Lists external networks. $ neutron net-list -- --router:external=True
Creates an internal-only router that connects to multiple L2 networks privately. $ neutron net-create net1 $ neutron subnet-create net1 10.0.0.0/24 $ neutron net-create net2 $ neutron subnet-create net2 10.0.1.0/24 $ neutron router-create router1 $ neutron router-interface-add router1 <subnet1-uuid> $ neutron router-interface-add router1 <subnet2-uuid>
Connects a router to an external network, which enables that router to act as a NAT gateway for external connectivity. $ neutron router-gateway-set router1 <ext-net-id> The router obtains an interface with the gateway_ip address of the subnet, and this interface is attached to a port on the L2 OpenStack Networking network associated with the subnet. The router also gets a gateway interface to the specified external network. This provides SNAT connectivity to the external network as well as support for floating IPs allocated on that external network. Commonly, an external network maps to a provider network in the data center.
Lists routers. $ neutron router-list
Shows information for a specified router. $ neutron router-show <router_id>
Shows all internal interfaces for a router. $ neutron router-port-list router1
Identifies the port-id that represents the VM NIC to which the floating IP should map. $ neutron port-list -c id -c fixed_ips -- --device_id=<instance_id> This port must be on an OpenStack Networking subnet that is attached to a router uplinked to the external network used to create the floating IP.  Conceptually, this is because the router must be able to perform the Destination NAT (DNAT) rewriting of packets from the Floating IP address (chosen from a subnet on the external network) to the internal Fixed IP (chosen from a private subnet that is “behind” the router).
Creates a floating IP address and associates it with a port. $ neutron floatingip-create <ext-net-id> $ neutron floatingip-associate <floatingip-id> <internal VM port-id>
Creates a floating IP address and associates it with a port, in a single step. $ neutron floatingip-create --port_id <internal VM port-id> <ext-net-id>
Lists floating IPs. $ neutron floatingip-list
Finds floating IP for a specified VM port. $ neutron floatingip-list -- --port_id=ZZZ
Disassociates a floating IP address. $ neutron floatingip-disassociate <floatingip-id>
Deletes the floating IP address. $ neutron floatingip-delete <floatingip-id>
Clears the gateway. $ neutron router-gateway-clear router1
Removes the interfaces from the router. $ neutron router-interface-delete router1 <subnet-id>
Deletes the router. $ neutron router-delete router1
Security groups Security groups and security group rules allow administrators and tenants to specify the type of traffic and direction (ingress/egress) that is allowed to pass through a port. A security group is a container for security group rules. When a port is created in OpenStack Networking, it is associated with a security group. If a security group is not specified, the port is associated with a 'default' security group. By default, this group drops all ingress traffic and allows all egress. Rules can be added to this group to change this behaviour. To use the OpenStack Compute security group APIs or to have OpenStack Compute orchestrate the creation of ports for instances on specific security groups, you must complete additional configuration. You must edit the /etc/nova/nova.conf file and set the security_group_api=neutron option on every node that runs nova-compute and nova-api. After you make this change, restart nova-api and nova-compute to pick it up. You can then use both the OpenStack Compute and OpenStack Networking security group APIs at the same time. To use the OpenStack Compute security group API with OpenStack Networking, the OpenStack Networking plug-in must implement the security group API. The following plug-ins currently implement this: Nicira NVP, Open vSwitch, Linux Bridge, NEC, and Ryu. You must also configure the correct firewall driver in the securitygroup section of the plug-in/agent configuration file. Some plug-ins and agents, such as the Linux Bridge agent and the Open vSwitch agent, use the no-operation driver as the default, which results in non-working security groups. When you use the security group API through OpenStack Compute, security groups are applied to all ports on an instance. This is because OpenStack Compute security group APIs are instance-based rather than port-based, as in OpenStack Networking.
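The following is a hedged configuration sketch for the settings described above, assuming the Open vSwitch agent; the firewall driver class path may differ for other plug-ins and releases, so check your plug-in's documentation:
# /etc/nova/nova.conf, on every node that runs nova-compute and nova-api
[DEFAULT]
security_group_api = neutron

# [securitygroup] section of the plug-in/agent configuration file (Open vSwitch agent assumed)
[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
Restart nova-api and nova-compute after making these changes, as noted above.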
Security Group API Abstractions
Security Group Attributes
Attribute name Type Default Value Description
id uuid-str generated UUID for the security group.
name String None Human-readable name for the security group. Might not be unique. Cannot be named default as that is automatically created for a tenant.
description String None Human-readable description of a security group.
tenant_id uuid-str N/A Owner of the security group. Only admin users can specify a tenant_id other than their own.
Security Group Rules
Attribute name Type Default Value Description
id uuid-str generated UUID for the security group rule.
security_group_id uuid-str or Integer allocated by OpenStack Networking The security group to associate rule with.
direction String N/A The direction in which traffic is allowed (ingress or egress) relative to a VM.
protocol String None IP Protocol (icmp, tcp, udp, and so on).
port_range_min Integer None Port at start of range
port_range_max Integer None Port at end of range
ethertype String None ethertype in L2 packet (IPv4, IPv6, and so on)
remote_ip_prefix string (IP cidr) None CIDR for address range
remote_group_id uuid-str or Integer allocated by OpenStack Networking or OpenStack Compute Source security group to apply to rule.
tenant_id uuid-str N/A Owner of the security group rule. Only admin users can specify a tenant_id other than its own.
Basic security group operations The following table shows example neutron commands that enable you to complete basic security group operations:
Basic security group operations
Operation Command
Creates a security group for our web servers. $ neutron security-group-create webservers --description "security group for webservers"
Lists security groups. $ neutron security-group-list
Creates a security group rule to allow port 80 ingress. $ neutron security-group-rule-create --direction ingress --protocol tcp --port_range_min 80 --port_range_max 80 <security_group_uuid>
Lists security group rules. $ neutron security-group-rule-list
Deletes a security group rule. $ neutron security-group-rule-delete <security_group_rule_uuid>
Deletes a security group. $ neutron security-group-delete <security_group_uuid>
Creates a port and associates two security groups. $ neutron port-create --security-group <security_group_id1> --security-group <security_group_id2> <network_id>
Removes security groups from a port. $ neutron port-update --no-security-groups <port_id>
Basic Load-Balancer-as-a-Service operations The Load-Balancer-as-a-Service (LBaaS) API provisions and configures load balancers. The Havana release offers a reference implementation that is based on the HAProxy software load balancer. The following table shows example neutron commands that enable you to complete basic LBaaS operations:
Basic LBaaS operations
Operation Command
Creates a load balancer pool by using a specific provider. --provider is an optional argument. If it is not used, the pool is created with the default provider for the LBaaS service. You should configure the default provider in the [service_providers] section of the neutron.conf file. If no default provider is specified for LBaaS, the --provider option is required for pool creation. $ neutron lb-pool-create --lb-method ROUND_ROBIN --name mypool --protocol HTTP --subnet-id <subnet-uuid> --provider <provider_name>
Associates two web servers with pool. $ neutron lb-member-create --address <webserver one IP> --protocol-port 80 mypool $ neutron lb-member-create --address <webserver two IP> --protocol-port 80 mypool
Creates a health monitor which checks to make sure our instances are still running on the specified protocol-port. $ neutron lb-healthmonitor-create --delay 3 --type HTTP --max-retries 3 --timeout 3
Associates a health monitor with pool. $ neutron lb-healthmonitor-associate <healthmonitor-uuid> mypool
Creates a virtual IP (VIP) address that, when accessed through the load balancer, directs the requests to one of the pool members. $ neutron lb-vip-create --name myvip --protocol-port 80 --protocol HTTP --subnet-id <subnet-uuid> mypool
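If the VIP should also be reachable from outside the cloud, you can combine LBaaS with the floating IP commands shown earlier in this chapter. The following is a hedged sketch, assuming the VIP's subnet is attached to a router that has a gateway on the external network:
$ neutron lb-vip-show myvip        # note the port_id of the VIP
$ neutron floatingip-create --port_id <vip-port-id> <ext-net-id>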
Plug-in specific extensions Each vendor may choose to implement additional API extensions to the core API. This section describes the extensions for each plug-in.
Nicira NVP extensions This section describes the API extensions that the Nicira NVP plug-in provides.
Nicira NVP QoS extension The Nicira NVP QoS extension rate-limits network ports to guarantee a specific amount of bandwidth for each port. By default, this extension is accessible only to a tenant with an admin role, but this is configurable through the policy.json file. To use this extension, create a queue and specify the min/max bandwidth rates (kbps), and optionally set the QoS marking and DSCP value (if your network fabric uses these values to make forwarding decisions). Once created, you can associate a queue with a network. Then, when ports are created on that network, they are automatically associated with a queue of the size that was associated with the network. Because one queue size for every port on a network may not be optimal, a scaling factor taken from the nova flavor's 'rxtx_factor' is passed in from OpenStack Compute when the port is created, to scale the queue. Lastly, if you want to set a specific baseline QoS policy for the amount of bandwidth a single port can use (unless a queue is associated with the network the port is created on), you can create a default queue in neutron, which then causes subsequently created ports to be associated with a queue of that size times the rxtx scaling factor. Note that after a network or default queue is specified, queues are added to ports that are subsequently created but are not added to existing ports.
Nicira NVP QoS API abstractions
Nicira NVP QoS Attributes
Attribute name Type Default Value Description
id uuid-str generated UUID for the QoS queue.
default Boolean False by default If True, ports are created with this queue size unless the port's network is associated with a queue or a queue is specified at port creation time.
name String None Name for QoS queue.
min Integer 0 Minimum Bandwidth Rate (kbps).
max Integer N/A Maximum Bandwidth Rate (kbps).
qos_marking String untrusted by default Whether QoS marking should be trusted or untrusted.
dscp Integer 0 DSCP Marking value.
tenant_id uuid-str N/A The owner of the QoS queue.
Common Nicira NVP QoS operations The following table shows example neutron commands that enable you to complete basic queue operations:
Basic Nicira NVP QoS operations
Operation Command
Creates QoS Queue (admin-only). $ neutron queue-create --min 10 --max 1000 myqueue
Associates a queue with a network. $ neutron net-create network --queue_id=<queue_id>
Creates a default system queue. $ neutron queue-create --default True --min 10 --max 2000 default
Lists QoS queues. $ neutron queue-list
Deletes a QoS queue. $ neutron queue-delete <queue_id or name>
Advanced operational features
Logging settings Networking components use the Python logging module for logging. Logging configuration can be provided in neutron.conf or as command-line options; command-line options override the values in neutron.conf. To configure logging for OpenStack Networking components, use one of the following methods: Provide logging settings in a logging configuration file. See the Python Logging HOWTO for the logging configuration file format. Provide logging settings in neutron.conf:
[DEFAULT]
# Default log level is WARNING
# Show debugging output in logs (sets DEBUG log level output)
# debug = False
# Show more verbose log output (sets INFO log level output) if debug is False
# verbose = False
# log_format = %(asctime)s %(levelname)8s [%(name)s] %(message)s
# log_date_format = %Y-%m-%d %H:%M:%S
# use_syslog = False
# syslog_log_facility = LOG_USER
# if use_syslog is False, we can set log_file and log_dir.
# if use_syslog is False and we do not set log_file,
# the log will be printed to stdout.
# log_file =
# log_dir =
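Because command-line options override the values in neutron.conf, you can also enable debugging or redirect the log for a single run of a service. The following is a hedged example; the option names follow the settings listed above and the paths are placeholders:
$ neutron-server --config-file /etc/neutron/neutron.conf --debug --log-file /var/log/neutron/server.log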
Notifications Notifications can be sent when Networking resources, such as networks, subnets, and ports, are created, updated, or deleted.
Notification options To support the DHCP agent, the rpc_notifier driver must be set. To set up the notification, edit the notification options in neutron.conf:
# ============ Notification System Options =====================
# Notifications can be sent when network/subnet/port are create, updated or deleted.
# There are three methods of sending notifications: logging (via the
# log_file directive), rpc (via a message queue) and
# noop (no notifications sent, the default)
# Notification_driver can be defined multiple times
# Do nothing driver
# notification_driver = neutron.openstack.common.notifier.no_op_notifier
# Logging driver
# notification_driver = neutron.openstack.common.notifier.log_notifier
# RPC driver
notification_driver = neutron.openstack.common.notifier.rpc_notifier
# default_notification_level is used to form actual topic names or to set logging level
# default_notification_level = INFO
# default_publisher_id is a part of the notification payload
# host = myhost.com
# default_publisher_id = $host
# Defined in rpc_notifier for rpc way, can be comma separated values.
# The actual topic names will be %s.%(default_notification_level)s
notification_topics = notifications
Setting cases
Logging and RPC The following options configure the OpenStack Networking server to send notifications through logging and RPC. The logging options are described in the OpenStack Configuration Reference. RPC notifications go to the 'notifications.info' queue bound to a topic exchange defined by 'control_exchange' in neutron.conf.
# ============ Notification System Options =====================
# Notifications can be sent when network/subnet/port are create, updated or deleted.
# There are three methods of sending notifications: logging (via the
# log_file directive), rpc (via a message queue) and
# noop (no notifications sent, the default)
# Notification_driver can be defined multiple times
# Do nothing driver
# notification_driver = neutron.openstack.common.notifier.no_op_notifier
# Logging driver
notification_driver = neutron.openstack.common.notifier.log_notifier
# RPC driver
notification_driver = neutron.openstack.common.notifier.rpc_notifier
# default_notification_level is used to form actual topic names or to set logging level
default_notification_level = INFO
# default_publisher_id is a part of the notification payload
# host = myhost.com
# default_publisher_id = $host
# Defined in rpc_notifier for rpc way, can be comma separated values.
# The actual topic names will be %s.%(default_notification_level)s
notification_topics = notifications
Multiple RPC topics The following options configure the OpenStack Networking server to send notifications to multiple RPC topics. RPC notifications go to the 'notifications_one.info' and 'notifications_two.info' queues bound to a topic exchange defined by 'control_exchange' in neutron.conf.
# ============ Notification System Options =====================
# Notifications can be sent when network/subnet/port are create, updated or deleted.
# There are three methods of sending notifications: logging (via the
# log_file directive), rpc (via a message queue) and
# noop (no notifications sent, the default)
# Notification_driver can be defined multiple times
# Do nothing driver
# notification_driver = neutron.openstack.common.notifier.no_op_notifier
# Logging driver
# notification_driver = neutron.openstack.common.notifier.log_notifier
# RPC driver
notification_driver = neutron.openstack.common.notifier.rabbit_notifier
# default_notification_level is used to form actual topic names or to set logging level
default_notification_level = INFO
# default_publisher_id is a part of the notification payload
# host = myhost.com
# default_publisher_id = $host
# Defined in rpc_notifier for rpc way, can be comma separated values.
# The actual topic names will be %s.%(default_notification_level)s
notification_topics = notifications_one,notifications_two
Authentication and authorization OpenStack Networking uses the OpenStack Identity service (project name keystone) as the default authentication service. When OpenStack Identity is enabled, users submitting requests to the OpenStack Networking service must provide an authentication token in the X-Auth-Token request header. The token must have been obtained by authenticating with the OpenStack Identity endpoint. For more information about authentication with OpenStack Identity, see the OpenStack Identity documentation. When OpenStack Identity is enabled, it is not mandatory to specify tenant_id for resources in create requests, because the tenant ID is derived from the authentication token. The default authorization settings allow only administrative users to create resources on behalf of a different tenant. OpenStack Networking uses information received from OpenStack Identity to authorize user requests. OpenStack Networking handles two kinds of authorization policies: Operation-based policies specify access criteria for specific operations, possibly with fine-grained control over specific attributes. Resource-based policies specify whether access to a specific resource is granted, according to the permissions configured for the resource (currently available only for the network resource). The actual authorization policies enforced in OpenStack Networking might vary from deployment to deployment. The policy engine reads entries from the policy.json file. The actual location of this file might vary from distribution to distribution. Entries can be updated while the system is running, and no service restart is required. Every time the policy file is updated, the policies are automatically reloaded. Currently, the only way of updating such policies is to edit the policy file. In this section, the terms policy and rule refer to objects that are specified in the same way in the policy file. There are no syntax differences between a rule and a policy. A policy is something that is matched directly by the OpenStack Networking policy engine. A rule is an element in a policy, which is evaluated. For instance, in create_subnet: [["admin_or_network_owner"]], create_subnet is a policy, and admin_or_network_owner is a rule. Policies are triggered by the OpenStack Networking policy engine whenever one of them matches an OpenStack Networking API operation or a specific attribute being used in a given operation. For instance, the create_subnet policy is triggered every time a POST /v2.0/subnets request is sent to the OpenStack Networking server; on the other hand, create_network:shared is triggered every time the shared attribute is explicitly specified (and set to a value different from its default) in a POST /v2.0/networks request. Policies can also relate to specific API extensions; for instance, extension:provider_network:set is triggered if the attributes defined by the Provider Network extension are specified in an API request. An authorization policy can be composed of one or more rules. If more rules are specified, the policy evaluates successfully if any of the rules evaluates successfully; if an API operation matches multiple policies, then all the policies must evaluate successfully. Authorization rules are also recursive: once a rule is matched, it can resolve to another rule, until a terminal rule is reached.
The OpenStack Networking policy engine currently defines the following kinds of terminal rules: Role-based rules evaluate successfully if the user who submits the request has the specified role. For instance, "role:admin" is successful if the user submitting the request is an administrator. Field-based rules evaluate successfully if a field of the resource specified in the current request matches a specific value. For instance, "field:networks:shared=True" is successful if the shared attribute of the network resource is set to true. Generic rules compare an attribute in the resource with an attribute extracted from the user's security credentials and evaluate successfully if the comparison is successful. For instance, "tenant_id:%(tenant_id)s" is successful if the tenant identifier in the resource is equal to the tenant identifier of the user submitting the request. The following is an extract from the default policy.json file:
{
    "admin_or_owner": [["role:admin"], ["tenant_id:%(tenant_id)s"]],  [1]
    "admin_or_network_owner": [["role:admin"], ["tenant_id:%(network_tenant_id)s"]],
    "admin_only": [["role:admin"]],
    "regular_user": [],
    "shared": [["field:networks:shared=True"]],
    "default": [["rule:admin_or_owner"]],  [2]
    "create_subnet": [["rule:admin_or_network_owner"]],
    "get_subnet": [["rule:admin_or_owner"], ["rule:shared"]],
    "update_subnet": [["rule:admin_or_network_owner"]],
    "delete_subnet": [["rule:admin_or_network_owner"]],
    "create_network": [],
    "get_network": [["rule:admin_or_owner"], ["rule:shared"]],  [3]
    "create_network:shared": [["rule:admin_only"]],  [4]
    "update_network": [["rule:admin_or_owner"]],
    "delete_network": [["rule:admin_or_owner"]],
    "create_port": [],
    "create_port:mac_address": [["rule:admin_or_network_owner"]],  [5]
    "create_port:fixed_ips": [["rule:admin_or_network_owner"]],
    "get_port": [["rule:admin_or_owner"]],
    "update_port": [["rule:admin_or_owner"]],
    "delete_port": [["rule:admin_or_owner"]]
}
[1] A rule that evaluates successfully if the current user is an administrator or the owner of the resource specified in the request (the tenant identifier is equal). [2] The default policy, which is always evaluated if an API operation does not match any of the policies in policy.json. [3] This policy evaluates successfully if either admin_or_owner or shared evaluates successfully. [4] This policy restricts the ability to manipulate the shared attribute for a network to administrators only. [5] This policy restricts the ability to manipulate the mac_address attribute for a port to administrators and the owner of the network to which the port is attached. In some cases, some operations should be restricted to administrators only. The following example shows you how to modify a policy file to permit tenants to define networks and see their resources, and to permit administrative users to perform all other operations:
{
    "admin_or_owner": [["role:admin"], ["tenant_id:%(tenant_id)s"]],
    "admin_only": [["role:admin"]],
    "regular_user": [],
    "default": [["rule:admin_only"]],
    "create_subnet": [["rule:admin_only"]],
    "get_subnet": [["rule:admin_or_owner"]],
    "update_subnet": [["rule:admin_only"]],
    "delete_subnet": [["rule:admin_only"]],
    "create_network": [],
    "get_network": [["rule:admin_or_owner"]],
    "create_network:shared": [["rule:admin_only"]],
    "update_network": [["rule:admin_or_owner"]],
    "delete_network": [["rule:admin_or_owner"]],
    "create_port": [["rule:admin_only"]],
    "get_port": [["rule:admin_or_owner"]],
    "update_port": [["rule:admin_only"]],
    "delete_port": [["rule:admin_only"]]
}
High Availability The use of high availability in a Networking deployment helps mitigate the impact of individual node failures. In general, you can run neutron-server and neutron-dhcp-agent in an active-active fashion. You can run the neutron-l3-agent service as active/passive, which avoids IP conflicts with respect to gateway IP addresses.
OpenStack Networking High Availability with Pacemaker You can run some OpenStack Networking services in a cluster (active/passive, or active/active for the OpenStack Networking server only) with Pacemaker. Download the latest resource agents: neutron-server: https://github.com/madkiss/openstack-resource-agents neutron-dhcp-agent: https://github.com/madkiss/openstack-resource-agents neutron-l3-agent: https://github.com/madkiss/openstack-resource-agents For information about how to build a cluster, see the Pacemaker documentation.
Plug-in pagination and sorting support
Plug-ins that support native pagination and sorting
Plug-in Supports native pagination Supports native sorting
Open vSwitch True True
LinuxBridge True True