Advanced Features through API Extensions This section discusses several API extensions implemented by a number of plugins. We include them in this guide because they provide capabilities similar to what was available in nova-network and are thus likely to be relevant to a large portion of the OpenStack community.
Provider Networks Provider networks allow cloud administrators to create OpenStack Networking networks that map directly to physical networks in the data center.  This is commonly used to give tenants direct access to a "public" network that can be used to reach the Internet.  It may also be used to integrate with VLANs in the network that already have a defined meaning (e.g., allow a VM from the "marketing" department to be placed on the same VLAN as bare-metal marketing hosts in the same data center). The provider extension allows administrators to explicitly manage the relationship between OpenStack Networking virtual networks and underlying physical mechanisms such as VLANs and tunnels. When this extension is supported, OpenStack Networking client users with administrative privileges see additional provider attributes on all virtual networks, and are able to specify these attributes in order to create provider networks. The provider extension is supported by the openvswitch and linuxbridge plugins. Configuration of these plugins requires familiarity with this extension.
Terminology
A number of terms are used in the provider extension and in the configuration of plugins supporting the provider extension:
virtual network - An OpenStack Networking L2 network (identified by a UUID and optional name) whose ports can be attached as vNICs to OpenStack Compute instances and to various OpenStack Networking agents. The openvswitch and linuxbridge plugins each support several different mechanisms to realize virtual networks.
physical network - A network connecting virtualization hosts (i.e., OpenStack Compute nodes) with each other and with other network resources. Each physical network may support multiple virtual networks. The provider extension and the plugin configurations identify physical networks using simple string names.
tenant network - A "normal" virtual network created by/for a tenant. The tenant is not aware of how that network is physically realized.
provider network - A virtual network administratively created to map to a specific network in the data center, typically to enable direct access to non-OpenStack resources on that network. Tenants can be given access to provider networks.
VLAN network - A virtual network realized as packets on a specific physical network containing IEEE 802.1Q headers with a specific VID field value. VLAN networks sharing the same physical network are isolated from each other at L2, and can even have overlapping IP address spaces. Each distinct physical network supporting VLAN networks is treated as a separate VLAN trunk, with a distinct space of VID values. Valid VID values are 1 through 4094.
flat network - A virtual network realized as packets on a specific physical network containing no IEEE 802.1Q header. Each physical network can realize at most one flat network.
local network - A virtual network that allows communication within each host, but not across a network. Local networks are intended mainly for single-node test scenarios, but may have other uses.
GRE network - A virtual network realized as network packets encapsulated using GRE. GRE networks are also referred to as "tunnels". GRE tunnel packets are routed by the host's IP routing table, so GRE networks are not associated by OpenStack Networking with specific physical networks.
Both the openvswitch and linuxbridge plugins support VLAN networks, flat networks, and local networks. Only the openvswitch plugin currently supports GRE networks, provided that the host's Linux kernel supports the required Open vSwitch features.
Provider Attributes The provider extension extends the OpenStack Networking network resource with the following three additional attributes:
Provider Network Attributes
Attribute name Type Default Value Description
provider:network_type String N/A The physical mechanism by which the virtual network is realized. Possible values are "flat", "vlan", "local", and "gre", corresponding to flat networks, VLAN networks, local networks, and GRE networks as defined above. All types of provider networks can be created by administrators, while tenant networks can be realized as "vlan", "gre", or "local" network types depending on plugin configuration.
provider:physical_network String If a physical network named "default" has been configured, and if provider:network_type is "flat" or "vlan", then "default" is used. The name of the physical network over which the virtual network is realized for flat and VLAN networks. Not applicable to the "local" or "gre" network types.
provider:segmentation_id Integer N/A For VLAN networks, the VLAN VID on the physical network that realizes the virtual network. Valid VLAN VIDs are 1 through 4094. For GRE networks, the tunnel ID. Valid tunnel IDs are any 32 bit unsigned integer. Not applicable to the "flat" or "local" network types.
The provider attributes are returned by OpenStack Networking API operations when the client is authorized for the extension:provider_network:view action via the OpenStack Networking policy configuration. The provider attributes are only accepted for network API operations if the client is authorized for the extension:provider_network:set action. The default OpenStack Networking API policy configuration authorizes both actions for users with the admin role. Policy configuration is described elsewhere in this guide.
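As a hedged illustration (the exact default entries can differ between releases), the relevant lines in the OpenStack Networking policy.json file might look like the following, restricting both provider actions to the admin role:
"extension:provider_network:view": "rule:admin_only",
"extension:provider_network:set": "rule:admin_only"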
Provider API Workflow
Show all attributes of a network, including provider attributes, when invoked with the admin role:
neutron net-show <name or net-id>
Create a local provider network (admin-only):
neutron net-create <name> --tenant_id <tenant-id> --provider:network_type local
Create a flat provider network (admin-only):
neutron net-create <name> --tenant_id <tenant-id> --provider:network_type flat --provider:physical_network <phys-net-name>
Create a VLAN provider network (admin-only):
neutron net-create <name> --tenant_id <tenant-id> --provider:network_type vlan --provider:physical_network <phys-net-name> --provider:segmentation_id <VID>
Create a GRE provider network (admin-only):
neutron net-create <name> --tenant_id <tenant-id> --provider:network_type gre --provider:segmentation_id <tunnel-id>
When creating flat networks or VLAN networks, <phys-net-name> must be known to the plugin; see the plugin configuration sections of this guide for details on configuring network_vlan_ranges to identify all physical networks. When creating VLAN networks, <VID> can fall either within or outside any configured ranges of VLAN IDs from which tenant networks are allocated. Similarly, when creating GRE networks, <tunnel-id> can fall either within or outside any tunnel ID ranges from which tenant networks are allocated. Once provider networks have been created, subnets can be allocated on them and they can be used similarly to other virtual networks, subject to authorization policy based on the specified <tenant_id>.
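As an illustrative sketch of how physical networks are identified to the plugin (the file path, section, and values below are assumptions for a typical openvswitch deployment, not taken from this guide), the plugin configuration file, e.g. /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini, might contain:
[OVS]
tenant_network_type = vlan
network_vlan_ranges = physnet1:1000:2999
bridge_mappings = physnet1:br-eth1
With such a configuration, an administrator could create a VLAN provider network on physnet1 using a VID outside the 1000:2999 tenant range, for example --provider:segmentation_id 3000, while tenant networks are allocated VIDs from within that range.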
L3 Routing and NAT Just as the core OpenStack Networking API provides abstract L2 network segments that are decoupled from the technology used to implement the L2 network, OpenStack Networking includes an API extension that provides abstract L3 routers that API users can dynamically provision and configure. These OpenStack Networking routers can connect multiple L2 OpenStack Networking networks, and can also provide a "gateway" that connects one or more private L2 networks to a shared "external" network (e.g., a public network for access to the Internet). Common models of deploying OpenStack Networking L3 routers are described elsewhere in this guide. The L3 router provides basic NAT capabilities on "gateway" ports that uplink the router to external networks. The router SNATs all traffic by default, and supports "floating IPs", which create a static one-to-one mapping from a public IP on the external network to a private IP on one of the other subnets attached to the router. This allows a tenant to selectively expose VMs on private networks to other hosts on the external network (and often to all hosts on the Internet). Floating IPs can be allocated and then mapped from one OpenStack Networking port to another, as needed.
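To make the NAT behavior concrete, the following simplified iptables rules are an illustrative sketch only (they are not the exact rules programmed by the L3 agent, and the addresses are invented): a floating IP of 172.16.1.10 mapped to the fixed IP 10.0.0.5 amounts to a DNAT rule for inbound traffic and a matching SNAT rule for outbound traffic.
iptables -t nat -A PREROUTING -d 172.16.1.10/32 -j DNAT --to-destination 10.0.0.5
iptables -t nat -A POSTROUTING -s 10.0.0.5/32 -j SNAT --to-source 172.16.1.10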
L3 API Abstractions
Router
Attribute name Type Default Value Description
id uuid-str generated UUID for the router.
name String None Human-readable name for the router. Might not be unique.
admin_state_up Bool True The administrative state of the router. If false (down), the router does not forward packets.
status String N/A Indicates whether router is currently operational.
tenant_id uuid-str N/A Owner of the router. Only admin users can specify a tenant_id other than its own.
external_gateway_info dict containing a 'network_id' key-value pair Null The external network that this router connects to for gateway services (e.g., NAT).
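For reference, a hedged example of creating such a router directly through the API, setting external_gateway_info at creation time (the endpoint host, token variable, and network UUID are placeholders):
curl -X POST http://controller:9696/v2.0/routers \
  -H "X-Auth-Token: $TOKEN" -H "Content-Type: application/json" \
  -d '{"router": {"name": "router1", "admin_state_up": true, "external_gateway_info": {"network_id": "<ext-net-uuid>"}}}'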
Floating IP
Attribute name Type Default Value Description
id uuid-str generated UUID for the floating IP.
floating_ip_address string (IP address) allocated by OpenStack Networking The external network IP address available to be mapped to an internal IP address.
floating_network_id uuid-str N/A The network indicating the set of subnets from which the floating IP should be allocated
router_id uuid-str N/A Read-only value indicating the router that connects the external network to the associated internal port, if a port is associated.
port_id uuid-str Null Indicates the internal OpenStack Networking port associated with the external floating IP.
fixed_ip_address string (IP address) Null Indicates the IP address on the internal port that is mapped to by the floating IP (since an OpenStack Networking port might have more than one IP address).
tenant_id uuid-str N/A Owner of the Floating IP. Only admin users can specify a tenant_id other than its own.
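Similarly, a hedged example of allocating a floating IP and associating it with an internal port in a single API call (endpoint host, token variable, and UUIDs are placeholders):
curl -X POST http://controller:9696/v2.0/floatingips \
  -H "X-Auth-Token: $TOKEN" -H "Content-Type: application/json" \
  -d '{"floatingip": {"floating_network_id": "<ext-net-uuid>", "port_id": "<internal-vm-port-uuid>"}}'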
Common L3 Workflow
Create external networks (admin-only):
neutron net-create public --router:external=True
neutron subnet-create public 172.16.1.0/24
Viewing external networks:
neutron net-list -- --router:external=True
Creating routers
Create an internal-only router to connect multiple L2 networks privately:
neutron net-create net1
neutron subnet-create net1 10.0.0.0/24
neutron net-create net2
neutron subnet-create net2 10.0.1.0/24
neutron router-create router1
neutron router-interface-add router1 <subnet1-uuid>
neutron router-interface-add router1 <subnet2-uuid>
For each subnet added, the router gets an interface with the gateway_ip address of the subnet, and this interface is attached to a port on the L2 OpenStack Networking network associated with the subnet.
A router can also be connected to an "external network", allowing that router to act as a NAT gateway for external connectivity:
neutron router-gateway-set router1 <ext-net-id>
The router then gets a gateway interface on the specified external network. This provides SNAT connectivity to the external network as well as support for floating IPs allocated on that external network (see below). Commonly, an external network maps to a provider network in the data center.
Viewing routers:
List all routers: neutron router-list
Show a specific router: neutron router-show <router_id>
Show all internal interfaces for a router: neutron port-list -- --device_id=<router_id>
Associating / disassociating floating IPs:
First, identify the port-id representing the VM NIC that the floating IP should map to:
neutron port-list -c id -c fixed_ips -- --device_id=<instance_id>
This port must be on an OpenStack Networking subnet that is attached to a router uplinked to the external network that will be used to create the floating IP. Conceptually, this is because the router must be able to perform the Destination NAT (DNAT) rewriting of packets from the floating IP address (chosen from a subnet on the external network) to the internal fixed IP (chosen from a private subnet that is "behind" the router).
Create a floating IP unassociated, then associate it:
neutron floatingip-create <ext-net-id>
neutron floatingip-associate <floatingip-id> <internal VM port-id>
Or create a floating IP and associate it in a single step:
neutron floatingip-create --port_id <internal VM port-id> <ext-net-id>
Viewing floating IP state:
neutron floatingip-list
Find the floating IP for a particular VM port:
neutron floatingip-list -- --port_id=ZZZ
Disassociate a floating IP:
neutron floatingip-disassociate <floatingip-id>
L3 Tear Down
Delete the floating IP:
neutron floatingip-delete <floatingip-id>
Then clear the gateway:
neutron router-gateway-clear router1
Then remove the interfaces from the router:
neutron router-interface-delete router1 <subnet-id>
Finally, delete the router:
neutron router-delete router1
Security Groups Security groups and security group rules allow administrators and tenants to specify the type of traffic and direction (ingress/egress) that is allowed to pass through a port. A security group is a container for security group rules. When a port is created in OpenStack Networking, it is associated with a security group. If a security group is not specified, the port is associated with a 'default' security group. By default, this group drops all ingress traffic and allows all egress traffic. Rules can be added to this group in order to change this behaviour. To use the OpenStack Compute security group APIs and/or have OpenStack Compute orchestrate the creation of new ports for instances on specific security groups, additional configuration is needed: in /etc/nova/nova.conf, set the config option security_group_api=neutron on every node running nova-compute and nova-api, and then restart nova-api and nova-compute to pick up this change. After that, both the OpenStack Compute and OpenStack Networking security group APIs can be used at the same time. To use the OpenStack Compute security group API with OpenStack Networking, the OpenStack Networking plugin must implement the security group API. The following plugins currently implement this: Nicira NVP, Open vSwitch, Linux Bridge, NEC, and Ryu. You must also configure the correct firewall driver in the securitygroup section of the plugin/agent configuration file. Some plugins and agents, such as the Linux Bridge agent and the Open vSwitch agent, use the no-operation driver as the default, which results in non-working security groups. When the security group API is used through OpenStack Compute, security groups are applied to all ports on an instance; this is because the OpenStack Compute security group APIs are instance-based rather than port-based, as in OpenStack Networking.
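As a hedged sketch of this configuration (the section name and driver class paths below are typical values for the Open vSwitch and Linux Bridge agents; verify them against your plugin's documentation):
On every node running nova-compute and nova-api, in /etc/nova/nova.conf:
[DEFAULT]
security_group_api = neutron
In the plugin/agent configuration file, for example for the Open vSwitch agent:
[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
For the Linux Bridge agent, an iptables-based driver such as neutron.agent.linux.iptables_firewall.IptablesFirewallDriver would typically be used instead.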
Security Group API Abstractions
Security Group Attributes
Attribute name Type Default Value Description
id uuid-str generated UUID for the security group.
name String None Human-readable name for the security group. Might not be unique. Cannot be named 'default', as that name is reserved for the security group automatically created for each tenant.
description String None Human-readable description of a security group.
tenant_id uuid-str N/A Owner of the security group. Only admin users can specify a tenant_id other than their own.
Security Group Rules
Attribute name Type Default Value Description
id uuid-str generated UUID for the security group rule.
security_group_id uuid-str or Integer allocated by OpenStack Networking The security group with which the rule is associated.
direction String N/A The direction in which traffic is allowed (ingress/egress) from the perspective of the VM.
protocol String None IP Protocol (icmp, tcp, udp, etc).
port_range_min Integer None Port at start of range
port_range_max Integer None Port at end of range
ethertype String None The ethertype of the L2 packet (IPv4, IPv6, etc.).
remote_ip_prefix string (IP cidr) None CIDR for address range
remote_group_id uuid-str or Integer allocated by OpenStack Networking or OpenStack Compute The remote (source) security group to which the rule applies.
tenant_id uuid-str N/A Owner of the security group rule. Only admin users can specify a tenant_id other than its own.
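A hedged example of creating a rule directly through the API using the attributes above (endpoint host, token variable, and UUIDs are placeholders):
curl -X POST http://controller:9696/v2.0/security-group-rules \
  -H "X-Auth-Token: $TOKEN" -H "Content-Type: application/json" \
  -d '{"security_group_rule": {"security_group_id": "<security-group-uuid>", "direction": "ingress", "ethertype": "IPv4", "protocol": "tcp", "port_range_min": 22, "port_range_max": 22, "remote_ip_prefix": "192.0.2.0/24"}}'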
Common Security Group Commands
Create a security group for our web servers:
neutron security-group-create webservers --description "security group for webservers"
Viewing security groups:
neutron security-group-list
Creating a security group rule to allow port 80 ingress:
neutron security-group-rule-create --direction ingress --protocol tcp --port_range_min 80 --port_range_max 80 <security_group_uuid>
List security group rules:
neutron security-group-rule-list
Delete a security group rule:
neutron security-group-rule-delete <security_group_rule_uuid>
Delete a security group:
neutron security-group-delete <security_group_uuid>
Create a port associated with two security groups:
neutron port-create --security-group <security_group_id1> --security-group <security_group_id2> <network_id>
Remove security groups from a port:
neutron port-update --no-security-groups <port_id>
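As a further hedged illustration of the remote_ip_prefix and remote_group_id attributes (flag spellings can vary between client versions; the CIDR shown is an example value):
Allow ICMP ingress only from a specific CIDR:
neutron security-group-rule-create --direction ingress --protocol icmp --remote_ip_prefix 192.0.2.0/24 <security_group_uuid>
Allow TCP port 22 ingress only from members of another security group:
neutron security-group-rule-create --direction ingress --protocol tcp --port_range_min 22 --port_range_max 22 --remote_group_id <other_security_group_uuid> <security_group_uuid>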
Load-Balancer-as-a-Service The Load-Balancer-as-a-Service API is an experimental API intended to give early adopters and vendors a chance to build implementations against it. The reference implementation should probably not be run in production environments.
Common Load-Balancer-as-a-Service Workflow
Find the correct subnet ID. The load balancer virtual IP (VIP) and the instances that provide the balanced service must all be on the same subnet, so the first step is to obtain a list of available subnets and their IDs:
neutron subnet-list
Create a load balancer pool using the appropriate subnet ID from the list obtained above:
neutron lb-pool-create --lb-method ROUND_ROBIN --name mypool --protocol HTTP --subnet-id <subnet-uuid>
Valid options for --lb-method depend on the backend provider. For the reference implementation based on HAProxy, valid options are ROUND_ROBIN, LEAST_CONNECTIONS, or SOURCE_IP. Valid options for --protocol are HTTP, HTTPS, or TCP.
Associate servers with the pool:
neutron lb-member-create --address <webserver one IP> --protocol-port 80 mypool
neutron lb-member-create --address <webserver two IP> --protocol-port 80 mypool
Optionally, --weight may be specified as an integer in the range 0..256. The weight of a member determines the portion of requests or connections it services compared to the other members of the pool. A value of 0 means the member does not participate in load balancing but still accepts persistent connections.
Create a health monitor that checks to make sure our instances are still running on the specified protocol-port:
neutron lb-healthmonitor-create --delay 3 --type HTTP --max-retries 3 --timeout 3
Valid options for --type are PING, TCP, HTTP, or HTTPS. It is also possible to set --url_path, which defaults to "/" and, if specified, must begin with a leading slash.
Associate the health monitor with the pool:
neutron lb-healthmonitor-associate <healthmonitor-uuid> mypool
Create a virtual IP address (VIP) that, when accessed via the load balancer, directs the requests to one of the pool members:
neutron lb-vip-create --name myvip --protocol-port 80 --protocol HTTP --subnet-id <subnet-uuid> mypool
Values for --protocol here are the same as in the pool creation step above. Connection rate limiting can be implemented by using the --connection-limit flag and specifying the maximum number of connections per second.
As written above, the load balancer will not have persistent sessions. To define persistent sessions, so that a given client always connects to the same backend (as long as it is still operational), use the following form:
neutron lb-vip-create --name myvip --protocol-port 80 --protocol HTTP --subnet-id <subnet-uuid> --session-persistence type=dict type=<type>,[cookie_name=<name>] mypool
Valid session persistence types are APP_COOKIE, HTTP_COOKIE, or SOURCE_IP. The APP_COOKIE type reuses a cookie from your application to manage persistence and requires the additional option cookie_name=<name> to inform the load balancer which cookie name to use; cookie_name is unused with the other persistence types.
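As a hedged illustration of the optional flags mentioned above (the URL path and connection limit shown are example values):
neutron lb-healthmonitor-create --delay 3 --type HTTP --max-retries 3 --timeout 3 --url_path /healthcheck
neutron lb-vip-create --name myvip --protocol-port 80 --protocol HTTP --subnet-id <subnet-uuid> --connection-limit 1000 mypool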
Plugin Specific Extensions Each vendor may choose to implement additional API extensions to the core API. This section describes the extensions for each plugin.
Nicira NVP Extensions This section describes the extensions provided by the Nicira NVP plugin.
Nicira NVP QoS Extension The Nicira NVP QoS extension rate-limits network ports to guarantee a specific amount of bandwidth for each port. By default, this extension is only accessible to a tenant with an admin role, but this is configurable through the policy.json file. To use this extension, create a queue and specify the min/max bandwidth rates (kbps), and optionally set the QoS marking and DSCP value (if your network fabric uses these values to make forwarding decisions). Once created, a queue can be associated with a network; ports subsequently created on that network are automatically associated with a queue sized according to the queue associated with the network. Because a single queue size for every port on a network may not be optimal, a scaling factor taken from the nova flavor's 'rxtx_factor' is passed in from OpenStack Compute when the port is created and is used to scale the queue. Lastly, to set a baseline QoS policy for the amount of bandwidth a single port can use (unless a network queue is specified for the network the port is created on), a default queue can be created in neutron; ports are then associated with a queue of that size times the rxtx scaling factor. Note that specifying a network queue or default queue does not add queues to ports created previously; queues are only created for ports created thereafter.
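As a hedged example of how the rxtx_factor reaches the queue calculation (the flavor name, ID, and sizes below are made up; check nova flavor-create --help for the exact option), an administrator could define a flavor whose ports receive twice the queue size:
nova flavor-create --rxtx-factor 2.0 m1.qos 100 2048 20 2
Instances booted with this flavor would have their port queues scaled by a factor of 2.0 relative to the queue associated with the network or the default queue.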
Nicira NVP QoS API Abstractions
Nicira NVP QoS Attributes
Attribute name Type Default Value Description
id uuid-str generated UUID for the QoS queue.
default Boolean False If True, ports are created with this queue size unless the network the port is created on is associated with its own queue.
name String None Name for QoS queue.
min Integer 0 Minimum Bandwidth Rate (kbps).
max Integer N/A Maximum Bandwidth Rate (kbps).
qos_marking String untrusted Whether QoS marking should be trusted or untrusted.
dscp Integer 0 DSCP Marking value.
tenant_id uuid-str N/A The owner of the QoS queue.
Nicira NVP QoS Walk Through
Create a QoS queue (admin-only):
neutron queue-create --min 10 --max 1000 myqueue
Associate a queue with a network:
neutron net-create network --queue_id=<queue_id>
Create a default system queue:
neutron queue-create --default True --min 10 --max 2000 default
List QoS queues:
neutron queue-list
Delete a QoS queue:
neutron queue-delete <queue_id or name>