Advanced features through API extensions Several plug-ins implement API extensions that provide capabilities similar to what was available in nova-network. These plug-ins are likely to be of interest to the OpenStack community.
Provider networks Provider networks enable cloud administrators to create Networking networks that map directly to the physical networks in the data center. This is commonly used to give tenants direct access to a public network that can be used to reach the Internet. It might also be used to integrate with VLANs in the network that already have a defined meaning (for example, enable a VM from the "marketing" department to be placed on the same VLAN as bare-metal marketing hosts in the same data center). The provider extension allows administrators to explicitly manage the relationship between Networking virtual networks and underlying physical mechanisms such as VLANs and tunnels. When this extension is supported, Networking client users with administrative privileges see additional provider attributes on all virtual networks, and are able to specify these attributes in order to create provider networks. The provider extension is supported by the Open vSwitch and Linux Bridge plug-ins. Configuration of these plug-ins requires familiarity with this extension.
Terminology A number of terms are used in the provider extension and in the configuration of plug-ins supporting the provider extension:
Provider extension terminology
Term Description
virtual network A Networking L2 network (identified by a UUID and an optional name) whose ports can be attached as vNICs to Compute instances and to various Networking agents. The Open vSwitch and Linux Bridge plug-ins each support several different mechanisms to realize virtual networks.
physical network A network connecting virtualization hosts (such as Compute nodes) with each other and with other network resources. Each physical network might support multiple virtual networks. The provider extension and the plug-in configurations identify physical networks using simple string names.
tenant network A virtual network that a tenant or an administrator creates. The physical details of the network are not exposed to the tenant.
provider network A virtual network administratively created to map to a specific network in the data center, typically to enable direct access to non-OpenStack resources on that network. Tenants can be given access to provider networks.
VLAN network A virtual network implemented as packets on a specific physical network containing IEEE 802.1Q headers with a specific VID field value. VLAN networks sharing the same physical network are isolated from each other at L2, and can even have overlapping IP address spaces. Each distinct physical network supporting VLAN networks is treated as a separate VLAN trunk, with a distinct space of VID values. Valid VID values are 1 through 4094.
flat network A virtual network implemented as packets on a specific physical network containing no IEEE 802.1Q header. Each physical network can realize at most one flat network.
local network A virtual network that allows communication within each host, but not across a network. Local networks are intended mainly for single-node test scenarios, but can have other uses.
GRE network A virtual network implemented as network packets encapsulated using GRE. GRE networks are also referred to as tunnels. GRE tunnel packets are routed by the IP routing table for the host, so GRE networks are not associated by Networking with specific physical networks.
Virtual Extensible LAN (VXLAN) network VXLAN is a proposed encapsulation protocol for running an overlay network on existing Layer 3 infrastructure. An overlay network is a virtual network that is built on top of existing network Layer 2 and Layer 3 technologies to support elastic compute architectures.
The ML2, Open vSwitch, and Linux Bridge plug-ins support VLAN networks, flat networks, and local networks. Only the ML2 and Open vSwitch plug-ins currently support GRE and VXLAN networks, provided that the required features exist in the host's Linux kernel, Open vSwitch, and iproute2 packages.
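For example, a minimal Open vSwitch plug-in configuration that supports flat and VLAN networks on one physical network might look like the following sketch (the physical network name physnet1, the VLAN range, and the bridge br-eth1 are illustrative assumptions; see the OpenStack Configuration Reference for the authoritative option list):
[OVS]
tenant_network_type = vlan
network_vlan_ranges = physnet1:1000:2999
bridge_mappings = physnet1:br-eth1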
Provider attributes The provider extension extends the Networking network resource with these attributes:
Provider network attributes
Attribute name Type Default Value Description
provider:network_type String N/A The physical mechanism by which the virtual network is implemented. Possible values are flat, vlan, local, and gre, corresponding to flat networks, VLAN networks, local networks, and GRE networks as defined above. All types of provider networks can be created by administrators, while tenant networks can be implemented as vlan, gre, or local network types depending on plug-in configuration.
provider:physical_network String If a physical network named "default" has been configured, and if provider:network_type is flat or vlan, then "default" is used. The name of the physical network over which the virtual network is implemented for flat and VLAN networks. Not applicable to the local or gre network types.
provider:segmentation_id Integer N/A For VLAN networks, the VLAN VID on the physical network that realizes the virtual network. Valid VLAN VIDs are 1 through 4094. For GRE networks, the tunnel ID. Valid tunnel IDs are any 32-bit unsigned integer. Not applicable to the flat or local network types.
To view or set provider extended attributes, a client must be authorized for the extension:provider_network:view and extension:provider_network:set actions in the Networking policy configuration. The default Networking configuration authorizes both actions for users with the admin role. An authorized client or an administrative user can view and set the provider extended attributes through Networking API calls. See the Networking policy configuration documentation for details.
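As an illustration, the corresponding entries in the Networking policy.json file might resemble the following sketch (the admin-only defaults shown are an assumption; consult your deployment's policy file):
"extension:provider_network:view": "rule:admin_only",
"extension:provider_network:set": "rule:admin_only"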
Provider extension API operations To use the provider extension with the default policy settings, you must have the administrative role. This table shows example neutron commands that enable you to complete basic provider extension API operations:
Basic provider extension API operations
Operation Command
Shows all attributes of a network, including provider attributes. $ neutron net-show <name or net-id>
Creates a local provider network. $ neutron net-create <name> --tenant_id <tenant-id> --provider:network_type local
Creates a flat provider network. When you create flat networks, <phys-net-name> must be known to the plug-in. See the OpenStack Configuration Reference for details. $ neutron net-create <name> --tenant_id <tenant-id> --provider:network_type flat --provider:physical_network <phys-net-name>
Creates a VLAN provider network. When you create VLAN networks, <phys-net-name> must be known to the plug-in. See the OpenStack Configuration Reference for details on configuring network_vlan_ranges to identify all physical networks. When you create VLAN networks, <VID> can fall either within or outside any configured ranges of VLAN IDs from which tenant networks are allocated. $ neutron net-create <name> --tenant_id <tenant-id> --provider:network_type vlan --provider:physical_network <phys-net-name> --provider:segmentation_id <VID>
Creates a GRE provider network. When you create GRE networks, <tunnel-id> can be either inside or outside any tunnel ID ranges from which tenant networks are allocated. After you create provider networks, you can allocate subnets, which you can use in the same way as other virtual networks, subject to authorization policy based on the specified <tenant_id>. $ neutron net-create <name> --tenant_id <tenant-id> --provider:network_type gre --provider:segmentation_id <tunnel-id>
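For example, to give instances direct access to an existing data center VLAN, an administrator might create a VLAN provider network and a matching subnet, as in this sketch (the physical network name physnet1, VID 1000, and the addressing are illustrative assumptions):
$ neutron net-create datacenter-vlan --tenant_id <tenant-id> --provider:network_type vlan --provider:physical_network physnet1 --provider:segmentation_id 1000
$ neutron subnet-create datacenter-vlan 203.0.113.0/24 --name datacenter-subnet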
L3 routing and NAT The Networking API provides abstract L2 network segments that are decoupled from the technology used to implement the L2 network. Networking includes an API extension that provides abstract L3 routers that API users can dynamically provision and configure. These Networking routers can connect multiple L2 Networking networks, and can also provide a gateway that connects one or more private L2 networks to a shared external network, such as a public network for access to the Internet. See the OpenStack Configuration Reference for details on common models of deploying Networking L3 routers. The L3 router provides basic NAT capabilities on gateway ports that uplink the router to external networks. This router SNATs all traffic by default, and supports floating IPs, which create a static one-to-one mapping from a public IP on the external network to a private IP on one of the other subnets attached to the router. This allows a tenant to selectively expose VMs on private networks to other hosts on the external network (and often to all hosts on the Internet). You can allocate and map floating IPs from one port to another, as needed.
L3 API abstractions
Router
Attribute name Type Default Value Description
id uuid-str generated UUID for the router.
name String None Human-readable name for the router. Might not be unique.
admin_state_up Bool True The administrative state of the router. If false (down), the router does not forward packets.
status String N/A Indicates whether the router is currently operational.
tenant_id uuid-str N/A Owner of the router. Only admin users can specify a tenant_id other than their own.
external_gateway_info dict containing a 'network_id' key-value pair Null External network that this router connects to for gateway services (for example, NAT).
Floating IP
Attribute name Type Default Value Description
id uuid-str generated UUID for the floating IP.
floating_ip_address string (IP address) allocated by Networking The external network IP address available to be mapped to an internal IP address.
floating_network_id uuid-str N/A The network indicating the set of subnets from which the floating IP should be allocated.
router_id uuid-str N/A Read-only value indicating the router that connects the external network to the associated internal port, if a port is associated.
port_id uuid-str Null Indicates the internal Networking port associated with the external floating IP.
fixed_ip_address string (IP address) Null Indicates the IP address on the internal port that is mapped to by the floating IP (since a Networking port might have more than one IP address).
tenant_id uuid-str N/A Owner of the floating IP. Only admin users can specify a tenant_id other than their own.
Basic L3 operations External networks are visible to all users. However, the default policy settings enable only administrative users to create, update, and delete external networks. This table shows example neutron commands that enable you to complete basic L3 operations:
Basic L3 operations
Operation Command
Creates external networks. # neutron net-create public --router:external=True # neutron subnet-create public 172.16.1.0/24
Lists external networks. # neutron net-list -- --router:external=True
Creates an internal-only router that connects to multiple L2 networks privately. # neutron net-create net1 # neutron subnet-create net1 10.0.0.0/24 # neutron net-create net2 # neutron subnet-create net2 10.0.1.0/24 # neutron router-create router1 # neutron router-interface-add router1 <subnet1-uuid> # neutron router-interface-add router1 <subnet2-uuid>
Connects a router to an external network, which enables that router to act as a NAT gateway for external connectivity. # neutron router-gateway-set router1 <ext-net-id> The router obtains an interface with the gateway_ip address of the subnet, and this interface is attached to a port on the L2 Networking network associated with the subnet. The router also gets a gateway interface to the specified external network. This provides SNAT connectivity to the external network as well as support for floating IPs allocated on that external network. Commonly, an external network maps to a provider network.
Lists routers. # neutron router-list
Shows information for a specified router. # neutron router-show <router_id>
Shows all internal interfaces for a router. # neutron router-port-list <router_id>
Identifies the port-id that represents the VM NIC to which the floating IP should map. # neutron port-list -c id -c fixed_ips -- --device_id=<instance_id> This port must be on a Networking subnet that is attached to a router uplinked to the external network used to create the floating IP. Conceptually, this is because the router must be able to perform the Destination NAT (DNAT) rewriting of packets from the floating IP address (chosen from a subnet on the external network) to the internal fixed IP (chosen from a private subnet that is behind the router).
Creates a floating IP address and associates it with a port. # neutron floatingip-create <ext-net-id> # neutron floatingip-associate <floatingip-id> <internal VM port-id>
Creates a floating IP address and associates it with a port, in a single step. # neutron floatingip-create --port_id <internal VM port-id> <ext-net-id>
Lists floating IPs. # neutron floatingip-list
Finds floating IP for a specified VM port. # neutron floatingip-list -- --port_id=ZZZ
Disassociates a floating IP address. # neutron floatingip-disassociate <floatingip-id>
Deletes the floating IP address. # neutron floatingip-delete <floatingip-id>
Clears the gateway. # neutron router-gateway-clear router1
Removes the interfaces from the router. # neutron router-interface-delete router1 <subnet-id>
Deletes the router. # neutron router-delete router1
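Putting these operations together, a typical end-to-end workflow for exposing a VM on a private network to the Internet might look like the following sketch (all names and IDs are illustrative placeholders):
# neutron router-create router1
# neutron router-interface-add router1 <private-subnet-id>
# neutron router-gateway-set router1 <ext-net-id>
# neutron port-list -- --device_id=<instance-id>
# neutron floatingip-create --port_id <vm-port-id> <ext-net-id>
At this point, traffic sent to the allocated floating IP address is DNATed to the VM's fixed IP, and the VM's outbound traffic is SNATed through the router's gateway interface.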
Security groups Security groups and security group rules allow administrators and tenants to specify the type of traffic and direction (ingress/egress) that is allowed to pass through a port. A security group is a container for security group rules. When a port is created in Networking, it is associated with a security group. If a security group is not specified, the port is associated with a 'default' security group. By default, this group drops all ingress traffic and allows all egress traffic. Rules can be added to this group to change the behavior. To use the Compute security group APIs or use Compute to orchestrate the creation of ports for instances on specific security groups, you must complete additional configuration. You must configure the /etc/nova/nova.conf file and set the security_group_api=neutron option on every node that runs nova-compute and nova-api. After you make this change, restart nova-api and nova-compute to pick up this change. Then, you can use both the Compute and OpenStack Network security group APIs at the same time. To use the Compute security group API with Networking, the Networking plug-in must implement the security group API. The following plug-ins currently implement this: ML2, Nicira NVP, Open vSwitch, Linux Bridge, NEC, and Ryu. You must also configure the correct firewall driver in the securitygroup section of the plug-in/agent configuration file. Some plug-ins and agents, such as the Linux Bridge agent and Open vSwitch agent, use the no-operation driver as the default, which results in non-working security groups. When using the security group API through Compute, security groups are applied to all ports on an instance. The reason for this is that the Compute security group APIs are instance-based, whereas the Networking APIs are port-based.
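For example, the relevant settings might resemble the following sketch (the iptables hybrid driver shown is commonly used with the Open vSwitch agent; treat the exact driver paths as assumptions to verify against your release). In /etc/nova/nova.conf:
[DEFAULT]
security_group_api = neutron
firewall_driver = nova.virt.firewall.NoopFirewallDriver
And in the [securitygroup] section of the plug-in/agent configuration file:
[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
Setting the Compute firewall driver to the no-operation driver avoids duplicating firewall functionality in Compute once Networking enforces security groups.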
Security group API abstractions
Security group attributes
Attribute name Type Default Value Description
id uuid-str generated UUID for the security group.
name String None Human-readable name for the security group. Might not be unique. Cannot be named default as that is automatically created for a tenant.
description String None Human-readable description of a security group.
tenant_id uuid-str N/A Owner of the security group. Only admin users can specify a tenant_id other than their own.
Security group rules
Attribute name Type Default Value Description
id uuid-str generated UUID for the security group rule.
security_group_id uuid-str or Integer allocated by Networking The security group to associate the rule with.
direction String N/A The direction in which traffic is allowed (ingress or egress) from a VM.
protocol String None IP Protocol (icmp, tcp, udp, and so on).
port_range_min Integer None Port at the start of the range.
port_range_max Integer None Port at the end of the range.
ethertype String None ethertype in L2 packet (IPv4, IPv6, and so on)
remote_ip_prefix string (IP cidr) None CIDR for address range
remote_group_id uuid-str or Integer allocated by Networking or Compute Source security group to apply to the rule.
tenant_id uuid-str N/A Owner of the security group rule. Only admin users can specify a tenant_id other than their own.
Basic security group operations This table shows example neutron commands that enable you to complete basic security group operations:
Basic security group operations
Operation Command
Creates a security group for our web servers. # neutron security-group-create webservers --description "security group for webservers"
Lists security groups. # neutron security-group-list
Creates a security group rule to allow port 80 ingress. # neutron security-group-rule-create --direction ingress --protocol tcp --port_range_min 80 --port_range_max 80 <security_group_uuid>
Lists security group rules. # neutron security-group-rule-list
Deletes a security group rule. # neutron security-group-rule-delete <security_group_rule_uuid>
Deletes a security group. # neutron security-group-delete <security_group_uuid>
Creates a port and associates two security groups. # neutron port-create --security-group <security_group_id1> --security-group <security_group_id2> <network_id>
Removes security groups from a port. # neutron port-update --no-security-groups <port_id>
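As a further illustration, permitting SSH only from a management network combines the rule attributes described above, as in this sketch (the CIDR is an illustrative placeholder, and the --remote-ip-prefix option spelling should be verified against your client version):
# neutron security-group-rule-create --direction ingress --protocol tcp --port_range_min 22 --port_range_max 22 --remote-ip-prefix 192.0.2.0/24 <security_group_uuid>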
Basic Load-Balancer-as-a-Service operations The Load-Balancer-as-a-Service (LBaaS) API provisions and configures load balancers. The Havana release offers a reference implementation that is based on the HAProxy software load balancer. This table shows example neutron commands that enable you to complete basic LBaaS operations:
Basic LBaaS operations
Operation Command
Creates a load balancer pool by using a specific provider. --provider is an optional argument. If it is not used, the pool is created with the default provider for the LBaaS service. You should configure the default provider in the [service_providers] section of the neutron.conf file (see the sketch after this table). If no default provider is specified for LBaaS, the --provider option is required for pool creation. # neutron lb-pool-create --lb-method ROUND_ROBIN --name mypool --protocol HTTP --subnet-id <subnet-uuid> --provider <provider_name>
Associates two web servers with pool. # neutron lb-member-create --address <webserver one IP> --protocol-port 80 mypool # neutron lb-member-create --address <webserver two IP> --protocol-port 80 mypool
Creates a health monitor that checks to make sure our instances are still running on the specified protocol-port. # neutron lb-healthmonitor-create --delay 3 --type HTTP --max-retries 3 --timeout 3
Associates a health monitor with pool. # neutron lb-healthmonitor-associate <healthmonitor-uuid> mypool
Creates a virtual IP (VIP) address that, when accessed through the load balancer, directs the requests to one of the pool members. # neutron lb-vip-create --name myvip --protocol-port 80 --protocol HTTP --subnet-id <subnet-uuid> mypool
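For reference, a default LBaaS provider entry in the [service_providers] section of neutron.conf might look like the following sketch (the HAProxy driver path shown matches the Havana reference implementation, but verify it against your release):
[service_providers]
service_provider = LOADBALANCER:Haproxy:neutron.services.loadbalancer.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default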
Firewall-as-a-Service The Firewall-as-a-Service (FWaaS) API is an experimental API that enables early adopters and vendors to test their networking implementations. The FWaaS is backed by a reference implementation that works with the Networking OVS plug-in and provides perimeter firewall functionality. It leverages the footprint of the Networking OVS L3 agent and an iptables driver to apply the firewall rules contained in a particular firewall policy. This reference implementation supports one firewall policy and consequently one logical firewall instance for each tenant. This is not a constraint of the resource model, but of the current reference implementation. The firewall is present on a Networking virtual router. If a tenant has multiple routers, the firewall is present on all the routers. If a tenant does not have any router, the firewall is in the PENDING_CREATE state until a router is created and the first interface is added to the router. At that point the firewall policy is immediately applied to the router and the firewall changes to the ACTIVE state. Because this is the first iteration of this implementation, it should probably not be run in production environments without adequate testing.
Firewall-as-a-Service API abstractions
Firewall rules
Attribute name Type Default Value Description
id uuid-str generated UUID for the firewall rule.
tenant_id uuid-str N/A Owner of the firewall rule. Only admin users can specify a tenant_id other than their own.
name String None Human readable name for the firewall rule (255 characters limit).
description String None Human readable description for the firewall rule (1024 characters limit).
firewall_policy_id uuid-str or None allocated by Networking This is a read-only attribute that gets populated with the uuid of the firewall policy when this firewall rule is associated with a firewall policy. A firewall rule can be associated with only one firewall policy at a time. However, the association can be changed to a different firewall policy.
shared Boolean False When set to True makes this firewall rule visible to tenants other than its owner, and it can be used in firewall policies not owned by its tenant.
protocol String None IP Protocol (icmp, tcp, udp, None).
ip_version Integer or String 4 IP Version (4, 6).
source_ip_address String (IP address or CIDR) None Source IP address or CIDR.
destination_ip_address String (IP address or CIDR) None Destination IP address or CIDR.
source_port Integer or String (either as a single port number or in the format of a ':' separated range) None Source port number or a range.
destination_port Integer or String (either as a single port number or in the format of a ':' separated range) None Destination port number or a range.
position Integer None This is a read-only attribute that gets assigned to this rule when the rule is associated with a firewall policy. It indicates the position of this rule in that firewall policy.
action String deny Action to be performed on the traffic matching the rule (allow, deny).
enabled Boolean True When set to False, disables this rule in the firewall policy. Facilitates selectively turning off rules without having to disassociate the rule from the firewall policy.
Firewall policies
Attribute name Type Default Value Description
id uuid-str generated UUID for the firewall policy.
tenant_id uuid-str N/A Owner of the firewall policy. Only admin users can specify a tenant_id other than their own.
name String None Human readable name for the firewall policy (255 characters limit).
description String None Human readable description for the firewall policy (1024 characters limit).
shared Boolean False When set to True makes this firewall policy visible to tenants other than its owner, and can be used to associate with firewalls not owned by its tenant.
firewall_rules List of uuid-str or None None This is an ordered list of firewall rule uuids. The firewall applies the rules in the order in which they appear in this list.
audited Boolean False When set to True by the policy owner indicates that the firewall policy has been audited. This attribute is meant to aid in the firewall policy audit workflows. Each time the firewall policy or the associated firewall rules are changed, this attribute is set to False and must be explicitly set to True through an update operation.
Firewalls
Attribute name Type Default Value Description
id uuid-str generated UUID for the firewall.
tenant_id uuid-str N/A Owner of the firewall. Only admin users can specify a tenant_id other than their own.
name String None Human readable name for the firewall (255 characters limit).
description String None Human readable description for the firewall (1024 characters limit).
admin_state_up Boolean True The administrative state of the firewall. If False (down), the firewall does not forward any packets.
status String N/A Indicates whether the firewall is currently operational. Possible values include: ACTIVE DOWN PENDING_CREATE PENDING_UPDATE PENDING_DELETE ERROR
firewall_policy_id uuid-str or None None The firewall policy uuid that this firewall is associated with. This firewall implements the rules contained in the firewall policy represented by this uuid.
Basic Firewall-as-a-Service operations
Create a firewall rule:
# neutron firewall-rule-create --protocol <tcp|udp|icmp|any> --destination-port <port-range> --action <allow|deny>
The CLI requires that a protocol value be provided. If the rule is protocol agnostic, the 'any' value can be used. In addition to the protocol attribute, other attributes can be specified in the firewall rule. See the previous section for the supported attributes.
Create a firewall policy:
# neutron firewall-policy-create --firewall-rules "<firewall-rule ids or names separated by space>" myfirewallpolicy
The order of the rules specified above is important. A firewall policy can be created without any rules, and rules can be added later either through the update operation (when adding multiple rules) or through the insert-rule operation (when adding a single rule). Check the CLI help for more details on these operations. The reference implementation always adds a default deny-all rule at the end of each policy. This implies that if a firewall policy is created without any rules and is associated with a firewall, that firewall blocks all traffic.
Create a firewall:
# neutron firewall-create <firewall-policy-uuid>
The FWaaS features and the above workflow can also be accessed from the Horizon user interface. This support is disabled by default, but can be enabled by configuring #HORIZON_DIR/openstack_dashboard/local/local_settings.py and setting:
'enable_firewall' = True
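For instance, inserting a single rule into an existing policy at a specific position might use the insert-rule operation, as in this sketch (the rule and policy identifiers are placeholders; check neutron firewall-policy-insert-rule --help for the exact option spelling in your client):
# neutron firewall-policy-insert-rule --insert-before <existing-rule-uuid> myfirewallpolicy <new-rule-uuid>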
Allowed-address-pairs Allowed-address-pairs is an API extension that extends the port attribute. This extension allows one to specify arbitrary mac_address/ip_address(cidr) pairs that are allowed to pass through a port regardless of subnet. The main use case for this is to enable the use of protocols such as VRRP, which floats an IP address between two instances to enable fast data plane failover. The allowed-address-pairs extension is currently only supported by these plug-ins: ML2, Nicira NVP, and Open vSwitch.
Basic allowed address pairs operations
Create a port with a specific allowed-address-pairs entry:
# neutron port-create net1 --allowed-address-pairs type=dict list=true mac_address=<mac_address>,ip_address=<ip_cidr>
Update a port, adding allowed-address-pairs:
# neutron port-update <port-uuid> --allowed-address-pairs type=dict list=true mac_address=<mac_address>,ip_address=<ip_cidr>
Setting an allowed-address-pair that matches the mac_address and ip_address of a port is prevented, because it would have no effect: traffic matching the mac_address and ip_address is already allowed to pass through the port. If your plug-in implements the port-security extension, port-security-enabled must be set to True on the port in order to have allowed-address-pairs on a port. The reason for this is that if port-security-enabled is set to False, all traffic is already allowed to pass through the port, so allowed-address-pairs would have no effect.
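As a concrete illustration of the VRRP use case, the shared virtual IP can be allowed on the port of each instance that participates in the VRRP group, as in this sketch (the VIP address is an illustrative placeholder; when mac_address is omitted, the port's own MAC address is assumed):
# neutron port-update <first-instance-port-uuid> --allowed-address-pairs type=dict list=true ip_address=10.0.0.100
# neutron port-update <second-instance-port-uuid> --allowed-address-pairs type=dict list=true ip_address=10.0.0.100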
Plug-in specific extensions Each vendor can choose to implement additional API extensions to the core API. This section describes the extensions for each plug-in.
Nicira NVP extensions These sections explain Nicira NVP plug-in extensions.
Nicira NVP QoS extension The Nicira NVP QoS extension rate-limits network ports to guarantee a specific amount of bandwidth for each port. This extension, by default, is only accessible by a tenant with an admin role but is configurable through the policy.json file. To use this extension, create a queue and specify the min/max bandwidth rates (kbps) and optionally set the QoS Marking and DSCP value (if your network fabric uses these values to make forwarding decisions). Once created, you can associate a queue with a network. Then, when ports are created on that network, they are automatically created and associated with the specific queue size that was associated with the network. Because a single queue size for every port on a network might not be optimal, a scaling factor from the Nova flavor 'rxtx_factor' is passed in from Compute when creating the port to scale the queue. Lastly, if you want to set a specific baseline QoS policy for the amount of bandwidth a single port can use (unless a network queue is specified with the network a port is created on), you can create a default queue in Networking, which then causes ports to be created associated with a queue of that size times the rxtx scaling factor. Note that after a network or default queue is specified, queues are added to ports that are subsequently created but are not added to existing ports.
Nicira NVP QoS API abstractions
Nicira NVP QoS attributes
Attribute name Type Default Value Description
id uuid-str generated UUID for the QoS queue.
default Boolean False by default If True, ports are created with this queue size unless the network port is created or associated with a queue at port creation time.
name String None Name for QoS queue.
min Integer 0 Minimum Bandwidth Rate (kbps).
max Integer N/A Maximum Bandwidth Rate (kbps).
qos_marking String untrusted by default Whether QoS marking should be trusted or untrusted.
dscp Integer 0 DSCP Marking value.
tenant_id uuid-str N/A The owner of the QoS queue.
Basic Nicira NVP QoS operations This table shows example neutron commands that enable you to complete basic queue operations:
Basic Nicira NVP QoS operations
Operation Command
Creates QoS Queue (admin-only). # neutron queue-create --min 10 --max 1000 myqueue
Associates a queue with a network. # neutron net-create network --queue_id=<queue_id>
Creates a default system queue. # neutron queue-create --default True --min 10 --max 2000 default
Lists QoS queues. # neutron queue-list
Deletes a QoS queue. # neutron queue-delete <queue_id or name>
Nicira NVP provider networks extension Provider networks can be implemented in different ways by the underlying NVP platform. The FLAT and VLAN network types use bridged transport connectors. These network types enable the attachment of a large number of ports. To handle the increased scale, the NVP plug-in can back a single OpenStack network with a chain of NVP logical switches. You can specify the maximum number of ports on each logical switch in this chain with the max_lp_per_bridged_ls parameter, which has a default value of 5,000. The recommended value for this parameter varies with the NVP version running in the back-end, as shown in the following table.
Recommended values for max_lp_per_bridged_ls
NVP version Recommended Value
2.x 64
3.0.x 5,000
3.1.x 5,000
3.2.x 10,000
In addition to these network types, the NVP plug-in also supports a special l3_ext network type, which maps external networks to specific NVP gateway services as discussed in the next section.
Nicira NVP L3 extension NVP exposes its L3 capabilities through gateway services which are usually configured out of band from OpenStack. To use NVP with L3 capabilities, first create a L3 gateway service in the NVP Manager. Next, in /etc/neutron/plugins/nicira/nvp.ini set default_l3_gw_service_uuid to this value. By default, routers are mapped to this gateway service.
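For example, the corresponding entry in /etc/neutron/plugins/nicira/nvp.ini might look like the following sketch (the UUID is a placeholder, and the section shown is an assumption; verify where this option belongs in your release):
[DEFAULT]
default_l3_gw_service_uuid = <L3-Gateway-Service-UUID>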
Nicira NVP L3 extension operations
Create an external network and map it to a specific NVP gateway service:
# neutron net-create public --router:external=True --provider:network_type l3_ext --provider:physical_network <L3-Gateway-Service-UUID>
Terminate traffic on a specific VLAN from an NVP gateway service:
# neutron net-create public --router:external=True --provider:network_type l3_ext --provider:physical_network <L3-Gateway-Service-UUID> --provider:segmentation_id <VLAN_ID>
Operational status synchronization in the Nicira NVP plug-in Starting with the Havana release, the Nicira NVP plug-in provides an asynchronous mechanism for retrieving the operational status of neutron resources from the NVP back-end; this applies to network, port, and router resources. The back-end is polled periodically, the status for every resource is retrieved, and the status in the Networking database is updated only for the resources for which a status change occurred. Because operational status is retrieved asynchronously, performance for GET operations is consistently improved. Data to retrieve from the back-end are divided into chunks to avoid expensive API requests; this is achieved by leveraging the response paging capabilities of the NVP API. The minimum chunk size can be specified using a configuration option; the actual chunk size is then determined dynamically according to the total number of resources to retrieve, the interval between two synchronization task runs, and the minimum delay between two subsequent requests to the NVP back-end. Operational status synchronization can be tuned or disabled using the configuration options reported in this table; it is, however, worth noting that the default values work well in most cases.
Configuration options for tuning operational status synchronization in the NVP plug-in
Option name Group Default value Type and constraints Notes
state_sync_interval nvp_sync 120 seconds Integer; no constraint. Interval in seconds between two runs of the synchronization task. If the synchronization task takes more than state_sync_interval seconds to execute, a new instance of the task is started as soon as the other is completed. Setting the value for this option to 0 disables the synchronization task.
max_random_sync_delay nvp_sync 0 seconds Integer. Must not exceed min_sync_req_delay. When different from zero, a random delay between 0 and max_random_sync_delay is added before processing the next chunk.
min_sync_req_delay nvp_sync 10 seconds Integer. Must not exceed state_sync_interval. The value of this option can be tuned according to the observed load on the NVP controllers. Lower values result in faster synchronization, but might increase the load on the controller cluster.
min_chunk_size nvp_sync 500 resources Integer; no constraint. Minimum number of resources to retrieve from the back-end for each synchronization chunk. The expected number of synchronization chunks is given by the ratio between state_sync_interval and min_sync_req_delay. The size of a chunk might increase if the total number of resources is such that more than min_chunk_size resources must be fetched in one chunk with the current number of chunks.
always_read_status nvp_sync False Boolean; no constraint. When this option is enabled, the operational status is always retrieved from the NVP back-end at every GET request. In this case it is advisable to disable the synchronization task.
When running multiple OpenStack Networking server instances, the status synchronization task should not run on every node; doing so sends unnecessary traffic to the NVP back-end and performs unnecessary DB operations. Set the state_sync_interval configuration option to a non-zero value exclusively on a node designated for back-end status synchronization. Explicitly specifying the status attribute in Neutron API requests (for example, GET /v2.0/networks/<net-id>?fields=status&fields=name) always triggers an explicit query to the NVP back-end, even when asynchronous state synchronization is enabled.
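For example, a tuned synchronization configuration on the designated node might look like this sketch (the values are illustrative; the option names and group match the table above):
[nvp_sync]
state_sync_interval = 120
max_random_sync_delay = 5
min_sync_req_delay = 10
min_chunk_size = 500
always_read_status = False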
Big Switch plug-in extensions This section explains the Big Switch Neutron plug-in-specific extension.
Big Switch router rules Big Switch allows router rules to be added to each tenant router. These rules can be used to enforce routing policies such as denying traffic between subnets or traffic to external networks. By enforcing these at the router level, network segmentation policies can be enforced across many VMs that have differing security groups.
Router rule attributes Each tenant router has a set of router rules associated with it. Each router rule has the attributes in this table. Router rules and their attributes can be set using the neutron router-update command, through the Horizon interface or the Neutron API.
Big Switch Router rule attributes
Attribute name Required Input Type Description
source Yes A valid CIDR or one of the keywords 'any' or 'external' The network that a packet's source IP must match for the rule to be applied
destination Yes A valid CIDR or one of the keywords 'any' or 'external' The network that a packet's destination IP must match for the rule to be applied
action Yes 'permit' or 'deny' Determines whether or not the matched packets will be allowed to cross the router
nexthop No A plus-separated (+) list of next-hop IP addresses (e.g. '1.1.1.1+1.1.1.2') Overrides the default virtual router used to handle traffic for packets that match the rule
Order of rule processing The order of router rules has no effect. Overlapping rules are evaluated using longest prefix matching on the source and destination fields. The source field is matched first, so it always takes precedence over the destination field. In other words, longest prefix matching is used on the destination field only if there are multiple matching rules with the same source.
Big Switch router rules operations Router rules are configured with a router update operation in OpenStack Networking. The update overrides any previous rules, so all rules must be provided at the same time.
Update a router with rules to permit traffic by default, but block traffic from external networks to the 10.10.10.0/24 subnet:
# neutron router-update Router-UUID --router_rules type=dict list=true source=any,destination=any,action=permit source=external,destination=10.10.10.0/24,action=deny
Specify alternate next-hop addresses for a specific subnet:
# neutron router-update Router-UUID --router_rules type=dict list=true source=any,destination=any,action=permit source=10.10.10.0/24,destination=any,action=permit,nexthops=10.10.10.254+10.10.10.253
Block traffic between two subnets while allowing everything else:
# neutron router-update Router-UUID --router_rules type=dict list=true source=any,destination=any,action=permit source=10.10.10.0/24,destination=10.20.20.0/24,action=deny
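Because each update replaces the entire rule set, restoring the default behavior is simply a matter of submitting only the permissive rule again, as in this sketch:
# neutron router-update Router-UUID --router_rules type=dict list=true source=any,destination=any,action=permit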
L3 metering The L3 metering API extension enables administrators to configure IP ranges and assign a specified label to them so that traffic that goes through a virtual router can be measured. The L3 metering extension is decoupled from the technology that implements the measurement. Two abstractions have been added: metering labels and metering rules. A metering label can contain metering rules. Because a metering label is associated with a tenant, all virtual routers in this tenant are associated with this label.
L3 metering API abstractions
Label
Attribute name Type Default Value Description
id uuid-str generated UUID for the metering label.
name String None Human-readable name for the metering label. Might not be unique.
description String None The optional description for the metering label.
tenant_id uuid-str N/A Owner of the metering label.
Rules
Attribute name Type Default Value Description
id uuid-str generated UUID for the metering rule.
direction String (Either ingress or egress) ingress The direction in which the metering rule is applied, either ingress or egress.
metering_label_id uuid-str N/A The metering label ID to associate with this metering rule.
excluded Boolean False Specifies whether the remote_ip_prefix will be excluded from the traffic counters of the metering label, for example, to not count the traffic of a specific IP address within a range.
remote_ip_prefix String (CIDR) N/A Indicates remote IP prefix to be associated with this metering rule.
Basic L3 metering operations Only administrators can manage the L3 metering labels and rules. This table shows example neutron commands that enable you to complete basic L3 metering operations:
Basic L3 metering operations
Operation Command
Creates a metering label. $ neutron meter-label-create label1 --description "description of label1"
Lists metering labels. $ neutron meter-label-list
Shows information for a specified label. $ neutron meter-label-show label-uuid $ neutron meter-label-show label1
Deletes a metering label. $ neutron meter-label-delete label-uuid $ neutron meter-label-delete label1
Creates a metering rule. $ neutron meter-label-rule-create label-uuid cidr --direction direction --excluded $ neutron meter-label-rule-create label1 10.0.0.0/24 --direction ingress $ neutron meter-label-rule-create label1 20.0.0.0/24 --excluded
Lists all metering label rules. $ neutron meter-label-rule-list
Shows information for a specified label rule. $ neutron meter-label-rule-show rule-uuid
Deletes a metering label rule. $ neutron meter-label-rule-delete rule-uuid