Introduction to Networking

The Networking service, code-named Neutron, provides an API that lets you define network connectivity and addressing in the cloud. The Networking service enables operators to leverage different networking technologies to power their cloud networking. The Networking service also provides an API to configure and manage a variety of network services ranging from L3 forwarding and NAT to load balancing, edge firewalls, and IPsec VPN. For a detailed description of the Networking API abstractions and their attributes, see the OpenStack Networking API v2.0 Reference.
Networking API

Networking is a virtual network service that provides a powerful API to define the network connectivity and IP addressing that devices from other services, such as Compute, use. The Compute API has a virtual server abstraction to describe computing resources. Similarly, the Networking API has virtual network, subnet, and port abstractions to describe networking resources.
Networking resources
Network: An isolated L2 segment, analogous to a VLAN in the physical networking world.

Subnet: A block of IPv4 or IPv6 addresses and the associated configuration state.

Port: A connection point for attaching a single device, such as the NIC of a virtual server, to a virtual network. A port also describes the associated network configuration, such as the MAC and IP addresses to be used on that port.
You can configure rich network topologies by creating and configuring networks and subnets, and then instructing other OpenStack services like Compute to attach virtual devices to ports on these networks. In particular, Networking supports each tenant having multiple private networks and allows tenants to choose their own IP addressing scheme, even if those IP addresses overlap with those that other tenants use.

The Networking service:

- Enables advanced cloud networking use cases, such as building multi-tiered web applications and enabling migration of applications to the cloud without changing IP addresses.
- Offers flexibility for the cloud administrator to customize network offerings.
- Enables developers to extend the Networking API. Over time, the extended functionality becomes part of the core Networking API.
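As a quick illustration of these abstractions, the following commands create a private network and subnet, boot a Compute instance whose virtual NIC is plugged into a port on that network, and then list the ports. The names and the 10.0.0.0/24 range are arbitrary examples, and <private-net-uuid> stands for the UUID that net-create reports:

    $ neutron net-create private-net
    $ neutron subnet-create private-net 10.0.0.0/24 --name private-subnet
    $ nova boot --image cirros --flavor m1.tiny --nic net-id=<private-net-uuid> test-vm
    $ neutron port-list

The first two commands create the network and subnet resources, the nova boot command asks Networking to create a port on that network for the instance, and port-list shows the port with the MAC and fixed IP address allocated to it.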
Plug-in architecture

The original Compute network implementation assumed a basic model of isolation through Linux VLANs and IP tables. Networking introduces support for vendor plug-ins, which offer a custom back-end implementation of the Networking API. A plug-in can use a variety of technologies to implement the logical API requests. Some Networking plug-ins might use basic Linux VLANs and IP tables, while others might use more advanced technologies, such as L2-in-L3 tunneling or OpenFlow, to provide similar benefits.
Available networking plug-ins
Plug-in Documentation
Big Switch Plug-in (Floodlight REST Proxy) This guide and http://www.openflowhub.org/display/floodlightcontroller/Neutron+REST+Proxy+Plugin
Brocade Plug-in This guide
Cisco http://wiki.openstack.org/cisco-neutron
Cloudbase Hyper-V Plug-in http://www.cloudbase.it/quantum-hyper-v-plugin/
Linux Bridge Plug-in http://wiki.openstack.org/Neutron-Linux-Bridge-Plugin
Mellanox Plug-in https://wiki.openstack.org/wiki/Mellanox-Neutron/
Midonet Plug-in http://www.midokura.com/
ML2 (Modular Layer 2) Plug-in https://wiki.openstack.org/wiki/Neutron/ML2
NEC OpenFlow Plug-in http://wiki.openstack.org/Quantum-NEC-OpenFlow-Plugin
Open vSwitch Plug-in This guide.
PLUMgrid This guide and https://wiki.openstack.org/wiki/PLUMgrid-Neutron
Ryu Plug-in This guide and https://github.com/osrg/ryu/wiki/OpenStack
VMware NSX Plug-in This guide and NSX Product Overview, NSX Product Support
Plug-ins can have different properties for hardware requirements, features, performance, scale, or operator tools. Because Networking supports a large number of plug-ins, the cloud administrator can weigh options to decide on the right networking technology for the deployment.

In the Havana release, OpenStack Networking introduces the Modular Layer 2 (ML2) plug-in, which enables the use of multiple concurrent mechanism drivers. This capability aligns with the complex requirements typically found in large heterogeneous environments. ML2 currently works with the existing Open vSwitch, Linux Bridge, and Hyper-V L2 agents. The ML2 framework simplifies the addition of support for new L2 technologies and reduces the effort that is required to add and maintain them compared to earlier large plug-ins.

Plug-in deprecation notice: The Open vSwitch and Linux Bridge plug-ins are deprecated in the Havana release and will be removed in the Icehouse release. The features in these plug-ins are now part of the ML2 plug-in in the form of mechanism drivers.
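As a minimal sketch of an ML2 deployment that uses GRE tenant networks with the Open vSwitch and Linux Bridge mechanism drivers, the core plug-in and ML2 options might look like the following; the values shown are illustrative, not required.

In /etc/neutron/neutron.conf:

    core_plugin = neutron.plugins.ml2.plugin.Ml2Plugin

In /etc/neutron/plugins/ml2/ml2_conf.ini:

    [ml2]
    type_drivers = flat,vlan,gre
    tenant_network_types = gre
    mechanism_drivers = openvswitch,linuxbridge

    [ml2_type_gre]
    # GRE tunnel IDs available for tenant network allocation
    tunnel_id_ranges = 1:1000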
Not all Networking plug-ins are compatible with all possible Compute drivers:

Plug-in compatibility with Compute drivers
Plug-in                    Compatible Compute drivers
Big Switch / Floodlight    Libvirt (KVM/QEMU)
Brocade                    Libvirt (KVM/QEMU)
Cisco                      Libvirt (KVM/QEMU)
Cloudbase Hyper-V          Hyper-V
Linux Bridge               Libvirt (KVM/QEMU)
Mellanox                   Libvirt (KVM/QEMU)
Midonet                    Libvirt (KVM/QEMU)
ML2                        Libvirt (KVM/QEMU), Hyper-V
NEC OpenFlow               Libvirt (KVM/QEMU)
Open vSwitch               Libvirt (KVM/QEMU)
PLUMgrid                   Libvirt (KVM/QEMU), VMware
Ryu                        Libvirt (KVM/QEMU)
VMware NSX                 Libvirt (KVM/QEMU), XenServer, VMware

None of the listed plug-ins currently supports the Bare-metal driver.
Plug-in configurations

For configuration options, see Networking configuration options in the Configuration Reference. These sections explain how to configure specific plug-ins.
Configure Big Switch (Floodlight REST Proxy) plug-in

To use the REST Proxy plug-in with OpenStack Networking:

1. Edit the /etc/neutron/neutron.conf file and add this line:

    core_plugin = neutron.plugins.bigswitch.plugin.NeutronRestProxyV2

2. Edit the plug-in configuration file, /etc/neutron/plugins/bigswitch/restproxy.ini, and specify a comma-separated list of controller_ip:port pairs:

    server = <controller-ip>:<port>

   For database configuration, see Install Networking Services in the Installation Guide in the OpenStack Documentation index. (The link defaults to the Ubuntu version.)

3. Restart neutron-server to apply the new settings:

    $ sudo service neutron-server restart
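For example, a deployment with two Floodlight controllers listening on port 80 might use an entry like the following; the addresses are illustrative only:

    server = 192.168.10.10:80,192.168.10.11:80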
Configure Brocade plug-in

To use the Brocade plug-in with OpenStack Networking:

1. Install the Brocade-modified Python netconf client (ncclient) library, which is available at https://github.com/brocade/ncclient:

    $ git clone https://www.github.com/brocade/ncclient
    $ cd ncclient; sudo python ./setup.py install

2. Edit the /etc/neutron/neutron.conf file and set the following option:

    core_plugin = neutron.plugins.brocade.NeutronPlugin.BrocadePluginV2

3. Edit the /etc/neutron/plugins/brocade/brocade.ini configuration file for the Brocade plug-in and specify the admin user name, password, and IP address of the Brocade switch:

    [SWITCH]
    username = admin
    password = password
    address = switch mgmt ip address
    ostype = NOS

   For database configuration, see Install Networking Services in any of the Installation Guides in the OpenStack Documentation index. (The link defaults to the Ubuntu version.)

4. Restart the neutron-server service to apply the new settings:

    # service neutron-server restart
Configure OVS plug-in

If you use the Open vSwitch (OVS) plug-in in a deployment with multiple hosts, you must use either tunneling or VLANs to isolate traffic from multiple networks. Tunneling is easier to deploy because it does not require configuring VLANs on network switches. This procedure uses tunneling.

To configure OpenStack Networking to use the OVS plug-in:

1. Edit /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini to specify these values (for database configuration, see Install Networking Services in the Installation Guide):

    enable_tunneling=True
    tenant_network_type=gre
    tunnel_id_ranges=1:1000
    # only required for nodes running agents
    local_ip=<data-net-IP-address-of-node>

2. If you use the neutron DHCP agent, add this line to the /etc/neutron/dhcp_agent.ini file:

    dnsmasq_config_file=/etc/neutron/dnsmasq-neutron.conf

3. Create /etc/neutron/dnsmasq-neutron.conf, and add this value to lower the MTU size on instances and prevent packet fragmentation over the GRE tunnel:

    dhcp-option-force=26,1400

4. Restart neutron-server to apply the new settings:

    $ sudo service neutron-server restart
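Putting these options together, the tunneling portion of ovs_neutron_plugin.ini on a node might look like the following sketch. The 10.0.1.11 address is an illustrative data-network IP, and the section header should match the one already present in your packaged sample file (commonly [ovs] or [OVS]):

    [ovs]
    enable_tunneling = True
    tenant_network_type = gre
    tunnel_id_ranges = 1:1000
    # local_ip is only needed on nodes that run an agent
    local_ip = 10.0.1.11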
Configure NSX plug-in

While the instructions in this section refer to the VMware NSX platform, the platform was formerly known as Nicira NVP.

To configure OpenStack Networking to use the NSX plug-in:

1. Install the NSX plug-in, as follows:

    $ sudo apt-get install neutron-plugin-vmware

2. Edit /etc/neutron/neutron.conf and set:

    core_plugin = neutron.plugins.vmware.NsxPlugin

   Example neutron.conf file for NSX:

    core_plugin = neutron.plugins.vmware.NsxPlugin
    rabbit_host = 192.168.203.10
    allow_overlapping_ips = True

3. To configure the NSX controller cluster for the OpenStack Networking service, locate the [DEFAULT] section in the /etc/neutron/plugins/vmware/nsx.ini file and add the following entries (for database configuration, see Install Networking Services in the Installation Guide).

   To establish and configure the connection with the controller cluster, you must set some parameters, including NSX API endpoints, access credentials, and settings for HTTP redirects and retries in case of connection failures:

    nsx_user = <admin user name>
    nsx_password = <password for nsx_user>
    req_timeout = <timeout in seconds for NSX requests> # default 30 seconds
    http_timeout = <timeout in seconds for single HTTP request> # default 10 seconds
    retries = <number of HTTP request retries> # default 2
    redirects = <maximum allowed redirects for an HTTP request> # default 3
    nsx_controllers = <comma separated list of API endpoints>

   To ensure correct operation, the nsx_user user must have administrator credentials on the NSX platform.

   A controller API endpoint consists of the IP address and port for the controller; if you omit the port, port 443 is used. If multiple API endpoints are specified, it is up to the user to ensure that all these endpoints belong to the same controller cluster. The OpenStack Networking VMware NSX plug-in does not perform this check, and results might be unpredictable. When you specify multiple API endpoints, the plug-in load-balances requests on the various API endpoints.

4. Set the UUID of the NSX transport zone that should be used by default when a tenant creates a network. You can get this value from the Transport Zones page in the NSX Manager:

    default_tz_uuid = <uuid_of_the_transport_zone>
    default_l3_gw_service_uuid = <uuid_of_the_gateway_service>

5. Ubuntu packaging currently does not update the Neutron init script to point to the NSX configuration file. Instead, you must manually update /etc/default/neutron-server to add this line:

    NEUTRON_PLUGIN_CONFIG = /etc/neutron/plugins/vmware/nsx.ini

6. Restart neutron-server to apply the new settings:

    $ sudo service neutron-server restart

Example nsx.ini file:

    [DEFAULT]
    default_tz_uuid = d3afb164-b263-4aaa-a3e4-48e0e09bb33c
    default_l3_gw_service_uuid = 5c8622cc-240a-40a1-9693-e6a5fca4e3cf
    nsx_user = admin
    nsx_password = changeme
    nsx_controllers = 10.127.0.100,10.127.0.200:8888

To debug nsx.ini configuration issues, run this command from the host that runs neutron-server:

    # neutron-check-nsx-config <path/to/nsx.ini>

This command tests whether neutron-server can log into all of the NSX Controllers and the SQL server, and whether all UUID values are correct.
Load Balancer-as-a-Service and Firewall-as-a-Service

The NSX LBaaS and FWaaS services use the standard OpenStack API, with the exception of requiring routed-insertion extension support. The main differences between the NSX implementation and the community reference implementation of these services are:

- The NSX LBaaS and FWaaS plug-ins require the routed-insertion extension, which adds the router_id attribute to the VIP (virtual IP address) and firewall resources and binds these services to a logical router.
- The community reference implementation of LBaaS only supports a one-arm model, which restricts the VIP to be on the same subnet as the back-end servers. The NSX LBaaS plug-in only supports a two-arm model for north-south traffic, which means that you can create the VIP on only the external (physical) network.
- The community reference implementation of FWaaS applies firewall rules to all logical routers in a tenant, while the NSX FWaaS plug-in applies firewall rules only to one logical router, according to the router_id of the firewall entity.

To configure Load Balancer-as-a-Service and Firewall-as-a-Service with NSX:

1. Edit the /etc/neutron/neutron.conf file:

    core_plugin = neutron.plugins.vmware.NsxServicePlugin
    # Note: comment out service_plugins. LBaaS & FWaaS is supported by core_plugin NsxServicePlugin
    # service_plugins =

2. Edit the /etc/neutron/plugins/vmware/nsx.ini file.

   In addition to the original NSX configuration, the default_l3_gw_service_uuid is required for the NSX Advanced plug-in and you must add a vcns section:

    [DEFAULT]
    nsx_password = admin
    nsx_user = admin
    nsx_controllers = 10.37.1.137:443
    default_l3_gw_service_uuid = aae63e9b-2e4e-4efe-81a1-92cf32e308bf
    default_tz_uuid = 2702f27a-869a-49d1-8781-09331a0f6b9e

    [vcns]
    # VSM management URL
    manager_uri = https://10.24.106.219
    # VSM admin user name
    user = admin
    # VSM admin password
    password = default
    # UUID of a logical switch on NSX which has physical network connectivity (currently using bridge transport type)
    external_network = f2c023cf-76e2-4625-869b-d0dabcfcc638
    # ID of deployment_container on VSM. Optional, if not specified, a default global deployment container is used
    # deployment_container_id =
    # task_status_check_interval configures the status check interval for the vCNS asynchronous API. Default is 2000 msec.
    # task_status_check_interval =
Configure PLUMgrid plug-in

To use the PLUMgrid plug-in with OpenStack Networking:

1. Edit /etc/neutron/neutron.conf and set:

    core_plugin = neutron.plugins.plumgrid.plumgrid_plugin.plumgrid_plugin.NeutronPluginPLUMgridV2

2. Edit /etc/neutron/plugins/plumgrid/plumgrid.ini under the [PLUMgridDirector] section, and specify the IP address, port, admin user name, and password of the PLUMgrid Director:

    [PLUMgridDirector]
    director_server = "PLUMgrid-director-ip-address"
    director_server_port = "PLUMgrid-director-port"
    username = "PLUMgrid-director-admin-username"
    password = "PLUMgrid-director-admin-password"

   For database configuration, see Install Networking Services in the Installation Guide.

3. Restart neutron-server to apply the new settings:

    $ sudo service neutron-server restart
Configure Ryu plug-in

To use the Ryu plug-in with OpenStack Networking:

1. Install the Ryu plug-in, as follows:

    $ sudo apt-get install neutron-plugin-ryu

2. Edit /etc/neutron/neutron.conf and set:

    core_plugin = neutron.plugins.ryu.ryu_neutron_plugin.RyuNeutronPluginV2

3. Edit the /etc/neutron/plugins/ryu/ryu.ini file and update these options in the [ovs] section for the ryu-neutron-agent:

   - openflow_rest_api. Defines where Ryu is listening for its REST API. Substitute ip-address and port-no based on your Ryu setup.
   - ovsdb_interface. Enables Ryu to access the ovsdb-server. Substitute eth0 based on your setup. The IP address is derived from the interface name. If you want to change this value irrespective of the interface name, you can specify ovsdb_ip. If you use a non-default port for ovsdb-server, you can specify ovsdb_port.
   - tunnel_interface. Defines which IP address is used for tunneling. If you do not use tunneling, this value is ignored. The IP address is derived from the network interface name.

   For database configuration, see Install Networking Services in the Installation Guide.

   You can use the same configuration file for many compute nodes by using a network interface name with a different IP address:

    openflow_rest_api = <ip-address>:<port-no>
    ovsdb_interface = <eth0>
    tunnel_interface = <eth0>

4. Restart neutron-server to apply the new settings:

    $ sudo service neutron-server restart
Configure neutron agents

Plug-ins typically have requirements for particular software that must be run on each node that handles data packets. This includes any node that runs nova-compute and nodes that run dedicated OpenStack Networking service agents such as neutron-dhcp-agent, neutron-l3-agent, neutron-metering-agent, or neutron-lbaas-agent.

A data-forwarding node typically has a network interface with an IP address on the "management network" and another interface on the "data network".

This section shows you how to install and configure a subset of the available plug-ins, which might include the installation of switching software (for example, Open vSwitch) and the agents used to communicate with the neutron-server process running elsewhere in the data center.
Configure data-forwarding nodes
Node set up: OVS plug-in

This section also applies to the ML2 plug-in when Open vSwitch is used as a mechanism driver.

If you use the Open vSwitch plug-in, you must install Open vSwitch and the neutron-plugin-openvswitch-agent agent on each data-forwarding node.

Do not install the openvswitch-brcompat package because it prevents the security group functionality from operating correctly.

To set up each node for the OVS plug-in:

1. Install the OVS agent package. This action also installs the Open vSwitch software as a dependency:

    $ sudo apt-get install neutron-plugin-openvswitch-agent

2. On each node that runs the neutron-plugin-openvswitch-agent, complete these steps:

   - Replicate the ovs_neutron_plugin.ini file that you created on the node.
   - If you use tunneling, update the ovs_neutron_plugin.ini file for the node with the IP address that is configured on the data network for the node by using the local_ip value.

3. Restart Open vSwitch to properly load the kernel module:

    $ sudo service openvswitch-switch restart

4. Restart the agent:

    $ sudo service neutron-plugin-openvswitch-agent restart

5. All nodes that run neutron-plugin-openvswitch-agent must have an OVS br-int bridge. To create the bridge, run:

    $ sudo ovs-vsctl add-br br-int
Node set up: NSX plug-in

If you use the NSX plug-in, you must also install Open vSwitch on each data-forwarding node. However, you do not need to install an additional agent on each node.

It is critical that you run an Open vSwitch version that is compatible with the current version of the NSX Controller software. Do not use the Open vSwitch version that is installed by default on Ubuntu. Instead, use the Open vSwitch version that is provided on the VMware support portal for your NSX Controller version.

To set up each node for the NSX plug-in:

1. Ensure that each data-forwarding node has an IP address on the management network, and an IP address on the "data network" that is used for tunneling data traffic. For full details on configuring your forwarding node, see the NSX Administrator Guide.

2. Use the NSX Administrator Guide to add the node as a Hypervisor by using the NSX Manager GUI. Even if your forwarding node has no VMs and is only used for services agents like neutron-dhcp-agent or neutron-lbaas-agent, it should still be added to NSX as a Hypervisor.

3. After following the NSX Administrator Guide, use the page for this Hypervisor in the NSX Manager GUI to confirm that the node is properly connected to the NSX Controller Cluster and that the NSX Controller Cluster can see the br-int integration bridge.
Node set up: Ryu plug-in

If you use the Ryu plug-in, you must install both Open vSwitch and Ryu, in addition to the Ryu agent package.

To set up each node for the Ryu plug-in:

1. Install Ryu (there is currently no Ryu package for Ubuntu):

    $ sudo pip install ryu

2. Install the Ryu agent and Open vSwitch packages:

    $ sudo apt-get install neutron-plugin-ryu-agent openvswitch-switch python-openvswitch openvswitch-datapath-dkms

3. Replicate the ovs_ryu_plugin.ini and neutron.conf files created in the above step to all nodes that run neutron-plugin-ryu-agent.

4. Restart Open vSwitch to properly load the kernel module:

    $ sudo service openvswitch-switch restart

5. Restart the agent:

    $ sudo service neutron-plugin-ryu-agent restart

6. All nodes that run neutron-plugin-ryu-agent must also have an OVS bridge named br-int. To create the bridge, run:

    $ sudo ovs-vsctl add-br br-int
Configure DHCP agent

The DHCP service agent is compatible with all existing plug-ins and is required for all deployments where VMs should automatically receive IP addresses through DHCP.

To install and configure the DHCP agent:

1. Configure the host running the neutron-dhcp-agent as a data-forwarding node according to the requirements for your plug-in (see the plug-in specific node set up sections above).

2. Install the DHCP agent:

    $ sudo apt-get install neutron-dhcp-agent

3. Finally, update any options in the /etc/neutron/dhcp_agent.ini file that depend on the plug-in in use (see the following sub-sections).

If you reboot a node that runs the DHCP agent, you must run the neutron-ovs-cleanup command before the neutron-dhcp-agent service starts. On Red Hat-based systems, the neutron-ovs-cleanup service runs the neutron-ovs-cleanup command automatically. However, on Debian-based systems such as Ubuntu, you must manually run this command or write your own system script that runs on boot before the neutron-dhcp-agent service starts.
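For example, on an Ubuntu node that has just rebooted, run the cleanup manually before starting the agent:

    # neutron-ovs-cleanup
    # service neutron-dhcp-agent start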
DHCP agent setup: OVS plug-in

These DHCP agent options are required in the /etc/neutron/dhcp_agent.ini file for the OVS plug-in:

    [DEFAULT]
    enable_isolated_metadata = True
    use_namespaces = True
    interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
DHCP agent setup: NSX plug-in

These DHCP agent options are required in the /etc/neutron/dhcp_agent.ini file for the NSX plug-in:

    [DEFAULT]
    enable_metadata_network = True
    enable_isolated_metadata = True
    use_namespaces = True
    interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
DHCP agent setup: Ryu plug-in

These DHCP agent options are required in the /etc/neutron/dhcp_agent.ini file for the Ryu plug-in:

    [DEFAULT]
    use_namespaces = True
    interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
Configure L3 agent

The OpenStack Networking service has a widely used API extension that allows administrators and tenants to create routers to interconnect L2 networks, and floating IPs to make ports on private networks publicly accessible.

Many plug-ins rely on the L3 service agent to implement the L3 functionality. However, the following plug-ins already have built-in L3 capabilities:

- NSX plug-in
- Big Switch/Floodlight plug-in, which supports both the open source Floodlight controller and the proprietary Big Switch controller. Only the proprietary Big Switch controller implements L3 functionality. When you use Floodlight as your OpenFlow controller, L3 functionality is not available.
- PLUMgrid plug-in

Do not configure or use neutron-l3-agent if you use one of these plug-ins.

To install the L3 agent for all other plug-ins:

1. Install the neutron-l3-agent binary on the network node:

    $ sudo apt-get install neutron-l3-agent

2. To uplink the node that runs neutron-l3-agent to the external network, create a bridge named br-ex and attach the NIC for the external network to this bridge. For example, with Open vSwitch and NIC eth1 connected to the external network, run:

    $ sudo ovs-vsctl add-br br-ex
    $ sudo ovs-vsctl add-port br-ex eth1

   Do not manually configure an IP address on the NIC connected to the external network for the node running neutron-l3-agent. Rather, you must have a range of IP addresses from the external network that can be used by OpenStack Networking for routers that uplink to the external network. This range must be large enough to provide an IP address for each router in the deployment, as well as for each floating IP.

The neutron-l3-agent uses the Linux IP stack and iptables to perform L3 forwarding and NAT. To support multiple routers with potentially overlapping IP addresses, neutron-l3-agent defaults to using Linux network namespaces to provide isolated forwarding contexts. As a result, the IP addresses of routers are not visible simply by running the ip addr list or ifconfig command on the node. Similarly, you cannot directly ping fixed IPs. To do either of these things, you must run the command within a particular network namespace for the router. The namespace has the name "qrouter-<UUID of the router>". These example commands run in the router namespace with UUID 47af3868-0fa8-4447-85f6-1304de32153b:

    # ip netns exec qrouter-47af3868-0fa8-4447-85f6-1304de32153b ip addr list
    # ip netns exec qrouter-47af3868-0fa8-4447-85f6-1304de32153b ping <fixed-ip>

If you reboot a node that runs the L3 agent, you must run the neutron-ovs-cleanup command before the neutron-l3-agent service starts. On Red Hat-based systems, the neutron-ovs-cleanup service runs the neutron-ovs-cleanup command automatically. However, on Debian-based systems such as Ubuntu, you must manually run this command or write your own system script that runs on boot before the neutron-l3-agent service starts.
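For example, once the L3 agent is running, an administrator typically creates an external network whose allocation pool provides that range of addresses and uplinks a router to it. The names and the 203.0.113.0/24 addresses below are illustrative, and private-subnet stands for an existing tenant subnet:

    $ neutron net-create ext-net --router:external=True
    $ neutron subnet-create ext-net 203.0.113.0/24 --disable-dhcp \
      --allocation-pool start=203.0.113.101,end=203.0.113.200
    $ neutron router-create demo-router
    $ neutron router-gateway-set demo-router ext-net
    $ neutron router-interface-add demo-router private-subnet
    $ neutron floatingip-create ext-net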
Configure metering agent

Starting with the Havana release, the Neutron metering agent resides beside neutron-l3-agent.

To install the metering agent and configure the node:

1. Install the agent by running:

    $ sudo apt-get install neutron-plugin-metering-agent

2. If you use one of the following plug-ins, you need to configure the metering agent with these lines as well:

   - An OVS-based plug-in such as OVS, NSX, Ryu, NEC, or BigSwitch/Floodlight:

    interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver

   - A plug-in that uses LinuxBridge:

    interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver

3. To use the reference implementation, you must set:

    driver = neutron.services.metering.drivers.iptables.iptables_driver.IptablesMeteringDriver

4. Set this parameter in the neutron.conf file on the host that runs neutron-server:

    service_plugins = neutron.services.metering.metering_plugin.MeteringPlugin
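Once the metering agent and service plug-in are running, traffic is counted per metering label. For example, the following commands create a label and a rule that counts ingress traffic to an illustrative 10.0.0.0/24 subnet:

    $ neutron meter-label-create web-ingress --description "Ingress traffic to the web tier"
    $ neutron meter-label-rule-create web-ingress 10.0.0.0/24 --direction ingress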
Configure Load Balancing as a Service (LBaaS)

Configure Load Balancing as a Service (LBaaS) with the Open vSwitch or Linux Bridge plug-in. The Open vSwitch LBaaS driver is required when enabling LBaaS for OVS-based plug-ins, including BigSwitch, Floodlight, NEC, NSX, and Ryu.

1. Install the agent by running:

    $ sudo apt-get install neutron-lbaas-agent

2. Enable the HAProxy plug-in by using the service_provider parameter in the /usr/share/neutron/neutron-dist.conf file:

    service_provider = LOADBALANCER:Haproxy:neutron.services.loadbalancer.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default

3. Enable the load balancer plug-in in the /etc/neutron/neutron.conf file:

    service_plugins = neutron.services.loadbalancer.plugin.LoadBalancerPlugin

4. Enable the HAProxy load balancer in the /etc/neutron/lbaas_agent.ini file:

    device_driver = neutron.services.loadbalancer.drivers.haproxy.namespace_driver.HaproxyNSDriver

5. Select the required driver in the /etc/neutron/lbaas_agent.ini file.

   Enable the Open vSwitch LBaaS driver:

    interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver

   Or enable the Linux Bridge LBaaS driver:

    interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver

   Apply the new settings by restarting the neutron-server and neutron-lbaas-agent services.

6. Enable load balancing in the Project section of the dashboard user interface by changing the enable_lb option to True in the /etc/openstack-dashboard/local_settings file:

    OPENSTACK_NEUTRON_NETWORK = {
        'enable_lb': True,

   Apply the new settings by restarting the httpd service. You can now view the Load Balancer management options in the Project view of the dashboard.
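With the agent and drivers enabled, you can exercise LBaaS from the command line. This is a minimal sketch with illustrative addresses; replace <subnet-uuid> with the UUID of the subnet that hosts your back-end servers:

    $ neutron lb-pool-create --lb-method ROUND_ROBIN --name web-pool --protocol HTTP --subnet-id <subnet-uuid>
    $ neutron lb-member-create --address 10.0.0.3 --protocol-port 80 web-pool
    $ neutron lb-member-create --address 10.0.0.4 --protocol-port 80 web-pool
    $ neutron lb-vip-create --name web-vip --protocol-port 80 --protocol HTTP --subnet-id <subnet-uuid> web-pool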
Configure FWaaS agent

The Firewall-as-a-Service (FWaaS) agent is co-located with the Neutron L3 agent and does not require any additional packages apart from those required for the Neutron L3 agent. You can enable the FWaaS functionality by setting the configuration, as follows.

To configure the FWaaS service and agent:

1. Set this parameter in the neutron.conf file on the host that runs neutron-server:

    service_plugins = neutron.services.firewall.fwaas_plugin.FirewallPlugin

2. To use the reference implementation, you must also add a FWaaS driver configuration to the neutron.conf file on every node where the Neutron L3 agent is deployed:

    [fwaas]
    driver = neutron.services.firewall.drivers.linux.iptables_fwaas.IptablesFwaasDriver
    enabled = True
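With the service plug-in and driver enabled, a firewall is built from rules that are grouped into a policy and then applied to a firewall resource. For example (the rule, policy, and firewall names are arbitrary):

    $ neutron firewall-rule-create --name allow-http --protocol tcp --destination-port 80 --action allow
    $ neutron firewall-policy-create --firewall-rules "allow-http" web-policy
    $ neutron firewall-create web-policy --name web-firewall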