From 94a0f25c3736c8ea1d141554078fc94271ba8dfe Mon Sep 17 00:00:00 2001 From: Martin Lopes Date: Fri, 7 Feb 2014 18:49:52 +1000 Subject: [PATCH] Updates ML2 content in Networking intro Rewrote introductory ML2 text for clarity. Made special mention of the use case where ML2 will offer particular benefit. Updated the note to emphasize that the deprecated plug-ins have been ported to ML2. Change-Id: Ie9273c73b265d6afe9a3603348c59806c1a9561c Partial-Bug: #1217503 --- .../section_networking_introduction.xml | 2421 +++++++++-------- 1 file changed, 1217 insertions(+), 1204 deletions(-) diff --git a/doc/admin-guide-cloud/section_networking_introduction.xml b/doc/admin-guide-cloud/section_networking_introduction.xml index 2bcc0c3767..c0242802f7 100644 --- a/doc/admin-guide-cloud/section_networking_introduction.xml +++ b/doc/admin-guide-cloud/section_networking_introduction.xml @@ -1,660 +1,689 @@ -
- Introduction to Networking - The Networking service, code-named Neutron, provides an - API that lets you define network connectivity and addressing in - the cloud. The Networking service enables operators to - leverage different networking technologies to power their - cloud networking. The Networking service also provides an - API to configure and manage a variety of network services - ranging from L3 forwarding and NAT to load balancing, edge - firewalls, and IPSEC VPN. - For a detailed description of the Networking API - abstractions and their attributes, see the OpenStack Networking API v2.0 - Reference. -
- Networking API - Networking is a virtual network service that - provides a powerful API to define the network - connectivity and IP addressing used by devices from - other services, such as Compute. - The Compute API has a virtual server abstraction to - describe computing resources. Similarly, the - Networking API has virtual network, subnet, and port - abstractions to describe networking resources. - - - - - - - - - - - - - - - - - - - - - - - - -
Networking resources
ResourceDescription
NetworkAn isolated L2 segment, analogous to VLAN - in the physical networking world.
SubnetA block of v4 or v6 IP addresses and - associated configuration state.
PortA connection point for attaching a single - device, such as the NIC of a virtual - server, to a virtual network. Also - describes the associated network - configuration, such as the MAC and IP - addresses to be used on that port.
- You can configure rich network topologies by - creating and configuring networks and subnets, and - then instructing other OpenStack services like Compute - to attach virtual devices to ports on these - networks. - In particular, Networking supports each tenant - having multiple private networks, and allows tenants - to choose their own IP addressing scheme (even if - those IP addresses overlap with those used by other - tenants). The Networking service: - - - Enables advanced cloud networking use cases, - such as building multi-tiered web applications - and allowing applications to be migrated to - the cloud without changing IP - addresses. - - - Offers flexibility for the cloud - administrator to customize network - offerings. - - - Enables developers to extend the Networking - API. Over time, the extended functionality - becomes part of the core Networking - API. - - -
-
- Plug-in architecture - The original Compute network implementation assumed - a basic model of isolation through Linux VLANs and IP - tables. Networking introduces the concept of a - plug-in, which - is a back-end implementation of the Networking API. A - plug-in can use a variety of technologies to implement - the logical API requests. Some Networking plug-ins - might use basic Linux VLANs and IP tables, while - others might use more advanced technologies, such as - L2-in-L3 tunneling or OpenFlow, to provide similar - benefits. - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Available networking plug-ins
Plug-inDocumentation
Big Switch Plug-in - (Floodlight REST - Proxy)Documentation included in this guide and - http://www.openflowhub.org/display/floodlightcontroller/Neutron+REST+Proxy+Plugin -
Brocade - Plug-inDocumentation included in this guide
Ciscohttp://wiki.openstack.org/cisco-neutron
Cloudbase Hyper-V - Plug-inhttp://www.cloudbase.it/quantum-hyper-v-plugin/
Linux Bridge - Plug-inhttp://wiki.openstack.org/Neutron-Linux-Bridge-Plugin
Mellanox - Plug-inhttps://wiki.openstack.org/wiki/Mellanox-Neutron/
Midonet - Plug-inhttp://www.midokura.com/
ML2 (Modular Layer - 2) Plug-inhttps://wiki.openstack.org/wiki/Neutron/ML2
NEC OpenFlow - Plug-inhttp://wiki.openstack.org/Quantum-NEC-OpenFlow-Plugin
Nicira NVP - Plug-inDocumentation included in this guide as - well as in NVP Product Overview, NVP Product Support
Open vSwitch - Plug-inDocumentation included in this guide.
PLUMgridDocumentation included in this guide as - well as in https://https://wiki.openstack.org/wiki/PLUMgrid-Neutron
Ryu - Plug-inDocumentation included in this guide as - well as in https://github.com/osrg/ryu/wiki/OpenStack
- Plug-ins can have different properties for hardware - requirements, features, performance, scale, or - operator tools. Because Networking supports a large - number of plug-ins, the cloud administrator can weigh - options to decide on the right networking technology - for the deployment. - In the Havana release, OpenStack Networking provides - the - Modular Layer 2 (ML2) plug-in that can concurrently - use multiple layer 2 networking technologies that are - found in real-world data centers. It currently works - with the existing Open vSwitch, Linux Bridge, and - Hyper-v L2 agents. The ML2 framework simplifies the - addition of support for new L2 technologies and - reduces the effort that is required to add and - maintain them compared to monolithic plug-ins. - - Plug-in deprecation notice: - The Open vSwitch and Linux Bridge plug-ins are - deprecated in the Havana release and will be - removed in the Icehouse release. All features have - been ported to the ML2 plug-in in the form of - mechanism drivers. ML2 currently provides Linux - Bridge, Open vSwitch and Hyper-v mechanism - drivers. - - Not all Networking plug-ins are compatible with all - possible Compute drivers: - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Plug-in compatibility with Compute - drivers
Plug-inLibvirt (KVM/QEMU)XenServerVMwareHyper-VBare-metal
Big Switch / FloodlightYes - - - -
BrocadeYes - - - -
CiscoYes - - - -
Cloudbase Hyper-V - - - Yes -
Linux BridgeYes - - - -
MellanoxYes - - - -
MidonetYes - - - -
ML2Yes - - Yes -
NEC OpenFlowYes - - - -
Nicira NVPYesYesYes - -
Open vSwitchYes - - - -
PlumgridYes - Yes - -
RyuYes - - - -
-
- Plug-in configurations - For configurations options, see Networking configuration options in - Configuration - Reference. These sections explain how - to configure specific plug-ins. -
- Configure Big Switch, Floodlight REST Proxy - plug-in - - To use the REST Proxy plug-in with - OpenStack Networking - - Edit - /etc/neutron/neutron.conf - and set: - core_plugin = neutron.plugins.bigswitch.plugin.NeutronRestProxyV2 - - - Edit the plug-in configuration file, - /etc/neutron/plugins/bigswitch/restproxy.ini, - and specify a comma-separated list of - controller_ip:port - pairs: - server = <controller-ip>:<port> - For database configuration, see Install Networking Services - in any of the Installation - Guides in the OpenStack Documentation - index. (The link defaults to - the Ubuntu version.) - - - To apply the new settings, restart - neutron-server: - # sudo service neutron-server restart - - -
+ Introduction to Networking + The Networking service, code-named Neutron, provides an API + that lets you define network connectivity and addressing in + the cloud. The Networking service enables operators to + leverage different networking technologies to power their + cloud networking. The Networking service also provides an API + to configure and manage a variety of network services ranging + from L3 forwarding and NAT to load balancing, edge firewalls, + and IPSEC VPN. + For a detailed description of the Networking API + abstractions and their attributes, see the OpenStack Networking API v2.0 + Reference. +
+ Networking API + Networking is a virtual network service that provides a + powerful API to define the network connectivity and IP + addressing that devices from other services, such as + Compute, use. + The Compute API has a virtual server abstraction to + describe computing resources. Similarly, the Networking + API has virtual network, subnet, and port abstractions to + describe networking resources. + + + + + + + + + + + + + + + + + + + + + + + + +
Networking resources
+ Resource | Description
+ Network | An isolated L2 segment, analogous to VLAN in the physical networking world.
+ Subnet | A block of v4 or v6 IP addresses and associated configuration state.
+ Port | A connection point for attaching a single device, such as the NIC of a virtual server, to a virtual network. Also describes the associated network configuration, such as the MAC and IP addresses to be used on that port.
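+ As a brief illustration of these three resources, the following neutron CLI sketch creates a network, a subnet, and a port; the names net1 and subnet1 and the 192.168.2.0/24 range are illustrative assumptions rather than required values:
+ $ neutron net-create net1
+ $ neutron subnet-create net1 192.168.2.0/24 --name subnet1
+ $ neutron port-create net1
+ The port UUID that the last command returns can then be passed to Compute (for example, nova boot --nic port-id=<port-uuid>) to attach a server NIC to that port.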
+ You can configure rich network topologies by creating + and configuring networks and subnets, and then instructing + other OpenStack services like Compute to attach virtual + devices to ports on these networks. + In particular, Networking supports each tenant having + multiple private networks, and allows tenants to choose + their own IP addressing scheme (even if those IP addresses + overlap with those that other tenants use). The Networking + service: + + + Enables advanced cloud networking use cases, + such as building multi-tiered web applications and + enabling migration of applications to the cloud + without changing IP addresses. + + + Offers flexibility for the cloud administrator + to customize network offerings. + + + Enables developers to extend the Networking API. + Over time, the extended functionality becomes part + of the core Networking API. + + +
+
+ Plug-in architecture + The original Compute network implementation assumed a + basic model of isolation through Linux VLANs and IP + tables. Networking introduces support for vendor + plug-ins, which offer a custom + back-end implementation of the Networking API. A plug-in + can use a variety of technologies to implement the logical + API requests. Some Networking plug-ins might use basic + Linux VLANs and IP tables, while others might use more + advanced technologies, such as L2-in-L3 tunneling or + OpenFlow, to provide similar benefits. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Available networking plug-ins
+ Plug-in | Documentation
+ Big Switch Plug-in (Floodlight REST Proxy) | This guide and http://www.openflowhub.org/display/floodlightcontroller/Neutron+REST+Proxy+Plugin
+ Brocade Plug-in | This guide
+ Cisco | http://wiki.openstack.org/cisco-neutron
+ Cloudbase Hyper-V Plug-in | http://www.cloudbase.it/quantum-hyper-v-plugin/
+ Linux Bridge Plug-in | http://wiki.openstack.org/Neutron-Linux-Bridge-Plugin
+ Mellanox Plug-in | https://wiki.openstack.org/wiki/Mellanox-Neutron/
+ Midonet Plug-in | http://www.midokura.com/
+ ML2 (Modular Layer 2) Plug-in | https://wiki.openstack.org/wiki/Neutron/ML2
+ NEC OpenFlow Plug-in | http://wiki.openstack.org/Quantum-NEC-OpenFlow-Plugin
+ Nicira NVP Plug-in | This guide and NVP Product Overview, NVP Product Support
+ Open vSwitch Plug-in | This guide
+ PLUMgrid | This guide and https://wiki.openstack.org/wiki/PLUMgrid-Neutron
+ Ryu Plug-in | This guide and https://github.com/osrg/ryu/wiki/OpenStack
+ Plug-ins can have different properties for hardware requirements, features, performance, scale, or operator tools. Because Networking supports a large number of plug-ins, the cloud administrator can weigh options to decide on the right networking technology for the deployment.
+ In the Havana release, OpenStack Networking introduces the Modular Layer 2 (ML2) plug-in, which enables the use of multiple concurrent mechanism drivers. This capability is particularly useful in large heterogeneous environments that combine several layer 2 networking technologies. ML2 currently works with the existing Open vSwitch, Linux Bridge, and Hyper-V L2 agents. The ML2 framework simplifies the addition of support for new L2 technologies and reduces the effort that is required to add and maintain them compared to earlier monolithic plug-ins.
+ Plug-in deprecation notice
+ The Open vSwitch and Linux Bridge plug-ins are deprecated in the Havana release and will be removed in the Icehouse release. The features in these plug-ins are now part of the ML2 plug-in in the form of mechanism drivers.
+ Not all Networking plug-ins are compatible with all possible Compute drivers:
Plug-in compatibility with Compute + drivers
+ Plug-in | Libvirt (KVM/QEMU) | XenServer | VMware | Hyper-V | Bare-metal
+ Big Switch / Floodlight | Yes | | | |
+ Brocade | Yes | | | |
+ Cisco | Yes | | | |
+ Cloudbase Hyper-V | | | | Yes |
+ Linux Bridge | Yes | | | |
+ Mellanox | Yes | | | |
+ Midonet | Yes | | | |
+ ML2 | Yes | | | Yes |
+ NEC OpenFlow | Yes | | | |
+ Nicira NVP | Yes | Yes | Yes | |
+ Open vSwitch | Yes | | | |
+ PLUMgrid | Yes | | Yes | |
+ Ryu | Yes | | | |
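+ As a sketch of the ML2 approach described above, the following assumed configuration runs the Open vSwitch and Linux Bridge mechanism drivers concurrently; the physnet1 label and the VLAN range are placeholders, and the authoritative option list is in the Configuration Reference:
+ # In /etc/neutron/neutron.conf
+ core_plugin = neutron.plugins.ml2.plugin.Ml2Plugin
+ # In the ML2 plug-in configuration file (ml2_conf.ini)
+ [ml2]
+ type_drivers = vlan,gre
+ tenant_network_types = vlan
+ mechanism_drivers = openvswitch,linuxbridge
+ [ml2_type_vlan]
+ network_vlan_ranges = physnet1:1000:2999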
+
+ Plug-in configurations
+ For configuration options, see Networking configuration options in the Configuration Reference. These sections explain how to configure specific plug-ins.
+ Configure Big Switch, Floodlight REST Proxy + plug-in + + To use the REST Proxy plug-in with + OpenStack Networking + + Edit the + /etc/neutron/neutron.conf file and add this line: + core_plugin = neutron.plugins.bigswitch.plugin.NeutronRestProxyV2 + + + Edit the plug-in configuration file, + /etc/neutron/plugins/bigswitch/restproxy.ini, + and specify a comma-separated list of + controller_ip:port + pairs: + server = <controller-ip>:<port> + For database configuration, see Install Networking Services in + the Installation + Guide in the OpenStack Documentation index. + (The link defaults to the Ubuntu + version.) + + + Restart neutron-server to apply + the new settings: + # sudo service neutron-server restart + + +
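+ For example, a hypothetical deployment with two Floodlight controllers that expose the REST API on port 8080 might use a line such as:
+ server = 10.10.10.26:8080,10.10.10.27:8080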
-
- Configure Brocade plug-in - - To use the Brocade plug-in with - OpenStack Networking - - Install the Brocade modified Python - netconf client (ncclient) library which is available - at https://github.com/brocade/ncclient: - $ git clone https://www.github.com/brocade/ncclient +
+ Configure Brocade plug-in + + To use the Brocade plug-in with OpenStack + Networking + + Install the Brocade-modified Python + netconf client (ncclient) library, which + is available at https://github.com/brocade/ncclient: + $ git clone https://www.github.com/brocade/ncclient $ cd ncclient; sudo python ./setup.py install - - - - Edit the - /etc/neutron/neutron.conf - file and set the following option: - core_plugin = neutron.plugins.brocade.NeutronPlugin.BrocadePluginV2 - - - Edit the - /etc/neutron/plugins/brocade/brocade.ini - configuration file for the Brocade plug-in - and specify the admin user name, password, - and IP address of the Brocade switch: - - [SWITCH] + + + Edit the + /etc/neutron/neutron.conf + file and set the following option: + core_plugin = neutron.plugins.brocade.NeutronPlugin.BrocadePluginV2 + + + Edit the + /etc/neutron/plugins/brocade/brocade.ini + configuration file for the Brocade plug-in + and specify the admin user name, password, + and IP address of the Brocade + switch: + [SWITCH] username = admin password = password address = switch mgmt ip address ostype = NOS - - For database configuration, see Install Networking Services - in any of the Installation - Guides in the OpenStack Documentation - index. (The link defaults to - the Ubuntu version.) - - - To apply the new settings, restart the - neutron-server service: - - # service neutron-server restart - - -
- -
- Configure OVS plug-in - If you use the Open vSwitch (OVS) plug-in in - a deployment with multiple hosts, you will - need to use either tunneling or vlans to - isolate traffic from multiple networks. - Tunneling is easier to deploy because it does - not require configuring VLANs on network - switches. - This procedure uses tunneling: - - To configure OpenStack Networking to - use the OVS plug-in - - Edit - /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini - to specify these values - (for database configuration, see Install Networking Services - in Installation - Guide): - enable_tunneling=True + For database configuration, see Install Networking Services in + any of the Installation + Guides in the OpenStack Documentation index. + (The link defaults to the Ubuntu + version.) + + + Restart the + neutron-server + service to apply the new settings: + # service neutron-server restart + + +
+
+ Configure OVS plug-in + If you use the Open vSwitch (OVS) plug-in in a + deployment with multiple hosts, you must use + either tunneling or vlans to isolate traffic from + multiple networks. Tunneling is easier to deploy + because it does not require configuring VLANs on + network switches. + This procedure uses tunneling: + + To configure OpenStack Networking to use + the OVS plug-in + + Edit + /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini + to specify these values (for + database configuration, see Install Networking Services in + Installation + Guide): + enable_tunneling=True tenant_network_type=gre tunnel_id_ranges=1:1000 # only required for nodes running agents local_ip=<data-net-IP-address-of-node> - - - If you use the neutron DHCP agent, - add these lines to the - /etc/neutron/dhcp_agent.ini - file: - dnsmasq_config_file=/etc/neutron/dnsmasq-neutron.conf - - - Create - /etc/neutron/dnsmasq-neutron.conf, - and add these values to lower the MTU - size on instances and prevent packet - fragmentation over the GRE - tunnel: - dhcp-option-force=26,1400 - - - After performing that change on the - node running neutron-server, - restart neutron-server to - apply the new settings: - # sudo service neutron-server restart - - -
-
- Configure Nicira NVP plug-in - - To configure OpenStack Networking to - use the NVP plug-in - While the instructions in this section refer to the Nicira NVP - platform, they also apply to VMware NSX. - - Install the NVP plug-in, as - follows: - # sudo apt-get install neutron-plugin-nicira - - - Edit - /etc/neutron/neutron.conf - and set: - core_plugin = neutron.plugins.nicira.NeutronPlugin.NvpPluginV2 - Example - neutron.conf - file for NVP: - core_plugin = neutron.plugins.nicira.NeutronPlugin.NvpPluginV2 + + + If you use the neutron DHCP agent, add + these lines to the + /etc/neutron/dhcp_agent.ini + file: + dnsmasq_config_file=/etc/neutron/dnsmasq-neutron.conf + + + Create + /etc/neutron/dnsmasq-neutron.conf, + and add these values to lower the MTU size + on instances and prevent packet + fragmentation over the GRE tunnel: + dhcp-option-force=26,1400 + + + Restart to apply the new + settings: + # sudo service neutron-server restart + + +
+
+ Configure Nicira NVP plug-in + + To configure OpenStack Networking to use + the NVP plug-in + While the instructions in this section refer + to the Nicira NVP platform, they also apply to + VMware NSX. + + Install the NVP plug-in, as + follows: + # sudo apt-get install neutron-plugin-nicira + + + Edit + /etc/neutron/neutron.conf + and set: + core_plugin = neutron.plugins.nicira.NeutronPlugin.NvpPluginV2 + Example + neutron.conf file + for NVP: + core_plugin = neutron.plugins.nicira.NeutronPlugin.NvpPluginV2 rabbit_host = 192.168.203.10 allow_overlapping_ips = True - - - To configure the NVP controller cluster for the Openstack - Networking Service, locate the [default] section - in the /etc/neutron/plugins/nicira/nvp.ini - file, and add the following entries (for database configuration, see - Install Networking Services in Installation - Guide): - - A set of parameters need to establish and configure - the connection with the controller cluster. Such - parameters include NVP API endpoints, access - credentials, and settings for HTTP redirects and retries - in case of connection - failuresnvp_user = <admin user name> + + + To configure the NVP controller cluster + for the Openstack Networking Service, + locate the [default] + section in the + /etc/neutron/plugins/nicira/nvp.ini + file, and add the following entries (for + database configuration, see Install Networking Services in + Installation + Guide): + + + To establish and configure the + connection with the controller + cluster you must set some + parameters, including NVP API + endpoints, access credentials, and + settings for HTTP redirects and + retries in case of connection + failures: + nvp_user = <admin user name> nvp_password = <password for nvp_user> req_timeout = <timeout in seconds for NVP_requests> # default 30 seconds http_timeout = <tiemout in seconds for single HTTP request> # default 10 seconds retries = <number of HTTP request retries> # default 2 redirects = <maximum allowed redirects for a HTTP request> # default 3 -nvp_controllers = <comma separated list of API endpoints> - In order to ensure correct operations - nvp_user shoud be a user with - administrator credentials on the NVP platform. - A controller API endpoint consists of the - controller's IP address and port; if the port is - omitted, port 443 will be used. If multiple API - endpoints are specified, it is up to the user to ensure - that all these endpoints belong to the same controller - cluster; The Openstack Networking Nicira NVP plugin does - not perform this check, and results might be - unpredictable. - When multiple API endpoints are specified, the plugin - will load balance requests on the various API - endpoints. - - - The UUID of the NVP Transport Zone that should be used - by default when a tenant creates a network. This value - can be retrieved from the NVP Manager's Transport Zones - page: - default_tz_uuid = <uuid_of_the_transport_zone> - - - default_l3_gw_service_uuid = <uuid_of_the_gateway_service> - - Ubuntu packaging currently does not update the - neutron init script to point to the NVP - configuration file. 
Instead, you must manually - update - /etc/default/neutron-server - with the following: - NEUTRON_PLUGIN_CONFIG = /etc/neutron/plugins/nicira/nvp.ini - - - - - - To apply the new settings, restart - neutron-server: - # sudo service neutron-server restart - - - Example nvp.ini - file: - [DEFAULT] +nvp_controllers = <comma separated list of API endpoints> + To ensure correct operations, + the nvp_user + user must have administrator + credentials on the NVP + platform. + A controller API endpoint + consists of the IP address and port + for the controller; if you omit the + port, port 443 is used. If multiple + API endpoints are specified, it is + up to the user to ensure that all + these endpoints belong to the same + controller cluster. The Openstack + Networking Nicira NVP plug-in does + not perform this check, and results + might be unpredictable. + When you specify multiple API + endpoints, the plug-in + load-balances requests on the + various API endpoints. + + + The UUID of the NVP Transport + Zone that should be used by default + when a tenant creates a network. + You can get this value from the NVP + Manager's Transport Zones + page: + default_tz_uuid = <uuid_of_the_transport_zone> + + + default_l3_gw_service_uuid = <uuid_of_the_gateway_service> + + Ubuntu packaging currently + does not update the Neutron init + script to point to the NVP + configuration file. Instead, you + must manually update + /etc/default/neutron-server + to add this line: + NEUTRON_PLUGIN_CONFIG = /etc/neutron/plugins/nicira/nvp.ini + + + + + + Restart neutron-server to apply + new settings: + # sudo service neutron-server restart + + + Example nvp.ini + file: + [DEFAULT] default_tz_uuid = d3afb164-b263-4aaa-a3e4-48e0e09bb33c default_l3_gw_service_uuid=5c8622cc-240a-40a1-9693-e6a5fca4e3cf nvp_user=admin nvp_password=changeme nvp_controllers=10.127.0.100,10.127.0.200:8888 - - To debug nvp.ini - configuration issues, run this command - from the host that runs neutron-server: - # check-nvp-config <path/to/nvp.ini> - This command tests whether neutron-server can log - into all of the NVP Controllers and the - SQL server, and whether all UUID values - are correct. - -
- Loadbalancer-as-a-Service and Firewall-as-a-Service - The NVP LBaaS and FWaaS services use the standard OpenStack API with the exception of requiring routed-insertion extension support. - Below are the main differences between the NVP implementation and the community reference implementation of these services: - - - The NVP LBaaS and FWaaS plugins require the routed-insertion extension, which adds the router_id attribute to the VIP (Virtual IP address) and firewall resources and binds these services to a logical router. - - - The community reference implementation of LBaaS only supports a one-arm model, which restricts the VIP to be on the same subnet as the backend servers. The NVP LBaaS plugin only supports a two-arm model between north-south traffic, meaning that the VIP can only be created on the external (physical) network. - - - The community reference implementation of FWaaS applies firewall rules to all logical routers in a tenant, while the NVP FWaaS plugin applies firewall rules only to one logical router according to the router_id of the firewall entity. - - - - To configure Loadbalancer-as-a-Service and Firewall-as-a-Service with NVP: - - Edit /etc/neutron/neutron.conf file: - core_plugin = neutron.plugins.nicira.NeutronServicePlugin.NvpAdvancedPlugin - # Note: comment out service_plugins. LBaaS & FWaaS is supported by core_plugin NvpAdvancedPlugin - # service_plugins = - - - Edit /etc/neutron/plugins/nicira/nvp.ini file: - In addition to the original NVP configuration, the default_l3_gw_service_uuid - is required for the NVP Advanced Plugin and a vcns section must be added as - shown below. - [DEFAULT] + + To debug nvp.ini + configuration issues, run this command from + the host that runs neutron-server: + # check-nvp-config <path/to/nvp.ini> + This command tests whether neutron-server can log into + all of the NVP Controllers and the SQL server, + and whether all UUID values are + correct. + +
+ Load Balancer-as-a-Service and + Firewall-as-a-Service + The NVP LBaaS and FWaaS services use the + standard OpenStack API with the exception of + requiring routed-insertion extension + support. + The main differences between the NVP + implementation and the community reference + implementation of these services are: + + + The NVP LBaaS and FWaaS plug-ins + require the routed-insertion + extension, which adds the + router_id attribute to + the VIP (Virtual IP address) and + firewall resources and binds these + services to a logical router. + + + The community reference + implementation of LBaaS only supports + a one-arm model, which restricts the + VIP to be on the same subnet as the + back-end servers. The NVP LBaaS + plug-in only supports a two-arm model + between north-south traffic, which + means that you can create the VIP on + only the external (physical) + network. + + + The community reference + implementation of FWaaS applies + firewall rules to all logical routers + in a tenant, while the NVP FWaaS + plug-in applies firewall rules only to + one logical router according to the + router_id of the + firewall entity. + + + + To configure Load Balancer-as-a-Service + and Firewall-as-a-Service with + NVP: + + Edit + /etc/neutron/neutron.conf + file: + core_plugin = neutron.plugins.nicira.NeutronServicePlugin.NvpAdvancedPlugin +# Note: comment out service_plug-ins. LBaaS & FWaaS is supported by core_plugin NvpAdvancedPlugin +# service_plugins = + + + Edit + /etc/neutron/plugins/nicira/nvp.ini + file: + In addition to the original NVP + configuration, the + default_l3_gw_service_uuid + is required for the NVP Advanced + plug-in and you must add a vcns + section: + [DEFAULT] nvp_password = admin nvp_user = admin nvp_controllers = 10.37.1.137:443 @@ -675,601 +704,585 @@ nvp_controllers=10.127.0.100,10.127.0.200:8888 # UUID of a logical switch on NVP which has physical network connectivity (currently using bridge transport type) external_network = f2c023cf-76e2-4625-869b-d0dabcfcc638 - # ID of deployment_container on VSM. Optional, if not specified, a default global deployment container will be used + # ID of deployment_container on VSM. Optional, if not specified, a default global deployment container is used # deployment_container_id = # task_status_check_interval configures status check interval for vCNS asynchronous API. Default is 2000 msec. # task_status_check_interval = - - -
-
-
- Configure PLUMgrid plug-in - - To use the PLUMgrid plug-in with - OpenStack Networking - - Edit - /etc/neutron/neutron.conf - and set: - core_plugin = neutron.plugins.plumgrid.plumgrid_plugin.plumgrid_plugin.NeutronPluginPLUMgridV2 - - Edit - /etc/neutron/plugins/plumgrid/plumgrid.ini - under the - [PLUMgridDirector] - section, and specify the IP address, - port, admin user name, and password of - the PLUMgrid Director: - [PLUMgridDirector] + +
+
+
+ Configure PLUMgrid plug-in + + To use the PLUMgrid plug-in with OpenStack + Networking + + Edit + /etc/neutron/neutron.conf + and set: + core_plugin = neutron.plugins.plumgrid.plumgrid_plugin.plumgrid_plugin.NeutronPluginPLUMgridV2 + + + Edit + /etc/neutron/plugins/plumgrid/plumgrid.ini + under the + [PLUMgridDirector] + section, and specify the IP address, port, + admin user name, and password of the + PLUMgrid Director: + [PLUMgridDirector] director_server = "PLUMgrid-director-ip-address" director_server_port = "PLUMgrid-director-port" username = "PLUMgrid-director-admin-username" password = "PLUMgrid-director-admin-password" - For database configuration, see Install Networking Services - in Installation - Guide. - - - To apply the settings, restart - neutron-server: - # sudo service neutron-server restart - - -
-
- Configure Ryu plug-in - - To use the Ryu plug-in with OpenStack - Networking - - Install the Ryu plug-in, as - follows: - # sudo apt-get install neutron-plugin-ryu - - - Edit - /etc/neutron/neutron.conf - and set: - core_plugin = neutron.plugins.ryu.ryu_neutron_plugin.RyuNeutronPluginV2 - - - Edit - /etc/neutron/plugins/ryu/ryu.ini - (for database configuration, see Install Networking Services - in Installation - Guide), and update the - following in the - [ovs] - section for the - ryu-neutron-agent: - - The - openflow_rest_api - is used to tell where Ryu is - listening for REST API. Substitute + For database configuration, see Install Networking Services in + the Installation + Guide. + + + Restart + neutron-server to apply the new settings: + # sudo service neutron-server restart + + +
+
+ Configure Ryu plug-in + + To use the Ryu plug-in with OpenStack + Networking + + Install the Ryu plug-in, as + follows: + # sudo apt-get install neutron-plugin-ryu + + + Edit + /etc/neutron/neutron.conf + and set: + core_plugin = neutron.plugins.ryu.ryu_neutron_plugin.RyuNeutronPluginV2 + + + Edit the + /etc/neutron/plugins/ryu/ryu.ini + file and update these options in the + [ovs] section + for the + ryu-neutron-agent: + + + openflow_rest_api. + Defines where Ryu is listening for + REST API. Substitute ip-address and port-no based on your Ryu setup. - - - The - ovsdb_interface - is used for Ryu to access the + + + ovsdb_interface. + Enables Ryu to access the ovsdb-server. - Substitute eth0 based on your set - up. The IP address is derived from - the interface name. If you want to - change this value irrespective of - the interface name, - ovsdb_ip - can be specified. If you use a - non-default port for + Substitute eth0 + based on your setup. The IP address + is derived from the interface name. + If you want to change this value + irrespective of the interface name, + you can specify + ovsdb_ip. + If you use a non-default port for ovsdb-server, - it can be specified by + you can specify ovsdb_port. - - - tunnel_interface - needs to be set to tell what IP - address is used for tunneling (if - tunneling isn't used, this value is - ignored). The IP address is derived - from the network interface - name. - - - You can use the same configuration - file for many Compute nodes by using a - network interface name with a - different IP address: - openflow_rest_api = <ip-address>:<port-no> ovsdb_interface = <eth0> tunnel_interface = <eth0> - - - To apply the new settings, restart - neutron-server: - # sudo service neutron-server restart - - -
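+ For illustration only, a node whose data interface is eth0 and whose Ryu controller listens on 192.168.0.10 might use values such as the following; both the address and the port are assumptions about your Ryu setup:
+ openflow_rest_api = 192.168.0.10:8080
+ ovsdb_interface = eth0
+ tunnel_interface = eth0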
-
-
-
- Configure neutron agents - Plug-ins typically have requirements for particular - software that must be run on each node that handles - data packets. This includes any node that runs - nova-compute and nodes that run - dedicated OpenStack Networking service agents such as, - neutron-dhcp-agent, - neutron-l3-agent, or - neutron-lbaas-agent (see - below for more information about individual service - agents). - A data-forwarding node typically has a network - interface with an IP address on the “management - network” and another interface on the “data - network”. - This section shows you how to install and configure - a subset of the available plug-ins, which may include - the installation of switching software (for example, - Open vSwitch) as well as agents used to communicate - with the neutron-server process running - elsewhere in the data center. -
- Configure data-forwarding nodes -
- Node set up: OVS plug-in - - - This section also applies to the ML2 plugin when Open vSwitch is - used as a mechanism driver. - If you use the Open vSwitch plug-in, you must install Open vSwitch - and the neutron-plugin-openvswitch-agent agent on - each data-forwarding node: - - Do not install the openvswitch-brcompat - package as it breaks the security groups - functionality. - - - To set up each node for the OVS - plug-in - - Install the OVS agent package (this - pulls in the Open vSwitch software as - a dependency): - # sudo apt-get install neutron-plugin-openvswitch-agent - - - On each node that runs the - neutron-plugin-openvswitch-agent: - - - Replicate the - ovs_neutron_plugin.ini - file created in the first step onto - the node. - - - If using tunneling, the - node's - ovs_neutron_plugin.ini - file must also be updated with the - node's IP address configured on the - data network using the - local_ip - value. - - - - - Restart Open vSwitch to properly - load the kernel module: - # sudo service openvswitch-switch restart - - - Restart the agent: - # sudo service neutron-plugin-openvswitch-agent restart - - - All nodes that run - neutron-plugin-openvswitch-agent - must have an OVS - br-int bridge. . - To create the bridge, run: - # sudo ovs-vsctl add-br br-int - - -
-
- Node set up: Nicira NVP plug-in - If you use the Nicira NVP plug-in, you must - also install Open vSwitch on each - data-forwarding node. However, you do not need - to install an additional agent on each - node. - - It is critical that you are running an - Open vSwitch version that is compatible - with the current version of the NVP - Controller software. Do not use the Open - vSwitch version that is installed by - default on Ubuntu. Instead, use the Open - Vswitch version that is provided on the - Nicira support portal for your NVP - Controller version. - - - To set up each node for the Nicira NVP - plug-in - - Ensure each data-forwarding node has - an IP address on the "management - network," and an IP address on the - "data network" that is used for - tunneling data traffic. For full - details on configuring your forwarding - node, see the NVP - Administrator + + + tunnel_interface. + Defines which IP address is used + for tunneling. If you do not use + tunneling, this value is ignored. + The IP address is derived from the + network interface name. + + + For database configuration, see Install Networking Services in + Installation Guide. - - - Use the NVP Administrator - Guide to add the node - as a "Hypervisor" using the NVP - Manager GUI. Even if your forwarding - node has no VMs and is only used for - services agents like - neutron-dhcp-agent - or - neutron-lbaas-agent, - it should still be added to NVP as a - Hypervisor. - - - After following the NVP - Administrator Guide, - use the page for this Hypervisor in - the NVP Manager GUI to confirm that - the node is properly connected to the - NVP Controller Cluster and that the - NVP Controller Cluster can see the - br-int - integration bridge. - - -
-
- Node set up: Ryu plug-in - If you use the Ryu plug-in, you must install - both Open vSwitch and Ryu, in addition to the - Ryu agent package: - - To set up each node for the Ryu - plug-in - - Install Ryu (there isn't currently - an Ryu package for ubuntu): - # sudo pip install ryu - - - Install the Ryu agent and Open - vSwitch packages: - # sudo apt-get install neutron-plugin-ryu-agent openvswitch-switch python-openvswitch openvswitch-datapath-dkms - - - Replicate the - ovs_ryu_plugin.ini - and neutron.conf - files created in the above step on all - nodes running - neutron-plugin-ryu-agent. - - - - Restart Open vSwitch to properly - load the kernel module: - # sudo service openvswitch-switch restart - - - Restart the agent: - # sudo service neutron-plugin-ryu-agent restart - - - All nodes running - neutron-plugin-ryu-agent - also require that an OVS bridge named - "br-int" exists on each node. To - create the bridge, run: - # sudo ovs-vsctl add-br br-int - - -
-
-
- Configure DHCP agent - The DHCP service agent is compatible with all - existing plug-ins and is required for all - deployments where VMs should automatically receive - IP addresses through DHCP. - - To install and configure the DHCP - agent - - You must configure the host running the - neutron-dhcp-agent - as a "data forwarding node" according to - the requirements for your plug-in (see - ). + You can use the same configuration file + for many Compute nodes by using a network + interface name with a different IP + address: + openflow_rest_api = <ip-address>:<port-no> ovsdb_interface = <eth0> tunnel_interface = <eth0> - Install the DHCP agent: - # sudo apt-get install neutron-dhcp-agent - - - Finally, update any options in the - /etc/neutron/dhcp_agent.ini - file that depend on the plug-in in use - (see the sub-sections). - - - - If you reboot a node that runs the DHCP agent, you must - run the neutron-ovs-cleanup command before the - neutron-dhcp-agent - service starts. - On Red Hat-based systems, the - neutron-ovs-cleanup service runs the - neutron-ovs-cleanupcommand automatically. - However, on Debian-based systems such as Ubuntu, you must - manually run this command or write your own system script - that runs on boot before the - neutron-dhcp-agent service starts. - - -
- DHCP agent setup: OVS plug-in - These DHCP agent options are required in the - /etc/neutron/dhcp_agent.ini - file for the OVS plug-in: - [DEFAULT] -ovs_use_veth = True -enable_isolated_metadata = True -use_namespaces = True -interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver -
-
- DHCP agent setup: NVP plug-in - These DHCP agent options are required in the - /etc/neutron/dhcp_agent.ini - file for the NVP plug-in: - [DEFAULT] -ovs_use_veth = True -enable_metadata_network = True -enable_isolated_metadata = True -use_namespaces = True -interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver -
-
- DHCP agent setup: Ryu plug-in - These DHCP agent options are required in the - /etc/neutron/dhcp_agent.ini - file for the Ryu plug-in: - [DEFAULT] -ovs_use_veth = True -use_namespace = True -interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver -
-
-
- Configure L3 agent - The OpenStack Networking Service has a widely used API - extension to allow administrators and tenants to - create routers to interconnect L2 networks, and - floating IPs to make ports on private networks - publicly accessible. - Many plug-ins rely on the L3 service agent to - implement the L3 functionality. However, the - following plug-ins already have built-in L3 - capabilities: - - - - Nicira NVP plug-in - - - Big Switch/Floodlight plug-in, which - supports both the open source Floodlight controller and - the proprietary Big Switch - controller. - - Only the proprietary BigSwitch - controller implements L3 - functionality. When using - Floodlight as your OpenFlow - controller, L3 functionality is not - available. - - - - PLUMgrid plug-in - - - - Do not configure or use - neutron-l3-agent - if you use one of these plug-ins. - - - To install the L3 agent for all other - plug-ins - - Install the - neutron-l3-agent - binary on the network node: - # sudo apt-get install neutron-l3-agent - - - To uplink the node that runs - neutron-l3-agent - to the external network, create a - bridge named "br-ex" and attach the - NIC for the external network to this - bridge. - For example, with Open vSwitch and - NIC eth1 connected to the external - network, run: - # sudo ovs-vsctl add-br br-ex -# sudo ovs-vsctl add-port br-ex eth1 - Do not manually configure an IP - address on the NIC connected to the - external network for the node running - neutron-l3-agent. - Rather, you must have a range of IP - addresses from the external network - that can be used by OpenStack - Networking for routers that uplink to - the external network. This range must - be large enough to have an IP address - for each router in the deployment, as - well as each floating IP. - - - The - neutron-l3-agent - uses the Linux IP stack and iptables - to perform L3 forwarding and NAT. In - order to support multiple routers with - potentially overlapping IP addresses, - neutron-l3-agent - defaults to using Linux network - namespaces to provide isolated - forwarding contexts. As a result, the - IP addresses of routers will not be - visible simply by running ip - addr list or - ifconfig on the - node. Similarly, you will not be able - to directly ping - fixed IPs. - To do either of these things, you - must run the command within a - particular router's network namespace. - The namespace will have the name - "qrouter-<UUID of the router>. - These example commands run in the - router namespace with UUID - 47af3868-0fa8-4447-85f6-1304de32153b: - # ip netns exec qrouter-47af3868-0fa8-4447-85f6-1304de32153b ip addr list -# ip netns exec qrouter-47af3868-0fa8-4447-85f6-1304de32153b ping <fixed-ip> - - - - - If you reboot a node that runs the L3 agent, you must run the - neutron-ovs-cleanup command before the neutron-l3-agent service starts. - On Red Hat-based systems, the neutron-ovs-cleanup service runs the - neutron-ovs-cleanup command automatically. However, - on Debian-based systems such as Ubuntu, you must manually run this command - or write your own system script that runs on boot before the neutron-l3-agent service starts. - - -
-
- Configure LBaaS agent - Starting with the Havana release, the Neutron - Load-Balancer-as-a-Service (LBaaS) supports an - agent scheduling mechanism, so several - neutron-lbaas-agents - can be run on several nodes (one per one). - - To install the LBaas agent and configure - the node - - Install the agent by running: - # sudo apt-get install neutron-lbaas-agent - - - If you are using: - - An OVS-based plug-in (OVS, - NVP, Ryu, NEC, - BigSwitch/Floodlight), you must - set: - interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver - - - A plug-in that uses - LinuxBridge, you must set: - interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver - - - - - To use the reference implementation, you - must also set: - device_driver = neutron.services.loadbalancer.drivers.haproxy.namespace_driver.HaproxyNSDriver - - - Set this parameter in the - neutron.conf file - on the host that runs neutron-server: - service_plugins = neutron.services.loadbalancer.plugin.LoadBalancerPlugin - - -
-
- Configure FWaaS agent - The Firewall-as-a-Service (FWaaS) agent is - co-located with the Neutron L3 agent and does not - require any additional packages apart from those - required for the Neutron L3 agent. You can enable - the FWaaS functionality by setting the - configuration, as follows. - - To configure FWaaS service and - agent - - Set this parameter in the - neutron.conf file - on the host that runs neutron-server: - service_plugins = neutron.services.firewall.fwaas_plugin.FirewallPlugin - - - To use the reference implementation, you - must also add a FWaaS driver configuration - to the neutron.conf - file on every node where the Neutron L3 - agent is deployed: - [fwaas] -driver = neutron.services.firewall.drivers.linux.iptables_fwaas.IptablesFwaasDriver -enabled = True + Restart neutron-server to apply + the new settings: + # sudo service neutron-server restart
+
+ Configure neutron agents
+ Plug-ins typically have requirements for particular software that must be run on each node that handles data packets. This includes any node that runs nova-compute and nodes that run dedicated OpenStack Networking service agents such as neutron-dhcp-agent, neutron-l3-agent, or neutron-lbaas-agent.
+ A data-forwarding node typically has a network interface with an IP address on the “management network” and another interface on the “data network”.
+ This section shows you how to install and configure a subset of the available plug-ins, which might include the installation of switching software (for example, Open vSwitch) as well as agents that are used to communicate with the neutron-server process running elsewhere in the data center.
+ Configure data-forwarding nodes +
+ Node set up: OVS plug-in + + + This section also applies to the ML2 + plug-in when Open vSwitch is used as a + mechanism driver. + If you use the Open vSwitch plug-in, you + must install Open vSwitch and the + neutron-plugin-openvswitch-agent + agent on each data-forwarding node: + + Do not install the + openvswitch-brcompat + package because it prevents the security group + functionality from operating correctly. + + + To set up each node for the OVS + plug-in + + Install the OVS agent package. This + action also installs the Open vSwitch + software as a dependency: + # sudo apt-get install neutron-plugin-openvswitch-agent + + + On each node that runs the + neutron-plugin-openvswitch-agent, complete these steps: + + + Replicate the + ovs_neutron_plugin.ini + file that you created on + the node. + + + If you use tunneling, update the + ovs_neutron_plugin.ini + file for the node with the + IP address that is configured on the + data network for the node by using the + local_ip + value. + + + + + Restart Open vSwitch to properly load + the kernel module: + # sudo service openvswitch-switch restart + + + Restart the agent: + # sudo service neutron-plugin-openvswitch-agent restart + + + All nodes that run + neutron-plugin-openvswitch-agent + must have an OVS br-int + bridge. To create the bridge, + run: + # sudo ovs-vsctl add-br br-int + + +
+
+ Node set up: Nicira NVP plug-in + If you use the Nicira NVP plug-in, you must also + install Open vSwitch on each data-forwarding node. + However, you do not need to install an additional + agent on each node. + + It is critical that you are running an Open + vSwitch version that is compatible with the + current version of the NVP Controller + software. Do not use the Open vSwitch version + that is installed by default on Ubuntu. + Instead, use the Open Vswitch version that is + provided on the Nicira support portal for your + NVP Controller version. + + + To set up each node for the Nicira NVP + plug-in + + Ensure that each data-forwarding node has an + IP address on the management network, + and an IP address on the "data network" + that is used for tunneling data traffic. + For full details on configuring your + forwarding node, see the NVP + Administrator + Guide. + + + Use the NVP Administrator + Guide to add the node as a + Hypervisor by using the NVP Manager GUI. + Even if your forwarding node has no VMs + and is only used for services agents like + neutron-dhcp-agent + or + neutron-lbaas-agent, + it should still be added to NVP as a + Hypervisor. + + + After following the NVP + Administrator Guide, use + the page for this Hypervisor in the NVP + Manager GUI to confirm that the node is + properly connected to the NVP Controller + Cluster and that the NVP Controller + Cluster can see the + br-int integration + bridge. + + +
+
+ Node set up: Ryu plug-in + If you use the Ryu plug-in, you must install + both Open vSwitch and Ryu, in addition to the Ryu + agent package: + + To set up each node for the Ryu + plug-in + + Install Ryu (there isn't currently an + Ryu package for ubuntu): + # sudo pip install ryu + + + Install the Ryu agent and Open vSwitch + packages: + # sudo apt-get install neutron-plugin-ryu-agent openvswitch-switch python-openvswitch openvswitch-datapath-dkms + + + Replicate the + ovs_ryu_plugin.ini + and neutron.conf + files created in the above step on all + nodes running + neutron-plugin-ryu-agent. + + + Restart Open vSwitch to properly load + the kernel module: + # sudo service openvswitch-switch restart + + + Restart the agent: + # sudo service neutron-plugin-ryu-agent restart + + + All nodes running + neutron-plugin-ryu-agent + also require that an OVS bridge named + "br-int" exists on each node. To create + the bridge, run: + # sudo ovs-vsctl add-br br-int + + +
+
+
+ Configure DHCP agent
+ The DHCP service agent is compatible with all existing plug-ins and is required for all deployments where VMs should automatically receive IP addresses through DHCP.
+ To install and configure the DHCP agent
+ You must configure the host running the neutron-dhcp-agent as a "data forwarding node" according to the requirements for your plug-in (see the Configure data-forwarding nodes section above).
+ Install the DHCP agent:
+ # sudo apt-get install neutron-dhcp-agent
+ Finally, update any options in the /etc/neutron/dhcp_agent.ini file that depend on the plug-in in use (see the sub-sections).
+ If you reboot a node that runs the DHCP agent, you must run the neutron-ovs-cleanup command before the neutron-dhcp-agent service starts.
+ On Red Hat-based systems, the neutron-ovs-cleanup service runs the neutron-ovs-cleanup command automatically. However, on Debian-based systems such as Ubuntu, you must manually run this command or write your own system script that runs on boot before the neutron-dhcp-agent service starts.
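+ As a sketch of such a boot-time step on Ubuntu, you could run the cleanup utility with your service and plug-in configuration files before the agent starts; the plug-in configuration path shown is an assumption for the OVS plug-in and must match your deployment:
+ # neutron-ovs-cleanup --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini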
+ DHCP agent setup: OVS plug-in + These DHCP agent options are required in the + /etc/neutron/dhcp_agent.ini + file for the OVS plug-in: + [DEFAULT] +ovs_use_veth = True +enable_isolated_metadata = True +use_namespaces = True +interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver +
+
+ DHCP agent setup: NVP plug-in + These DHCP agent options are required in the + /etc/neutron/dhcp_agent.ini + file for the NVP plug-in: + [DEFAULT] +ovs_use_veth = True +enable_metadata_network = True +enable_isolated_metadata = True +use_namespaces = True +interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver +
+
+ DHCP agent setup: Ryu plug-in + These DHCP agent options are required in the + /etc/neutron/dhcp_agent.ini + file for the Ryu plug-in: + [DEFAULT] +ovs_use_veth = True +use_namespace = True +interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver +
+
+
+ Configure L3 agent + The OpenStack Networking Service has a widely used + API extension to allow administrators and tenants to + create routers to interconnect L2 networks, and + floating IPs to make ports on private networks + publicly accessible. + Many plug-ins rely on the L3 service agent to + implement the L3 functionality. However, the following + plug-ins already have built-in L3 capabilities: + + + Nicira NVP plug-in + + + Big Switch/Floodlight plug-in, which + supports both the open source Floodlight controller and the + proprietary Big Switch controller. + + Only the proprietary BigSwitch + controller implements L3 functionality. + When using Floodlight as your OpenFlow + controller, L3 functionality is not + available. + + + + PLUMgrid plug-in + + + + Do not configure or use + neutron-l3-agent if you + use one of these plug-ins. + + + To install the L3 agent for all other + plug-ins + + Install the + neutron-l3-agent + binary on the network node: + # sudo apt-get install neutron-l3-agent + + + To uplink the node that runs + neutron-l3-agent + to the external network, create a bridge named + "br-ex" and attach the NIC for the external + network to this bridge. + For example, with Open vSwitch and NIC eth1 + connected to the external network, run: + # sudo ovs-vsctl add-br br-ex + # sudo ovs-vsctl add-port br-ex eth1 + Do not manually configure an IP address on + the NIC connected to the external network for + the node running + neutron-l3-agent. + Rather, you must have a range of IP addresses + from the external network that can be used by + OpenStack Networking for routers that uplink + to the external network. This range must be + large enough to have an IP address for each + router in the deployment, as well as each + floating IP. + + + The + neutron-l3-agent + uses the Linux IP stack and iptables to + perform L3 forwarding and NAT. In order to + support multiple routers with potentially + overlapping IP addresses, + neutron-l3-agent + defaults to using Linux network namespaces to + provide isolated forwarding contexts. As a + result, the IP addresses of routers are not + visible simply by running the ip addr + list or + ifconfig command on the + node. Similarly, you cannot directly + ping fixed IPs. + To do either of these things, you must run + the command within a particular network + namespace for the router. The namespace has + the name "qrouter-<UUID of the router>. + These example commands run in the router + namespace with UUID + 47af3868-0fa8-4447-85f6-1304de32153b: + # ip netns exec qrouter-47af3868-0fa8-4447-85f6-1304de32153b ip addr list + # ip netns exec qrouter-47af3868-0fa8-4447-85f6-1304de32153b ping <fixed-ip> + + + + If you reboot a node that runs the L3 agent, you + must run the + neutron-ovs-cleanup command + before the neutron-l3-agent service + starts. + On Red Hat-based systems, the neutron-ovs-cleanup service runs + the neutron-ovs-cleanup command + automatically. However, on Debian-based systems + such as Ubuntu, you must manually run this command + or write your own system script that runs on boot + before the neutron-l3-agent service + starts. + +
+
+ Configure LBaaS agent
+ Starting with the Havana release, the Neutron Load-Balancer-as-a-Service (LBaaS) supports an agent scheduling mechanism, so several neutron-lbaas-agents can be run on several nodes (one agent per node).
+ To install the LBaaS agent and configure the node
+ Install the agent by running:
+ # sudo apt-get install neutron-lbaas-agent
+ If you use one of these plug-ins, set the matching interface driver:
+ An OVS-based plug-in such as OVS, NVP, Ryu, NEC, BigSwitch/Floodlight:
+ interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
+ A plug-in that uses LinuxBridge:
+ interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
+ To use the reference implementation, you must set:
+ device_driver = neutron.services.loadbalancer.drivers.haproxy.namespace_driver.HaproxyNSDriver
+ Set this parameter in the neutron.conf file on the host that runs neutron-server:
+ service_plugins = neutron.services.loadbalancer.plugin.LoadBalancerPlugin
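+ After you edit the configuration, restart the services so that the changes take effect; these service names are typical for Ubuntu packages and can differ by distribution:
+ # sudo service neutron-lbaas-agent restart
+ # sudo service neutron-server restart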
+
+ Configure FWaaS agent + The Firewall-as-a-Service (FWaaS) agent is + co-located with the Neutron L3 agent and does not + require any additional packages apart from those + required for the Neutron L3 agent. You can enable the + FWaaS functionality by setting the configuration, as + follows. + + To configure FWaaS service and agent + + Set this parameter in the + neutron.conf file on + the host that runs neutron-server: + service_plugins = neutron.services.firewall.fwaas_plugin.FirewallPlugin + + + To use the reference implementation, you + must also add a FWaaS driver configuration to + the neutron.conf file on + every node where the Neutron L3 agent is + deployed: + [fwaas] +driver = neutron.services.firewall.drivers.linux.iptables_fwaas.IptablesFwaasDriver +enabled = True + + +
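+ Once the service plug-in and driver are enabled and neutron-server is restarted, you can exercise the service with the FWaaS CLI; the rule, policy, and firewall names below are illustrative only:
+ $ neutron firewall-rule-create --protocol tcp --destination-port 80 --action allow
+ $ neutron firewall-policy-create --firewall-rules "<rule-id>" test-policy
+ $ neutron firewall-create test-policy --name test-firewall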
+
+