Install Networking Services

Before you configure individual nodes for Neutron, you must perform the initial setup required for any OpenStack component: creating a user, a service, endpoints, and a database. Once you have completed the steps below, follow the subsections of this guide to set up each of your OpenStack nodes for Neutron.

Note for Debian users: As with the rest of OpenStack, you must configure Networking Services through the debconf file. You do not need to manually configure the database or create the Keystone endpoint, so you can skip the following steps. If you must reconfigure the Networking Service, run the following command:

# dpkg-reconfigure -plow neutron-common

Alternatively, edit the configuration files and manually restart the daemons. Remember that if your database server is installed remotely, you must run the following command before you install the Networking Service:

# apt-get install dbconfig-common && \
  dpkg-reconfigure -plow dbconfig-common

Create a neutron database by logging in to the database server as root, using the password you set previously:

# mysql -u root -p
mysql> CREATE DATABASE neutron;
mysql> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \
IDENTIFIED BY 'NEUTRON_DBPASS';
mysql> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \
IDENTIFIED BY 'NEUTRON_DBPASS';

Create the required user, service, and endpoint so that Neutron can interface with the Identity Service, Keystone.

To list the tenant IDs:

# keystone tenant-list

To list role IDs:

# keystone role-list

Create a neutron user:

# keystone user-create --name=neutron --pass=NEUTRON_PASS --email=neutron@example.com

Add the admin role to the neutron user:

# keystone user-role-add --user=neutron --tenant=service --role=admin

Create the neutron service:

# keystone service-create --name=neutron --type=network \
  --description="OpenStack Networking Service"

Create the neutron endpoint. Note the id property for the service that was returned in the previous step.
Use it to create the endpoint:

# keystone endpoint-create --region RegionOne \
  --service-id the_service_id_above \
  --publicurl http://controller:9696 \
  --adminurl http://controller:9696 \
  --internalurl http://controller:9696
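Rather than copying the service ID by hand, you can pull it out of the CLI's table output. A minimal sketch, run here against a sample row (the ID shown is made up; on a live controller you would pipe `keystone service-list` through the same awk filter):

```shell
# Sample row in the format `keystone service-list` prints (hypothetical ID).
sample='| 5ad44e97e0e747cdbbb04dd1d9d22b0a | neutron | network | OpenStack Networking Service |'

# Pick the ID column from the row whose type is "network".
SERVICE_ID=$(printf '%s\n' "$sample" | awk '/ network / {print $2}')
echo "$SERVICE_ID"

# On a live controller, the same filter feeds endpoint-create:
#   keystone endpoint-create --region RegionOne \
#     --service-id "$SERVICE_ID" \
#     --publicurl http://controller:9696 \
#     --adminurl http://controller:9696 \
#     --internalurl http://controller:9696
```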
Install networking services on a dedicated network node

Before you start, set up a machine to be a dedicated network node. Dedicated network nodes should have the following NICs: the management NIC (called MGMT_INTERFACE), the data NIC (called DATA_INTERFACE), and the external NIC (called EXTERNAL_INTERFACE). The management network handles communication between nodes. The data network handles communication coming to and from VMs. The external NIC connects the network node (and, if you so choose, the controller node as well) to the outside world, so your VMs can have outside connectivity. All NICs should have static IPs; however, the data and external NICs require some special setup, detailed in the instructions for your chosen Neutron plug-in.

By default, the system-config-firewall automated firewall configuration tool is in place on RHEL. This graphical interface (and a curses-style interface invoked with -tui appended to the name) enables you to configure IP tables as a basic firewall. You should disable it while you work with Neutron unless you are familiar with the underlying network technologies, because, by default, it blocks various types of network traffic that Neutron needs. To disable it, simply launch the program and clear the Enabled check box. After you successfully set up OpenStack with Neutron, you can re-enable and configure the tool. However, during Neutron setup, disable the tool to make it easier to debug network issues.
Install the OpenStack Networking service on the network node:

# apt-get install neutron-server neutron-dhcp-agent neutron-plugin-openvswitch-agent neutron-l3-agent
# yum install openstack-neutron
# zypper install openstack-neutron openstack-neutron-l3-agent openstack-neutron-dhcp-agent

Make sure the basic Neutron-related services are set to start at boot time:

# for s in neutron-{dhcp,l3}-agent; do chkconfig $s on; done

Enable packet forwarding and disable packet destination filtering so that the network node can coordinate traffic for the VMs. Edit the /etc/sysctl.conf file, as follows:

net.ipv4.ip_forward=1
net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0

When dealing with system network-related configurations, you might need to restart the network service for the configuration to take effect. Do so with one of the following commands:

# service networking restart
# service network restart

Note for Debian users: Because this configuration is automated in the Debian packages through debconf, you do not need to manually configure the [keystone_authtoken] section, the [database] section, or the RabbitMQ settings in the Neutron configuration files.

Configure the core networking components.
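The sysctl edit can be scripted. A sketch that appends the three settings, shown here against a scratch copy so it is safe to run anywhere (on the node the target is /etc/sysctl.conf, followed by `sysctl -p` to load the values):

```shell
conf=$(mktemp)   # stands in for /etc/sysctl.conf in this sketch

# Append the forwarding and reverse-path-filter settings.
cat >> "$conf" <<'EOF'
net.ipv4.ip_forward=1
net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0
EOF

# Count the settings that were written.
grep -c '^net\.ipv4' "$conf"

# On the real node, load the values with: sysctl -p
```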
Edit the /etc/neutron/neutron.conf file and copy the following under the [keystone_authtoken] section:

[keystone_authtoken]
auth_host = controller
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = neutron
admin_password = NEUTRON_PASS

Tell Neutron how to connect to the database by editing the [database] section in the same file:

[database]
connection = mysql://neutron:NEUTRON_DBPASS@controller/neutron

Edit the /etc/neutron/api-paste.ini file by copying the following statements under the [filter:authtoken] section:

[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
auth_host=controller
auth_uri=http://controller:5000
admin_user=neutron
admin_tenant_name=service
admin_password=NEUTRON_PASS

Now you can install, and then configure, a networking plug-in. The networking plug-in is what Neutron uses to perform the actual software-defined networking. There are several options; choose one, follow the instructions for it in the linked section, and then return here.

Now that you have installed and configured a plug-in (you did do that, right?), it is time to configure the remaining parts of Neutron. To perform DHCP on the software-defined networks, Neutron supports several plug-ins; in general, however, you use the Dnsmasq plug-in. Edit the /etc/neutron/dhcp_agent.ini file:

dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq

Restart the rest of Neutron:

# service neutron-dhcp-agent restart
# service neutron-l3-agent restart

After you have configured your compute and controller nodes, configure the base networks.
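The [database] edit can also be made non-interactively with sed. A sketch against a scratch copy of neutron.conf (the stock SQLite connection line shown is an assumption about your distribution's default):

```shell
conf=$(mktemp)   # stands in for /etc/neutron/neutron.conf in this sketch
cat > "$conf" <<'EOF'
[database]
connection = sqlite:////var/lib/neutron/neutron.sqlite
EOF

# Point Neutron at the MySQL database created earlier.
sed -i 's|^connection = .*|connection = mysql://neutron:NEUTRON_DBPASS@controller/neutron|' "$conf"

grep '^connection' "$conf"
```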
Install and configure the Neutron plug-ins
Install the Open vSwitch (OVS) plug-in

Note for Debian users: Debian systems do not have specific plug-in packages. Instead, the neutron-common package installs all plug-ins by default. Set an option in the debconf file to choose a plug-in; the package automatically modifies the core_plugin directive to reflect your choice. Depending on the value of the core_plugin directive after you set up the neutron-common package, the init script of the Neutron daemons automatically chooses which plug-in configuration file to load from the /etc/neutron/plugins folder. Also, the OpenStack Networking Service is already configured to work directly with OVS, so you do not need to modify the /etc/neutron/neutron.conf file (though you might need to edit it if you wish to use another plug-in). However, you must set up the OVS bridges manually, and install the neutron-openvswitch-agent as follows.

Install the Open vSwitch plug-in and its dependencies:

# apt-get install neutron-plugin-openvswitch-agent openvswitch-switch
# yum install openstack-neutron-openvswitch
# zypper install openstack-neutron-openvswitch-agent

Start Open vSwitch and configure it to start when the system boots:

# service openvswitch start
# chkconfig openvswitch on
# service openvswitch-switch start
# chkconfig openvswitch-switch on

Regardless of which networking technology you decide to use with Open vSwitch, there is some common setup that must be done. You must add the br-int integration bridge (which connects to the VMs) and the br-ex external bridge (which connects to the outside world):

# ovs-vsctl add-br br-int
# ovs-vsctl add-br br-ex

Add a port (connection) from the EXTERNAL_INTERFACE interface to br-ex:

# ovs-vsctl add-port br-ex EXTERNAL_INTERFACE

Configure the EXTERNAL_INTERFACE to not have an IP address and to be in promiscuous mode. Additionally, set the newly created br-ex interface to have the IP address that formerly belonged to EXTERNAL_INTERFACE.
Edit the /etc/sysconfig/network-scripts/ifcfg-EXTERNAL_INTERFACE file:

DEVICE_INFO_HERE
ONBOOT=yes
BOOTPROTO=none
PROMISC=yes

Create and edit the /etc/sysconfig/network-scripts/ifcfg-br-ex file:

DEVICE=br-ex
TYPE=Bridge
ONBOOT=no
BOOTPROTO=none
IPADDR=EXTERNAL_INTERFACE_IP
NETMASK=EXTERNAL_INTERFACE_NETMASK
GATEWAY=EXTERNAL_INTERFACE_GATEWAY

There are also some common configuration options which must be set, regardless of the networking technology that you decide to use with Open vSwitch. You must tell the L3 agent and the DHCP agent that you are using OVS. Edit the /etc/neutron/l3_agent.ini and /etc/neutron/dhcp_agent.ini files, respectively:

interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver

Similarly, you must tell Neutron core to use OVS by editing /etc/neutron/neutron.conf:

core_plugin = neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2

Tell the L3 and DHCP agents that you want to use namespaces. To do so, edit the /etc/neutron/l3_agent.ini and /etc/neutron/dhcp_agent.ini files, respectively:

use_namespaces = True

Additionally, if you are using certain kernels with only partial support for namespaces (such as some recent versions of RHEL (not RHOS) and CentOS), you must enable veth support by editing the same files again:

ovs_use_veth = True

Tell the OVS plug-in how to connect to the database. To do so, edit the /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini file:

[database]
connection = mysql://neutron:NEUTRON_DBPASS@controller/neutron

Now you must decide which type of networking technology you wish to use to create the virtual networks. Neutron supports GRE tunneling, VLANs, and VXLANs; currently, this guide covers GRE tunneling and VLANs. GRE tunneling is simpler to set up because it does not require any special configuration of physical network hardware. However, it is its own protocol, and is thus harder to filter if you are concerned about filtering traffic on the physical network.
Additionally, the configuration given here does not use namespaces, meaning you can have only one router per network node (this can be overcome by enabling namespacing, and potentially veth, as specified in the section detailing how to use VLANs with OVS). On the other hand, VLAN tagging modifies the ethernet header of packets, meaning that packets can be filtered on the physical network through normal methods. However, not all NICs handle the increased packet size of VLAN-tagged packets well, and you might need to complete additional configuration on physical network hardware to ensure that your Neutron VLANs do not interfere with any other VLANs on your network, and that any physical network hardware between nodes does not strip VLAN tags.

While this guide currently enables network namespaces by default, you can disable them if you have issues or your kernel does not support them. To do so, edit the /etc/neutron/l3_agent.ini and /etc/neutron/dhcp_agent.ini files, respectively:

use_namespaces = False

Additionally, edit the /etc/neutron/neutron.conf file to tell Neutron that overlapping IP addresses should not be enabled:

allow_overlapping_ips = False

Note that with network namespaces disabled, you can have only one router per network node, and overlapping IP addresses are not supported. You must complete additional steps after you create the initial Neutron virtual networks and router.

You should now configure a firewall plug-in. If you do not wish to enforce firewall rules (called security groups by Neutron), you can use neutron.agent.firewall.NoopFirewall. Otherwise, choose one of the Neutron firewall plug-ins. The most common choice is the Hybrid OVS-IPTables driver, but there is also the Firewall-as-a-Service driver. To use the Hybrid OVS-IPTables driver, edit /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini:

[securitygroup]
# Firewall driver for realizing neutron security group function.
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

You must use at least the No-Op firewall. Otherwise, Horizon and other OpenStack services cannot get and set required VM boot options.

Restart the OVS plug-in, and make sure it starts on boot:

# service neutron-openvswitch-agent restart
# chkconfig neutron-openvswitch-agent on
# service openstack-neutron-openvswitch-agent restart
# chkconfig openstack-neutron-openvswitch-agent on
# service neutron-plugin-openvswitch-agent restart
# chkconfig neutron-plugin-openvswitch-agent on

Now, return whence you came!
Configure the Neutron <acronym>OVS</acronym> plug-in for GRE tunneling

Tell the OVS plug-in to use GRE tunneling, using an integration bridge of br-int and a tunneling bridge of br-tun, and to use DATA_INTERFACE's IP as the local IP for the tunnel. Edit /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini:

[ovs]
tenant_network_type = gre
tunnel_id_ranges = 1:1000
enable_tunneling = True
integration_bridge = br-int
tunnel_bridge = br-tun
local_ip = DATA_INTERFACE_IP

Now, return to the general OVS instructions.
Configure the Neutron <acronym>OVS</acronym> plug-in for VLANs

Tell OVS to use VLANs. Edit the /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini file:

[ovs]
tenant_network_type = vlan
network_vlan_ranges = physnet1:1:4094
bridge_mappings = physnet1:br-DATA_INTERFACE

Create the bridge for DATA_INTERFACE and add DATA_INTERFACE to it:

# ovs-vsctl add-br br-DATA_INTERFACE
# ovs-vsctl add-port br-DATA_INTERFACE DATA_INTERFACE

Now that you have added DATA_INTERFACE to a bridge, you must transfer its IP address over to the bridge. This is done in a manner similar to the way EXTERNAL_INTERFACE's IP address was transferred to br-ex. However, in this case, you do not need to turn promiscuous mode on.

Return to the general OVS instructions.
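As a sketch of that IP transfer, the following builds an ifcfg file for br-DATA_INTERFACE mirroring the earlier br-ex example, minus PROMISC. The addresses are placeholders, and a temp file stands in for /etc/sysconfig/network-scripts/ifcfg-br-DATA_INTERFACE so the sketch is safe to run anywhere:

```shell
IPADDR=192.168.100.11      # DATA_INTERFACE's former address (placeholder)
NETMASK=255.255.255.0      # and its former netmask (placeholder)

f=$(mktemp)                # stands in for ifcfg-br-DATA_INTERFACE
cat > "$f" <<EOF
DEVICE=br-DATA_INTERFACE
TYPE=Bridge
ONBOOT=no
BOOTPROTO=none
IPADDR=$IPADDR
NETMASK=$NETMASK
EOF

grep '^IPADDR' "$f"
```

The original ifcfg-DATA_INTERFACE file then keeps only its device information, with no IPADDR line.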
Creating the Base Neutron Networks

In the following sections, the text SPECIAL_OPTIONS may occur. Replace this text with any options specific to your networking plug-in choices. See the plug-in-specific options section to check whether your plug-in needs any special options.

Create the external network, called ext-net (or something else; the name is your choice). This network represents a slice of the outside world. VMs are not directly linked to this network; instead, they are connected to internal networks, and outgoing traffic is routed by Neutron to the external network. Additionally, floating IP addresses from ext-net's subnet may be assigned to VMs so that they may be contacted from the external network. Neutron routes the traffic appropriately.

# neutron net-create ext-net -- --router:external=True SPECIAL_OPTIONS

Next, create the associated subnet. It should have the same gateway and the same CIDR address as EXTERNAL_INTERFACE would have had. It does not have DHCP, because it represents a slice of the external world:

# neutron subnet-create ext-net \
  --allocation-pool start=FLOATING_IP_START,end=FLOATING_IP_END \
  --gateway=EXTERNAL_INTERFACE_GATEWAY --enable_dhcp=False \
  EXTERNAL_INTERFACE_CIDR

Create one or more initial tenants. Choose one (we'll call it DEMO_TENANT) to use for the following steps.

Create the router attached to the external network. This router routes traffic to the internal subnets as appropriate. (You may wish to create it under a given tenant, in which case you should append the --tenant-id option with a value of DEMO_TENANT_ID to the command.)
# neutron router-create ext-to-int

Connect the router to ext-net by setting the router's gateway as ext-net:

# neutron router-gateway-set EXT_TO_INT_ID EXT_NET_ID

Create an internal network for DEMO_TENANT (and an associated subnet over an arbitrary internal IP range, such as 10.5.5.0/24), and connect it to the router by setting it as a port:

# neutron net-create --tenant-id DEMO_TENANT_ID demo-net SPECIAL_OPTIONS
# neutron subnet-create --tenant-id DEMO_TENANT_ID demo-net 10.5.5.0/24 --gateway 10.5.5.1
# neutron router-interface-add EXT_TO_INT_ID DEMO_NET_SUBNET_ID

Check your plug-in's special options page for remaining steps. Then, return whence you came.
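EXT_TO_INT_ID and the other IDs above come from the tables that the neutron CLI prints. A sketch of pulling the id row out of such a table (the sample row and the ID in it are made up; on a live node you would pipe the real `neutron router-create` output through the same filter):

```shell
# One row of `neutron router-create` output, in the default table format.
row='| id | 8c31f633-5b42-4a93-9f21-sample000001 |'

# Split on '|' and trim spaces from the value column.
ROUTER_ID=$(printf '%s\n' "$row" | awk -F'|' '$2 ~ /^ *id *$/ {gsub(/ /, "", $3); print $3}')
echo "$ROUTER_ID"
```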
Plug-in-specific Neutron Network Options
Open vSwitch Network configuration options
GRE Tunneling Network Options

While this guide currently enables network namespaces by default, you can disable them if you have issues or your kernel does not support them. If you disabled namespaces, you must perform some additional configuration for the L3 agent. After you create all the networks, tell the L3 agent what the external network ID is, as well as the ID of the router associated with this machine (because you are not using namespaces, there can be only one router per machine). To do this, edit the /etc/neutron/l3_agent.ini file:

gateway_external_network_id = EXT_NET_ID
router_id = EXT_TO_INT_ID

Then, restart the L3 agent:

# service neutron-l3-agent restart

When creating networks, you should use these options:

--provider:network_type gre --provider:segmentation_id SEG_ID

SEG_ID should be 2 for the external network, and any unique number inside the tunnel range specified earlier for any other network. These options are not needed beyond the first network, as Neutron automatically increments the segmentation ID and copies the network type option for any additional networks.

Return whence you came.
VLAN Network Options

When creating networks, use these options:

--provider:network_type vlan --provider:physical_network physnet1 --provider:segmentation_id SEG_ID

SEG_ID should be 2 for the external network, and any unique number inside the VLAN range specified above for any other network. These options are not needed beyond the first network, as Neutron automatically increments the segmentation ID and copies the network type and physical network options for any additional networks. They are needed only if you wish to modify those values in any way.

Some NICs have Linux drivers that do not handle VLANs properly. See the ovs-vlan-bug-workaround and ovs-vlan-test man pages for more information. Additionally, you might try turning off rx-vlan-offload and tx-vlan-offload by using ethtool on the DATA_INTERFACE.

Another potential caveat to VLAN functionality is that VLAN tags add an additional 4 bytes to the packet size. If your NICs cannot handle large packets, make sure to set the MTU to a value that is 4 bytes less than the normal value on the DATA_INTERFACE. If you run OpenStack inside a virtualized environment (for testing purposes), switching to the virtio NIC type (or a similar technology if you are not using KVM/QEMU to run your host VMs) might solve the issue.
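The 4-byte arithmetic works out as follows; the commands in the comments are the standard iproute2 and ethtool invocations, with DATA_INTERFACE as a placeholder name:

```shell
BASE_MTU=1500              # typical Ethernet MTU
VLAN_TAG_BYTES=4           # 802.1Q tag overhead

VLAN_MTU=$((BASE_MTU - VLAN_TAG_BYTES))
echo "$VLAN_MTU"           # prints 1496

# On the node itself you would then apply something like:
#   ip link set dev DATA_INTERFACE mtu 1496
#   ethtool -K DATA_INTERFACE rx-vlan-offload off tx-vlan-offload off
```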
Install networking support on a dedicated compute node

This section details setup for any node that runs the nova-compute component but does not run the full network stack.

By default, the system-config-firewall automated firewall configuration tool is in place on RHEL. This graphical interface (and a curses-style interface invoked with -tui appended to the name) enables you to configure IP tables as a basic firewall. You should disable it while you work with Neutron unless you are familiar with the underlying network technologies, because, by default, it blocks various types of network traffic that Neutron needs. To disable it, simply launch the program and clear the Enabled check box. After you successfully set up OpenStack with Neutron, you can re-enable and configure the tool. However, during Neutron setup, disable the tool to make it easier to debug network issues.

Disable packet destination filtering (route verification) to let the networking services route traffic to the VMs. Edit the /etc/sysctl.conf file and then restart networking:

net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0

Install and configure your networking plug-in components. To install and configure the network plug-in that you chose when you set up your network node, see the plug-in sections for dedicated compute nodes below.

Configure the core components of Neutron.
Edit the /etc/neutron/neutron.conf file:

auth_host = controller
admin_tenant_name = service
admin_user = neutron
admin_password = NEUTRON_PASS
auth_url = http://controller:35357/v2.0
auth_strategy = keystone
rpc_backend = YOUR_RPC_BACKEND
PUT_YOUR_RPC_BACKEND_SETTINGS_HERE_TOO

Edit the database URL under the [database] section in the same file, to tell Neutron how to connect to the database:

[database]
connection = mysql://neutron:NEUTRON_DBPASS@controller/neutron

Edit the /etc/neutron/api-paste.ini file and copy the following statements under the [filter:authtoken] section:

[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
auth_host=controller
admin_user=neutron
admin_tenant_name=service
admin_password=NEUTRON_PASS

You must configure the networking plug-in.
Install and configure the Neutron plug-ins on a dedicated compute node
Install the Open vSwitch (OVS) plug-in on a dedicated compute node

Install the Open vSwitch plug-in and its dependencies:

# yum install openstack-neutron-openvswitch
# zypper install openstack-neutron-openvswitch-agent

Start Open vSwitch and configure it to start when the system boots:

# service openvswitch start
# chkconfig openvswitch on
# service openvswitch-switch start
# chkconfig openvswitch-switch on

Regardless of which networking technology you chose to use with Open vSwitch, there is some common setup. You must add the br-int integration bridge, which connects to the VMs:

# ovs-vsctl add-br br-int

Similarly, there are some common configuration options to be set. You must tell Neutron core to use OVS. Edit the /etc/neutron/neutron.conf file:

core_plugin = neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2

Tell the OVS plug-in how to connect to the database. Edit the /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini file:

[database]
connection = mysql://neutron:NEUTRON_DBPASS@controller/neutron

Configure the networking type that you chose when you set up the network node: either GRE tunneling or VLANs.

You must configure a firewall as well, using the same firewall plug-in that you chose when you set up the network node. To do this, edit the /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini file and set the firewall_driver value under the [securitygroup] section to the same value used on the network node. For instance, if you chose the Hybrid OVS-IPTables plug-in, your configuration looks like this:

[securitygroup]
# Firewall driver for realizing neutron security group function.
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

You must use at least the No-Op firewall. Otherwise, Horizon and other OpenStack services cannot get and set required VM boot options.
After you complete the OVS configuration and the core Neutron configuration that follows this section, restart the Neutron Open vSwitch agent, and set it to start at boot:

# service neutron-openvswitch-agent restart
# chkconfig neutron-openvswitch-agent on
# service openstack-neutron-openvswitch-agent restart
# chkconfig openstack-neutron-openvswitch-agent on
# service neutron-plugin-openvswitch-agent restart
# chkconfig neutron-plugin-openvswitch-agent on

Now, return to the general OVS instructions.
Configure the Neutron <acronym>OVS</acronym> plug-in for GRE tunneling on a dedicated compute node

Tell the OVS plug-in to use GRE tunneling with a br-int integration bridge, a br-tun tunneling bridge, and DATA_INTERFACE's IP as the local IP for the tunnel. Edit the /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini file:

[ovs]
tenant_network_type = gre
tunnel_id_ranges = 1:1000
enable_tunneling = True
integration_bridge = br-int
tunnel_bridge = br-tun
local_ip = DATA_INTERFACE_IP

Now, return to the general OVS instructions.
Configure the Neutron <acronym>OVS</acronym> plug-in for VLANs on a dedicated compute node

Tell OVS to use VLANs. Edit the /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini file:

[ovs]
tenant_network_type = vlan
network_vlan_ranges = physnet1:1:4094
bridge_mappings = physnet1:br-DATA_INTERFACE

Create the bridge for the DATA_INTERFACE and add DATA_INTERFACE to it, the same way you did on the network node:

# ovs-vsctl add-br br-DATA_INTERFACE
# ovs-vsctl add-port br-DATA_INTERFACE DATA_INTERFACE

Return to the general OVS instructions.
Install networking support on a dedicated controller node

This section is for a node that runs the control components of Neutron but none of the components that provide the underlying functionality (such as the plug-in agent or the L3 agent). If you wish to have a combined controller/compute node, follow these instructions, and then those for the compute node.

By default, the system-config-firewall automated firewall configuration tool is in place on RHEL. This graphical interface (and a curses-style interface invoked with -tui appended to the name) enables you to configure IP tables as a basic firewall. You should disable it while you work with Neutron unless you are familiar with the underlying network technologies, because, by default, it blocks various types of network traffic that Neutron needs. To disable it, simply launch the program and clear the Enabled check box. After you successfully set up OpenStack with Neutron, you can re-enable and configure the tool. However, during Neutron setup, disable the tool to make it easier to debug network issues.

Install the main Neutron server, Neutron libraries for Python, and the Neutron command-line interface (CLI):

# yum install openstack-neutron python-neutron python-neutronclient
# zypper install openstack-neutron python-neutron python-neutronclient

Configure the core components of Neutron.
Edit the /etc/neutron/neutron.conf file:

auth_host = controller
admin_tenant_name = service
admin_user = neutron
admin_password = NEUTRON_PASS
auth_url = http://controller:35357/v2.0
auth_strategy = keystone
rpc_backend = YOUR_RPC_BACKEND
PUT_YOUR_RPC_BACKEND_SETTINGS_HERE_TOO

Edit the database URL under the [database] section in the same file, to tell Neutron how to connect to the database:

[database]
connection = mysql://neutron:NEUTRON_DBPASS@controller/neutron

Configure the Neutron copy of api-paste.ini at /etc/neutron/api-paste.ini:

[filter:authtoken]
EXISTING_STUFF_HERE
admin_tenant_name = service
admin_user = neutron
admin_password = NEUTRON_PASS

Configure the plug-in you chose when you set up the network node. Follow the instructions and return here.

Tell Nova about Neutron. Specifically, you must tell Nova that Neutron handles networking and the firewall. Edit the /etc/nova/nova.conf file:

network_api_class=nova.network.neutronv2.api.API
neutron_url=http://controller:9696
neutron_auth_strategy=keystone
neutron_admin_tenant_name=service
neutron_admin_username=neutron
neutron_admin_password=NEUTRON_PASS
neutron_admin_auth_url=http://controller:35357/v2.0
firewall_driver=nova.virt.firewall.NoopFirewallDriver
security_group_api=neutron

Regardless of which firewall driver you chose when you configured the network and compute nodes, set this driver to the No-Op firewall here. The difference is that this is a Nova firewall; because Neutron handles the firewall, you must tell Nova not to use one.

Start neutron-server and set it to start at boot:

# service neutron-server start
# chkconfig neutron-server on

Make sure that the plug-in restarted successfully. If you get errors about a missing plugin.ini file, make a symlink named /etc/neutron/plugin.ini that points to /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini.
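A sketch of that symlink fix (the conventional symlink name is plugin.ini), exercised here in a scratch directory rather than the real /etc/neutron so it is safe to run anywhere:

```shell
root=$(mktemp -d)          # stands in for /etc/neutron in this sketch
mkdir -p "$root/plugins/openvswitch"
touch "$root/plugins/openvswitch/ovs_neutron_plugin.ini"

# Point plugin.ini at the OVS plug-in configuration file.
ln -s "$root/plugins/openvswitch/ovs_neutron_plugin.ini" "$root/plugin.ini"

readlink "$root/plugin.ini"
```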
Install and configure the Neutron plug-ins on a dedicated controller node
Install the Open vSwitch (OVS) plug-in on a dedicated controller node

Install the Open vSwitch plug-in:

# yum install openstack-neutron-openvswitch
# zypper install openstack-neutron-openvswitch-agent

Regardless of which networking technology you chose to use with Open vSwitch, there are some common configuration options which must be set. You must tell Neutron core to use OVS. Edit the /etc/neutron/neutron.conf file:

core_plugin = neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2

Tell the OVS plug-in how to connect to the database. Edit the /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini file:

[database]
connection = mysql://neutron:NEUTRON_DBPASS@controller/neutron

Configure the OVS plug-in for the networking type that you chose when you configured the network node: GRE tunneling or VLANs. Notice that the dedicated controller node does not actually need to run the Open vSwitch agent or run Open vSwitch itself.

Now, return whence you came.
Configure the Neutron <acronym>OVS</acronym> plug-in for GRE tunneling on a dedicated controller node

Tell the OVS plug-in to use GRE tunneling. Edit the /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini file:

[ovs]
tenant_network_type = gre
tunnel_id_ranges = 1:1000
enable_tunneling = True

Return to the general OVS instructions.
Configure the Neutron <acronym>OVS</acronym> plug-in for VLANs on a dedicated controller node

Tell OVS to use VLANs. Edit the /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini file, as follows:

[ovs]
tenant_network_type = vlan
network_vlan_ranges = physnet1:1:4094

Return to the general OVS instructions.