diff --git a/doc/install-guide/ch_neutron.xml b/doc/install-guide/ch_neutron.xml index 6f5140d259..0b744de528 100644 --- a/doc/install-guide/ch_neutron.xml +++ b/doc/install-guide/ch_neutron.xml @@ -4,25 +4,28 @@ xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0" xml:id="ch_neutron"> Installing OpenStack Networking Service -
- Considerations for OpenStack Networking - There are many different drivers for OpenStack Networking, - that range from software bridges to full control of certain - switching hardware. This guide focuses on Open vSwitch. However, - the theories presented here should be mostly applicable to other - mechanisms, and the OpenStack Configuration - Reference offers additional information. - Please see OpenStack - Packages for specific OpenStack installation - instructions to prepare for installation. - If you have followed the previous section on - setting up networking for your compute node using - nova-network, this configuration will override those - settings. -
- - - - +
+ Considerations for OpenStack Networking + Drivers for OpenStack Networking range from software + bridges to full control of certain switching hardware. + This guide focuses on the Open vSwitch driver. However, + the theories presented here should be mostly applicable to + other mechanisms, and the OpenStack Configuration + Reference offers additional + information. + For specific OpenStack installation instructions to + prepare for installation, see . + + If you followed the previous chapter to set up + networking for your compute node using nova-network, this + configuration overrides those settings. + +
+ + + + diff --git a/doc/install-guide/section_neutron-concepts.xml b/doc/install-guide/section_neutron-concepts.xml index 45a9ede8b1..37ed2a7e61 100644 --- a/doc/install-guide/section_neutron-concepts.xml +++ b/doc/install-guide/section_neutron-concepts.xml @@ -4,81 +4,105 @@ xmlns:xi="http://www.w3.org/2001/XInclude" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:svg="http://www.w3.org/2000/svg" - xmlns:html="http://www.w3.org/1999/xhtml" - version="5.0"> - Neutron concepts - Like Nova Networking, Neutron manages software-defined networking for your OpenStack - installation. However, unlike Nova Networking, Neutron can be configured for advanced virtual - network topologies, such as per-tenant private networks, and more. - Neutron has three main object abstractions: networks, subnets, and routers. Each has - functionality that mimics its physical counterpart: networks contain subnets, and routers route + xmlns:html="http://www.w3.org/1999/xhtml" version="5.0"> + Neutron Concepts + Like Nova Networking, Neutron manages software-defined + networking for your OpenStack installation. However, unlike Nova + Networking, you can configure Neutron for advanced virtual network + topologies, such as per-tenant private networks and more. + Neutron has the following object abstractions: networks, + subnets, and routers. Each has functionality that mimics its + physical counterpart: networks contain subnets, and routers route traffic between different subnet and networks. - In any given Neutron setup, there is at least one external network. This network, unlike the - other networks, is not merely an virtually defined network. Instead, it represents the view into - a slice of the external network, accessible outside the OpenStack installation. IP addresses on - Neutron's external network are in fact accessible by anybody physically on the outside network. - Because this network merely represents a slice of the outside network, DHCP is disabled on this - network. 
- In addition external networks, any Neutron setup will have one or more internal networks. - These software-defined networks connect directly to the VMs. Only the VMs on any given internal - network, or those on subnets connected via interfaces to a similar router, can access VMs - connected to that network directly. - In order for the outside network to be able to access VMs, and vice versa, routers between - the networks are needed. Each router has one gateway, connected to a network, and many - interfaces, connected to subnets. Like a physical router, subnets can access machines on other - subnets connected to the same router, and machines can access the outside network through the - router's gateway. - Additionally, IP addresses on an external networks can be allocated to ports on the internal - network. Whenever something is connected to a subnet, that connection is called a port. External - network IP addresses can be associated with ports to VMs. This way, entities on the outside - network can access VMs. - Neutron also supports "security groups." Security groups allow administrators to define - firewall rules in groups. Then, a given VM can have one or more security groups to which it - belongs, and Neutron will apply those rules to block or unblock ports, port ranges, or traffic - types for that VM. - Each of the plugins that Neutron uses has its own concepts as well. While not vital to - operating Neutron, these concepts can be useful to help with setting up Neutron. All Neutron - installations use a core plugin, as well as a security group plugin (or just the No-Op security - group plugin). Additionally, Firewall-as-a-service (FWaaS) and Load-balancing-as-a-service - (LBaaS) plugins are available. + Any given Neutron set up has at least one external network. + This network, unlike the other networks, is not merely a virtually + defined network. 
Instead, it represents the view into a slice of + the external network that is accessible outside the OpenStack + installation. IP addresses on the Neutron external network are + accessible by anybody physically on the outside network. Because + this network merely represents a slice of the outside network, + DHCP is disabled on this network. + In addition to external networks, any Neutron set up has one + or more internal networks. These software-defined networks connect + directly to the VMs. Only the VMs on any given internal network, + or those on subnets connected through interfaces to a similar + router, can access VMs connected to that network directly. + For the outside network to access VMs, and vice versa, routers + between the networks are needed. Each router has one gateway that + is connected to a network and many interfaces that are connected + to subnets. Like a physical router, subnets can access machines on + other subnets that are connected to the same router, and machines + can access the outside network through the gateway for the + router. + Additionally, you can allocate IP addresses on an external + networks to ports on the internal network. Whenever something is + connected to a subnet, that connection is called a port. You can + associate external network IP addresses with ports to VMs. This + way, entities on the outside network can access VMs. + Neutron also supports security + groups. Security groups enable administrators to + define firewall rules in groups. A VM can belong to one or more + security groups, and Neutron applies the rules in those security + groups to block or unblock ports, port ranges, or traffic types + for that VM. + Each plug-in that Neutron uses has its own concepts. While not + vital to operating Neutron, understanding these concepts can help + you set up Neutron. All Neutron installations use a core plug-in + and a security group plug-in (or just the No-Op security group + plug-in). 
Additionally, Firewall-as-a-service (FWaaS) and + Load-balancing-as-a-service (LBaaS) plug-ins are available.
Open vSwitch Concepts - The Open vSwitch plugin is one of the most popular core plugins. Open vSwitch - configurations consists of bridges and ports. Ports represent connections to other things, - such as physical interfaces and patch cables. Packets from any given port on a bridge is - shared with all other ports on that bridge. Bridges can be connected through Open vSwitch - virtual patch cables, or through Linux virtual Ethernet cables (veth). - Additionally, bridges appear as network interfaces to Linux, so they can be assigned IP - addresses. - In Neutron, there are several main bridges. The integration bridge, called - br-int, connects directly to the VMs and associated services. The - external bridge, called br-ex, connects to the external network. Finally, - the VLAN configuration of the Open vSwitch plugin uses bridges associated with each physical + The Open vSwitch plug-in is one of the most popular core + plug-ins. Open vSwitch configurations consists of bridges and + ports. Ports represent connections to other things, such as + physical interfaces and patch cables. Packets from any given + port on a bridge is shared with all other ports on that bridge. + Bridges can be connected through Open vSwitch virtual patch + cables or through Linux virtual Ethernet cables + (veth). Additionally, bridges appear as + network interfaces to Linux, so you can assign IP addresses to + them. + In Neutron, the integration bridge, called + br-int, connects directly to the VMs and + associated services. The external bridge, called + br-ex, connects to the external network. + Finally, the VLAN configuration of the Open vSwitch plug-in uses + bridges associated with each physical network. + In addition to defining bridges, Open vSwitch has OpenFlow, + which enables you to define networking flow rules. Certain + configurations use these rules to transfer packets between + VLANs. 
+ Finally, some configurations of Open vSwitch use network + namespaces that enable Linux to group adapters into unique + namespaces that are not visible to other namespaces, which + allows the same network node to manage multiple Neutron + routers. + With Open vSwitch, you can use two different technologies to + create the virtual networks: GRE or VLANs. + Generic Routing Encapsulation (GRE) is the technology used + in many VPNs. It wraps IP packets to create entirely new packets + with different routing information. When the new packet reaches + its destination, it is unwrapped, and the underlying packet is + routed. To use GRE with Open vSwitch, Neutron creates GRE + tunnels. These tunnels are ports on a bridge and enable bridges + on different systems to act as though they were one bridge, + which allows the compute and network nodes to act as one for the + purposes of routing. + Virtual LANs (VLANs), on the other hand, use a special + modification to the Ethernet header. They add a 4-byte VLAN tag + that ranges from 1 to 4094 (the 0 tag is special, and the 4095 + tag, made of all ones, is equivalent to an untagged packet). + Special NICs, switches, and routers know how to interpret the + VLAN tags, as does Open vSwitch. Packets tagged for one VLAN are + only shared with other devices configured to be on that VLAN, + even through all devices are on the same physical network. - In addition to defining bridges, Open vSwitch has OpenFlow, which allows you to define - networking flow rules. These rules are used in certain configurations to transfer packets - between VLANs. - Finally, some configurations of Open vSwitch use network namespaces. This allows linux to - group adapters into unique namespaces that are not visible to other namespaces, allowing - multiple Neutron routers to be managed by the same network node. - With Open vSwitch, there are two different technologies that can be used to create the - virtual networks: GRE or VLANs. 
- Generic Routing Encapsulation, or GRE for short, is the technology used in many VPNs. In - essence, it works by wrapping IP packets and creating entirely new packets with different routing - information. When the new packet reaches its destination, it is unwrapped, and the underlying - packet is routed. To use GRE with Open vSwitch, Neutron creates GRE Tunnels. This tunnels are - ports on a bridge, and allow bridges on different systems to act as though they were in fact - one bridge, allowing the compute node and network node to act as one for the purposes of - routing. - Virtual LANs, or VLANs for short, on the other hand, use a special modification to the - Ethernet header. They add a 4-byte VLAN tag that ranges between 1 and 4094 (the 0 tag is - special, and the 4095 tag, made of all ones, is equivalent to an untagged packet). Special - NICs, switches, and routers know how to interpret the VLAN tags, as does Open vSwitch. Packets - tagged for one VLAN will only be shared with other devices configured to be on that VLAN, - despite the fact that all of the devices are on the same physical network. - The most common security group driver used with Open vSwitch is the Hybrid IPTables/Open - vSwitch plugin. It uses a combination for IPTables and OpenFlow rules. IPTables is a tool used - for creating firewalls and setting up NATs on Linux. It uses a complex rule system and - "chains" of rules to allow for the complex rules required by Neutron's security groups. + The most common security group driver used with Open vSwitch + is the Hybrid IPTables/Open vSwitch plug-in. It uses a + combination for IPTables and OpenFlow rules. Use the IPTables + tool to create firewalls and set up NATs on Linux. This tool + uses a complex rule system and chains of rules to accommodate + the complex rules required by Neutron security groups.
diff --git a/doc/install-guide/section_neutron-install.xml b/doc/install-guide/section_neutron-install.xml index 9e9e656d7c..bc13f173a9 100644 --- a/doc/install-guide/section_neutron-install.xml +++ b/doc/install-guide/section_neutron-install.xml @@ -1,716 +1,959 @@
- Install Networking Services on the network node - - Before we start, you need to make sure that your machine is properly set up - to be a dedicated network node. Dedicated network nodes should have three NICs: - the management NIC (called MGMT_INTERFACE), the data - NIC (called DATA_INTERFACE), and the external NIC - (called EXTERNAL_INTERFACE). - The management network is responsible for communication between - nodes, the data network is responsible for communication coming to and - from VMs, and the external NIC connects the network node to the outside - world, so your VMs can have connectivity to the outside world. - All three NICs should have static IPs. However, the data and external NICs - have some special setup. See the Neutron - plugin section for your chosen Neutron plugin for details. - - - By default, an automated firewall configuration tool called - system-config-firewall in place on RHEL. This tool is - a graphical interface (and a curses-style interface with - -tui on the end of the name) for configuring IP - tables as a basic firewall. You should disable it when working with - Neutron unless you are familiar with the underlying network technologies, - as, by default, it will block various types of network traffic that are - important to Neutron. To disable it, simple launch the program and uncheck - the "Enabled" check box. - Once you have successfully set up OpenStack with Neutron, you can - re-enable it if you wish and figure out exactly how you need to configure - it. For the duration of the setup, however, it will make finding network - issues easier if you don't have it blocking all unrecognized - traffic. 
- - First, we must install the OpenStack Networking service on the controller node: - - # sudo apt-get install neutron-server neutron-dhcp-agent neutron-plugin-openvswitch neutron-l3-agent - - -# sudo yum install openstack-neutron - - -# zypper install openstack-neutron - - Next, we must enable packet forwarding and disable packet destination - filtering, so that the network node can coordinate traffic for the VMs. We - do this by editing the file /etc/sysctl.conf. - -net.ipv4.ip_forward=1 + xmlns="http://docbook.org/ns/docbook" + xmlns:xi="http://www.w3.org/2001/XInclude" + xmlns:xlink="http://www.w3.org/1999/xlink" + xmlns:svg="http://www.w3.org/2000/svg" + xmlns:html="http://www.w3.org/1999/xhtml" version="5.0"> + Install Networking Services on the network node + + Before you start, set up your machine to be a dedicated + network node. Dedicated network nodes should have the following + NICs: the management NIC (called + MGMT_INTERFACE), the data NIC + (called DATA_INTERFACE), and the + external NIC (called + EXTERNAL_INTERFACE). + The management network handles communication between nodes. + The data network handles communication coming to and from VMs. + The external NIC connects the network node to the outside world, + so your VMs can have connectivity to the outside world. + All NICs should have static IPs. However, the data and + external NICs have some special set up. For details about your + chosen Neutron plug-in, see . + + + By default, the system-config-firewall + automated firewall configuration tool is in place on RHEL. This + graphical interface (and a curses-style interface with + -tui on the end of the name) enables you to + configure IP tables as a basic firewall. You should disable it + when you work with Neutron unless you are familiar with the + underlying network technologies, as, by default, it blocks + various types of network traffic that are important to Neutron. 
+ To disable it, simple launch the program and clear the + Enabled check box. + After you successfully set up OpenStack with Neutron, you + can re-enable and configure the tool. However, during Neutron + set up, disable the tool to make it easier to debug network + issues. + + + + Install the OpenStack Networking service on the controller + node: + # sudo apt-get install neutron-server + # sudo yum install openstack-neutron + # zypper install openstack-neutron + + + Enable packet forwarding and disable packet destination + filtering so that the network node can coordinate traffic for + the VMs. Edit the /etc/sysctl.conf file, + as follows: + net.ipv4.ip_forward=1 net.ipv4.conf.all.rp_filter=0 -net.ipv4.conf.default.rp_filter=0 - - - When dealing with system network-related configurations, it may be necessary to - restart the network service to get them to take effect. This can be done with the - following command: - -# sudo service networking restart - -# sudo service network restart - - - First, we need to create a database user called neutron, by logging into - as root using the password we set earlier. - # mysql -u root -p -mysql> CREATE DATABASE neutron; -mysql> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \ -IDENTIFIED BY 'NEUTRON_DBPASS'; -mysql> GRANT ALL PRIVILEGES ON keystone.* TO 'neutron'@'%' \ +net.ipv4.conf.default.rp_filter=0 + + When dealing with system network-related configurations, + you might need to restart the network service to get the + configurations to take effect. 
Do so with the following + command: + # sudo service networking restart + # sudo service network restart + + + + Create a neutron database by logging + into as root using the password you set earlier: + # mysql -u root -pmysql> CREATE DATABASE neutron;mysql> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \ +IDENTIFIED BY 'NEUTRON_DBPASS';mysql> GRANT ALL PRIVILEGES ON keystone.* TO 'neutron'@'%' \ IDENTIFIED BY 'NEUTRON_DBPASS'; - Before continuing, we must create the required user, service, and - endpoint so that Neutron can interface with the Identity Service, - Keystone. - To list the Tenant ID's use the command: - -# keystone tenant-list - - To list the Role ID's use the command: - -# keystone role-list - - Type in the following commands: - Create Neutron User: - -# keystone user-create --name=neutron --pass=NEUTRON_PASS --tenant-id SERVICE_TENANT_ID --email=neutron@example.com - - Add User Role to Neutron User: - -# keystone user-role-add --tenant-id SERVICE_TENANT_ID --user-id NEUTRON_USER_ID --role-id ADMIN_ROLE_ID - - Create Neutron Service: - -# keystone service-create --name=neutron --type=network \ - --description="OpenStack Networking Service" - - To Create Neutron Endpoint, please note the service's id property returned in the previous step and use it when - creating the endpoint. - # keystone endpoint-create --region RegionOne \ + + + Create the required user, service, and endpoint so that + Neutron can interface with the Identity Service, + Keystone. 
+ To list the tenant IDs: + # keystone tenant-list + To list role IDs: + # keystone role-list + Create a neutron user: + # keystone user-create --name=neutron --pass=NEUTRON_PASS --tenant-id SERVICE_TENANT_ID --email=neutron@example.com + Add the user role to the neutron user: + # keystone user-role-add --tenant-id SERVICE_TENANT_ID --user-id NEUTRON_USER_ID --role-id ADMIN_ROLE_ID + Create the neutron service: + # keystone service-create --name=neutron --type=network \ + --description="OpenStack Networking Service" + Create the neutron endpoint. Note the + id property for the service returned in + the previous step and use it to create the endpoint: + # keystone endpoint-create --region RegionOne \ --service-id NEUTRON_SERVICE_ID \ --publicurl http://controller:9696 \ --adminurl http://controller:9696 \ - --internalurl http://controller:9696 - - First, we configure networking core by editing /etc/neutron/neutron.conf - by copying the following under keystone_authtoken section: - [keystone_authtoken] + --internalurl http://controller:9696 + + + Configure networking core. 
Edit the + /etc/neutron/neutron.conf file by + copying the following under + keystone_authtoken section: + [keystone_authtoken] auth_host = controller auth_port = 35357 auth_protocol = http admin_tenant_name = service admin_user = neutron -admin_password = NEUTRON_PASS - - Also edit the database URL under the [database] section: - [database] -connection = mysql://neutron:NEUTRON_DBPASS@controller/neutron - - Edit the file /etc/neutron/api-paste.ini by copying the following under - [filter:authtoken] section: - [filter:authtoken] +admin_password = NEUTRON_PASS + + + Edit the database URL under the + [database] section: + [database] +connection = mysql://neutron:NEUTRON_DBPASS@controller/neutron + + + Edit the /etc/neutron/api-paste.ini + file by copying the following statements under + [filter:authtoken] section: + [filter:authtoken] paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory auth_host=controller admin_user=neutron admin_tenant_name=service -admin_password=NEUTRON_PASS - - Edit the file /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini - under the [database] section: - [DATABASE] +admin_password=NEUTRON_PASS + + + Edit the + /etc/neutron/plug-ins/openvswitch/ovs_neutron_plugin.ini + file under the [database] section: + [DATABASE] connection = mysql://neutronUser:NEUTRON_DBPass@10.10.10.51/neutron - Also Edit the [OVS] section: - [OVS] + + + Also edit the [OVS] section: + [OVS] tenant_network_type = gre tunnel_id_ranges = 1:1000 -enable_tunneling = True - - Do not forget to edit the [securitygroup] for changing the firewall driver - [SECURITYGROUP] -firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver - -
- Installing and configuring the Neutron plugins -
- Installing the Open vSwitch (OVS) plugin - Now, we can install, and then configure, our networking plugin. The networking plugin is - what Neutron uses to perform the actual software-defined networking. There are several - options for this. - Switch Over to the Network Node and continue with the following instructions for - installing the OVS Plugin, if you want to use any other plugin, follow the instructions in the linked section, and - skip the OVS section or else continue with the OVS section. - First, we must install the Open vSwitch plugin and its - dependencies. - # sudo apt-get install neutron-plugin-openvswitch - -# sudo yum install openstack-neutron-openvswitch - - # zypper install openstack-neutron-openvswitch - Now, we start up Open vSwitch. - -# service openvswitch start - - Next, we must do some initial configuration for Open vSwitch, no - matter whether we are using VLANs or GRE tunneling. We need to add the - integration bridge (this connects to the VMs) and the external bridge - (this connects to the outside world), called br-int - and br-ex, respectively. - -# ovs-vsctl add-br br-int -# ovs-vsctl add-br br-ex - - Then, we add a "port" (connection) from the interface - EXTERNAL_INTERFACE to br-ex. - -# ovs-vsctl add-port br-ex EXTERNAL_INTERFACE - - In order for things to work correctly, we must also - configure EXTERNAL_INTERFACE to not have an IP address and - to be in promiscuous mode. Additionally, we need to set the newly - created br-ex interface to have the IP address that formerly - belonged to EXTERNAL_INTERFACE. - Do this by first editing - the /etc/sysconfig/network-scripts/ifcfg-EXTERNAL_INTERFACE file: - -DEVICE_INFO_HERE +enable_tunneling = True + + + Edit the [securitygroup] to change the + firewall driver: + [SECURITYGROUP] +firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver + + +
+ Installing and configuring the Neutron plug-ins +
+ Installing the Open vSwitch (OVS) plug-in + Now, you can install, and then configure, our networking + plug-in. The networking plug-in is what Neutron uses to + perform the actual software-defined networking. There are + several options for this. + Switch Over to the Network Node and continue with the + following instructions for installing the OVS plug-in, if you + want to use any other plug-in, follow the instructions in the linked section, and skip the OVS + section or else continue with the OVS section. + + + Install the Open vSwitch plug-in and its + dependencies. + # sudo apt-get install neutron-plug-in-openvswitch + # sudo yum install openstack-neutron-openvswitch + # zypper install openstack-neutron-openvswitch + + + Start Open vSwitch: + # service openvswitch start + + + You must configure Open vSwitch whether you use VLANs + or GRE tunneling. You must add the + br-int integration bridge (this + connects to the VMs) and the br-ex + external bridge (this connects to the outside + world). + # ovs-vsctl add-br br-int# ovs-vsctl add-br br-ex + + + Add a port + (connection) from the interface + EXTERNAL_INTERFACE to + br-ex. + # ovs-vsctl add-port br-ex EXTERNAL_INTERFACE + + + Configure + EXTERNAL_INTERFACE to not + have an IP address and to be in promiscuous mode. + Additionally, you must set the newly created + br-ex interface to have the IP + address that formerly belonged to + EXTERNAL_INTERFACE. + Edit the + /etc/sysconfig/network-scripts/ifcfg-EXTERNAL_INTERFACE + file: + DEVICE_INFO_HERE ONBOOT=yes BOOTPROTO=none -PROMISC=yes - - Then, edit the /etc/sysconfig/network-scripts/ifcfg-br-ex file: - -DEVICE=br-ex +PROMISC=yes + + + Edit the + /etc/sysconfig/network-scripts/ifcfg-br-ex + file: + DEVICE=br-ex TYPE=Bridge ONBOOT=no BOOTPROTO=none IPADDR=EXTERNAL_INTERFACE_IP NETMASK=EXTERNAL_INTERFACE_NETMASK -GATEWAY=EXTERNAL_INTERFACE_GATEWAY - +GATEWAY=EXTERNAL_INTERFACE_GATEWAY + - Finally, we can now configure the settings for the particular plugins. 
- First, there are some general OVS configuration options to set, - no matter whether you use VLANs or GRE tunneling. We need to tell L3 agent and DHCP - agent we are using OVS by editing /etc/neutron/l3_agent.ini and /etc/neutron/dhcp_agent.ini (respectively): - -interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver - - Now, we can install, and then configure, our networking plugin. The networking - plugin is what Neutron uses to perform the actual software-defined networking. There - are several options for this. Choose one, follow - the instructions in the linked - section, and then return here. - Now that you've installed and configured a plugin (you did do that, right?), it is time to - configure the main part of Neutron. - First, we configure Neutron core by editing /etc/neutron/neutron.conf - by copying the following under keystone_authtoken section: - -[keystone_authtoken] + + Configure the settings for the particular plug-ins. + You must set some general OVS + configuration options whether you use VLANs or GRE + tunneling. You must tell L3 agent and DHCP agent you are + using OVS. Edit the + /etc/neutron/l3_agent.ini and + /etc/neutron/dhcp_agent.ini files + (respectively): + interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver + + + Now, you can install, and then configure, our + networking plug-in. The networking plug-in is what Neutron + uses to perform the actual software-defined networking. + There are several options for this. Choose one, follow the + instructions in the linked section, and then + return here. + + + Now that you've installed and configured a plug-in (you + did do that, right?), it is time to configure the main part of + Neutron. + + + Configure Neutron core. 
Edit the + /etc/neutron/neutron.conf file by + copying the following statements under + keystone_authtoken section: + [keystone_authtoken] auth_host = controller auth_port = 35357 auth_protocol = http admin_tenant_name = service admin_user = neutron -admin_password = NEUTRON_DBPASS - - Also edit the database URL under the [database] section: - [database] -connection = mysql://neutron:NEUTRON_DBPASS@controller/neutron - - Edit the file /etc/neutron/api-paste.ini by copying the following under - [filter:authtoken] section: - [filter:authtoken] -paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory -auth_host=controller -admin_user=neutron -admin_tenant_name=service -admin_password=NEUTRON_PASS - - Install DHCP Agent, Metadata Agent, - dnsmasq neutron-dhcp-agent neutron-l3-agent - Then, we just need to tell the DHCP agent by typing the following command: - -# service neutron-dhcp-agent restart -# service neutron-l3-agent restart - - Neutron has support for plugins for this purpose, but in general we just use the Dnsmasq - plugin. Edit /etc/neutron/dhcp_agent.ini: - -dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq - - Now, restart the rest of Neutron: - -# service neutron-dhcp-agent restart -# service neutron-l3-agent restart - - - Next, configure the - base networks and return here. - Similarly, we need to also tell Neutron core to use OVS by - editing /etc/neutron/neutron.conf: - -core_plugin = neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2 - - Finally, we need to tell the OVS plugin how to connect to - the database by editing /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini: - -[database] -connection = mysql://neutron:NEUTRON_DBPASS@controller/neutron - - Now, we must decide which networking type we want. We can either use GRE tunneling - or VLANs. GRE tunneling - can be easier and simpler to set up, but is less flexible in certain regards. VLANs are more flexible, but can be harder to set up and have more issues. 
- - Now, you have the option of configuring a firewall. If you do not wish to enforce firewall rules - (called Security Groups by Neutron), you may use - the neutron.agent.firewall.NoopFirewall. Otherwise, you may choose one of the Neutron - firewall plugins to use. To use the Hybrid OVS-IPTables driver (the most common choice), - edit /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini: - -[securitygroup] +admin_password = NEUTRON_PASS + + + Edit the database URL under the + [database] section: + [database] + connection = mysql://neutron:NEUTRON_DBPASS@controller/neutron + + + Edit the + /etc/neutron/api-paste.ini file by + copying the following statements under + [filter:authtoken] section: + [filter:authtoken] + paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory + auth_host=controller + admin_user=neutron + admin_tenant_name=service + admin_password=NEUTRON_PASS + + + Install DHCP Agent, Metadata Agent, dnsmasq + neutron-dhcp-agent + neutron-l3-agent. + + + Tell the DHCP agent. Enter the following + command: + # service neutron-dhcp-agent restart# service neutron-l3-agent restart + + + Neutron has support for plug-ins for this purpose, but + in general you just use the Dnsmasq plug-in. Edit the + /etc/neutron/dhcp_agent.ini + file: + dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq + + + Now, restart the rest of Neutron: + # service neutron-dhcp-agent restart# service neutron-l3-agent restart + + + + Next, configure + the base networks and return here. + + + Similarly, you must also tell Neutron core to use + OVS by editing + /etc/neutron/neutron.conf: + + core_plug-in = neutron.plug-ins.openvswitch.ovs_neutron_plug-in.OVSNeutronplug-inV2 + + + + Finally, you must tell the OVS + plug-in how to connect to the database by editing + /etc/neutron/plug-ins/openvswitch/ovs_neutron_plugin.ini: + + [database] + connection = mysql://neutron:NEUTRON_DBPASS@controller/neutron + + + + Now, you must decide which networking type you want. 
+ You can either use GRE tunneling or VLANs. GRE
+ tunneling can be easier and simpler to set up,
+ but is less flexible in certain regards. VLANs are more flexible, but can be harder to
+ set up and have more issues.
+ 
+ 
+ 
+ You can configure a firewall. If you do not wish to
+ enforce firewall rules (called security
+ groups by Neutron), you can use the
+ neutron.agent.firewall.NoopFirewall.
+ Otherwise, you can choose to use one of the Neutron
+ firewall plug-ins. To use the Hybrid OVS-IPTables driver
+ (the most common choice), edit
+ /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini:
+ [securitygroup]
 # Firewall driver for realizing neutron security group function.
-firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
-
-
- You must use at least the No-Op firewall mentioned above. Otherwise, Horizon and
- other OpenStack services will not be able to get and set required VM boot options.
-
+firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
+
+ You must use at least the No-Op firewall. Otherwise,
+ Horizon and other OpenStack services cannot get and set
+ required VM boot options.
+
+
- After having configured OVS, restart the OVS plugin:
-
-# service neutron-openvswitch-agent restart
-
- Now, return whence you came!
-
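The security-group driver edit above can be sketched as a shell fragment. This is a hedged example, not part of the patch: it writes the `[securitygroup]` stanza named above into a scratch file (standing in for `/etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini`) so you can see the exact lines the guide expects before touching the real config.

```shell
# Sketch only: a scratch file stands in for the real plugin config.
CONF=$(mktemp)
cat >> "$CONF" <<'EOF'
[securitygroup]
# Firewall driver for realizing neutron security group function.
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
EOF
# Confirm the option landed in the file before copying it for real.
grep '^firewall_driver' "$CONF"
```

On a real node you would append the same stanza to the actual `ovs_neutron_plugin.ini` and restart the OVS agent as described below.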
- Configuring the Neutron <acronym>OVS</acronym> plugin for GRE Tunneling - First, we must configure the L3 agent and the DHCP agent to not use namespaces by editing /etc/neutron/l3_agent.ini and /etc/neutron/dhcp_agent.ini (respectively): - -use_namespaces = False - - Then, we tell the OVS plugin to use GRE tunneling, using an integration bridge of br-int and a tunneling bridge of br-tun, and to use a local IP for the tunnel of DATA_INTERFACE's IP. Edit /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini: - -[ovs] + + Restart the OVS plug-in: + # service neutron-openvswitch-agent restart + + + Now, return whence you came! + + +
+ Configuring the Neutron <acronym>OVS</acronym> plug-in
+ for GRE Tunneling
+ 
+ 
+ Configure the L3 agent and the DHCP agent to not use
+ namespaces. Edit the
+ /etc/neutron/l3_agent.ini and
+ /etc/neutron/dhcp_agent.ini files
+ (respectively):
+ use_namespaces = False
+ 
+ 
+ Tell the OVS plug-in to use GRE
+ tunneling, using an integration bridge of
+ br-int and a tunneling bridge of
+ br-tun, and to use a local IP for
+ the tunnel of
+ DATA_INTERFACE's IP. Edit
+ /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini:
+ [ovs]
 tenant_network_type = gre
 tunnel_id_ranges = 1:1000
 enable_tunneling = True
 integration_bridge = br-int
 tunnel_bridge = br-tun
-local_ip = DATA_INTERFACE_IP
-
- Now, return to the OVS general instruction
-
-
- Configuring the Neutron <acronym>OVS</acronym> plugin for VLANs - First, we must tell OVS that we want to use VLANS by editing /etc/neutron/plugins/openvswitch/ovs_neutron_plugin: - -[ovs] -tenant_network_type = vlan -network_vlan_ranges = physnet1:1:4094 -bridge_mappings = physnet1:br-DATA_INTERFACE - - Then, create the bridge for DATA_INTERFACE and add DATA_INTERFACE to it: - -# ovs-vsctl add-br br-DATA_INTERFACE -# ovs-vsctl add-port br-DATA_INTERFACE DATA_INTERFACE - - - Now that we have added DATA_INTERFACE to a bridge, we need to transfer its IP address over to the bridge. This is done in a manner similar to the way EXTERNAL_INTERFACE's IP address was transfered to br-ex. However, in this case, we do not need to turn promiscuous mode on. - Next, we must tell the L3 and DHCP agents that we want to use namespaces, by editing /etc/neutron/l3_agent.ini and /etc/neutron/dhcp_agent.ini, respectively: - -use_namespaces = True - - Additionally, if you a using certain kernels with partial support for namespaces, you need to enable veth support, by editing the above files again: - -ovs_use_veth = True - - Now, return to the OVS general instruction -
+local_ip = DATA_INTERFACE_IP
+ + + Return to the OVS general + instruction + + +
+
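The GRE `[ovs]` stanza above can be assembled and checked locally before editing the real file. A minimal sketch, assuming an example tunnel address (`10.0.1.10` is illustrative, not from the guide; substitute your DATA_INTERFACE's IP):

```shell
# Sketch only: build the GRE stanza in a scratch file and verify it.
CONF=$(mktemp)
DATA_INTERFACE_IP=10.0.1.10   # example value; use your data interface's IP
cat > "$CONF" <<EOF
[ovs]
tenant_network_type = gre
tunnel_id_ranges = 1:1000
enable_tunneling = True
integration_bridge = br-int
tunnel_bridge = br-tun
local_ip = $DATA_INTERFACE_IP
EOF
# local_ip must be a literal address, not the placeholder text.
grep '^local_ip' "$CONF"
```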
+ Configuring the Neutron <acronym>OVS</acronym> plug-in
+ for VLANs
+ 
+ 
+ Tell OVS to use VLANs. Edit the
+ /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini
+ file:
+ [ovs]
+ tenant_network_type = vlan
+ network_vlan_ranges = physnet1:1:4094
+ bridge_mappings = physnet1:br-DATA_INTERFACE
+ 
+ 
+ Create the bridge for
+ DATA_INTERFACE and add
+ DATA_INTERFACE to
+ it:
+ # ovs-vsctl add-br br-DATA_INTERFACE
+ # ovs-vsctl add-port br-DATA_INTERFACE DATA_INTERFACE
+ 
+ 
+ 
+ Now that you have added
+ DATA_INTERFACE to a bridge,
+ you must transfer its IP address over to the bridge.
+ This is done in a manner similar to the way
+ EXTERNAL_INTERFACE's IP
+ address was transferred to br-ex.
+ However, in this case, you do not need to turn
+ promiscuous mode on.
+ 
+ 
+ Tell the L3 and DHCP agents to use namespaces. Edit
+ the /etc/neutron/l3_agent.ini and
+ /etc/neutron/dhcp_agent.ini
+ files, respectively:
+ use_namespaces = True
+ 
+ 
+ Additionally, if you are using certain kernels with
+ partial support for namespaces, you must enable veth
+ support. Edit the above files again:
+ 
+ ovs_use_veth = True
+ 
+ 
+ 
+ Return to the OVS general
+ instruction.
+ 
+
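The VLAN `[ovs]` stanza above can likewise be drafted and checked in a scratch file first. A hedged sketch, assuming `eth1` as an example data interface (not from the guide; substitute your own):

```shell
# Sketch only: draft the VLAN stanza with the placeholder substituted.
CONF=$(mktemp)
DATA_INTERFACE=eth1            # example interface name
cat > "$CONF" <<EOF
[ovs]
tenant_network_type = vlan
network_vlan_ranges = physnet1:1:4094
bridge_mappings = physnet1:br-$DATA_INTERFACE
EOF
# The bridge name in bridge_mappings must match the bridge you create
# with ovs-vsctl add-br (br-eth1 in this example).
grep '^bridge_mappings' "$CONF"
```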
Creating the base Neutron networks - In the upcoming sections, the text - SPECIAL_OPTIONS may occur. This should be - replaced with any options specific to your networking plugin choices. - See here to check if your plugin needs any special options. + In the following sections, the text + SPECIAL_OPTIONS may occur. + Replace this text with any options specific to your networking + plug-in choices. See here to check if your plug-in needs any special + options. - First, we will create the external network, called - ext-net (or something else, your choice). This - network represents a slice of the outside world. VMs will not be directly - linked to this network; instead, they will be on sub-networks and be - assigned floating IPs from this network's subnet's pool of floating IPs. - Neutron will then route the traffic appropriately. - - # neutron net-create ext-net -- --router:external=True SPECIAL_OPTIONS - - Next, we will create the associated subnet. It should have the same gateway - as EXTERNAL_INTERFAE would have had, and the same CIDR - details as well. It will not have DHCP, since it represents a slice of the external - world: - - # neutron subnet-create ext-net --allocation-pool start=FLOATING_IP_START,end=FLOATING_IP_END --gateway=EXTERNAL_INTERFACE_GATEWAY --enable_dhcp=False EXTERNAL_INTERFACE_CIDR - - Now, create one or more initial tenants. Choose one (we'll call it - DEMO_TENANT) to use for the following - parts. - Then, we will create the router attached to the external network. This - router will route traffic to the internal subnets as appropriate (you may - wish to create it under the a given tenant, in which case you should - append --tenant-id DEMO_TENANT_ID to the - command). 
- - # neutron router-create ext-to-int - - Now, we'll connect the router to ext-net by setting the - router's gateway as ext-net: - - # neutron router-gateway-set EXT_TO_INT_ID EXT_NET_ID - - Then, we'll create an internal network for DEMO_TENANT - (and associated subnet over an arbitrary interal IP range, say, - 10.5.5.0/24), and connect it to the router by setting it as a port: - - # neutron net-create --tenant-id DEMO_TENANT_ID demo-net SPECIAL_OPTIONS - # neutron subnet-create --tenant-id DEMO_TENANT_ID demo-net 10.5.5.0/24 --gateway 10.5.5.1 - # neutron router-interface-add EXT_TO_INT_ID DEMO_NET_SUBNET_ID - - Now, check your plugin's special options page to see if there are steps left to - perform, and then return whence you came. -
- Plugin-specific Neutron networks options -
+
+ 
+ 
+ 
+ 
+ Create the external network, called
+ ext-net (or something else, your
+ choice). This network represents a slice of the outside
+ world. VMs are not directly linked to this network; instead,
+ they are on sub-networks and are assigned floating IPs from
+ this network's subnet's pool of floating IPs. Neutron routes
+ the traffic appropriately.
+ # neutron net-create ext-net -- --router:external=True SPECIAL_OPTIONS
+ 
+ 
+ Create the associated subnet. It should have
+ the same gateway as
+ EXTERNAL_INTERFACE would have
+ had, and the same CIDR details as well. It does not have
+ DHCP, because it represents a slice of the external
+ world:
+ # neutron subnet-create ext-net --allocation-pool start=FLOATING_IP_START,end=FLOATING_IP_END --gateway=EXTERNAL_INTERFACE_GATEWAY --enable_dhcp=False EXTERNAL_INTERFACE_CIDR
+ 
+ 
+ Create one or more initial tenants. Choose one (call
+ it DEMO_TENANT) to use for
+ the following parts.
+ Create the router attached to the external network. This
+ router routes traffic to the internal subnets as appropriate
+ (you may wish to create it under a given tenant, in
+ which case you should append --tenant-id
+ DEMO_TENANT_ID to the command).
+ # neutron router-create ext-to-int
+ 
+ 
+ Connect the router to ext-net by
+ setting the router's gateway as
+ ext-net:
+ # neutron router-gateway-set EXT_TO_INT_ID EXT_NET_ID
+ 
+ 
+ Create an internal network for
+ DEMO_TENANT (and associated
+ subnet over an arbitrary internal IP range, say,
+ 10.5.5.0/24), and connect it to the
+ router by setting it as a port:
+ # neutron net-create --tenant-id DEMO_TENANT_ID demo-net SPECIAL_OPTIONS
+# neutron subnet-create --tenant-id DEMO_TENANT_ID demo-net 10.5.5.0/24 --gateway 10.5.5.1
+# neutron router-interface-add EXT_TO_INT_ID DEMO_NET_SUBNET_ID
+ 
+ 
+ Check your plug-in's special options page for remaining
+ steps. Then, return whence you came.
+ 
+ 
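Because the net-create and subnet-create commands above take several substituted placeholders, it can help to assemble them in the shell and review them before running anything. A hedged sketch with example values (the `203.0.113.x` addresses are illustrative, not from the guide):

```shell
# Sketch only: substitute example values into the placeholders and
# print the assembled command lines for review; nothing is executed.
FLOATING_IP_START=203.0.113.101
FLOATING_IP_END=203.0.113.200
EXTERNAL_INTERFACE_GATEWAY=203.0.113.1
EXTERNAL_INTERFACE_CIDR=203.0.113.0/24
NET_CMD="neutron net-create ext-net -- --router:external=True"
SUBNET_CMD="neutron subnet-create ext-net --allocation-pool start=$FLOATING_IP_START,end=$FLOATING_IP_END --gateway=$EXTERNAL_INTERFACE_GATEWAY --enable_dhcp=False $EXTERNAL_INTERFACE_CIDR"
echo "$NET_CMD"
echo "$SUBNET_CMD"
```

Once the printed lines look right, run them (as shown in the steps above) on the controller with your real addresses.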
+ Plug-in-specific Neutron network options
+ 
Open vSwitch Network Configuration Options -
+
GRE Tunneling network options
- When creating networks, you should use the options:
- 
- --provider:network_type gre --provider:segmentation_id SEG_ID
- 
- SEG_ID should be 2
- for the external network, and just any unique number inside the
- tunnel range specified before for any other network.
+ When creating networks, you should use the
+ options:
+ --provider:network_type gre --provider:segmentation_id SEG_ID
+ SEG_ID should be
+ 2 for the external network, and any
+ unique number inside the tunnel range specified before
+ for any other network.
 
- These options are not needed beyond the first network, as
- Neutron will automatically increment the segmentation id and copy
- the network type option for any additional networks.
+ These options are not needed beyond the first
+ network, as Neutron automatically increments the
+ segmentation ID and copies the network type option for any
+ additional networks.
 
- After you have finished creating all the networks, we need to
- specify which some more details for the L3 agent. We need to tell it
- what the external network's ID is, as well as the ID of the router
- associated with this machine (because we are not using namespaces,
- there can be only one router per machine). To do this, edit
+ After you have finished creating all the networks, you
+ must specify some more details for the L3 agent. You
+ must tell it what the external network's ID is, as well as
+ the ID of the router associated with this machine (because
+ you are not using namespaces, there can be only one router
+ per machine). To do this, edit
 /etc/neutron/l3_agent.ini:
- 
- gateway_external_network_id = EXT_NET_ID
- router_id = EXT_TO_INT_ID
- 
+ gateway_external_network_id = EXT_NET_ID
+router_id = EXT_TO_INT_ID
 Then, restart the L3 agent.
- 
- # service neutron-l3-agent restart
- 
+ # service neutron-l3-agent restart
 Return to the starting point.
-
+
VLAN network options
 FIXME
- When creating networks, you should use the options:
- 
- --provider:network_type vlan --provider:physical_network physnet1 --provider:segmentation_id SEG_ID
- 
- SEG_ID should be 2 for the external network, and just any unique number
- inside the vlan range specified before for any other network.
+ When creating networks, use the following
+ options:
+ --provider:network_type vlan --provider:physical_network physnet1 --provider:segmentation_id SEG_ID
+ SEG_ID should be
+ 2 for the external network, and any
+ unique number inside the VLAN range specified before
+ for any other network.
 
- These options are not needed beyond the first network, as
- Neutron will automatically increment the segmentation id and copy
- the network type and physical network options for any additional
- networks.
+ These options are not needed beyond the first
+ network, as Neutron automatically increments the
+ segmentation ID and copies the network type and physical
+ network options for any additional networks.
 
- Some NICs have linux drivers that do not handle VLANs properly.
- See the ovs-vlan-bug-workaround and ovs-vlan-test
- man pages for more information. Additionally, you may try turning off
- rx-vlan-offload and tx-vlan-offload using ethtool on
- the DATA_INTERFACE. Additionally, VLAN tags add an additonal 4 bytes on to the packet size. If your NICs cannot handle large packets, make sure to set the MTU 4 lower than normal on the DATA_INTERFACE.
- If you are running OpenStack inside a virtualized environment (for testing purposes),
- switching to the virtio NIC type (or a similar technology if
- you are not using KVM/QEMU) may solve the issue.
+ Some NICs have Linux drivers that do not handle
+ VLANs properly. See the
+ ovs-vlan-bug-workaround and
+ ovs-vlan-test man pages for more
+ information. Additionally, you may try turning off
+ rx-vlan-offload and
+ tx-vlan-offload using
+ ethtool on the
+ DATA_INTERFACE. 
+ Additionally, VLAN tags add an additional 4 bytes to
+ the packet size. If your NICs cannot handle large
+ packets, set the MTU 4 bytes lower than normal on
+ the DATA_INTERFACE.
+ If you run OpenStack inside a virtualized
+ environment (for testing purposes), switching to the
+ virtio NIC type (or a similar
+ technology if you are not using KVM/QEMU) may solve the
+ issue.
- Install Required Networking Support on a Dedicated Compute Node
+ Install Required Networking Support on a Dedicated Compute
+ Node
 
- This is for any node which is running compute services but is not running the full
- network stack.
+ This is for any node that runs compute services but does
+ not run the full network stack.
 
-
- By default, an automated firewall configuration tool called system-config-firewall in place on RHEL. This tool is a graphical interface (and a curses-style interface with -tui on the end of the name) for configuring IP tables as a basic firewall. You should disable it when working with Neutron unless you are familiar with the underlying network technologies, as, by default, it will block various types of network traffic that are important to Neutron. To disable it, simple launch the program and uncheck the "Enabled" checkbox.
- Once you have succesfully set up OpenStack with Neutron, you can
- reenable it if you wish and figure out exactly how you need to configure
- it. For the duration of the setup, however, it will make finding network
- issues easier if you don't have it blocking all unrecognized
- traffic.
+ By default, the system-config-firewall
+ automated firewall configuration tool is in place on RHEL.
+ This graphical interface (and a curses-style interface with
+ -tui on the end of the name) enables you
+ to configure IP tables as a basic firewall. You should disable
+ it when you work with Neutron unless you are familiar with the
+ underlying network technologies, as, by default, it blocks
+ various types of network traffic that are important to
+ Neutron. To disable it, simply launch the program and clear
+ the Enabled check box.
+ After you successfully set up OpenStack with Neutron, you
+ can re-enable and configure the tool. However, during Neutron
+ setup, disable the tool to make it easier to debug network
+ issues. 
- - - To start out, we need to disable packet destination filtering (route verification) in order to let the networking services route traffic to the VMs. Edit /etc/sysctl.conf (and then restart networking): - net.ipv4.conf.all.rp_filter=0 + + + Disable packet destination filtering (route + verification) to let the networking services route traffic + to the VMs. Edit the /etc/sysctl.conf + file and restart networking: + net.ipv4.conf.all.rp_filter=0 net.ipv4.conf.default.rp_filter=0 - - Next, we need to install and configure plugin components. Follow the instructions for configuring and - installing your plugin of choice. - - Now that you've installed and configured a plugin (you did do that, right?), it is time to configure the main part of Neutron by editing /etc/neutron/neutron.conf: - - auth_host = CONTROLLER_NODE_MGMT_IP - admin_tenant_name = service - admin_user = neutron - admin_password = ADMIN_PASSWORD - auth_url = http://CONTROLLER_NODE_MGMT_IP:35357/v2.0 - auth_strategy = keystone - rpc_backend = YOUR_RPC_BACKEND - PUT_YOUR_RPC_BACKEND_SETTINGS_HERE_TOO + + + Install and configure plug-in components. To install and + configure your plug-in, see . + + + Configure the main part of Neutron. Edit the + /etc/neutron/neutron.conf + file: + auth_host = CONTROLLER_NODE_MGMT_IP +admin_tenant_name = service +admin_user = neutron +admin_password = ADMIN_PASSWORD +auth_url = http://CONTROLLER_NODE_MGMT_IP:35357/v2.0 +auth_strategy = keystone +rpc_backend = YOUR_RPC_BACKEND +PUT_YOUR_RPC_BACKEND_SETTINGS_HERE_TOO + +
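The rp_filter step above can be sketched against a scratch copy of the file. This hedged example appends the two settings named in the guide and counts them; on a real node you would edit `/etc/sysctl.conf` itself and reload with `sysctl -p`:

```shell
# Sketch only: a scratch file stands in for /etc/sysctl.conf.
SYSCTL=$(mktemp)
cat >> "$SYSCTL" <<'EOF'
net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0
EOF
# Both the "all" and "default" interfaces must have rp_filter disabled.
grep -c 'rp_filter=0' "$SYSCTL"
```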
- Installing and configuring the Neutron plugins on the dedicated compute Node
+ Installing and configuring the Neutron plug-ins on the
+ dedicated compute node
- Installing the Open vSwitch (OVS) plugin on the dedicated compute node - First, we must install the Open vSwitch plugin and its - dependencies. - - # sudo yum install openstack-neutron-openvswitch - - # zypper install openstack-neutron-openvswitch - - Now, we start up Open vSwitch. - - # service openvswitch start - - Next, we must do some initial configuration for Open vSwitch, no - matter whether we are using VLANs or GRE tunneling. We need to add the - integration bridge (this connects to the VMs), called - br-int. - # ovs-vsctl add-br br-int - Finally, we can now configure the settings for the particular plugins. First, - there are some general OVS configuration options to set, no matter - whether you use VLANs or GRE tunneling. We need to tell Neutron core to - use OVS by editing /etc/neutron/neutron.conf: - - core_plugin = neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2 - - We also need to tell the OVS plugin how to connect to the - database by editing /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini: - - [database] - sql_connection = DATABASE_TYPE://neutron:NETURON_PASSWORD@CONTROLLER_NODE_HOSTNAME/neutron - - Now, we must perform the configuration for the network type we chose when - configuring the network node. GRE tunneling or VLANs. - - Now, you have the option of configuring a firewall. If you do not wish to enforce - firewall rules (called Security Groups by Neutron), you may use the - neutron.agent.firewall.NoopFirewall. Otherwise, you may choose one of - the Neutron firewall plugins to use. To use the Hybrid OVS-IPTables driver (the most - common choice), edit - /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini: - - [securitygroup] - # Firewall driver for realizing neutron security group function. - firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver - - - You must use at least the No-Op firewall mentioned above. 
- Otherwise, Horizon and other OpenStack services will not be able to - get and set required VM boot options. - - - After you have finished the above OVS configuration as - well as the core Neutron configuration after this - section, restart the Neutron Open vSwitch agent: - - # service neutron-openvswitch-agent restart - - Now, return where you started. -
- Configuring the Neutron <acronym>OVS</acronym> plugin for GRE Tunneling on the dedicated compute node
- We must tell the OVS plugin to use GRE tunneling,
- using an integration bridge of br-int and a tunneling bridge of br-tun, and to use a local IP for the tunnel of DATA_INTERFACE's IP. Edit /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini:
- 
- [ovs]
- tenant_network_type = gre
- tunnel_id_ranges = 1:1000
- enable_tunneling = True
- integration_bridge = br-int
- tunnel_bridge = br-tun
- local_ip = DATA_INTERFACE_IP
- 
- 
- Now, return to the OVS general
- instructions.
+ Installing the Open vSwitch (OVS) plug-in on the
+ dedicated compute node
+ 
+ 
+ Install the Open vSwitch plug-in and its
+ dependencies.
+ # sudo yum install openstack-neutron-openvswitch
+ # zypper install openstack-neutron-openvswitch
+ 
+ 
+ 
+ Start Open vSwitch:
+ # service openvswitch start
+ 
+ 
+ You must configure Open vSwitch whether you use
+ VLANs or GRE tunneling. You must add the
+ br-int integration bridge, which
+ connects to the VMs.
+ # ovs-vsctl add-br br-int
+ 
+ 
+ Configure the settings for the particular plug-ins.
+ You must set some general OVS
+ configuration options whether you use VLANs or GRE
+ tunneling. You must tell Neutron core to use
+ OVS. Edit the
+ /etc/neutron/neutron.conf
+ file:
+ core_plugin = neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2
+ 
+ 
+ Tell the OVS plug-in how to
+ connect to the database. Edit the
+ /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini
+ file:
+ [database]
+sql_connection = DATABASE_TYPE://neutron:NEUTRON_PASSWORD@CONTROLLER_NODE_HOSTNAME/neutron
+ 
+ 
+ Configure the network type that you chose when you
+ configured the network node, which is GRE tunneling or VLANs.
+ 
+ 
+ 
+ You can configure a firewall. If you do not wish to
+ enforce firewall rules (called security
+ groups by Neutron), you can use the
+ neutron.agent.firewall.NoopFirewall. 
+ Otherwise, you can choose to use one of the Neutron
+ firewall plug-ins. To use the Hybrid OVS-IPTables driver
+ (the most common choice), edit the
+ /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini
+ file:
+ [securitygroup]
+# Firewall driver for realizing neutron security group function.
+firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
+ 
+ You must use at least the No-Op firewall.
+ Otherwise, Horizon and other OpenStack services cannot
+ get and set required VM boot options.
+ 
+ 
+ 
+ 
+ After you complete OVS configuration and
+ the core Neutron configuration after this
+ section, restart the Neutron Open vSwitch
+ agent:
+ # service neutron-openvswitch-agent restart
+ 
+ 
+ Return where you started.
+ 
+ 
+ Configuring the Neutron <acronym>OVS</acronym>
+ plug-in for GRE Tunneling on the dedicated compute
+ node
+ 
+ 
+ Tell the OVS plug-in to use GRE
+ tunneling with a br-int integration
+ bridge, a br-tun tunneling bridge,
+ and a local IP for the tunnel of
+ DATA_INTERFACE's IP. Edit
+ the
+ /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini
+ file:
+ [ovs]
+tenant_network_type = gre
+tunnel_id_ranges = 1:1000
+enable_tunneling = True
+integration_bridge = br-int
+tunnel_bridge = br-tun
+local_ip = DATA_INTERFACE_IP
+ 
+ 
+ Now, return to the OVS general
+ instructions.
+ 
+ 
- -
- Configuring the Neutron <acronym>OVS</acronym> plugin for VLANs - (work in progress) - - First, we must tell OVS that we want to use VLANS by editing /etc/neutron/plugins/openvswitch/ovs_neutron_plugin: - - [ovs] - tenant_network_type = vlan - network_vlan_ranges = physnet1:1:4094 - bridge_mappings = physnet1:br-DATA_INTERFACE - - - Then, create the bridge for DATA_INTERFACE and add DATA_INTERFACE to it: - - # ovs-vsctl add-br br-DATA_INTERFACE - # ovs-vsctl add-port br-DATA_INTERFACE DATA_INTERFACE - - - Now, return to the OVS general - instruction. +
+ Configuring the Neutron <acronym>OVS</acronym>
+ plug-in for VLANs (work in progress)
+ 
+ 
+ 
+ Tell OVS to use VLANs. Edit the
+ /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini
+ file:
+ [ovs]
+tenant_network_type = vlan
+network_vlan_ranges = physnet1:1:4094
+bridge_mappings = physnet1:br-DATA_INTERFACE
+ 
+ 
+ Create the bridge for
+ DATA_INTERFACE and add
+ DATA_INTERFACE to
+ it:
+ # ovs-vsctl add-br br-DATA_INTERFACE
+# ovs-vsctl add-port br-DATA_INTERFACE DATA_INTERFACE
+ 
+ 
+ Return to the OVS general
+ instruction.
+ 
+ 
- Install required Networking support on a dedicated controller node
- 
+ Install required Networking support on a dedicated
+ controller node
 
- By default, an automated firewall configuration tool called
- system-config-firewall in place on RHEL. This tool is a
- graphical interface (and a curses-style interface with -tui on
- the end of the name) for configuring IP tables as a basic firewall. You should
- disable it when working with Neutron unless you are familiar with the underlying
- network technologies, as, by default, it will block various types of network traffic
- that are important to Neutron. To disable it, simple launch the program and uncheck
- the "Enabled" checkbox.
- Once you have successfully set up OpenStack with Neutron, you can
- re-enable it if you wish and figure out exactly how you need to
- configure it. For the duration of the setup, however, it will make
- finding network issues easier if you don't have it blocking all
- unrecognized traffic.
+ By default, the system-config-firewall
+ automated firewall configuration tool is in place on RHEL.
+ This graphical interface (and a curses-style interface with
+ -tui on the end of the name) enables you
+ to configure IP tables as a basic firewall. You should disable
+ it when you work with Neutron unless you are familiar with the
+ underlying network technologies, as, by default, it blocks
+ various types of network traffic that are important to
+ Neutron. To disable it, simply launch the program and clear
+ the Enabled check box.
+ After you successfully set up OpenStack with Neutron, you
+ can re-enable and configure the tool. However, during Neutron
+ setup, disable the tool to make it easier to debug network
+ issues. 
- First, we need to install the main Neutron server, the Neutron libraries for python, and the Neutron CLI: - - # yum install openstack-neutron python-neutron python-neutronclient - - - # zypper install openstack-neutron python-neutron python-neutronclient - - - Now, we need to set up the Neutron server, as usual. Make sure to do the core - server component setup (RPC backend config, auth_strategy, and so on). Then, we'll - need to configure Neutron's copy of api-paste.ini at /etc/neutron/api-paste.ini: - - [filter:authtoken] - EXISTING_STUFF_HERE - admin_tenant_name = service - admin_user = neutron - admin_password = ADMIN_PASSWORD - - Now, we need to configure the plugin you chose when we configured the Network node. Follow the instructions and return. - Next, we need to tell Nova about Neutron. Specifically, we need to tell Nova about Neutron's endpoint, and that it will handle firewall issues, so don't use a firewall though Nova. We can do this by editing /etc/nova/nova.conf: - - network_api_class=nova.network.neutronv2.api.API - neutron_url=http://CONTROLLER_MGMT_IP:9696 - neutron_auth_strategy=keystone - neutron_admin_tenant_name=service - neutron_admin_username=neutron - neutron_admin_password=password - neutron_admin_auth_url=http://CONTROLLER_MGMT_IP:35357/v2.0 - firewall_driver=nova.virt.firewall.NoopFirewallDriver - security_group_api=neutron - - Finally, we just need to start neutron-server: - - # service neutron-server start - - - Make sure to check that the plugin restarted successfully. If you - get errors about missing the file plugin.ini, - simply make a symlink pointing at - /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini - with the "name" /etc/neutron/plugins.ini. - -
- Installing and configuring the Neutron plugins on the dedicated controller Node -
- Installing the Open vSwitch (OVS) plugin on the dedicated controller node
- First, we must install the Open vSwitch plugin:
- 
- # sudo yum install openstack-neutron-openvswitch
- 
- 
- # zypper install openstack-neutron-openvswitch
- 
+ 
+ 
+ Install the main Neutron server, Neutron libraries for
+ Python, and the neutron command-line interface (CLI):
+ # yum install openstack-neutron python-neutron python-neutronclient
+ # zypper install openstack-neutron python-neutron python-neutronclient
- Then, we can now configure the settings for the particular plugins. First, there are some general OVS configuration options to set, no matter whether you use VLANs or GRE tunneling. We need to tell Neutron core to use OVS by editing /etc/neutron/neutron.conf:
- 
- core_plugin = neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2
- 
- We also need to tell the OVS plugin how to connect to the database by editing /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini:
- 
- [database]
- sql_connection = DATABASE_TYPE://neutron:NETURON_PASSWORD@CONTROLLER_NODE_HOSTNAME/neutron
- 
- Now, we must perform the configuration for the network type we chose when configuring the network node. GRE tunneling or VLANs.
- 
- 
+ 
+ 
+ Set up the Neutron server, as usual. Make sure to do the
+ core server component setup (RPC back end configuration,
+ auth_strategy, and so on). Configure the Neutron copy of the
+ api-paste.ini file at
+ /etc/neutron/api-paste.ini:
+ [filter:authtoken]
+EXISTING_STUFF_HERE
+admin_tenant_name = service
+admin_user = neutron
+admin_password = ADMIN_PASSWORD
+ 
+ 
+ Configure the plug-in you chose when you configured the
+ Network node. Follow the instructions and return.
+ 
+ 
+ Tell Nova about Neutron. Specifically, you must tell
+ Nova about the Neutron endpoint and that it handles firewall
+ issues, so a firewall is not required through Nova. 
Edit the
+ /etc/nova/nova.conf file:
+ network_api_class=nova.network.neutronv2.api.API
+neutron_url=http://CONTROLLER_MGMT_IP:9696
+neutron_auth_strategy=keystone
+neutron_admin_tenant_name=service
+neutron_admin_username=neutron
+neutron_admin_password=password
+neutron_admin_auth_url=http://CONTROLLER_MGMT_IP:35357/v2.0
+firewall_driver=nova.virt.firewall.NoopFirewallDriver
+security_group_api=neutron
+ 
+ 
+ Start neutron-server:
+ # service neutron-server start
 
+ Make sure that the plug-in restarted successfully. If
+ you get errors about a missing
+ plugin.ini file, make a symlink
+ that points to
+ /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini
+ with the name
+ /etc/neutron/plugins.ini.
+ 
+ 
+ 
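The nova.conf settings above can be drafted in a scratch file with the controller address substituted, then reviewed before merging into the real `/etc/nova/nova.conf`. A hedged sketch (the `192.168.0.10` address is an example, not from the guide):

```shell
# Sketch only: a scratch file stands in for /etc/nova/nova.conf.
NOVA=$(mktemp)
CONTROLLER_MGMT_IP=192.168.0.10   # example management address
cat > "$NOVA" <<EOF
network_api_class=nova.network.neutronv2.api.API
neutron_url=http://$CONTROLLER_MGMT_IP:9696
neutron_auth_strategy=keystone
neutron_admin_tenant_name=service
neutron_admin_username=neutron
neutron_admin_password=password
neutron_admin_auth_url=http://$CONTROLLER_MGMT_IP:35357/v2.0
firewall_driver=nova.virt.firewall.NoopFirewallDriver
security_group_api=neutron
EOF
# The URL must contain a literal address, not the placeholder text.
grep '^neutron_url' "$NOVA"
```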
- Configuring the Neutron <acronym>OVS</acronym> plugin for GRE Tunneling on the dedicated compute node - We must tell the OVS plugin to use GRE tunneling. - Edit /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini: - - [ovs] - tenant_network_type = gre - tunnel_id_ranges = 1:1000 - enable_tunneling = True - - Now, return to the OVS general instructions. + + +
+ Installing and configuring the Neutron plug-ins on the
+ dedicated controller node
+ Installing the Open vSwitch (OVS) plug-in on the
+ dedicated controller node
+ 
+ 
+ Install the Open vSwitch plug-in:
+ # sudo yum install openstack-neutron-openvswitch
+ # zypper install openstack-neutron-openvswitch
+ 
+ 
+ 
+ Configure the settings for the particular plug-ins.
+ You must set some general OVS
+ configuration options whether you use VLANs or GRE
+ tunneling. You must tell Neutron core to use
+ OVS. Edit the
+ /etc/neutron/neutron.conf
+ file:
+ core_plugin = neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2
+ 
+ 
+ Tell the OVS plug-in how to
+ connect to the database. Edit the
+ /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini
+ file:
+ [database]
+sql_connection = DATABASE_TYPE://neutron:NEUTRON_PASSWORD@CONTROLLER_NODE_HOSTNAME/neutron
+ 
+ 
+ Configure the network type that you chose when you
+ configured the network node: GRE tunneling or VLANs.
+ 
+ 
+ 
+ Notice that the dedicated controller node does not
+ actually need to run the Open vSwitch agent or run
+ Open vSwitch itself.
+ 
+ 
+ 
+ Return where you started.
+ 
+ 
+ Configuring the Neutron <acronym>OVS</acronym>
+ plug-in for GRE Tunneling on the dedicated controller
+ node
+ 
+ 
+ Tell the OVS plug-in to use GRE
+ tunneling. Edit the
+ /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini
+ file:
+ [ovs]
+tenant_network_type = gre
+tunnel_id_ranges = 1:1000
+enable_tunneling = True
+ 
+ 
+ Return to the OVS general
+ instructions.
+ 
+ 
-
- Configuring the Neutron <acronym>OVS</acronym> plugin for VLANs - - First, we must tell OVS that we want to use VLANS by - editing /etc/neutron/plugins/openvswitch/ovs_neutron_plugin: - - [ovs] - tenant_network_type = vlan - network_vlan_ranges = physnet1:1:4094 - - Now, return to the OVS general instructions. +
+ Configuring the Neutron <acronym>OVS</acronym>
+ plug-in for VLANs
+ 
+ 
+ 
+ Tell OVS to use VLANs. Edit the
+ /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini
+ file, as follows:
+ [ovs]
+tenant_network_type = vlan
+network_vlan_ranges = physnet1:1:4094
+ 
+ 
+ Return to the OVS general
+ instructions.
+ 
+ 