Single flat network

This section describes how to install the OpenStack Networking service and its components for a single flat network use case. The following diagram shows the setup. For simplicity, all nodes have one interface for management traffic and one or more interfaces for traffic to and from VMs. The management network is 100.1.1.0/24, with the controller node at 100.1.1.2. The example uses the Open vSwitch plug-in and agent; you can modify this setup to use another supported plug-in and its agent.

The following table describes the nodes in the setup:

Controller Node
    Runs the Networking service, the Identity Service, and the Compute services that are required to deploy VMs (nova-api and nova-scheduler, for example). The node must have at least one network interface, which is connected to the management network. The host name is controller, which every other node resolves to the IP address of the controller node.
    The nova-network service must not be running; it is replaced by the OpenStack Networking component, neutron. To delete an existing nova-network network, use the nova-manage network delete command:

    # nova-manage network delete --help
    Usage: nova-manage network delete <args> [options]

    Options:
      -h, --help            show this help message and exit
      --fixed_range=<x.x.x.x/yy>
                            Network to delete
      --uuid=<uuid>         UUID of network to delete

    Note that a network must first be disassociated from a project with the nova network-disassociate command before it can be deleted (see the example after the prerequisites list).

Compute Node
    Runs the Networking L2 agent and the Compute services that run VMs (nova-compute specifically, and optionally other nova-* services depending on configuration). The node must have at least two network interfaces. The first communicates with the controller node through the management network. The second handles VM traffic on the data network. The VMs can receive their IP addresses from the DHCP agent on this network.

Network Node
    Runs the Networking L2 agent and the DHCP agent. The DHCP agent allocates IP addresses to the VMs on the network. The node must have at least two network interfaces. The first communicates with the controller node through the management network. The second handles VM traffic on the data network.

Router
    The router has IP address 30.0.0.1, which is the default gateway for all VMs. The router must be able to access public networks.

The demo assumes the following prerequisites:

Controller node
- Relevant Compute services are installed, configured, and running.
- Glance is installed, configured, and running, and an image is available.
- OpenStack Identity is installed, configured, and running. A Networking user named neutron exists in the service tenant with the password NEUTRON_PASS.
- Additional services: depending on the message broker your deployment uses, RabbitMQ is running with the default guest user and the password RABBIT_PASS, or Qpid is running with the default guest user and password.
- MySQL server is running (the administrative user is root).

Compute node
- Compute is installed and configured.
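For reference, removing a leftover nova-network network before switching to neutron looks roughly like the following. This is a minimal sketch: NETWORK_ID is a placeholder for the UUID of the network to remove, not a value from this guide.

$ nova network-disassociate NETWORK_ID
# nova-manage network delete --uuid=NETWORK_ID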
Install

Controller node - Networking server

Install the Networking server. On distributions that configure packages with debconf, respond to the prompts to configure the database, the keystone_authtoken settings, and the RabbitMQ credentials. See the installation instructions for your distribution for details.

Create the neutron database that the connection string below refers to (see the example at the end of this procedure for one way to create it).

If not already configured, update the Networking /etc/neutron/neutron.conf configuration file to use the Identity Service, the plug-in, and the database:

# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken \
  auth_port 35357
# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken \
  auth_protocol http
# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken \
  admin_tenant_name service
# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken \
  admin_user neutron
# openstack-config --set /etc/neutron/neutron.conf keystone_authtoken \
  admin_password NEUTRON_PASS

Configure Networking to connect to the database:

# openstack-config --set /etc/neutron/neutron.conf database connection \
  mysql://neutron:NEUTRON_DBPASS@controller/neutron

Configure Networking to use your chosen plug-in:

# openstack-config --set /etc/neutron/neutron.conf DEFAULT \
  core_plugin neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2
# openstack-config --set /etc/neutron/neutron.conf DEFAULT \
  control_exchange neutron

Configure access to the RabbitMQ service:

# openstack-config --set /etc/neutron/neutron.conf DEFAULT \
  rpc_backend neutron.openstack.common.rpc.impl_kombu
# openstack-config --set /etc/neutron/neutron.conf DEFAULT \
  rabbit_host controller
# openstack-config --set /etc/neutron/neutron.conf DEFAULT \
  rabbit_userid guest
# openstack-config --set /etc/neutron/neutron.conf DEFAULT \
  rabbit_password RABBIT_PASS

Alternatively, if your deployment uses Qpid, configure access to the Qpid message queue:

# openstack-config --set /etc/neutron/neutron.conf DEFAULT \
  rpc_backend neutron.openstack.common.rpc.impl_qpid
# openstack-config --set /etc/neutron/neutron.conf DEFAULT \
  qpid_hostname controller
# openstack-config --set /etc/neutron/neutron.conf DEFAULT \
  qpid_port 5672
# openstack-config --set /etc/neutron/neutron.conf DEFAULT \
  qpid_username guest
# openstack-config --set /etc/neutron/neutron.conf DEFAULT \
  qpid_password guest

If openstack-config is not available, edit /etc/neutron/neutron.conf directly to set the plug-in, message queue, database, and Identity Service options:

[DEFAULT]
core_plugin = neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2
control_exchange = neutron
rabbit_host = controller
rabbit_password = RABBIT_PASS
notification_driver = neutron.openstack.common.notifier.rabbit_notifier

[database]
connection = mysql://neutron:NEUTRON_DBPASS@controller/neutron

[keystone_authtoken]
admin_tenant_name = service
admin_user = neutron
admin_password = NEUTRON_PASS

Update the plug-in /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini configuration file with the bridge mappings:

[ovs]
network_vlan_ranges = physnet1
bridge_mappings = physnet1:br-eth0

or set the same options with openstack-config:

# openstack-config --set \
  /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini OVS \
  network_vlan_ranges physnet1
# openstack-config --set \
  /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini OVS \
  bridge_mappings physnet1:br-eth0

Restart the Networking service (the service name varies by distribution):

# service neutron-server restart
or
# service openstack-neutron restart
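The neutron database and database user referenced by the connection string above must exist before neutron-server starts. A minimal sketch of creating them on the controller's MySQL server, assuming the root database user and the NEUTRON_DBPASS password used in this guide:

# mysql -u root -p
mysql> CREATE DATABASE neutron;
mysql> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'NEUTRON_DBPASS';
mysql> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'NEUTRON_DBPASS';
mysql> exit

Adjust the grants to match the hosts from which the Networking services connect.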
Compute node - Compute

Install the nova-compute service. See the installation instructions for your distribution for details.

Update the Compute /etc/nova/nova.conf configuration file to make use of OpenStack Networking:

network_api_class=nova.network.neutronv2.api.API
neutron_admin_username=neutron
neutron_admin_password=NEUTRON_PASS
neutron_admin_auth_url=http://controller:35357/v2.0/
neutron_auth_strategy=keystone
neutron_admin_tenant_name=service
neutron_url=http://controller:9696/

or set the same options with openstack-config:

# openstack-config --set /etc/nova/nova.conf DEFAULT \
  network_api_class nova.network.neutronv2.api.API
# openstack-config --set /etc/nova/nova.conf DEFAULT \
  neutron_url http://controller:9696
# openstack-config --set /etc/nova/nova.conf DEFAULT \
  neutron_auth_strategy keystone
# openstack-config --set /etc/nova/nova.conf DEFAULT \
  neutron_admin_tenant_name service
# openstack-config --set /etc/nova/nova.conf DEFAULT \
  neutron_admin_username neutron
# openstack-config --set /etc/nova/nova.conf DEFAULT \
  neutron_admin_password NEUTRON_PASS
# openstack-config --set /etc/nova/nova.conf DEFAULT \
  neutron_admin_auth_url http://controller:35357/v2.0

Restart the Compute service (the service name varies by distribution):

# service openstack-nova-compute restart
or
# service nova-compute restart
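As an optional sanity check on distributions that ship the openstack-config utility, you can read one of the options back to confirm it was written, for example:

# openstack-config --get /etc/nova/nova.conf DEFAULT network_api_class
nova.network.neutronv2.api.API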
Compute and Network node - L2 agent

Install and start Open vSwitch, then configure neutron accordingly. See the installation instructions for your distribution for details.

Add the integration bridge to Open vSwitch:

# ovs-vsctl add-br br-int

Update the Networking /etc/neutron/neutron.conf configuration file:

[DEFAULT]
core_plugin = neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2
control_exchange = neutron
rabbit_host = controller
rabbit_password = RABBIT_PASS
notification_driver = neutron.openstack.common.notifier.rabbit_notifier

[database]
connection = mysql://neutron:NEUTRON_DBPASS@controller/neutron

If not already configured, the same plug-in, message queue, and database options can be set with openstack-config:

# openstack-config --set /etc/neutron/neutron.conf database connection \
  mysql://neutron:NEUTRON_DBPASS@controller/neutron

Configure Networking to use your chosen plug-in:

# openstack-config --set /etc/neutron/neutron.conf DEFAULT \
  core_plugin neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2
# openstack-config --set /etc/neutron/neutron.conf DEFAULT \
  control_exchange neutron

Configure access to the RabbitMQ service:

# openstack-config --set /etc/neutron/neutron.conf DEFAULT \
  rpc_backend neutron.openstack.common.rpc.impl_kombu
# openstack-config --set /etc/neutron/neutron.conf DEFAULT \
  rabbit_host controller
# openstack-config --set /etc/neutron/neutron.conf DEFAULT \
  rabbit_userid guest
# openstack-config --set /etc/neutron/neutron.conf DEFAULT \
  rabbit_password RABBIT_PASS

Alternatively, if your deployment uses Qpid, configure access to the Qpid message queue:

# openstack-config --set /etc/neutron/neutron.conf DEFAULT \
  rpc_backend neutron.openstack.common.rpc.impl_qpid
# openstack-config --set /etc/neutron/neutron.conf DEFAULT \
  qpid_hostname controller
# openstack-config --set /etc/neutron/neutron.conf DEFAULT \
  qpid_port 5672
# openstack-config --set /etc/neutron/neutron.conf DEFAULT \
  qpid_username guest
# openstack-config --set /etc/neutron/neutron.conf DEFAULT \
  qpid_password guest

Update the plug-in /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini configuration file:

[ovs]
network_vlan_ranges = physnet1
bridge_mappings = physnet1:br-eth0

or set the same options with openstack-config:

# openstack-config --set \
  /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini OVS \
  network_vlan_ranges physnet1
# openstack-config --set \
  /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini OVS \
  bridge_mappings physnet1:br-eth0

Create a symbolic link from /etc/neutron/plugin.ini to /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini, or neutron-server will not run:

# ln -s /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini /etc/neutron/plugin.ini

Create the br-eth0 network bridge, which handles communication between nodes through eth0:

# ovs-vsctl add-br br-eth0
# ovs-vsctl add-port br-eth0 eth0

Restart the OpenStack Networking L2 agent (the service name varies by distribution):

# service openstack-neutron-openvswitch-agent restart
or
# service neutron-openvswitch-agent restart

Network node - DHCP agent

Install the DHCP agent. See the installation instructions for your distribution for details.

Update the Networking /etc/neutron/neutron.conf configuration file:

[DEFAULT]
core_plugin = neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2
control_exchange = neutron
rabbit_host = controller
rabbit_password = RABBIT_PASS
notification_driver = neutron.openstack.common.notifier.rabbit_notifier

If not already configured, the plug-in and message queue options can be set with openstack-config:

# openstack-config --set /etc/neutron/neutron.conf DEFAULT \
  core_plugin neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2
# openstack-config --set /etc/neutron/neutron.conf DEFAULT \
  control_exchange neutron

Configure access to the RabbitMQ service:

# openstack-config --set /etc/neutron/neutron.conf DEFAULT \
  rpc_backend neutron.openstack.common.rpc.impl_kombu
# openstack-config --set /etc/neutron/neutron.conf DEFAULT \
  rabbit_host controller
# openstack-config --set /etc/neutron/neutron.conf DEFAULT \
  rabbit_userid guest
# openstack-config --set /etc/neutron/neutron.conf DEFAULT \
  rabbit_password RABBIT_PASS

Alternatively, if your deployment uses Qpid, configure access to the Qpid message queue:

# openstack-config --set /etc/neutron/neutron.conf DEFAULT \
  rpc_backend neutron.openstack.common.rpc.impl_qpid
# openstack-config --set /etc/neutron/neutron.conf DEFAULT \
  qpid_hostname controller
# openstack-config --set /etc/neutron/neutron.conf DEFAULT \
  qpid_port 5672
# openstack-config --set /etc/neutron/neutron.conf DEFAULT \
  qpid_username guest
# openstack-config --set /etc/neutron/neutron.conf DEFAULT \
  qpid_password guest

Ensure that the DHCP agent is using the correct plug-in by changing the configuration in /etc/neutron/dhcp_agent.ini:

interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver

or set it with openstack-config:

# openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT \
  interface_driver neutron.agent.linux.interface.OVSInterfaceDriver

Restart the DHCP agent (the service name varies by distribution):

# service openstack-neutron-dhcp-agent restart
or
# service neutron-dhcp-agent restart
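After the agents restart, a brief verification sketch: on each compute and network node, ovs-vsctl should list both the br-int and br-eth0 bridges, and, once the admin credentials described in the next section are exported, neutron agent-list should report the Open vSwitch agents and the DHCP agent as alive (output omitted here):

# ovs-vsctl show
$ neutron agent-list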
Configure logical network

Use the following commands on the network node.

Ensure that the following environment variables are set. Various clients use these variables to access the Identity Service:

export OS_USERNAME=admin
export OS_PASSWORD=ADMIN_PASS
export OS_TENANT_NAME=admin
export OS_AUTH_URL=http://controller:5000/v2.0/

Get the tenant ID (used as $TENANT_ID later):

# keystone tenant-list
+----------------------------------+---------+---------+
|                id                |   name  | enabled |
+----------------------------------+---------+---------+
| 247e478c599f45b5bd297e8ddbbc9b6a | TenantA |   True  |
| 2b4fec24e62e4ff28a8445ad83150f9d | TenantC |   True  |
| 3719a4940bf24b5a8124b58c9b0a6ee6 | TenantB |   True  |
| 5fcfbc3283a142a5bb6978b549a511ac |   demo  |   True  |
| b7445f221cda4f4a8ac7db6b218b1339 |  admin  |   True  |
+----------------------------------+---------+---------+

Get the user information:

# keystone user-list
+----------------------------------+-------+---------+-------------------+
|                id                |  name | enabled |       email       |
+----------------------------------+-------+---------+-------------------+
| 5a9149ed991744fa85f71e4aa92eb7ec |  demo |   True  |                   |
| 5b419c74980d46a1ab184e7571a8154e | admin |   True  | admin@example.com |
| 8e37cb8193cb4873a35802d257348431 | UserC |   True  |                   |
| c11f6b09ed3c45c09c21cbbc23e93066 | UserB |   True  |                   |
| ca567c4f6c0942bdac0e011e97bddbe3 | UserA |   True  |                   |
+----------------------------------+-------+---------+-------------------+

Create an internal shared network on the admin tenant ($TENANT_ID is b7445f221cda4f4a8ac7db6b218b1339):

$ neutron net-create --tenant-id $TENANT_ID sharednet1 --shared --provider:network_type flat \
  --provider:physical_network physnet1
Created a new network:
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | True                                 |
| id                        | 04457b44-e22a-4a5c-be54-a53a9b2818e7 |
| name                      | sharednet1                           |
| provider:network_type     | flat                                 |
| provider:physical_network | physnet1                             |
| provider:segmentation_id  |                                      |
| router:external           | False                                |
| shared                    | True                                 |
| status                    | ACTIVE                               |
| subnets                   |                                      |
| tenant_id                 | b7445f221cda4f4a8ac7db6b218b1339     |
+---------------------------+--------------------------------------+

Create a subnet on the network:

$ neutron subnet-create --tenant-id $TENANT_ID sharednet1 30.0.0.0/24
Created a new subnet:
+------------------+--------------------------------------------+
| Field            | Value                                      |
+------------------+--------------------------------------------+
| allocation_pools | {"start": "30.0.0.2", "end": "30.0.0.254"} |
| cidr             | 30.0.0.0/24                                |
| dns_nameservers  |                                            |
| enable_dhcp      | True                                       |
| gateway_ip       | 30.0.0.1                                   |
| host_routes      |                                            |
| id               | b8e9a88e-ded0-4e57-9474-e25fa87c5937       |
| ip_version       | 4                                          |
| name             |                                            |
| network_id       | 04457b44-e22a-4a5c-be54-a53a9b2818e7       |
| tenant_id        | 5fcfbc3283a142a5bb6978b549a511ac           |
+------------------+--------------------------------------------+
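As an optional check before booting servers, list the network and subnet with the neutron client to confirm that sharednet1 and its 30.0.0.0/24 subnet exist (output omitted here):

$ neutron net-list
$ neutron subnet-list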
Create a server for tenant A:

$ nova --os-tenant-name TenantA --os-username UserA --os-password password \
  --os-auth-url=http://localhost:5000/v2.0 boot --image tty --flavor 1 \
  --nic net-id=04457b44-e22a-4a5c-be54-a53a9b2818e7 TenantA_VM1

$ nova --os-tenant-name TenantA --os-username UserA --os-password password \
  --os-auth-url=http://localhost:5000/v2.0 list
+--------------------------------------+-------------+--------+---------------------+
| ID                                   | Name        | Status | Networks            |
+--------------------------------------+-------------+--------+---------------------+
| 09923b39-050d-4400-99c7-e4b021cdc7c4 | TenantA_VM1 | ACTIVE | sharednet1=30.0.0.3 |
+--------------------------------------+-------------+--------+---------------------+

Ping the server of tenant A:

# ip addr flush eth0
# ip addr add 30.0.0.201/24 dev br-eth0
$ ping 30.0.0.3

Ping the public network from within the server of tenant A:

$ ping 192.168.1.1
PING 192.168.1.1 (192.168.1.1) 56(84) bytes of data.
64 bytes from 192.168.1.1: icmp_req=1 ttl=64 time=1.74 ms
64 bytes from 192.168.1.1: icmp_req=2 ttl=64 time=1.50 ms
64 bytes from 192.168.1.1: icmp_req=3 ttl=64 time=1.23 ms
^C
--- 192.168.1.1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2003ms
rtt min/avg/max/mdev = 1.234/1.495/1.745/0.211 ms

The address 192.168.1.1 is an IP on the public network to which the router connects.

Create servers for other tenants with similar commands (see the example below). Because all VMs share the same subnet, they can access each other.
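For example, a server for tenant B on the same shared network might be created as follows; this sketch assumes that UserB uses the same password convention as UserA in the listing above:

$ nova --os-tenant-name TenantB --os-username UserB --os-password password \
  --os-auth-url=http://localhost:5000/v2.0 boot --image tty --flavor 1 \
  --nic net-id=04457b44-e22a-4a5c-be54-a53a9b2818e7 TenantB_VM1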
Use case: single flat network

The simplest use case is a single network. This is a "shared" network, meaning it is visible to all tenants via the Networking API. Tenant VMs have a single NIC and receive a fixed IP address from the subnet(s) associated with that network. This use case essentially maps to the FlatManager and FlatDHCPManager models provided by Compute. Floating IPs are not supported.

This network type is often created by the OpenStack administrator to map directly to an existing physical network in the data center (called a "provider network"). This allows the provider to use a physical router on that data center network as the gateway for VMs to reach the outside world. For each subnet on an external network, the gateway configuration on the physical router must be manually configured outside of OpenStack.
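In the walkthrough above, the subnet relied on the default gateway_ip, which neutron set to 30.0.0.1, the address of the physical router described earlier. To make that mapping explicit, the subnet could equally have been created with the --gateway option; this is just an alternative form of the earlier command, and neutron still does not configure the physical router itself:

$ neutron subnet-create --tenant-id $TENANT_ID sharednet1 30.0.0.0/24 --gateway 30.0.0.1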
Use case: multiple flat networks

This use case is similar to the single flat network use case above, except that tenants can see multiple shared networks via the Networking API and can choose which network (or networks) to plug into.
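As a sketch of how a second shared flat network could be added with the Open vSwitch plug-in (physnet2, br-eth1, and eth1 are illustrative assumptions, not part of the walkthrough above): each flat network needs its own physical network label in network_vlan_ranges and bridge_mappings and its own bridge, after which the administrator creates another shared network and tenants pick networks at boot time with --nic net-id=....

# openstack-config --set \
  /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini OVS \
  network_vlan_ranges physnet1,physnet2
# openstack-config --set \
  /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini OVS \
  bridge_mappings physnet1:br-eth0,physnet2:br-eth1
# ovs-vsctl add-br br-eth1
# ovs-vsctl add-port br-eth1 eth1
$ neutron net-create sharednet2 --shared \
  --provider:network_type flat --provider:physical_network physnet2

Apply the plug-in configuration change and bridge creation on every compute and network node, then restart the L2 agent there (service names as noted above).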
Use case: mixed flat and private network

This use case is an extension of the flat network use cases above. In addition to being able to see one or more shared networks via the OpenStack Networking API, tenants can also have access to private per-tenant networks (only visible to tenant users). Created VMs can have NICs on any of the shared or private networks that the tenant owns. This enables the creation of multi-tier topologies that use VMs with multiple NICs. It also enables a VM to act as a gateway so that it can provide services such as routing, NAT, and load balancing.
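Note that the walkthrough above only configures flat provider networks; for per-tenant networks to span multiple hosts, the plug-in also needs a tenant network type such as VLAN or GRE, otherwise tenant networks default to local. With that caveat, a hedged sketch of the tenant-side commands (the private network name, CIDR, and MultiNicVM1 are illustrative; the first net-id is the shared network created earlier, and PRIVATE_NET_ID stands for the id returned by net-create):

$ neutron net-create private01
$ neutron subnet-create private01 10.0.0.0/24 --name private01_subnet
$ nova boot --image tty --flavor 1 \
  --nic net-id=04457b44-e22a-4a5c-be54-a53a9b2818e7 \
  --nic net-id=PRIVATE_NET_ID MultiNicVM1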