Single Flat Network

This section describes how to install the OpenStack Networking service and its components for the "Use Case: Single Flat Network". The diagram below shows the setup. For simplicity, all of the nodes should have one interface for management traffic and one or more interfaces for traffic to and from VMs. The management network is 100.1.1.0/24, with the controller node at 100.1.1.2. The example uses the Open vSwitch plugin and agent; note that the setup can be adapted to use another supported plugin and its agent.

The nodes in the setup are:

Controller Node: Runs the OpenStack Networking service, OpenStack Identity, and all of the OpenStack Compute services that are required to deploy VMs (nova-api and nova-scheduler, for example). The node must have at least one network interface, which is connected to the Management Network. The hostname is 'controlnode', which every other node must resolve to the controller node's IP address. Note: the nova-network service must not be running; it is replaced by OpenStack Networking.

Compute Node: Runs the OpenStack Networking L2 agent and the OpenStack Compute services that run VMs (nova-compute specifically, and optionally other nova-* services depending on configuration). The node must have at least two network interfaces. The first is used to communicate with the controller node through the management network; the second is used for VM traffic on the data network. A VM receives its IP address from the DHCP agent on this network.

Network Node: Runs the OpenStack Networking L2 agent and the DHCP agent. The DHCP agent allocates IP addresses to the VMs on the network. The node must have at least two network interfaces. The first is used to communicate with the controller node through the management network; the second is used for VM traffic on the data network.

Router: Has IP address 30.0.0.1, which is the default gateway for all VMs. The router must be able to reach public networks.

The demo assumes the following:

Controller Node:
- The relevant OpenStack Compute services are installed, configured, and running.
- Glance is installed, configured, and running, and at least one image is available.
- OpenStack Identity is installed, configured, and running. An OpenStack Networking user named neutron has been created in the tenant servicetenant with the password servicepassword.
- Additional services: RabbitMQ is running with the default guest user and its password, and the MySQL server is available (user root, password root).

Compute Node:
- OpenStack Compute (nova-compute) is installed and configured.
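The rest of this example assumes that the controlnode hostname resolves on every node and that the neutron service user already exists. If either is missing, they can be set up along the following lines (a sketch only; $SERVICE_TENANT_ID, $NEUTRON_USER_ID, and $ADMIN_ROLE_ID are placeholders for IDs from your own environment):

# On every node: make controlnode resolve to the controller's management IP
$ echo "100.1.1.2 controlnode" | sudo tee -a /etc/hosts

# On the controller: create the neutron user in the servicetenant tenant
# and give it the admin role (the IDs below are placeholders, not real values)
$ keystone user-create --name neutron --tenant-id $SERVICE_TENANT_ID --pass servicepassword
$ keystone user-role-add --user-id $NEUTRON_USER_ID --role-id $ADMIN_ROLE_ID --tenant-id $SERVICE_TENANT_ID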
Installation

Controller Node - OpenStack Networking Server

Install the OpenStack Networking server.

Create the database ovs_neutron. See the section on the Core Plugins for the exact details.

Update the OpenStack Networking configuration file, /etc/neutron/neutron.conf, setting the plugin choice and the Identity Service user as necessary:

[DEFAULT]
core_plugin = neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2
control_exchange = neutron
rabbit_host = controlnode
notification_driver = neutron.openstack.common.notifier.rabbit_notifier

[keystone_authtoken]
admin_tenant_name=servicetenant
admin_user=neutron
admin_password=servicepassword

Update the plugin configuration file, /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini:

[database]
sql_connection = mysql://root:root@controlnode:3306/ovs_neutron?charset=utf8

[ovs]
network_vlan_ranges = physnet1
bridge_mappings = physnet1:br-eth0

Start the OpenStack Networking service.

Compute Node - OpenStack Compute

Install the nova-compute service.

Update the OpenStack Compute configuration file, /etc/nova/nova.conf. Make sure the following lines are at the end of this file:

network_api_class=nova.network.neutronv2.api.API
neutron_admin_username=neutron
neutron_admin_password=servicepassword
neutron_admin_auth_url=http://controlnode:35357/v2.0/
neutron_auth_strategy=keystone
neutron_admin_tenant_name=servicetenant
neutron_url=http://controlnode:9696/
libvirt_vif_driver=nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver

Restart the OpenStack Compute service.

Compute and Network Node - L2 Agent

Install and start Open vSwitch.

Install the L2 agent (Neutron Open vSwitch agent).

Add the integration bridge to Open vSwitch:

$ sudo ovs-vsctl add-br br-int

Update the OpenStack Networking configuration file, /etc/neutron/neutron.conf:

[DEFAULT]
core_plugin = neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2
control_exchange = neutron
rabbit_host = controlnode
notification_driver = neutron.openstack.common.notifier.rabbit_notifier

Update the plugin configuration file, /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini:

[database]
sql_connection = mysql://root:root@controlnode:3306/ovs_neutron?charset=utf8

[ovs]
network_vlan_ranges = physnet1
bridge_mappings = physnet1:br-eth0

Create the network bridge br-eth0 (all VM communication between the nodes goes through eth0):

$ sudo ovs-vsctl add-br br-eth0
$ sudo ovs-vsctl add-port br-eth0 eth0

Start the OpenStack Networking L2 agent.

Network Node - DHCP Agent

Install the DHCP agent.

Update the OpenStack Networking configuration file, /etc/neutron/neutron.conf:

[DEFAULT]
core_plugin = neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2
control_exchange = neutron
rabbit_host = controlnode
notification_driver = neutron.openstack.common.notifier.rabbit_notifier

Update the DHCP configuration file, /etc/neutron/dhcp_agent.ini:

interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver

Start the DHCP agent.
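Before moving on to the logical network configuration, two quick checks can catch common mistakes: that the ovs_neutron database exists on the controller, and that the Open vSwitch bridges were created on the compute and network nodes. The commands below are only an illustration and assume the root/root MySQL credentials used elsewhere in this example:

# On the controller node (assumes MySQL user root with password root)
$ mysql -u root -proot -e "CREATE DATABASE IF NOT EXISTS ovs_neutron CHARACTER SET utf8;"

# On the compute and network nodes
$ sudo ovs-vsctl show                 # br-int and br-eth0 should both be listed
$ sudo ovs-vsctl list-ports br-eth0   # eth0 should appear as a port of br-eth0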
Logical Network Configuration

All of the commands below can be executed on the network node.

Note: Ensure that the following environment variables are set. They are used by the various clients to access OpenStack Identity.

export OS_USERNAME=admin
export OS_PASSWORD=adminpassword
export OS_TENANT_NAME=admin
export OS_AUTH_URL=http://127.0.0.1:5000/v2.0/

Get the tenant ID (used as $TENANT_ID later):

$ keystone tenant-list
+----------------------------------+---------+---------+
| id                               | name    | enabled |
+----------------------------------+---------+---------+
| 247e478c599f45b5bd297e8ddbbc9b6a | TenantA | True    |
| 2b4fec24e62e4ff28a8445ad83150f9d | TenantC | True    |
| 3719a4940bf24b5a8124b58c9b0a6ee6 | TenantB | True    |
| 5fcfbc3283a142a5bb6978b549a511ac | demo    | True    |
| b7445f221cda4f4a8ac7db6b218b1339 | admin   | True    |
+----------------------------------+---------+---------+

Get the user information:

$ keystone user-list
+----------------------------------+-------+---------+-------------------+
| id                               | name  | enabled | email             |
+----------------------------------+-------+---------+-------------------+
| 5a9149ed991744fa85f71e4aa92eb7ec | demo  | True    |                   |
| 5b419c74980d46a1ab184e7571a8154e | admin | True    | admin@example.com |
| 8e37cb8193cb4873a35802d257348431 | UserC | True    |                   |
| c11f6b09ed3c45c09c21cbbc23e93066 | UserB | True    |                   |
| ca567c4f6c0942bdac0e011e97bddbe3 | UserA | True    |                   |
+----------------------------------+-------+---------+-------------------+

Create an internal shared network on the admin tenant ($TENANT_ID is b7445f221cda4f4a8ac7db6b218b1339):

$ neutron net-create --tenant-id $TENANT_ID sharednet1 --shared --provider:network_type flat \
  --provider:physical_network physnet1
Created a new network:
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | True                                 |
| id                        | 04457b44-e22a-4a5c-be54-a53a9b2818e7 |
| name                      | sharednet1                           |
| provider:network_type     | flat                                 |
| provider:physical_network | physnet1                             |
| provider:segmentation_id  |                                      |
| router:external           | False                                |
| shared                    | True                                 |
| status                    | ACTIVE                               |
| subnets                   |                                      |
| tenant_id                 | b7445f221cda4f4a8ac7db6b218b1339     |
+---------------------------+--------------------------------------+

Create a subnet on the network:

$ neutron subnet-create --tenant-id $TENANT_ID sharednet1 30.0.0.0/24
Created a new subnet:
+------------------+--------------------------------------------+
| Field            | Value                                      |
+------------------+--------------------------------------------+
| allocation_pools | {"start": "30.0.0.2", "end": "30.0.0.254"} |
| cidr             | 30.0.0.0/24                                |
| dns_nameservers  |                                            |
| enable_dhcp      | True                                       |
| gateway_ip       | 30.0.0.1                                   |
| host_routes      |                                            |
| id               | b8e9a88e-ded0-4e57-9474-e25fa87c5937       |
| ip_version       | 4                                          |
| name             |                                            |
| network_id       | 04457b44-e22a-4a5c-be54-a53a9b2818e7       |
| tenant_id        | 5fcfbc3283a142a5bb6978b549a511ac           |
+------------------+--------------------------------------------+
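Before booting any servers, the new network and subnet can optionally be verified with the regular client commands (the output is omitted here and will differ per environment):

$ neutron net-show sharednet1   # should report shared=True, provider:network_type=flat and the new subnet
$ neutron subnet-list           # the 30.0.0.0/24 subnet created above should be listed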
Create a server for tenant A:

$ nova --os-tenant-name TenantA --os-username UserA --os-password password \
  --os-auth-url=http://localhost:5000/v2.0 boot --image tty --flavor 1 \
  --nic net-id=04457b44-e22a-4a5c-be54-a53a9b2818e7 TenantA_VM1

$ nova --os-tenant-name TenantA --os-username UserA --os-password password \
  --os-auth-url=http://localhost:5000/v2.0 list
+--------------------------------------+-------------+--------+---------------------+
| ID                                   | Name        | Status | Networks            |
+--------------------------------------+-------------+--------+---------------------+
| 09923b39-050d-4400-99c7-e4b021cdc7c4 | TenantA_VM1 | ACTIVE | sharednet1=30.0.0.3 |
+--------------------------------------+-------------+--------+---------------------+

Ping the server of tenant A:

$ sudo ip addr flush eth0
$ sudo ip addr add 30.0.0.201/24 dev br-eth0
$ ping 30.0.0.3

Ping the public network from within the server of tenant A:

$ ping 192.168.1.1
PING 192.168.1.1 (192.168.1.1) 56(84) bytes of data.
64 bytes from 192.168.1.1: icmp_req=1 ttl=64 time=1.74 ms
64 bytes from 192.168.1.1: icmp_req=2 ttl=64 time=1.50 ms
64 bytes from 192.168.1.1: icmp_req=3 ttl=64 time=1.23 ms
^C
--- 192.168.1.1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2003ms
rtt min/avg/max/mdev = 1.234/1.495/1.745/0.211 ms

Note: 192.168.1.1 is an IP address on the public network to which the router is connected.

Create servers for other tenants. Servers for the other tenants can be created with similar commands. Because all of these VMs share the same subnet, they can reach one another.
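For example, a server for tenant B could be booted in much the same way. The command below is only a sketch; it assumes that UserB also uses the password password and that the same tty image and flavor are available:

# Assumption: UserB's password is "password"; image and flavor match the TenantA example
$ nova --os-tenant-name TenantB --os-username UserB --os-password password \
  --os-auth-url=http://localhost:5000/v2.0 boot --image tty --flavor 1 \
  --nic net-id=04457b44-e22a-4a5c-be54-a53a9b2818e7 TenantB_VM1

Once TenantB_VM1 is ACTIVE, it should be able to ping TenantA_VM1 at 30.0.0.3 over sharednet1.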