Provider Router with Private Networks

This section describes how to install the OpenStack Networking service and its components for the "Use Case: Provider Router with Private Networks." We follow the Basic Install Guide except for the Neutron, Open vSwitch, and virtual networking sections on each of the nodes. The Basic Install Guide uses GRE tunnels; this document describes how to use VLANs for separation instead.

The following figure shows the setup. Because you run the DHCP agent and L3 agent on one node, you must set use_namespaces to True (which is the default) in both agents' configuration files. See Limitations.

The following nodes are in the setup:
Nodes for Demo

Controller
    Runs the OpenStack Networking service, OpenStack Identity, and all of the OpenStack Compute services that are required to deploy a VM. The node must have at least two network interfaces. The first connects to the management network to communicate with the compute and network nodes. The second connects to the API/public network.

Compute
    Runs OpenStack Compute and the OpenStack Networking L2 agent. This node does not have access to the public network. The node must have at least two network interfaces. The first communicates with the controller node through the management network. The second carries VM traffic on the data network; the VM receives its IP address from the DHCP agent on this network.

Network
    Runs the OpenStack Networking L2 agent, DHCP agent, and L3 agent. This node has access to the public network. The DHCP agent allocates IP addresses to the VMs on the network. The L3 agent performs NAT and enables the VMs to access the public network. The node must have at least three network interfaces. The first communicates with the controller node through the management network. The second is used for VM traffic and is on the data network. The third connects to the external gateway on the network.
Installations
Controller

To install and configure the controller node

Install the package:

    # apt-get install neutron-server

Configure Neutron services. Edit the file /etc/neutron/neutron.conf and modify:

    core_plugin = neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2
    auth_strategy = keystone
    fake_rabbit = False
    rabbit_password = password

Edit the file /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini and modify:

    [database]
    sql_connection = mysql://neutron:password@localhost:3306/neutron

    [ovs]
    tenant_network_type = vlan
    network_vlan_ranges = physnet1:100:2999

Edit the file /etc/neutron/api-paste.ini and modify:

    admin_tenant_name = service
    admin_user = neutron
    admin_password = password

Restart the service:

    # service neutron-server restart
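Before moving on, it can help to confirm that neutron-server came up and is answering API requests. A quick smoke test sketch, assuming the controller management address 192.168.0.1 used throughout this guide and admin credentials exported as described under Logical Network Configuration:

```shell
# The Neutron API listens on port 9696 by default; the root URL reports
# the available API versions even without authentication.
curl -s http://192.168.0.1:9696/

# With admin credentials loaded, the client should return an
# (initially empty) network list rather than an authentication error.
neutron net-list
```

These commands are only meaningful against a running controller; adjust the address to your environment.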
Network Node

To install and configure the network node

Install the packages:

    # apt-get install neutron-plugin-openvswitch-agent \
      neutron-dhcp-agent neutron-l3-agent

Start Open vSwitch:

    # service openvswitch-switch start

Add the integration bridge to Open vSwitch:

    # ovs-vsctl add-br br-int

Update the OpenStack Networking configuration file, /etc/neutron/neutron.conf:

    rabbit_password = password
    rabbit_host = 192.168.0.1

Update the plugin configuration file, /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini:

    [database]
    sql_connection = mysql://neutron:password@192.168.0.1:3306/neutron

    [ovs]
    tenant_network_type = vlan
    network_vlan_ranges = physnet1:1:4094
    bridge_mappings = physnet1:br-eth1

Create the network bridge br-eth1 (all VM communication between the nodes occurs through eth1):

    $ sudo ovs-vsctl add-br br-eth1
    $ sudo ovs-vsctl add-port br-eth1 eth1

Create the external network bridge in Open vSwitch:

    $ sudo ovs-vsctl add-br br-ex
    $ sudo ovs-vsctl add-port br-ex eth2

Edit the file /etc/neutron/l3_agent.ini and modify:

    [DEFAULT]
    auth_url = http://192.168.0.1:35357/v2.0
    admin_tenant_name = service
    admin_user = neutron
    admin_password = password
    metadata_ip = 192.168.0.1
    use_namespaces = True

Edit the file /etc/neutron/api-paste.ini and modify:

    [DEFAULT]
    auth_host = 192.168.0.1
    admin_tenant_name = service
    admin_user = neutron
    admin_password = password

Edit the file /etc/neutron/dhcp_agent.ini and modify:

    use_namespaces = True

Restart the networking services:

    # service neutron-plugin-openvswitch-agent start
    # service neutron-dhcp-agent restart
    # service neutron-l3-agent restart
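After the bridges are created, a quick sanity check confirms that Open vSwitch sees what the steps above configured. The expected names below come from this guide's bridge and interface choices:

```shell
# List all bridges; expect br-eth1, br-ex, and br-int.
sudo ovs-vsctl list-br

# Confirm each physical interface landed on the intended bridge.
sudo ovs-vsctl port-to-br eth1   # expect: br-eth1
sudo ovs-vsctl port-to-br eth2   # expect: br-ex
```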
Compute Node

To install and configure the compute node

Install the packages:

    # apt-get install openvswitch-switch neutron-plugin-openvswitch-agent

Start the Open vSwitch service:

    # service openvswitch-switch start

Create the integration bridge:

    # ovs-vsctl add-br br-int

Update the OpenStack Networking configuration file, /etc/neutron/neutron.conf:

    rabbit_password = password
    rabbit_host = 192.168.0.1

Update the file /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini:

    [database]
    sql_connection = mysql://neutron:password@192.168.0.1:3306/neutron

    [ovs]
    tenant_network_type = vlan
    network_vlan_ranges = physnet1:1:4094
    bridge_mappings = physnet1:br-eth1

Restart the Open vSwitch agent:

    # service neutron-plugin-openvswitch-agent restart
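Once the agents on the compute and network nodes are running, their registration can be checked from the controller; a sketch, assuming admin credentials are loaded and your Neutron release supports the agent-management extension:

```shell
# On the controller: the Open vSwitch (L2) agents on both nodes, plus the
# DHCP and L3 agents on the network node, should all be listed as alive.
neutron agent-list
```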
Logical Network Configuration

You can run the commands in the following procedures on the network node.

Ensure that the following environment variables are set. Various clients use these variables to access OpenStack Identity.

Create a novarc file:

    export OS_TENANT_NAME=provider_tenant
    export OS_USERNAME=admin
    export OS_PASSWORD=password
    export OS_AUTH_URL="http://192.168.0.1:5000/v2.0/"
    export SERVICE_ENDPOINT="http://192.168.0.1:35357/v2.0"
    export SERVICE_TOKEN=password

Export the variables:

    $ source novarc
    $ echo "source novarc" >> .bashrc

The admin user creates a network and subnet on behalf of tenant_A. A user from tenant_A can also complete these steps.

To configure internal networking

Get the tenant ID (used as $TENANT_ID later):

    $ keystone tenant-list
    +----------------------------------+--------------------+---------+
    |                id                |        name        | enabled |
    +----------------------------------+--------------------+---------+
    | 48fb81ab2f6b409bafac8961a594980f |  provider_tenant   |   True  |
    | cbb574ac1e654a0a992bfc0554237abf |      service       |   True  |
    | e371436fe2854ed89cca6c33ae7a83cd | invisible_to_admin |   True  |
    | e40fa60181524f9f9ee7aa1038748f08 |      tenant_A      |   True  |
    +----------------------------------+--------------------+---------+

Create an internal network named net1 for tenant_A ($TENANT_ID is e40fa60181524f9f9ee7aa1038748f08):

    $ neutron net-create --tenant-id $TENANT_ID net1
    +---------------------------+--------------------------------------+
    | Field                     | Value                                |
    +---------------------------+--------------------------------------+
    | admin_state_up            | True                                 |
    | id                        | e99a361c-0af8-4163-9feb-8554d4c37e4f |
    | name                      | net1                                 |
    | provider:network_type     | vlan                                 |
    | provider:physical_network | physnet1                             |
    | provider:segmentation_id  | 1024                                 |
    | router:external           | False                                |
    | shared                    | False                                |
    | status                    | ACTIVE                               |
    | subnets                   |                                      |
    | tenant_id                 | e40fa60181524f9f9ee7aa1038748f08     |
    +---------------------------+--------------------------------------+

Create a subnet on the network net1 (the ID field below is used as $SUBNET_ID later):

    $ neutron subnet-create --tenant-id $TENANT_ID net1 10.5.5.0/24
    +------------------+--------------------------------------------+
    | Field            | Value                                      |
    +------------------+--------------------------------------------+
    | allocation_pools | {"start": "10.5.5.2", "end": "10.5.5.254"} |
    | cidr             | 10.5.5.0/24                                |
    | dns_nameservers  |                                            |
    | enable_dhcp      | True                                       |
    | gateway_ip       | 10.5.5.1                                   |
    | host_routes      |                                            |
    | id               | c395cb5d-ba03-41ee-8a12-7e792d51a167       |
    | ip_version       | 4                                          |
    | name             |                                            |
    | network_id       | e99a361c-0af8-4163-9feb-8554d4c37e4f       |
    | tenant_id        | e40fa60181524f9f9ee7aa1038748f08           |
    +------------------+--------------------------------------------+

A user with the admin role must complete the following steps. In this procedure, the user is admin from provider_tenant.

To configure the router and external networking

Create a router named router1 (ID is used as $ROUTER_ID later):

    $ neutron router-create router1
    +-----------------------+--------------------------------------+
    | Field                 | Value                                |
    +-----------------------+--------------------------------------+
    | admin_state_up        | True                                 |
    | external_gateway_info |                                      |
    | id                    | 685f64e7-a020-4fdf-a8ad-e41194ae124b |
    | name                  | router1                              |
    | status                | ACTIVE                               |
    | tenant_id             | 48fb81ab2f6b409bafac8961a594980f     |
    +-----------------------+--------------------------------------+

The --tenant-id parameter is not specified, so this router is assigned to the provider_tenant tenant.

Add an interface to router1 and attach it to the subnet from net1:

    $ neutron router-interface-add $ROUTER_ID $SUBNET_ID
    Added interface to router 685f64e7-a020-4fdf-a8ad-e41194ae124b

You can repeat this step to add more interfaces for other networks that belong to other tenants.
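Because use_namespaces is True, the router and each DHCP server run in their own network namespaces on the network node. A quick way to confirm that the router above is in place (the qrouter-/qdhcp- prefixes are the conventional namespace names; the UUID is this example's router ID):

```shell
# Expect a qrouter-<router-id> namespace, plus a qdhcp-<network-id>
# namespace once the DHCP agent is serving net1.
ip netns

# Inspect the router's interfaces from inside its namespace.
sudo ip netns exec qrouter-685f64e7-a020-4fdf-a8ad-e41194ae124b ip addr
```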
Create the external network named ext_net:

    $ neutron net-create ext_net --router:external=True
    +---------------------------+--------------------------------------+
    | Field                     | Value                                |
    +---------------------------+--------------------------------------+
    | admin_state_up            | True                                 |
    | id                        | 8858732b-0400-41f6-8e5c-25590e67ffeb |
    | name                      | ext_net                              |
    | provider:network_type     | vlan                                 |
    | provider:physical_network | physnet1                             |
    | provider:segmentation_id  | 1                                    |
    | router:external           | True                                 |
    | shared                    | False                                |
    | status                    | ACTIVE                               |
    | subnets                   |                                      |
    | tenant_id                 | 48fb81ab2f6b409bafac8961a594980f     |
    +---------------------------+--------------------------------------+

Create the subnet for floating IPs. The DHCP service is disabled for this subnet.

    $ neutron subnet-create ext_net \
      --allocation-pool start=7.7.7.130,end=7.7.7.150 \
      --gateway 7.7.7.1 7.7.7.0/24 -- --enable_dhcp=False
    +------------------+--------------------------------------------+
    | Field            | Value                                      |
    +------------------+--------------------------------------------+
    | allocation_pools | {"start": "7.7.7.130", "end": "7.7.7.150"} |
    | cidr             | 7.7.7.0/24                                 |
    | dns_nameservers  |                                            |
    | enable_dhcp      | False                                      |
    | gateway_ip       | 7.7.7.1                                    |
    | host_routes      |                                            |
    | id               | aef60b55-cbff-405d-a81d-406283ac6cff       |
    | ip_version       | 4                                          |
    | name             |                                            |
    | network_id       | 8858732b-0400-41f6-8e5c-25590e67ffeb       |
    | tenant_id        | 48fb81ab2f6b409bafac8961a594980f           |
    +------------------+--------------------------------------------+

Set the router's gateway to the external network:

    $ neutron router-gateway-set $ROUTER_ID $EXTERNAL_NETWORK_ID
    Set gateway for router 685f64e7-a020-4fdf-a8ad-e41194ae124b

A user from tenant_A completes the following steps, so the credentials in the environment variables are different from those in the previous procedure.

To allocate floating IP addresses

A floating IP address can be associated with a VM after it starts.
The ID of the port ($PORT_ID) that was allocated for the VM is required and can be found as follows:

    $ nova list
    +--------------------------------------+--------+--------+---------------+
    | ID                                   | Name   | Status | Networks      |
    +--------------------------------------+--------+--------+---------------+
    | 1cdc671d-a296-4476-9a75-f9ca1d92fd26 | testvm | ACTIVE | net1=10.5.5.3 |
    +--------------------------------------+--------+--------+---------------+

    $ neutron port-list -- --device_id 1cdc671d-a296-4476-9a75-f9ca1d92fd26
    +--------------------------------------+------+-------------------+---------------------------------------------------------------------------------+
    | id                                   | name | mac_address       | fixed_ips                                                                       |
    +--------------------------------------+------+-------------------+---------------------------------------------------------------------------------+
    | 9aa47099-b87b-488c-8c1d-32f993626a30 |      | fa:16:3e:b4:d6:6c | {"subnet_id": "c395cb5d-ba03-41ee-8a12-7e792d51a167", "ip_address": "10.5.5.3"} |
    +--------------------------------------+------+-------------------+---------------------------------------------------------------------------------+

Allocate a floating IP (used as $FLOATING_ID):

    $ neutron floatingip-create ext_net
    +---------------------+--------------------------------------+
    | Field               | Value                                |
    +---------------------+--------------------------------------+
    | fixed_ip_address    |                                      |
    | floating_ip_address | 7.7.7.131                            |
    | floating_network_id | 8858732b-0400-41f6-8e5c-25590e67ffeb |
    | id                  | 40952c83-2541-4d0c-b58e-812c835079a5 |
    | port_id             |                                      |
    | router_id           |                                      |
    | tenant_id           | e40fa60181524f9f9ee7aa1038748f08     |
    +---------------------+--------------------------------------+

Associate the floating IP with the VM's port:

    $ neutron floatingip-associate $FLOATING_ID $PORT_ID
    Associated floatingip 40952c83-2541-4d0c-b58e-812c835079a5

Show the floating IP:

    $ neutron floatingip-show $FLOATING_ID
    +---------------------+--------------------------------------+
    | Field               | Value                                |
    +---------------------+--------------------------------------+
    | fixed_ip_address    | 10.5.5.3                             |
    | floating_ip_address | 7.7.7.131                            |
    | floating_network_id | 8858732b-0400-41f6-8e5c-25590e67ffeb |
    | id                  | 40952c83-2541-4d0c-b58e-812c835079a5 |
    | port_id             | 9aa47099-b87b-488c-8c1d-32f993626a30 |
    | router_id           | 685f64e7-a020-4fdf-a8ad-e41194ae124b |
    | tenant_id           | e40fa60181524f9f9ee7aa1038748f08     |
    +---------------------+--------------------------------------+

Test the floating IP:

    $ ping 7.7.7.131
    PING 7.7.7.131 (7.7.7.131) 56(84) bytes of data.
    64 bytes from 7.7.7.131: icmp_req=2 ttl=64 time=0.152 ms
    64 bytes from 7.7.7.131: icmp_req=3 ttl=64 time=0.049 ms
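When the address is no longer needed, the association can be undone and the floating IP returned to the pool. A short sketch using this example's $FLOATING_ID:

```shell
# Detach the floating IP from the VM's port...
neutron floatingip-disassociate $FLOATING_ID

# ...then release it back to ext_net's allocation pool.
neutron floatingip-delete $FLOATING_ID
```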