Per-tenant routers with private networks

This section describes how to install the OpenStack Networking service and its components for a use case that has per-tenant routers with private networks. The following figure shows the setup.

As shown in the figure, the setup includes:

- An interface for management traffic on each node.
- Use of the Open vSwitch plug-in.
- GRE tunnels for data transport on all agents.
- Floating IPs and router gateway ports that are configured in an external network, and a physical router that connects the floating IPs and router gateway ports to the outside world.

Because this example runs a DHCP agent and L3 agent on one node, you must set the use_namespaces option to True in the configuration file for each agent. The default is True.

This table describes the nodes:

Controller node
    Runs Networking, Identity Service, and all Compute services that are required to deploy VMs (nova-api and nova-scheduler, for example). The node must have at least one network interface, which connects to the Management Network. The host name is controlnode, which other nodes resolve to the IP of the controller node. The nova-network service must not be running; it is replaced by Networking.

Compute node
    Runs the Networking L2 agent and the Compute services that run VMs (nova-compute specifically, and optionally other nova-* services depending on configuration). The node must have at least two network interfaces. One interface communicates with the controller node through the management network. The other interface is used for VM traffic on the data network. The VM receives its IP address from the DHCP agent on this network.

Network node
    Runs the Networking L2 agent, DHCP agent, and L3 agent. This node has access to the external network. The DHCP agent allocates IP addresses to the VMs on the data network. (Technically, the addresses are allocated by the Networking server and distributed by the DHCP agent.) The node must have at least two network interfaces. One interface communicates with the controller node through the management network. The other interface connects to the external network. GRE tunnels are set up as data networks.

Router
    Has IP 30.0.0.1, which is the default gateway for all VMs. The router must be able to access public networks.

The use case assumes the following:

Controller node

- Relevant Compute services are installed, configured, and running.
- Glance is installed, configured, and running. In addition, an image named tty must be present.
- Identity is installed, configured, and running. A Networking user named neutron is created in the service tenant with the password NEUTRON_PASS.
- Additional services: RabbitMQ is running with the default guest user and password RABBIT_PASS; MySQL server (user is root).

Compute node

- Install and configure Compute.
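The Identity assumption above (a neutron user in the service tenant) can be satisfied with the keystone client. The following is a minimal sketch, assuming the service tenant and the admin role already exist and that you run it with admin credentials:

# keystone user-create --name neutron --pass NEUTRON_PASS
# keystone user-role-add --user neutron --role admin --tenant service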
Install Controller node - Networking server

Install the Networking server.

Create the neutron database. The database name must match the database referenced by the connection option that you set in the next step.

Update the Networking configuration file, /etc/neutron/neutron.conf, with the plug-in choice and the Identity Service user as necessary:

[DEFAULT]
core_plugin = neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2
control_exchange = neutron
rabbit_host = controller
rabbit_password = RABBIT_PASS
notification_driver = neutron.openstack.common.notifier.rabbit_notifier

[database]
connection = mysql://neutron:NEUTRON_DBPASS@controller:3306/neutron

[keystone_authtoken]
admin_tenant_name=service
admin_user=neutron
admin_password=NEUTRON_PASS

Update the plug-in configuration file, /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini:

[ovs]
tenant_network_type = gre
tunnel_id_ranges = 1:1000
enable_tunneling = True

Start the Networking server.

The Networking server can be a service of the operating system. The command to start the service depends on your operating system. The following command runs the Networking server directly:

# neutron-server --config-file /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini \
  --config-file /etc/neutron/neutron.conf

Compute node - Compute

Install Compute services.

Update the Compute configuration file, /etc/nova/nova.conf. Make sure the following lines appear at the end of this file:

network_api_class=nova.network.neutronv2.api.API
neutron_admin_username=neutron
neutron_admin_password=NEUTRON_PASS
neutron_admin_auth_url=http://controlnode:35357/v2.0/
neutron_auth_strategy=keystone
neutron_admin_tenant_name=service
neutron_url=http://controlnode:9696/

Restart relevant Compute services.

Compute and Network node - L2 agent

Install and start Open vSwitch.

Install the L2 agent (Neutron Open vSwitch agent).

Add the integration bridge to Open vSwitch:

# ovs-vsctl add-br br-int

Update the Networking configuration file, /etc/neutron/neutron.conf:

[DEFAULT]
core_plugin = neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2
control_exchange = neutron
rabbit_host = controller
rabbit_password = RABBIT_PASS
notification_driver = neutron.openstack.common.notifier.rabbit_notifier

[database]
connection = mysql://neutron:NEUTRON_DBPASS@controller:3306/neutron

Update the plug-in configuration file, /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini.

Compute node:

[ovs]
tenant_network_type = gre
tunnel_id_ranges = 1:1000
enable_tunneling = True
local_ip = 9.181.89.202

Network node:

[ovs]
tenant_network_type = gre
tunnel_id_ranges = 1:1000
enable_tunneling = True
local_ip = 9.181.89.203

Create the integration bridge br-int if it does not already exist (--may-exist makes the command safe to repeat):

# ovs-vsctl --may-exist add-br br-int

Start the Networking L2 agent.

The Networking Open vSwitch L2 agent can be a service of the operating system. The command to start the service depends on your operating system. The following command runs the service directly:

# neutron-openvswitch-agent --config-file /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini \
  --config-file /etc/neutron/neutron.conf
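After the L2 agents start on both nodes, you can check that the integration and tunnel bridges exist and that GRE ports were created. This is a sketch of such a check, assuming the agent uses its default tunnel bridge name (typically br-tun):

# ovs-vsctl list-br
# ovs-vsctl show

The output should include br-int and the tunnel bridge, and the tunnel bridge should carry a GRE port whose remote_ip is the other node's local_ip.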
Network node - DHCP agent

Install the DHCP agent.

Update the Networking configuration file, /etc/neutron/neutron.conf:

[DEFAULT]
core_plugin = neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2
control_exchange = neutron
rabbit_host = controller
rabbit_password = RABBIT_PASS
notification_driver = neutron.openstack.common.notifier.rabbit_notifier
allow_overlapping_ips = True

Set allow_overlapping_ips because TenantA and TenantC use overlapping subnets.

Update the DHCP configuration file, /etc/neutron/dhcp_agent.ini:

interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver

Start the DHCP agent.

The Networking DHCP agent can be a service of the operating system. The command to start the service depends on your operating system. The following command runs the service directly:

# neutron-dhcp-agent --config-file /etc/neutron/neutron.conf \
  --config-file /etc/neutron/dhcp_agent.ini

Network node - L3 agent

Install the L3 agent.

Add the external network bridge:

# ovs-vsctl add-br br-ex

Add the physical interface, for example eth0, that is connected to the outside network to this bridge:

# ovs-vsctl add-port br-ex eth0

Update the L3 configuration file, /etc/neutron/l3_agent.ini:

[DEFAULT]
interface_driver=neutron.agent.linux.interface.OVSInterfaceDriver
use_namespaces=True

Set the use_namespaces option (it is True by default) because TenantA and TenantC have overlapping subnets and their routers are hosted on the same network node by one L3 agent.

Start the L3 agent.

The Networking L3 agent can be a service of the operating system. The command to start the service depends on your operating system. The following command starts the agent directly:

# neutron-l3-agent --config-file /etc/neutron/neutron.conf \
  --config-file /etc/neutron/l3_agent.ini
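At this point you can verify that all the agents registered with the Networking server. A minimal check, run with admin credentials on the controller, assuming your neutron client and plug-in support the agent extension:

# neutron agent-list

The list should show the Open vSwitch agents on the compute and network nodes plus the DHCP and L3 agents on the network node, all reported as alive.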
Configure logical network

You can run these commands on the network node.

Ensure that the following environment variables are set. Various clients use these to access the Identity Service:

export OS_USERNAME=admin
export OS_PASSWORD=ADMIN_PASS
export OS_TENANT_NAME=admin
export OS_AUTH_URL=http://controller:5000/v2.0/

Get the tenant ID (used as $TENANT_ID later):

# keystone tenant-list
+----------------------------------+---------+---------+
| id                               | name    | enabled |
+----------------------------------+---------+---------+
| 247e478c599f45b5bd297e8ddbbc9b6a | TenantA | True    |
| 2b4fec24e62e4ff28a8445ad83150f9d | TenantC | True    |
| 3719a4940bf24b5a8124b58c9b0a6ee6 | TenantB | True    |
| 5fcfbc3283a142a5bb6978b549a511ac | demo    | True    |
| b7445f221cda4f4a8ac7db6b218b1339 | admin   | True    |
+----------------------------------+---------+---------+

Get user information:

# keystone user-list
+----------------------------------+-------+---------+-------------------+
| id                               | name  | enabled | email             |
+----------------------------------+-------+---------+-------------------+
| 5a9149ed991744fa85f71e4aa92eb7ec | demo  | True    |                   |
| 5b419c74980d46a1ab184e7571a8154e | admin | True    | admin@example.com |
| 8e37cb8193cb4873a35802d257348431 | UserC | True    |                   |
| c11f6b09ed3c45c09c21cbbc23e93066 | UserB | True    |                   |
| ca567c4f6c0942bdac0e011e97bddbe3 | UserA | True    |                   |
+----------------------------------+-------+---------+-------------------+

Create the external network and its subnet as the admin user:

# neutron net-create Ext-Net --provider:network_type local --router:external true
Created a new network:
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | True                                 |
| id                        | 2c757c9e-d3d6-4154-9a77-336eb99bd573 |
| name                      | Ext-Net                              |
| provider:network_type     | local                                |
| provider:physical_network |                                      |
| provider:segmentation_id  |                                      |
| router:external           | True                                 |
| shared                    | False                                |
| status                    | ACTIVE                               |
| subnets                   |                                      |
| tenant_id                 | b7445f221cda4f4a8ac7db6b218b1339     |
+---------------------------+--------------------------------------+

# neutron subnet-create Ext-Net 30.0.0.0/24 --disable-dhcp
Created a new subnet:
+------------------+--------------------------------------------+
| Field            | Value                                      |
+------------------+--------------------------------------------+
| allocation_pools | {"start": "30.0.0.2", "end": "30.0.0.254"} |
| cidr             | 30.0.0.0/24                                |
| dns_nameservers  |                                            |
| enable_dhcp      | False                                      |
| gateway_ip       | 30.0.0.1                                   |
| host_routes      |                                            |
| id               | ba754a55-7ce8-46bb-8d97-aa83f4ffa5f9       |
| ip_version       | 4                                          |
| name             |                                            |
| network_id       | 2c757c9e-d3d6-4154-9a77-336eb99bd573       |
| tenant_id        | b7445f221cda4f4a8ac7db6b218b1339           |
+------------------+--------------------------------------------+

provider:network_type local means that Networking does not have to realize this network through a provider network. router:external true means that an external network is created, where you can create floating IPs and router gateway ports.

Add an IP on the external network to br-ex. Because br-ex is the external network bridge, add the IP 30.0.0.100/24 to br-ex so that you can ping the floating IP of the VM from the network node:

# ip addr add 30.0.0.100/24 dev br-ex
# ip link set br-ex up
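You can confirm that the external bridge is wired correctly before moving on. A minimal check, assuming eth0 is the interface that you added to br-ex earlier:

# ovs-vsctl list-ports br-ex
# ip addr show br-ex

The port list should include eth0, and the bridge should carry the 30.0.0.100/24 address that you just added.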
Serve TenantA. For TenantA, create a private network, subnet, server, router, and floating IP.

Create a network for TenantA:

# neutron --os-tenant-name TenantA --os-username UserA --os-password password \
  --os-auth-url=http://localhost:5000/v2.0 net-create TenantA-Net
Created a new network:
+-----------------+--------------------------------------+
| Field           | Value                                |
+-----------------+--------------------------------------+
| admin_state_up  | True                                 |
| id              | 7d0e8d5d-c63c-4f13-a117-4dc4e33e7d68 |
| name            | TenantA-Net                          |
| router:external | False                                |
| shared          | False                                |
| status          | ACTIVE                               |
| subnets         |                                      |
| tenant_id       | 247e478c599f45b5bd297e8ddbbc9b6a     |
+-----------------+--------------------------------------+

After that, you can use the admin user to query the provider network information:

# neutron net-show TenantA-Net
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | True                                 |
| id                        | 7d0e8d5d-c63c-4f13-a117-4dc4e33e7d68 |
| name                      | TenantA-Net                          |
| provider:network_type     | gre                                  |
| provider:physical_network |                                      |
| provider:segmentation_id  | 1                                    |
| router:external           | False                                |
| shared                    | False                                |
| status                    | ACTIVE                               |
| subnets                   |                                      |
| tenant_id                 | 247e478c599f45b5bd297e8ddbbc9b6a     |
+---------------------------+--------------------------------------+

The network has GRE tunnel ID (that is, provider:segmentation_id) 1.

Create a subnet on the network TenantA-Net:

# neutron --os-tenant-name TenantA --os-username UserA --os-password password \
  --os-auth-url=http://localhost:5000/v2.0 subnet-create TenantA-Net 10.0.0.0/24
Created a new subnet:
+------------------+--------------------------------------------+
| Field            | Value                                      |
+------------------+--------------------------------------------+
| allocation_pools | {"start": "10.0.0.2", "end": "10.0.0.254"} |
| cidr             | 10.0.0.0/24                                |
| dns_nameservers  |                                            |
| enable_dhcp      | True                                       |
| gateway_ip       | 10.0.0.1                                   |
| host_routes      |                                            |
| id               | 51e2c223-0492-4385-b6e9-83d4e6d10657       |
| ip_version       | 4                                          |
| name             |                                            |
| network_id       | 7d0e8d5d-c63c-4f13-a117-4dc4e33e7d68       |
| tenant_id        | 247e478c599f45b5bd297e8ddbbc9b6a           |
+------------------+--------------------------------------------+

Create a server for TenantA:

$ nova --os-tenant-name TenantA --os-username UserA --os-password password \
  --os-auth-url=http://localhost:5000/v2.0 boot --image tty --flavor 1 \
  --nic net-id=7d0e8d5d-c63c-4f13-a117-4dc4e33e7d68 TenantA_VM1
$ nova --os-tenant-name TenantA --os-username UserA --os-password password \
  --os-auth-url=http://localhost:5000/v2.0 list
+--------------------------------------+-------------+--------+----------------------+
| ID                                   | Name        | Status | Networks             |
+--------------------------------------+-------------+--------+----------------------+
| 7c5e6499-7ef7-4e36-8216-62c2941d21ff | TenantA_VM1 | ACTIVE | TenantA-Net=10.0.0.3 |
+--------------------------------------+-------------+--------+----------------------+

It is important to understand that you should not attach the instance to Ext-Net directly. Instead, you must use a floating IP to make it accessible from the external network.
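Because use_namespaces is enabled, the DHCP agent serves TenantA-Net from a dedicated network namespace on the network node. As a quick sanity check that DHCP is working, you can look inside that namespace; this is a sketch, assuming the default qdhcp-<network-id> namespace naming used by the DHCP agent:

# ip netns list
# ip netns exec qdhcp-7d0e8d5d-c63c-4f13-a117-4dc4e33e7d68 ip addr

The namespace should hold a tap interface with an address in 10.0.0.0/24, and you can ping 10.0.0.3 from inside it once the VM is up.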
Create and configure a router for TenantA:

# neutron --os-tenant-name TenantA --os-username UserA --os-password password \
  --os-auth-url=http://localhost:5000/v2.0 router-create TenantA-R1
Created a new router:
+-----------------------+--------------------------------------+
| Field                 | Value                                |
+-----------------------+--------------------------------------+
| admin_state_up        | True                                 |
| external_gateway_info |                                      |
| id                    | 59cd02cb-6ee6-41e1-9165-d251214594fd |
| name                  | TenantA-R1                           |
| status                | ACTIVE                               |
| tenant_id             | 247e478c599f45b5bd297e8ddbbc9b6a     |
+-----------------------+--------------------------------------+

# neutron --os-tenant-name TenantA --os-username UserA --os-password password \
  --os-auth-url=http://localhost:5000/v2.0 router-interface-add \
  TenantA-R1 51e2c223-0492-4385-b6e9-83d4e6d10657
Added interface to router TenantA-R1
# neutron --os-tenant-name TenantA --os-username UserA --os-password password \
  --os-auth-url=http://localhost:5000/v2.0 \
  router-gateway-set TenantA-R1 Ext-Net

Associate a floating IP with TenantA_VM1.

Create a floating IP:

# neutron --os-tenant-name TenantA --os-username UserA --os-password password \
  --os-auth-url=http://localhost:5000/v2.0 floatingip-create Ext-Net
Created a new floatingip:
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| fixed_ip_address    |                                      |
| floating_ip_address | 30.0.0.2                             |
| floating_network_id | 2c757c9e-d3d6-4154-9a77-336eb99bd573 |
| id                  | 5a1f90ed-aa3c-4df3-82cb-116556e96bf1 |
| port_id             |                                      |
| router_id           |                                      |
| tenant_id           | 247e478c599f45b5bd297e8ddbbc9b6a     |
+---------------------+--------------------------------------+

Get the port ID of the VM with ID 7c5e6499-7ef7-4e36-8216-62c2941d21ff:

$ neutron --os-tenant-name TenantA --os-username UserA --os-password password \
  --os-auth-url=http://localhost:5000/v2.0 port-list -- \
  --device_id 7c5e6499-7ef7-4e36-8216-62c2941d21ff
+--------------------------------------+------+-------------------+----------------------------------------------------------------------------------+
| id                                   | name | mac_address       | fixed_ips                                                                        |
+--------------------------------------+------+-------------------+----------------------------------------------------------------------------------+
| 6071d430-c66e-4125-b972-9a937c427520 |      | fa:16:3e:a0:73:0d | {"subnet_id": "51e2c223-0492-4385-b6e9-83d4e6d10657", "ip_address": "10.0.0.3"} |
+--------------------------------------+------+-------------------+----------------------------------------------------------------------------------+

Associate the floating IP with the VM port:

$ neutron --os-tenant-name TenantA --os-username UserA --os-password password \
  --os-auth-url=http://localhost:5000/v2.0 floatingip-associate \
  5a1f90ed-aa3c-4df3-82cb-116556e96bf1 6071d430-c66e-4125-b972-9a937c427520
Associated floatingip 5a1f90ed-aa3c-4df3-82cb-116556e96bf1
$ neutron floatingip-list
+--------------------------------------+------------------+---------------------+--------------------------------------+
| id                                   | fixed_ip_address | floating_ip_address | port_id                              |
+--------------------------------------+------------------+---------------------+--------------------------------------+
| 5a1f90ed-aa3c-4df3-82cb-116556e96bf1 | 10.0.0.3         | 30.0.0.2            | 6071d430-c66e-4125-b972-9a937c427520 |
+--------------------------------------+------------------+---------------------+--------------------------------------+
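Behind the scenes, the L3 agent implements this association as NAT rules inside the router's namespace on the network node. The following check is a sketch, assuming the default qrouter-<router-id> namespace naming:

# ip netns exec qrouter-59cd02cb-6ee6-41e1-9165-d251214594fd iptables -t nat -S

You should see DNAT and SNAT rules that map 30.0.0.2 to 10.0.0.3; running ip addr in the same namespace shows the external gateway address on a qg- interface.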
Ping the public network from the server of TenantA.

In this environment, 192.168.1.0/24 is the public network that is connected to the physical router, which also connects to the external network 30.0.0.0/24. With the floating IP and virtual router in place, you can ping the public network from within TenantA's server:

$ ping 192.168.1.1
PING 192.168.1.1 (192.168.1.1) 56(84) bytes of data.
64 bytes from 192.168.1.1: icmp_req=1 ttl=64 time=1.74 ms
64 bytes from 192.168.1.1: icmp_req=2 ttl=64 time=1.50 ms
64 bytes from 192.168.1.1: icmp_req=3 ttl=64 time=1.23 ms
^C
--- 192.168.1.1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2003ms
rtt min/avg/max/mdev = 1.234/1.495/1.745/0.211 ms

Ping the floating IP of TenantA's server:

$ ping 30.0.0.2
PING 30.0.0.2 (30.0.0.2) 56(84) bytes of data.
64 bytes from 30.0.0.2: icmp_req=1 ttl=63 time=45.0 ms
64 bytes from 30.0.0.2: icmp_req=2 ttl=63 time=0.898 ms
64 bytes from 30.0.0.2: icmp_req=3 ttl=63 time=0.940 ms
^C
--- 30.0.0.2 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2002ms
rtt min/avg/max/mdev = 0.898/15.621/45.027/20.793 ms

Create other servers for TenantA. You can create more servers for TenantA and add floating IPs for them.

Serve TenantC. For TenantC, create two private networks with the subnets 10.0.0.0/24 and 10.0.1.0/24, some servers, one router that connects these two subnets, and some floating IPs.

Create networks and subnets for TenantC:

# neutron --os-tenant-name TenantC --os-username UserC --os-password password \
  --os-auth-url=http://localhost:5000/v2.0 net-create TenantC-Net1
# neutron --os-tenant-name TenantC --os-username UserC --os-password password \
  --os-auth-url=http://localhost:5000/v2.0 subnet-create TenantC-Net1 \
  10.0.0.0/24 --name TenantC-Subnet1
# neutron --os-tenant-name TenantC --os-username UserC --os-password password \
  --os-auth-url=http://localhost:5000/v2.0 net-create TenantC-Net2
# neutron --os-tenant-name TenantC --os-username UserC --os-password password \
  --os-auth-url=http://localhost:5000/v2.0 subnet-create TenantC-Net2 \
  10.0.1.0/24 --name TenantC-Subnet2

After that, you can use the admin user to query the networks' provider network information:

# neutron net-show TenantC-Net1
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | True                                 |
| id                        | 91309738-c317-40a3-81bb-bed7a3917a85 |
| name                      | TenantC-Net1                         |
| provider:network_type     | gre                                  |
| provider:physical_network |                                      |
| provider:segmentation_id  | 2                                    |
| router:external           | False                                |
| shared                    | False                                |
| status                    | ACTIVE                               |
| subnets                   | cf03fd1e-164b-4527-bc87-2b2631634b83 |
| tenant_id                 | 2b4fec24e62e4ff28a8445ad83150f9d     |
+---------------------------+--------------------------------------+

# neutron net-show TenantC-Net2
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | True                                 |
| id                        | 5b373ad2-7866-44f4-8087-f87148abd623 |
| name                      | TenantC-Net2                         |
| provider:network_type     | gre                                  |
| provider:physical_network |                                      |
| provider:segmentation_id  | 3                                    |
| router:external           | False                                |
| shared                    | False                                |
| status                    | ACTIVE                               |
| subnets                   | 38f0b2f0-9f98-4bf6-9520-f4abede03300 |
| tenant_id                 | 2b4fec24e62e4ff28a8445ad83150f9d     |
+---------------------------+--------------------------------------+

You can see the GRE tunnel IDs (that is, provider:segmentation_id) 2 and 3. Also note the network IDs and subnet IDs because you use them to create VMs and the router.
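If you need to look up those IDs again later, a quick way is to list TenantC's subnets as the tenant user. This is a sketch using the same credentials as above:

# neutron --os-tenant-name TenantC --os-username UserC --os-password password \
  --os-auth-url=http://localhost:5000/v2.0 subnet-list

The output lists TenantC-Subnet1 and TenantC-Subnet2 with their subnet IDs and CIDRs.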
Create a server TenantC_VM1 for TenantC on TenantC-Net1:

# nova --os-tenant-name TenantC --os-username UserC --os-password password \
  --os-auth-url=http://localhost:5000/v2.0 boot --image tty --flavor 1 \
  --nic net-id=91309738-c317-40a3-81bb-bed7a3917a85 TenantC_VM1

Create a server TenantC_VM3 for TenantC on TenantC-Net2:

# nova --os-tenant-name TenantC --os-username UserC --os-password password \
  --os-auth-url=http://localhost:5000/v2.0 boot --image tty --flavor 1 \
  --nic net-id=5b373ad2-7866-44f4-8087-f87148abd623 TenantC_VM3

List the servers of TenantC:

# nova --os-tenant-name TenantC --os-username UserC --os-password password \
  --os-auth-url=http://localhost:5000/v2.0 list
+--------------------------------------+-------------+--------+-----------------------+
| ID                                   | Name        | Status | Networks              |
+--------------------------------------+-------------+--------+-----------------------+
| b739fa09-902f-4b37-bcb4-06e8a2506823 | TenantC_VM1 | ACTIVE | TenantC-Net1=10.0.0.3 |
| 17e255b2-b14f-48b3-ab32-5df36566d2e8 | TenantC_VM3 | ACTIVE | TenantC-Net2=10.0.1.3 |
+--------------------------------------+-------------+--------+-----------------------+

Note the server IDs because you use them later.

Make sure the servers get their IPs. You can use VNC to log on to the VMs and check whether they received their IPs. If not, make sure that the Networking components are running correctly and that the GRE tunnels work.

Create and configure a router for TenantC:

# neutron --os-tenant-name TenantC --os-username UserC --os-password password \
  --os-auth-url=http://localhost:5000/v2.0 router-create TenantC-R1
# neutron --os-tenant-name TenantC --os-username UserC --os-password password \
  --os-auth-url=http://localhost:5000/v2.0 router-interface-add \
  TenantC-R1 cf03fd1e-164b-4527-bc87-2b2631634b83
# neutron --os-tenant-name TenantC --os-username UserC --os-password password \
  --os-auth-url=http://localhost:5000/v2.0 router-interface-add \
  TenantC-R1 38f0b2f0-9f98-4bf6-9520-f4abede03300
# neutron --os-tenant-name TenantC --os-username UserC --os-password password \
  --os-auth-url=http://localhost:5000/v2.0 \
  router-gateway-set TenantC-R1 Ext-Net

Checkpoint: ping from within TenantC's servers. Because the router connects the two subnets, the VMs on these subnets can ping each other. And because the gateway for the router is set, TenantC's servers can ping external network IPs, such as 192.168.1.1 and 30.0.0.1.

Associate floating IPs with TenantC's servers. You can use commands similar to the ones used in the section for TenantA.
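As a sketch of those TenantC commands, mirroring the TenantA steps (FLOATING_IP_ID and PORT_ID are placeholders for the values that your own floatingip-create and port-list output returns):

# neutron --os-tenant-name TenantC --os-username UserC --os-password password \
  --os-auth-url=http://localhost:5000/v2.0 floatingip-create Ext-Net
# neutron --os-tenant-name TenantC --os-username UserC --os-password password \
  --os-auth-url=http://localhost:5000/v2.0 port-list -- \
  --device_id b739fa09-902f-4b37-bcb4-06e8a2506823
# neutron --os-tenant-name TenantC --os-username UserC --os-password password \
  --os-auth-url=http://localhost:5000/v2.0 floatingip-associate \
  FLOATING_IP_ID PORT_ID

After the association, the floating IP is reachable from the external network and forwards traffic to TenantC_VM1's fixed IP 10.0.0.3.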
Use case: per-tenant routers with private networks

This use case represents a more advanced router scenario in which each tenant gets at least one router, and potentially has access to the Networking API to create additional routers. Tenants can create their own networks and, potentially, uplink those networks to a router. This model enables tenant-defined, multi-tier applications, with each tier being a separate network behind the router. Because there are multiple routers, tenant subnets can overlap without conflict, because all access to external networks happens through SNAT or floating IPs. Each router uplink and floating IP is allocated from the external network subnet.
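For example, a tenant could build a two-tier application behind its own router with commands like the following sketch. The network, subnet, and router names (Web-Tier, DB-Tier, App-R1) and the CIDRs are hypothetical and only illustrate the pattern already shown for TenantA and TenantC; the commands assume the tenant's credentials are set in the environment and that the neutron client resolves the subnet names to IDs:

$ neutron net-create Web-Tier
$ neutron subnet-create Web-Tier 192.168.10.0/24 --name Web-Subnet
$ neutron net-create DB-Tier
$ neutron subnet-create DB-Tier 192.168.20.0/24 --name DB-Subnet
$ neutron router-create App-R1
$ neutron router-interface-add App-R1 Web-Subnet
$ neutron router-interface-add App-R1 DB-Subnet
$ neutron router-gateway-set App-R1 Ext-Net

Each tier is a separate private network behind App-R1, and only floating IPs assigned to ports on the web tier need to be exposed to the external network.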