<?xml version="1.0" encoding="UTF-8"?>
<section xmlns="http://docbook.org/ns/docbook" xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink" version="5.0" xml:id="app_demo_flat">
<title>Single Flat Network</title>
<para>This section describes how to install the OpenStack Networking service
and its components for the "<link
linkend="use_cases_single_flat">Use Case: Single Flat Network
</link>".</para>
<para>The diagram below shows the setup. For simplicity, all of the
nodes have one interface for management traffic and one
or more interfaces for traffic to and from VMs. The management
network is 100.1.1.0/24, with the controller node at 100.1.1.2. The example uses the Open vSwitch
plugin and agent.</para>
<para><emphasis role="bold">Note</emphasis>: the setup can be adapted to use another
supported plugin and its agent.</para>
<mediaobject>
<imageobject>
<imagedata scale="60" fileref="figures/demo_flat_install.png"/>
</imageobject>
</mediaobject>
<para>The following table describes the nodes in the setup:</para>
<informaltable rules="all" width="100%">
<col width="20%"/>
<col width="80%"/>
<thead>
<tr>
<th>Node</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>Controller Node</td>
<td>Runs the OpenStack Networking service, OpenStack Identity, and all of
the OpenStack Compute services that are required to deploy
VMs (<systemitem class="service">nova-api</systemitem> and
<systemitem class="service">nova-scheduler</systemitem>, for example).
The node must have at least one
network interface, which is connected to
the management network. The hostname is 'controlnode', which
every other node resolves to the controller node's IP address.
<emphasis role="bold">Note</emphasis>: the
<systemitem class="service">nova-network</systemitem> service must not be
running, because it is replaced by OpenStack Networking.</td>
</tr>
<tr>
<td>Compute Node</td>
<td>Runs the OpenStack Networking L2 agent and the
OpenStack Compute services that run VMs
(<systemitem class="service">nova-compute</systemitem> specifically, and optionally other
nova-* services depending on configuration). The
node must have at least two network interfaces.
The first is used to communicate with the
controller node via the management network. The
second interface is used for the VM traffic on the
data network. VMs receive their IP addresses from the
DHCP agent on this network.</td>
</tr>
<tr>
<td>Network Node</td>
<td>Runs the OpenStack Networking L2 agent and the DHCP agent.
The DHCP agent allocates
IP addresses to the VMs on the network. The node must have
at least two network interfaces. The first
is used to communicate with the controller
node via the management network. The second
interface is used for the VM traffic on the data network.</td>
</tr>
<tr>
<td>Router</td>
<td>The router has the IP address 30.0.0.1, which is the default gateway for
all VMs. The router must be able to access public networks.</td>
</tr>
</tbody>
</informaltable>
<para>The demo assumes the following:</para>
<para><emphasis role="bold">Controller Node</emphasis></para>
<orderedlist>
<listitem>
<para>Relevant OpenStack Compute services are installed, configured and
running.</para>
</listitem>
<listitem>
<para>Glance is installed, configured, and running. In
addition, at least one image has been registered.</para>
</listitem>
<listitem>
<para>OpenStack Identity is installed, configured, and running. An OpenStack Networking
user <emphasis role="bold">neutron</emphasis> should be created in the tenant <emphasis
role="bold">servicetenant</emphasis> with the password <emphasis role="bold"
>servicepassword</emphasis>.</para>
</listitem>
<listitem>
<para>Additional services: <itemizedlist>
<listitem>
<para>RabbitMQ is running with the default <emphasis role="bold">guest</emphasis> user and its default password.</para>
</listitem>
<listitem>
<para>MySQL server (the user is <emphasis
role="bold">root</emphasis> and the
password is <emphasis role="bold"
>root</emphasis>).</para>
</listitem>
</itemizedlist></para>
</listitem>
</orderedlist>
<para><emphasis role="bold">Compute Node</emphasis></para>
<orderedlist>
<listitem>
<para>The OpenStack Compute <systemitem class="service">nova-compute</systemitem> service is installed and configured.</para>
</listitem>
</orderedlist>
<section xml:id="demo_flat_installions">
<title>Installation</title>
<para>
<itemizedlist>
<listitem>
<para><emphasis role="bold">Controller Node - OpenStack Networking Server</emphasis><orderedlist>
<listitem>
<para>Install the OpenStack Networking server.</para>
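<para>Package names are distribution-specific; on Ubuntu, for example, the server is packaged as <literal>neutron-server</literal>:</para>
<screen># assumes Ubuntu packaging; adjust the package name for your distribution
<prompt>$</prompt> <userinput>sudo apt-get install neutron-server</userinput></screen>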
</listitem>
<listitem>
<para>Create database <emphasis role="bold">ovs_neutron</emphasis>.
See the section on the <link linkend="arch_overview">Core
Plugins</link> for the exact details.</para>
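<para>For example, with the MySQL <emphasis role="bold">root</emphasis>/<emphasis role="bold">root</emphasis> credentials assumed by this demo, the database can be created as follows:</para>
<screen># assumes the MySQL credentials listed under "Additional services"
<prompt>$</prompt> <userinput>mysql -u root -proot -e "CREATE DATABASE ovs_neutron;"</userinput></screen>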
</listitem>
<listitem>
<para>Update the OpenStack Networking configuration file,
<filename>/etc/neutron/neutron.conf</filename>, setting the
plugin choice and Identity Service user as necessary:</para>
<programlisting>[DEFAULT]
core_plugin = neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2
control_exchange = neutron
rabbit_host = controlnode
notification_driver = neutron.openstack.common.notifier.rabbit_notifier
[keystone_authtoken]
admin_tenant_name=servicetenant
admin_user=neutron
admin_password=servicepassword
</programlisting>
</listitem>
<listitem>
<para>Update the plugin configuration file,
<filename>/etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini</filename>:</para>
<programlisting>[database]
sql_connection = mysql://root:root@controlnode:3306/ovs_neutron?charset=utf8
[ovs]
network_vlan_ranges = physnet1
bridge_mappings = physnet1:br-eth0
</programlisting>
</listitem>
<listitem>
<para>Start the OpenStack Networking service.</para>
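<para>For example, on Ubuntu, where the service is named <literal>neutron-server</literal>:</para>
<screen># service name assumes Ubuntu packaging
<prompt>$</prompt> <userinput>sudo service neutron-server restart</userinput></screen>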
</listitem>
</orderedlist></para>
</listitem>
<listitem>
<para><emphasis role="bold">Compute Node - OpenStack Compute </emphasis><orderedlist>
<listitem>
<para>Install the <systemitem class="service">nova-compute</systemitem> service.</para>
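<para>On Ubuntu, for example (package names vary by distribution):</para>
<screen># package name assumes Ubuntu
<prompt>$</prompt> <userinput>sudo apt-get install nova-compute</userinput></screen>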
</listitem>
<listitem>
<para>Update the OpenStack Compute configuration
file, <filename>/etc/nova/nova.conf</filename>. Make sure that the
following values are set at the end of the file:</para>
<programlisting>network_api_class=nova.network.neutronv2.api.API
neutron_admin_username=neutron
neutron_admin_password=servicepassword
neutron_admin_auth_url=http://controlnode:35357/v2.0/
neutron_auth_strategy=keystone
neutron_admin_tenant_name=servicetenant
neutron_url=http://controlnode:9696/
libvirt_vif_driver=nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver
</programlisting>
</listitem>
<listitem>
<para>Restart the OpenStack Compute service.</para>
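<para>For example, on Ubuntu:</para>
<screen># service name assumes Ubuntu packaging
<prompt>$</prompt> <userinput>sudo service nova-compute restart</userinput></screen>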
</listitem>
</orderedlist></para>
</listitem>
<listitem>
<para><emphasis role="bold">Compute and Network Node - L2 Agent</emphasis><orderedlist>
<listitem>
<para>Install and start Open vSwitch.</para>
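<para>On Ubuntu, for example, Open vSwitch is packaged as <literal>openvswitch-switch</literal>:</para>
<screen># package and service names assume Ubuntu
<prompt>$</prompt> <userinput>sudo apt-get install openvswitch-switch</userinput>
<prompt>$</prompt> <userinput>sudo service openvswitch-switch start</userinput></screen>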
</listitem>
<listitem>
<para>Install the L2 agent (the Neutron Open vSwitch agent).</para>
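<para>On Ubuntu, for example, the agent is packaged as <literal>neutron-plugin-openvswitch-agent</literal>:</para>
<screen># package name assumes Ubuntu
<prompt>$</prompt> <userinput>sudo apt-get install neutron-plugin-openvswitch-agent</userinput></screen>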
</listitem>
<listitem>
<para>Add the integration bridge to the Open vSwitch:</para>
<screen><prompt>$</prompt> <userinput>sudo ovs-vsctl add-br br-int</userinput></screen>
</listitem>
<listitem>
<para>Update the OpenStack Networking configuration file,
<filename>/etc/neutron/neutron.conf</filename>:</para>
<programlisting>[DEFAULT]
core_plugin = neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2
control_exchange = neutron
rabbit_host = controlnode
notification_driver = neutron.openstack.common.notifier.rabbit_notifier</programlisting>
</listitem>
<listitem>
<para>Update the plugin configuration file,
<filename>/etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini</filename>:</para>
<programlisting>[database]
sql_connection = mysql://root:root@controlnode:3306/ovs_neutron?charset=utf8
[ovs]
network_vlan_ranges = physnet1
bridge_mappings = physnet1:br-eth0</programlisting>
</listitem>
<listitem>
<para>Create the network bridge
<emphasis role="bold">br-eth0</emphasis> (all VM
communication between the nodes
goes through eth0):</para>
<screen><prompt>$</prompt> <userinput>sudo ovs-vsctl add-br br-eth0</userinput>
<prompt>$</prompt> <userinput>sudo ovs-vsctl add-port br-eth0 eth0</userinput></screen>
</listitem>
<listitem>
<para>Start the OpenStack Networking L2 agent.</para>
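<para>For example, on Ubuntu, where the agent service is named <literal>neutron-plugin-openvswitch-agent</literal>:</para>
<screen># service name assumes Ubuntu packaging
<prompt>$</prompt> <userinput>sudo service neutron-plugin-openvswitch-agent restart</userinput></screen>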
</listitem>
</orderedlist></para>
</listitem>
<listitem>
<para><emphasis role="bold">Network Node - DHCP Agent</emphasis><orderedlist>
<listitem>
<para>Install the DHCP agent.</para>
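<para>On Ubuntu, for example (package names vary by distribution):</para>
<screen># package name assumes Ubuntu
<prompt>$</prompt> <userinput>sudo apt-get install neutron-dhcp-agent</userinput></screen>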
</listitem>
<listitem>
<para>Update the OpenStack Networking configuration file,
<filename>/etc/neutron/neutron.conf</filename>:</para>
<programlisting>[DEFAULT]
core_plugin = neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2
control_exchange = neutron
rabbit_host = controlnode
notification_driver = neutron.openstack.common.notifier.rabbit_notifier</programlisting>
</listitem>
<listitem>
<para>Update the DHCP agent configuration file,
<filename>/etc/neutron/dhcp_agent.ini</filename>:</para>
<programlisting>interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver</programlisting>
</listitem>
<listitem>
<para>Start the DHCP agent.</para>
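<para>For example, on Ubuntu:</para>
<screen># service name assumes Ubuntu packaging
<prompt>$</prompt> <userinput>sudo service neutron-dhcp-agent restart</userinput></screen>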
</listitem>
</orderedlist></para>
</listitem>
</itemizedlist>
</para>
</section>
<section xml:id="demo_flat_logical_network_config">
<title>Logical Network Configuration</title>
<para>Run all of the following commands on the network node.</para>
<para><emphasis role="bold">Note</emphasis>: ensure that
the following environment variables are set. The various
clients use them to access the Identity service.</para>
<para>
<programlisting language="bash">export OS_USERNAME=admin
export OS_PASSWORD=adminpassword
export OS_TENANT_NAME=admin
export OS_AUTH_URL=http://127.0.0.1:5000/v2.0/</programlisting>
</para>
<para>
<orderedlist>
<listitem>
<para>Get the tenant ID (used as
$TENANT_ID in later commands):</para>
<screen><prompt>$</prompt> <userinput>keystone tenant-list</userinput></screen>
<screen><computeroutput>+----------------------------------+---------+---------+
|                id                |   name  | enabled |
+----------------------------------+---------+---------+
| 247e478c599f45b5bd297e8ddbbc9b6a | TenantA | True    |
| 2b4fec24e62e4ff28a8445ad83150f9d | TenantC | True    |
| 3719a4940bf24b5a8124b58c9b0a6ee6 | TenantB | True    |
| 5fcfbc3283a142a5bb6978b549a511ac | demo    | True    |
| b7445f221cda4f4a8ac7db6b218b1339 | admin   | True    |
+----------------------------------+---------+---------+</computeroutput></screen>
</listitem>
<listitem>
<para>Get the user information:</para>
<screen><prompt>$</prompt> <userinput>keystone user-list</userinput></screen>
<screen><computeroutput>+----------------------------------+-------+---------+-------------------+
|                id                |  name | enabled |       email       |
+----------------------------------+-------+---------+-------------------+
| 5a9149ed991744fa85f71e4aa92eb7ec | demo  | True    |                   |
| 5b419c74980d46a1ab184e7571a8154e | admin | True    | admin@example.com |
| 8e37cb8193cb4873a35802d257348431 | UserC | True    |                   |
| c11f6b09ed3c45c09c21cbbc23e93066 | UserB | True    |                   |
| ca567c4f6c0942bdac0e011e97bddbe3 | UserA | True    |                   |
+----------------------------------+-------+---------+-------------------+</computeroutput></screen>
</listitem>
<listitem>
<para>Create an internal shared network on the
admin tenant ($TENANT_ID is
b7445f221cda4f4a8ac7db6b218b1339):</para>
<screen><prompt>$</prompt> <userinput>neutron net-create --tenant-id $TENANT_ID sharednet1 --shared --provider:network_type flat \
--provider:physical_network physnet1</userinput></screen>
<screen><computeroutput>Created a new network:
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | True                                 |
| id                        | 04457b44-e22a-4a5c-be54-a53a9b2818e7 |
| name                      | sharednet1                           |
| provider:network_type     | flat                                 |
| provider:physical_network | physnet1                             |
| provider:segmentation_id  |                                      |
| router:external           | False                                |
| shared                    | True                                 |
| status                    | ACTIVE                               |
| subnets                   |                                      |
| tenant_id                 | b7445f221cda4f4a8ac7db6b218b1339     |
+---------------------------+--------------------------------------+</computeroutput></screen>
</listitem>
<listitem>
<para>Create a subnet on the network:</para>
<screen><prompt>$</prompt> <userinput>neutron subnet-create --tenant-id $TENANT_ID sharednet1 30.0.0.0/24</userinput></screen>
<screen><computeroutput>Created a new subnet:
+------------------+--------------------------------------------+
| Field            | Value                                      |
+------------------+--------------------------------------------+
| allocation_pools | {"start": "30.0.0.2", "end": "30.0.0.254"} |
| cidr             | 30.0.0.0/24                                |
| dns_nameservers  |                                            |
| enable_dhcp      | True                                       |
| gateway_ip       | 30.0.0.1                                   |
| host_routes      |                                            |
| id               | b8e9a88e-ded0-4e57-9474-e25fa87c5937       |
| ip_version       | 4                                          |
| name             |                                            |
| network_id       | 04457b44-e22a-4a5c-be54-a53a9b2818e7       |
| tenant_id        | 5fcfbc3283a142a5bb6978b549a511ac           |
+------------------+--------------------------------------------+</computeroutput></screen>
</listitem>
<listitem>
<para>Create a server for tenant A:</para>
<screen><prompt>$</prompt> <userinput>nova --os-tenant-name TenantA --os-username UserA --os-password password \
--os-auth-url=http://localhost:5000/v2.0 boot --image tty --flavor 1 \
--nic net-id=04457b44-e22a-4a5c-be54-a53a9b2818e7 TenantA_VM1</userinput></screen>
<screen><prompt>$</prompt> <userinput>nova --os-tenant-name TenantA --os-username UserA --os-password password \
--os-auth-url=http://localhost:5000/v2.0 list</userinput></screen>
<screen><computeroutput>+--------------------------------------+-------------+--------+---------------------+
| ID                                   | Name        | Status | Networks            |
+--------------------------------------+-------------+--------+---------------------+
| 09923b39-050d-4400-99c7-e4b021cdc7c4 | TenantA_VM1 | ACTIVE | sharednet1=30.0.0.3 |
+--------------------------------------+-------------+--------+---------------------+</computeroutput></screen>
</listitem>
<listitem>
<para>Ping the server of tenant A from the network node
(assign an address on the data network to the bridge first):</para>
<screen>
<prompt>$</prompt> <userinput>sudo ip addr flush dev eth0</userinput>
<prompt>$</prompt> <userinput>sudo ip addr add 30.0.0.201/24 dev br-eth0</userinput>
<prompt>$</prompt> <userinput>ping 30.0.0.3</userinput>
</screen>
</listitem>
<listitem>
<para>From within the server of tenant A, ping the public network:</para>
<screen><prompt>$</prompt> <userinput>ping 192.168.1.1</userinput></screen>
<screen><computeroutput>
PING 192.168.1.1 (192.168.1.1) 56(84) bytes of data.
64 bytes from 192.168.1.1: icmp_req=1 ttl=64 time=1.74 ms
64 bytes from 192.168.1.1: icmp_req=2 ttl=64 time=1.50 ms
64 bytes from 192.168.1.1: icmp_req=3 ttl=64 time=1.23 ms
^C
--- 192.168.1.1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2003ms
rtt min/avg/max/mdev = 1.234/1.495/1.745/0.211 ms
</computeroutput></screen>
<para>Note: 192.168.1.1 is an IP address on the public network to which the router connects.</para></listitem>
<listitem>
<para>Create servers for other tenants.</para>
<para>You can create servers for other
tenants with similar commands. Because all of these
VMs share the same subnet, they can
access each other.</para></listitem>
</orderedlist>
</para>
</section>
</section>